Model-SafeTensors/Llama-3.1-Tango-70b

Llama-3.1-Tango-70b is a 70-billion-parameter language model created by merging nvidia/Llama-3.1-Nemotron-70B-Instruct-HF and sandbox-ai/Llama-3.1-Tango-70b with the passthrough merge method. It is intended for general language understanding and generation, combining the strengths of its base models, and supports a context length of 32,768 tokens, making it suitable for long inputs and detailed responses.
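For reference, a minimal generation sketch with the Hugging Face transformers library is shown below. The repo id is assumed to match this listing's name and is not verified here; dtype, device placement, and the prompt are illustrative, and a 70B checkpoint requires substantial GPU memory.

```python
# Hedged usage sketch: repo id and hardware assumptions are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Model-SafeTensors/Llama-3.1-Tango-70b"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",           # shard across available GPUs
)

messages = [{"role": "user", "content": "Summarize the idea behind passthrough model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```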

Availability: Warm
Visibility: Public
Parameters: 70B
Precision: FP8
Context length: 32768 tokens
Source: Hugging Face
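The FP8 precision and 32,768-token context length above map directly onto serving parameters in an inference engine such as vLLM. The sketch below assumes the same repo id as the listing and illustrative hardware settings; it is not a configuration taken from this page.

```python
# Hedged serving sketch with vLLM; repo id and GPU count are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Model-SafeTensors/Llama-3.1-Tango-70b",  # assumed Hub repo id
    quantization="fp8",       # matches the FP8 precision listed above
    max_model_len=32768,      # matches the listed context length
    tensor_parallel_size=4,   # illustrative; size to your GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what a passthrough model merge is."], params)
print(outputs[0].outputs[0].text)
```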
