allenai/Llama-3.1-Tulu-3-70B
Llama-3.1-Tulu-3-70B is a 70-billion-parameter instruction-following model from AllenAI (the Allen Institute for AI), fine-tuned from Meta's Llama 3.1 70B base model. It is part of the Tülu 3 release, which ships a fully open post-training package: training data, code, and recipes. The model is designed for state-of-the-art performance across diverse tasks, including chat, mathematical reasoning (MATH, GSM8K), and instruction following (IFEval), and supports a context length of 32768 tokens.
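
The sketch below shows one way to run a single chat turn with the model via the Hugging Face `transformers` library, using the `allenai/Llama-3.1-Tulu-3-70B` model ID named above. The dtype, device mapping, and prompt are illustrative assumptions, not part of the official usage; a 70B model will typically need multiple GPUs or quantization.

```python
# Minimal sketch: load allenai/Llama-3.1-Tulu-3-70B with transformers and
# run one chat turn. Hardware/dtype settings here are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Llama-3.1-Tulu-3-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory footprint
    device_map="auto",           # shard across available GPUs
)

# Format the prompt with the model's chat template (hypothetical user message).
messages = [{"role": "user", "content": "What is 17 * 24? Show your reasoning."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Using `apply_chat_template` keeps the prompt consistent with the chat format the model was post-trained on, which generally matters more for instruction-tuned models than for base models.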