nvidia/OpenMath2-Llama3.1-70B

nvidia/OpenMath2-Llama3.1-70B is a 70-billion-parameter language model developed by NVIDIA, fine-tuned from Llama3.1-70B-Base on the OpenMathInstruct-2 dataset. The model is optimized for advanced mathematical reasoning and problem-solving, and scores 3.9 points higher than Llama3.1-70B-Instruct on the MATH benchmark. It is designed for use cases requiring high accuracy on mathematical tasks and supports a 32768-token context length.
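As a sketch of how a math problem might be sent to the model through an OpenAI-compatible chat endpoint, the snippet below builds a request payload. The system prompt, sampling parameters, and default token budget are illustrative assumptions, not values prescribed by this listing; only the model ID and the 32768-token context length come from the card.

```python
import json

MODEL_ID = "nvidia/OpenMath2-Llama3.1-70B"
MAX_CONTEXT = 32768  # context length stated in the listing


def build_chat_request(problem: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completion payload for a math problem.

    The system prompt and sampling parameters here are assumptions
    for illustration, not settings required by the model card.
    """
    if not 0 < max_tokens < MAX_CONTEXT:
        raise ValueError("max_tokens must leave room for the prompt")
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system",
             "content": "Solve the following math problem step by step."},
            {"role": "user", "content": problem},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.0,  # greedy decoding for reproducible answers
    }


payload = build_chat_request("What is 12 * 37?")
print(json.dumps(payload, indent=2))
```

The payload can be POSTed to any serving stack that exposes the OpenAI chat-completions schema; which endpoint hosts this model is deployment-specific.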

Status: Warm
Visibility: Public
Parameters: 70B
Precision: FP8
Context length: 32768
License: llama3.1
Available on Hugging Face.