nvidia/OpenMath-Nemotron-32B

nvidia/OpenMath-Nemotron-32B is a 32.8-billion-parameter, decoder-only transformer language model based on Qwen2.5 and developed by NVIDIA. Fine-tuned on the OpenMathReasoning dataset, it specializes in advanced mathematical reasoning. The model achieves state-of-the-art results on popular mathematical benchmarks, supports a 131,072-token context length, and is released for both commercial use and research in mathematical problem-solving.
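A minimal usage sketch with the Hugging Face `transformers` library is shown below. The model ID comes from this card; the boxed-answer instruction wording and the `build_prompt` helper are assumptions for illustration, not a documented API. Running actual inference requires substantial GPU memory, so the model-loading call is left as a comment.

```python
# Sketch: prompting nvidia/OpenMath-Nemotron-32B for a math problem.
# The instruction text below is an assumed convention for boxed answers,
# not taken verbatim from NVIDIA's documentation.

MODEL_ID = "nvidia/OpenMath-Nemotron-32B"


def build_prompt(problem: str) -> str:
    """Wrap a math problem in a boxed-answer instruction (assumed format)."""
    return (
        "Solve the following math problem. Put the final answer "
        f"inside \\boxed{{}}.\n\n{problem}"
    )


if __name__ == "__main__":
    prompt = build_prompt("What is 7 * 8?")
    print(prompt)

    # To run inference (requires `transformers`, `torch`, and enough GPU
    # memory for a 32.8B-parameter model):
    #
    # from transformers import pipeline
    # pipe = pipeline(
    #     "text-generation", model=MODEL_ID,
    #     torch_dtype="auto", device_map="auto",
    # )
    # out = pipe([{"role": "user", "content": prompt}], max_new_tokens=2048)
    # print(out[0]["generated_text"][-1]["content"])
```

The chat-style message list passed to the pipeline follows the standard `transformers` text-generation interface; the long context window (131,072 tokens) leaves ample room for multi-step reasoning traces.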

Parameters: 32.8B
Precision: FP8
Context length: 131,072 tokens
License: CC BY 4.0
Availability: Public on Hugging Face