nvidia/NFT-32B

NFT-32B is a 32.5-billion-parameter math reasoning model developed by NVIDIA, Tsinghua University, and Stanford University. Fine-tuned from Qwen2.5-32B with the Negative-aware Fine-Tuning (NFT) algorithm, it learns from both correct and incorrect answers, improving its performance autonomously. The model excels at competition-level mathematics and general mathematical reasoning and supports a context length of up to 131,072 tokens.

Status: Warm
Visibility: Public
Parameters: 32.8B
Precision: FP8
Context length: 32,768 tokens
License: nvidia-non-commercial-license
Weights: Hugging Face
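
For reference, a minimal inference sketch using the Hugging Face transformers library. It assumes the checkpoint is hosted under the id nvidia/NFT-32B and, since it is fine-tuned from Qwen2.5-32B, that it follows the standard Qwen2.5 chat template; neither is confirmed by this listing, so treat it as an illustration rather than official usage instructions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model id, assumed from this listing's title.
model_id = "nvidia/NFT-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # shard across available GPUs
)

# A sample competition-style math prompt.
messages = [
    {"role": "user", "content": "Solve: if 3x + 7 = 22, what is x?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Deterministic decoding; math problems often need a generous token budget.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```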