nvidia/OpenReasoning-Nemotron-7B
OpenReasoning-Nemotron-7B is a 7.6-billion-parameter decoder-only Transformer model developed by NVIDIA and derived from Qwen2.5-7B. The model is post-trained and optimized for advanced reasoning tasks across mathematics, code generation, and scientific problem-solving, and supports a context length of up to 131,072 tokens. It demonstrates strong performance on competitive reasoning benchmarks and is suitable for commercial and non-commercial research use in complex problem-solving scenarios.
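Since the checkpoint is a standard decoder-only causal language model, it can be loaded with the Hugging Face transformers library. The sketch below is a minimal, non-official example: the bf16 dtype, device placement, generation length, and example prompt are all assumptions, and it presumes the repository ships a chat template (typical for Qwen2.5-derived checkpoints).

```python
# Minimal sketch (not the official usage recipe): loading the model with transformers.
# Assumes `transformers` and `torch` are installed and a GPU that fits bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/OpenReasoning-Nemotron-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a capable GPU
    device_map="auto",
)

# Example reasoning prompt; the prompt wording is illustrative, not a recommended template.
messages = [
    {"role": "user", "content": "Solve step by step: what is the sum of the first 100 positive integers?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```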