nvidia/OpenCodeReasoning-Nemotron-32B

OpenCodeReasoning-Nemotron-32B is a 32.8-billion-parameter large language model developed by NVIDIA, derived from Qwen2.5-32B-Instruct. It is post-trained specifically for reasoning on code generation tasks and supports a context length of up to 32,768 tokens. The model performs strongly on competitive-programming benchmarks such as LiveCodeBench and CodeContests, making it well suited to advanced code-reasoning applications.

Status: Cold
Visibility: Public
Parameters: 32.8B
Precision: FP8
Context length: 131,072
License: apache-2.0
Model card: Hugging Face
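
Below is a minimal usage sketch for querying the model with the Hugging Face transformers library. It assumes transformers and accelerate are installed and that sufficient GPU memory is available for a 32.8B model; the prompt, dtype, and generation settings are illustrative assumptions, not settings prescribed by the model card.

```python
# Minimal sketch: load nvidia/OpenCodeReasoning-Nemotron-32B and run one chat-style
# code-generation request. Requires `transformers` and `accelerate`; adjust dtype
# and device_map for your hardware.
import torch
from transformers import pipeline

model_id = "nvidia/OpenCodeReasoning-Nemotron-32B"

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example competitive-programming style prompt (illustrative only).
messages = [
    {
        "role": "user",
        "content": "Write a Python function that returns the length of the "
                   "longest increasing subsequence of a list of integers.",
    }
]

# The model is post-trained for reasoning, so leave generous room for its
# intermediate reasoning before the final code answer.
outputs = generator(messages, max_new_tokens=4096)
print(outputs[0]["generated_text"][-1]["content"])
```

For production use, the same request can be served through any OpenAI-compatible endpoint that hosts this model; only the model identifier above is taken from the listing, everything else is an assumed setup.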
