nvidia/AceReason-Nemotron-7B
NVIDIA's AceReason-Nemotron-7B is a 7.6-billion-parameter language model specialized for math and code reasoning, trained entirely through reinforcement learning (RL) starting from DeepSeek-R1-Distill-Qwen-7B. Its two-stage RL recipe (math-only prompts first, then code-only prompts) yields 69.0% on AIME 2024 and 51.8% on LiveCodeBench v5, making it well suited to tasks that require multi-step mathematical and coding logic.
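A minimal usage sketch with Hugging Face transformers, not taken from the model card: the checkpoint ID is the one above, but the dtype, device placement, prompt, and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/AceReason-Nemotron-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",           # requires the accelerate package
)

# Pose a math question; reasoning models typically emit a long chain of
# thought before the final answer, so allow a generous token budget.
messages = [
    {"role": "user", "content": "What is the sum of the first 50 positive odd integers?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.6)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```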