deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

The DeepSeek-R1-Distill-Qwen-1.5B model by DeepSeek-AI is a 1.5-billion-parameter language model distilled from the larger DeepSeek-R1 reasoning model, built on the Qwen2.5-Math-1.5B architecture. It is fine-tuned on reasoning data generated by DeepSeek-R1 in order to transfer the larger model's reasoning patterns into a smaller, more efficient model. It performs strongly on mathematical and reasoning benchmarks such as AIME 2024 and MATH-500.
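Below is a minimal sketch of loading and querying the model with the Hugging Face transformers library. The prompt, sampling settings, and hardware assumptions are illustrative choices, not part of the official model card; they assume `transformers` and `torch` are installed and a GPU is available.

```python
# Minimal sketch: run DeepSeek-R1-Distill-Qwen-1.5B locally via transformers.
# Assumptions: `transformers` and `torch` installed, GPU available; the prompt
# and sampling settings below are illustrative, not prescriptive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is published in BF16
    device_map="auto",
)

# Reasoning-distilled models are typically given a plain question and emit
# their chain of thought before the final answer.
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```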

Status: Warm
Visibility: Public
Parameters: 1.5B
Precision: BF16
Context length: 131072 tokens
License: MIT
Source: Hugging Face