unsloth/DeepSeek-R1-Distill-Qwen-7B

DeepSeek-R1-Distill-Qwen-7B is a 7.6-billion-parameter language model developed by DeepSeek AI, distilled from the larger DeepSeek-R1 reasoning model and built on the Qwen2.5-Math-7B architecture. It is fine-tuned on reasoning data generated by DeepSeek-R1, with the goal of transferring advanced reasoning capability to a smaller, more efficient dense model. It demonstrates strong performance on mathematical, coding, and general reasoning benchmarks, making it suitable for applications that require robust analytical problem-solving.

Status: Warm
Visibility: Public
Parameters: 7.6B
Precision: FP8
Context length: 131,072 tokens
License: apache-2.0
Source: Hugging Face
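
The snippet below is a minimal sketch of how the model might be loaded and queried with the Hugging Face `transformers` library. The model ID comes from this card; the dtype, sampling settings, and prompt are illustrative assumptions, not recommendations from the upstream repository.

```python
# Minimal inference sketch, assuming `transformers` and `torch` are installed
# and the checkpoint follows the standard Hugging Face causal-LM layout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to match your hardware
    device_map="auto",
)

# The distilled R1 models are chat models, so apply the chat template
# before generation.
messages = [{"role": "user", "content": "What is 17 * 24? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative only.
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For serving at the full 131,072-token context or in FP8, a dedicated inference engine such as vLLM or SGLang is typically used instead of plain `transformers`.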