unsloth/DeepSeek-R1-Distill-Qwen-14B
unsloth/DeepSeek-R1-Distill-Qwen-14B is a 14-billion-parameter language model from DeepSeek AI, hosted under the Unsloth namespace. It is based on the Qwen2.5-14B architecture and distilled from the larger DeepSeek-R1 reasoning model: the base model is fine-tuned on reasoning data generated by DeepSeek-R1 to strengthen performance on complex reasoning, math, and coding tasks. This makes it a capable, smaller alternative for applications that require strong reasoning, with a context length of 32,768 tokens.
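A minimal usage sketch, assuming the Hugging Face transformers library (with accelerate installed) and a GPU with enough memory for the 14B weights; the prompt and generation settings below are illustrative, not official recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/DeepSeek-R1-Distill-Qwen-14B"

# Load the tokenizer and model; device_map="auto" spreads weights across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Reasoning-style prompt; the model emits its chain of thought before the final answer.
messages = [{"role": "user", "content": "What is 17 * 23? Reason step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sampling temperature of 0.6 is a commonly suggested setting for R1-distilled models.
outputs = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```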