deepseek-ai/DeepSeek-R1-Distill-Llama-70B

DeepSeek-R1-Distill-Llama-70B is a 70-billion-parameter language model from DeepSeek-AI, created by distilling the larger DeepSeek-R1 model into the Llama-3.3-70B-Instruct architecture. It is fine-tuned on reasoning data generated by DeepSeek-R1 and excels at complex reasoning, mathematics, and coding tasks. With a 32,768-token context length and strong results on benchmarks such as AIME 2024 and MATH-500, it is well suited to applications that require advanced problem solving.

Status: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context length: 32,768 tokens
License: MIT
Source: Hugging Face
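
Below is a minimal sketch of running this checkpoint locally with Hugging Face transformers. The model id comes from this card; everything else (hardware assumptions, the prompt, and the sampling settings) is illustrative rather than a provider-specific recipe, and the temperature of 0.6 follows DeepSeek-AI's published guidance for the R1 distill series.

```python
# Minimal sketch: local inference with Hugging Face transformers.
# Assumes a machine with enough GPU memory for a 70B checkpoint;
# device_map="auto" shards the layers across available GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # spread layers across available GPUs
)

# Build a chat prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "How many primes are there below 100?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought before the final answer,
# so allow a generous generation budget.
output_ids = model.generate(
    input_ids,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.6,  # illustrative; DeepSeek-AI suggests ~0.6 for R1 distills
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The transformers path above is simply the most portable starting point; the same checkpoint can also be deployed behind a dedicated serving stack if throughput matters.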
