Spestly/Atlas-Flash-7B-Preview

Spestly/Atlas-Flash-7B-Preview is a 7.6-billion-parameter model in the Atlas family, built on DeepSeek's R1-distilled Qwen models and supporting a 131,072-token context window. It is designed for advanced reasoning, contextual understanding, and domain-specific expertise, with notable strengths in coding, conversational AI, and STEM problem solving that make it well suited to complex technical tasks.
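A minimal sketch of loading the model with the Hugging Face `transformers` library, using the repo ID from this card. The helper names, prompt, and generation settings are illustrative assumptions, not part of the official card; downloading the weights requires substantial disk space and memory.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Spestly/Atlas-Flash-7B-Preview"  # repo ID from this card

def load_model():
    # Downloads tokenizer and weights from the Hugging Face Hub;
    # device_map="auto" places layers on available GPU(s)/CPU.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    return tokenizer, model

def chat(tokenizer, model, prompt, max_new_tokens=256):
    # Format a single-turn conversation with the model's chat template,
    # generate a reply, and decode only the newly generated tokens.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(
        outputs[0][inputs.shape[-1]:], skip_special_tokens=True
    )

# Example usage (downloads the model on first call):
#   tok, mdl = load_model()
#   print(chat(tok, mdl, "Explain binary search in one short paragraph."))
```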

Parameters: 7.6B
Precision: FP8
Context length: 131,072 tokens
License: MIT