Qwen/Qwen2.5-Coder-7B

Qwen/Qwen2.5-Coder-7B is a 7.61-billion-parameter causal language model from the Qwen team, part of the Qwen2.5-Coder series. Built on the Qwen2.5 transformer architecture, this pretrained base model is optimized for code generation, code reasoning, and code fixing, and supports a full context length of 131,072 tokens, making it suitable for complex coding tasks and long-context applications.
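
Since this is the pretrained base model rather than an instruct variant, it is used for plain completion rather than chat. Below is a minimal sketch of loading it with the Hugging Face transformers library; the prompt and generation settings are illustrative assumptions, not fixed requirements:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native dtype
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# Base-model usage: complete code from a bare prompt,
# e.g. filling in a function body from its signature.
prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```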

Status: Warm
Visibility: Public
Parameters: 7.6B
Quantization: FP8
Context length: 131,072 tokens
License: apache-2.0
Model page: https://huggingface.co/Qwen/Qwen2.5-Coder-7B
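
On the full 131,072-token window: the upstream Qwen2.5 model cards describe enabling it via YaRN rope scaling (the checkpoint's default config covers 32,768 tokens, with a `rope_scaling` entry added to `config.json` for longer inputs). A minimal sketch, assuming a recent transformers version that forwards unrecognized from_pretrained keyword arguments to the model config:

```python
from transformers import AutoModelForCausalLM

# Sketch: override rope scaling at load time to extend the context
# window to 131,072 tokens via YaRN. The type/factor values follow
# the upstream Qwen2.5 model cards; passing them as kwargs here
# (instead of editing config.json directly) assumes transformers
# forwards unknown kwargs to the model config, which recent
# versions do.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```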