Qwen/Qwen2.5-Coder-14B-Instruct

Qwen2.5-Coder-14B-Instruct is a 14.7-billion-parameter instruction-tuned causal language model from the Qwen team, part of the Qwen2.5-Coder series. It is optimized for code generation, code reasoning, and code fixing, building on the Qwen2.5 base model with training scaled to 5.5 trillion tokens that include source code and text-code grounding data. A context length of 131,072 tokens makes it suitable for complex coding tasks and real-world Code Agent applications.

Status: Warm
Visibility: Public
Parameters: 14.8B
Quantization: FP8
Context length: 131,072
License: apache-2.0
Source: Hugging Face
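
Below is a minimal inference sketch using the Hugging Face transformers library. The chat messages, generation settings, and device mapping are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: load Qwen/Qwen2.5-Coder-14B-Instruct with transformers and
# generate a completion for an example coding request.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-14B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place layers on available GPUs/CPU automatically
)

# Example prompt; the system/user messages are illustrative.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# apply_chat_template wraps the messages in the model's chat format.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, dropping the prompt.
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```

The same prompt-formatting pattern applies to longer inputs; the full 131,072-token context is available subject to the memory of the serving hardware.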