Qwen/Qwen2.5-0.5B-Instruct

Qwen2.5-0.5B-Instruct is a 0.49-billion-parameter instruction-tuned causal language model from the Qwen team, with a 32,768-token context length. Compared with its predecessor, it offers stronger knowledge, especially in coding and mathematics, along with improved instruction following, long-text generation, and structured-data understanding, including JSON output. With support for more than 29 languages, it suits multilingual applications that need reliable instruction adherence and structured output from a compact model.
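A minimal sketch of how the model's JSON-output capability might be used via Hugging Face `transformers` (the standard `AutoTokenizer`/`AutoModelForCausalLM` API; the system prompt and helper names here are illustrative assumptions, not part of the model card). The model call is wrapped in a function because running it downloads the weights; the parsing helper is self-contained.

```python
import json
import re


def generate_json_reply(prompt: str) -> str:
    """Ask the model for a JSON-only reply (runs a ~1 GB download on first use)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    messages = [
        # Illustrative system prompt: steer the model toward structured output.
        {"role": "system", "content": "Reply with a single JSON object only."},
        {"role": "user", "content": prompt},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True)


def extract_json(text: str) -> dict:
    """Pull the first JSON object out of a reply, tolerating markdown code fences."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))
```

In practice one would call `extract_json(generate_json_reply("Describe Paris as JSON with keys name and country"))`; the parsing helper guards against the model wrapping its JSON in a code fence.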

Visibility: Public
Parameters: 0.5B
Precision: BF16
Context length: 32,768 tokens
License: apache-2.0
Source: Hugging Face