Qwen/Qwen2.5-1.5B-Instruct

Qwen2.5-1.5B-Instruct is a 1.54 billion parameter instruction-tuned causal language model from Qwen, part of the Qwen2.5 series. It improves substantially on Qwen2 in coding, mathematics, and instruction following, with stronger long-text generation and structured-data understanding. It supports a context length of up to 128K tokens and is optimized for producing structured outputs such as JSON, making it well suited to chatbot and data-processing applications.

Serving status: Warm
Visibility: Public
Parameters: 1.5B
Precision: BF16
Context length: 131,072 tokens
License: apache-2.0
Source: Hugging Face
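
The sketch below shows one way to run the model for chat-style generation with the Hugging Face transformers library, loading the weights in BF16 as listed above. It assumes transformers and torch are installed; the prompt content and generation settings are illustrative, not part of the model card.

```python
# Minimal sketch: chat with Qwen2.5-1.5B-Instruct via Hugging Face transformers.
# Assumes `transformers` and `torch` are installed; the prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Return a JSON object with fields 'city' and 'country' for Paris."},
]

# Build the prompt with the model's chat template, then generate a reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
```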