Qwen/Qwen1.5-0.5B-Chat

Qwen1.5-0.5B-Chat is a 0.6-billion-parameter, decoder-only transformer language model from the Qwen team. Chat-tuned and multilingual, it supports a stable 32K-token context length, making it well suited to efficient, small-scale conversational AI applications. Architectural refinements include the SwiGLU activation and attention QKV bias.
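Like other Qwen1.5 chat models, this checkpoint expects conversations in the ChatML prompt format. A minimal sketch of that formatting is below; the helper name `format_chatml` is illustrative, and in practice `tokenizer.apply_chat_template` from Hugging Face `transformers` produces the equivalent string for you:

```python
# Sketch of ChatML prompt formatting as used by Qwen1.5 chat models.
# (Illustrative only; transformers' apply_chat_template does this in practice.)
def format_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Open the assistant turn so generation continues as the assistant's reply.
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
print(format_chatml(messages))
```

The trailing open `<|im_start|>assistant` turn is what cues the model to generate its response; generation is typically stopped at the `<|im_end|>` token.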

Visibility: Public
Parameters: 0.6B
Precision: BF16
Context length: 32768
License: tongyi-qianwen-research
Source: Hugging Face