Qwen/Qwen1.5-0.5B

Qwen1.5-0.5B is a decoder-only transformer language model with roughly 0.6 billion parameters, developed by the Qwen team. Qwen1.5 is the beta version of Qwen2 and offers stable 32K-token context length and multilingual support across all model sizes. This is a base (pretrained) checkpoint intended for further post-training, such as supervised fine-tuning (SFT) or RLHF, rather than direct use for text generation.
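The checkpoint can be loaded with the Hugging Face `transformers` library (a version with Qwen1.5/Qwen2 support, 4.37 or later, is assumed); a minimal sketch of loading the base model and sampling a continuation:

```python
# Minimal sketch: load the base model and generate a continuation.
# Assumes transformers >= 4.37 and network access to download the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torch_dtype="auto" picks up the BF16 weights where supported.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Note that as a base model it will simply continue the prompt; for chat-style behavior the separate `Qwen1.5-0.5B-Chat` checkpoint, or your own fine-tune, is the intended path.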

Parameters: 0.6B
Precision: BF16
Context length: 32768
License: tongyi-qianwen-research
Hosted on: Hugging Face