Qwen/Qwen1.5-1.8B
Qwen1.5-1.8B is the 1.8-billion-parameter member of the Qwen1.5 series, a transformer-based decoder-only language model family developed by the Qwen team and released as a beta version of Qwen2. It is pretrained on a large amount of data and supports a stable 32K context length. As a base model, it is intended for post-training, such as supervised fine-tuning (SFT), RLHF, or continued pretraining, rather than direct text generation.
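As a minimal sketch, the snippet below shows one way to load the checkpoint with the Hugging Face transformers library; Qwen1.5 requires transformers>=4.37.0, otherwise loading fails with KeyError: 'qwen2'. The prompt string and generation settings are illustrative only, since the base model is not meant for direct generation.

```python
# Minimal loading sketch for Qwen/Qwen1.5-1.8B with Hugging Face transformers.
# Requires transformers >= 4.37.0 (older versions raise KeyError: 'qwen2').
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place weights automatically (needs `accelerate`)
)

# Smoke test only: the base model is a starting point for post-training,
# so raw completions like this are not the recommended usage pattern.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For instruction-following behavior out of the box, the separately released chat variant (Qwen/Qwen1.5-1.8B-Chat) is the intended choice.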