Qwen/Qwen1.5-7B

Qwen1.5-7B is a 7.7-billion-parameter, decoder-only transformer language model developed by the Qwen team. As a beta version of Qwen2, it brings significant performance improvements, multilingual support, and a stable 32K context length across all model sizes. This is a base model intended for further fine-tuning, such as SFT or RLHF, rather than direct text generation.
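As a minimal sketch of loading the checkpoint for fine-tuning or experimentation (assumes `transformers` >= 4.37.0, which added Qwen2 architecture support, plus enough memory for a 7B model; exact dtype/device settings will vary by setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-7B"

# Tokenizer ships with the checkpoint on Hugging Face.
tokenizer = AutoTokenizer.from_pretrained(model_name)

# "auto" picks the dtype stored in the checkpoint and spreads layers
# across available devices; adjust for your hardware.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Base-model usage is plain continuation, not chat; for instruction
# following, fine-tune first or use the -Chat variant.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base checkpoint, raw generation quality is not the goal; the same loading pattern is the starting point for an SFT or RLHF pipeline.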

Status: Cold
Visibility: Public
Parameters: 7.7B
Quantization: FP8
Context length: 32768 tokens
License: tongyi-qianwen
Source: Hugging Face