Qwen/Qwen1.5-32B-Chat

Qwen1.5-32B-Chat is a 32.5 billion parameter decoder-only transformer language model developed by the Qwen team. The chat-tuned variant offers improved performance on human preference evaluations and stable multilingual support with a 32K-token context length. It is designed for conversational AI applications requiring robust language understanding and generation across a variety of languages.

Status: Cold
Visibility: Public
Parameters: 32.5B
Quantization: FP8
Context Length: 32768 tokens
License: tongyi-qianwen
Source: Hugging Face
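
A minimal usage sketch with the Hugging Face transformers library (assuming transformers >= 4.37, which includes the Qwen2 architecture used by Qwen1.5); device placement, dtype, and generation settings are illustrative and should be adapted to your hardware:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-32B-Chat"

# Load the chat model and its tokenizer (requires transformers >= 4.37).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat prompt using the model's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a response and decode only the newly generated tokens.
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:],
    skip_special_tokens=True,
)
print(response)
```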
