Qwen/QwQ-32B-Preview

QwQ-32B-Preview is an experimental 32.5-billion-parameter causal language model developed by the Qwen Team. It uses a transformer architecture with RoPE positional embeddings, SwiGLU activations, and RMSNorm. The model is focused on advancing AI reasoning capabilities and performs especially well on mathematical and coding tasks. It supports a context length of 32,768 tokens, making it well suited to long, complex analytical problems.
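As a sketch of how the model might be queried, the snippet below builds a chat-completion request payload for an OpenAI-compatible endpoint. Only the model ID and the 32,768-token context limit come from this page; the message structure, helper name, and `max_new_tokens` default are illustrative assumptions.

```python
# Sketch: assemble a chat-completion payload for QwQ-32B-Preview.
# Only the model ID and context length come from this page; everything
# else (helper name, defaults) is an illustrative assumption.

MODEL_ID = "Qwen/QwQ-32B-Preview"
CONTEXT_LENGTH = 32768  # max tokens (prompt + completion) per the model card

def build_request(prompt: str, max_new_tokens: int = 2048) -> dict:
    """Return a JSON-serializable chat-completion request body."""
    if max_new_tokens >= CONTEXT_LENGTH:
        raise ValueError("max_new_tokens must leave room for the prompt")
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_new_tokens,
    }

payload = build_request("How many positive integers n satisfy n^2 < 50?")
# `payload` could then be POSTed to any OpenAI-compatible
# /chat/completions endpoint that serves this model.
```

The payload mirrors the standard chat-completions schema, so it works unchanged with most serving stacks that expose QwQ-32B-Preview.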

Status: Warm
Visibility: Public
Parameters: 32.8B
Quantization: FP8
Context length (as served): 131,072 tokens
License: apache-2.0
Source: Hugging Face