featherless-ai/QRWKV-QwQ-32B
QRWKV-QwQ-32B is a 32-billion-parameter RWKV-variant language model developed by featherless-ai, based on the Qwen QwQ-32B (Qwen 2.5) architecture. It supports a 32,768-token context length and replaces standard quadratic self-attention with RWKV-style linear attention, which scales linearly rather than quadratically with sequence length and so substantially reduces the cost of long-context inference. The model is optimized for efficient inference and broad accessibility, and inherits its knowledge and multilingual coverage (approximately 30 languages) from its Qwen parent model.
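As a rough usage sketch (assuming the checkpoint is published on the Hugging Face Hub under featherless-ai/QRWKV-QwQ-32B and ships custom RWKV-variant modeling code, hence trust_remote_code=True; check the actual model card before relying on this), the model can be loaded through the standard transformers API:

```python
# Minimal sketch: loading and prompting QRWKV-QwQ-32B via Hugging Face transformers.
# Assumes the repo provides custom modeling code (trust_remote_code=True).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "featherless-ai/QRWKV-QwQ-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 32B parameters: bf16 halves memory vs fp32
    device_map="auto",           # shard across available GPUs
    trust_remote_code=True,
)

prompt = "Explain linear attention in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Because the attention is linear rather than quadratic, per-token generation cost stays roughly constant as the context grows, which is the main practical benefit over a standard transformer at the full 32,768-token window.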