a-m-team/AM-Thinking-v1

AM-Thinking-v1 is a 32-billion-parameter dense language model developed by the a-m-team, built on Qwen2.5-32B-Base. The model is optimized for advanced reasoning tasks and reaches performance comparable to much larger Mixture-of-Experts (MoE) models while remaining deployable on a single high-end GPU. It performs strongly in code generation, logical problem-solving, and creative writing, making it suitable for applications that require both analytical and generative capabilities.
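
The model can be served with standard causal-LM tooling. The snippet below is a minimal inference sketch using the Hugging Face transformers library; it assumes the repository follows the usual AutoModelForCausalLM interface and ships a chat template, and the dtype/device settings are illustrative and should be adapted to your hardware.

```python
# Minimal inference sketch for a-m-team/AM-Thinking-v1 (assumes a standard
# causal LM with a chat template; adjust dtype and device settings as needed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "a-m-team/AM-Thinking-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # example precision; pick what your GPU supports
    device_map="auto",           # place the 32B model on available devices
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Write a function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```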

Status: Warm
Visibility: Public
Parameters: 32B
Quantization: FP8
Context length: 32768
License: apache-2.0
Source: Hugging Face
