MiniMaxAI/MiniMax-M2

MiniMaxAI's MiniMax-M2 is a 229-billion-parameter Mixture-of-Experts (MoE) model with 10 billion active parameters, designed for high efficiency in coding and agentic workflows. It features a 32,768-token context length and excels at multi-file edits, code-run-fix loops, and complex toolchain execution across varied environments. MiniMax-M2 delivers competitive general intelligence, ranking highly among open-source models, while its small activation size keeps latency and cost low, making it well suited to interactive agents and batched sampling.

Status: Warm
Visibility: Public
Parameters: 229B
Quantization: FP8
Context length: 32768
License: modified-mit
Source: Hugging Face
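
A minimal sketch of querying the model through an OpenAI-compatible chat-completions endpoint, assuming the hosting provider exposes one; the base URL and API key below are placeholders, not values taken from this listing.

```python
# Minimal sketch: calling MiniMaxAI/MiniMax-M2 via an OpenAI-compatible API.
# base_url and api_key are placeholders for the provider's actual values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                 # placeholder credential
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[
        {"role": "user", "content": "Refactor this function into smaller, testable units: ..."},
    ],
    max_tokens=1024,  # prompt plus completion must fit the 32768-token context
)

print(response.choices[0].message.content)
```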
