lmstudio-community/magistral-small-2506-mlx-bf16

Magistral Small 2506 by Mistral AI is a 24-billion-parameter language model, provided here as an MLX conversion in bfloat16 precision rather than a lower-bit quantization. The conversion targets efficient local inference on Apple Silicon via the MLX framework, making it a good fit for developers working within the Apple ecosystem.

- Visibility: Public
- Parameters: 24B
- Precision: BF16
- Context length: 32,768 tokens
- License: apache-2.0
- Source: Hugging Face
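
To try the model locally, here is a minimal sketch using the mlx-lm Python package (assumes `pip install mlx-lm` on an Apple Silicon Mac; the prompt text and generation settings are illustrative, not part of this model card):

```python
# Minimal local-inference sketch with mlx-lm on Apple Silicon.
from mlx_lm import load, generate

# Downloads the BF16 MLX weights from Hugging Face on first use.
model, tokenizer = load("lmstudio-community/magistral-small-2506-mlx-bf16")

# Format the request with the model's chat template.
messages = [{"role": "user", "content": "Explain bfloat16 in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# Generate a response (max_tokens here is an illustrative choice).
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

The same weights can also be loaded directly in LM Studio, which wraps MLX for local serving without writing any code.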