lmstudio-community/magistral-small-2506-mlx-bf16
Magistral Small 2506 is a 24-billion-parameter language model from Mistral AI. This community upload is an MLX conversion of the weights in bfloat16 precision (unquantized, unlike the lower-bit MLX variants), intended for efficient local inference on Apple Silicon. It runs on the MLX framework, targeting developers working within the Apple ecosystem.
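A minimal sketch of running this model locally with the `mlx-lm` Python package (an assumption: the package is installed via `pip install mlx-lm`, and the machine has enough unified memory for bf16 weights, roughly 48 GB for 24B parameters at 2 bytes each):

```python
# Minimal local-inference sketch using the mlx-lm package (pip install mlx-lm).
# Assumes an Apple Silicon Mac with sufficient unified memory for the
# bf16 weights (~48 GB for a 24B-parameter model).
from mlx_lm import load, generate

# Download (if needed) and load the bf16 MLX weights from the Hugging Face Hub.
model, tokenizer = load("lmstudio-community/magistral-small-2506-mlx-bf16")

# Format the request with the model's chat template before generating.
messages = [{"role": "user", "content": "Explain the Sieve of Eratosthenes briefly."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a response; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```

The same model can also be served through LM Studio's UI or local server without writing any code; the snippet above is just one way to script against the MLX weights directly.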