unsloth/Mistral-Small-24B-Instruct-2501

unsloth/Mistral-Small-24B-Instruct-2501 is a 24-billion-parameter instruction-tuned language model developed by Mistral AI, fine-tuned from Mistral-Small-24B-Base-2501. It features a 32k-token context window and is optimized for agentic use, including native function calling and JSON output. The model performs well on conversational and reasoning tasks, supports dozens of languages, and is designed for efficient local deployment on hardware such as a single RTX 4090 or a MacBook with 32 GB of RAM.
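A minimal sketch of chat-style inference with this checkpoint, assuming the Hugging Face transformers library and sufficient GPU memory (the message contents are placeholders):

```python
# Minimal sketch: load the instruct model and run one chat turn.
# Assumes `transformers` and `torch` are installed and a GPU (or enough RAM) is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Mistral-Small-24B-Instruct-2501"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread across available devices
    torch_dtype="auto",  # use the checkpoint's native precision
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize the benefits of a 32k context window."},
]

# The chat template formats the conversation the way the model expects.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```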

Status: Warm
Visibility: Public
Parameters: 24B
Quantization: FP8
Context length: 32768 tokens
License: apache-2.0
Source: Hugging Face