m8than/Mistral-Nemo-Instruct-2407-lenient-chatfix

m8than/Mistral-Nemo-Instruct-2407-lenient-chatfix is a 12-billion-parameter instruction-tuned language model derived from Mistral-Nemo-Instruct-2407, with a 32768-token context length. Its defining modification is a more lenient chat format than the base model's, which gives developers greater flexibility when integrating it into diverse conversational AI applications.
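To illustrate what a relaxed chat format can mean in practice, the sketch below is a hypothetical, simplified renderer in the Mistral `[INST] ... [/INST]` style. It is not the model's actual chat template; it only shows the kind of leniency such a fix typically provides, e.g. rendering consecutive same-role messages instead of rejecting them the way a strict template would.

```python
# Hypothetical sketch, NOT the model's real Jinja chat template.
# Strict Mistral-style templates often raise an error when roles do not
# alternate user/assistant; a lenient template renders the turns anyway.
def format_chat(messages):
    """Render a list of {"role", "content"} dicts into one prompt string."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] in ("user", "system"):
            # System messages are folded into an instruction turn here;
            # this is an assumption for illustration only.
            prompt += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turn closes with the end-of-sequence marker
            prompt += f"{msg['content']}</s>"
    return prompt

# Two consecutive user messages: a strict template would reject this,
# while this lenient sketch simply emits both turns.
msgs = [
    {"role": "user", "content": "Hi"},
    {"role": "user", "content": "Are you there?"},
]
print(format_chat(msgs))
```

In real use you would rely on the template shipped with the model (e.g. via `tokenizer.apply_chat_template` in Hugging Face Transformers) rather than hand-rolling the prompt string.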

Status: Warm
Visibility: Public
Parameters: 12B
Quantization: FP8
Context length: 32768
Source: Hugging Face
