mistralai/Mistral-7B-v0.1

Mistral-7B-v0.1 is a 7-billion-parameter pretrained generative text model developed by Mistral AI. The transformer architecture uses Grouped-Query Attention for faster inference, Sliding-Window Attention (a 4,096-token window) to process longer sequences at lower cost, and a byte-fallback BPE tokenizer. It outperforms Llama 2 13B on all tested benchmarks, making it a strong choice for general-purpose text generation tasks where efficiency and quality both matter.
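The sliding-window idea can be illustrated with a minimal sketch: each query position attends only to the most recent `window` positions (including itself), rather than the full causal prefix. This is illustrative only, not Mistral's actual implementation; the function name and tiny sizes are chosen for demonstration.

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Boolean attention mask: True where attention is allowed.
    Query position i may attend to positions j with i - window < j <= i,
    i.e. a causal mask restricted to the last `window` tokens."""
    return [
        [(i - window) < j <= i for j in range(seq_len)]
        for i in range(seq_len)
    ]

# Visualize a toy 6-token sequence with a window of 3.
for row in sliding_window_mask(seq_len=6, window=3):
    print("".join("x" if allowed else "." for allowed in row))
```

For a 6-token sequence with a window of 3, the printed mask is lower-triangular near the start and then a diagonal band of width 3, which is why compute and memory grow linearly with sequence length instead of quadratically.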

Status: Warm
Visibility: Public
Parameters: 7B
Quantization: FP8
Context length: 4096
License: apache-2.0
Source: Hugging Face
