HuggingFaceH4/zephyr-7b-beta

HuggingFaceH4/zephyr-7b-beta is a 7-billion-parameter language model fine-tuned from Mistral-7B-v0.1 to act as a helpful assistant. It was aligned with Direct Preference Optimization (DPO) on publicly available synthetic datasets, and at release it led 7B chat models on the MT-Bench and AlpacaEval benchmarks. The model is primarily English-focused and is intended for chat-based applications, where it performs competitively with larger models on general conversational tasks.
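
As a quick illustration of chat use, below is a minimal local-inference sketch using the Transformers text-generation pipeline. It assumes transformers and torch are installed and a GPU with enough memory is available; the system message, prompt, and sampling settings are illustrative only.

```python
import torch
from transformers import pipeline

# Load the model as a chat-style text-generation pipeline.
# bfloat16 + device_map="auto" assumes a GPU with sufficient memory.
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr was trained with a chat template; apply it to format the conversation.
messages = [
    {"role": "system", "content": "You are a friendly, helpful assistant."},
    {"role": "user", "content": "Explain DPO fine-tuning in one paragraph."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Sampling parameters here are illustrative defaults, not tuned recommendations.
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```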

Status: Warm
Visibility: Public
Parameters: 7B
Precision: FP8
Context length: 8192
License: MIT
Source: Hugging Face
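
If the model is reachable through a hosted endpoint (as the Warm status above suggests), a remote chat call can be sketched with the huggingface_hub InferenceClient. Endpoint availability, authentication, and token limits are assumptions here, not guarantees of this listing.

```python
from huggingface_hub import InferenceClient

# Assumes a serverless or hosted endpoint for this model is available and that
# an access token is configured (e.g. via the HF_TOKEN environment variable).
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize what DPO fine-tuning does in two sentences."}],
    max_tokens=256,   # keep prompt + completion within the 8192-token context window
    temperature=0.7,
)
print(response.choices[0].message.content)
```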
