haoranxu/ALMA-7B-R

ALMA-7B-R is a 7-billion-parameter language model developed by Haoran Xu and collaborators, fine-tuned specifically for machine translation. It applies Contrastive Preference Optimization (CPO) on top of the ALMA base models, which lets it match or exceed GPT-4 and WMT competition winners on translation benchmarks. The model is optimized for accurate, high-quality machine translation.
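Since the model is distributed on Hugging Face, it can be used with the `transformers` library. The sketch below is a minimal example, assuming `transformers` and `torch` are installed and that the model follows the ALMA prompt convention ("Translate this from X to Y:"); treat the template and generation settings as illustrative rather than definitive.

```python
# Sketch of translating with ALMA-7B-R via Hugging Face transformers.
# The prompt template below is the ALMA-style framing; verify it against
# the model card before relying on it.

def build_alma_prompt(source_lang: str, target_lang: str, text: str) -> str:
    """Build an ALMA-style translation prompt."""
    return (
        f"Translate this from {source_lang} to {target_lang}:\n"
        f"{source_lang}: {text}\n"
        f"{target_lang}:"
    )

if __name__ == "__main__":
    # Loading and generation require a GPU with enough memory;
    # torch and transformers are assumed to be available.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "haoranxu/ALMA-7B-R"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = build_alma_prompt("English", "German", "The weather is nice today.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, num_beams=5)
    # Decode only the newly generated tokens (the translation itself).
    translation = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(translation.strip())
```

The beam-search setting (`num_beams=5`) mirrors common practice for translation decoding; greedy or sampling-based decoding also works but may reduce translation quality.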

Status: Warm
Visibility: Public
Parameters: 7B
Quantization: FP8
Context length: 4096
License: mit
Source: Hugging Face
