haoranxu/ALMA-13B-Pretrain

ALMA-13B-Pretrain by haoranxu is a 13-billion-parameter language model based on LLaMA-2, further pre-trained on 12 billion monolingual tokens. It serves as the foundation for the ALMA (Advanced Language Model-based trAnslator) series of machine-translation models. On its own it is not a translator: it requires subsequent LoRA fine-tuning on parallel data, which distinguishes it from the ready-to-use ALMA translation checkpoints.
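Once a LoRA adapter has been trained on parallel data, the ALMA models are prompted with a fixed zero-shot translation template. A minimal sketch of building that prompt (the helper name is illustrative and not part of any ALMA API; the template follows the format described in the ALMA paper, and the commented loading steps assume the Hugging Face transformers and PEFT libraries):

```python
# Sketch: build the zero-shot translation prompt used by the ALMA models.
# build_alma_prompt is an illustrative helper, not part of any ALMA API.

def build_alma_prompt(source_lang: str, target_lang: str, text: str) -> str:
    """Return an ALMA-style translation prompt for a single sentence."""
    return (
        f"Translate this from {source_lang} to {target_lang}:\n"
        f"{source_lang}: {text}\n"
        f"{target_lang}:"
    )

# After LoRA fine-tuning (e.g. with PEFT), generation would look roughly like:
#   model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain")
#   model = PeftModel.from_pretrained(model, "<path-to-lora-adapter>")
#   tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain")
#   prompt = build_alma_prompt("English", "German", "Hello, world!")
#   inputs = tokenizer(prompt, return_tensors="pt")
#   outputs = model.generate(**inputs, max_new_tokens=128)

if __name__ == "__main__":
    print(build_alma_prompt("English", "German", "Hello, world!"))
```

The prompt keeps the source sentence on its own labeled line and ends with the target-language label, so the model's continuation is the translation itself.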

Status: Warm
Visibility: Public
Parameters: 13B
Quantization: FP8
Context length: 4096
License: MIT
Source: Hugging Face