bosonai/Higgs-Llama-3-70B
Higgs-Llama-3-70B is a 70-billion-parameter language model from bosonai, post-trained from Meta-Llama-3-70B with an 8192-token context length. It is tuned specifically for role-playing scenarios while retaining strong general-domain instruction-following and reasoning capabilities. Training combines supervised fine-tuning with iterative preference optimization, including a dedicated strategy for aligning the model's behavior with system messages. The model performs competitively on challenging benchmarks such as MMLU-Pro and Arena-Hard, often outperforming its base model.
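Because the model is aligned to follow system messages, a role-play persona is typically placed in the system turn. The following is a minimal sketch of chat-style inference with the Hugging Face `transformers` library, assuming the checkpoint is hosted on the Hub under the repo id above and uses the standard Llama-3 chat template; the persona text and sampling parameters are illustrative, not from the source.

```python
# Minimal sketch: load the model and steer it with a role-play system message.
# Assumes the checkpoint is available on the Hugging Face Hub under the repo id
# shown above and ships a standard Llama-3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bosonai/Higgs-Llama-3-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 70B weights; multi-GPU or offloading is usually needed
    device_map="auto",
)

# The persona goes in the system message, which the model is tuned to follow.
messages = [
    {"role": "system", "content": "You are a gruff medieval blacksmith. Stay in character."},
    {"role": "user", "content": "Can you forge me a sword by tomorrow?"},
]

# Render the conversation with the model's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```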