princeton-nlp/Llama-3-Base-8B-SFT-SimPO

princeton-nlp/Llama-3-Base-8B-SFT-SimPO is an 8-billion-parameter language model released by the Princeton NLP group (princeton-nlp). Starting from the Llama-3 8B base model, it is first instruction-tuned with Supervised Fine-Tuning (SFT) and then aligned with Simple Preference Optimization (SimPO), a reference-free preference-optimization method. It is intended as a general-purpose instruction-following model for natural language processing tasks.
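The SimPO objective mentioned above uses the length-normalized log-probability of a response as an implicit reward and compares the chosen and rejected responses with a target margin. A minimal sketch of the per-pair loss (the hyperparameter values `beta` and `gamma` here are illustrative defaults, not this model's training configuration):

```python
import math

def simpo_loss(logp_chosen: float, logp_rejected: float,
               len_chosen: int, len_rejected: int,
               beta: float = 2.0, gamma: float = 0.5) -> float:
    """Per-pair SimPO loss: -log sigmoid(beta * (avg logp diff) - gamma).

    logp_* are summed token log-probabilities of each response under the
    policy; dividing by length gives the length-normalized implicit reward.
    """
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    margin = reward_chosen - reward_rejected - gamma
    # -log(sigmoid(margin)) written as log(1 + exp(-margin)) for clarity
    return math.log1p(math.exp(-margin))
```

The loss shrinks toward zero as the policy assigns the chosen response a higher average log-probability than the rejected one by more than the margin `gamma`.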

Status: Warm
Visibility: Public
Parameters: 8B
Precision: FP8
Context length: 8192
Source: Hugging Face
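A usage sketch with the `transformers` library follows. The prompt layout assumes the model uses the standard Llama 3 chat format (header/eot special tokens); in practice the checkpoint's own `tokenizer.apply_chat_template` should be preferred, and the exact template may differ:

```python
MODEL_ID = "princeton-nlp/Llama-3-Base-8B-SFT-SimPO"

def format_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Llama 3 chat layout (assumed)."""
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    # Heavy imports and the 8B download are kept behind the main guard.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(format_prompt("What is SimPO?"), return_tensors="pt")
    inputs = inputs.to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```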