princeton-nlp/Llama-3-Base-8B-SFT-IPO

princeton-nlp/Llama-3-Base-8B-SFT-IPO is an 8-billion-parameter language model released by Princeton NLP. As the name indicates, it starts from a supervised fine-tuned (SFT) Llama-3 8B base model and is further trained with IPO (Identity Preference Optimization), a preference-optimization method that regresses the reference-normalized log-probability margin between preferred and dispreferred responses toward a fixed target. It is intended for general language generation, with preference optimization applied to improve response quality.
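As a sketch of what the training objective looks like, the snippet below follows the standard IPO formulation (Azar et al.), where the margin is pushed toward 1/(2*beta) with a squared loss; the function name, variable names, and beta value are illustrative assumptions, not taken from the actual training code.

```python
import torch

def ipo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Reference-normalized log-ratio margin between the chosen and
    # rejected responses (log-probs are typically length-averaged in IPO).
    margin = (policy_chosen_logps - policy_rejected_logps) - \
             (ref_chosen_logps - ref_rejected_logps)
    # Squared-error regression of the margin toward 1 / (2 * beta).
    return ((margin - 1.0 / (2.0 * beta)) ** 2).mean()

# Dummy per-example log-probabilities for a batch of two preference pairs.
lp = lambda *vals: torch.tensor(vals)
loss = ipo_loss(lp(-1.0, -1.2), lp(-2.0, -1.9), lp(-1.5, -1.4), lp(-1.8, -1.7))
print(loss.item())
```

Note that, unlike reference-free methods such as SimPO, IPO keeps a frozen reference model (here, the SFT checkpoint) in the loss.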

Status: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 8192 tokens
Source: Hugging Face
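
A minimal sketch for loading the checkpoint locally with the Hugging Face transformers API; the dtype, prompt, and generation settings below are illustrative assumptions, and the FP8 tag above describes the hosted serving precision rather than the checkpoint weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Llama-3-Base-8B-SFT-IPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; pick what your hardware supports
    device_map="auto",
)

prompt = "Explain preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```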