SakanaAI/DiscoPOP-zephyr-7b-gemma

SakanaAI's DiscoPOP-zephyr-7b-gemma is an 8.5-billion-parameter language model fine-tuned from HuggingFaceH4/zephyr-7b-gemma-sft-v0.1, with an 8192-token context length. Rather than standard Direct Preference Optimization (DPO), it was trained with DiscoPOP (Discovered Preference Optimization), a novel preference-optimization objective discovered automatically rather than hand-designed. The model targets general language tasks, leveraging this objective for improved preference alignment.
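The DiscoPOP paper describes its discovered objective (the "log-ratio modulated loss") as a blend of DPO's logistic loss and an exponential loss, gated by a sigmoid of the scaled log-ratio difference. Below is a rough NumPy sketch of that idea; the gating direction and the temperature value `tau=0.05` are assumptions for illustration, not taken from this model card:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dpo_logistic_loss(rho):
    # Standard DPO loss on rho = beta * (log-ratio of chosen - log-ratio of rejected).
    return np.log1p(np.exp(-rho))

def exponential_loss(rho):
    # Exponential preference loss on the same log-ratio difference.
    return np.exp(-rho)

def lrml_loss(rho, tau=0.05):
    # Log-ratio modulated loss: a sigmoid gate blends the two base losses.
    # Gate direction and tau are illustrative assumptions.
    gate = sigmoid(rho / tau)
    return gate * exponential_loss(rho) + (1.0 - gate) * dpo_logistic_loss(rho)
```

As with DPO, the loss decreases as the policy assigns relatively more probability to the chosen response than the rejected one (larger `rho`), but the sigmoid gate changes which base loss dominates around `rho = 0`.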

Status: Warm
Visibility: Public
Parameters: 8.5B
Precision: FP8
Context length: 8192 tokens
License: gemma
Source: Hugging Face