jondurbin/airoboros-13b

jondurbin/airoboros-13b is a 13-billion-parameter LLaMA-based model fine-tuned by jondurbin on a completely synthetic dataset. The model was an experiment in generating a broader range of training data, including potentially 'harmful' content, by using a 'jailbreak' prompt with OpenAI's GPT-4 and GPT-3.5-turbo. It achieves a GPT-3.5 adjusted score of 98.087 in internal evaluations, indicating strong performance relative to GPT-3.5, but the developer advises against general use because of output-quality concerns and the potential for harmful content.

Visibility: Public
Parameters: 13B
Quantization: FP8
Context length: 4096
License: cc-by-nc-4.0
Source: Hugging Face
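
Below is a minimal sketch of loading the Hugging Face checkpoint directly with the transformers library. It assumes the merged weights at jondurbin/airoboros-13b, fp16 inference on a local GPU (the FP8 figure above refers to the hosted deployment), and a Vicuna-style prompt template commonly used with airoboros models; confirm the exact template against the model card before relying on it.

```python
# Minimal sketch: running jondurbin/airoboros-13b locally with transformers.
# Assumes sufficient GPU memory for a 13B model in fp16; adjust dtype/device_map as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Vicuna-style prompt template (an assumption here; check the model card).
prompt = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input. "
    "USER: Explain what a synthetic instruction dataset is. ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```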