jondurbin/airoboros-l2-13b-gpt4-m2.0
jondurbin/airoboros-l2-13b-gpt4-m2.0 is a 13 billion parameter Llama-2 based instruction-tuned language model developed by jondurbin. It is fine-tuned on synthetic instructions from the airoboros project: the 2.0 dataset, generated exclusively with the June (0613) version of GPT-4, merged with the earlier 1.4.1 dataset (hence "m2.0"). The model targets advanced instruction following, including context-obedient question answering, complex code generation, agent/function calling, and chain-of-thought reasoning.
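A minimal usage sketch with the Hugging Face transformers library is shown below. The single-turn prompt template is an assumption based on the general airoboros chat convention ("A chat between a curious user and an assistant... USER: ... ASSISTANT:") and should be verified against the upstream model card; dtype and device settings will vary with your hardware.

```python
# Minimal sketch: loading and prompting the model with Hugging Face transformers.
# The prompt template below is an assumption following the airoboros chat style;
# confirm the exact wording against the upstream model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-l2-13b-gpt4-m2.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # 13B weights in fp16 need roughly 26 GB of GPU memory
    device_map="auto",
)

# Assumed airoboros-style single-turn prompt.
prompt = (
    "A chat between a curious user and an assistant. "
    "The assistant gives helpful, detailed, accurate responses to the user's input. "
    "USER: Write a Python function that reverses a string. ASSISTANT: "
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```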