jondurbin/airoboros-70b-3.3
jondurbin/airoboros-70b-3.3 is a 70-billion-parameter instruction-tuned causal language model by jondurbin, fine-tuned from Meta's Llama-3-70B-Instruct. It is trained primarily on synthetic data generated by airoboros, alongside several other datasets, and is optimized for context-obedient question answering, summarization, and complex instruction following, including agent-style execution planning and function calling. The model uses the Llama-3 instruct chat template and has an 8192-token context length, making it well suited to tasks that require precise adherence to provided context and structured outputs.
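As a sketch of how a context-obedient prompt might be assembled for this model: the `BEGININPUT`/`BEGININSTRUCTION` block structure follows the airoboros prompting convention, and the surrounding special tokens are the standard Llama-3 instruct chat template. Exact formatting should be verified against the model card; this helper function is illustrative, not part of any official API.

```python
def build_context_obedient_prompt(context: str, question: str,
                                  system: str = "You are a helpful assistant.") -> str:
    """Wrap an airoboros-style context-obedient prompt in the
    Llama-3 instruct chat template.

    Assumption: block keywords follow the airoboros convention
    (BEGININPUT/ENDINPUT, BEGININSTRUCTION/ENDINSTRUCTION); special
    tokens follow the Llama-3 instruct format.
    """
    # The model is trained to answer only from the text supplied
    # between BEGININPUT and ENDINPUT.
    user_msg = (
        "BEGININPUT\n"
        "BEGINCONTEXT\n"
        "ENDCONTEXT\n"
        f"{context}\n"
        "ENDINPUT\n"
        "BEGININSTRUCTION\n"
        f"{question}\n"
        "ENDINSTRUCTION"
    )
    # Llama-3 instruct template: system turn, user turn, then an open
    # assistant header so generation continues as the assistant.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_context_obedient_prompt(
    "Blueberries are now green.",
    "What color are blueberries?",
)
print(prompt)
```

In practice the same result can be obtained by passing a system/user message list to the tokenizer's chat-templating support, which applies the model's stored template automatically.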