pankajmathur/orca_mini_v8_1_70b
pankajmathur/orca_mini_v8_1_70b is a 70-billion-parameter instruction-tuned causal language model, fine-tuned by pankajmathur from the Llama-3.3-70B-Instruct base model. It is designed as a general-purpose model, trained with a variety of Supervised Fine-Tuning (SFT) datasets. It supports advanced features such as tool use and is intended as a foundation for further fine-tuning, DPO, PPO, or ORPO tuning, and model merges.
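As an illustration, the model can be loaded and queried through the standard Hugging Face transformers chat-template workflow inherited from Llama-3.3-70B-Instruct. The sketch below assumes the checkpoint ships the usual Llama-3.3 chat template; the system prompt and generation settings are placeholders, not values documented for this model.

```python
# Minimal sketch: load pankajmathur/orca_mini_v8_1_70b with transformers and run
# a chat-style prompt. The system prompt and sampling parameters here are
# illustrative assumptions, not settings recommended by the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pankajmathur/orca_mini_v8_1_70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 70B weights: expect to need multiple GPUs or offloading
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},  # placeholder system prompt
    {"role": "user", "content": "Explain supervised fine-tuning (SFT) in two sentences."},
]

# Format the conversation with the model's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```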