pankajmathur/orca_mini_v9_3_70B
pankajmathur/orca_mini_v9_3_70B is a 70-billion-parameter instruction-tuned language model, fine-tuned by pankajmathur on various supervised fine-tuning (SFT) datasets on top of the Llama-3.3-70B-Instruct base model. It is a general-purpose model with a 32,768-token context length, and it is intended as a foundation for further customization, such as full fine-tuning, DPO, PPO, or ORPO tuning, or model merges, encouraging developers to build their own enhancements on top of it.
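Below is a minimal inference sketch, assuming the model loads via Hugging Face transformers and follows the standard Llama-3.3 chat template; the system prompt text is illustrative, not the model's official one, and a 70B model in bfloat16 will typically require multiple GPUs.

```python
# Minimal inference sketch (assumptions: standard Llama-3.3 chat template,
# loadable via transformers; system prompt below is hypothetical).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "pankajmathur/orca_mini_v9_3_70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision; still ~140 GB of weights
    device_map="auto",           # shard layers across available GPUs
)

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful assistant."},
    {"role": "user", "content": "Summarize the idea behind instruction tuning in two sentences."},
]

# apply_chat_template renders the conversation in the model's chat format
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint can serve as the starting point for the further tuning mentioned above, for example by passing `model_id` to an SFT or DPO trainer instead of running inference.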