pankajmathur/orca_mini_v8_0_70b
pankajmathur/orca_mini_v8_0_70b is a 70-billion-parameter instruction-tuned language model fine-tuned from Llama-3.3-70B-Instruct. It was trained on a variety of Supervised Fine-Tuning (SFT) datasets and is designed as a general-purpose model for a broad range of applications. It supports a context length of 32,768 tokens and is intended to serve as a foundation for further fine-tuning, including DPO, PPO, or ORPO tuning, as well as model merges.
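
Below is a minimal inference sketch using the Hugging Face `transformers` library. It assumes the checkpoint is available on the Hub under this repo id and that Llama-3.3-style chat templating applies; the system prompt and generation parameters are illustrative, not prescribed by the model card.

```python
# Minimal inference sketch with transformers.
# Assumes the checkpoint "pankajmathur/orca_mini_v8_0_70b" is on the Hub
# and that you have enough GPU memory (a 70B model typically requires
# multiple GPUs or quantization).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pankajmathur/orca_mini_v8_0_70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # shard layers across available GPUs
)

# Llama-3.3-style models expect a chat template; build the prompt with it.
# The system message here is a hypothetical example.
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Explain gradient descent in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the model is positioned as a base for further tuning, the same `from_pretrained` call can also serve as the starting point for DPO, PPO, or ORPO pipelines or for model merging workflows.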