codelion/Llama-3.3-70B-o1
codelion/Llama-3.3-70B-o1 is a 70-billion-parameter Llama-3.3 model fine-tuned by codelion for enhanced reasoning capabilities. The model specializes in generating Chain-of-Thought (CoT) style reasoning traces, emitting an explicit 'thinking' process before the final solution. It is optimized for tasks that require step-by-step problem-solving, making it suitable for complex analytical queries. The model has a 32,768-token context length and was fine-tuned with QLoRA.
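
As an illustration of how such a model might be queried, below is a minimal inference sketch using the Hugging Face transformers library. The generation settings, the example prompt, and the way the 'thinking' trace is delimited in the output are assumptions for demonstration, not documented behavior of this model.

```python
# Minimal inference sketch (assumptions: chat-template availability, bf16 weights,
# and enough GPU memory / device_map="auto" sharding for a 70B model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codelion/Llama-3.3-70B-o1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 70B model generally needs multi-GPU or quantization
    device_map="auto",
)

# Example prompt; the model is expected to produce a reasoning trace before its answer.
messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens (the reasoning trace plus the final solution).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```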