open-thoughts/OpenThinker2-7B
OpenThinker2-7B is a 7.6 billion parameter instruction-tuned language model from the open-thoughts team, fine-tuned from Qwen2.5-7B-Instruct. It is tuned specifically for reasoning, reaching performance on par with other state-of-the-art 7B models on benchmarks such as AIME24, AMC23, and MATH500. Its strengths in complex problem-solving and mathematical reasoning make it well suited to applications that require advanced analytical capabilities.
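A minimal usage sketch is shown below, assuming the model follows the standard Hugging Face transformers chat interface inherited from Qwen2.5-7B-Instruct; the prompt and generation settings are illustrative only.

```python
# Sketch: loading open-thoughts/OpenThinker2-7B via transformers.
# Assumes the checkpoint ships a chat template compatible with
# tokenizer.apply_chat_template (as Qwen2.5-based models typically do).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Reasoning models tend to emit long chains of thought before the final
# answer, so allow a generous token budget.
messages = [
    {"role": "user", "content": "What is the sum of the first 20 positive odd integers?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```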