WiroAI/OpenR1-Qwen-7B-Turkish
WiroAI/OpenR1-Qwen-7B-Turkish is a 7.6-billion-parameter language model developed by WiroAI, fine-tuned from Qwen2.5-7B-Instruct on a Turkish dataset. The model is optimized for improved reasoning in Turkish, addressing the weaker performance that general-purpose models often show in lower-resource languages. It supports a context length of 131,072 tokens, making it suitable for complex Turkish-language tasks that require extensive context.
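
A minimal usage sketch with the Hugging Face transformers library. It assumes the repository ships a chat template with its tokenizer, as is typical for Qwen2.5-Instruct derivatives; the Turkish prompt is purely illustrative.

```python
# Sketch: load WiroAI/OpenR1-Qwen-7B-Turkish and generate a response to a Turkish prompt.
# Assumes the tokenizer bundles a chat template; device_map="auto" requires `accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "WiroAI/OpenR1-Qwen-7B-Turkish"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place layers on available GPU(s)/CPU
)

# Example Turkish prompt: "What is the sum of the interior angles of a triangle? Explain step by step."
messages = [
    {"role": "user", "content": "Bir üçgenin iç açıları toplamı kaç derecedir? Adım adım açıkla."}
]

# Build the model's chat-formatted input and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and print only the newly produced tokens.
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```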