Qwen/Qwen2.5-14B-Instruct
Qwen2.5-14B-Instruct is a 14.7 billion parameter instruction-tuned causal language model from the Qwen team, supporting a 131,072 token context length. Building on the Qwen2 series, it significantly improves coding, mathematics, instruction following, and long-text generation. It also excels at understanding structured data and producing structured outputs such as JSON, making it well suited to complex conversational AI and data processing tasks.
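A minimal usage sketch with Hugging Face transformers, assuming a recent transformers version with chat-template support; the system prompt, user message, and generation settings below are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-14B-Instruct"

# Load the instruction-tuned model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # select bf16/fp16 automatically where supported
    device_map="auto",    # place layers on available devices
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative request for structured (JSON) output.
messages = [
    {"role": "system", "content": "You are a helpful assistant that answers in JSON."},
    {"role": "user", "content": "List three prime numbers as a JSON array under the key 'primes'."},
]

# Apply the chat template and generate a reply.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens before decoding the model's response.
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```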