Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1
Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1 is a 14-billion-parameter language model created by Triangle104, produced by merging two DeepSeek-R1-Distill-Qwen-14B variants with the SLERP (spherical linear interpolation) method. The model targets general language tasks, with the merge intended to balance the strengths of its two parent models across benchmarks. Its 32,768-token context length makes it suitable for applications that require moderate-length context understanding and generation.
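The idea behind a SLERP merge is to interpolate each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve the magnitude of the weights better than plain averaging. As a rough illustration (not the exact merge tooling used for this model, which is typically done with a tool such as mergekit), a minimal sketch of SLERP on a pair of tensors might look like:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t is the interpolation factor in [0, 1]; t=0 returns v0, t=1 returns v1.
    Falls back to linear interpolation when the tensors are nearly parallel.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Angle between the two (normalized) flattened tensors.
    n0 = v0f / (np.linalg.norm(v0f) + eps)
    n1 = v1f / (np.linalg.norm(v1f) + eps)
    dot = np.clip(np.dot(n0, n1), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel: SLERP degenerates to linear interpolation.
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * v0f + (np.sin(t * omega) / so) * v1f
    return out.reshape(v0.shape).astype(v0.dtype)

# Interpolating halfway between two orthogonal unit vectors stays on the
# unit sphere, unlike plain averaging (which would shrink the norm).
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)
```

In a real merge this interpolation is applied layer by layer to the two parent checkpoints, often with a per-layer interpolation schedule rather than a single global `t`.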