DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K
Qwen2.5-7B-LongPO-128K is a 7-billion-parameter language model from DAMO-NLP-SG, fine-tuned with the LongPO method to extend its context window to 128K tokens. A key differentiator from other long-context models is that it maintains strong long-context alignment and performance without degrading its short-context capabilities. It is intended for applications that need robust performance across both short and extended context windows, such as document summarization, long-form Q&A, and complex reasoning over large texts.
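A minimal usage sketch follows, assuming the checkpoint is published on the Hugging Face Hub under the model ID above and loads through the standard `transformers` causal-LM API (the prompt text and generation settings are illustrative, not prescribed by the model card):

```python
# Hedged sketch: loading the model and running one chat-style generation.
MODEL_ID = "DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K"

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Run a single chat-style generation with the long-context model."""
    # transformers/torch are imported lazily so this module can be inspected
    # without the (large) dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # shard the 7B weights across available devices
    )

    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # The 128K window means `inputs` may hold an entire long document;
    # generation itself works the same as with any causal LM.
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For inputs approaching the full 128K window, plan for substantial GPU memory; an efficient attention backend (e.g. `attn_implementation="flash_attention_2"` in `from_pretrained`, if available in your environment) can reduce the cost of very long prompts.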