EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1
EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1 is a 72.7-billion-parameter, full-parameter finetune of Qwen2.5-72B, developed by Kearm, Auri, and Cahvay. Optimized for roleplay and storywriting, the model is trained on a greatly expanded mixture of synthetic and natural data, including the Celeste 70B 0.1 data mixture. It shows notable improvements in instruction following, long-context understanding, and overall coherence, and its 131,072-token context length makes it well suited to long-form creative text generation.
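
As an illustration, here is a minimal sketch of loading and prompting the model with the Hugging Face transformers library. The prompt content is hypothetical, and running the full 72.7B-parameter checkpoint locally assumes substantial GPU memory (e.g. several high-memory GPUs sharded via `device_map="auto"`); quantized or API-hosted variants may be more practical.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # shard the weights across available GPUs
)

# Qwen2.5-style chat formatting via the tokenizer's chat template.
# The messages below are illustrative only.
messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Open a scene in a rain-soaked neon city."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```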