EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2
EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2 is a 72.7 billion parameter, full-parameter fine-tuned Qwen2.5 model developed by Kearm, Auri, and Cahvay, with a context length of 131072 tokens. This model specializes in roleplay and story writing, leveraging an expanded mixture of synthetic and natural data for enhanced versatility and creativity. It is optimized for generating engaging narrative content and complex character interactions, making it suitable for advanced creative text generation tasks.
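For reference, a minimal usage sketch is below. It assumes the public Hugging Face repo name matches the title above and that the fine-tune keeps Qwen2.5's ChatML chat template; quantization and offloading are omitted, so as written it needs multiple high-memory GPUs. Sampling settings are illustrative only.

```python
# Sketch: load the model and generate a short creative passage.
# Assumptions: repo name matches the title; ChatML template via apply_chat_template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a vivid, character-driven storyteller."},
    {"role": "user", "content": "Open a noir scene in a rain-soaked city."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=400, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```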
5.0 based on 2 reviews
My favorite of the 70B+ models; it always surprises me with its prose. Takes direction really well when needed. Not great at task-oriented instructions ("write 2 paragraphs only"), but it's not fine-tuned for that, so it gets a pass from me.
It does have a problem with emitting stray code fences ("```") and section separators ("---"), though. Negative logit biasing should help once it's available in the API; see the sketch after the tags below.
Roleplay
Novel Writing
Reasoning
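On the logit-biasing point above: a rough sketch of how that suppression could look once a provider exposes `logit_bias`, using an OpenAI-compatible client. The endpoint URL, API key, and model slug are placeholders, and the bias values assume the problem strings map cleanly onto tokens in the Qwen2.5 tokenizer; multi-token cases may need finer handling.

```python
# Sketch: discourage code-fence and separator tokens with a negative logit bias.
# Assumes an OpenAI-compatible endpoint that accepts `logit_bias` (not all do).
from openai import OpenAI
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2")

# Look up token IDs for the problematic strings in the model's own tokenizer.
bias = {}
for text in ["```", "---"]:
    for tok_id in tokenizer.encode(text, add_special_tokens=False):
        bias[tok_id] = -100  # strongly discourage these tokens

client = OpenAI(base_url="https://example-inference-host/v1", api_key="YOUR_KEY")  # placeholders
resp = client.chat.completions.create(
    model="EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2",  # slug may differ per provider
    messages=[{"role": "user", "content": "Continue the scene in two paragraphs."}],
    logit_bias={str(k): v for k, v in bias.items()},
    max_tokens=512,
)
print(resp.choices[0].message.content)
```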
Best 72B I've ever used, straight up.
I really liked the writing style of the 14B when I first used it, but it wasn't very smart.
This fixes everything.
Roleplay
Reasoning
Novel Writing