EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0

EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0 is a 72.7-billion-parameter, full-parameter finetune of Qwen2.5-72B, developed by Kearm and Auri. Optimized for roleplay and storywriting, it is trained on a diverse mixture of synthetic and natural datasets, including an expanded version of the Celeste 70B 0.1 data mixture, with the goal of improving versatility, creativity, and narrative 'flavor' in generated text. The model supports a context length of 131,072 tokens.

Status: Warm
Visibility: Public
Parameters: 72.7B
Precision: FP8
Context length: 131,072 tokens
License: qwen
Source: Hugging Face
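
Since the model is a Qwen2.5-72B finetune, it can be loaded with the standard Hugging Face transformers causal-LM path. The sketch below is illustrative only: the dtype, device mapping, prompt, and sampling parameters are assumptions, not settings recommended by the model authors.

```python
# Minimal loading-and-generation sketch for EVA-Qwen2.5-72B-v0.0.
# Assumes transformers is installed and enough GPU memory is available
# to shard a 72B-parameter model; all generation settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Write the opening paragraph of a mystery set on a generation ship."},
]

# Qwen2.5-based models ship a chat template; apply it, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```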
