EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 is a 32.8-billion-parameter, full-parameter fine-tune of Qwen2.5-32B, developed by Kearm, Auri, and Cahvay. The model specializes in roleplay and storywriting, trained on a diverse mixture of synthetic and natural datasets. It is tuned for versatility, creativity, and nuanced narrative output, making it well suited to applications that require advanced conversational and creative text generation.
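A minimal usage sketch with Hugging Face `transformers` might look as follows. This is an assumption-laden example, not official usage guidance from the model authors: the ChatML prompt format is assumed because the Qwen2.5 family uses it, and the sampling parameters (`temperature`, `max_new_tokens`) are illustrative placeholders, not recommended settings.

```python
MODEL_ID = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"


def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt, the format assumed here
    because the Qwen2.5 base models use it."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


def main() -> None:
    # Imported here so the prompt helper above stays dependency-free;
    # loading the full 32B model in bf16 needs roughly 65 GB of memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    prompt = build_chatml_prompt(
        "You are a creative storyteller.",
        "Write an opening line for a mystery novel.",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Illustrative sampling settings; tune for your own use case.
    out = model.generate(
        **inputs, max_new_tokens=128, temperature=0.8, do_sample=True
    )
    print(
        tokenizer.decode(
            out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
        )
    )


if __name__ == "__main__":
    main()
```

Alternatively, `tokenizer.apply_chat_template` can produce the same prompt structure without hand-writing the ChatML markers.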