e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf
e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf is a 70-billion-parameter LLaMa-based model developed by e-n-v-y and fine-tuned with an "Elarablation" process intended to reduce repetitive phrasing and common AI-generated writing "slop." Built from a merge of 20 specialized models, it aims to improve creative-writing quality by minimizing predictable patterns and producing more natural prose. It is designed for applications that need high-quality, less repetitive text generation, particularly creative writing and roleplay, and supports a context length of 32,768 tokens.
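A minimal usage sketch, assuming the weights are published under the repo id above and load through the standard Hugging Face transformers API; a 70B model generally needs multiple GPUs or quantized weights to fit in memory, and the sampling settings shown are illustrative rather than values recommended by the model author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "e-n-v-y/Legion-V2.1-LLaMa-70B-Elarablated-v0.8-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available GPUs
)

# Illustrative creative-writing prompt (not from the model card).
prompt = "The lighthouse keeper had not spoken to another soul in three years, until"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```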