ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1

ReadyArt/L3.3-The-Omega-Directive-70B-Unslop-v2.1 is a 70-billion-parameter language model fine-tuned from Steelskull/L3.3-Shakudo-70b, with a 32768-token context window. It is designed for extreme roleplay and narrative coherence, trained on a fully "unslopped" 39M-token dataset for enhanced unalignment and character integrity. The model targets long-form, multi-character scenarios with strong instruction following and reduced repetition.

Parameters: 70B
Precision: FP8
Context length: 32768 tokens
License: llama3.3