ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0 is a 24-billion-parameter language model based on anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only, fine-tuned by gecfdo and sleepdeprived3. It offers a 128K context window and was trained on an expanded 43M-token dataset curated to contain no 'slop', improving coherence and supporting extreme roleplay. The model excels at long-form, multi-character scenarios, follows instructions closely, and maintains character integrity without moral or ethical constraints.