MuXodious/gpt-4o-distil-Llama-3.1-8B-Instruct-PaperWitch-heresy

MuXodious/gpt-4o-distil-Llama-3.1-8B-Instruct-PaperWitch-heresy is an 8-billion-parameter instruction-tuned model derived from Llama-3.1-8B-Instruct. It was produced by MuXodious with P-E-W's Heretic abliteration engine using Magnitude-Preserving Orthogonal Ablation, and it is notable for retaining a low refusal rate (7/100) and a low KL divergence from the original model (0.0274) after heretication. The model is intended for general instruction-following tasks where reduced refusals are desired, balancing capability against the behavioral drift that ablation can introduce.
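For intuition, here is a minimal sketch of the projection at the heart of directional ablation, assuming the common formulation: a unit-norm "refusal direction" r is projected out of each weight matrix, and (in the magnitude-preserving variant) row norms are restored afterwards. The `ablate` function and the NumPy formulation are illustrative assumptions, not Heretic's actual implementation.

```python
import numpy as np

def ablate(W: np.ndarray, r: np.ndarray, preserve_magnitude: bool = True) -> np.ndarray:
    """Remove the component along refusal direction r from each row of W.

    Illustrative sketch only; Heretic's real pipeline differs in detail.
    """
    r = r / np.linalg.norm(r)            # unit-norm refusal direction
    W_abl = W - np.outer(W @ r, r)       # orthogonal projection: strip r from each row
    if preserve_magnitude:
        # Magnitude-preserving variant (assumed): rescale each row back to
        # its original norm so overall weight magnitudes are unchanged.
        old = np.linalg.norm(W, axis=1, keepdims=True)
        new = np.linalg.norm(W_abl, axis=1, keepdims=True)
        W_abl = W_abl * (old / np.maximum(new, 1e-12))
    return W_abl

# Toy check: after ablation, every row is orthogonal to the refusal direction.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
r = rng.standard_normal(8)
W2 = ablate(W, r)
print(np.allclose(W2 @ (r / np.linalg.norm(r)), 0))  # True
```

Because the rescaling multiplies each row by a scalar, the rows stay orthogonal to r, so the refusal component remains removed while row magnitudes are preserved.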

Status: Cold
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 32768 tokens
Source: Hugging Face
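A minimal usage sketch with Hugging Face transformers, assuming the model id from the title above and a standard chat-template workflow; the prompt and generation settings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MuXodious/gpt-4o-distil-Llama-3.1-8B-Instruct-PaperWitch-heresy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a GPU is assumed for reasonable speed
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what abliteration does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```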