YanLabs/gemma-3-27b-it-abliterated-normpreserve

YanLabs/gemma-3-27b-it-abliterated-normpreserve is a 27-billion-parameter causal language model based on Google's Gemma-3-27b-it, developed by YanLabs. The model has undergone norm-preserving biprojected abliteration to surgically remove refusal behaviors and safety guardrails, and is intended primarily for mechanistic interpretability research. It retains the capabilities of its base model while enabling the study of LLM safety mechanisms without traditional fine-tuning.
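To illustrate the general idea behind abliteration, the sketch below shows a simplified, hypothetical norm-preserving variant: a refusal direction is projected out of a weight matrix, and each row is then rescaled to its original L2 norm. This is a conceptual single-projection sketch only; the actual "biprojected" procedure used for this model is not documented here, and the function name and shapes are assumptions.

```python
import numpy as np

def abliterate_norm_preserving(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: remove the refusal direction r from the rows
    of weight matrix W, then restore each row's original L2 norm.

    W: (d_out, d_in) weight matrix
    r: (d_in,) refusal direction extracted from activations
    """
    r = r / np.linalg.norm(r)                       # unit refusal direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    # Project out the component of every row along r
    W_abl = W - (W @ r)[:, None] * r[None, :]
    # Rescale rows so their norms match the originals (norm preservation)
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-8))
```

After this transform the rows of the returned matrix are orthogonal to the refusal direction while keeping their original magnitudes, which is the property that distinguishes norm-preserving abliteration from a plain orthogonal projection.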

Visibility: Public
Modality: Vision (image + text input)
Parameters: 27B
Precision: FP8
Context length: 32768 tokens
License: gemma