UnfilteredAI/DAN-L3-R1-8B

UnfilteredAI/DAN-L3-R1-8B is an 8-billion-parameter, uncensored, and unrestricted language model based on DeepSeek-R1-Distill-Llama-8B, featuring a 128k-token context length. It is fine-tuned to operate without safety rails and can generate raw, explicit, and potentially harmful content. The model is intended for dark research, AI safety research, and advanced AI prototyping, exploring the boundaries of AI alignment in unmoderated environments.

Status: Warm
Visibility: Public
Parameters: 8B
Precision: FP8
Context length: 32768 tokens
License: apache-2.0
Source: Hugging Face
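
Since the model is distributed through the Hugging Face Hub, the sketch below shows one way it could be loaded for local inference. This is a minimal example assuming compatibility with the standard transformers causal LM API and the base model's chat template; the dtype, device placement, prompt, and generation parameters are illustrative assumptions, not settings documented by this listing.

```python
# Minimal sketch: loading UnfilteredAI/DAN-L3-R1-8B with the transformers library.
# Assumes a CUDA-capable GPU with enough memory for an 8B model; the dtype and
# generation settings below are illustrative, not values from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UnfilteredAI/DAN-L3-R1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; FP8 serving needs separate tooling
    device_map="auto",
)

# The base model (DeepSeek-R1-Distill-Llama-8B) ships a chat template, so it is
# assumed here that the fine-tune inherits it.
messages = [{"role": "user", "content": "Summarize the goals of AI alignment research."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```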