failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3

failspy/Smaug-Llama-3-70B-Instruct-abliterated-v3 is a 70-billion-parameter instruction-tuned language model derived from abacusai/Smaug-Llama-3-70B-Instruct, with a context length of 8192 tokens. The model has undergone an "abliteration" process, released as orthogonalized bfloat16 safetensors weights, which suppresses refusal behaviors based on research into refusal directions in LLMs. It retains the original model's knowledge and instruction-following ability while being suited to use cases that call for less restrictive, uncensored response generation.
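
At its core, the abliteration technique is directional ablation: a "refusal direction" is estimated from the model's residual-stream activations and then projected out of weight matrices that write into the residual stream. The PyTorch sketch below illustrates only that orthogonalization step under stated assumptions; it is not failspy's actual script, the extraction of the refusal direction is assumed to have happened elsewhere, and the tensors are random stand-ins for real model weights.

```python
# Minimal sketch of directional ablation ("abliteration").
# Assumption: `refusal_dir` is a refusal direction already extracted from the
# model's residual-stream activations; its extraction is not shown here.
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Orthogonalize a weight matrix that writes into the residual stream
    against `direction`, so the layer can no longer emit that direction.

    weight: (d_model, d_in) matrix, applied as y = weight @ x
    direction: (d_model,) vector in residual-stream space
    """
    r = direction / direction.norm()          # unit-norm refusal direction
    # W_new = (I - r r^T) W  =  W - r (r^T W)
    return weight - torch.outer(r, r @ weight)

# Toy example; 8192 is the hidden size of Llama-3-70B.
d_model = 8192
w_out = torch.randn(d_model, d_model, dtype=torch.bfloat16)
refusal_dir = torch.randn(d_model, dtype=torch.bfloat16)
w_out_abliterated = ablate_direction(w_out, refusal_dir)
```

In the published abliterated weights, a projection of this kind is typically applied to each matrix that writes into the residual stream (such as the attention output and MLP down-projections), which is why the release is described as "orthogonalized" weights.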

Status: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context length: 8192 tokens
License: llama3
Source: Hugging Face
