failspy/llama-3-70B-Instruct-abliterated
The failspy/llama-3-70B-Instruct-abliterated model is a 70-billion-parameter instruction-tuned language model derived from Meta's Llama-3-70B-Instruct. Its weights have been edited ("abliterated") to orthogonalize them against an estimated refusal direction, suppressing the model's tendency to refuse requests or lecture on ethics. In all other respects it retains the original Llama-3-70B-Instruct tuning, including the 8192-token context length. Its primary differentiator is this experimental reduction of refusal behavior, making it suitable for use cases where direct responses are preferred over ethical caveats.
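The weight edit described above can be illustrated with a minimal NumPy sketch. This is not the repository's actual code: it assumes a single pre-computed refusal direction vector and shows only the core projection step, applied to a toy weight matrix rather than real Llama-3 weights.

```python
import numpy as np

def orthogonalize_against(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component along direction r from the outputs of weight W.

    After this edit, the (toy) layer can no longer write along r into the
    residual stream: W <- (I - r r^T) W, with r a unit vector.
    """
    r = r / np.linalg.norm(r)           # normalize the refusal direction
    return W - np.outer(r, r) @ W       # project r out of every column

# Toy example: hypothetical 4-dim residual stream, 8 input features.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))         # stand-in for an output-projection weight
r = rng.standard_normal(4)              # stand-in for the estimated refusal direction
W_abl = orthogonalize_against(W, r)

# Every output of the edited weight is now orthogonal to r.
print(np.allclose(r @ W_abl, 0.0, atol=1e-8))   # prints True
```

In the actual technique this projection is applied to the matrices that write into the residual stream (e.g. attention and MLP output projections) at many layers, leaving all other weights untouched, which is why the model otherwise behaves like the original instruct tune.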