NaniDAO/Llama-3.3-70B-Instruct-ablated

NaniDAO/Llama-3.3-70B-Instruct-ablated is a 70-billion-parameter instruction-tuned causal language model based on Meta's Llama 3.3 architecture, with a 32,768-token context window. The model has been modified with an ablation technique that reduces refusal behavior, aiming for a more helpful, uncensored user experience. It is intended for applications that require a less restrictive AI assistant and is meant to serve a broader range of valid requests.
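
The card does not document the exact ablation procedure. One widely used approach to reducing refusals, sometimes called directional ablation or "abliteration," estimates a "refusal direction" from contrasting activations on refusal-inducing versus benign prompts and then projects that direction out of the model's hidden states (or bakes the projection into the weights). The sketch below shows only the projection step, with hypothetical tensor names and an illustrative hidden size; it is not a description of the exact method used for this model.

```python
import torch

def ablate_direction(hidden_states: torch.Tensor, refusal_direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of each hidden state along the refusal direction.

    hidden_states: (..., d_model) activations from a transformer layer.
    refusal_direction: (d_model,) direction estimated from refusal vs. benign prompts.
    """
    r = refusal_direction / refusal_direction.norm()  # unit vector
    coeff = hidden_states @ r                         # per-position component along r
    return hidden_states - coeff.unsqueeze(-1) * r    # h' = h - (h . r) r

# Stand-in tensors for real activations (hidden size is illustrative).
h = torch.randn(2, 5, 8192)   # batch x sequence x d_model
r = torch.randn(8192)
h_ablated = ablate_direction(h, r)
print(h_ablated.matmul(r / r.norm()).abs().max())  # near zero: refusal component removed
```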

Status: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context length: 32768
License: llama3
Source: Hugging Face
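
A minimal inference sketch using the standard Hugging Face transformers chat-template API. The dtype, device placement, and generation settings below are assumptions for a local run, not requirements from this card (the hosted endpoint serves the model in FP8):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NaniDAO/Llama-3.3-70B-Instruct-ablated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights for local inference
    device_map="auto",           # shard the 70B model across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the Llama 3.3 architecture in two sentences."},
]

# Build the Llama 3.3 chat prompt and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```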
