georgesung/llama2_7b_chat_uncensored
georgesung/llama2_7b_chat_uncensored is a 7-billion-parameter model based on the Llama-2 architecture, fine-tuned by georgesung on an uncensored Wizard-Vicuna conversation dataset using QLoRA. It is optimized for open-ended, unfiltered conversational responses, making it suited to applications that call for less restrictive or more direct AI interactions, and it offers a distinct conversational style compared to standard Llama-2 chat models. The model supports a 4096-token context length and is distributed in fp16 HuggingFace format.
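As a rough illustration of how a chat turn might be prepared for this model, the sketch below wraps a user message in a `### HUMAN:` / `### RESPONSE:` template. This template is an assumption based on the model card's stated fine-tuning format; verify it against the card for your copy of the weights before relying on it.

```python
# Sketch: building a single-turn prompt for georgesung/llama2_7b_chat_uncensored.
# The "### HUMAN:" / "### RESPONSE:" markers are assumed from the model card;
# adjust if the template documented with your checkpoint differs.

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the model's assumed chat template."""
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"

# Typical loading in fp16 (not run here; requires `transformers`, `torch`,
# and enough GPU memory for a 7B model):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("georgesung/llama2_7b_chat_uncensored")
# model = AutoModelForCausalLM.from_pretrained(
#     "georgesung/llama2_7b_chat_uncensored",
#     torch_dtype="float16", device_map="auto")

if __name__ == "__main__":
    print(build_prompt("What is QLoRA?"))
```

The generated string would be tokenized and passed to `model.generate`, with the model's reply appearing after the `### RESPONSE:` marker.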