carsenk/llama3.2_1b_2025_uncensored_v2
The carsenk/llama3.2_1b_2025_uncensored_v2 is a 1-billion-parameter Llama 3.2 base model fine-tuned by Carsen Klock, with a 32,768-token context length. It is optimized for uncensored responses, medical reasoning, mathematics problem solving, and code generation, and was fine-tuned on a diverse mix of specialized instruction, math, code, and uncensored conversation data. Its primary strength is providing direct, unfiltered answers across a range of technical and sensitive topics.
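The model tag follows the Ollama registry naming convention. If it is indeed hosted there, a minimal sketch for querying it from Python might look like the following; the ollama client package, the locally running Ollama server, and the example prompt are assumptions for illustration, not part of the original listing.

    # Minimal usage sketch (assumption: the model is pulled from the Ollama
    # registry under this tag and a local Ollama server is running).
    # Requires the ollama Python client: pip install ollama
    import ollama

    response = ollama.chat(
        model="carsenk/llama3.2_1b_2025_uncensored_v2",
        messages=[
            # Hypothetical prompt exercising the math-reasoning focus noted above.
            {"role": "user", "content": "Solve 37 * 24 and show your work."},
        ],
    )
    print(response["message"]["content"])

The same model tag can be used with the ollama command-line client in place of the Python snippet; the choice here is only for illustration.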