p-e-w/Qwen3-4B-Instruct-2507-heretic

p-e-w/Qwen3-4B-Instruct-2507-heretic is a 4-billion-parameter instruction-tuned causal language model derived from Qwen/Qwen3-4B-Instruct-2507, with a 262,144-token context length. It has been decensored using the Heretic tool, reducing refusals from 99/100 to 21/100 while preserving the base model's capabilities in instruction following, logical reasoning, mathematics, coding, and long-context understanding. It is intended for applications that require a highly capable, open-ended language model with reduced content restrictions.
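A minimal usage sketch with the `transformers` library, assuming the standard `AutoModelForCausalLM` loading path and the model's built-in chat template (the helper `build_prompt` is a hypothetical convenience wrapper, not part of the model card):

```python
# Sketch: load the model and run a single-turn chat generation.
# Assumes `transformers` and `torch` are installed; the model id below
# is taken from this card, everything else is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "p-e-w/Qwen3-4B-Instruct-2507-heretic"

def build_prompt(tokenizer, user_message: str) -> str:
    """Format a single-turn chat prompt via the model's chat template."""
    messages = [{"role": "user", "content": user_message}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # BF16 matches the precision listed on this card.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
    prompt = build_prompt(tokenizer, "Summarize the idea of RoPE scaling.")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    ))
```

Generation parameters (temperature, top-p, etc.) are left at library defaults here; tune them per your application.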

- Visibility: Public
- Parameters: 4B
- Precision: BF16
- 40960
- License: apache-2.0
- Hosted on Hugging Face
