Qwen/Qwen3-4B-SafeRL
Qwen/Qwen3-4B-SafeRL is a 4-billion-parameter, safety-aligned causal language model developed by Qwen and built on the Qwen3-4B architecture. It is trained with reinforcement learning using a hybrid reward function that strengthens safety against harmful prompts while minimizing unnecessary refusals. The model targets robust, helpful, and safe conversational AI applications while maintaining a 40,960-token context length.
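
Below is a minimal usage sketch, assuming the model is published on the Hugging Face Hub under the repo id Qwen/Qwen3-4B-SafeRL and follows the standard transformers causal-LM and chat-template interface; the prompt and generation settings are illustrative, not recommended defaults.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-4B-SafeRL"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick a suitable dtype automatically
    device_map="auto",    # place weights on available devices
)

messages = [
    {"role": "user", "content": "Give me three tips for safer password management."}
]

# Build the chat prompt using the tokenizer's chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```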