janhq/Jan-v1-4B

Jan-v1-4B is a 4-billion-parameter agentic language model from janhq, built on the Qwen3-4B-thinking architecture with a 40,960-token context window. Designed for enhanced reasoning and problem solving within the Jan App, it achieves 91.1% accuracy on the SimpleQA factual question-answering benchmark and is optimized for complex agentic tasks and tool use.

Visibility: Public
Parameters: 4B
Precision: BF16
Context length: 40,960 tokens
License: apache-2.0
Hosted on Hugging Face