Qwen/Qwen3-1.7B-Base

Qwen/Qwen3-1.7B-Base is a 1.7-billion-parameter causal language model from the Qwen team, pre-trained on 36 trillion tokens spanning 119 languages. It incorporates architectural refinements and a three-stage pre-training process to strengthen reasoning, coding, and long-context comprehension up to 32,768 tokens, and it is intended for broad language modeling and general knowledge acquisition, with improved stability and performance across diverse tasks.
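As a sketch of how the model can be used, here is a minimal text-generation example with the Hugging Face `transformers` library; the prompt and generation settings are illustrative and not taken from the model card:

```python
# Minimal sketch: load Qwen/Qwen3-1.7B-Base with Hugging Face transformers
# and generate a continuation. Requires `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-1.7B-Base"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Tokenize `prompt`, run greedy-by-default generation, and decode."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # torch_dtype="auto" loads the weights in the precision published
    # in the repository (BF16 for this model).
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Base (non-instruct) models are prompted as plain text continuations.
    print(generate("The capital of France is"))
```

Because this is a base model rather than an instruction-tuned one, it works best with plain completion-style prompts rather than chat formatting.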

Visibility: Public
Parameters: 2B
Precision: BF16
Context length: 40960 tokens
License: apache-2.0
Source: Hugging Face
