Klingspor/Qwen3-1.7B-SFT

Klingspor/Qwen3-1.7B-SFT is a 1.7 billion parameter supervised fine-tuned (SFT) version of Qwen3-1.7B, designed to act as the Questioner in the game of 20 Questions. The model asks strategic yes-or-no questions to deduce a secret word, and serves as an initial checkpoint for subsequent reinforcement-learning training. It is intended for multi-turn interactive language-agent research and for playing deductive question games.
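A minimal sketch of driving the model as a Questioner with the Hugging Face `transformers` library. The system prompt and chat layout below are illustrative assumptions, not the exact prompts used during fine-tuning:

```python
# Sketch: using Klingspor/Qwen3-1.7B-SFT as the Questioner in 20 Questions.
# SYSTEM_PROMPT and the message layout are assumptions for illustration.

SYSTEM_PROMPT = (
    "You are playing 20 Questions. Ask strategic yes-or-no questions "
    "to deduce the secret word."
)

def build_messages(history):
    """Turn a list of (question, answer) pairs into a chat message list."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for question, answer in history:
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    return messages

def ask_next_question(history, max_new_tokens=64):
    """Generate the next question (requires `transformers` and the weights)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    model_id = "Klingspor/Qwen3-1.7B-SFT"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    inputs = tokenizer.apply_chat_template(
        build_messages(history), add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
```

In an interactive game loop, the host appends each generated question and its yes/no answer to `history` and calls `ask_next_question` again for the next turn.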

Parameters: 1.7B
Precision: BF16
Context length: 32768 tokens
License: apache-2.0