HelpingAI/Dhanishtha-2.0-preview

Dhanishtha-2.0-preview is a 14-billion-parameter causal language model developed by HelpingAI and built on the Qwen3-14B foundation. It is the world's first model to feature Intermediate Thinking, which allows it to pause, reflect, and self-correct its reasoning multiple times within a single response. With a 32,768-token context length and support for 39+ languages, it excels at complex problem-solving, multi-step reasoning, and educational assistance by making its thought process transparent.

Parameters: 14B
Precision: FP8
Context length: 32,768 tokens
License: apache-2.0
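
A minimal usage sketch with the Hugging Face transformers library is shown below. The AutoModelForCausalLM/AutoTokenizer loading path, the chat-template call, and the generation settings are assumptions based on the Qwen3-14B base model's conventions rather than details confirmed by this card, and the example prompt is purely illustrative.

```python
# Minimal usage sketch (assumptions: standard transformers AutoModelForCausalLM /
# AutoTokenizer APIs and the chat template inherited from the Qwen3-14B base;
# generation settings are illustrative, not recommended values from the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HelpingAI/Dhanishtha-2.0-preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place layers across available GPU(s)/CPU
)

# Build a chat prompt; the intermediate-thinking traces appear inline in the
# generated text rather than through a separate API field.
messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=1024)
# Strip the prompt tokens and print only the newly generated response.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```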