laion/GLM-4_7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k-fixthink

laion/GLM-4_7-swesmith-sandboxes-with_tests-oracle_verified_120s-maxeps-131k-fixthink is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on a dataset of 'thinking preprocessed' data, suggesting it is optimized for complex reasoning and problem-solving tasks. Its 32768-token context length suits applications that require extensive contextual understanding and processing.

Status: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 32768 tokens
License: apache-2.0
Source: Hugging Face
