TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-1000x

TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-1000x is an 8-billion-parameter Qwen3 model from TeichAI, fine-tuned from unsloth/Qwen3-8B-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which the authors report gave roughly 2x faster training. With a 32,768-token context length, it targets efficient performance across general language generation tasks.
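The card does not include a usage snippet; a minimal sketch of loading the checkpoint with the Hugging Face transformers library might look like the following. The generation settings and the lazy import are illustrative choices, not documented defaults, and downloading the 8B checkpoint requires suitable hardware.

```python
# Minimal usage sketch for the model above. The model id comes from the card;
# generation settings are illustrative assumptions, not tested defaults.
MODEL_ID = "TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-1000x"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and generate a completion for `prompt`."""
    # transformers is imported inside the function so the constants above
    # can be inspected without pulling in the (large) dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # place layers on available GPU(s)/CPU
    )

    # Qwen3 tokenizers ship a chat template; apply it to a single user turn.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated text.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain knowledge distillation in two sentences."))
```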

Parameters: 8B
Precision: FP8
Context length: 32768
License: apache-2.0
