Alex007ander/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_yawning_leopard

Alex007ander/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_yawning_leopard is a 0.5-billion-parameter instruction-tuned causal language model, fine-tuned from unsloth/Qwen2.5-0.5B-Instruct. It was trained with GRPO (Group Relative Policy Optimization), the reinforcement-learning method introduced in the DeepSeekMath paper, with a focus on mathematical reasoning. The model is intended for general instruction-following tasks, and its small size makes it efficient to deploy while still benefiting from this training approach.
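The card does not include a usage snippet, so the following is a minimal inference sketch using the Hugging Face `transformers` library. The chat messages, generation settings, and the choice to load in BF16 are illustrative assumptions, not settings documented by the author.

```python
# Minimal inference sketch (assumes `transformers` and `torch` are installed;
# the prompt and generation settings below are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alex007ander/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_yawning_leopard"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type listed below
)

# Qwen2.5-Instruct models use a chat template; build the prompt from messages.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 12 * 17? Show your reasoning."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```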

- Inference status: Warm
- Visibility: Public
- Parameters: 0.5B
- Tensor type: BF16
- Context length: 131072
- Hosted on: Hugging Face
