unsloth/Qwen2-0.5B

unsloth/Qwen2-0.5B is a 0.5-billion-parameter causal language model from the Qwen2 family, repackaged by Unsloth for efficient fine-tuning. It supports a 32,768-token context length and is designed to enable significantly faster fine-tuning with lower memory consumption than standard workflows. The model is particularly suited for developers who want to quickly adapt a small language model to a specific task on resource-constrained hardware.

Parameters: 0.5B
Precision: BF16
Context length: 32,768 tokens
License: apache-2.0
Visibility: Public
Hosted on: Hugging Face
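As a minimal sketch of how such a model is typically loaded for fine-tuning, the snippet below uses Unsloth's FastLanguageModel API to fetch the checkpoint and attach LoRA adapters. This assumes the `unsloth` package is installed and a CUDA GPU is available; the specific LoRA hyperparameters (`r`, `lora_alpha`, target modules) are illustrative defaults, not values prescribed by the model card.

```python
# Sketch: load unsloth/Qwen2-0.5B with Unsloth for LoRA fine-tuning.
# Assumes: `pip install unsloth` and a CUDA-capable GPU.
from unsloth import FastLanguageModel

# Load the base model and tokenizer; 4-bit loading further reduces memory.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-0.5B",
    max_seq_length=32768,   # matches the model's context length
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
# These hyperparameters are illustrative, not from the model card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
)
```

The resulting `model` and `tokenizer` can then be passed to a standard trainer (for example, TRL's `SFTTrainer`) for supervised fine-tuning.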