unsloth/gemma-2-2b

The unsloth/gemma-2-2b model is a 2.6-billion-parameter Gemma 2 language model, quantized directly to 4-bit with bitsandbytes. Released by Unsloth, it is optimized for efficient fine-tuning, delivering significantly faster training and lower memory use than standard methods. It is well suited to developers fine-tuning Gemma 2 on resource-constrained hardware such as a Google Colab Tesla T4 GPU.
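The memory savings from 4-bit quantization can be estimated with simple arithmetic: each parameter takes 0.5 bytes at 4-bit precision versus 2 bytes in BF16. The sketch below is a rough back-of-the-envelope calculation for the weights alone; it ignores activations, the KV cache, and optimizer state, all of which add to real-world usage.

```python
# Rough weight-memory estimate for a 2.6B-parameter model.
# These figures cover model weights only, not activations or optimizer state.
PARAMS = 2.6e9  # parameter count from the model card

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Return approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

bf16_gb = weight_memory_gb(PARAMS, 2.0)   # BF16: 2 bytes per parameter
int4_gb = weight_memory_gb(PARAMS, 0.5)   # 4-bit: 0.5 bytes per parameter

print(f"BF16 weights:  ~{bf16_gb:.1f} GB")   # ~5.2 GB
print(f"4-bit weights: ~{int4_gb:.1f} GB")   # ~1.3 GB
```

At roughly 1.3 GB of weights, the 4-bit model leaves most of a 16 GB Tesla T4's memory free for LoRA adapters, gradients, and activations during fine-tuning.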

Parameters: 2.6B
Tensor type: BF16
Context length: 8192
License: gemma
Hosted on: Hugging Face
