google/gemma-2-27b

Gemma 2 27B is a 27 billion parameter, decoder-only large language model developed by Google, part of the Gemma family built from the same research as Gemini models. It is a text-to-text model available in English, designed for a variety of text generation tasks including question answering, summarization, and reasoning. Trained on 13 trillion tokens, it offers state-of-the-art performance for its size, making it suitable for deployment in resource-limited environments.

Status: Warm
Visibility: Public
Parameters: 27B
Quantization: FP8
Context length: 32,768 tokens
License: gemma
Source: Hugging Face (gated)
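As a rough illustration of why the FP8 quantization listed above matters for resource-limited deployment, the sketch below estimates the weight-memory footprint of a 27B parameter model at different precisions. These are back-of-envelope figures only: they cover model weights and ignore activations, the KV cache, and runtime overhead.

```python
def weight_memory_gib(num_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GiB for a dense model."""
    return num_params * bytes_per_param / 1024**3

PARAMS = 27e9  # Gemma 2 27B

# Full precision vs. common reduced-precision formats.
for fmt, nbytes in [("FP32", 4), ("BF16", 2), ("FP8", 1)]:
    print(f"{fmt}: ~{weight_memory_gib(PARAMS, nbytes):.0f} GiB")
```

At FP8 (1 byte per parameter), the weights fit in roughly a quarter of the memory required at FP32, which is what makes serving a model of this size practical on more modest hardware.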
