google/gemma-3-1b-pt
Gemma 3 1B PT is a 1 billion parameter pre-trained language model developed by Google DeepMind, built from the same research and technology as the Gemini models. Unlike the larger Gemma 3 variants (4B, 12B, and 27B), the 1B model does not include a vision encoder: it accepts text input and generates text output. It features a 32K token context window and multilingual support for over 140 languages. The model is well-suited for text generation tasks such as question answering, summarization, and reasoning, and its small size makes it a good fit for deployment in resource-limited environments.