Lamapi/next-4b

Lamapi/next-4b is a 4.3-billion-parameter multimodal vision-language model (VLM) based on Gemma 3, designed for efficient text and image understanding. As Türkiye's first open-source VLM, it excels at visual understanding, reasoning, and creative generation, with strong multilingual capabilities including Turkish. Optimized for low-resource deployment, it supports 8-bit quantization, allowing it to run on consumer-grade GPUs and making multimodal AI accessible on modest hardware.
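To make the deployment claim concrete, below is a minimal sketch of loading the model with 8-bit quantization through Hugging Face transformers and bitsandbytes. It assumes the checkpoint follows the standard Gemma 3 image-text-to-text loading pattern; the chat-template usage, image URL, and Turkish prompt are illustrative assumptions, not lines from the model card.

```python
# A minimal sketch, assuming the checkpoint loads through Hugging Face
# transformers with standard Gemma 3-style processing. The image URL and
# the Turkish prompt are placeholders, not taken from the model card.
import torch
from transformers import (
    AutoModelForImageTextToText,
    AutoProcessor,
    BitsAndBytesConfig,
)

model_id = "Lamapi/next-4b"

# 8-bit quantization (bitsandbytes) halves weight memory versus BF16,
# keeping the 4.3B parameters within reach of consumer-grade GPUs.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

# One image plus a Turkish question ("What do you see in this image?").
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},
            {"type": "text", "text": "Bu resimde ne görüyorsun?"},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

At 8-bit precision the weights occupy roughly 4-5 GB, versus about 9 GB in BF16, which is what makes single consumer-GPU deployment practical.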

Modality: Vision
Parameters: 4.3B
Tensor type: BF16
Context length: 32768
License: MIT