google/medgemma-4b-it
MedGemma-4b-it is a 4.3-billion-parameter instruction-tuned variant of Google's Gemma 3, trained specifically for medical text and image comprehension. It uses a SigLIP image encoder pre-trained on diverse de-identified medical data, including chest X-rays and dermatology, ophthalmology, and histopathology images. The multimodal model handles medical text generation, visual question answering, and report generation, and it outperforms the base Gemma 3 models on clinically relevant benchmarks. It supports a context length of 128K tokens for processing long medical inputs.
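As a rough illustration of how the model might be queried for medical visual question answering, here is a minimal sketch using the Hugging Face transformers image-text-to-text pipeline. The chat message format, dtype choice, and the example image URL are assumptions for illustration, not details taken from this page.

```python
# Minimal sketch: medical visual question answering with MedGemma-4b-it.
# Assumes the standard transformers "image-text-to-text" pipeline interface;
# adjust dtype/device settings to your hardware.
import torch
import requests
from PIL import Image
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,  # assumed precision
    device_map="auto",
)

# Hypothetical de-identified chest X-ray; replace with your own image.
image = Image.open(
    requests.get("https://example.com/chest_xray.png", stream=True).raw
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe the key findings in this chest X-ray."},
        ],
    }
]

# Generate a free-text answer grounded in the supplied image.
output = pipe(text=messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])
```

The same chat-style message list can carry text-only prompts (omit the image entry) for tasks such as summarizing clinical notes, within the model's 128K-token context window.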
Productivity