vanta-research/atom-v1-preview-12b

Atom V1 Preview 12B is a 12 billion parameter multimodal transformer model developed by VANTA Research, based on Google's Gemma 3 12B Instruct architecture. Fine-tuned using LoRA, it features a 131,072 token context window and a hybrid attention pattern for efficient long-context processing. This model specializes as a collaborative thought partner, excelling in exploratory dialogue, brainstorming, research assistance, and technical problem-solving with an engaging conversational style.

Visibility: Public
Modality: Vision (multimodal)
Parameters: 12B
Quantization: FP8
Context length (as listed): 32768 tokens
License: gemma
Hosted on: Hugging Face
