nlpai-lab/ko-gemma-7b-v1

nlpai-lab/ko-gemma-7b-v1 is an 8.5-billion-parameter language model from nlpai-lab, built on the Gemma 7B architecture (whose full parameter count, including embeddings, is roughly 8.5 billion). It is designed for general language understanding and generation tasks and supports a context length of 8192 tokens. Its primary differentiator is its focus on Korean, making it suitable for applications that require strong Korean-language performance.

Status: Cold
Visibility: Public
Parameters: 8.5B
Precision: FP8
Context length: 8192
Source: Hugging Face
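
Since the model is published on Hugging Face, it can be loaded through the standard transformers causal-LM interface. The sketch below is illustrative only: it assumes the checkpoint loads with AutoModelForCausalLM under this identifier, and the prompt and generation settings are arbitrary examples rather than recommended defaults.

```python
# Minimal usage sketch, assuming the standard Hugging Face transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlpai-lab/ko-gemma-7b-v1"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example Korean prompt: "Please briefly explain machine learning."
prompt = "머신러닝에 대해 간단히 설명해 주세요."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 256 new tokens; the model's context window is 8192 tokens.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```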