Bllossom/llama-3.2-Korean-Bllossom-3B
Bllossom/llama-3.2-Korean-Bllossom-3B is a 3 billion parameter language model developed by the Bllossom team, built on the Llama 3.2 architecture with its 128K token context length. It is enhanced for Korean/English bilingual use through additional pre-training on 150GB of refined Korean data. The model retains Llama 3.2's English capabilities while substantially improving Korean language support, making it suitable for applications that require robust bilingual processing.
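
Since the model inherits the Llama 3.2 chat format, it can be loaded through the standard Hugging Face transformers API. The sketch below is a minimal, illustrative example assuming a recent transformers release with chat-template support; the Korean prompt and generation settings are placeholders, not values prescribed by the model card.

```python
# Minimal usage sketch (assumes transformers >= 4.40 and a GPU with bfloat16 support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Bllossom/llama-3.2-Korean-Bllossom-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example Korean prompt ("Can you put together a famous sightseeing
# course for Seoul?"); the chat template is inherited from Llama 3.2.
messages = [
    {"role": "user", "content": "서울의 유명한 관광 코스를 만들어줄래?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```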