bigdefence/Llama-3.1-8B-Ko-bigdefence

bigdefence/Llama-3.1-8B-Ko-bigdefence is an 8-billion-parameter Llama-3.1 model developed by Bigdefence and fine-tuned for Korean-language tasks. It uses the Llama-3.1 architecture with a 32,768-token context length and was fine-tuned on the KoCommercial-Dataset. It is intended for applications that require strong Korean language generation and understanding.
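Below is a minimal usage sketch with the standard Hugging Face transformers API, assuming the checkpoint ships Llama-3.1's chat template; the prompt text, dtype, and generation parameters are illustrative, not taken from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigdefence/Llama-3.1-8B-Ko-bigdefence"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 8B weights in bf16 take roughly 16 GB
    device_map="auto",
)

# Korean prompt: "Please briefly explain the history of Hangul."
messages = [{"role": "user", "content": "한글의 역사를 간단히 설명해 주세요."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,  # illustrative sampling settings
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```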

Parameters: 8B
Quantization: FP8
Context length: 32,768 tokens
License: apache-2.0