tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5
Llama-3.1-Swallow-8B-Instruct-v0.5 is an 8-billion-parameter instruction-tuned causal language model developed by tokyotech-llm. Built upon Meta Llama 3.1, it significantly enhances Japanese language capabilities through continual pre-training on a 200-billion-token Japanese web corpus while retaining strong English performance. The model excels at Japanese multi-turn dialogue, achieving state-of-the-art performance on Japanese MT-Bench among open-source LLMs of comparable size.
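For reference, here is a minimal sketch of loading the model and running a Japanese chat turn with Hugging Face `transformers`, assuming the model follows the standard Llama 3.1 chat template; the prompt text and generation parameters below are illustrative, not prescribed by the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit the 8B weights in memory
    device_map="auto",           # spread layers over available GPU(s)/CPU
)

# Build a chat prompt via the tokenizer's chat template.
# The example asks (in Japanese): "Please tell me three sightseeing spots in Tokyo."
messages = [
    {"role": "user", "content": "東京の観光名所を3つ教えてください。"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```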