tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2
Llama-3.1-Swallow-8B-Instruct-v0.2 is an 8-billion-parameter instruction-tuned causal language model developed by tokyotech-llm. Built on Meta's Llama 3.1 architecture, it was continually pre-trained on approximately 200 billion tokens, substantially enhancing its Japanese language capabilities while retaining strong English performance. The model is optimized for multi-turn dialogue and for a range of Japanese and English tasks, including question answering, summarization, and code generation.
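As a rough sketch of how the model might be used for multi-turn dialogue with Hugging Face transformers: the snippet below assumes the tokenizer ships with a Llama-style chat template, and the prompt and generation parameters are illustrative placeholders, not recommended settings from the model's authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to your hardware
    device_map="auto",
)

# Example multi-turn conversation; the Japanese prompt is an illustrative
# placeholder ("Please briefly explain the characteristics of Mount Fuji.").
messages = [
    {"role": "user", "content": "富士山の特徴を簡潔に教えてください。"},
]

# Format the conversation with the tokenizer's built-in chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,   # illustrative generation settings
    do_sample=True,
    temperature=0.6,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Appending the assistant's reply to `messages` and repeating the call continues the multi-turn exchange.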