aixsatoshi/Meta-Llama-3.1-8B-Instruct-plus-Swallow

aixsatoshi/Meta-Llama-3.1-8B-Instruct-plus-Swallow is an 8-billion-parameter model derived from Meta Llama 3.1 and enhanced for Japanese language fluency. It merges the Japanese continual pre-training improvements of the Swallow-8B model into the Meta-Llama-3.1-8B-Instruct base (see the merge sketch below the metadata). The model excels at Japanese language tasks, leveraging its 32,768-token context length for nuanced understanding and generation.

Availability: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 32768 tokens
License: llama3.1
Source: Hugging Face
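The listing does not document the exact merge recipe. A plausible reading is task-vector (chat-vector style) weight arithmetic: extract a "Japanese ability" delta as the difference between Swallow-8B and its pre-training base, then add that delta to the Llama-3.1 instruct weights. The sketch below illustrates that idea only; the source model IDs (tokyotech-llm/Llama-3-Swallow-8B-v0.1, meta-llama/Meta-Llama-3-8B) and the unscaled addition are assumptions, not the documented procedure.

```python
import torch
from transformers import AutoModelForCausalLM

# Assumed inputs; the actual recipe behind this model is not documented here.
BASE_ID = "meta-llama/Meta-Llama-3-8B"                # assumed Swallow pre-training base
SWALLOW_ID = "tokyotech-llm/Llama-3-Swallow-8B-v0.1"  # assumed Japanese CPT model
TARGET_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"   # stated merge target

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
swallow = AutoModelForCausalLM.from_pretrained(SWALLOW_ID, torch_dtype=torch.bfloat16)
target = AutoModelForCausalLM.from_pretrained(TARGET_ID, torch_dtype=torch.bfloat16)

base_sd = base.state_dict()
swallow_sd = swallow.state_dict()

# Add the Japanese-ability delta (Swallow minus its base) onto the
# Llama-3.1 instruct weights, parameter by parameter.
with torch.no_grad():
    for name, param in target.named_parameters():
        param.add_(swallow_sd[name] - base_sd[name])

target.save_pretrained("Meta-Llama-3.1-8B-Instruct-plus-Swallow-merged")
```

A merge like this only lines up because Swallow-8B reuses the Llama-3 architecture and tokenizer, so every parameter tensor has a matching shape across the three models.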
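For inference, the merged model can be loaded like any Llama-3.1 chat model from Hugging Face. A minimal sketch using transformers follows; the Japanese prompt and sampling parameters are illustrative only, and the FP8 entry above presumably describes the hosted endpoint's quantization, while this sketch loads bf16 weights locally.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixsatoshi/Meta-Llama-3.1-8B-Instruct-plus-Swallow"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3.1 chat template; a Japanese prompt to exercise the model's strength.
messages = [
    {"role": "user", "content": "日本の四季についてそれぞれ簡単に説明してください。"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```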