cyberagent/Llama-3.1-70B-Japanese-Instruct-2407

cyberagent/Llama-3.1-70B-Japanese-Instruct-2407 is a 70-billion-parameter instruction-tuned causal language model developed by CyberAgent through continued pre-training of Meta's Llama-3.1-70B-Instruct. It is optimized for high-quality Japanese language understanding and generation, building on the Llama 3.1 architecture to serve Japanese-centric natural language processing tasks.

Availability: Warm
Visibility: Public
Parameters: 70B
Quantization: FP8
Context length: 32768
License: llama3.1
Model page: Hugging Face
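
Below is a minimal usage sketch with the Hugging Face transformers library, assuming a GPU setup with enough memory to shard a 70B model (device_map="auto"); the example prompts and generation parameters are illustrative, not prescribed by the model card.

```python
# Minimal sketch: load the model and generate a Japanese response.
# Assumes sufficient GPU memory; a 70B model typically requires multiple GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard layers across available GPUs
)

# Llama 3.1 instruct models use a chat template; build the prompt from messages.
messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本語アシスタントです。"},
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか？"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The system and user messages above are in Japanese to match the model's target domain; any chat-formatted input accepted by the Llama 3.1 template will work.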