AXCXEPT/Llama-3.1-8B-EZO-1.1-it

AXCXEPT/Llama-3.1-8B-EZO-1.1-it is an 8-billion-parameter instruction-tuned causal language model developed by AXCXEPT, based on Meta AI's Llama 3.1 architecture. It has been fine-tuned to significantly enhance performance on Japanese language tasks, using a training approach built on high-quality Japanese Wikipedia and FineWeb data, and supports a 32K-token context window. Its primary use case is generating high-quality Japanese responses across a wide range of contexts.

Status: Warm
Visibility: Public
Parameters: 8B
Quantization: FP8
Context length: 32768 tokens
License: llama3.1
Source: Hugging Face
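
Below is a minimal local-inference sketch using the standard Hugging Face transformers chat-template workflow. It is an assumption-based example, not usage confirmed by this listing: the bfloat16 dtype, sampling parameters, and the Japanese prompts are illustrative choices (the FP8 value above refers to the hosted deployment, not this snippet).

```python
# Minimal sketch: load the model with transformers and generate a Japanese reply.
# Assumes transformers and torch are installed and enough GPU/CPU memory is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AXCXEPT/Llama-3.1-8B-EZO-1.1-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 for local inference
    device_map="auto",
)

# Japanese prompts, since the model targets Japanese responses.
# System: "You are an honest and capable Japanese-language assistant."
# User: "Please give me ideas for regaining enthusiasm for my work."
messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本語アシスタントです。"},
    {"role": "user", "content": "仕事の熱意を取り戻すためのアイデアを教えてください。"},
]

# Build the prompt with the model's chat template and generate a response.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```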
