mmnga/Llama-3-70B-japanese-suzume-vector-v0.1
mmnga/Llama-3-70B-japanese-suzume-vector-v0.1 is an experimental 70-billion-parameter Llama-3-based model developed by mmnga, designed to add Japanese language capability to Meta-Llama-3-70B-Instruct. It applies a chat-vector approach: the weight differences between a Japanese-tuned 8B model and the base Llama-3-8B-Instruct are extracted, upsampled, and then applied to the larger 70B model. Its primary purpose is to explore whether language-specific fine-tuning can be transferred from smaller models to larger Llama-3 variants for improved Japanese language processing.
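The sketch below illustrates the general chat-vector idea described above, not the author's exact procedure. The source model names (e.g. lightblue/suzume-llama-3-8B-japanese) and the use of bilinear interpolation as the "upsampling" step are assumptions for illustration only; the actual upsampling method used for this model is not documented here.

```python
# Hedged sketch of a chat-vector transfer: diff an 8B Japanese-tuned model
# against 8B-Instruct, then add the (naively upsampled) delta to 70B-Instruct.
import torch
from transformers import AutoModelForCausalLM

# Assumed model identifiers; the actual sources are not stated in the card.
japanese_8b = AutoModelForCausalLM.from_pretrained(
    "lightblue/suzume-llama-3-8B-japanese", torch_dtype=torch.bfloat16)
base_8b = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
target_70b = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct", torch_dtype=torch.bfloat16)

# 1. Extract the chat vector: per-tensor difference between the
#    Japanese-tuned 8B weights and the 8B-Instruct weights.
jp_sd = japanese_8b.state_dict()
base_sd = base_8b.state_dict()
delta = {name: jp_sd[name] - base_sd[name] for name in jp_sd}

# 2. Resize each delta to the matching 70B tensor shape and add it in place.
#    Bilinear resizing of 2-D weight matrices is only a placeholder for
#    whatever upsampling scheme the author actually used, and layers that
#    exist only in the 70B model (it has more layers) are left untouched.
with torch.no_grad():
    for name, param in target_70b.named_parameters():
        if name not in delta:
            continue
        d = delta[name]
        if d.shape != param.shape and d.dim() == 2:
            d = torch.nn.functional.interpolate(
                d.float()[None, None], size=tuple(param.shape), mode="bilinear"
            )[0, 0].to(param.dtype)
        if d.shape == param.shape:
            param.add_(d)

target_70b.save_pretrained("llama-3-70b-japanese-chat-vector")
```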