tokyotech-llm/Gemma-2-Llama-Swallow-27b-it-v0.1
tokyotech-llm/Gemma-2-Llama-Swallow-27b-it-v0.1 is a 27-billion-parameter instruction-tuned language model from tokyotech-llm, built on the Gemma 2 architecture. It was continually pre-trained on approximately 200 billion tokens, substantially improving its Japanese language capabilities while retaining strong English performance. The model performs well in multi-turn dialogue and on a range of Japanese and English benchmarks, making it suitable for applications that require robust bilingual understanding and generation.
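As an instruction-tuned Gemma 2 model, it expects prompts in the Gemma 2 chat turn format. In practice the Hugging Face `transformers` tokenizer's `apply_chat_template` handles this automatically; the sketch below, with a hypothetical helper `format_gemma_chat`, only illustrates the underlying turn markers (assuming the standard Gemma 2 convention of `<start_of_turn>`/`<end_of_turn>` with roles `user` and `model`):

```python
def format_gemma_chat(messages, add_generation_prompt=True):
    """Render a chat history in the Gemma 2 turn format (illustrative helper).

    Gemma 2 wraps each turn in <start_of_turn>/<end_of_turn> markers and
    names the assistant role "model", so "assistant" is mapped accordingly.
    """
    prompt = ""
    for msg in messages:
        role = "model" if msg["role"] == "assistant" else msg["role"]
        prompt += f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Open a "model" turn so the LM continues with its reply.
        prompt += "<start_of_turn>model\n"
    return prompt

# Example: a single-turn Japanese prompt asking for a self-introduction.
messages = [{"role": "user", "content": "自己紹介してください。"}]
print(format_gemma_chat(messages))
```

When loading the model with `transformers`, the equivalent string is produced by `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)`, which also prepends the BOS token the tokenizer expects.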