abeja/ABEJA-Qwen2.5-32b-Japanese-v1.0
ABEJA-Qwen2.5-32b-Japanese-v1.0 is a 32.8-billion-parameter large language model developed by ABEJA, based on Qwen/Qwen2.5-32B-Instruct. The model underwent continuous pre-training focused on Japanese-language data, followed by supervised fine-tuning (SFT) and Direct Preference Optimization (DPO). It is optimized for Japanese language understanding and generation tasks and supports a 131,072-token context length.
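
Below is a minimal usage sketch, assuming the model can be loaded with Hugging Face transformers and follows the standard Qwen2.5 chat template; the prompt, dtype, and generation settings are illustrative rather than official recommendations.

```python
# Minimal sketch: load the model and generate a Japanese response.
# Assumes standard transformers loading and the Qwen2.5 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abeja/ABEJA-Qwen2.5-32b-Japanese-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; a 32.8B model needs roughly 66 GB in bf16
    device_map="auto",
)

# Build a chat prompt via the tokenizer's chat template (example prompt in Japanese).
messages = [
    {"role": "user", "content": "日本の四季について簡単に説明してください。"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```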