e-palmisano/Qwen2-1.5B-ITA-Instruct
e-palmisano/Qwen2-1.5B-ITA-Instruct is a 1.5-billion-parameter Qwen2-based causal language model developed by e-palmisano. It was fine-tuned to improve Italian language capabilities using the gsarti/clean_mc4_it and FreedomIntelligence/alpaca-gpt4-italian datasets, and is optimized for Italian language understanding and instruction-following tasks. Training was accelerated with Unsloth, and the model supports a context length of 131072 tokens.
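Below is a minimal usage sketch for loading the model with Hugging Face transformers and generating a response to an Italian instruction. It assumes the repository ships a standard Qwen2 tokenizer with a chat template; the prompt text and generation settings are illustrative only.

```python
# Sketch: load e-palmisano/Qwen2-1.5B-ITA-Instruct and answer an Italian prompt.
# Assumes a standard Qwen2 chat template is bundled with the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "e-palmisano/Qwen2-1.5B-ITA-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # fall back to float16/float32 if bf16 is unavailable
    device_map="auto",
)

# Example Italian instruction (hypothetical prompt for illustration).
messages = [
    {"role": "user", "content": "Spiega brevemente cos'è il machine learning."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```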