e-palmisano/Qwen2-1.5B-ITA-Instruct

e-palmisano/Qwen2-1.5B-ITA-Instruct is a 1.5-billion-parameter causal language model based on Qwen2, developed by e-palmisano. It was fine-tuned on the gsarti/clean_mc4_it and FreedomIntelligence/alpaca-gpt4-italian datasets to strengthen Italian language understanding and instruction following, making it a specialized option for Italian NLP applications. Training used Unsloth for faster fine-tuning, and the model supports a context length of 131072 tokens.

Status: Warm
Visibility: Public
Parameters: 1.5B
Precision: BF16
Context length: 131072 tokens
License: apache-2.0
Model page: Hugging Face
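
The sketch below shows one way to load the model and run an Italian instruction-following prompt with the Hugging Face transformers library. It assumes the model exposes a Qwen2-style chat template (an assumption based on its Qwen2 base, not something stated on this card); the prompt text and generation parameters are illustrative only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "e-palmisano/Qwen2-1.5B-ITA-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 weights
    device_map="auto",
)

# Italian instruction-following prompt, formatted through the chat template
# (assumed to be available from the Qwen2 base tokenizer).
messages = [
    {"role": "system", "content": "Sei un assistente utile che risponde in italiano."},
    {"role": "user", "content": "Spiega brevemente cos'e' il machine learning."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))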