e-palmisano/Qwen2-0.5B-ITA-Instruct
e-palmisano/Qwen2-0.5B-ITA-Instruct is a 0.5-billion-parameter causal language model based on Qwen2 and developed by e-palmisano. It was fine-tuned with Unsloth on the gsarti/clean_mc4_it and FreedomIntelligence/alpaca-gpt4-italian datasets to improve its Italian language capabilities and instruction following. The model is intended for Italian-language tasks and instruction-based interactions and supports a context length of 32,768 tokens.
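Below is a minimal usage sketch with the Hugging Face transformers library. It assumes the model is available on the Hub under the ID above and follows the standard Qwen2 chat template; the system and user prompts are illustrative, not taken from the model card.

```python
# Minimal usage sketch (assumes the standard Qwen2 chat template; prompts are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "e-palmisano/Qwen2-0.5B-ITA-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Sei un assistente utile che risponde in italiano."},
    {"role": "user", "content": "Spiega in breve cos'è un modello linguistico."},
]

# Build the chat-formatted prompt and generate a response.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```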