irlab-udc/Llama-3.1-8B-Instruct-Galician
The irlab-udc/Llama-3.1-8B-Instruct-Galician model, also known as Cabuxa 2.0, is an 8-billion-parameter instruction-tuned causal language model developed by the Information Retrieval Lab (IRLab) at the University of A Coruña (UDC). It was created through continued pretraining of Meta's Llama-3.1-8B-Instruct on the CorpusNós dataset, adapting the model to the Galician language. The model targets natural language processing tasks in Galician and aims to improve AI accessibility for an underrepresented language.
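Below is a minimal usage sketch with the Hugging Face transformers library. The loading options, generation settings, and example prompt are illustrative assumptions, and the chat template is assumed to be the one inherited from the base Llama 3.1 Instruct tokenizer; they are not taken from the model card itself.

```python
# Minimal sketch: loading the model and generating a Galician response
# with the standard transformers causal-LM API (options are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "irlab-udc/Llama-3.1-8B-Instruct-Galician"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)

# Example prompt in Galician: "What is the capital of Galicia?"
messages = [{"role": "user", "content": "Cal é a capital de Galicia?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```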