aloobun/Reyna-Mini-1.8B-v0.1
Reyna-Mini-1.8B-v0.1 by aloobun is a 1.8-billion-parameter causal language model, fine-tuned from Qwen/Qwen1.5-1.8B-Chat with a 32768-token context length. It was trained with supervised fine-tuning (SFT) on the OpenHermes-2.5 dataset and establishes the foundation for aloobun's Qwen1.5 LLM series. The model is designed for chat-based applications, uses the ChatML prompt format, and reports an average benchmark score of 41.46 across ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K.
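Since the model expects ChatML-formatted prompts, a minimal sketch of that format may be useful. The helper below is a hand-rolled illustration (the function name `to_chatml` is made up here); in practice, the `transformers` tokenizer's `apply_chat_template` method would produce this for you:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts into ChatML, the prompt
    format used by Qwen1.5-based chat models such as Reyna-Mini.
    Each turn is wrapped in <|im_start|>/<|im_end|> tokens."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    if add_generation_prompt:
        # Open an assistant turn so generation continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
])
print(prompt)
```

The resulting string can be tokenized and passed to the model directly; the `<|im_end|>` token doubles as the stop token during generation.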