habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1
habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-3epochs-oasst1-top1-instruct-V1 is a 1.1-billion-parameter instruction-tuned language model, fine-tuned by habanoz on the OpenAssistant/oasst_top1_2023-08-25 dataset. Based on the TinyLlama architecture (an intermediate checkpoint at step 715k, trained on 1.5T tokens, as the name indicates), it was fine-tuned with QLoRA for 3 epochs and has a context length of 2048 tokens. It is optimized for instruction-following tasks and shows general reasoning and question-answering capability in its Open LLM Leaderboard evaluation results.
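A minimal usage sketch with the Hugging Face `transformers` library. The prompt template below is a hypothetical placeholder, not the template documented for this model; adjust it to whatever format the fine-tuning actually used. The 2048-token context length mentioned above bounds prompt plus generated output.

```python
# Minimal usage sketch for the model via Hugging Face transformers.
# Assumption: the "### Instruction / ### Response" prompt layout is a
# hypothetical placeholder, not this model's documented chat template.

MODEL_ID = (
    "habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T"
    "-lr-5-3epochs-oasst1-top1-instruct-V1"
)

def build_prompt(instruction: str) -> str:
    # Hypothetical plain-text prompt; replace with the model's real template.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Import here so the helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    # Keep prompt + max_new_tokens within the 2048-token context window.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Explain instruction tuning in one sentence."))
```

Running the script downloads roughly 2.2 GB of weights on first use; for CPU-only experimentation, a quantized variant may be more practical.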