habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-2.2epochs-oasst1-top1-instruct-V1
habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T-lr-5-2.2epochs-oasst1-top1-instruct-V1 is a 1.1-billion-parameter causal language model fine-tuned by habanoz on the OpenAssistant/oasst_top1_2023-08-25 dataset. Based on the TinyLlama architecture, it was instruction-tuned for 2.2 epochs using QLoRA and achieves an average Open LLM Leaderboard score of 35.45. It targets general instruction-following within its 2048-token context window, offering a compact option for conversational AI applications.
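A minimal usage sketch with the standard Hugging Face `transformers` pipeline. The ChatML-style prompt layout in `build_prompt` is an assumption, not documented by this card; check the model's tokenizer configuration (chat template) for the actual format before relying on it.

```python
"""Hypothetical usage sketch for the TinyLlama OASST1 instruct model.

Assumptions: the standard transformers text-generation pipeline, and a
ChatML-style single-turn prompt format (unverified for this checkpoint).
"""

MODEL_ID = (
    "habanoz/TinyLlama-1.1B-intermediate-step-715k-1.5T"
    "-lr-5-2.2epochs-oasst1-top1-instruct-V1"
)


def build_prompt(instruction: str) -> str:
    """Format a single-turn instruction (ChatML layout is an assumption)."""
    return (
        "<|im_start|>user\n"
        f"{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


def generate(instruction: str, max_new_tokens: int = 128) -> str:
    """Generate a reply; downloads the ~1.1B-parameter checkpoint on first use."""
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID)
    # Keep prompt plus generated tokens well inside the 2048-token context window.
    out = generator(build_prompt(instruction), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

For example, `generate("Summarize what QLoRA is in one sentence.")` would return the prompt followed by the model's completion; in practice, prefer the tokenizer's own `apply_chat_template` if one is defined for this checkpoint.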