TinyLlama/TinyLlama-1.1B-Chat-v0.4

TinyLlama/TinyLlama-1.1B-Chat-v0.4 is a 1.1 billion parameter language model with the Llama 2 architecture, developed by the TinyLlama project. It was pretrained on 3 trillion tokens and subsequently fine-tuned for chat using the OpenAssistant/oasst_top1_2023-08-25 dataset. The model is designed for settings with tight compute and memory budgets, making it suitable for efficient conversational AI tasks.

Parameters: 1.1B
Tensor type: BF16
Context length: 2048
License: apache-2.0
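Given the properties above (BF16 weights, a 2048-token context window), a minimal usage sketch with the Hugging Face transformers library might look like the following. The `truncate_to_context` helper is a hypothetical illustration, not part of the model's API, and the sketch assumes the checkpoint's tokenizer ships a chat template usable via `apply_chat_template`:

```python
MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v0.4"
CONTEXT_LEN = 2048  # context window stated on the model card

def truncate_to_context(token_ids, max_len=CONTEXT_LEN):
    """Hypothetical helper: keep only the most recent tokens that fit the window."""
    return token_ids[-max_len:]

def chat(user_message: str) -> str:
    # Heavy dependencies imported lazily so the helper above stays standalone.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

    # Assumes the tokenizer defines a chat template for this checkpoint.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_message}],
        tokenize=False,
        add_generation_prompt=True,
    )
    input_ids = truncate_to_context(tokenizer(prompt)["input_ids"])
    output = model.generate(torch.tensor([input_ids]), max_new_tokens=128)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output[0][len(input_ids):], skip_special_tokens=True)

if __name__ == "__main__":
    print(chat("What is TinyLlama?"))
```

On memory-constrained hardware, the BF16 weights (~2.2 GB for 1.1B parameters) are the main footprint; prompts longer than 2048 tokens must be trimmed, which the helper does by keeping the most recent tokens.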