SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged
SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged is a 1.1-billion-parameter GPT-like causal language model, fine-tuned primarily for English-language tasks. It is an adaptation of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, further refined with a mix of publicly available and synthetic datasets. Its compact size makes it suitable for applications that need efficient inference while retaining general language-understanding capability.
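As a sketch of how the model might be used, the snippet below loads it through the Hugging Face `transformers` library. The single-turn chat template in `build_prompt` is an assumption (a Zephyr-style format common to TinyLlama chat fine-tunes), not something confirmed by this card, so check the tokenizer's own chat template before relying on it.

```python
"""Minimal usage sketch for the merged DPO model (assumptions noted inline)."""

MODEL_ID = "SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged"


def build_prompt(system: str, user: str) -> str:
    # ASSUMPTION: Zephyr-style single-turn template; verify against the
    # tokenizer's chat_template before production use.
    return f"<|system|>\n{system}</s>\n<|user|>\n{user}</s>\n<|assistant|>\n"


def generate(user_message: str, max_new_tokens: int = 128) -> str:
    # Imports kept local so the prompt helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    prompt = build_prompt("You are a helpful assistant.", user_message)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize what DPO fine-tuning does in one sentence."))
```

At ~1.1B parameters the model fits comfortably on a single consumer GPU or even CPU, which is the main reason one would pick it over larger instruction-tuned models.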