mlabonne/FineLlama-3.1-8B
FineLlama-3.1-8B is an 8-billion-parameter language model by mlabonne, fine-tuned from Meta-Llama-3.1-8B with a 32,768-token context length. It was fine-tuned on the 100k high-quality samples of the mlabonne/FineTome-100k dataset. The model serves primarily as an educational resource demonstrating efficient fine-tuning with Unsloth.
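Since the card describes a fine-tuned chat model, a minimal inference sketch with the Hugging Face `transformers` library may help readers try it. This assumes the checkpoint ships a chat template (the prompt text below is illustrative, not from the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/FineLlama-3.1-8B"

# Load tokenizer and model; bfloat16 and device_map="auto" keep
# the 8B weights within a single modern GPU's memory where possible.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt using whatever template the
# tokenizer ships with, then generate a reply.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Downloading the ~16 GB of bfloat16 weights happens on first run; a quantized variant or Unsloth's `FastLanguageModel` loader can reduce the memory footprint further.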