Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v0.3
Magpie-Align/Llama-3-8B-Magpie-Align-SFT-v0.3 is an 8-billion-parameter model based on Llama 3, fine-tuned by Magpie-Align, with an 8192-token context window. It is supervised fine-tuned (SFT) on custom Magpie datasets to strengthen multilingual and reasoning capabilities, and it reaches performance comparable to the official Llama-3-8B-Instruct model using SFT alone, with no preference-tuning stage, making it suitable for general instruction-following tasks.
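Since the model is instruction-tuned on a Llama 3 base, a reasonable assumption is that it expects the standard Llama 3 chat prompt format. The sketch below builds that prompt string by hand purely for illustration; in practice the Hugging Face tokenizer's `apply_chat_template` produces it for you, and the exact template this checkpoint ships with should be confirmed from its tokenizer config.

```python
# Sketch of the Llama 3 chat prompt format this model is assumed to
# inherit from its Llama-3-8B base (an assumption, not confirmed here).

def build_llama3_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a
    Llama 3-style prompt string with the special header/turn tokens."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant turn to cue the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt(
    [{"role": "user", "content": "Summarize attention in one sentence."}]
)
```

The resulting `prompt` string can be tokenized and passed to the model directly; note that prompt plus generated tokens must fit within the 8192-token context window.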