rinna/llama-3-youko-70b-instruct
rinna/llama-3-youko-70b-instruct is a 70-billion-parameter instruction-tuned language model developed by rinna, based on the Llama 3 architecture with an 8192-token context length. It was instruction-tuned via supervised fine-tuning (SFT) on rinna's datasets and further enhanced with a Chat Vector approach, making it well suited to instruction-following tasks. The model adopts the Llama-3 chat format and uses the original Meta-Llama-3-70B-Instruct tokenizer.
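Because the model follows the Llama-3 chat format, prompts are built from `<|start_header_id|>` / `<|eot_id|>` delimited turns. The sketch below illustrates that format; in practice you would normally call `tokenizer.apply_chat_template` from the Hugging Face transformers library rather than building the string by hand, and the helper name here is illustrative, not part of any API.

```python
def build_llama3_prompt(messages):
    """Render a list of {"role", "content"} dicts into the Llama-3 chat format.

    Illustrative sketch only; prefer tokenizer.apply_chat_template in real code.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn is a header (role) followed by the content and an end-of-turn token.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open the assistant turn so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself."},
])
print(prompt)
```

The resulting string can be tokenized with the Meta-Llama-3-70B-Instruct tokenizer and passed to the model for generation.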