Weyaxi/Einstein-v7-Qwen2-7B

Einstein-v7-Qwen2-7B is a 7.6 billion parameter causal language model developed by Weyaxi, fine-tuned from Qwen/Qwen2-7B. It was fine-tuned on diverse datasets using the ChatML prompt template, making it well suited to general conversational AI tasks. Its 131,072-token context length lets it handle extensive inputs and generate coherent, long-form responses.
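Since the model was trained on the ChatML template, prompts should follow that format. Below is a minimal sketch of how a ChatML prompt is assembled; the helper function is illustrative (in practice, `tokenizer.apply_chat_template` from the `transformers` library handles this for you), but the `<|im_start|>` / `<|im_end|>` markers and role layout are the standard ChatML convention:

```python
def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string.

    Illustrative helper, not part of the model's official tooling.
    ChatML wraps each turn in <|im_start|>{role} ... <|im_end|> markers.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the Pythagorean theorem briefly."},
])
print(prompt)
```

The trailing open `<|im_start|>assistant` turn signals the model to generate its reply, which is then terminated by an `<|im_end|>` token.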

Status: Warm
Visibility: Public
Parameters: 7.6B
Quantization: FP8
Context length: 131,072 tokens
License: other
Source: Hugging Face
