dphn/Dolphin-Mistral-24B-Venice-Edition
Dolphin Mistral 24B Venice Edition is a 24-billion-parameter Mistral-based language model developed collaboratively by dphn and Venice.ai, with a 32,768-token context window. The model is designed to be uncensored and highly steerable: users retain full control over the system prompt and alignment behavior. It aims to be a general-purpose AI tool that prioritizes user control and data privacy, making it suitable for applications that require custom ethical guidelines and consistent model behavior.
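Because the card emphasizes that the system prompt is fully user-controlled, a typical integration simply places custom behavioral instructions in the system message. A minimal sketch, assuming the model is served behind an OpenAI-compatible chat-completions endpoint (the endpoint and deployment details are assumptions, not part of the card):

```python
def build_chat_payload(system_prompt, user_message,
                       model="dphn/Dolphin-Mistral-24B-Venice-Edition",
                       max_tokens=512):
    """Assemble a chat-completions request body with a caller-supplied system prompt.

    The model card stresses that steering lives entirely in the system prompt,
    so this helper makes that message an explicit, required argument.
    """
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            # Fully user-controlled: the model applies no hidden alignment layer
            # on top of whatever instructions are supplied here.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_payload(
    system_prompt="You are a contract-review assistant. Follow our internal style guide.",
    user_message="Summarize the key obligations in the clause below.",
)
```

The resulting dictionary can be sent with any OpenAI-compatible client; only the system message needs to change to repurpose the model for a different policy or persona.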