nothingiisreal/L3-8B-dolphin-2.9.1-WritingPrompts
The nothingiisreal/L3-8B-dolphin-2.9.1-WritingPrompts model is an 8-billion-parameter language model fine-tuned with KTO (Kahneman-Tversky Optimization) on a dataset derived from r/WritingPrompts and r/DirtyWritingPrompts. Based on the Llama 3 architecture with an 8192-token context length, it is optimized for generating creative, engaging story responses to writing prompts. Its training targeted the removal of 'sloppy' outputs, aiming for higher-quality, more coherent narrative generation.
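Below is a minimal generation sketch using the Hugging Face transformers library. It assumes the checkpoint is published on the Hugging Face Hub under this model name and that the tokenizer ships with a chat template; the system prompt, writing prompt, and sampling parameters are illustrative, not values specified by the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub ID; adjust if the checkpoint is hosted elsewhere.
model_id = "nothingiisreal/L3-8B-dolphin-2.9.1-WritingPrompts"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example writing-prompt conversation; the chat template applied here is
# whichever one the tokenizer bundles for this model.
messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Writing Prompt: The last lighthouse keeper discovers the light has been guiding something other than ships."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings chosen for illustration; tune for your use case.
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```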