allura-org/GLM4-32B-Neon-v2
GLM4-32B-Neon-v2 is a 32-billion-parameter instruction-tuned causal language model, fine-tuned by Auri for roleplay and short-story generation. Built on the GLM-4-32B-0414 architecture, the model exhibits a distinct personality and varied prose, making it well suited to creative text generation. It was trained on 77 million tokens of synthetic roleplay and short-story data.
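
The card does not prescribe a specific loading recipe, but a minimal sketch with the `transformers` library might look like the following, assuming the checkpoint loads through the standard `AutoModelForCausalLM` / `AutoTokenizer` interface and that your `transformers` version supports the GLM-4 architecture. The dtype, sampling settings, and example prompt are illustrative, not recommendations from the model authors.

```python
# Minimal usage sketch (illustrative, not an official recipe).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/GLM4-32B-Neon-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; adjust to your hardware
    device_map="auto",
)

# Build a single-turn chat prompt using the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Write a short scene set in a rain-soaked neon city."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sampling parameters here are placeholders; tune them for your use case.
output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```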