ConicCat/GL-Marvin-32k-32B

ConicCat/GL-Marvin-32k-32B is a 32-billion-parameter language model based on the GLM-4 architecture, fine-tuned for improved long-context handling and Alpaca evaluation performance. It offers a 32,768-token context window and is optimized to run efficiently on consumer-grade GPUs. The model targets general language tasks, with a focus on maximizing quality within its context length and parameter budget.

Status: Cold
Visibility: Public
Parameters: 32B
Quantization: FP8
Context length: 32,768 tokens
License: apache-2.0
Source: Hugging Face
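Since the 32,768-token window is shared between the prompt and the completion, a request should cap `max_tokens` so the two together fit. The sketch below shows that budgeting for an OpenAI-compatible chat-completion payload; the helper functions and payload shape are illustrative assumptions, not part of this listing.

```python
# Sketch: budgeting a chat-completion request for GL-Marvin-32k-32B.
# The helpers and payload fields here are illustrative, not an official API.

MODEL_ID = "ConicCat/GL-Marvin-32k-32B"
CONTEXT_WINDOW = 32_768  # tokens, per the listing above


def max_generation_tokens(prompt_tokens: int,
                          context_window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for the completion after the prompt fills part of the window."""
    if prompt_tokens >= context_window:
        raise ValueError("prompt exceeds the model's context window")
    return context_window - prompt_tokens


def build_chat_request(prompt: str, prompt_tokens: int,
                       max_new_tokens: int = 512) -> dict:
    """Assemble a chat-completion payload, capping max_tokens to fit the window."""
    budget = max_generation_tokens(prompt_tokens)
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_new_tokens, budget),
    }
```

For example, a 32,700-token prompt leaves only 68 tokens of headroom, so the cap shrinks the requested 512 new tokens down to 68.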
