zai-org/GLM-4.7-Flash

GLM-4.7-Flash is a 30-billion-parameter Mixture-of-Experts (MoE) model developed by zai-org, designed for lightweight, high-performance deployment. It performs strongly across standard benchmarks, particularly on agentic tasks, reasoning, and coding, and offers a good balance of capability and efficiency among models in the 30B class.

Status: Warm
Visibility: Public
Parameters: 30B
Precision: FP8
Context length: 32,768 tokens
License: MIT
Model page: Hugging Face
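
The model can be served with standard inference stacks. Below is a minimal sketch of loading it for chat-style generation with the Hugging Face transformers library. This assumes the checkpoint exposes a standard causal-LM interface with a chat template; flags such as `trust_remote_code` and the exact precision handling are assumptions, not confirmed requirements for this repository.

```python
# Minimal sketch: chat-style generation with GLM-4.7-Flash via transformers.
# Assumptions: the repo provides a standard causal-LM checkpoint with a chat
# template; `trust_remote_code=True` may or may not be required here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.7-Flash"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's stored precision where supported
    device_map="auto",    # requires `accelerate`; shards layers across available GPUs
    trust_remote_code=True,
)

# Build a chat prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and print only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For production serving, an OpenAI-compatible engine such as vLLM is a common alternative to raw transformers, assuming it supports this architecture; batched expert routing typically matters for MoE throughput.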
