GLM-5
Zhipu AI's open-weight 744B MoE foundation model purpose-built for agentic engineering, trained entirely on Huawei Ascend chips
Z.ai • February 2026
Technical Specifications
Training Data
28.5 trillion tokens, up to late 2025
Parameters
744 billion total, 40 billion active per token (see the routing sketch after these specifications)
Training Method
MoE with Multi-head Latent Attention (MLA) and DeepSeek Sparse Attention, trained on Huawei Ascend 910B chips (a toy sparse-attention sketch follows the Key Features list)
Context Window
200,000 tokens
Knowledge Cutoff
Not formally disclosed; training data extends to late 2025
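The "40 billion active" figure follows from top-k expert routing: each token is processed by only a few of the model's experts, so only a fraction of the 744B weights participate in any forward pass. The PyTorch sketch below is a minimal illustration of that mechanism; the class name, dimensions, and expert count are toy assumptions, not GLM-5's actual configuration.

    # Minimal sketch of top-k expert routing (toy sizes, not GLM-5's config).
    # Each token activates only top_k of n_experts, so only a fraction of
    # the layer's FFN weights run per token ("total vs. active" parameters).
    import torch
    import torch.nn as nn

    class ToyMoELayer(nn.Module):
        def __init__(self, d_model=64, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(d_model, n_experts)      # scores each expert
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model),
                              nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):                                # x: (tokens, d_model)
            gates = torch.softmax(self.router(x), dim=-1)
            weights, chosen = gates.topk(self.top_k, dim=-1) # top-k experts per token
            weights = weights / weights.sum(dim=-1, keepdim=True)
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = chosen[:, slot] == e              # tokens routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    layer = ToyMoELayer()
    y = layer(torch.randn(5, 64))  # each token activates 2 of 8 experts

At GLM-5's reported scale the same idea means roughly 40B of 744B parameters are exercised per token, which is what makes inference far cheaper than the total parameter count suggests.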
Key Features
MIT-Licensed 744B MoE • DeepSeek Sparse Attention • Trained on Huawei Ascend 910B
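DeepSeek Sparse Attention, as publicly described for DeepSeek's models, uses a lightweight indexer to score earlier tokens so that each query attends only to the top-k positions the indexer selects. The toy sketch below shows only that selection idea; the indexer features, dimensions, and function signature are illustrative assumptions, not GLM-5's implementation.

    # Toy illustration of DeepSeek-style sparse attention: a cheap indexer
    # scores past tokens per query, and full attention runs only over the
    # top-k selected (causal) positions. All shapes here are toy values.
    import torch

    def sparse_attention(q, k, v, idx_q, idx_k, top_k=4):
        # q, k, v: (seq, d); idx_q, idx_k: (seq, d_idx) cheap indexer projections
        seq, d = q.shape
        scores = idx_q @ idx_k.T                             # (seq, seq) indexer scores
        causal = torch.tril(torch.ones(seq, seq, dtype=torch.bool))
        scores = scores.masked_fill(~causal, float("-inf"))
        keep = scores.topk(min(top_k, seq), dim=-1).indices  # top-k past tokens per query
        mask = torch.zeros(seq, seq, dtype=torch.bool).scatter_(1, keep, True) & causal
        attn = (q @ k.T) / d ** 0.5                          # real attention logits
        attn = attn.masked_fill(~mask, float("-inf")).softmax(dim=-1)
        return attn @ v

    seq, d = 8, 16
    q, k, v = (torch.randn(seq, d) for _ in range(3))
    idx = torch.randn(seq, 4)                                # pretend indexer features
    out = sparse_attention(q, k, v, idx, idx)                # each query attends to <=4 tokens

Because the expensive attention matmul touches only the selected positions, this family of techniques is what makes long contexts (such as a 200,000-token window) tractable.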
Capabilities
Reasoning: Excellent
Coding: Excellent
Agentic Tasks: Very Good
What's New in This Version
More than doubles the parameter count from GLM-4.5's 355B to 744B, scales pretraining to 28.5 trillion tokens, and cuts the hallucination rate from 90% to 34% through asynchronous RL (sketched below)
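"Asynchronous RL" generally means decoupling rollout generation from policy updates, so the learner never idles while slow agentic episodes finish. The producer/consumer schematic below is a generic sketch of that pattern under stated assumptions; it is not Z.ai's training stack, and the episode lengths and "policy version" bookkeeping are stand-ins.

    # Schematic of asynchronous RL: actor threads keep generating rollouts
    # from a (possibly slightly stale) policy snapshot while the learner
    # consumes them and updates weights, instead of alternating in lockstep.
    import queue, threading, random, time

    rollouts = queue.Queue(maxsize=64)
    policy_version = 0
    stop = threading.Event()

    def actor(worker_id):
        while not stop.is_set():
            version = policy_version                 # snapshot may lag the learner
            time.sleep(random.uniform(0.01, 0.05))   # stand-in for a long agentic episode
            rollouts.put((worker_id, version, random.random()))

    def learner(steps=20):
        global policy_version
        for step in range(steps):
            worker_id, version, reward = rollouts.get()  # never waits on one slow actor
            policy_version += 1                          # stand-in for a gradient update
            print(f"step {step}: rollout from actor {worker_id} "
                  f"(policy v{version}, reward {reward:.2f})")
        stop.set()

    threads = [threading.Thread(target=actor, args=(i,), daemon=True) for i in range(4)]
    for t in threads:
        t.start()
    learner()

The design choice being illustrated is throughput: actors tolerate mild policy staleness in exchange for keeping the learner saturated, which matters when individual agentic rollouts run for minutes or hours.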
Other Z.ai Models
Explore more models from Z.ai
GLM-5.1
Zhipu AI's current flagship, refined from GLM-5 for agentic engineering; tops the SWE-Bench Pro leaderboard and sustains 8-hour autonomous execution runs
GLM-4.7
Z.ai's flagship model with industry-leading coding and multi-step task handling
GLM-4.6V
Open-source vision-language model optimized for multimodal reasoning and frontend automation