LATEST MODEL

GLM-4.6V

Z.ai · Released December 2025

Open-source vision-language model optimized for multimodal reasoning and frontend automation


What's New in This Version

Native tool-calling vision model for production multimodal applications
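In practice, the tool-calling support is exercised through a chat-completions style request that mixes an image with a tool schema. The sketch below is illustrative only: the endpoint URL, the model id string "glm-4.6v", and the click_element tool are assumptions for the example, not details published on this page.

```python
# Minimal sketch: sending GLM-4.6V an image plus a tool definition
# through an OpenAI-compatible chat-completions client.
# ASSUMPTIONS: the base_url, the model id, and the tool are placeholders;
# consult the provider's documentation for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.z.ai/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "click_element",  # hypothetical frontend-automation tool
        "description": "Click a UI element identified by a CSS selector.",
        "parameters": {
            "type": "object",
            "properties": {"selector": {"type": "string"}},
            "required": ["selector"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6v",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Find the login button in this screenshot and click it."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/screenshot.png"}},
        ],
    }],
    tools=tools,
)

# If the model decides to invoke the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```

Because the tool call comes back as structured JSON rather than free text, it can be routed directly into a browser-automation layer, which is the "production multimodal applications" use case highlighted above.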


Technical Specifications

Parameters: 106 billion
Context Window: 128,000 tokens
Training Method: Vision-Language Pre-training
Knowledge Cutoff: November 2025
Training Data: Up to late 2025

Key Features

Vision-Language • Frontend Automation • Tool Calling • Open Source
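Since the weights are open source, the model can also be run locally. The following is a minimal sketch assuming the checkpoint is published on Hugging Face under a hypothetical repo id and follows the standard transformers image-text-to-text interface; the repo name, model class, and generation settings are assumptions, and at 106 billion parameters the model realistically needs multi-GPU hardware or quantization.

```python
# Minimal local-inference sketch for a screenshot-to-frontend-code prompt.
# ASSUMPTIONS: "zai-org/GLM-4.6V" is a hypothetical repo id, and the checkpoint
# is assumed to load via the generic transformers image-text-to-text classes.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "zai-org/GLM-4.6V"  # placeholder repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across available GPUs; 106B is not single-GPU friendly
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/mockup.png"},
        {"type": "text", "text": "Generate HTML and CSS that reproduce this mockup."},
    ],
}]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```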

Capabilities

Vision: Excellent
Multimodal: Outstanding
Tool Use: Excellent