Grok 4.20
xAI's flagship multi-agent model, combining four specialized agents on a shared MoE backbone for industry-leading speed and the lowest hallucination rate of any Grok release
xAI • March 2026
Training Data
Up to September 2025
Parameters
Not disclosed (estimated ~3T total, ~500B active)
Training Method
Multi-agent MoE with large-scale RL and persona adapters
Context Window
2,000,000 tokens
Knowledge Cutoff
September 2025
Key Features
Multi-Agent Architecture • Lowest Hallucination Rate (~4.2%) • 2M Token Context
Capabilities
Reasoning: Outstanding
Agentic Tool Use: Outstanding
Coding: Excellent
What's New in This Version
Introduces a four-agent collaborative architecture with dramatically lower hallucinations, 6x faster inference, and native multi-agent reasoning relative to Grok 4.1
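As a rough illustration of how a client might use the 2M-token context window listed above, here is a minimal sketch that builds a chat-completions-style request payload. The model id "grok-4.20", the payload fields, and the 4-characters-per-token heuristic are all assumptions for illustration, not documented API values.

```python
import json

def build_request(prompt: str, max_output_tokens: int = 1024) -> dict:
    """Build a hypothetical chat-completions request payload.

    The model id and field names mirror common OpenAI-compatible APIs
    but are assumptions, not confirmed values for this model.
    """
    # The spec sheet lists a 2,000,000-token context window; a client
    # could guard against overlong inputs with a crude character-based
    # estimate (~4 characters per token) before sending the request.
    approx_prompt_tokens = len(prompt) // 4
    if approx_prompt_tokens + max_output_tokens > 2_000_000:
        raise ValueError("prompt likely exceeds the 2M-token context window")

    return {
        "model": "grok-4.20",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_output_tokens,
    }

req = build_request("Summarize the Grok 4.20 spec sheet.")
print(json.dumps(req, indent=2))
```

The character-based check is deliberately conservative and approximate; a real client would use the provider's tokenizer to count tokens exactly.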
Other xAI Models
Explore more models from xAI
Grok 4.1
xAI's enhanced model focusing on personality refinement and real-world usability improvements
Grok-3
xAI's advanced model trained with 10x more compute than Grok-2
Grok-4
World's most intelligent model with native tool use and real-time search