MiniMax Models

Chinese AI pioneer • MoE efficiency leader

6 Models • Latest: MiniMax-M2.7

MiniMax-M2.7

Released March 2026

LATEST

MiniMax's self-evolving agent model, pioneering recursive self-improvement and delivering frontier agentic coding performance at a fraction of competitors' cost

Parameters
230B (10B active)
Context
204,800 tokens
Key Features
Self-Evolving Agent Model Agent Teams & Dynamic Tool Search Open Weights (205K Context)
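The 230B-total / 10B-active split quoted on these cards is characteristic of Mixture-of-Experts models: every expert's weights must be resident in memory, but only the routed experts run for each token. A back-of-the-envelope sketch, assuming 2 bytes per parameter (fp16/bf16) and the common ~2 FLOPs-per-active-parameter-per-token estimate (rough conventions, not MiniMax's published figures):

```python
# Rough MoE sizing for a 230B-total / 10B-active model.
# Assumptions: 2 bytes/param (fp16/bf16) and ~2 FLOPs per active
# parameter per generated token -- generic estimates, not vendor data.

TOTAL_PARAMS = 230e9
ACTIVE_PARAMS = 10e9
BYTES_PER_PARAM = 2

weight_memory_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9  # memory to hold all experts
flops_per_token = 2 * ACTIVE_PARAMS                      # compute per generated token
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS           # share of weights used per token

print(f"Weights in memory: {weight_memory_gb:.0f} GB")   # 460 GB
print(f"Compute per token: {flops_per_token / 1e9:.0f} GFLOPs")  # 20 GFLOPs
print(f"Active fraction:   {active_fraction:.1%}")       # 4.3%
```

Under these assumptions the model needs dense-230B-class memory but only dense-10B-class compute per token, which is the efficiency argument behind the "MoE efficiency leader" tagline above.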

MiniMax-M2.5

Released February 2026

MiniMax's flagship model, matching frontier performance at 1/20th the cost and scoring 80.2% on SWE-bench Verified

Parameters
230B (10B active)
Context
200,000 tokens
Key Features
Frontier Performance Ultra Low Cost Open Weights (MIT) +1 more

MiniMax-M2.5-Lightning

Released February 2026

Ultra-fast variant of M2.5 that generates 100 tokens per second at $1/hour of continuous operation

Parameters
230B (10B active)
Context
200,000 tokens
Key Features
100 tok/s Speed $1/hour Operation Lightning Fast +1 more
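The quoted 100 tok/s at $1/hour implies a per-token price that can be computed directly, assuming sustained, fully utilized generation:

```python
# Implied output-token cost from the figures on the card above
# (100 tok/s, $1/hour). Assumes continuous, fully utilized generation.

tokens_per_second = 100
dollars_per_hour = 1.0

tokens_per_hour = tokens_per_second * 3600               # 360,000 tokens/hour
cost_per_million = dollars_per_hour / tokens_per_hour * 1e6

print(f"~${cost_per_million:.2f} per million output tokens")  # ~$2.78
```

So at full utilization the hourly pricing works out to roughly $2.78 per million output tokens; real-world cost per token rises with any idle time.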

MiniMax-M2.1

Released December 2025

Lightweight coding-focused model with strong multi-language programming and agentic capabilities

Parameters
230B (10B active)
Context
196,000 tokens
Key Features
Multi-language Coding Agentic Workflows Open Weights +1 more

MiniMax-M2

Released October 2025

MiniMax's general-purpose foundation model with strong tool use and deep search capabilities

Parameters
Not disclosed
Context
Not disclosed
Key Features
Open Source Tool Use Deep Search +1 more

MiniMax-M1

Released June 2025

World's first open-source large-scale hybrid-attention reasoning model, with a 1M-token context window

Parameters
456B (45.9B active)
Context
1,000,000 tokens
Key Features
1M Context Lightning Attention Open Source +1 more
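Hybrid-attention designs like M1's interleave standard softmax attention with linear variants such as Lightning Attention; the linear form is what makes a 1M-token context tractable, since a fixed-size running state replaces the n×n score matrix. A toy NumPy sketch of causal linear attention (the elu-style positive feature map here is an illustrative assumption, not M1's actual kernel):

```python
# Toy causal linear attention (NumPy): O(n * d^2) time, O(d^2) state,
# versus O(n^2) for softmax attention -- the scaling property that
# hybrid-attention models exploit for very long contexts.
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention with a simple positive feature map."""
    phi = lambda x: np.maximum(x, 0) + 1e-6   # positive feature map (assumption)
    Qp, Kp = phi(Q), phi(K)
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))             # running sum of k^T v  (d x d_v state)
    z = np.zeros(d)                           # running sum of k (normalizer)
    out = np.zeros_like(V)
    for t in range(n):                        # one fixed-size update per token
        S += np.outer(Kp[t], V[t])
        z += Kp[t]
        out[t] = Qp[t] @ S / (Qp[t] @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```

Because the per-token update touches only the d×d state, memory and compute grow linearly in sequence length rather than quadratically, which is why linear/hybrid attention is the usual route to million-token contexts.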
© funclosure 2025