Qwen3.6-Plus
Alibaba's flagship agentic AI model with hybrid linear attention, always-on reasoning, and autonomous multi-step coding workflows
Qwen • April 2026
Training Data
Up to early 2026
Parameters
Not disclosed
Architecture
Hybrid linear attention + sparse Mixture of Experts
Context Window
1,000,000 tokens
Knowledge Cutoff
Not disclosed
Key Features
1M Context with 65K Output • Always-On Chain-of-Thought • Native Function Calling & Agentic Coding
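Hosted Qwen models generally expose function calling through an OpenAI-compatible `tools` schema. The sketch below assembles such a request body locally; the model identifier `qwen3.6-plus` is an assumption for illustration (check the provider's documentation for the real name), and no network call is made.

```python
import json

# Hypothetical model identifier for illustration only.
MODEL = "qwen3.6-plus"

# One tool definition in the OpenAI-compatible function-calling schema,
# which OpenAI-compatible Qwen endpoints generally accept.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def build_request(user_message: str) -> dict:
    """Assemble a chat-completions request body with the tool attached."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],
        # "auto" lets the model decide whether to emit a tool call.
        "tool_choice": "auto",
    }

payload = build_request("What's the weather in Hangzhou?")
print(json.dumps(payload, indent=2))
```

In an agentic loop, the model's response would contain `tool_calls` entries whose arguments you execute locally before appending the results back into `messages`.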
Capabilities
Coding: Excellent
Reasoning: Excellent
Multilingual: Very Good
What's New in This Version
Expands the context window to 1M tokens, uses ~515 fewer reasoning tokens per task, achieves a perfect consistency score (10.0, up from 9.0), and delivers significantly faster inference than Qwen3.5-Plus
Other Qwen Models
Explore more models from Qwen
Qwen3.5-Plus
Alibaba's hosted flagship combining hybrid linear-attention MoE with native multimodal understanding for agentic workflows across 201 languages
Qwen3-Max
Alibaba's flagship model with over 1 trillion parameters and exceptional reasoning
QwQ-32B
Reasoning-focused model with extended thinking capabilities