Qwen3.5-Plus
Alibaba's hosted flagship, combining a hybrid linear-attention MoE architecture with native multimodal understanding for agentic workflows across 201 languages.
Qwen • February 2026
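Since the model ships as a hosted service rather than open weights, a minimal call sketch may be useful. Earlier hosted Qwen models are reachable through an OpenAI-compatible API on Alibaba Cloud Model Studio; the snippet below assumes the same holds here, so the endpoint URL, the credential variable, and the "qwen3.5-plus" model id are all assumptions, not confirmed values.

```python
# Hypothetical usage sketch, assuming Qwen3.5-Plus is served through the
# OpenAI-compatible endpoint that Alibaba Cloud Model Studio provides for
# earlier Qwen models. The base_url, the DASHSCOPE_API_KEY variable, and
# the "qwen3.5-plus" model id are assumptions, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed credential name
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3.5-plus",  # assumed model id
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Outline a plan to refactor a 2,000-file monorepo."},
    ],
)
print(response.choices[0].message.content)
```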
Technical Specifications
Parameters: 397 billion total (17B active per token)
Training Method: Hybrid Gated DeltaNet + MoE with multi-step multi-token prediction and large-scale RL (see the sketch after this table)
Context Window: 1,000,000 tokens
Training Data: Up to late 2025
Knowledge Cutoff: Not disclosed
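The Gated DeltaNet component refers to a linear-attention recurrence built on the gated delta rule: a fixed-size matrix memory is decayed by a forget gate, the old value stored under the incoming key is partially erased, and a new key-value pair is written. The NumPy sketch below shows that recurrence in its simplest form; the function name, shapes, and gate values are illustrative assumptions, not Qwen3.5-Plus internals.

```python
# A minimal sketch of the gated delta rule underlying Gated DeltaNet
# layers: a fixed-size matrix memory S is decayed by a forget gate alpha,
# the old value stored under key k is (partially) erased, and a new
# key-value pair is written with strength beta. All names, shapes, and
# gate values are illustrative assumptions, not Qwen3.5-Plus internals.
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One recurrent step. S: (d_v, d_k); q, k: (d_k,); v: (d_v,)."""
    # S <- alpha * S (I - beta k k^T) + beta v k^T, written without
    # materializing the (d_k, d_k) identity term.
    S = alpha * S - beta * np.outer(alpha * S @ k, k) + beta * np.outer(v, k)
    return S, S @ q  # state size is constant in sequence length

d_k, d_v, steps = 4, 4, 8
S = np.zeros((d_v, d_k))
rng = np.random.default_rng(0)
for _ in range(steps):
    q, k, v = rng.normal(size=d_k), rng.normal(size=d_k), rng.normal(size=d_v)
    k = k / np.linalg.norm(k)  # delta-rule keys are typically normalized
    S, out = gated_delta_step(S, q, k, v, alpha=0.9, beta=0.5)
```

Because the state S never grows with sequence length, each decoded token costs the same regardless of how much context precedes it, which is what makes this family of layers attractive for a 1,000,000-token window.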
Key Features
Native Multimodal Agents • Hybrid Linear-Attention MoE • 201 Language Support
Capabilities
Reasoning: Excellent
Coding: Excellent
Multimodal: Excellent
What's New in This Version
Matches Qwen3-Max reasoning performance while decoding 19x faster on long-context workloads and 8.6x faster on standard ones, at roughly 60% lower cost, and adds native multimodal capabilities.
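The long-context decoding speedup is consistent with what a hybrid linear-attention design buys: a softmax-attention layer must keep a KV cache that grows linearly with sequence length, while a linear-attention layer carries a constant-size state. The back-of-envelope sketch below makes that concrete; every layer count and dimension in it is a made-up illustration value, not Qwen3.5-Plus's disclosed configuration.

```python
# Back-of-envelope memory comparison behind the long-context decoding
# speedup: full softmax attention caches keys/values for every past token,
# while a linear-attention layer keeps a constant-size recurrent state.
# Every layer count and dimension below is a made-up illustration value,
# not Qwen3.5-Plus's disclosed configuration.

def kv_cache_bytes(seq_len, n_layers=48, n_kv_heads=8, head_dim=128,
                   dtype_bytes=2):
    # Keys + values (the factor of 2) for every past token at every layer.
    return seq_len * n_layers * n_kv_heads * head_dim * 2 * dtype_bytes

def linear_state_bytes(n_layers=48, n_heads=8, d_k=128, d_v=128,
                       dtype_bytes=2):
    # One (d_v x d_k) matrix state per head per layer, length-independent.
    return n_layers * n_heads * d_k * d_v * dtype_bytes

for tokens in (8_000, 128_000, 1_000_000):
    print(f"{tokens:>9,} tokens: "
          f"KV cache {kv_cache_bytes(tokens) / 2**30:7.2f} GiB vs "
          f"linear state {linear_state_bytes() / 2**30:7.4f} GiB")
```

At the million-token end of the range the cache dominates memory bandwidth during decoding, which is where replacing most softmax layers with constant-state linear attention pays off.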
Other Qwen Models
Explore more models from Qwen
Qwen3.6-Plus
Alibaba's flagship agentic AI model with hybrid linear attention, always-on reasoning, and autonomous multi-step coding workflows
Qwen3-Max
Alibaba's flagship model with over 1 trillion parameters and exceptional reasoning
QwQ-32B
Reasoning-focused model with extended thinking capabilities