LATEST MODEL

Qwen3.5-Plus

Qwen · 🧠 The Thinker · Released February 2026

Alibaba's hosted flagship, combining a hybrid linear-attention MoE architecture with native multimodal understanding for agentic workflows across 201 languages.


What's New in This Version

Matches Qwen3-Max reasoning while being 19x faster at long-context decoding, 8.6x faster for standard workflows, at ~60% lower cost with native multimodal capabilities

Technical Specifications

Parameters: 397 billion total (17B active)
Context Window: 1,000,000 tokens
Training Method: Hybrid Gated DeltaNet + MoE with multi-step multi-token prediction and scaled RL
Knowledge Cutoff: Not disclosed
Training Data: Up to late 2025
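As a rough illustration of the sparse-activation numbers above, here is a minimal sketch. The 397B-total / 17B-active figures come from the spec; the helper function itself is illustrative and not Alibaba's implementation.

```python
# Illustrative MoE sparsity math based on the published totals:
# 397B total parameters, but only 17B participate per token.
TOTAL_PARAMS = 397e9
ACTIVE_PARAMS = 17e9

def active_fraction(total: float, active: float) -> float:
    """Fraction of weights used per token in a sparse-MoE forward pass."""
    return active / total

frac = active_fraction(TOTAL_PARAMS, ACTIVE_PARAMS)
print(f"Active per token: {frac:.1%}")  # ≈ 4.3% of total weights
```

This sparsity is what lets a near-400B-parameter model decode at the speed and cost of a much smaller dense model.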

Key Features

Native Multimodal Agents Hybrid Linear-Attention MoE 201 Language Support

Capabilities

Reasoning: Excellent
Coding: Excellent
Multimodal: Excellent
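Because the 1,000,000-token context window is shared between prompt and completion, long-document workflows need to budget both sides. A small sketch, assuming only the window size from the spec above; the helper is hypothetical and not part of any Qwen SDK:

```python
CONTEXT_WINDOW = 1_000_000  # tokens, per the spec above

def max_completion_tokens(prompt_tokens: int, window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for the model's reply once the prompt is counted against the window."""
    if prompt_tokens >= window:
        raise ValueError("prompt already fills the context window")
    return window - prompt_tokens

# e.g. a ~950k-token document still leaves room for a 50k-token answer
print(max_completion_tokens(950_000))
```

A caller would use this to cap the `max_tokens` parameter of a request so prompt plus completion never exceed the window.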
© funclosure 2025