Qwen3-30B-A3B
Efficient MoE model with 30B total but only 3B active parameters
Qwen • April 2025
Training Data: up to early 2025
Parameters: 30B total (3B active per token)
Architecture: Mixture of Experts (MoE)
Context Window: 128,000 tokens
Knowledge Cutoff: March 2025
Key Features
MoE Architecture • Efficient Inference • Cost Effective • Fast Response
Capabilities
Efficiency: Outstanding
Speed: Excellent
Reasoning: Very Good
What's New in This Version
Excellent performance-to-cost ratio with sparse activation
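The sparse activation behind that performance-to-cost ratio comes from MoE routing: a gating network picks a small top-k subset of experts for each token, so only a fraction of the total parameters run per forward pass. Below is a minimal NumPy sketch of top-k gating; the sizes (8 experts, top-2 routing, 16-dim hidden state) are illustrative assumptions, not Qwen3-30B-A3B's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only -- not the real Qwen3-30B-A3B config.
num_experts = 8   # experts in the MoE layer
top_k = 2         # experts activated per token
d_model = 16      # hidden size

def moe_layer(x, gate_w, expert_ws):
    """Route each token to its top-k experts; only those experts run."""
    logits = x @ gate_w                                # (tokens, num_experts)
    topk = np.argsort(logits, axis=-1)[:, -top_k:]     # chosen expert indices
    # Softmax over only the selected experts' logits.
    sel = np.take_along_axis(logits, topk, axis=-1)
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for j in range(top_k):
            e = topk[t, j]
            # Only top_k of num_experts expert matmuls execute per token.
            out[t] += w[t, j] * (x[t] @ expert_ws[e])
    return out

x = rng.normal(size=(4, d_model))
gate_w = rng.normal(size=(d_model, num_experts))
expert_ws = rng.normal(size=(num_experts, d_model, d_model))
y = moe_layer(x, gate_w, expert_ws)
print(y.shape)              # (4, 16)
print(top_k / num_experts)  # 0.25 -- fraction of expert params active per token
```

In the toy setup, 2 of 8 experts fire per token, so only a quarter of the expert parameters are exercised; the same principle is how a 30B-parameter model can run with roughly 3B active parameters per token.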
Other Qwen Models
Explore more models from Qwen
Qwen3.6-Plus
Alibaba's flagship agentic AI model with hybrid linear attention, always-on reasoning, and autonomous multi-step coding workflows
Qwen3.5-Plus
Alibaba's hosted flagship combining hybrid linear-attention MoE with native multimodal understanding for agentic workflows across 201 languages
Qwen3-Max
Alibaba's flagship model with over 1 trillion parameters and exceptional reasoning