Mistral AI Models

Mixture of experts • French precision

16 models • Latest: Mistral Small 4

Mistral Small 4

Released March 2026

LATEST

Mistral's unified open-weight MoE model, combining reasoning, multimodal, and coding capabilities under Apache 2.0, with only 6.5B parameters active per token

Parameters
119 billion (6.5B active)
Context
256,000 tokens
Key Features
Configurable Reasoning • Native Multimodal • Unified Architecture (Apache 2.0)
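
The "119 billion (6.5B active)" figure above reflects how mixture-of-experts models work: a router selects a few experts per token, so only a small slice of the total parameter count runs on any single forward pass. A minimal sketch, with hypothetical layer sizes chosen only so the ratio resembles the card (these are not Mistral's published internals):

```python
# Illustrative sketch (not Mistral's implementation): in an MoE layer, a
# router picks the top-k experts per token, so only a fraction of the total
# parameters is used per forward pass.

def active_fraction(n_experts: int, top_k: int,
                    expert_params: float, shared_params: float) -> float:
    """Fraction of parameters touched per token in a toy MoE model."""
    total = shared_params + n_experts * expert_params
    active = shared_params + top_k * expert_params
    return active / total

# Hypothetical split: shared attention/embedding parameters plus many small
# experts, of which only a few fire per token.
frac = active_fraction(n_experts=64, top_k=2,
                       expert_params=1.8e9, shared_params=3e9)
print(f"~{frac:.1%} of parameters active per token")  # ~5.6%
```

With these toy numbers, the totals come out near the card's figures (≈118B total, ≈6.6B active), which is why a 119B model can serve tokens at roughly the cost of a ~6.5B dense one.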

Mistral Large 3

Released December 2025

Mistral's state-of-the-art open-weight frontier model with multimodal and multilingual capabilities under Apache 2.0

Parameters
675 billion (41B active)
Context
256,000 tokens
Key Features
Open Source (Apache 2.0) • Multimodal • Multilingual

Devstral 2

Released December 2025

Mistral's next-generation coding model, designed to compete with Anthropic's Claude models and other coding-focused LLMs

Parameters
~22 billion
Context
128,000 tokens
Key Features
Code Specialization • 128K Context • Vibe Coding

Ministral 3 14B

Released December 2025

Mistral's high-performance dense model in the new Ministral 3 family

Parameters
14 billion
Context
32,000 tokens
Key Features
Dense Architecture • High Performance • Balanced Size

Ministral 3 8B

Released December 2025

Mistral's efficient edge-ready model for drones, cars, robots, phones, and laptops

Parameters
8 billion
Context
32,000 tokens
Key Features
Edge Ready • Device Deployment • Efficient

Ministral 3 3B

Released December 2025

Mistral's ultra-compact model for resource-constrained edge deployments

Parameters
3 billion
Context
32,000 tokens
Key Features
Ultra Compact • Minimal Resources • Fast Inference

Devstral Small 2

Released December 2025

Mistral's compact open-source coding model that runs on consumer hardware under Apache 2.0

Parameters
24 billion
Context
128,000 tokens
Key Features
Open Source (Apache 2.0) • Consumer Hardware • Code Specialization

Magistral Medium

Released June 2025

Mistral's flagship reasoning model with advanced multi-step logic capabilities

Parameters
~200 billion
Context
40,000 tokens
Key Features
Advanced Reasoning • Multi-step Logic • Multilingual Excellence

Magistral Small

Released June 2025

Mistral's open-source reasoning model, released under the Apache 2.0 license

Parameters
24 billion
Context
40,000 tokens
Key Features
Open Source • Reasoning Capabilities • Apache 2.0 License

Ministral 3B

Released October 2024

Mistral's ultra-compact model for edge deployment

Parameters
3 billion
Context
32,000 tokens
Key Features
Ultra Compact • Edge Deployment • Low Resource

Ministral 8B

Released October 2024

Mistral's efficient model for laptops and edge devices with enhanced capabilities

Parameters
8 billion
Context
32,000 tokens
Key Features
Laptop-ready • Enhanced Edge • Balanced Performance

Mistral Large 2

Released July 2024

Mistral's most advanced flagship model at the time of its release

Parameters
123 billion
Context
128,000 tokens
Key Features
Code Generation • Function Calling • Multilingual

Codestral

Released May 2024

Mistral's specialized code generation model

Parameters
22 billion
Context
32,000 tokens
Key Features
Code Specialization • Fill-in-the-middle • Multi-language
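
Codestral's "Fill-in-the-middle" feature means the model is given the code both before and after a gap and generates the missing middle, which is what powers in-editor completion. A minimal sketch of how such a prompt is assembled; the `[PREFIX]`/`[SUFFIX]` markers below are illustrative placeholders, not Codestral's actual control tokens:

```python
# Illustrative fill-in-the-middle (FIM) prompt assembly. Real FIM models use
# model-specific special tokens; the bracketed markers here are placeholders.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a FIM-style prompt: suffix first, then prefix, so the model
    generates the middle immediately after the prefix."""
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))",
)
# The model would be expected to emit something like "a + b" at this point.
```

Putting the suffix before the prefix lets an autoregressive model condition on both sides of the gap while still generating left-to-right from the cursor position.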

Mixtral 8x22B

Released April 2024

Mistral's largest mixture-of-experts model at the time of its release

Parameters
8x22B (MoE)
Context
64,000 tokens
Key Features
Large MoE • Extended Context • High Performance

Mixtral 8x7B

Released December 2023

Mistral's efficient mixture-of-experts model

Parameters
8x7B (MoE)
Context
32,000 tokens
Key Features
Mixture of Experts • High Efficiency • Open Weights
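
The "8x7B" naming is often misread as 56B parameters. Mistral's published figures for Mixtral 8x7B are ~46.7B total and ~12.9B active per token (top-2 of 8 experts), because only the feed-forward blocks are replicated per expert while attention and embeddings are shared. A back-of-the-envelope sketch backing out the implied split; the derived layer sizes are estimates, not published numbers:

```python
# Why "8x7B" is not 56B: solve for the shared vs per-expert parameter split
# implied by Mixtral 8x7B's published totals.
#   total  = shared + 8 * expert_ffn
#   active = shared + 2 * expert_ffn   (top-2 routing)

naive_total = 8 * 7e9                               # what the name suggests: 56B
published_total, published_active = 46.7e9, 12.9e9  # Mistral's figures

expert_ffn = (published_total - published_active) / 6  # subtract, 8 - 2 = 6 experts
shared = published_total - 8 * expert_ffn

print(f"per-expert FFN ≈ {expert_ffn / 1e9:.1f}B, shared ≈ {shared / 1e9:.1f}B")
# → per-expert FFN ≈ 5.6B, shared ≈ 1.6B
```

The shared ~1.6B (attention, embeddings, norms) is counted once, not eight times, which is why the real total falls well short of the naive 56B while per-token compute stays near a ~13B dense model.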

Mistral 7B

Released September 2023

Mistral's compact and efficient base model

Parameters
7 billion
Context
8,192 tokens
Key Features
Compact Size • High Efficiency • Open Weights
© funclosure 2025