Mistral Small 3

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment.

The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware.


Model details

Context window: 32,768 tokens
Max completion size: 15 tokens
Prompt cost / 1K tokens: $0.00000005
Completion cost / 1K tokens: $0.00000008
Accepts:
Produces:

Benchmark performance

Overall: 62 (23rd)
Cost: 100 (1st)
Logic: 51 (17th)
Speed: 95 (5th)
Scoring: 38 (12th)
Tool Use: 19 (9th)
Hallucination: 91 (4th)
Classification: 19 (4th)
Structured Output: 33 (8th)

Pricing

Usage pricing
Prompt: $0.00000005 / 1K tokens
Completion: $0.00000008 / 1K tokens
Request: FREE
Image: FREE
Web Search: FREE
Internal Reasoning: FREE
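Since billing is linear in prompt and completion tokens at the per-1K rates listed above (with no per-request, image, web search, or reasoning charges), a request's cost can be estimated directly. A minimal sketch, using the listed rates; the function name is illustrative:

```python
# Per-1K-token rates from the pricing table above.
PROMPT_COST_PER_1K = 0.00000005      # $ per 1K prompt tokens
COMPLETION_COST_PER_1K = 0.00000008  # $ per 1K completion tokens


def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one request: tokens scaled to thousands,
    multiplied by the corresponding per-1K rate, then summed."""
    return (prompt_tokens / 1000) * PROMPT_COST_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_COST_PER_1K
```

For example, a request that consumes the full 32,768-token context window costs on the order of a millionth of a dollar at these rates.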

Best Overall scoring LLMs

xAI Grok 4 Fast: 88 (1st)
Qwen Qwen3 VL 235B A22B Instruct: 86 (2nd)
xAI Grok 4.1 Fast: 84 (3rd)
OpenAI GPT-5.1 Chat: 82 (4th)
OpenAI GPT-5.1-Codex: 82 (4th)
Anthropic Claude Haiku 4.5: 80 (5th)