Mixtral 8x22B Instruct

Mistral's official instruct fine-tuned version of Mixtral 8x22B, a sparse mixture-of-experts model. Only 39B of its 141B parameters are active per token (the routing is sketched below), giving it unparalleled cost efficiency for its size. Its strengths include:

  • strong math, coding, and reasoning
  • large context length (64k)
  • fluency in English, French, Italian, German, and Spanish
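
The 39B-of-141B figure comes from sparse top-2 routing: in each MoE layer, every token is sent to only two of the eight expert feed-forward blocks, so most weights sit idle on any single forward pass. Below is a minimal NumPy sketch of that gating pattern; the shapes, names, and toy experts are illustrative assumptions, not Mixtral's actual implementation.

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Route each token to its top-2 experts and mix their outputs.

    x:       (tokens, d_model) activations
    gate_w:  (d_model, n_experts) router weights
    experts: list of n_experts callables, each (d_model,) -> (d_model,)
    Illustrative only -- mirrors the top-2-of-8 scheme, not the real kernel.
    """
    logits = x @ gate_w                          # (tokens, n_experts)
    top2 = np.argsort(logits, axis=-1)[:, -2:]   # indices of the 2 best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        picked = logits[t, top2[t]]
        weights = np.exp(picked) / np.exp(picked).sum()  # softmax over the 2
        for w, e in zip(weights, top2[t]):
            out[t] += w * experts[e](x[t])       # only 2 of n_experts run
    return out

# Toy demo: 8 experts exist, but each token only pays for 2 of them.
rng = np.random.default_rng(0)
d, n_exp = 16, 8
experts = [lambda v, W=rng.normal(size=(d, d)) / d: v @ W for _ in range(n_exp)]
x = rng.normal(size=(4, d))
y = top2_moe(x, rng.normal(size=(d, n_exp)), experts)
print(y.shape)  # (4, 16)
```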

See benchmarks in Mistral's launch announcement.

Model details

Context window:       65,536 tokens
Max completion size:  48 tokens
Prompt cost:          $0.000002 / token
Completion cost:      $0.000006 / token
Accepts:              text
Produces:             text
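
Given the text-in/text-out interface above, a request is an ordinary chat completion. Here is a minimal sketch assuming an OpenAI-compatible endpoint; the base URL and the model identifier are placeholders that vary by host, so check your provider's catalog.

```python
from openai import OpenAI

# Endpoint and model id are assumptions -- substitute your provider's values.
client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")

resp = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",  # hypothetical id; varies by host
    messages=[{"role": "user",
               "content": "Summarize mixture-of-experts in one line."}],
    max_tokens=256,  # completion cap; prompt + completion must fit 65,536 tokens
)
print(resp.choices[0].message.content)
```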

Benchmark performance

Category            Score   Placement
Overall                65   20th
Cost                   80   14th
Logic                  68   12th
Speed                  91    9th
Scoring                53    7th
Tool Use               32    7th
Hallucination          69   12th
Classification         19    4th
Structured Output      50    6th

Pricing

Usage pricing

Prompt:              $0.000002 / token
Completion:          $0.000006 / token
Request:             free
Image:               free
Web Search:          free
Internal Reasoning:  free
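
Because only prompt and completion tokens are billed, the cost of a call is a two-term sum. A quick sanity check using the per-token rates above:

```python
PROMPT_PER_TOKEN = 0.000002      # equivalent to $2 per million prompt tokens
COMPLETION_PER_TOKEN = 0.000006  # equivalent to $6 per million completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request; requests themselves are free."""
    return (prompt_tokens * PROMPT_PER_TOKEN
            + completion_tokens * COMPLETION_PER_TOKEN)

# e.g. a 10,000-token prompt with a 1,000-token reply:
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0260
```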

Best Overall scoring LLMs

Provider    Model                          Score   Placement
xAI         Grok 4 Fast                       88   1st
Qwen        Qwen3 VL 235B A22B Instruct       86   2nd
xAI         Grok 4.1 Fast                     84   3rd
OpenAI      GPT-5.1 Chat                      82   4th
OpenAI      GPT-5.1-Codex                     82   4th
Anthropic   Claude Haiku 4.5                  80   5th