MiniMax M2

MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning, tool use, and multi-step task execution while maintaining low latency and deployment efficiency.

The model excels in code generation, multi-file editing, compile-run-fix loops, and test-validated repair, showing strong results on SWE-Bench Verified, Multi-SWE-Bench, and Terminal-Bench. It also performs competitively in agentic evaluations such as BrowseComp and GAIA, effectively handling long-horizon planning, retrieval, and recovery from execution errors.

Benchmarked by Artificial Analysis, MiniMax-M2 ranks among the top open-source models for composite intelligence, spanning mathematics, science, and instruction-following. Its small activation footprint enables fast inference, high concurrency, and improved unit economics, making it well-suited for large-scale agents, developer assistants, and reasoning-driven applications that require responsiveness and cost efficiency.

To avoid degrading this model's performance, MiniMax strongly recommends preserving reasoning between turns. Learn more about using reasoning_details to pass reasoning back in our docs.
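A minimal sketch of what "preserving reasoning between turns" can look like in client code, assuming an OpenAI-compatible chat format with a `reasoning_details` field on assistant messages (field shapes here are illustrative; check the docs for the exact schema your endpoint returns):

```python
# Sketch: carry reasoning_details back on multi-turn calls so the model
# can see its own prior reasoning. The response shape below is a
# hypothetical example, not the authoritative API schema.

def append_assistant_turn(messages, response_message):
    """Append the assistant's reply, preserving its reasoning_details
    instead of silently dropping them from the conversation history."""
    turn = {
        "role": "assistant",
        "content": response_message.get("content", ""),
    }
    # Pass reasoning blocks back verbatim on the next request.
    if "reasoning_details" in response_message:
        turn["reasoning_details"] = response_message["reasoning_details"]
    messages.append(turn)
    return messages

messages = [{"role": "user", "content": "Refactor this function."}]
reply = {  # example shape of one returned assistant message
    "content": "Here is the refactored version...",
    "reasoning_details": [{"type": "reasoning.text", "text": "..."}],
}
append_assistant_turn(messages, reply)
messages.append({"role": "user", "content": "Now add tests."})
```

The key point is simply that the assistant turn sent back to the API includes the reasoning payload it came with, rather than only its visible content.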


Model details

Context window: 196,608 tokens
Max completion size: 75 tokens
Prompt cost / 1K tokens: $0.0000002
Completion cost / 1K tokens: $0.000001

Benchmark performance

Category          | Score | Placement
------------------|-------|----------
Overall           | 60    | 25th
Cost              | 86    | 12th
Logic             | 73    | 10th
Speed             | 55    | 31st
Scoring           | 16    | 22nd
Tool Use          | 45    | 5th
Hallucination     | 57    | 15th
Classification    | 39    | 2nd
Structured Output | 42    | 7th

Pricing

Usage pricing

Item               | Price
-------------------|-----------
Prompt             | $0.0000002
Completion         | $0.000001
Request            | FREE
Image              | FREE
Web Search         | FREE
Internal Reasoning | FREE
Input Cache Read   | FREE
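Given the per-1K-token rates listed above, the cost of a single request can be estimated with straightforward arithmetic. A small sketch, using the listed rates (actual billing may differ):

```python
# Sketch: estimate request cost from the listed per-1K-token rates.
PROMPT_RATE_PER_1K = 0.0000002      # $ per 1K prompt tokens (as listed)
COMPLETION_RATE_PER_1K = 0.000001   # $ per 1K completion tokens (as listed)

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    return ((prompt_tokens / 1000) * PROMPT_RATE_PER_1K
            + (completion_tokens / 1000) * COMPLETION_RATE_PER_1K)

# Example: 50K prompt tokens + 2K completion tokens.
cost = estimate_cost(prompt_tokens=50_000, completion_tokens=2_000)
```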

Best Overall scoring LLMs

Provider  | Model                       | Score | Placement
----------|-----------------------------|-------|----------
xAI       | Grok 4 Fast                 | 88    | 1st
Qwen      | Qwen3 VL 235B A22B Instruct | 86    | 2nd
xAI       | Grok 4.1 Fast               | 84    | 3rd
OpenAI    | GPT-5.1 Chat                | 82    | 4th
OpenAI    | GPT-5.1-Codex               | 82    | 4th
Anthropic | Claude Haiku 4.5            | 80    | 5th