DeepSeek V3.1

DeepSeek-V3.1 is a large hybrid reasoning model (671B total parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference. Users can control the reasoning behaviour with a reasoning-enabled boolean.
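A minimal sketch of how the reasoning toggle might appear in a chat-completions request body. The field names (`reasoning`, `enabled`) and the model slug are assumptions for illustration, not a confirmed API spec:

```python
import json

def build_request(prompt: str, reasoning_enabled: bool) -> dict:
    """Assemble a hypothetical request body; flip the boolean to switch
    between thinking and non-thinking modes."""
    return {
        "model": "deepseek/deepseek-chat-v3.1",  # hypothetical model slug
        "messages": [{"role": "user", "content": prompt}],
        # Assumed shape of the reasoning toggle
        "reasoning": {"enabled": reasoning_enabled},
    }

body = build_request("Prove that 17 is prime.", reasoning_enabled=True)
print(json.dumps(body, indent=2))
```

Sending the same payload with `reasoning_enabled=False` would request the faster non-thinking mode.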

The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it suitable for research, coding, and agentic workflows.
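Structured tool calling typically means the client declares tools as JSON schemas the model can invoke. A sketch of one such declaration, assuming the common OpenAI-style function format; the tool name and parameters are illustrative, not part of DeepSeek's published tooling:

```python
# One entry of a hypothetical `tools` array passed alongside the messages.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",  # illustrative tool name
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms."},
                "top_k": {"type": "integer", "description": "Number of results."},
            },
            "required": ["query"],
        },
    },
}

print(web_search_tool["function"]["name"])
```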

It succeeds DeepSeek V3-0324 and performs well across a variety of tasks.


Model details

Context window: 163,840 tokens
Max completion size: 7 tokens
Prompt cost / 1K tokens: $0.0000002
Completion cost / 1K tokens: $0.0000008

Benchmark performance

| Category          | Score | Placement |
| ----------------- | ----- | --------- |
| Overall           | 70    | 15th      |
| Cost              | 98    | 3rd       |
| Logic             | 77    | 8th       |
| Speed             | 70    | 26th      |
| Scoring           | 20    | 20th      |
| Tool Use          | 51    | 4th       |
| Hallucination     | 91    | 4th       |
| Classification    | 19    | 4th       |
| Structured Output | 83    | 3rd       |

Pricing

| Usage              | Price      |
| ------------------ | ---------- |
| Prompt             | $0.0000002 |
| Completion         | $0.0000008 |
| Request            | Free       |
| Image              | Free       |
| Web Search         | Free       |
| Internal Reasoning | Free       |
| Input Cache Read   | Free       |
| Input Cache Write  | Free       |
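Since only prompt and completion tokens are billed, a per-request cost works out directly from the listed per-1K-token rates. A back-of-the-envelope check (the token counts are illustrative):

```python
# Rates as listed in the pricing table, in USD per 1K tokens.
PROMPT_RATE = 0.0000002
COMPLETION_RATE = 0.0000008

def usage_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """USD cost of one request under the listed rates; all other
    usage types (requests, images, cache reads/writes) are free."""
    return (prompt_tokens / 1000) * PROMPT_RATE \
         + (completion_tokens / 1000) * COMPLETION_RATE

# Example: a long-context call with 500K prompt and 100K completion tokens.
print(f"${usage_cost(500_000, 100_000):.6f}")  # prints "$0.000180"
```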

Best Overall scoring LLMs

| Rank | Provider  | Model                       | Score |
| ---- | --------- | --------------------------- | ----- |
| 1st  | xAI       | Grok 4 Fast                 | 88    |
| 2nd  | Qwen      | Qwen3 VL 235B A22B Instruct | 86    |
| 3rd  | xAI       | Grok 4.1 Fast               | 84    |
| 4th  | OpenAI    | GPT-5.1 Chat                | 82    |
| 4th  | OpenAI    | GPT-5.1-Codex               | 82    |
| 5th  | Anthropic | Claude Haiku 4.5            | 80    |