DeepSeek V3.2 Exp

DeepSeek-V3.2-Exp is an experimental large language model released by DeepSeek as an intermediate step between V3.1 and future architectures. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism designed to improve training and inference efficiency in long-context scenarios while maintaining output quality. Users can control the reasoning behaviour with the reasoning enabled boolean; learn more in our docs.
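For illustration, here is a minimal sketch of toggling the reasoning behaviour through an OpenAI-compatible chat completions client. The base URL, model slug, and the exact shape of the reasoning field are assumptions for the sake of the example, not confirmed API details.

```python
# Minimal sketch: calling DeepSeek-V3.2-Exp with the reasoning toggle.
# ASSUMPTIONS: the base URL, model slug, and the shape of the `reasoning`
# field are illustrative only, not confirmed API details.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-gateway.com/v1",  # hypothetical gateway URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v3.2-exp",  # hypothetical model slug
    messages=[{"role": "user", "content": "Summarize the tradeoffs of sparse attention."}],
    extra_body={"reasoning": {"enabled": True}},  # assumed shape of the reasoning boolean
)

print(response.choices[0].message.content)
```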

The model was trained under conditions aligned with V3.1-Terminus to enable direct comparison. Benchmarking shows performance roughly on par with V3.1-Terminus across reasoning, coding, and agentic tool-use tasks, with minor tradeoffs and gains depending on the domain. This release focuses on validating architectural optimizations for extended context lengths rather than advancing raw task accuracy, making it primarily a research-oriented model for exploring efficient transformer designs.
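As an illustration only, the sketch below shows the general idea behind fine-grained sparse attention: each query attends to a small top-k subset of keys instead of the full context. This is a generic top-k selection scheme written for clarity, not DeepSeek's published DSA implementation, which uses its own selection and kernel design.

```python
# Illustrative top-k sparse attention (single head, no batching).
# Generic sketch of the idea, NOT DeepSeek's actual DSA kernel.
import numpy as np

def sparse_attention(q, k, v, top_k=64):
    """For each query, attend only to the top_k highest-scoring keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (n_q, n_k); a real kernel avoids the full matrix
    top_k = min(top_k, scores.shape[-1])
    # Indices of the top_k keys per query.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    # Mask out everything outside the selected key set.
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)
    masked = scores + mask
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                         # (n_q, d_v)

# Example: 8 queries over a 1,024-token context, each attending to 64 keys.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(8, 128)), rng.normal(size=(1024, 128)), rng.normal(size=(1024, 128))
print(sparse_attention(q, k, v, top_k=64).shape)  # (8, 128)
```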


Model details

Context window: 163,840 tokens
Max completion size: 79 tokens
Prompt cost / 1K tokens: $0.00000021
Completion cost / 1K tokens: $0.00000032

Benchmark performance

Overall: 73 (12th)
Cost: 98 (3rd)
Logic: 84 (5th)
Speed: 28 (35th)
Scoring: 20 (20th)
Tool Use: 58 (3rd)
Hallucination: 89 (5th)
Classification: 50 (1st)
Structured Output: 75 (4th)

Pricing

Usage pricing
Prompt: $0.00000021
Completion: $0.00000032
Request: FREE
Image: FREE
Web Search: FREE
Internal Reasoning: FREE
Input Cache Read: FREE
Input Cache Write: FREE

Best Overall scoring LLMs

xAI Grok 4 Fast: 88 (1st)
Qwen Qwen3 VL 235B A22B Instruct: 86 (2nd)
xAI Grok 4.1 Fast: 84 (3rd)
OpenAI GPT-5.1 Chat: 82 (4th)
OpenAI GPT-5.1-Codex: 82 (4th)
Anthropic Claude Haiku 4.5: 80 (5th)