Gemini 3 Pro Preview

Gemini 3 Pro is Google’s flagship frontier model for high-precision multimodal reasoning, combining strong performance across text, image, video, audio, and code with a 1M-token context window. It delivers state-of-the-art benchmark results in general reasoning, STEM problem solving, factual QA, and multimodal understanding, including leading scores on LMArena, GPQA Diamond, MathArena Apex, MMMU-Pro, and Video-MMMU. Interactions emphasize depth and interpretability: the model is designed to infer intent with minimal prompting and produce direct, insight-focused responses. Note that reasoning details must be preserved across turns when using multi-turn tool calling; see https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks.
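As a sketch of what "preserving reasoning details" can look like in practice: when the model returns an assistant message during a tool-calling exchange, any reasoning blocks on that message should be sent back verbatim on the next request. The field name `reasoning_details` follows the OpenRouter docs linked above; the helper itself is hypothetical.

```python
def append_assistant_turn(history, assistant_message):
    """Append an assistant turn to the conversation history, carrying over
    tool calls and reasoning blocks so they are echoed back verbatim on the
    next request (required for multi-turn tool calling with reasoning)."""
    turn = {
        "role": "assistant",
        "content": assistant_message.get("content"),
    }
    # Preserve tool calls and reasoning blocks if the response included them.
    if "tool_calls" in assistant_message:
        turn["tool_calls"] = assistant_message["tool_calls"]
    if "reasoning_details" in assistant_message:
        turn["reasoning_details"] = assistant_message["reasoning_details"]
    history.append(turn)
    return history
```

Dropping `reasoning_details` from the history is what the docs warn against: the next turn would lose the reasoning context the model expects to see.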

Built for advanced development and agentic workflows, Gemini 3 Pro provides robust tool-calling, long-horizon planning stability, and strong zero-shot generation for complex UI, visualization, and coding tasks. It excels at agentic coding (SWE-Bench Verified, Terminal-Bench 2.0), multimodal analysis, and structured long-form tasks such as research synthesis, planning, and interactive learning experiences. Suitable applications include autonomous agents, coding assistants, multimodal analytics, scientific reasoning, and high-context information processing.
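A minimal sketch of the tool-calling loop such agentic workflows rely on: the model requests tool invocations, the client executes them locally and appends the results before the next model call. The message shapes follow the OpenAI-style schema OpenRouter uses; the tool registry and the `add` tool are hypothetical, and no network call is shown.

```python
import json

def run_tool_calls(tool_calls, registry):
    """Execute each requested tool locally and return the tool-result
    messages to append to the conversation before the next model call."""
    results = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        output = registry[name](**args)  # dispatch to the local implementation
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# Hypothetical tool and a model-issued call, for illustration only.
registry = {"add": lambda a, b: {"sum": a + b}}
calls = [{"id": "call_1", "function": {"name": "add", "arguments": '{"a": 2, "b": 3}'}}]
messages = run_tool_calls(calls, registry)
```

Long-horizon planning stability matters precisely because this loop may run for many iterations, with each tool result feeding the next request.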


Model details

Context window: 1,048,576 tokens
Max completion size: 29 tokens
Prompt cost / token: $0.000002
Completion cost / token: $0.000012
Accepts: text, image, video, and audio input
Produces: text

Benchmark performance

Overall: 78 (7th)
Cost: 28 (22nd)
Logic: 87 (4th)
Speed: 67 (27th)
Scoring: 34 (14th)
Tool Use: 77 (1st)
Hallucination: 97 (2nd)
Classification: 50 (1st)
Structured Output: 92 (2nd)

Pricing

Usage pricing:
Prompt: $0.000002 / token
Completion: $0.000012 / token
Request: free
Image: $0.010
Web Search: free
Internal Reasoning: free
Input Cache Read: free
Input Cache Write: free

Best Overall scoring LLMs

1st: Grok 4 Fast (xAI), score 88
2nd: Qwen3 VL 235B A22B Instruct (Qwen), score 86
3rd: Grok 4.1 Fast (xAI), score 84
4th: GPT-5.1 Chat (OpenAI), score 82
4th: GPT-5.1-Codex (OpenAI), score 82
5th: Claude Haiku 4.5 (Anthropic), score 80