gpt-oss-120b

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for reasoning-heavy, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
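As a rough illustration of the function-calling support mentioned above, the sketch below builds a request payload in the common OpenAI-style chat-completions format. This is a minimal assumption-laden example: the `get_weather` tool is hypothetical, and the exact request shape accepted by any given gpt-oss-120b host may differ.

```python
import json

# Hedged sketch: an OpenAI-style chat-completions payload exercising
# function calling. The get_weather tool is hypothetical; only the model
# name comes from this page.
payload = {
    "model": "gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# The payload serializes cleanly to JSON for the request body.
body = json.dumps(payload)
```

In this format the model responds either with plain text or with a `tool_calls` entry naming the function and its JSON arguments, which the caller executes and feeds back as a `tool` role message.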


Model details

Context window: 131,072 tokens
Max completion size: 52 tokens
Prompt cost / 1K tokens: $0.00000004
Completion cost / 1K tokens: $0.0000002

Benchmark performance

Overall: 62 (23rd)
Cost: 100 (1st)
Logic: 82 (6th)
Speed: 90 (10th)
Scoring: 32 (15th)
Tool Use: 39 (6th)
Hallucination: 37 (19th)
Classification: 29 (3rd)
Structured Output: 25 (9th)

Pricing

Usage pricing
Prompt: $0.00000004 / 1K tokens
Completion: $0.0000002 / 1K tokens
Request: FREE
Image: FREE
Web Search: FREE
Internal Reasoning: FREE
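Since only prompt and completion tokens are billed, per-request cost follows directly from the two listed rates. A small sketch using the rates from the pricing table above (the token counts are illustrative):

```python
# Per-1K-token rates as listed in the pricing table.
PROMPT_RATE = 0.00000004      # $ per 1K prompt tokens
COMPLETION_RATE = 0.0000002   # $ per 1K completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost for one request; requests themselves are free."""
    return (prompt_tokens / 1000) * PROMPT_RATE \
         + (completion_tokens / 1000) * COMPLETION_RATE

# Example: 10,000 prompt tokens and 1,000 completion tokens
cost = estimate_cost(10_000, 1_000)  # ≈ $0.0000006
```

Even a maximal 131,072-token prompt stays well under a cent at these rates, which is consistent with the model's 1st-place Cost score above.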

Best Overall scoring LLMs

1st: Grok 4 Fast (xAI), score 88
2nd: Qwen3 VL 235B A22B Instruct (Qwen), score 86
3rd: Grok 4.1 Fast (xAI), score 84
4th: GPT-5.1 Chat (OpenAI), score 82
4th: GPT-5.1-Codex (OpenAI), score 82
5th: Claude Haiku 4.5 (Anthropic), score 80