Llama 4 Scout

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It accepts native multimodal input (text and image) and produces multilingual output (text and code) across 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout routes each token through a subset of its 16 experts and offers a native context length of up to 10 million tokens, trained on a corpus of roughly 40 trillion tokens.
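The mixture-of-experts idea described above can be sketched in a few lines: a router scores each token against every expert and only the selected expert's weights run for that token, which is why the active parameter count (17B) is far smaller than the total (109B). This is a toy top-1 routing illustration, not Meta's actual implementation; all dimensions and weights here are made up.

```python
import numpy as np

# Toy top-1 MoE routing sketch (illustrative only, not Llama 4's code).
rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 16, 4   # 16 experts, mirroring the "16E" naming

tokens = rng.standard_normal((n_tokens, d_model))
router_w = rng.standard_normal((d_model, n_experts))
experts = rng.standard_normal((n_experts, d_model, d_model))  # one weight matrix per expert

scores = tokens @ router_w          # (n_tokens, n_experts) routing logits
choice = scores.argmax(axis=1)      # pick one expert per token (top-1 routing)

# Only the chosen expert's parameters are applied to each token.
out = np.stack([tokens[i] @ experts[choice[i]] for i in range(n_tokens)])
```

Because each token touches only one expert's weight matrix, the compute per token scales with the active parameters rather than the full parameter count.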

Built for high efficiency and local or commercial deployment, Llama 4 Scout uses early fusion for seamless modality integration. It is instruction-tuned for multilingual chat, captioning, and image-understanding tasks. Released under the Llama 4 Community License, it has a training-data cutoff of August 2024 and launched publicly on April 5, 2025.


Model details

Context window: 327,680 tokens
Max completion size: 88 tokens
Prompt cost / 1K tokens: $0.00000008
Completion cost / 1K tokens: $0.0000003
Accepts: text, image
Produces: text

Benchmark performance

Overall: 68 (17th)
Cost: 100 (1st)
Logic: 59 (15th)
Speed: 94 (6th)
Scoring: 61 (3rd)
Tool Use: 13 (10th)
Hallucination: 51 (17th)
Classification: 50 (1st)
Structured Output: 25 (9th)

Pricing

Usage pricing
Prompt: $0.00000008
Completion: $0.0000003
Request: FREE
Image: FREE
Web Search: FREE
Internal Reasoning: FREE
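Since only prompt and completion tokens are billed (requests, images, web search, and internal reasoning are listed as free), the cost of a call reduces to simple per-1K-token arithmetic. A minimal sketch, using the rates listed above; the token counts in the example are hypothetical:

```python
# Rates taken from the pricing table above ($ per 1K tokens, as listed).
PROMPT_RATE_PER_1K = 0.00000008
COMPLETION_RATE_PER_1K = 0.0000003

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request: only prompt and completion tokens are
    billed on this listing; everything else is free."""
    return (prompt_tokens / 1000) * PROMPT_RATE_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_RATE_PER_1K

# e.g. a 2,000-token prompt with a 500-token completion
cost = request_cost(2000, 500)
```

At these rates even large workloads are cheap, which is consistent with the model's 1st-place Cost score above.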

Best Overall scoring LLMs

1st: Grok 4 Fast (xAI), score 88
2nd: Qwen3 VL 235B A22B Instruct (Qwen), score 86
3rd: Grok 4.1 Fast (xAI), score 84
4th: GPT-5.1 Chat (OpenAI), score 82
4th: GPT-5.1-Codex (OpenAI), score 82
5th: Claude Haiku 4.5 (Anthropic), score 80