# MMLU

Massive Multitask Language Understanding, a multiple-choice benchmark spanning 57 subjects.
## Leaderboard (35 models)

| Rank | Model | Score (%) | Stderr |
|---|---|---|---|
| 1 | GPT-4o | 88.10 | — |
| 2 | Claude 3.7 Sonnet | 87.30 | — |
| 3 | DeepSeek V3 | 87.20 | — |
| 4 | Gemini 1.5 Flash | 86.90 | — |
| 5 | GPT-4.1 | 86.40 | — |
| 6 | Llama 3.3 70B | 86.30 | — |
| 7 | Qwen2.5-Max | 85.30 | — |
| 8 | Qwen 2.5 72B | 85.00 | — |
| 9 | Phi-4 | 84.80 | — |
| 10 | Claude 3 Opus | 84.60 | — |
| 11 | Llama 3.1 405B | 84.50 | — |
| 12 | gpt-4o-mini-2024-07-18 | 81.80 | — |
| 13 | GPT-4 Turbo | 81.30 | — |
| 14 | Mistral Large | 80.00 | — |
| 15 | Gemini 2.5 Pro (Jun 2025) | 79.70 | — |
| 16 | Meta-Llama-3-8B-Instruct | 79.30 | — |
| 17 | yi-lightning | 79.30 | — |
| 18 | Phi-3-medium-128k-instruct | 78.00 | — |
| 19 | Mixtral-8x7B-v0.1 | 77.80 | — |
| 20 | Gemma 3 27B | 75.70 | — |
| 21 | Phi-3-small-8k-instruct | 75.70 | — |
| 22 | Claude 3.5 Haiku | 74.30 | — |
| 23 | Claude Opus 4.5 | 73.40 | — |
| 24 | gpt-3.5-turbo-1106 | 71.40 | — |
| 25 | falcon-180B | 70.60 | — |
| 26 | Llama-2-70b-hf | 69.90 | — |
| 27 | c4ai-command-a-03-2025 | 69.40 | — |
| 28 | Phi-3-mini-4k-instruct | 68.80 | — |
| 29 | Qwen3-Max-Instruct | 68.60 | — |
| 30 | Yi-6B | 68.40 | — |
| 31 | Qwen 3 235B | 66.30 | — |
| 32 | gemma-7b | 66.10 | — |
| 33 | Llama-2-7b | 62.60 | — |
| 34 | Mistral-7B-v0.1 | 62.50 | — |
| 35 | GPT-OSS 120B | 25.70 | — |
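The scores above are accuracy percentages over MMLU's multiple-choice questions. Because the questions are grouped into 57 subjects of unequal size, an overall score can be computed either by pooling all questions (micro average) or by averaging per-subject accuracies (macro average); reported numbers can differ slightly depending on which convention an evaluator uses. A minimal sketch with made-up per-subject results (the subject names and data here are illustrative, not from the leaderboard):

```python
# Hypothetical per-subject results: 1 = correct answer, 0 = incorrect.
subject_results = {
    "abstract_algebra": [1, 0],                # 50% on 2 questions
    "anatomy": [1, 1, 1, 0, 1],                # 80% on 5 questions
    "world_religions": [1, 1, 1, 1],           # 100% on 4 questions
}

def micro_accuracy(results):
    """Pooled accuracy: correct answers over all questions combined."""
    total = sum(len(v) for v in results.values())
    correct = sum(sum(v) for v in results.values())
    return 100 * correct / total

def macro_accuracy(results):
    """Mean of per-subject accuracies: each subject weighted equally."""
    per_subject = [100 * sum(v) / len(v) for v in results.values()]
    return sum(per_subject) / len(per_subject)

print(f"micro: {micro_accuracy(subject_results):.2f}")  # → micro: 81.82
print(f"macro: {macro_accuracy(subject_results):.2f}")  # → macro: 76.67
```

The gap between the two averages grows when a model's accuracy varies a lot across subjects, which is why comparisons across leaderboards should check that the same averaging convention was used.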
Data source: Epoch AI, “Data on AI Benchmarking”, published at epoch.ai. Licensed under CC-BY 4.0.