OTIS Mock AIME 2024-2025
Competition-level math problems from the OTIS Mock AIME, used to evaluate olympiad-level mathematical reasoning
Leaderboard (38 models)

| Rank | Model | Score (%) | Stderr |
|---|---|---|---|
| 1 | GPT-5.2 | 96.11 | ±0.03 |
| 2 | Gemini 3 Pro | 92.78 | ±0.04 |
| 3 | GPT-OSS 120B | 88.89 | ±0.04 |
| 4 | DeepSeek V3 | 87.82 | ±0.04 |
| 5 | Qwen 3 235B | 86.67 | ±0.05 |
| 6 | Claude Opus 4.5 | 86.11 | ±0.04 |
| 7 | Gemini 2.5 Pro (Jun 2025) | 84.72 | ±0.05 |
| 8 | Grok 4 | 84.00 | ±0.05 |
| 9 | o3 | 83.89 | ±0.04 |
| 10 | kimi-k2-thinking (official) | 83.06 | ±0.05 |
| 11 | o4-mini (high) | 81.67 | ±0.05 |
| 12 | Grok-3 mini | 77.78 | ±0.06 |
| 13 | Claude Sonnet 4.5 | 77.78 | ±0.06 |
| 14 | Qwen3-Max-Instruct | 73.33 | ±0.06 |
| 15 | o1 | 73.33 | ±0.07 |
| 16 | Claude Haiku 4.5 | 66.67 | ±0.07 |
| 17 | Gemini 2.0 Flash Thinking Exp | 57.78 | ±0.07 |
| 18 | Claude 3.7 Sonnet | 57.78 | ±0.07 |
| 19 | DeepSeek R1 | 53.33 | ±0.08 |
| 20 | GPT-4.1 mini | 44.72 | ±0.06 |
| 21 | GPT-4.1 | 38.33 | ±0.06 |
| 22 | Mistral Large | 32.22 | ±0.06 |
| 23 | Gemini 1.5 Flash | 23.06 | ±0.05 |
| 24 | Llama 4 Maverick (FP8) | 20.56 | ±0.05 |
| 25 | Gemma 3 27B | 19.72 | ±0.05 |
| 26 | Qwen Plus | 17.78 | ±0.04 |
| 27 | Qwen2.5-Max | 16.11 | ±0.04 |
| 28 | Phi-4 | 13.75 | ±0.04 |
| 29 | Llama 3.1 405B | 9.72 | ±0.03 |
| 30 | Llama 4 Scout | 7.78 | ±0.03 |
| 31 | gpt-4o-mini-2024-07-18 | 6.94 | ±0.03 |
| 32 | GPT-4 Turbo | 6.67 | ±0.02 |
| 33 | GPT-4o | 6.39 | ±0.03 |
| 34 | Llama 3.3 70B | 5.14 | ±0.02 |
| 35 | Claude 3 Opus | 4.72 | ±0.02 |
| 36 | Claude 3.5 Haiku | 4.31 | ±0.02 |
| 37 | Meta-Llama-3-8B-Instruct | 4.31 | ±0.02 |
| 38 | Llama-2-7b | 0.00 | — |
Data source: Epoch AI, “Data on AI Benchmarking”. Published at epoch.ai
Licensed under CC-BY 4.0
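
For readers who want to work with these numbers programmatically, here is a minimal sketch that parses rows of the markdown table above into structured records and filters them. It is an illustration only, not part of the Epoch AI release; the column semantics (Score as percent accuracy, Stderr as the reported standard error, with "—" meaning no value) are assumed from the table layout, and `LEADERBOARD_MD` contains only a few rows copied from the table as a placeholder.

```python
import re

# A few rows copied verbatim from the leaderboard above; extend with the
# remaining rows as needed.
LEADERBOARD_MD = """
| 1 | GPT-5.2 | 96.11 | ±0.03 |
| 2 | Gemini 3 Pro | 92.78 | ±0.04 |
| 3 | GPT-OSS 120B | 88.89 | ±0.04 |
| 38 | Llama-2-7b | 0.00 | — |
"""

def parse_rows(md: str):
    """Parse markdown table rows into rank/model/score/stderr records.

    stderr is None when the leaderboard shows no value (the em dash).
    """
    records = []
    for line in md.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 4:
            continue  # skip anything that is not a 4-column data row
        rank, model, score, stderr = cells
        m = re.match(r"±\s*([0-9.]+)", stderr)
        records.append({
            "rank": int(rank),
            "model": model,
            "score": float(score),            # assumed: percent accuracy
            "stderr": float(m.group(1)) if m else None,
        })
    return records

if __name__ == "__main__":
    rows = parse_rows(LEADERBOARD_MD)
    # Example query: models scoring at or above 85%.
    for r in rows:
        if r["score"] >= 85.0:
            print(f"{r['rank']:>2}. {r['model']}: {r['score']:.2f}%")
```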