WeirdML
Unusual machine learning tasks testing adaptability
Leaderboard (31 models)

| Rank | Model | Score | Stderr |
|---|---|---|---|
| 1 | GPT-5.2 | 72.20 | — |
| 2 | Gemini 3 Pro | 69.93 | — |
| 3 | Claude Opus 4.5 | 63.70 | — |
| 4 | o3 | 58.21 | — |
| 5 | Gemini 2.5 Pro (Jun 2025) | 54.03 | — |
| 6 | o4-mini (high) | 52.56 | — |
| 7 | GPT-OSS 120B | 48.17 | — |
| 8 | o1 | 47.56 | — |
| 9 | Grok 4 | 45.73 | — |
| 10 | kimi-k2-thinking (official) | 42.79 | — |
| 11 | Grok-3 mini | 42.58 | — |
| 12 | DeepSeek V3 | 41.63 | — |
| 13 | Qwen3-Max-Instruct | 41.17 | — |
| 14 | Qwen 3 235B | 41.04 | — |
| 15 | Claude 3.7 Sonnet | 39.97 | — |
| 16 | GPT-4.1 | 39.37 | — |
| 17 | GPT-4.1 mini | 37.61 | — |
| 18 | DeepSeek R1 | 36.49 | — |
| 19 | Grok Code Fast 1 | 35.06 | — |
| 20 | Mistral Large | 33.13 | — |
| 21 | Claude 3.5 Haiku | 30.73 | — |
| 22 | GPT-4o | 25.12 | — |
| 23 | Gemini 1.5 Flash | 24.87 | — |
| 24 | Llama-4-Maverick-17B-128E-Instruct | 24.47 | — |
| 25 | Claude 3 Opus | 23.18 | — |
| 26 | Llama 3.1 405B | 21.38 | — |
| 27 | GPT-4 Turbo | 18.01 | — |
| 28 | Llama 3.3 70B | 14.44 | — |
| 29 | gpt-4o-mini-2024-07-18 | 11.76 | — |
| 30 | gpt-3.5-turbo-1106 | 3.48 | — |
| 31 | Mixtral-8x7B-v0.1 | 3.17 | — |
Data source: Epoch AI, “Data on AI Benchmarking”, published at epoch.ai. Licensed under CC BY 4.0.
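
For readers who want to work with these scores programmatically, the short Python sketch below parses the leaderboard table above into (rank, model, score) records and prints the top entries. It assumes the table has been saved to a local file named weirdml_leaderboard.md (a hypothetical filename, not part of the benchmark or of Epoch AI's tooling), and it keeps only rank, model, and score, since standard errors are not reported in this table.

```python
# Illustrative sketch: parse the markdown leaderboard above into records.
# Assumes the table was saved locally as "weirdml_leaderboard.md"
# (hypothetical filename; adjust to wherever you keep the data).
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Entry:
    rank: int
    model: str
    score: float  # WeirdML score; higher is better


def parse_leaderboard(path: str) -> list[Entry]:
    entries: list[Entry] = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        # Split a markdown table row into its cells.
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip the header row, the separator row, and any non-table lines:
        # only data rows start with a numeric rank in the first cell.
        if len(cells) < 3 or not cells[0].isdigit():
            continue
        entries.append(Entry(rank=int(cells[0]), model=cells[1], score=float(cells[2])))
    return entries


if __name__ == "__main__":
    board = parse_leaderboard("weirdml_leaderboard.md")
    for e in board[:5]:
        print(f"{e.rank:>2}  {e.model:<35}  {e.score:.2f}")
```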