Balrog
Benchmarking Agentic LLM and VLM Reasoning On Games: an evaluation of agentic reasoning in long-horizon game environments
Leaderboard (16 models)

| Rank | Model | Score (%) | Stderr |
|---|---|---|---|
| 1 | Grok 4 | 43.60 | — |
| 2 | Gemini 2.5 Pro (Jun 2025) | 43.30 | — |
| 3 | DeepSeek R1 | 34.90 | — |
| 4 | GPT-5.2 | 32.80 | — |
| 5 | Claude 3.7 Sonnet | 32.60 | — |
| 6 | GPT-4o | 32.30 | — |
| 7 | Grok-3 mini | 29.50 | — |
| 8 | Llama 3.1 405B | 27.90 | — |
| 9 | Llama 3.3 70B | 23.00 | — |
| 10 | Gemini 1.5 Flash | 21.00 | — |
| 11 | DeepSeek V3 | 19.50 | — |
| 12 | Claude 3.5 Haiku | 19.30 | — |
| 13 | Mistral Large | 17.60 | — |
| 14 | GPT-4o mini (Jul 2024) | 17.40 | — |
| 15 | Qwen2.5-Max | 16.20 | — |
| 16 | Phi-4 | 11.60 | — |
Data source: Epoch AI, “Data on AI Benchmarking”, published at epoch.ai.
Licensed under CC BY 4.0.
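
The Stderr column is empty in this snapshot. For readers who want to reproduce that kind of figure themselves, the sketch below shows one plausible aggregation: average per-episode progression within each game environment, then an unweighted mean across environments, with a standard error taken over the environment means. The environment names follow the BALROG suite, but all numbers are illustrative assumptions, and the exact aggregation behind this leaderboard is not confirmed by the source.

```python
import math

# Hypothetical per-episode progression percentages for one model across
# BALROG-style game environments (illustrative values, not Epoch AI data).
episode_scores = {
    "babyai":    [80.0, 100.0, 60.0, 100.0],
    "crafter":   [25.0, 30.0, 20.0, 35.0],
    "textworld": [40.0, 20.0, 30.0, 50.0],
    "babaisai":  [0.0, 12.5, 0.0, 12.5],
    "minihack":  [10.0, 0.0, 20.0, 10.0],
    "nethack":   [1.0, 0.5, 2.0, 1.5],
}

def mean(xs):
    return sum(xs) / len(xs)

# Per-environment average progression.
env_means = {env: mean(scores) for env, scores in episode_scores.items()}

# Overall score: unweighted mean over environments (one common convention;
# whether this leaderboard weights environments equally is an assumption).
overall = mean(list(env_means.values()))

# Standard error of the overall score, taken across environment means.
n = len(env_means)
variance = sum((m - overall) ** 2 for m in env_means.values()) / (n - 1)
stderr = math.sqrt(variance / n)

print(f"overall score: {overall:.2f}%  (stderr {stderr:.2f})")
```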