BBH
BIG-Bench Hard: 23 challenging tasks requiring multi-step reasoning
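The aggregation behind a single BBH number is not stated on this page; a common convention, and the one assumed in the sketch below, is the unweighted (macro) average of per-task accuracy across the 23 subtasks. The task names shown are real BBH subtasks, but the accuracy values are placeholders, not results from the leaderboard.

```python
# Hypothetical aggregation sketch: a model's BBH score is often reported as the
# unweighted mean of its per-task accuracy over the 23 subtasks.
from statistics import mean

per_task_accuracy = {
    "boolean_expressions": 0.88,  # placeholder value
    "causal_judgement": 0.62,     # placeholder value
    "date_understanding": 0.75,   # placeholder value
    # ... remaining BBH subtasks would be listed here ...
}

# Macro-average, scaled to a percentage as in the leaderboard table.
bbh_score = mean(per_task_accuracy.values()) * 100
print(f"BBH (macro-average): {bbh_score:.2f}")
```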
Leaderboard (16 models)

| Rank | Model | Score (%) | Stderr |
|---|---|---|---|
| 1 | DeepSeek V3 | 87.50 | — |
| 2 | Llama 3.1 405B | 82.90 | — |
| 3 | Phi-3-medium-128k-instruct | 81.40 | — |
| 4 | Qwen 2.5 72B | 79.80 | — |
| 5 | Phi-3-small-8k-instruct | 79.10 | — |
| 6 | GPT-4.1 | 75.12 | — |
| 7 | Phi-3-mini-4k-instruct | 71.70 | — |
| 8 | Llama-2-70b-hf | 64.90 | — |
| 9 | gpt-3.5-turbo-1106 | 61.59 | — |
| 10 | Phi-4 | 59.40 | — |
| 11 | Llama-2-7b | 58.50 | — |
| 12 | Mistral-7B-v0.1 | 56.10 | — |
| 13 | gemma-7b | 55.10 | — |
| 14 | Qwen 3 235B | 55.00 | — |
| 15 | Yi-6B | 47.20 | — |
| 16 | falcon-180B | 37.10 | — |
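For working with the leaderboard programmatically, a minimal sketch below parses the rows into a pandas DataFrame. The inline rows are copied from the table above and truncated for brevity; the column names are my own, not Epoch AI's schema.

```python
# A minimal sketch: parse the markdown-style leaderboard rows into pandas.
import io
import pandas as pd

markdown_rows = """\
1 | DeepSeek V3 | 87.50
2 | Llama 3.1 405B | 82.90
3 | Phi-3-medium-128k-instruct | 81.40
"""  # ... remaining rows elided for brevity ...

df = pd.read_csv(
    io.StringIO(markdown_rows),
    sep=r"\s*\|\s*",
    engine="python",  # regex separators require the python parsing engine
    names=["rank", "model", "score_pct"],
)

# Example query: models above 80% on BBH.
print(df[df["score_pct"] > 80.0])
```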
Data source: Epoch AI, “Data on AI Benchmarking”, published at epoch.ai. Licensed under CC BY 4.0.