OpenBookQA
Open-book question answering requiring common knowledge
Leaderboard (14 models)

| Rank | Model | Score | Stderr |
|---|---|---|---|
| 1 | Phi-3-mini-4k-instruct | 88.00 | — |
| 2 | Phi-3-small-8k-instruct | 88.00 | — |
| 3 | Phi-3-medium-128k-instruct | 87.40 | — |
| 4 | gpt-3.5-turbo-1106 | 86.00 | — |
| 5 | Mixtral-8x7B-v0.1 | 85.80 | — |
| 6 | Meta-Llama-3-8B-Instruct | 82.60 | — |
| 7 | Mistral-7B-v0.1 | 79.80 | — |
| 8 | gemma-7b | 78.60 | — |
| 9 | Phi-4 | 73.60 | — |
| 10 | falcon-180B | 64.20 | — |
| 11 | Llama 3.1 405B | 60.20 | — |
| 12 | Llama-2-70b-hf | 60.20 | — |
| 13 | Llama-2-7b | 58.60 | — |
| 14 | GPT-OSS 120B | 38.80 | — |
Data source: Epoch AI, "Data on AI Benchmarking", published at epoch.ai. Licensed under CC-BY 4.0.
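
For readers who want to rebuild or extend a table like this from the underlying dataset, the sketch below shows one way to filter and rank benchmark results with pandas. The file name `benchmark_results.csv` and the column names `model`, `benchmark`, and `score` are assumptions for illustration only; they are not the actual schema of the Epoch AI release.

```python
# Minimal sketch: build an OpenBookQA leaderboard from a results CSV.
# Assumptions: a local file "benchmark_results.csv" with columns
# "model", "benchmark", and "score" (these names are hypothetical).
import pandas as pd

df = pd.read_csv("benchmark_results.csv")

# Keep only OpenBookQA rows and sort by score, best first.
openbookqa = df[df["benchmark"] == "OpenBookQA"]
leaderboard = (
    openbookqa.sort_values("score", ascending=False)
    .reset_index(drop=True)
)
leaderboard.index += 1  # 1-based rank, matching the table above

# Render as a markdown table (requires the optional "tabulate" package).
print(leaderboard[["model", "score"]].to_markdown())
```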