# Winogrande

A large-scale Winograd Schema Challenge dataset, used as an anchor benchmark for the ECI (Epoch Capabilities Index) calculation.
## Leaderboard (21 models)

| Rank | Model | Score (%) | Stderr |
|---|---|---|---|
| 1 | Llama 3.1 405B | 89.20 | — |
| 2 | Claude 3 Opus | 88.50 | — |
| 3 | GPT-4.1 | 87.50 | — |
| 4 | falcon-180B | 87.10 | — |
| 5 | DeepSeek V3 | 86.30 | — |
| 6 | Meta-Llama-3-8B-Instruct | 83.50 | — |
| 7 | Qwen 2.5 72B | 82.30 | — |
| 8 | gpt-3.5-turbo-1106 | 81.60 | — |
| 9 | Phi-3-medium-128k-instruct | 81.50 | — |
| 10 | Phi-3-small-8k-instruct | 81.50 | — |
| 11 | Qwen2.5-Max | 80.80 | — |
| 12 | Llama-2-70b-hf | 80.20 | — |
| 13 | gemma-7b | 79.00 | — |
| 14 | Mixtral-8x7B-v0.1 | 77.20 | — |
| 15 | Llama-2-7b | 76.70 | — |
| 16 | Mistral-7B-v0.1 | 75.30 | — |
| 17 | Claude 3.7 Sonnet | 75.10 | — |
| 18 | Phi-4 | 73.40 | — |
| 19 | Yi-6B | 73.00 | — |
| 20 | Phi-3-mini-4k-instruct | 70.80 | — |
| 21 | GPT-OSS 120B | 66.10 | — |
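The Stderr column above is empty, but for an accuracy-style benchmark like Winogrande a standard error can be approximated from the score and the number of evaluated items via the binomial formula. A minimal sketch, assuming a hypothetical evaluation-set size (`n_items` is an assumption, not from the source):

```python
import math

def accuracy_stderr(score_pct: float, n_items: int) -> float:
    """Binomial standard error of an accuracy score, in percentage points.

    score_pct: accuracy as a percentage (e.g. 89.20)
    n_items: number of evaluated questions (assumed, not given in the table)
    """
    p = score_pct / 100.0
    return 100.0 * math.sqrt(p * (1.0 - p) / n_items)

# Hypothetical example: a score of 89.20% on an assumed 1,267-item split.
print(round(accuracy_stderr(89.20, 1267), 2))
```

This is only a sketch; the actual standard errors would depend on the exact evaluation split and methodology used by the source.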
Data source: Epoch AI, "Data on AI Benchmarking", published at epoch.ai. Licensed under CC-BY 4.0.