AI Benchmarking
Our database of benchmark results tracks the performance of leading AI models on challenging tasks. It combines results from benchmarks evaluated internally by Epoch AI with data collected from external sources. Explore trends in AI capabilities over time, by benchmark, or by model.
Benchmarking updates
February 13, 2026
We analyzed three benchmarks (RLI, GDPval, and APEX-Agents) designed to test AI's ability to do economically valuable digital work.
Read the report
February 6, 2026
Opus 4.6 matched the top score on FrontierMath Tiers 1-3, marking the first time an Anthropic model has been at the frontier of this benchmark.
See the results
January 27, 2026
We released FrontierMath: Open Problems, a benchmark that tests AI on unsolved research problems in mathematics.
Discover the benchmark