Benchmarking updates

February 20, 2026
We've updated our methodology for running agentic coding benchmarks.
Learn about the update
February 13, 2026
We analyzed three benchmarks (RLI, GDPval, and APEX-Agents) designed to test AI’s ability to do economically valuable digital work.
Read the report
January 27, 2026
We released FrontierMath: Open Problems, which tests AI on unsolved math research problems.
Discover the benchmark