FrontierMath Competition: Setting Benchmarks for AI Evaluation
We are hosting a competition to establish rigorous human performance baselines for FrontierMath. With a prize pool of over $30,000, your participation will contribute directly to measuring AI progress in solving challenging mathematical problems.
We’re launching a competition to establish rigorous human performance baselines for FrontierMath, our benchmark for evaluating AI mathematical capabilities. The results will provide crucial reference points for measuring AI progress in tackling very difficult mathematics problems.
Competition Overview
- Location: Cambridge, Massachusetts
- Timing: Early 2025 (February/March)
- Format: 3-4 hours solving novel mathematics problems alongside leading mathematicians
- Prize pool: >$30,000 distributed across top performers
- Recognition: Participants acknowledged in FrontierMath baseline publication
Why Participate
This competition offers a unique opportunity to contribute to the measurement of AI progress while competing for substantial prizes. The results will directly inform our understanding of AI capabilities by providing clear human performance benchmarks.
Express Interest
We’re gathering input to help shape the competition format. This is not a formal application but an opportunity to share your preferences on:
- Location and timing
- Competition structure
- Prize pool distribution
Note: Completing this form does not register you for the competition. Formal applications will open once the format is finalized.
Contact us
For questions, email us at math@epoch.ai.