Jaime Sevilla

Jaime Sevilla is the co-founder and director of Epoch AI. His research focuses on technological forecasting and the trajectory of AI. He has a background in mathematics and computer science.

jaime@epoch.ai

Can AI companies become profitable?
Newsletter
Jan. 28, 2026

Lessons from GPT-5’s economics

By Jaime Sevilla, Hannah Petrovic, and Anson Ho

How far can decentralized training over the internet scale?
Newsletter
Dec. 29, 2025

Decentralized training promises to scale training runs to the limits of the internet.

By Jaime Sevilla

Could decentralized training solve AI’s power problem?
Report
Oct. 28, 2025

We illustrate a decentralized 10 GW training run across a dozen sites spanning thousands of kilometers. Developers are likely to scale datacenters to multi-gigawatt levels before adopting decentralized training.

By Jaime Sevilla and Anton Troynikov

How many digital workers could OpenAI deploy?
Newsletter
Oct. 3, 2025

OpenAI has the inference compute to deploy tens of millions of digital workers, but only on a narrow set of tasks – for now.

By Jean-Stanislas Denain, Anson Ho, and Jaime Sevilla

Why GPT-5 used less training compute than GPT-4.5 (but GPT-6 probably won’t)
Newsletter
Sep. 26, 2025

OpenAI focused on scaling post-training on a smaller model

By Yafah Edelman, Jean-Stanislas Denain, Jaime Sevilla, and Anson Ho

Forecasting AI progress until 2040
Podcast
Sep. 4, 2025

Epoch AI researchers Jaime Sevilla and Yafah Edelman forecast AI progress to 2040: coding automation, 10% GDP growth, and wild uncertainty after 2035.

By Jaime Sevilla and Yafah Edelman

What is Epoch?
Update
Jun. 5, 2025

Our director explains Epoch AI’s mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decisions about AI.

By Jaime Sevilla

Clarifying the creation and use of the FrontierMath benchmark
Update
Jan. 23, 2025

We clarify that OpenAI commissioned Epoch AI to produce 300 math questions for the FrontierMath benchmark. They own these and have access to the statements and solutions, except for a 50-question holdout set.

By Tamay Besiroglu and Jaime Sevilla

AI in 2030, scaling bottlenecks, and explosive growth
Podcast
Jan. 17, 2025

Epoch AI presents its first podcast, exploring AI scaling trends: power demands, chip production, and data needs, and how continued progress could transform labor markets and potentially accelerate global economic growth to unprecedented levels.

By Jaime Sevilla, Tamay Besiroglu, and Ege Erdil

Can AI scaling continue through 2030?
Report
Aug. 20, 2024

We investigate the scalability of AI training runs. We identify electric power, chip manufacturing, data, and latency as constraints. We conclude that 2e29 FLOP training runs will likely be feasible by 2030.

By Jaime Sevilla, Tamay Besiroglu, Ben Cottier, Josh You, Edu Roldán, Pablo Villalobos, and Ege Erdil

Will we run out of data? Limits of LLM scaling based on human-generated data
Paper
Jun. 6, 2024

We estimate the effective stock of quality- and repetition-adjusted human-generated public text for AI training at around 300 trillion tokens. If trends continue, language models will fully utilize this stock between 2026 and 2032, or even earlier if intensely overtrained.

By Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, and Marius Hobbhahn

Training compute of frontier AI models grows by 4-5x per year
Report
May 28, 2024

Our expanded AI model database shows that the compute used to train recent models grew 4-5x yearly from 2010 to May 2024. We find similar growth in frontier models, recent large language models, and models from leading companies.

By Jaime Sevilla and Edu Roldán

Algorithmic progress in language models
Paper
Mar. 12, 2024

Progress in pretrained language model performance outpaces what we'd expect from increased compute alone, with algorithmic improvements equivalent to doubling computational power every 5 to 14 months.

By Anson Ho, Tamay Besiroglu, Ege Erdil, David Owen, Robi Rahman, Zifan Carl Guo, David Atkinson, Neil Thompson, and Jaime Sevilla

Please report your compute
Viewpoint
Apr. 26, 2023

Compute is essential for AI performance, but researchers often fail to report it. Adopting reporting norms would support research, enhance forecasts of AI’s impacts and developments, and assist policymakers.

By Jaime Sevilla, Anson Ho, and Tamay Besiroglu

Power laws in speedrunning and machine learning
Paper
Apr. 21, 2023

We develop a model for predicting record improvements in video game speedrunning and apply it to predicting machine learning benchmarks. This model suggests that machine learning benchmarks are not close to saturation, and that large sudden improvements are infrequent, but not ruled out.

By Ege Erdil and Jaime Sevilla

An interactive model of AI takeoff speeds
Update
Jan. 24, 2023

We have developed an interactive website showcasing a new model of AI takeoff speeds.

By Jaime Sevilla and Edu Roldán

Literature review of transformative artificial intelligence timelines
Report
Jan. 17, 2023

We summarize and compare several models and forecasts predicting when transformative AI will be developed.

By Keith Wynroe, David Atkinson, and Jaime Sevilla

Will we run out of ML data? Evidence from projecting dataset size trends
Paper
Nov. 10, 2022

Based on our previous analysis of trends in dataset size, we project the growth of dataset size in the language and vision domains. We explore the limits of this trend by estimating the total stock of available unlabeled data over the next decades.

By Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, and Anson Ho

The longest training run
Report
Aug. 17, 2022

Training runs of large machine learning systems are likely to last less than 14-15 months. This is because longer runs will be outcompeted by runs that start later and therefore use better hardware and better algorithms.

By Jaime Sevilla, Tamay Besiroglu, Owen Dudney, and Anson Ho

A time-invariant version of Laplace’s rule
Report
Jul. 15, 2022

We explore how to estimate the probability of an event given information of past occurrences. We explain a problem with the naive application of Laplace’s rule in this context, and suggest a modification to correct it.

By Jaime Sevilla and Ege Erdil
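
The granularity problem with the naive rule can be sketched numerically. Applying Laplace's rule sequentially to the same observation window chopped into finer sub-trials yields different forecasts for the same future span, which is the inconsistency the report addresses. This is a minimal illustration of the problem, not the report's proposed correction; the function name and numbers are ours.

```python
# Naive Laplace rule: after 0 successes in m trials,
# P(success on next trial) = 1 / (m + 2).
# Here we show the forecast depends on how finely time is sliced.

def p_success_next_span(n_periods: int, k: int) -> float:
    """P(at least one success in the next original-length period),
    after observing zero successes over n_periods, when each period
    is subdivided into k sub-trials and Laplace's rule is applied
    sequentially to each sub-trial."""
    m = n_periods * k  # sub-trials observed so far, all failures
    p_none = 1.0
    for i in range(k):
        # P(no success on next sub-trial | m + i failures so far)
        p_none *= (m + i + 1) / (m + i + 2)
    return 1.0 - p_none

# Same evidence, different granularities, different forecasts:
for k in (1, 10, 100):
    print(k, p_success_next_span(10, k))
```

The product telescopes, so with 10 failure-only periods the forecast rises from 1/12 at k=1 toward 1/11 as k grows: the answer depends on an arbitrary choice of time granularity.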

Machine learning model sizes and the parameter gap
Paper
Jul. 5, 2022

Since 2018, the model size of notable machine learning systems has grown ten times faster than before. Growth after 2020 has not been entirely continuous: there was a jump of one order of magnitude, which persists today. This is relevant for forecasting model size and thus AI capabilities.

By Pablo Villalobos, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, Anson Ho, and Marius Hobbhahn

Projecting compute trends in machine learning
Report
Mar. 7, 2022

Projecting forward 70 years' worth of trends in the amount of compute used to train machine learning models.

By Tamay Besiroglu, Lennart Heim, and Jaime Sevilla

Compute trends across three eras of machine learning
Paper
Updated May 2, 2022

We’ve compiled a dataset of the training compute for over 120 machine learning models, highlighting novel trends and insights into the development of AI since 1952, and what to expect going forward.

By Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos

Estimating training compute of deep learning models
Report
Jan. 20, 2022

We describe two approaches for estimating the training compute of deep learning systems: counting operations, and looking at GPU time.

By Jaime Sevilla, Lennart Heim, Marius Hobbhahn, Tamay Besiroglu, Anson Ho, and Pablo Villalobos
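
The two approaches can be sketched as back-of-the-envelope estimators. This is a sketch in the spirit of the report, using the widely cited approximation of ~6 FLOP per parameter per token for dense transformers (2 for the forward pass, 4 for the backward pass); the function names, example figures, and utilization rate are our assumptions, not values from the report.

```python
def compute_from_operations(n_params: float, n_tokens: float) -> float:
    """Operation counting: training cost of a dense transformer is
    commonly approximated as ~6 FLOP per parameter per token."""
    return 6.0 * n_params * n_tokens

def compute_from_gpu_time(n_gpus: int, seconds: float,
                          peak_flops: float, utilization: float) -> float:
    """GPU time: hardware count times wall-clock time times peak
    throughput, discounted by realized utilization of peak."""
    return n_gpus * seconds * peak_flops * utilization

# Hypothetical example: a 70B-parameter model trained on 1.4T tokens
print(f"{compute_from_operations(70e9, 1.4e12):.1e} FLOP")
```

In practice the operation-counting estimate is used when architecture and dataset size are reported, and the GPU-time estimate when only the hardware setup and training duration are known.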

What’s the backward-forward FLOP ratio for neural networks?
Report
Dec. 13, 2021

Determining the backward-forward FLOP ratio for neural networks, to help calculate their total training compute.

By Marius Hobbhahn and Jaime Sevilla

Parameter counts in machine learning
Report
Jun. 19, 2021

Compiling a large dataset of machine learning models to determine changes in the parameter counts of systems since 1952.

By Jaime Sevilla, Pablo Villalobos, and Juan Felipe Cerón