Anson Ho

Anson Ho is a researcher at Epoch AI. He is interested in helping develop a more rigorous understanding of future developments in AI and its societal impacts.

anson@epoch.ai

Newsletter
Apr. 7, 2026
Keeping up with the GPTs

Can Chinese and open-model companies compete with the frontier through, e.g., distillation and talent?

By Anson Ho

Newsletter
Feb. 25, 2026
The least understood driver of AI progress

An opinionated guide to “algorithmic progress” and why it matters

By Anson Ho

Newsletter
Feb. 6, 2026
How close is AI to taking my job?

Beyond benchmarks as leading indicators for task automation

By Anson Ho

Podcast
Jan. 29, 2026
AI math capabilities could be jagged for a long time – Daniel Litt

In this episode, Daniel Litt chats with the hosts about AI’s limits in mathematics, accelerating math research, and how to measure progress on open problems.

By Daniel Litt, Greg Burnham, and Anson Ho

Newsletter
Jan. 28, 2026
Can AI companies become profitable?

Lessons from GPT-5’s economics

By Jaime Sevilla, Hannah Petrovic, and Anson Ho

Newsletter
Jan. 16, 2026
How well did forecasters predict 2025 AI progress?

Mostly right about benchmarks, mixed results on real-world impacts

By Anson Ho

Newsletter
Dec. 19, 2025
The changing drivers of LLM adoption

Public data as well as our original polling suggest LLM adoption is roughly on trend, but the underlying drivers are shifting.

By Jean-Stanislas Denain and Anson Ho

Podcast
Dec. 18, 2025
The EU and the not-so-simple macroeconomics of AI – Luis Garicano

In this episode, economist Luis Garicano chats with the hosts about macroeconomic and labor market effects of AI, with a focus on the EU.

By Luis Garicano, Andrei Potlogea, and Anson Ho

Newsletter
Dec. 17, 2025
Is almost everyone wrong about America’s AI power problem?

Why power is less of a bottleneck than you think.

By Anson Ho, Yafah Edelman, Josh You, and Jean-Stanislas Denain

Paper
Dec. 2, 2025
A Rosetta Stone for AI benchmarks

Most benchmarks saturate too quickly to study long-run AI trends. We solve this using a statistical framework that stitches benchmarks together, with big implications for algorithmic progress and AI forecasting.

By Anson Ho, Jean-Stanislas Denain, David Atanasov, Samuel Albanie, and Rohin Shah

Newsletter
Nov. 14, 2025
The software intelligence explosion debate needs experiments

The existing debate rests on data and assumptions that are shakier than most people realize. To make progress, we need better evidence, and experiments are the best way to get it on the margin.

By Anson Ho and Parker Whitfill

Newsletter
Oct. 3, 2025
How many digital workers could OpenAI deploy?

OpenAI has the inference compute to deploy tens of millions of digital workers, but only on a narrow set of tasks – for now.

By Jean-Stanislas Denain, Anson Ho, and Jaime Sevilla

Podcast
Oct. 1, 2025
What does economics actually tell us about AGI? – Phil Trammell

Stanford economist Phil Trammell joins Epoch AI to explore AGI, growth, GDP limits, and what economic theory can tell us about the future of AI.

By Anson Ho and Phil Trammell

Newsletter
Sep. 26, 2025
Why GPT-5 used less training compute than GPT-4.5 (but GPT-6 probably won’t)

OpenAI focused on scaling post-training on a smaller model

By Yafah Edelman, Jean-Stanislas Denain, Jaime Sevilla, and Anson Ho

Newsletter
Sep. 19, 2025
The huge potential implications of long-context inference

Continual learning, scaling RL, and research feedback loops

By Jean-Stanislas Denain and Anson Ho

Newsletter
Sep. 11, 2025
Three challenges facing compute-based AI policies

“Training compute” is constantly evolving, and compute-based AI policies must adapt to remain relevant.

By Venkat Somala, Anson Ho, and Séb Krier

Newsletter
Sep. 5, 2025
Compute scaling will slow down due to increasing lead times

A heavily underappreciated dynamic when thinking about AI timelines.

By Yafah Edelman and Anson Ho

Newsletter
Aug. 22, 2025
Why future AI agents will be trained to work together

Many multi-agent setups are based on fancy prompts, but this is unlikely to persist

By Anson Ho and Jean-Stanislas Denain

Newsletter
Aug. 2, 2025
Quantifying the algorithmic improvement from reasoning models

Reasoning models were as big an improvement as the Transformer, at least on some benchmarks

By Anson Ho and Arden Berg

Newsletter
Jul. 17, 2025
After the ChatGPT moment: Measuring AI’s adoption

How quickly has AI been diffusing through the economy?

By Arden Berg and Anson Ho

Newsletter
Jul. 2, 2025
How big could an “AI Manhattan Project” get?

An AI Manhattan Project could accelerate compute scaling by two years.

By Arden Berg and Anson Ho

Newsletter
Jun. 20, 2025
AI and explosive growth redux

The GATE model shows that AI-driven growth surges more easily than expected and supports much larger investments, making a case for moderate optimism.

By Andrei Potlogea and Anson Ho

Newsletter
Jun. 13, 2025
Do the biorisk evaluations of AI labs actually measure the risk of developing bioweapons?

Assessing whether AI labs' biorisk evaluations effectively measure models' potential to enable amateur bioweapons development.

By Anson Ho and Arden Berg

Newsletter
Jun. 6, 2025
Beyond benchmark scores: Analyzing o3-mini’s mathematical reasoning

Examining o3-mini's math reasoning: an erudite, vibes-based solver that excels in knowledge but lacks precision, creativity, and formal human rigor.

By Anson Ho, Jean-Stanislas Denain, and Elliot Glazer

Newsletter
May 23, 2025
Is AI already superhuman on FrontierMath?

How do humans and AIs compare on FrontierMath? We ran a competition at MIT to put this to the test.

By Anson Ho

Newsletter
May 2, 2025
Where’s my ten minute AGI?

Why don't AIs automate more real-world tasks if they can handle 1-hour ones? Anson Ho explores key capability and context bottlenecks.

By Anson Ho

Newsletter
Mar. 28, 2025
The real reason AI benchmarks haven’t reflected economic impacts

The real reason AI benchmarks haven’t historically reflected real-world impacts is that they weren’t optimized for this, not that there are fundamental limitations. But this might be changing.

By Anson Ho and Jean-Stanislas Denain

Report
Dec. 4, 2024
What is the future of AI in mathematics? Interviews with leading mathematicians

How will AI transform mathematics? Fields Medalists and other leading mathematicians discuss whether they expect AI to automate advanced math research.

By Anson Ho and Tamay Besiroglu

Paper
Jun. 6, 2024
Will we run out of data? Limits of LLM scaling based on human-generated data

We estimate the effective stock of quality- and repetition-adjusted human-generated public text for AI training at around 300 trillion tokens. If trends continue, language models will fully utilize this stock between 2026 and 2032, or even earlier if intensely overtrained.

By Pablo Villalobos, Anson Ho, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, and Marius Hobbhahn

Paper
May 17, 2024
Do the returns to software R&D point towards a singularity?

The returns to R&D are crucial in determining the dynamics of growth, and potentially the pace of AI development. Our new paper offers new empirical techniques and estimates for this key parameter.

By Tamay Besiroglu, Ege Erdil, and Anson Ho

Paper
Mar. 12, 2024
Algorithmic progress in language models

Progress in pretrained language model performance exceeds what we’d expect from increased computing resources alone, at a pace equivalent to doubling computational power every 5 to 14 months.

By Anson Ho, Tamay Besiroglu, Ege Erdil, David Owen, Robi Rahman, Zifan Carl Guo, David Atkinson, Neil Thompson, and Jaime Sevilla

Paper
Dec. 15, 2023
Limits to the energy efficiency of CMOS microprocessors

How far can the energy efficiency of CMOS microprocessors be pushed before we hit physical limits? Using a simple model, we find that there is room for a further 50 to 1000x improvement in energy efficiency.

By Anson Ho, Ege Erdil, and Tamay Besiroglu

Viewpoint
Apr. 26, 2023
Please report your compute

Compute is essential for AI performance, but researchers often fail to report it. Adopting reporting norms would support research, enhance forecasts of AI’s impacts and developments, and assist policymakers.

By Jaime Sevilla, Anson Ho, and Tamay Besiroglu

Paper
Nov. 10, 2022
Will we run out of ML data? Evidence from projecting dataset size trends

Based on our previous analysis of trends in dataset size, we project the growth of dataset size in the language and vision domains. We explore the limits of this trend by estimating the total stock of available unlabeled data over the next decades.

By Pablo Villalobos, Jaime Sevilla, Lennart Heim, Tamay Besiroglu, Marius Hobbhahn, and Anson Ho

Report
Sep. 20, 2022
Trends in training dataset sizes

We collected a database of notable ML models and their training dataset sizes. We use this database to find historical growth trends in dataset size for different domains, particularly language and vision.

By Pablo Villalobos and Anson Ho

Report
Aug. 17, 2022
The longest training run

Training runs of large machine learning systems are likely to last less than 14-15 months. This is because longer runs will be outcompeted by runs that start later and therefore use better hardware and better algorithms.

By Jaime Sevilla, Tamay Besiroglu, Owen Dudney, and Anson Ho

Paper
Jul. 5, 2022
Machine learning model sizes and the parameter gap

Since 2018, the model size of notable machine learning systems has grown ten times faster than before. Growth after 2020 has not been entirely continuous: there was a jump of one order of magnitude that persists to today. This is relevant for forecasting model size and thus AI capabilities.

By Pablo Villalobos, Jaime Sevilla, Tamay Besiroglu, Lennart Heim, Anson Ho, and Marius Hobbhahn

Report
Jun. 13, 2022
Grokking “Semi-informative priors over AI timelines”

I give visual explanations for Tom Davidson’s report, Semi-informative priors over AI timelines, and summarise the key assumptions and intuitions.

By Anson Ho

Report
Jun. 6, 2022
Grokking “Forecasting TAI with biological anchors”

I give a visual explanation of Ajeya Cotra’s draft report, Forecasting TAI with biological anchors, summarising the key assumptions, intuitions, and conclusions.

By Anson Ho

Paper
Updated May 2, 2022
Compute trends across three eras of machine learning

We’ve compiled a dataset of the training compute for over 120 machine learning models, highlighting novel trends and insights into the development of AI since 1952, and what to expect going forward.

By Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos

Report
Jan. 20, 2022
Estimating training compute of deep learning models

We describe two approaches for estimating the training compute of deep learning systems: counting operations and looking at GPU time.

By Jaime Sevilla, Lennart Heim, Marius Hobbhahn, Tamay Besiroglu, Anson Ho, and Pablo Villalobos