Gradient Updates

Gradient Updates is our weekly newsletter, offering shorter, more opinionated commentary on important issues in AI. Unlike our formal reports and papers, these posts reflect the views of their authors and are not necessarily endorsed by Epoch as a whole.

Newsletter
Apr. 10, 2026
What does the war in Iran mean for AI?

By Josh You

Newsletter
Apr. 7, 2026
Keeping up with the GPTs

By Anson Ho

Newsletter
Mar. 24, 2026
What do frontier AI companies' job postings reveal about their plans?

By Jean-Stanislas Denain and Campbell Hutcheson

Newsletter
Mar. 23, 2026
Final training runs account for a minority of R&D compute spending

By Jean-Stanislas Denain and Cheryl Wu

Newsletter
Feb. 25, 2026
The least understood driver of AI progress

By Anson Ho

Newsletter
Feb. 16, 2026
How persistent is the inference cost burden?

By Jean-Stanislas Denain

Newsletter
Feb. 6, 2026
How close is AI to taking my job?

By Anson Ho

Newsletter
Jan. 28, 2026
Can AI companies become profitable?

By Jaime Sevilla, Hannah Petrovic, and Anson Ho

Newsletter
Jan. 16, 2026
How well did forecasters predict 2025 AI progress?

By Anson Ho

Newsletter
Jan. 12, 2026
An FAQ on Reinforcement Learning Environments

By Jean-Stanislas Denain and Chris Barber

Newsletter
Dec. 29, 2025
How far can decentralized training over the internet scale?

By Jaime Sevilla

Newsletter
Dec. 23, 2025
Why benchmarking is hard

By Florian Brand and Jean-Stanislas Denain

Newsletter
Dec. 19, 2025
The changing drivers of LLM adoption

By Jean-Stanislas Denain and Anson Ho

Newsletter
Dec. 17, 2025
Is almost everyone wrong about America’s AI power problem?

By Anson Ho, Yafah Edelman, Josh You, and Jean-Stanislas Denain

Newsletter
Nov. 20, 2025
Benchmark Scores = General Capability + Claudiness

By Greg Burnham

Newsletter
Nov. 14, 2025
The software intelligence explosion debate needs experiments

By Anson Ho and Parker Whitfill

Newsletter
Oct. 17, 2025
Less than 70% of FrontierMath is within reach for today’s models

By Greg Burnham

Newsletter
Oct. 15, 2025
OpenAI is projecting unprecedented revenue growth

By Greg Burnham

Newsletter
Oct. 3, 2025
How many digital workers could OpenAI deploy?

By Jean-Stanislas Denain, Anson Ho, and Jaime Sevilla

Newsletter
Sep. 26, 2025
Why GPT-5 used less training compute than GPT-4.5 (but GPT-6 probably won’t)

By Yafah Edelman, Jean-Stanislas Denain, Jaime Sevilla, and Anson Ho

Newsletter
Sep. 19, 2025
The huge potential implications of long-context inference

By Jean-Stanislas Denain and Anson Ho

Newsletter
Sep. 11, 2025
Three challenges facing compute-based AI policies

By Venkat Somala, Anson Ho, and Séb Krier

Newsletter
Sep. 5, 2025
Compute scaling will slow down due to increasing lead times

By Yafah Edelman and Anson Ho

Newsletter
Aug. 22, 2025
Why future AI agents will be trained to work together

By Anson Ho and Jean-Stanislas Denain

Newsletter
Aug. 7, 2025
We didn’t learn much from the IMO

By Greg Burnham

Newsletter
Aug. 2, 2025
Quantifying the algorithmic improvement from reasoning models

By Anson Ho and Arden Berg

Newsletter
Jul. 26, 2025
Why China isn’t about to leap ahead of the West on compute

By Veronika Blablová and Robi Rahman

Newsletter
Jul. 17, 2025
After the ChatGPT moment: Measuring AI’s adoption

By Arden Berg and Anson Ho

Newsletter
Jul. 8, 2025
What will the IMO tell us about AI math capabilities?

By Greg Burnham

Newsletter
Jul. 2, 2025
How big could an “AI Manhattan Project” get?

By Arden Berg and Anson Ho

Newsletter
Jun. 20, 2025
AI and explosive growth redux

By Andrei Potlogea and Anson Ho

Newsletter
Jun. 13, 2025
Do the biorisk evaluations of AI labs actually measure the risk of developing bioweapons?

By Anson Ho and Arden Berg

Newsletter
Jun. 6, 2025
Beyond benchmark scores: Analyzing o3-mini’s mathematical reasoning

By Anson Ho, Jean-Stanislas Denain, and Elliot Glazer

Newsletter
May 30, 2025
GPQA Diamond: What’s left?

By Greg Burnham

Newsletter
May 23, 2025
Is AI already superhuman on FrontierMath?

By Anson Ho

Newsletter
May 16, 2025
How fast can algorithms advance capabilities?

By Henry Josephson

Newsletter
May 9, 2025
How far can reasoning models scale?

By Josh You

Newsletter
May 2, 2025
Where’s my ten minute AGI?

By Anson Ho

Newsletter
Apr. 26, 2025
The case for multi-decade AI timelines

By Ege Erdil

Newsletter
Mar. 28, 2025
The real reason AI benchmarks haven’t reflected economic impacts

By Anson Ho and Jean-Stanislas Denain

Newsletter
Mar. 21, 2025
Most AI value will come from broad automation, not from R&D

By Ege Erdil and Matthew Barnett

Newsletter
Mar. 7, 2025
What AI can currently do is not the story

By Ege Erdil

Newsletter
Feb. 28, 2025
The promise of reasoning models

By Matthew Barnett

Newsletter
Feb. 21, 2025
AI progress is about to speed up

By Ege Erdil

Newsletter
Feb. 14, 2025
Algorithmic progress likely spurs more spending on compute, not less

By Matthew Barnett

Newsletter
Feb. 7, 2025
How much energy does ChatGPT use?

By Josh You

Newsletter
Jan. 31, 2025
What went into training DeepSeek-R1?

By Ege Erdil

Newsletter
Jan. 24, 2025
AGI could drive wages below subsistence level

By Matthew Barnett

Newsletter
Jan. 17, 2025
How has DeepSeek improved the Transformer architecture?

By Ege Erdil

Newsletter
Jan. 10, 2025
The economic consequences of automating remote work

By Matthew Barnett

Newsletter
Dec. 27, 2024
Moravec’s paradox and its implications

By Ege Erdil

Newsletter
Dec. 20, 2024
How do mixture-of-experts models compare to dense models in inference?

By Ege Erdil

Newsletter
Dec. 13, 2024
Frontier language models have become much smaller

By Ege Erdil

Newsletter
Dec. 6, 2024
What did US export controls mean for China’s AI capabilities?

By Ege Erdil