Our weekly newsletter, offering shorter, more opinionated commentary on important issues around AI. Unlike our formal reports and papers, these posts reflect the views of their authors and are not necessarily endorsed by Epoch as a whole.
Newsletter
Apr. 10, 2026
What does the war in Iran mean for AI?
By Josh You
Apr. 7, 2026
Keeping up with the GPTs
By Anson Ho
Mar. 24, 2026
What do frontier AI companies' job postings reveal about their plans?
By Jean-Stanislas Denain and Campbell Hutcheson
Mar. 23, 2026
Final training runs account for a minority of R&D compute spending
By Jean-Stanislas Denain and Cheryl Wu
Feb. 25, 2026
The least understood driver of AI progress
By Anson Ho
Feb. 16, 2026
How persistent is the inference cost burden?
By Jean-Stanislas Denain
Feb. 6, 2026
How close is AI to taking my job?
By Anson Ho
Jan. 28, 2026
Can AI companies become profitable?
By Jaime Sevilla, Hannah Petrovic, and Anson Ho
Jan. 16, 2026
How well did forecasters predict 2025 AI progress?
By Anson Ho
Jan. 12, 2026
An FAQ on Reinforcement Learning Environments
By Jean-Stanislas Denain and Chris Barber
Dec. 29, 2025
How far can decentralized training over the internet scale?
By Jaime Sevilla
Dec. 23, 2025
Why benchmarking is hard
By Florian Brand and Jean-Stanislas Denain
Dec. 19, 2025
The changing drivers of LLM adoption
By Jean-Stanislas Denain and Anson Ho
Dec. 17, 2025
Is almost everyone wrong about America’s AI power problem?
By Anson Ho, Yafah Edelman, Josh You, and Jean-Stanislas Denain
Nov. 20, 2025
Benchmark Scores = General Capability + Claudiness
By Greg Burnham
Nov. 14, 2025
The software intelligence explosion debate needs experiments
By Anson Ho and Parker Whitfill
Oct. 17, 2025
Less than 70% of FrontierMath is within reach for today’s models
By Greg Burnham
Oct. 15, 2025
OpenAI is projecting unprecedented revenue growth
By Greg Burnham
Oct. 3, 2025
How many digital workers could OpenAI deploy?
By Jean-Stanislas Denain, Anson Ho, and Jaime Sevilla
Sep. 26, 2025
Why GPT-5 used less training compute than GPT-4.5 (but GPT-6 probably won’t)
By Yafah Edelman, Jean-Stanislas Denain, Jaime Sevilla, and Anson Ho
Sep. 19, 2025
The huge potential implications of long-context inference
By Jean-Stanislas Denain and Anson Ho
Sep. 11, 2025
Three challenges facing compute-based AI policies
By Venkat Somala, Anson Ho, and Séb Krier
Sep. 5, 2025
Compute scaling will slow down due to increasing lead times
By Yafah Edelman and Anson Ho
Aug. 22, 2025
Why future AI agents will be trained to work together
By Anson Ho and Jean-Stanislas Denain
Aug. 7, 2025
We didn’t learn much from the IMO
By Greg Burnham
Aug. 2, 2025
Quantifying the algorithmic improvement from reasoning models
By Anson Ho and Arden Berg
Jul. 26, 2025
Why China isn’t about to leap ahead of the West on compute
By Veronika Blablová and Robi Rahman
Jul. 17, 2025
After the ChatGPT moment: Measuring AI’s adoption
By Arden Berg and Anson Ho
Jul. 8, 2025
What will the IMO tell us about AI math capabilities?
By Greg Burnham
Jul. 2, 2025
How big could an “AI Manhattan Project” get?
By Arden Berg and Anson Ho
Jun. 20, 2025
AI and explosive growth redux
By Andrei Potlogea and Anson Ho
Jun. 13, 2025
Do the biorisk evaluations of AI labs actually measure the risk of developing bioweapons?