The Finances of AI

AI is attracting unprecedented levels of investment, with companies spending billions on hardware, data centers, and talent to build frontier models. Costs are rising fast, and the gap between what companies spend and what they earn is a defining dynamic of the industry. Epoch tracks the financial side of AI development: what it costs to train and run models, how much companies are spending and earning, and how hardware prices are trending.

What does the war in Iran mean for AI?
Newsletter
Apr. 10, 2026

A prolonged Hormuz crisis probably won't derail the compute buildout, but it could slow data center expansion and disrupt Gulf investment flows into AI.

By Josh You

Final training runs account for a minority of R&D compute spending
Newsletter
Mar. 23, 2026

New evidence following the MiniMax and Z.ai IPOs

By Jean-Stanislas Denain and Cheryl Wu

Microsoft’s recent $68 billion in physical asset additions was driven by AI-related purchases
Data Insight
Mar. 5, 2026

By Isabel Juniewicz

Hyperscaler capex has quadrupled since GPT-4's release
Data Insight
Feb. 26, 2026

By Isabel Juniewicz

Anthropic could surpass OpenAI in annualized revenue by mid-2026
Data Insight
Feb. 19, 2026

By Luke Emberson and Yafah Edelman

Newsletter
Feb. 16, 2026
How persistent is the inference cost burden?

Toby Ord argues that RL scaling primarily increases inference costs, creating a persistent economic burden. While the framing is useful, the cost to reach a given capability level falls fast, and the RL scaling data is thin.

By Jean-Stanislas Denain

Compute accounts for the majority of expenses of AI companies
Data Insight
Feb. 4, 2026

By Luke Emberson and Yafah Edelman

Can AI companies become profitable?
Newsletter
Jan. 28, 2026

Lessons from GPT-5’s economics

By Jaime Sevilla, Hannah Petrovic, and Anson Ho

Introducing the AI Chip Sales Data Explorer
Update
Jan. 13, 2026

We announce our new AI Chip Sales Data Explorer, which draws on financial reports, company disclosures, and other sources to estimate compute, power usage, and spending over time for a wide variety of AI chips.

By The Epoch AI Team

NVIDIA’s B200 costs around $6,400 to produce, with memory accounting for half
Data Insight
Dec. 10, 2025

By Venkat Somala

OpenAI is projecting unprecedented revenue growth
Newsletter
Oct. 15, 2025

No company has grown from $10B to $100B in revenue as fast as OpenAI projects to.

By Greg Burnham

OpenAI's revenue has been growing 3x a year since 2024
Data Insight
Oct. 14, 2025

By Venkat Somala

Most of OpenAI’s 2024 compute went to experiments
Data Insight
Oct. 10, 2025

By Josh You

What did it take to train Grok 4?
Data Insight
Sep. 12, 2025

By James Sanders, Luke Emberson, and Yafah Edelman

Compute scaling will slow down due to increasing lead times
Newsletter
Sep. 5, 2025

A heavily underappreciated dynamic when thinking about AI timelines.

By Yafah Edelman and Anson Ho

Inference economics of language models
Paper
Jun. 17, 2025

We investigate how speed trades off against cost in language model inference. Among other results, we find that inference latency scales with the square root of model size and the cube root of memory bandwidth.

By Ege Erdil

Acquisition costs of leading AI supercomputers have doubled every 13 months
Data Insight
Jun. 5, 2025

By Konstantin F. Pilz, Robi Rahman, James Sanders, Luke Emberson, and Lennart Heim

The combined revenues of leading AI companies grew by over 9x in 2023-2024
Data Insight
Apr. 3, 2025

By Ben Snodin, David Owen, and Luke Emberson

LLM inference prices have fallen rapidly but unequally across tasks
Data Insight
Mar. 12, 2025

By Ben Cottier, Ben Snodin, David Owen, and Tom Adamczewski

Newsletter
Feb. 14, 2025
Algorithmic progress likely spurs more spending on compute, not less

Algorithmic progress in AI may not reduce compute spending—instead, it could drive higher investment as efficiency unlocks new opportunities.

By Matthew Barnett

Newsletter
Jan. 31, 2025
What went into training DeepSeek-R1?

This Gradient Updates issue explores DeepSeek-R1's architecture, training cost, and pricing, showing how it rivals OpenAI's o1 at 30x lower cost.

By Ege Erdil

Performance per dollar improves around 30% each year
Data Insight
Oct. 23, 2024

By Robi Rahman

Training compute costs are doubling every eight months for the largest AI models
Data Insight
Jun. 19, 2024

By Ben Cottier and Robi Rahman

How much does it cost to train frontier AI models?
Paper
Jun. 3, 2024

The cost of training frontier AI models has grown by a factor of 2 to 3 per year for the past eight years, suggesting that the largest models will cost over a billion dollars by 2027.

By Ben Cottier, Robi Rahman, Loredana Fattorini, Nestor Maslej, and David Owen

Trends in the dollar training cost of machine learning systems
Report
Jan. 31, 2023

I combine training compute and GPU price-performance data to estimate the cost of compute in US dollars for the final training run of 124 machine learning systems published between 2009 and 2022, and find that the cost has grown by approximately 0.5 orders of magnitude per year.

By Ben Cottier

Trends in GPU price-performance
Report
Jun. 27, 2022

Using a dataset of 470 models of graphics processing units released between 2006 and 2021, we find that floating-point operations per second per dollar doubles every ~2.5 years.

By Marius Hobbhahn and Tamay Besiroglu