AI data centers

Training and running frontier AI models requires enormous physical infrastructure: warehouses packed with specialized chips consuming as much power as small cities. The largest are being built at extraordinary speed, going from empty land to operational in under two years. Using satellite imagery and permit data, Epoch tracks the scale and growth of AI data centers and supercomputers, from build times and costs to compute capacity and power demands.

OpenAI Stargate: where the US sites stand
Report
Apr. 17, 2026

The $500 billion AI data center initiative is projected to exceed 9 gigawatts of capacity by 2029, with 0.3 gigawatts already operational in Abilene, Texas, and six more US sites under active construction.

By Elliot Stewart and Ben Cottier

Five hyperscalers now own over two-thirds of global AI compute
Data Insight
Apr. 14, 2026

By Luke Emberson, Josh You, and Venkat Somala

What does the war in Iran mean for AI?
Newsletter
Apr. 10, 2026

A prolonged crisis in the Strait of Hormuz probably won't derail the compute buildout, but it could slow data center expansion and disrupt Gulf investment flows into AI.

By Josh You

Google controls the most AI computing power, driven by its custom TPUs
Data Insight
Apr. 7, 2026

By Luke Emberson, Josh You, and Venkat Somala

Global AI power capacity is now comparable to peak power usage of New York State
Data Insight
Jan. 16, 2026

By Yafah Edelman, Josh You, Venkat Somala, and Luke Emberson

How far can decentralized training over the internet scale?
Newsletter
Dec. 29, 2025

Decentralized training promises to scale training runs to the limits of the internet itself.

By Jaime Sevilla

GPUs account for about 40% of power usage in AI data centers
Data Insight
Dec. 18, 2025

By Luke Emberson and Ben Cottier
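
As a back-of-envelope illustration of what a ~40% GPU power share implies, here is a minimal Python sketch; the fleet size and per-GPU wattage below are illustrative assumptions, not figures from the insight.

```python
# Illustrative only: back-calculating facility-level power demand from the
# GPU fleet, given that GPUs draw roughly 40% of an AI data center's power.
GPU_COUNT = 100_000        # assumed fleet size
WATTS_PER_GPU = 700        # assumed per-GPU draw (H100-class TDP)
GPU_POWER_SHARE = 0.40     # headline estimate from the insight above

gpu_power_mw = GPU_COUNT * WATTS_PER_GPU / 1e6
facility_power_mw = gpu_power_mw / GPU_POWER_SHARE
print(f"GPU power:      {gpu_power_mw:,.0f} MW")       # 70 MW
print(f"Facility power: {facility_power_mw:,.0f} MW")  # 175 MW; the remainder
# goes to CPUs, networking, storage, and cooling.
```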

Is almost everyone wrong about America’s AI power problem?
Newsletter
Dec. 17, 2025

Why power is less of a bottleneck than you think.

By Anson Ho, Yafah Edelman, Josh You, and Jean-Stanislas Denain

Today’s largest data center can do more than 20 GPT-4-scale training runs each month
Data Insight
Dec. 4, 2025

By Jaeho Lee
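
The headline figure can be sanity-checked with rough arithmetic. The sketch below uses Epoch's ~2e25 FLOP estimate for GPT-4's training compute; the chip count, per-chip throughput, and utilization are assumptions for illustration, not the insight's actual inputs.

```python
# Rough arithmetic behind "20+ GPT-4-scale training runs per month".
GPT4_FLOP = 2e25               # ~Epoch's estimate of GPT-4 training compute
SECONDS_PER_MONTH = 30 * 86_400

CHIPS = 400_000                # assumed accelerator count
PEAK_FLOP_PER_CHIP = 1e15      # assumed peak throughput (H100-class, dense BF16)
UTILIZATION = 0.4              # assumed model FLOP utilization

effective_flops = CHIPS * PEAK_FLOP_PER_CHIP * UTILIZATION
runs_per_month = effective_flops * SECONDS_PER_MONTH / GPT4_FLOP
print(f"~{runs_per_month:.0f} GPT-4-scale runs per month")  # ~21
```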

Microsoft’s Fairwater datacenter will use more power than Los Angeles
Data Insight
Nov. 26, 2025

By Jaeho Lee

The largest AI data center campuses will soon be a fifth the size of Manhattan
Data Insight
Nov. 19, 2025

By Ben Cottier

Build times for gigawatt-scale data centers can be 2 years or less
Data Insight
Nov. 10, 2025

By Venkat Somala and Ben Cottier

Introducing the Frontier Data Centers Hub
Update
Nov. 4, 2025

We announce the Frontier Data Centers Hub, a database that uses satellite imagery and permit data to track large AI data centers' compute capacity, power use, and construction timelines.

By The Epoch AI Team

What you need to know about AI data centers
Topic Overview
Nov. 4, 2025

AI companies are planning a buildout of data centers that will rank among the largest infrastructure projects in history. We examine their power demands, what makes AI data centers special, and what all this means for AI policy and the future of AI.

By Ben Cottier and Yafah Edelman

Could decentralized training solve AI’s power problem?
Report
Oct. 28, 2025

We illustrate a decentralized 10 GW training run across a dozen sites spanning thousands of kilometers. Developers are likely to scale data centers to multi-gigawatt levels before adopting decentralized training.

By Jaime Sevilla and Anton Troynikov
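
A minimal sketch of what the report's illustrative scenario implies per site; the all-in per-accelerator wattage is an assumption for illustration, not a figure from the report.

```python
# Splitting the report's illustrative 10 GW training run across a dozen sites.
TOTAL_GW = 10
SITES = 12
WATTS_PER_CHIP_ALL_IN = 1_500  # assumed draw per accelerator incl. cooling/overhead

gw_per_site = TOTAL_GW / SITES
chips_per_site = gw_per_site * 1e9 / WATTS_PER_CHIP_ALL_IN
print(f"{gw_per_site:.2f} GW and ~{chips_per_site:,.0f} accelerators per site")
# ~0.83 GW per site: smaller than the largest planned campuses, but every
# site must stay tightly synchronized over long-haul links.
```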

How big could an “AI Manhattan Project” get?
Newsletter
Jul. 2, 2025

An AI Manhattan Project could accelerate compute scaling by two years.

By Arden Berg and Anson Ho

Acquisition costs of leading AI supercomputers have doubled every 13 months
Data Insight
Jun. 5, 2025

By Konstantin F. Pilz, Robi Rahman, James Sanders, Luke Emberson, and Lennart Heim
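
Doubling times like this one convert directly to annualized growth via 2^(12/m) for a doubling time of m months; a small sketch, applicable to any doubling time reported on this page.

```python
# Convert a doubling time (in months) to the implied growth multiple per year.
def annual_growth(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

print(f"13-month doubling -> {annual_growth(13):.2f}x per year")  # ~1.90x
print(f" 9-month doubling -> {annual_growth(9):.2f}x per year")   # ~2.52x
```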

The US hosts the majority of GPU cluster performance, followed by China
Data Insight
Jun. 5, 2025

By Konstantin F. Pilz, Robi Rahman, James Sanders, Luke Emberson, and Lennart Heim

Private-sector companies own a dominant share of GPU clusters
Data Insight
Jun. 5, 2025

By Konstantin F. Pilz, Robi Rahman, James Sanders, Luke Emberson, and Lennart Heim

Power requirements of leading AI supercomputers have doubled every 13 months
Data Insight
Jun. 5, 2025

By Konstantin F. Pilz, Robi Rahman, James Sanders, Luke Emberson, and Lennart Heim

The computational performance of leading AI supercomputers has doubled every nine months
Data Insight
Updated Jun. 5, 2025

By Konstantin F. Pilz, Robi Rahman, James Sanders, Luke Emberson, and Lennart Heim

Trends in AI supercomputers
Paper
Apr. 23, 2025

AI supercomputers double in performance every 9 months, cost billions of dollars, and require as much power as mid-sized cities. Companies now own 80% of all AI supercomputers, while governments’ share has declined.

By Konstantin F. Pilz, Robi Rahman, James Sanders, and Lennart Heim

AI training cluster sizes increased by more than 20x since 2016
Data Insight
Oct. 23, 2024

By Robi Rahman

Can AI scaling continue through 2030?
Report
Aug. 20, 2024

We investigate the scalability of AI training runs. We identify electric power, chip manufacturing, data, and latency as constraints. We conclude that 2e29 FLOP training runs will likely be feasible by 2030.

By Jaime Sevilla, Tamay Besiroglu, Ben Cottier, Josh You, Edu Roldán, Pablo Villalobos, and Ege Erdil
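
For a sense of scale, a hedged sketch comparing a 2e29 FLOP run to GPT-4 (~2e25 FLOP, per Epoch's estimate) and converting it to an implied power draw; the run length and the effective FLOP-per-watt figure are assumptions, not estimates from the report.

```python
# Scale of a 2e29 FLOP run relative to GPT-4.
TARGET_FLOP = 2e29
GPT4_FLOP = 2e25
print(f"Scale-up over GPT-4: {TARGET_FLOP / GPT4_FLOP:,.0f}x")  # 10,000x

# Implied power for a ~100-day run; efficiency is an assumed 2030-era figure
# for delivered (not peak) FLOP/s per watt, including facility overhead.
RUN_DAYS = 100
EFFECTIVE_FLOP_PER_WATT = 4e12
required_gw = TARGET_FLOP / (RUN_DAYS * 86_400 * EFFECTIVE_FLOP_PER_WATT * 1e9)
print(f"Implied power draw: ~{required_gw:.0f} GW")  # ~6 GW
```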