Updated Feb. 5, 2026

Trends in Artificial Intelligence

Frontier AI systems are advancing rapidly, driven by increases in compute, hardware performance, algorithmic efficiency, and investment. This dashboard explores those dynamics.

What drives AI progress? The story is dominated by scale. Training AI systems with more compute, power, and data has consistently led to better performance. Since 2010, the compute used to train notable AI models has increased by 4.5× per year. Meanwhile, researchers have made the underlying algorithms far more efficient: each year, the same performance can be achieved with roughly a third of the compute.
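
Those two rates compound: a year of progress buys both more compute and more performance per unit of compute. Here is a minimal Python sketch; treating the two figures above as multiplicative contributions to "effective compute" is a modeling assumption, not a dashboard figure.

```python
# Minimal sketch: compounding the two trends quoted above. Treating
# compute growth and algorithmic efficiency as multiplicative contributions
# to "effective compute" is a modeling assumption.

COMPUTE_GROWTH_PER_YEAR = 4.5   # physical training compute, from the text
EFFICIENCY_GAIN_PER_YEAR = 3.0  # same performance from ~1/3 the compute

effective_growth = COMPUTE_GROWTH_PER_YEAR * EFFICIENCY_GAIN_PER_YEAR
print(f"Effective compute growth: {effective_growth:.1f}x/year")  # 13.5x/year

# Compounded over a decade:
print(f"Over 10 years: {effective_growth ** 10:.1e}x")  # ~2.0e+11x
```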

This massive scale-up in training compute comes from three sources: deploying more chips in parallel, running training for longer, and leveraging increasingly powerful AI processors. The consequences are striking. Training costs are climbing by 2.5× annually, while power requirements double each year. Today’s cutting-edge AI training runs consume tens to hundreds of megawatts — comparable to a medium-sized power plant. These trends appear set to continue through 2030.
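
As a rough illustration of what continuing through 2030 implies, here is a naive extrapolation of the quoted cost and power trends. The 2024 baselines are illustrative assumptions, not dashboard figures.

```python
# Rough extrapolation of the quoted trends (2.5x/year cost, 2x/year power).
# The 2024 baselines are illustrative assumptions, not dashboard figures.

COST_GROWTH, POWER_GROWTH = 2.5, 2.0
cost_usd = 1e9    # assumed: ~$1B frontier training run in 2024
power_mw = 100.0  # assumed: ~100 MW draw in 2024

for year in range(2024, 2031):
    print(f"{year}: cost ~${cost_usd:.1e}, power ~{power_mw:,.0f} MW")
    cost_usd *= COST_GROWTH
    power_mw *= POWER_GROWTH
```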

LLM inference prices

40×/year decline (halving every ~2 months; 1.6 OOM/year)

The cost to run inference on an LLM at a fixed level of performance has fallen by roughly 40× per year, though unevenly across tasks.

Compute stock growth

2.3×/year (doubling every 10 months; 0.36 OOM/year)

The total computing power of the stock of NVIDIA chips is growing at 2.3× per year, doubling every 10 months.

Training compute

5×/year (doubling every 5.2 months; 0.7 OOM/year)

Training compute for frontier language models has been growing at 5× per year since 2020.

Algorithmic progress

÷3.0×/year (doubling every 7.6 months; 0.5 OOM/year)

Pre-training compute efficiency is improving at roughly 3.0× per year: each year, the same performance can be reached with about a third of the compute.

Largest AI data center

600,000 H100e

The largest known AI data center has computing power equivalent to 600,000 NVIDIA H100 chips.

FLOP/s per dollar

1.37×/year (doubling every 2.2 years; 0.14 OOM/year)

AI chip performance per dollar has improved by about 37% per year, doubling every 2.2 years.
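
The ×/year, doubling-time, and OOM/year values in the cards above are three views of the same exponential. A short Python sketch of the conversion, reproducing the card figures:

```python
import math

# For a growth factor g per year:
#   doubling time (months) = 12 / log2(g)
#   OOMs per year          = log10(g)

def describe(name: str, g: float) -> None:
    print(f"{name}: {g}x/year = doubling every {12 / math.log2(g):.1f} months"
          f" = {math.log10(g):.2f} OOM/year")

describe("Compute stock", 2.3)         # ~10.0 months, 0.36 OOM/year
describe("Training compute", 5.0)      # ~5.2 months, 0.70 OOM/year
describe("Algorithmic progress", 3.0)  # ~7.6 months, 0.48 OOM/year
describe("FLOP/s per dollar", 1.37)    # ~26.4 months (2.2 years), 0.14 OOM/year
describe("Inference price decline", 40.0)  # halving ~2.3 months, 1.60 OOM/year
```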

Model Performance

The cost to run inference on an LLM at a fixed level of performance has fallen rapidly, but unevenly across tasks.

The inference price of LLMs has fallen dramatically in recent years, while performance has rapidly improved. The rate of decline varies widely, with costs falling by between 9× and 900× per year depending on the performance milestone.
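
To make that range concrete, here is a sketch of how a fixed-performance price evolves under the quoted decline rates; the starting price is an illustrative assumption, not a figure from the text.

```python
# Sketch: fixed-performance inference price under the quoted range of
# decline rates. The starting price is an illustrative assumption.

START_PRICE = 10.0  # assumed: $10 per million tokens at year 0

for decline_per_year in (9, 40, 900):
    price_after_2y = START_PRICE / decline_per_year ** 2
    print(f"{decline_per_year:>3}x/year: ${START_PRICE:.2f} -> "
          f"${price_after_2y:.2e} per million tokens after 2 years")
```
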
AI Companies

The total computing power of the stock of NVIDIA chips is growing at a rate of 2.3×/year.

The total computing power of the stock of NVIDIA chips has grown at 2.3× per year since 2019, doubling every 10 months. This estimate is based on NVIDIA chip sales inferred from its data center revenue, combined with estimates of the longevity of AI chips.
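
A simplified sketch of that estimation approach (a real estimate would also weight each chip generation by its performance; all inputs below are illustrative placeholders, not actual sales data):

```python
# Simplified sketch: infer chips sold from data-center revenue, then count
# the stock still within an assumed service life.

quarterly_dc_revenue = [4e9, 6e9, 10e9, 15e9, 18e9, 22e9]  # assumed, USD
AVG_CHIP_PRICE = 25_000   # assumed average accelerator price, USD
LIFETIME_QUARTERS = 20    # assumed ~5-year useful life

chips_sold = [rev / AVG_CHIP_PRICE for rev in quarterly_dc_revenue]
for q in range(len(chips_sold)):
    # Stock = chips sold within the trailing lifetime window.
    stock = sum(chips_sold[max(0, q - LIFETIME_QUARTERS + 1): q + 1])
    print(f"Q{q}: sold {chips_sold[q]:,.0f}, stock ~{stock:,.0f} chips")
```
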
Training Runs

Training compute for frontier language models has been growing at 5× per year since 2020.

The amount of compute used to train frontier language models has grown exponentially: since 2020, training compute among the top-5 models has increased by a factor of roughly 10,000.
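
Those two figures are consistent, as a quick check shows:

```python
import math

# Quick consistency check: at 5x/year, how long does ~10,000x take?
years = math.log(10_000) / math.log(5)
print(f"10,000x at 5x/year takes {years:.1f} years")  # ~5.7 years since 2020
```
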
Data Centers

The largest known AI data center has computing power equivalent to 600,000 NVIDIA H100 chips.

Meta’s Prometheus cluster has an estimated 700 MW of power capacity and $20 billion in capital costs, making it the largest known AI data center. Microsoft’s Fairwater Wisconsin data center is expected to be nearly 10× more powerful, at 5.2M H100-equivalents by September 2027.
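
A back-of-the-envelope sketch of the implied power budget, assuming the quoted capacity and chip count describe the same facility footprint:

```python
# Implied power per H100-equivalent from the Prometheus figures above.
PROMETHEUS_MW = 700
PROMETHEUS_H100E = 600_000

kw_per_h100e = PROMETHEUS_MW * 1_000 / PROMETHEUS_H100E
print(f"~{kw_per_h100e:.2f} kW per H100e (chip plus facility overhead)")

# Applying the same ratio to Fairwater's projected 5.2M H100e is a further
# assumption; newer hardware should be more power-efficient.
FAIRWATER_H100E = 5_200_000
print(f"Fairwater implied: ~{FAIRWATER_H100E * kw_per_h100e / 1_000:,.0f} MW")
```
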
Hardware

AI chip performance per dollar has improved by 37% per year.

The compute performance you can buy for a dollar has improved by roughly 40% per year across over 20 AI accelerators released between 2012 and 2025. Much of this is driven by manufacturers introducing more powerful and more expensive chips — the GB300 costs nearly 9× the P100's release price, but delivers about 24× the performance per dollar.
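
A quick sketch backing out the implied annual rate from that chip pair; the release years are assumptions based on public announcements, while the 24× ratio is from the text above.

```python
# Back out the implied annual improvement from the chip pair quoted above.
# Release years (P100: 2016, GB300: 2025) are assumptions.

PERF_PER_DOLLAR_RATIO = 24
YEARS_BETWEEN = 2025 - 2016

cagr = PERF_PER_DOLLAR_RATIO ** (1 / YEARS_BETWEEN)
print(f"Implied improvement: {(cagr - 1) * 100:.0f}%/year")  # ~42%/year

# At ~9x the price and ~24x the perf-per-dollar, absolute performance
# differs by roughly 9 * 24 = ~216x.
print(f"Absolute performance ratio: ~{9 * PERF_PER_DOLLAR_RATIO}x")
```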
