Frontier AI systems are advancing rapidly, driven by increases in compute, hardware performance, algorithmic efficiency, and investment. This dashboard explores those dynamics.
What drives AI progress? The story is dominated by scale. Training AI systems with more compute, power, and data has consistently led to better performance. Since 2010, the compute used to train notable AI models has grown by 4.5× per year. Meanwhile, researchers have made the underlying algorithms far more efficient: each year, the same performance can be achieved with roughly one-third of the compute required the year before.
This massive scale-up in training compute comes from three sources: deploying more chips in parallel, running training for longer, and leveraging increasingly powerful AI processors. The consequences are striking. Training costs are climbing by 2.5× annually, while power requirements double each year. Today’s cutting-edge AI training runs consume tens to hundreds of megawatts — comparable to a medium-sized power plant. These trends appear set to continue through 2030.
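As a back-of-the-envelope illustration (not a figure from this dashboard), the headline rates above can be compounded directly: the 4.5× per year compute scale-up and the 3× per year algorithmic efficiency gain multiply into roughly 13.5× more effective training compute per year, and annual power doubling quickly pushes a 100 MW run toward gigawatt scale. The starting power draw and projection horizon below are illustrative assumptions.

```python
# Back-of-the-envelope compounding of the headline trends above.
# The growth rates are the figures quoted in the text; the 100 MW
# starting point and 4-year horizon are illustrative assumptions.

COMPUTE_GROWTH = 4.5   # training compute scale-up, x per year
ALGO_EFFICIENCY = 3.0  # same performance from 3x less compute, per year
POWER_GROWTH = 2.0     # power requirements double each year

def effective_compute_gain(years: float) -> float:
    """Multiplier on 'effective' compute: raw scale-up times algorithmic gains."""
    return (COMPUTE_GROWTH * ALGO_EFFICIENCY) ** years

def projected_power_mw(start_mw: float, years: float) -> float:
    """Project power draw, assuming the doubling trend simply continues."""
    return start_mw * POWER_GROWTH ** years

print(effective_compute_gain(1))   # 13.5x effective compute per year
print(projected_power_mw(100, 4))  # a 100 MW run reaches 1,600 MW in 4 years
```

This is a naive extrapolation; it assumes all three trends continue unchanged and independently.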
LLM inference prices
The cost to run inference on an LLM at a fixed level of performance has fallen rapidly, though unevenly across tasks: roughly halving every 2 months, or about 2 OOMs per year.
Compute stock growth
The total computing power of the stock of NVIDIA chips is growing at 2.3× per year: doubling every 10 months, or 0.36 OOMs per year.
Training compute
Training compute for frontier language models has been growing at 5× per year since 2020: doubling every 5.2 months, or 0.7 OOMs per year.
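The three framings used throughout this dashboard (a multiplier per year, a doubling time, and OOMs per year) are equivalent. A minimal conversion sketch, checked against the training-compute figures above:

```python
import math

def doubling_time_months(factor_per_year: float) -> float:
    """Months needed to double, given a per-year growth multiplier."""
    return 12 * math.log(2) / math.log(factor_per_year)

def ooms_per_year(factor_per_year: float) -> float:
    """Orders of magnitude (powers of 10) gained per year."""
    return math.log10(factor_per_year)

# 5x/year training-compute growth, as quoted above:
print(round(doubling_time_months(5), 1))  # 5.2 months
print(round(ooms_per_year(5), 2))         # 0.7 OOMs per year
```

The same two functions reproduce the other stats on this page, e.g. 2.3×/year for the NVIDIA chip stock implies a roughly 10-month doubling time.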
Algorithmic progress
Pre-training compute efficiency is improving at roughly 3× per year: doubling roughly every 7.6 months, or roughly 0.5 OOMs per year.
Largest AI data center
The largest known AI data center has computing power equivalent to 600,000 NVIDIA H100 chips.
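For a rough sense of scale, the H100-equivalent count can be converted into aggregate throughput. The per-chip figure below is a ballpark for an H100's dense 16-bit throughput (about 1,000 TFLOP/s), assumed here for illustration and not taken from this dashboard.

```python
# Rough aggregate throughput of the largest known AI data center.
# FLOPS_PER_H100 is an assumed ballpark spec, not a dashboard figure.

H100_COUNT = 600_000
FLOPS_PER_H100 = 1e15  # ~1,000 TFLOP/s dense BF16 (assumption)

total_flops = H100_COUNT * FLOPS_PER_H100
print(f"{total_flops:.1e} FLOP/s")  # 6.0e+20 FLOP/s
```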
FLOP/s per dollar
AI chip performance per dollar has improved by 37% per year: doubling every 2.2 years, or 0.14 OOMs per year.