Data insights
Epoch AI’s data insights break down complex AI trends into focused, digestible snapshots. Explore topics like training compute, hardware advancements, and AI training costs in a clear and accessible format.

LLM inference prices have fallen rapidly but unequally across tasks

Leading AI chip designs are used for around 4 years in frontier training

Biology AI models are scaling 2-4x per year after rapid growth from 2019-2021

The stock of computing power from NVIDIA chips is doubling every 10 months
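As a rough illustrative conversion (not a figure quoted in the insight itself), a 10-month doubling time corresponds to annual growth of roughly

$$2^{12/10} \approx 2.3\times \text{ per year}$$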

Over 20 AI models have been trained at the scale of GPT-4

Chinese language models have scaled up more slowly than their global counterparts

Frontier open models may surpass 10²⁶ FLOP of training compute before 2026

Training compute growth is driven by larger clusters, longer training, and better hardware

US models currently outperform non-US models

Models with downloadable weights currently lag behind the top-performing models

Accuracy increases with estimated training compute

AI training cluster sizes have grown more than 20x since 2016

Performance per dollar improves around 30% each year

The computational performance of machine learning hardware has doubled every 2 years

The NVIDIA A100 has been the most popular hardware for training notable machine learning models

Performance improves 12x when switching from FP32 to tensor-INT8

Leading ML hardware becomes 40% more energy-efficient each year

Leading AI companies have hundreds of thousands of cutting-edge AI chips

The power required to train frontier AI models is doubling annually

The length of time spent training notable models is growing

Language models compose the large majority of large-scale AI models

Most large-scale models are developed by US companies

The pace of large-scale model releases is accelerating

Almost half of large-scale models have published, downloadable weights

The size of datasets used to train language models doubles approximately every eight months

Training compute costs are doubling every nine months for the largest AI models

The training compute of notable AI models is doubling roughly every five months
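For scale, a roughly five-month doubling time implies that training compute grows by about

$$2^{12/5} \approx 5.3\times \text{ per year}$$

This conversion is illustrative, derived from the stated doubling time rather than quoted directly from the insight.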

Training compute has scaled up faster for language models than for vision models