Overview

Our AI Chip Components hub estimates what share of global advanced semiconductor manufacturing capacity was consumed each quarter by the leading AI accelerator designers — Nvidia, AMD, Google, and Amazon — from Q1 2024 through Q4 2025.

We focus on three supply chain components:

  1. Advanced-node logic wafers: 3–5 nm wafers fabricated at TSMC (N3/N3E/N3P and N5/4N/4NP nodes), which form the silicon dies inside AI accelerators.
  2. CoWoS advanced packaging wafers: TSMC’s chip-on-wafer-on-substrate packaging used to integrate multiple chiplets and HBM stacks onto a single substrate.
  3. High-Bandwidth Memory (HBM): Specialized DRAM stacks placed next to the logic die inside the accelerator package to feed data to the compute cores.

To keep demand and supply comparable, we attribute component demand to the quarter in which those inputs were consumed, rather than the quarter in which finished chips were sold.
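This attribution rule amounts to shifting each sale back by a production lead time. A minimal sketch of the idea, using a hypothetical two-quarter lag (the actual lags vary by component and are part of our model, not shown here):

```python
def shift_quarter(year: int, quarter: int, lag_quarters: int) -> tuple[int, int]:
    """Shift a (year, quarter) pair back by a number of quarters.

    Attributes component consumption to the quarter the inputs entered
    the fab or packaging line, rather than the quarter the finished chip
    was sold. The lag value below is illustrative only.
    """
    idx = year * 4 + (quarter - 1) - lag_quarters
    return idx // 4, idx % 4 + 1

# A chip sold in Q1 2025, under an assumed two-quarter lead time,
# consumed its wafers and HBM in Q3 2024.
assert shift_quarter(2025, 1, 2) == (2024, 3)
```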

Motivation

AI chip counts alone do not show which parts of the semiconductor supply chain are binding. Two accelerators can have very different implications for HBM, advanced packaging, and leading-edge logic capacity depending on their memory configuration, die size, packaging technology, and yield profile. This hub aims to make those differences explicit.

The framework supports several research questions:

  • Component shares by designer — which designer is absorbing the largest share of each constrained component, and how that’s changing over time.
  • Granular view of bottlenecks — when CoWoS or HBM is acutely constrained, which quarters tighten or loosen, and which designers absorb the marginal supply.
  • Component cost composition — how AI chip component spend splits across logic, packaging, memory, and auxiliary components, and how that mix has shifted.
  • Cross-checks on chip-sales estimates — comparing our implied component consumption against analyst and industry estimates of supply provides a validation point on our upstream chip-sales model.
  • Policy analysis — recent H200 export licensing policy requires showing that exports to China will not reduce semiconductor capacity available to U.S. customers. By translating chip volumes into HBM, CoWoS, and logic wafer consumption, this framework lets us evaluate that condition against the global supply pool.
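The translation from chip volumes into component consumption can be sketched as a per-accelerator bill of materials applied to unit volumes. The figures below are hypothetical placeholders, not values from our dataset; real numbers vary by chip design, die size, and yield:

```python
# Hypothetical per-accelerator bill of materials (illustrative only).
BOM = {
    "hbm_stacks": 8,                  # HBM stacks per accelerator
    "logic_dies_per_wafer": 60,       # good dies per 300 mm logic wafer, yield included
    "packages_per_cowos_wafer": 30,   # finished packages per CoWoS wafer
}

def component_demand(units: int, bom: dict) -> dict:
    """Translate accelerator unit volume into upstream component demand."""
    return {
        "hbm_stacks": units * bom["hbm_stacks"],
        "logic_wafers": units / bom["logic_dies_per_wafer"],
        "cowos_wafers": units / bom["packages_per_cowos_wafer"],
    }

demand = component_demand(600_000, BOM)
# 600k accelerators imply 4.8M HBM stacks, 10k logic wafers, 20k CoWoS wafers
# under these assumed BOM figures.
```

Summing such demands across designers and comparing them against estimated quarterly supply of each component yields the shares reported in the hub.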

Access the Data

The data can be viewed on our website as a visualization or table, and downloaded as a ZIP file.

If you would like to ask any questions about the data, or suggest improvements, feel free to contact us at data@epoch.ai.

Use This Work

Epoch’s data is free to use, distribute, and reproduce under the Creative Commons Attribution license, provided the source and authors are credited.

Citation

Epoch AI, 'Data on AI Chip Components'. Published online at epoch.ai. Retrieved from 'https://epoch.ai/data/ai-chip-components-documentation' [online resource]. Accessed 15 May 2026.

BibTeX Citation

@misc{EpochAIChipComponents2026,
  title = {{Data on AI Chip Components}},
  author = {{Epoch AI}},
  year = {2026},
  month = {4},
  url = {https://epoch.ai/data/ai-chip-components-documentation},
  note = {Accessed: 15 May 2026}
}