Announcing Gradient Updates: Our New Weekly Newsletter
We are announcing Gradient Updates, our new weekly newsletter focused on timely and important questions in AI.
Last Friday, we released the first issue of our new weekly newsletter, Gradient Updates, led and mainly written by senior researcher Ege Erdil. Each issue will offer in-depth commentary on timely and enduring questions in AI. Rather than delivering a roundup of the week’s headlines, Gradient Updates focuses on a single, carefully chosen topic each week. For instance, our inaugural issue examined the impact of U.S. export controls on Chinese AI capabilities.
With Gradient Updates, we aim to share insights and explorations that are less formal than a full-length paper or technical report, but more substantial than typical industry news briefs. You won’t find the latest investments or product releases covered here; instead, expect content that grapples with broader themes—from the economic implications of vertical disintegration within the AI sector to the potential of synthetic data and test-time compute scaling to surpass current pretraining limitations.
You can read our first two issues now, and subscribe to receive new issues as they’re published.
Issue #1: What did US export controls mean for China’s AI capabilities? In this issue, Ege explains how US export controls on high-performance GPUs have affected the ability of Chinese companies to develop and deploy AI models.
Issue #2: Frontier language models have become much smaller. In this issue, Ege discusses how we know that current frontier models are much smaller than GPT-4, and why labs have not released larger models.