Transparency

We are a 501(c)(3) nonprofit committed to proactive transparency.

Funders

As an independent nonprofit, we rely primarily on the ongoing support of donors to continue our research. In addition to the contributions listed below, Epoch AI has received smaller donations from individual supporters. Here, we list only donations of $70,000 USD or more.

We value every contribution as a vote of confidence in our work and vision, and we pledge to use it to further our mission of building a better shared understanding of AI. If you would like further information about specific funding needs, please don’t hesitate to contact us at donate@epoch.ai.

Donate to Epoch AI

Coefficient Giving

Jaan Tallinn

Likith Govindaiah

Leopold Aschenbrenner

Sentinel Bio

Carl Shulman

Schmidt Sciences

Collaborations and consultations

We work with leading organizations to produce impactful research that advances the public’s understanding of AI. For simplicity, we may omit engagements under $30,000 USD. Considering working with Epoch AI?

Learn more about our services

EU AI Office

Google DeepMind

METR (Model Evaluation and Threat Research)

Blitzy Inc.

Bridgewater Associates AIA Labs

xAI

Sequoia Capital Global Equities

UK Department for Science, Innovation and Technology

EPRI

OpenAI

Advanced Research + Invention Agency

Anthropic

AI Index

FAQ

Do you lobby or advocate for specific policies?

No.

Epoch does not advocate for any particular stance on AI policy, and we do not engage in lobbying. While our staff have their own (diverse) opinions on how AI should be handled, we see Epoch as providing a unique service of informing the public with trustworthy data and evidence about AI without pushing for a specific agenda.

We have partnered with government agencies worldwide, and we see this as an important part of our mission: informing governments of the state of the art in AI so they can enact wiser policies. We might point out the consequences of a policy, while adhering to our usual standards of rigor and transparency. For example, we might publish estimates of the number of developers affected by a compute threshold regulation, and point out that keeping the scope limited will require raising the threshold. But Epoch AI does not make policy recommendations.

Are you for or against AI progress?

Our staff, like the AI community more broadly, is split on whether advancing AI will ultimately be good for the world. As an organization, we are decidedly neutral on this question. We work on projects that, in different capacities, advance or slow down AI development. These projects are chosen because their primary purpose is to advance public understanding of this technology.

Many of our research projects may help advance the state of the art in artificial intelligence. We partnered with OpenAI to create the leading AI benchmark in mathematics, we have gone to great lengths to study bottlenecks to AI scaling, and we have advanced research on AI scaling laws. However, our goal is not to contribute to AI progress per se. In choosing to work on these projects, we are prioritizing our mission of improving societal understanding of the trajectory of AI.

How do you decide who to work with?

Most of our funding comes from grants and donations. However, we also receive funding from, and maintain contractual relationships with, a number of organizations that either work on or are affected by AI. These include large AI companies (e.g. Google), government agencies (e.g. the UK Department for Science, Innovation and Technology), organizations in adjacent sectors (e.g. the Electric Power Research Institute), and companies that are affected by AI (including hardware, energy, and investment firms, consultancies, and others).

In choosing who to work with, we ask ourselves the following questions:

Will our partner be making important decisions that affect the trajectory of AI? We prefer to partner with organizations that are making important decisions in AI, especially in government. These partners provide us with useful feedback on what is most important to work on, and working with them directly advances our mission of informing high-stakes decisions on AI.

Will the project help us gain a deeper understanding of AI? We have an opinionated view on which questions are most important to work on, which has prompted us to investigate emerging trends early, such as inference-time scaling and data scarcity. In our partnerships, and in our work more generally, we choose questions that we believe are important for the future of AI.

Will the work be released publicly? We prioritize projects with public outputs that advance the understanding of AI. By publishing our work, we reach a wider audience and have a greater impact, including on audiences we would not have anticipated. This is not always possible — for example, when consulting for governments on sensitive topics — but these are exceptions, and we strive to find acceptable compromises.

Will we have full editorial independence over publishable outputs? We care deeply about always being able to say what we believe to be true, transparently and freely, independently of who funded the research.

Will we be able to learn from our partner? We prioritize working with partners whom we can learn from, and who can provide good feedback on our work. For example, we partnered with EPRI to produce more accurate work on energy demand for AI.

Are we striking a good deal? We aim to charge prices at least on par with those of industry consultants so that we aren't inadvertently subsidizing the work of our partners. Any profits we make are fully reinvested into our mission, mainly by subsidizing our public research.

Throughout these partnerships, we are committed to a high level of transparency, including disclosing any research sponsorship and data access agreements with industry.

Do you invest in AI?

We invest part of our funds in semiconductor and AI stocks as part of a diversified portfolio. Any gains from such investments increase our capacity to advance our mission in the scenarios where our work matters most.