Epoch AI
@epochai.bsky.social
We are a research institute investigating the trajectory of AI for the benefit of society.

epoch.ai
To learn more or discuss how you can make one of these projects happen, DM us or reach out via donate@epoch.ai.
February 3, 2026 at 7:23 PM
A $100k dedicated contribution would enable us to start any of these investigations, which would be similar in scope & depth to our AI power demand study w/EPRI: epoch.ai/blog/power-...
How much power will frontier AI training demand in 2030?
The power required to train the largest frontier models is growing by more than 2x per year, and is on trend to reach multiple gigawatts by 2030.
epoch.ai
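That trend can be sanity-checked with a quick compound-growth sketch. The starting power level below is an illustrative assumption, not a figure from the study; only the ~2x annual growth rate comes from the post:

```python
# Rough sanity check of the ">2x per year" training power trend.
# ASSUMPTION (illustrative, not from the study): ~50 MW for a
# frontier training run in 2024, doubling each year.
start_year, start_mw = 2024, 50.0
growth = 2.0  # the post says "more than 2x per year"

power_mw = {y: start_mw * growth ** (y - start_year) for y in range(2024, 2031)}
print(power_mw[2030] / 1000, "GW")  # 50 MW * 2^6 = 3200 MW = 3.2 GW
```

Even from a modest assumed baseline, six doublings land in the "multiple gigawatts" range the post describes.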
February 3, 2026 at 7:23 PM
Finally, with dedicated funding we want to investigate whether the cost of inference required to reach a given capability level will keep falling at its recent rapid pace, or whether improvement rates will instead slow or plateau.
February 3, 2026 at 7:22 PM
A third ready-to-go project is forecasting compute production: we'll estimate the amount of compute built worldwide over the next 3-5 years, covering AI chip production volumes, the associated CapEx requirements, and the implications for power infrastructure.
February 3, 2026 at 7:22 PM
A second short research project would map trends in training data supply, composition, & cost—incl. trends in RL environments & synthetic data, plus whether data scarcity is likely to become a binding constraint on current scaling trends.

See our preliminary explorations. epoch.ai/gradient-up...
An FAQ on Reinforcement Learning Environments
We interviewed 18 people across RL environment startups, neolabs, and frontier labs about the state of the field and where it’s headed.
epoch.ai
February 3, 2026 at 7:22 PM
First, we seek dedicated funding to produce a high-resolution picture of AI diffusion, similar to the post below, but using higher-quality datasets, more granular cuts, and geographies beyond the US (incl. China & India), and breaking adoption down further by use case.

epoch.ai/gradient-up...
The changing drivers of LLM adoption
Public data as well as our original polling suggest LLM adoption is roughly on trend, but the underlying drivers are shifting.
epoch.ai
February 3, 2026 at 7:22 PM
A $100k contribution would get us started on any of these projects in a few weeks.

Output: data + a detailed written report.

To learn more or discuss how you can make one of these projects happen, DM us or reach out via donate@epoch.ai. More details on each investigation below 👇
February 3, 2026 at 7:22 PM
Shovel-ready short investigations seek funding!

- How is AI’s adoption varying across roles, sectors, & regions?
- Trends & bottlenecks for data supply? (incl. synthetic and RL environments)
- Forecast for worldwide compute buildout?
- Will inference costs continue falling?
February 3, 2026 at 7:22 PM
Watch the full episode here: youtu.be/jFJku8sxLWY

Full transcript & references available here: epoch.ai/epoch-after...
AI math capabilities could be jagged for a long time – Daniel Litt
Daniel Litt is a professor of mathematics at the University of Toronto. He has been a careful observer of AI’s progress toward accelerating mathematical disc...
www.youtube.com
January 29, 2026 at 8:11 PM
How does math research change when the cost of trying your first dumb idea goes to zero?

University of Toronto mathematician Daniel Litt joins hosts Greg Burnham & Anson Ho to discuss what today’s models can and can’t do in math, and how far they are from doing high-quality research.

Video below!
January 29, 2026 at 8:11 PM
This week’s Gradient Update was written by Jaime Sevilla, Hannah Petrovic, and Anson Ho, in a collaboration between @epochai.bsky.social and @exponentialview.skystack.xyz.
January 29, 2026 at 12:10 AM
All Gradient Updates are informal and opinionated analyses that represent the views of individual authors, not Epoch AI as a whole. You can read the full post here:
epoch.ai/gradient-up...
Can AI companies become profitable?
Lessons from GPT-5’s economics
epoch.ai
January 28, 2026 at 11:20 PM
The upshot: running models is profitable on gross margins, but not operating margins — and even gross profits aren’t high enough to recoup R&D costs. But while models are loss-making today, they may well be very profitable in the future.
January 28, 2026 at 11:20 PM
By these lights, what matters more is growth, and frontier AI companies are certainly growing. Indeed, OpenAI is projecting unprecedented revenue growth.

And they can also turn to other approaches for revenue: ads, enterprise usage, internet penetration, etc.
January 28, 2026 at 11:20 PM
This isn’t necessarily a cause for alarm. AI models don’t need to be profitable today, as long as companies can convince investors to expect profits in the future.

In fact, it’s common for fast-growing tech companies to make early losses in exchange for long-run profits.
January 28, 2026 at 11:20 PM
The core problem: AI R&D is expensive, and model lifecycles are too short to generate enough revenue.

So even if it's profitable to run models, the full lifecycle is likely loss-making, assuming GPT-5 is representative of other models.
January 28, 2026 at 11:20 PM
Even the gross profits from running models weren’t enough to recoup R&D costs.

Gross profits from running GPT-5 were less than OpenAI's R&D costs in the four months before launch. And the true R&D cost was likely higher than that.
January 28, 2026 at 11:20 PM
Was serving GPT-5 profitable?

According to @jsevillamol.bsky.social, @exponentialview.skystack.xyz's Hannah Petrovic, and Anson Ho, it depends. Gross margins were around 45%, making inference look profitable.

But after accounting for the cost of operations, OpenAI likely incurred a loss.👇
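The gross-vs-operating distinction is simple arithmetic. In the sketch below, only the ~45% gross margin comes from the analysis; the revenue and operating-cost figures are placeholders chosen to illustrate how a positive gross margin can coexist with an operating loss:

```python
# Gross vs. operating margin with ILLUSTRATIVE numbers (only the
# ~45% gross margin is from the post; the rest are placeholders).
revenue = 100.0        # hypothetical inference revenue, $M
compute_cost = 55.0    # cost of serving the model -> 45% gross margin
operating_cost = 50.0  # hypothetical salaries, sales, overhead, $M

gross_margin = (revenue - compute_cost) / revenue
operating_margin = (revenue - compute_cost - operating_cost) / revenue
print(f"gross {gross_margin:.0%}, operating {operating_margin:.0%}")
# gross 45%, operating -5%
```

In this toy setup the model is "profitable" on gross margins yet loss-making once operations are counted, which is the shape of the claim in the post.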
January 28, 2026 at 11:20 PM
Visit our website to explore the benchmark, including problem write-ups, precise prompts you can try out, and AI attempts so far.

epoch.ai/frontiermat...
FrontierMath: Open Problems
A collection of unsolved mathematical problems designed to test AI systems’ ability to advance human mathematical knowledge.
epoch.ai
January 27, 2026 at 4:34 PM
Indeed, we hope to see strong attempts to get AI systems to solve these problems. Anyone can pose the problems to AI systems and inspect the results. Let us know if you get anything promising!
January 27, 2026 at 4:34 PM
We tried GPT-5.2 Pro and Gemini 3 Deep Think on the problems. They solved easier variants where a solution is known, but didn’t crack any of the unsolved cases.

At least, not yet. There is a lot more to try, including prompting, scaffolding, and scaling test-time compute.
January 27, 2026 at 4:34 PM
We’re also commissioning more problems. You can propose a problem here:

docs.google.com/forms/d/e/1...
January 27, 2026 at 4:34 PM
This pilot was supported by a grant from Schmidt Sciences. Problem statements are public, and we offer access to the verifier programs for a fee. Proceeds will help fund expansions to the benchmark. Contact math@epoch.ai if interested.
January 27, 2026 at 4:34 PM
We didn’t select the problems to be hard for AI. It’s enough that they are hard for humans: solving any one of them would meaningfully advance human knowledge. If AI can do that, so be it.

Near term, the easier ones may be within reach. The harder ones, probably not.
January 27, 2026 at 4:34 PM
So that we can evaluate at scale, problems are designed to be verifiable with ordinary computer programs (no LLM-as-judge, no Lean). Most problems ask for constructions or algorithms.
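To illustrate what "verifiable with ordinary computer programs" means, here is a toy verifier for a construction-style task. This is NOT one of the benchmark's problems; it only shows the shape of a program-based check, with no LLM judging involved:

```python
# Toy program-based verifier (NOT an actual FrontierMath problem):
# "construct a 3x3 magic square using the numbers 1..9".
# The verifier just checks the submitted construction.
def verify(square: list[list[int]]) -> bool:
    nums = [x for row in square for x in row]
    if sorted(nums) != list(range(1, 10)):
        return False  # must use exactly 1..9
    target = 15
    rows = [sum(row) for row in square]
    cols = [sum(col) for col in zip(*square)]
    diags = [sum(square[i][i] for i in range(3)),
             sum(square[i][2 - i] for i in range(3))]
    return all(s == target for s in rows + cols + diags)

print(verify([[2, 7, 6], [9, 5, 1], [4, 3, 8]]))  # True
```

A real benchmark verifier works the same way in spirit: the AI submits a construction or algorithm, and a deterministic program checks it against the problem's conditions.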

Today we are releasing a pilot set of 14 problem statements. Here are two:
January 27, 2026 at 4:34 PM