Gray Bender
@graybender.bsky.social
Climate and Energy Transition at S&P Global. Prior research affiliate with the Harvard Belfer Center’s Geopolitics of Energy and Harvard Kennedy School alum. All opinions my own.
7/ So where does this leave tech & energy companies betting big on AI-driven power demand?

That’s the billion-dollar question. And right now, no one has a clear answer.

Curious to learn more? Check out the full story in this week's Global Energy Lens:
An AI Breakthrough That Could Upend the Energy Landscape
Plus Europe debates the future of Russian gas, why the US is seeking a deal on Ukraine critical minerals, how the energy transition connects to political backlash in Norway, and new sanctions on Iran
climateandenergygeopolitics.substack.com
February 7, 2025 at 5:20 PM
6/ The real takeaway?

🔸 AI’s energy trajectory is uncertain.

🔸 If efficiency gains outpace adoption, demand could level off, shifting energy market expectations while easing the clean energy transition.

🔸 But if AI workloads explode? We’re still looking at an energy crunch.
February 7, 2025 at 5:20 PM
5/ Second, inference energy use.

🔹 DeepSeek’s R1 is a reasoning model, which means it works through a longer chain of logic than a standard generative model. Per MIT Tech Review, it can consume up to 87% more energy than Meta’s Llama model to reach a final answer. The savings in training may not carry over to inference.
February 7, 2025 at 5:20 PM
4/ But there’s a catch—two, actually.

🔹 First, Jevons paradox. As Microsoft CEO Satya Nadella noted, efficiency often leads to higher consumption. If AI gets cheaper to train, demand could surge, wiping out the energy savings (rough numbers in the sketch below).
February 7, 2025 at 5:20 PM
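To make the Jevons point concrete, here is a toy back-of-the-envelope calculation in Python. The numbers are entirely made up for illustration, not a forecast; the point is only that a 10x efficiency gain can be swamped if usage grows even faster.

```python
# Toy illustration of Jevons paradox with made-up numbers (not a forecast):
# training gets 10x more energy-efficient, but cheaper training spurs 15x more
# training runs, so total energy use still goes up.
energy_per_run_before, runs_before = 100.0, 10      # arbitrary energy units
energy_per_run_after,  runs_after  = 10.0, 150      # 10x more efficient, 15x more runs

print(energy_per_run_before * runs_before)          # 1000.0 -> total before
print(energy_per_run_after * runs_after)            # 1500.0 -> total after, 50% higher
```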
3/ If true, this is a potential game-changer for AI’s energy consumption.

The AI industry has been driving record-breaking power deals, with fears that data centers will overwhelm the grid. Now? DeepSeek suggests AI can scale with far less power.
February 7, 2025 at 5:20 PM
2/ Meet DeepSeek.

Last week, it unveiled an AI model that matches top competitors while using just 1/10th of the computing power.

How? By refining the "mixture of experts" technique, which activates only a small fraction of the model for any given input, slashing the compute needed to train it (rough sketch below).
February 7, 2025 at 5:20 PM
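For readers who want to see the idea in code, below is a minimal, hypothetical sketch of a mixture-of-experts layer in PyTorch. It is not DeepSeek's architecture; the dimensions, expert count, and top-2 routing are placeholder choices. What it shows is the core trick: a small router sends each token to only a couple of experts, so most of the layer's parameters do no work for any given input.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Illustrative mixture-of-experts layer (not DeepSeek's implementation)."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)    # scores every expert for each token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                             # x: (num_tokens, dim)
        scores = self.router(x)                       # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, -1)    # keep only the top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e              # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)                          # a toy batch of 16 token vectors
print(TinyMoE()(tokens).shape)                        # torch.Size([16, 64])
```

Because only top_k of num_experts experts run for each token, compute per token scales with the number of active experts rather than with the full parameter count, which is where the training-cost savings come from.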