Ben Harack
@benharack.com
I study the potential for artificial intelligence to trigger a world war and how to prevent that from happening. Currently I'm finishing my DPhil at Oxford and working with @aigioxfordmartin.bsky.social.

benharack.com

he/him
I've normally seen this kind of thing coded as one of the non-extinction outcomes that are possible within the broader existential risk definition that I think Bostrom introduced. For example, AIs enforcing ideological homogeneity.
July 13, 2025 at 10:07 PM
I can only speak by reference to some of his written work. What I've read so far has been worth the read.
July 13, 2025 at 4:47 AM
You might be interested in work from Anton Korinek and his coauthors. I also recommend checking out Epoch (epoch.ai). And if you're going to include accelerationist material, you might want to add Vitalik Buterin on defensive accelerationism: vitalik.eth.limo/general/2023...
Epoch AI
Epoch AI is a research institute investigating key trends and questions that will shape the trajectory and governance of Artificial Intelligence.
epoch.ai
July 12, 2025 at 10:31 PM
If you like the steel drum, check out the handpan too if you haven't already.
July 12, 2025 at 10:14 PM
16/ Guillem Bas, @nickacaputo.bsky.social, Julia C Morse, Janvi Ahuja, Isabella Duan, Janet Egan, Ben Bucknall, @briannarosen.bsky.social, Renan Araujo, Vincent Boulanin, Ranjit Lall, @fbarez.bsky.social, Sanaa Alvira, Corin Katzke, Ahmad Atamli, Amro Awad /end🧵
July 7, 2025 at 10:13 PM
15/ Thanks to @aigioxfordmartin.bsky.social for backing this project and all my coauthors: Robert Trager, @ankareuel.bsky.social, @davidmanheim.alter.org.il, @milesbrundage.bsky.social, Onni Aarne, @aaronscher.bsky.social, Yanliang Pan, Jenny Xiao, Kristy Loke, Sumaya Nur Adan
July 7, 2025 at 10:13 PM
13/ Those who lived through or studied the Cold War may remember President Reagan reiterating the Russian proverb “Trust, but verify.” Just as it was with 1980s nuclear arms control, our ability to build new verification systems may be crucial for preserving peace today.
July 7, 2025 at 10:13 PM
12/ If we build these more serious verification systems, we would be laying the foundation for international agreements over AI—which might end up being the most important international deals in the history of humanity.
July 7, 2025 at 10:13 PM
11/ It seems possible to create similar verification exchanges that preserve security to an extreme degree, but we’ll need political action to get there. Our report goes into this in some detail. These setups might take about 1-3 years of intense effort to research and build.
July 7, 2025 at 10:13 PM
10/ However, even if we scale this up, the most important secrets (think national security info, military AI models, or the Coca-Cola formula) are probably too sensitive to govern via just confidential computing. Further work is needed to safeguard these.
July 7, 2025 at 10:13 PM
9/ Groups that use AI (including corporations and countries) will likewise place more trust in AI services that they can be sure are secure and appropriately governed. They may also request—or demand—this kind of thing in the future.
July 7, 2025 at 10:13 PM
8/ This setup allows 1) users to feel safe and confident about services they pay for, 2) companies to expand their offerings to more sensitive domains, and 3) governments to check that rules are followed.
July 7, 2025 at 10:13 PM
7/ An AI provider can prove that they abide by rules by having a set of third parties (e.g., AI testing companies and AI Safety / Security Institutes) securely test their models and systems. A user can trust a group of third parties a *lot* more than they trust the AI provider.
July 7, 2025 at 10:13 PM
6/ Confidential computing might be reliable enough for a company to make pretty strong claims about what they are *doing* (e.g., serving you inference with a given model and compute budget) and what they are *not doing* (e.g., copying your data).
July 7, 2025 at 10:13 PM
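The attestation idea in 6/ can be sketched in a few lines. This is a toy stand-in, not real confidential computing: in actual hardware (e.g., NVIDIA's Hopper attestation flow) the report is signed by a key rooted in the chip and checked against the vendor's PKI, whereas here an HMAC secret plays the hardware's role. All names (`attest`, `verify_report`, `HW_KEY`) are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Toy stand-in for a hardware root of trust. Real confidential
# computing signs the report with a device-fused key; we fake it
# with a shared HMAC secret purely to show the verification shape.
HW_KEY = b"toy-hardware-key"

def attest(model_bytes: bytes, config: dict) -> dict:
    """What an enclave would emit: a measurement plus a signature."""
    report = {
        "model_hash": hashlib.sha256(model_bytes).hexdigest(),
        "config": config,
    }
    blob = json.dumps(report, sort_keys=True).encode()
    report["sig"] = hmac.new(HW_KEY, blob, hashlib.sha256).hexdigest()
    return report

def verify_report(report: dict, expected_model_hash: str) -> bool:
    """Verifier checks the signature, then the claimed measurement."""
    body = {k: v for k, v in report.items() if k != "sig"}
    blob = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        report["sig"],
        hmac.new(HW_KEY, blob, hashlib.sha256).hexdigest(),
    )
    return sig_ok and body["model_hash"] == expected_model_hash
```

The point of the shape: the user never sees the provider's weights, yet can check that the model actually served matches a measurement a third party has vouched for, because tampering with either the hash or the config breaks the signature.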
5/ Some of these technologies can be deployed *today*, such as confidential computing, which is available in recent hardware such as NVIDIA’s Hopper or Blackwell chips. These are good enough to get us started.
July 7, 2025 at 10:13 PM
4/ Luckily, decades of work have gone into privacy-preserving computational methods. Basically, these are tricks with hardware and cryptography that let one actor (Prover) prove something to another actor (Verifier) without revealing all the underlying data.
July 7, 2025 at 10:13 PM
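A minimal concrete instance of the Prover/Verifier pattern in 4/ is a Merkle commitment: the Prover publishes one root hash over a dataset, then proves that a single record is in it by revealing only that record plus a few sibling hashes, never the rest of the data. This sketch is illustrative (function names are my own), not a scheme from the report.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Prover side: commit to all leaves with one root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Prover side: sibling hashes that link one leaf to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1             # sibling of the current node
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Verifier side: recompute the root from one revealed leaf."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The Verifier checks membership while seeing only one leaf and log-many hashes; real privacy-preserving verification (zero-knowledge proofs, secure enclaves) extends this idea to proving arbitrary computations.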
3/ But countries care about their security, so we can’t expect them to simply hand over all the information needed to prove that they’re following governance rules.
July 7, 2025 at 10:13 PM
2/ International AI governance is desirable (for peace, security, and good lives), but it faces verification challenges because there’s no easy way to understand what someone else is doing on their computer without violating their security.
July 7, 2025 at 10:13 PM
Cosmos by Carl Sagan was going to be my answer, but now I realize Star Trek should be in there too!
June 16, 2025 at 5:55 AM