MIRI
@intelligence.org
intelligence.org
For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence.
“If Anyone Builds It, Everyone Dies” was recently added to The New Yorker's “The Best Books of the Year So Far” list!

newyorker.com/best-books-2...
October 31, 2025 at 2:30 AM
Academy Award-winning director Kathryn Bigelow is reading “If Anyone Builds It, Everyone Dies.”

From an interview in The Guardian by Danny Leigh: www.theguardian.com/film/2025/oc...
October 18, 2025 at 3:12 PM
“The book uses parables, very well told, to argue that evolutionary processes are not predictable, at least not easily. [...] I came away far more concerned than I had been before opening the book.”

www.forbes.com/sites/billco...
October 18, 2025 at 1:18 AM
😮 Whoopi Goldberg recommends “If Anyone Builds It, Everyone Dies” on The View!
October 15, 2025 at 10:46 PM
Great event last week at @politicsprose.bsky.social (The Wharf) in DC, with “If Anyone Builds It, Everyone Dies” co-author Nate Soares.

Many thanks to all those who attended, and to @jonatomic.bsky.social, Director of Global Risk at FAS, for the great conversation.
October 3, 2025 at 12:05 AM
#7 Combined Print & E-Book Nonfiction (www.nytimes.com/books/best-s...)

#8 Hardcover Nonfiction (www.nytimes.com/books/best-s...)
September 24, 2025 at 11:00 PM
Bannon's War Room:
“I want to bring on the authors of a book, that I think is a must read for every person in this nation.”

“Everyone ought to go and get it, ‘If Anyone Builds It, Everyone Dies,’ a landmark book everyone should get and read.”
rumble.com/v6z4xnk-epis...
September 20, 2025 at 8:59 PM
In the last couple of days, “If Anyone Builds It, Everyone Dies” co-authors Eliezer Yudkowsky and Nate Soares have appeared live on @cnn.com, ABC News Live, and Bannon's War Room!

📺 Full segments below starting with @cnn.com:
next.frame.io/share/c0b240...
September 20, 2025 at 8:59 PM
🗓️ Next Friday Sept 26th in DC at @politicsprose.bsky.social (The Wharf)

Join us for a conversation between co-author Nate Soares and @jonatomic.bsky.social, Director of Global Risk at the Federation of American Scientists.

Audience Q&A, book signing, and more:
politics-prose.com/nate-soares
September 18, 2025 at 2:42 PM
“If Anyone Builds It, Everyone Dies” has hit the shelves!

New blurbs and media appearances, plus reading group support and ways to help with the book launch!

intelligence.org/2025/09/16/i...
September 16, 2025 at 10:00 PM
UK hardcover edition also looks sharp in person 😍

Just over a week to the UK release on Sept. 18th!
September 10, 2025 at 4:39 PM
🎧 Audiobook sneak peek from @hachetteaudio.bsky.social

From the book's intro: “We open many of the chapters with parables: stories that, we hope, will help convey some points more simply than otherwise.”

Here's a snippet of one of them from the beginning of Chapter 1.

(Extended clip in 🧵)
September 3, 2025 at 2:36 AM
😍

Hot off the press to our mailbox, from @littlebrown.bsky.social

Sept 16th release is getting close, 20 days and counting!
August 28, 2025 at 1:28 AM
Want to read “If Anyone Builds It, Everyone Dies” before everyone else?

The Goodreads Giveaway for Eliezer Yudkowsky and Nate Soares' new book is live through Aug 31st.

50 hardcover advance reader copies available (US)!

Enter here: www.goodreads.com/giveaway/sho...

#BookSky #GoodreadsGiveaway
August 27, 2025 at 1:08 AM
The UK cover for IF ANYONE BUILDS IT, EVERYONE DIES has been finalized.
August 20, 2025 at 6:57 PM
Another scenario we explore is a US National Project: the US races to build superintelligence, with the goal of achieving a decisive global strategic advantage. This risks both loss of control to AI and increased geopolitical conflict, including war. 7/10
May 1, 2025 at 10:28 PM
In the research agenda, we lay out four scenarios for the geopolitical response to advanced AI in the coming years. For each scenario, we pose research questions that, if answered, would provide important insight into how to reduce catastrophic and extinction risks. 4/10
May 1, 2025 at 10:28 PM
The current trajectory of AI development looks pretty rough, likely resulting in catastrophe. As AI becomes more capable, we will face risks of loss of control, human misuse, geopolitical conflict, and authoritarian lock-in. 3/10
May 1, 2025 at 10:28 PM
New AI governance research agenda from MIRI’s TechGov Team. We lay out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight into how to reduce catastrophic and extinction risks from AI. 🧵1/10
techgov.intelligence.org/research/ai-...
May 1, 2025 at 10:28 PM