Existential Risk Observatory
@xrobservatory.bsky.social
Reducing existential risk by informing the public debate. We propose a Conditional AI Safety Treaty: https://time.com/7171432/conditional-ai-safety-treaty-trump/
Pinned
Today, we propose the Conditional AI Safety Treaty in TIME as a solution to AI's existential risks.

AI poses a risk of human extinction, but this problem is not unsolvable. The Conditional AI Safety Treaty is a global response to avoid losing control over AI.

How does it work?
November 22, 2024 at 12:21 PM
Reposted by Existential Risk Observatory
MIRI CEO Malo Bourgon explains why AI isn't like other technologies, and why it looks likely that superintelligence will be developed much earlier than previously thought:
November 29, 2025 at 10:54 AM
So far, most x-riskers have considered themselves above anti-data-center campaigning. We made fun of concerns about data centers' water usage and electricity consumption, even though these are real problems.
View: Trump’s AI agenda sails toward an iceberg of bipartisan populist fury
The AI industry’s new super PAC picked its first political target this month — and missed.
www.semafor.com
November 28, 2025 at 11:53 AM
Sometimes it is hard to believe that this is all real. Are people really building a machine that could soon kill every living thing on this planet? If this is not true, why are the best scientists in the world saying it is? If it is true, why is no one trying to do anything about it?
November 7, 2025 at 9:59 AM
If one in ten experts thinks that developing a technology carries a risk of human extinction, we should not develop that technology until we are confident the risk can be all but ruled out.
The timeline & severity of major AI risks are still debated within the scientific community, but these disagreements reveal great uncertainty. The fact that many credible experts consider some catastrophic scenarios plausible should be enough to warrant serious caution.
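A back-of-the-envelope expected-value calculation makes the force of this argument concrete. The numbers below are illustrative assumptions, not survey results: take a 1% probability of extinction and a population of 8 billion:

$$\mathbb{E}[\text{deaths}] = P(\text{extinction}) \times N_{\text{pop}} \approx 0.01 \times (8 \times 10^{9}) = 8 \times 10^{7}$$

Even at a probability as low as 1%, the expected toll is 80 million lives, far beyond the risk levels we tolerate for any other technology.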
www.axios.com/2025/06/16/a...
Behind the Curtain: What if predictions of humanity-destroying AI are right?
Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: "Well, what if they're right?"
www.axios.com
June 23, 2025 at 10:05 PM
📢 Event coming up in Amsterdam!📢

Many think we should have an AI safety treaty, but how to enforce it?🤔

Riccardo Varenna from TamperSec has part of a solution: sealing hardware within a secure enclosure. Their prototype should be ready within three months.

Time to hear more!

Be there! lu.ma/v2us0gtr
Can a small startup prevent AI loss of control? - with Riccardo Varenna · Luma
According to many leading AI researchers, there is a chance we could lose control over future AI. We think one of the most important challenges of our century…
lu.ma
June 18, 2025 at 1:56 PM
Reposted by Existential Risk Observatory
BREAKING: New experiments by former OpenAI researcher Steven Adler find that GPT-4o will prioritize preserving itself over the safety of its users.

Adler set up a scenario where the AI believed it was a scuba diving assistant, monitoring user vitals and assisting them with decisions.
June 11, 2025 at 5:40 PM
Two weeks ago, Geoffrey Hinton told a New Zealand audience that AI could kill their children. The presenter introduced the segment with: "They call it p(doom), don't they, the probability that AI could wipe us out. On the BBC recently you gave it a 10-20% chance".
June 11, 2025 at 10:13 PM
The closer we get to actual AI, the less people seem to value intelligence, however it is measured. Passing the Turing test is downplayed now, but passing Marcus' Simpsons test will be downplayed later, when it happens, too.

Still, AI reaching human level is actually important. We can't keep our heads in the sand.
The Turing Test is a sort of inverse IQ test.

The more seriously you take it, the less intelligent you are.
April 3, 2025 at 8:49 AM
New paper out!📜🚀

Many think there should be an AI Safety Treaty, but what should it look like?🤔

Our paper starts with a review of current treaty proposals and then presents our own recommendations for a Conditional AI Safety Treaty.
March 26, 2025 at 11:50 AM
Richard Sutton has repeatedly argued that human extinction would be the morally right outcome if AIs were smarter than us. Yesterday, he won the Turing Award from @acm.org.

Why is arguing for, and working towards, human extinction considered acceptable in AI?

youtu.be/pD-FWetbvN8&...
Rich Sutton - The Future of AI
YouTube video by UBC Computer Science
youtu.be
March 6, 2025 at 4:14 PM
It is a hopeful sign that the British public and British politicians support regulation to mitigate the risk of extinction from AI. Other countries should follow. In the end, a global AI Safety Treaty should be signed.
UK POLITICIANS DEMAND REGULATION OF POWERFUL AI

TODAY: Politicians across the UK political spectrum back our campaign for binding rules on dangerous AI development.

This is the first time a coalition of parliamentarians have acknowledged the extinction threat posed by AI.
1/6
February 6, 2025 at 10:51 PM
On the eve of the AI Action Summit in Paris, we proudly announce our AI Safety Debate with Prof. Yoshua Bengio!📢

On the panel:

@billyperrigo.bsky.social from Time
@kncukier.bsky.social from The Economist
Jaan Tallinn from CSER/FLI
Emma Verhoeff from @minbz.bsky.social

Join here! lu.ma/g7tpfct0
AI Safety Debate with prof. Yoshua Bengio · Luma
Progress in AI has been stellar and does not seem to slow down. If we continue at this pace, human-level AI with its existential risks may be a reality sooner…
lu.ma
January 24, 2025 at 7:11 PM
Pretraining may have hit a wall, but AI progress in general hasn't. Progress in closed-ended domains such as math and programming is obvious, and worrying.

The public needs to be kept up to date on both the increasing capabilities and the obvious misalignment of leading models.
🚨 New piece in TIME: AI progress hasn't stalled — it's just become invisible to most people. 🚨

I used to think that AI slowed down a lot in 2024, but I now think I was wrong. Instead, there's a widening gap between AI's public face and its true capabilities. 🧵
January 9, 2025 at 10:22 PM
Nobel Prize winner Geoffrey Hinton thinks there is a 10-20% chance AI will "wipe us all out" and calls for regulation.

Our proposal is to implement a Conditional AI Safety Treaty. Read the details below.

www.theguardian.com/technology/2...
January 1, 2025 at 1:34 AM
Reposted by Existential Risk Observatory
💼 We're hiring a Head of US Policy! ⬇️

🇺🇸 This opening is an exciting opportunity to lead and grow our US policy team in its advocacy for forward-thinking AI policy at the state and federal levels.

✍ Apply by Dec. 22 and please share:
jobs.lever.co/futureof-life/c933ef39-588f-43a0-bca5-1335822b46a6
December 5, 2024 at 10:15 PM
Peaceful activism by organizations such as @pauseai.bsky.social is a good way to increase pressure on governments to accept meaningful AI regulation, such as an international AI safety treaty.
We protested during the AI Safety Conference in SF, where world leaders got together to discuss AI Safety. We urgently need them to implement a treaty that prevents the creation of a superintelligence.

A thread 🔽
November 25, 2024 at 8:26 PM
It is now public knowledge that multiple LLMs significantly larger than GPT-4 have been trained, but they have not performed much better. That means scaling laws have broken down. What does this mean for existential risk?
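For context, the scaling laws in question are empirical power laws; the commonly cited Chinchilla form (Hoffmann et al., 2022) predicts training loss from parameter count N and training tokens D, with fitted constants of roughly E ≈ 1.69, A ≈ 406.4, B ≈ 410.7, α ≈ 0.34, β ≈ 0.28:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

"Breaking down" would mean that models much larger than GPT-4 no longer track the loss this curve predicts for their budget of parameters and data.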
November 22, 2024 at 1:23 PM