ControlAI
@controlai.com
controlai.com
We work to keep humanity in control.

Subscribe to our free newsletter: https://controlai.news

Join our discord at: https://discord.com/invite/ptPScqtdc5
Less than a week ago, we announced that 85 UK politicians support our campaign for binding regulation on the most powerful AIs.

Now it's 90!

Lord Goldsmith is the 90th UK politician to back our campaign statement, acknowledging the extinction threat posed by AI.

controlai.com/statement
November 11, 2025 at 2:54 PM
Microsoft AI CEO Mustafa Suleyman says that smarter-than-human AIs capable of self-improvement, complete autonomy, or independent goal setting would be "very dangerous" and should never be built.

He says others in the field "just hope" that such an AI would not harm us.
November 11, 2025 at 11:22 AM
🚨 The joint statement calling for a ban on the development of superintelligence now has support from more than 100,000 people!
November 10, 2025 at 6:37 PM
Microsoft AI CEO Mustafa Suleyman says he's seeing lots of indications that people want to build superintelligence to replace or threaten our species.
November 10, 2025 at 4:43 PM
Earlier this week, King Charles gave Nvidia CEO Jensen Huang a copy of his 2023 AI Safety Summit speech.

In his speech, the King said that there is a "clear imperative" to ensure that AI remains safe and secure, and that countries need to work together to ensure this.
November 9, 2025 at 5:35 PM
🚨 NEW: OpenAI's latest blog post says superintelligence risks are "potentially catastrophic", and suggests the whole field might need to slow down development.

They say nobody should deploy superintelligent AIs without being able to control them, and admit this still can't be done.
November 7, 2025 at 6:07 PM
There's a very simple argument for why developing superintelligence ends badly.

Conjecture CEO Connor Leahy: "If you make something that is smarter than all humans, you don't know how to control it, how exactly does that turn out well for humans?"
November 7, 2025 at 4:07 PM
We have another new supporter!

The Rt Hon. the Lord Robathan has backed our campaign for binding regulation on the most powerful AI systems, acknowledging the extinction threat posed by AI!

It's great to see so many coming together from across parties on this issue.
November 7, 2025 at 2:37 PM
"No one can deny that this is real."

Conjecture CEO Connor Leahy says the coalition calling for a ban on the development of superintelligence makes it harder and harder to ignore the danger of smarter-than-human AI.
November 6, 2025 at 9:16 PM
AI godfather and Nobel Prize winner Geoffrey Hinton says AI companies are much more concerned with racing each other than ensuring that humanity actually survives.
November 5, 2025 at 5:17 PM
AI godfather Geoffrey Hinton says countries will collaborate to prevent AI taking over.

"On AI taking over they will collaborate 'cause nobody wants that. The Chinese Communist Party doesn't want AI to take over. Trump doesn't want AI to take over. They can collaborate on that."
November 5, 2025 at 11:09 AM
The Guardian: Hundreds of AI safety and effectiveness evals have been found to be weak and flawed.

UK AI Security Institute scientists and others checked over 440 benchmarks and found problems that undermine their validity.
November 4, 2025 at 6:57 PM
Why is AI different from other technologies?

AI godfather Geoffrey Hinton points out that humans were always in charge.

"We control the steam engine. This isn't like that."

Hinton also says that AI will soon be smarter than humans.
November 4, 2025 at 2:52 PM
AI researcher Nate Soares says developing an AI is much more like growing an organism than writing code.
November 3, 2025 at 7:25 PM
🚨 NEW: Over 85 UK cross-party parliamentarians now support our campaign statement, underscoring the risk of extinction from AI.

This is the world’s first coalition of lawmakers taking a stand on this issue!

Supporters include:
— Viscount Camrose, former UK Minister for AI
November 3, 2025 at 5:35 PM
Senator Bernie Sanders: AI is like a meteor coming to this planet.

Sanders adds that he's worried about the development of superintelligence, which we could lose control of.
November 2, 2025 at 5:40 PM
OpenAI's Chief Scientist Jakub Pachocki says superintelligence could be developed in less than a decade.

Superintelligent AI would be vastly smarter than humans across virtually all cognitive domains, and experts warn it could lead to human extinction.
November 1, 2025 at 10:26 AM
Why are experts warning against the development of superintelligence?

In a new video we partnered with SciShow on, Hank Green explains the concerning trends we see in AI.

SciShow has over 8M subscribers. It's great to see so many people learn about this problem!

[link below]
October 31, 2025 at 3:28 PM
Center for Humane Technology cofounder Tristan Harris explains how AI isn't like other technologies.

"It's like if you imagine a hammer that can think to itself at a PhD level about hammers, invent better hammers, recursively go off in the world, duplicate itself..."
October 31, 2025 at 11:52 AM
We need a global movement to prohibit superintelligent AI, which experts warn could lead to human extinction.

In a new article in TIME, ControlAI's CEO Andrea Miotti (@_andreamiotti) explains why we need to ban superintelligence, and how this can be achieved.

[link below]
October 30, 2025 at 10:56 PM
"Let's be clear. The CEOs who are building this technology say if we succeed in this goal on which we are spending trillions of dollars of other people's money, then there is somewhere between a 10 and 30% chance of human extinction."
— Professor Stuart Russell
October 29, 2025 at 11:24 AM
🚨 The coalition to ban the development of superintelligence is growing rapidly!

Over 50,000 have now added their names to the joint statement.

One new addition we noticed is Kersti Kaljulaid, the former president of Estonia!
October 28, 2025 at 6:39 PM
CNBC: Professor Stuart Russell explains why we don't understand how modern AIs actually work and warns that they already show dangerous self-preservation tendencies.

"All the signs are pointing in the wrong direction."
October 28, 2025 at 11:41 AM
CNBC: Professor Stuart Russell explains why we need to ban the development of superintelligence.

"We can't predict what they're gonna do. We can't control them, we can't stop them from doing anything."
October 27, 2025 at 5:42 PM