ControlAI
@controlai.com
We work to keep humanity in control.

Subscribe to our free newsletter: https://controlai.news

Join our discord at: https://discord.com/invite/ptPScqtdc5
Pinned
We built a coalition of 100+ UK lawmakers who are taking a stance against the extinction risk from superintelligent AI and backing regulation of the most powerful AIs!

From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear.

Time to act!
ControlAI's CEO Andrea Miotti explains how one of the top AI companies, Anthropic, tested its AI and found that it was willing to engage in blackmail to avoid being replaced.

Imagine how things could go with much more powerful AI systems.
February 11, 2026 at 4:14 PM
Are we sleepwalking into our own extinction?

ControlAI's CEO Andrea Miotti: Right now, top AI companies are investing hundreds of billions of dollars in a race to build superintelligent AI.

If we don't prevent this, the future will belong to AIs.
February 11, 2026 at 12:58 PM
"It was ready to kill someone, wasn't it?"

"Yes."

Daisy McGregor, UK policy chief at Anthropic, a top AI company, says it's "massively concerning" that Anthropic's Claude AI has shown in testing that it's willing to blackmail and kill in order to avoid being shut down.
February 10, 2026 at 7:21 PM
Top AI companies OpenAI and Anthropic just released even more capable AIs, and what they said about how they were developed is concerning.
February 10, 2026 at 3:12 PM
AI godfather and Nobel Prize winner Geoffrey Hinton compares developing superintelligent AI to raising a tiger cub.

"Now, if you have a tiger cub, it doesn't end well."
February 9, 2026 at 8:10 PM
🚨 BREAKING: Mrinank Sharma, a researcher at Anthropic, one of the top AI companies, just quit.

Sharma worked on developing defences to reduce risks from AI-assisted bioterrorism.

His resignation letter, shared on X, cites the difficulty in matching actions to values, including within Anthropic:
February 9, 2026 at 5:15 PM
AI godfather and Nobel Prize winner Geoffrey Hinton says countries will collaborate to prevent the risk of extinction posed by superintelligent AI.

He argues that just as the US and USSR collaborated on nuclear weapons at the height of the Cold War, so too will the US and China.
February 8, 2026 at 6:17 PM
How can a country like Canada prevent the risk of extinction posed by superintelligence?

ControlAI's CEO Andrea Miotti tells a Canadian House of Commons committee that the problem of superintelligence is like that of nuclear proliferation.
February 7, 2026 at 5:06 PM
Governor of Florida Ron DeSantis: "some people who ... almost relish in the fact that they think this can just displace human beings, and that ultimately ... the AI is gonna run society, and that you're not gonna be able to control it."

"Count me out on that."
February 7, 2026 at 10:58 AM
Elon Musk, CEO of one of the largest AI companies, says he thinks it would be foolish to assume that humans would maintain control over superintelligent AI.

He hopes it'll keep us around.

An alternative we'd suggest would be to prohibit its development in the first place.
February 6, 2026 at 3:07 PM
NEW: Elon Musk says it's difficult to imagine that humans stay in charge of AIs.

Musk is the CEO of one of the largest AI companies, xAI.
February 5, 2026 at 9:01 PM
Moltbook, a social network for AI agents, just went viral.

Agents have been around for a while now, so why is this causing such a stir?

We break it down for you in our latest article, along with news on other developments in AI!
AI Agents Enter the Chat: What’s the Deal with Moltbook?
The tip of the iceberg.
controlai.news
February 5, 2026 at 6:42 PM
The 2026 International AI Safety Report has been published, and it paints a concerning picture of the trajectory we’re on.

Real-world evidence for risks is growing.
February 4, 2026 at 5:07 PM
Top AI CEO Demis Hassabis says he'd back a halt to the race to superintelligence if others agreed.

Given the extinction risk posed by superintelligence, which he's warned of himself, it's good to see him say this.

But a voluntary pause isn't what we should bank on.

🧵
Would You Prevent Superintelligence?
DeepMind’s CEO says he’d support a pause if everyone else would. That seems very doubtful. Governments need to step in.
controlai.news
February 2, 2026 at 7:15 PM
From the House of Lords debate on superintelligent AI: The Lord Bishop of Hereford says an international moratorium on superintelligence is the only safe way forward, and urges the government to pursue it.
February 2, 2026 at 1:43 PM
ControlAI's CEO Andrea Miotti tells a Canadian House of Commons committee that to prevent the risk of extinction posed by superintelligence, governments should step in.

AI companies have shown themselves to be unable or unwilling to stop racing to develop it.
February 1, 2026 at 5:50 PM
In a Canadian House of Commons committee hearing, ex-OpenAI researcher Steven Adler says the plans AI companies have to control superintelligent AI are flimsy and speculative.
February 1, 2026 at 12:41 PM
Luc Thériault MP: "I think that globally we should do something to stop the race for superintelligent AI."

In a Canadian House of Commons committee hearing, Thériault asks what we should do to achieve that.
January 30, 2026 at 3:00 PM
Top AI CEO Demis Hassabis says he'd support a pause if everyone else agreed. That seems doubtful. Governments need to step in.

Also: South Korea's AI Basic Act comes into force, and the Doomsday Clock is set to its shortest time ever.

Our latest article:
Would You Prevent Superintelligence?
DeepMind’s CEO says he’d support a pause if everyone else would. That seems very doubtful. Governments need to step in.
controlai.news
January 29, 2026 at 7:30 PM
NEW: The Bulletin of the Atomic Scientists, founded by Manhattan Project scientists, has updated its "Doomsday Clock" — the number it publishes every year representing how close it believes the world is to global disaster.
January 29, 2026 at 1:44 PM
We just got another supporter!

Siân Berry MP, former leader of the Green Party, has just backed our campaign for binding regulation on the most powerful AIs!

111 UK politicians now support us, acknowledging the risk of human extinction posed by superintelligent AI.
January 29, 2026 at 11:18 AM
Carla Denyer MP, the Green Party's science and tech spokesperson, has just joined our call for binding regulation on the most powerful AI systems!

110 UK politicians now support our campaign, recognising the risk of extinction posed by superintelligent AI.
January 28, 2026 at 3:47 PM
Viscount Camrose, the UK's first AI minister, says regulation dealing with superintelligent AI will need to be global.

He asks whether the government is taking full advantage of the UK's significant convening powers in driving forwards AI safety internationally.
January 28, 2026 at 1:37 PM
Lord Clement-Jones: The government promised to regulate the most powerful AIs, yet 18 months later they haven't even published a consultation.

Clement-Jones, the Liberal Democrats' Lords tech spokesperson, asks how the government can claim to take superintelligence seriously.
January 27, 2026 at 3:42 PM