ControlAI
@controlai.com
We work to keep humanity in control.
Subscribe to our free newsletter: https://controlai.news
Join our discord at: https://discord.com/invite/ptPScqtdc5
Microsoft AI CEO Mustafa Suleyman says that smarter-than-human AIs capable of self-improvement, complete autonomy, or independent goal setting would be "very dangerous" and should never be built.
He says others in the field "just hope" that such an AI would not harm us.
November 11, 2025 at 11:22 AM
Microsoft AI CEO Mustafa Suleyman says he's seeing lots of indications that people want to build superintelligence to replace or threaten our species.
November 10, 2025 at 4:43 PM
Earlier this week, King Charles gave Nvidia CEO Jensen Huang a copy of his 2023 AI Safety Summit speech.
In his speech, the King said that there is a "clear imperative" to ensure that AI remains safe and secure, and that countries need to work together to ensure this.
November 9, 2025 at 5:35 PM
There's a very simple argument for why developing superintelligence ends badly.
Conjecture CEO Connor Leahy: "If you make something that is smarter than all humans, you don't know how to control it, how exactly does that turn out well for humans?"
November 7, 2025 at 4:07 PM
"No one can deny that this is real. "
Conjecture CEO Connor Leahy says the coalition calling for a ban on the development of superintelligence makes it harder and harder to ignore the danger of smarter-than-human AI.
November 6, 2025 at 9:16 PM
"No one can deny that this is real. "
Conjecture CEO Connor Leahy says the coalition calling for a ban on the development of superintelligence makes it harder and harder to ignore the danger of smarter-than-human AI.
Conjecture CEO Connor Leahy says the coalition calling for a ban on the development of superintelligence makes it harder and harder to ignore the danger of smarter-than-human AI.
AI godfather and Nobel Prize winner Geoffrey Hinton says AI companies are much more concerned with racing each other than ensuring that humanity actually survives.
November 5, 2025 at 5:17 PM
AI godfather Geoffrey Hinton says countries will collaborate to prevent AI from taking over.
"On AI taking over they will collaborate 'cause nobody wants that. The Chinese Communist Party doesn't want AI to take over. Trump doesn't want AI to take over. They can collaborate on that."
"On AI taking over they will collaborate 'cause nobody wants that. The Chinese Communist Party doesn't want AI to take over. Trump doesn't want AI to take over. They can collaborate on that."
November 5, 2025 at 11:09 AM
Why is AI different from other technologies?
AI godfather Geoffrey Hinton points out that with earlier technologies, humans were always in charge.
"We control the steam engine. This isn't like that."
Hinton also says that AI will soon be smarter than humans.
November 4, 2025 at 2:52 PM
AI researcher Nate Soares says developing an AI is much more like growing an organism than writing code.
November 3, 2025 at 7:25 PM
Senator Bernie Sanders: AI is like a meteor coming to this planet.
Sanders adds that he's worried about the development of superintelligence, which we could lose control of.
November 2, 2025 at 5:40 PM
OpenAI's Chief Scientist Jakub Pachocki says superintelligence could be developed in less than a decade.
Superintelligent AI would be vastly smarter than humans across virtually all cognitive domains, and experts warn it could lead to human extinction.
November 1, 2025 at 10:26 AM
Center for Humane Technology cofounder Tristan Harris explains how AI isn't like other technologies.
"It's like if you imagine a hammer that can think to itself at a PhD level about hammers, invent better hammers, recursively go off in the world, duplicate itself..."
"It's like if you imagine a hammer that can think to itself at a PhD level about hammers, invent better hammers, recursively go off in the world, duplicate itself..."
October 31, 2025 at 11:52 AM
"Let's be clear. The CEOs who are building this technology say if we succeed in this goal on which we are spending trillions of dollars of other people's money, then there is somewhere between a 10 and 30% chance of human extinction."
— Professor Stuart Russell
October 29, 2025 at 11:24 AM
"Let's be clear. The CEOs who are building this technology say if we succeed in this goal on which we are spending trillions of dollars of other people's money, then there is somewhere between a 10 and 30% chance of human extinction."
— Professor Stuart Russell
— Professor Stuart Russell
CNBC: Professor Stuart Russell explains why we don't understand how modern AIs actually work and warns that they already show dangerous self-preservation tendencies.
"All the signs are pointing in the wrong direction."
"All the signs are pointing in the wrong direction."
October 28, 2025 at 11:41 AM
CNBC: Professor Stuart Russell explains why we need to ban the development of superintelligence.
"We can't predict what they're gonna do. We can't control them, we can't stop them from doing anything."
"We can't predict what they're gonna do. We can't control them, we can't stop them from doing anything."
October 27, 2025 at 5:42 PM