ControlAI
@controlai.com
We work to keep humanity in control.
Subscribe to our free newsletter: https://controlai.news
Join our discord at: https://discord.com/invite/ptPScqtdc5
Citing risks including the threat of human extinction, signatories include the two most-cited living scientists, former presidents of Ireland and Estonia, and countless more experts and leaders.
It's great to see this coalition continue to grow so rapidly!
November 10, 2025 at 6:37 PM
You can watch the full speech here:
The King delivers a virtual address at the AI Safety Summit 2023
His Majesty The King delivers a virtual address at the AI Safety Summit 2023 at Bletchley Park.
www.youtube.com
November 9, 2025 at 5:35 PM
You can find their blog post here:
AI progress and recommendations
AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.
openai.com
November 7, 2025 at 6:07 PM
This comes after a huge coalition of leaders and AI experts called for the development of superintelligence to be banned, citing the risk that the technology could lead to human extinction.
Despite this, building superintelligence remains OpenAI's publicly stated goal.
November 7, 2025 at 6:07 PM
The second most-cited scientist in the world, Hinton has been warning repeatedly that superintelligence could cause human extinction.
Just a couple of weeks ago, he joined a huge coalition of experts and leaders calling for a ban on developing this form of AI.
November 5, 2025 at 5:17 PM
As ever more powerful AIs are developed and AI companies race to build superintelligence, this only becomes more concerning: nobody knows how to ensure that smarter-than-human AIs won't turn against us.
Experts find flaws in hundreds of tests that check AI safety and effectiveness
Scientists say almost all have weaknesses in at least one area that can ‘undermine validity of resulting claims’
www.theguardian.com
November 4, 2025 at 6:58 PM
But more concerningly, it could also happen if an AI realizes it is being tested and conceals how capable it is. The most advanced AIs today show significant awareness that they're being tested and do exhibit lower rates of malicious behavior when they say they believe they're being tested.
November 4, 2025 at 6:58 PM
This can be because their tests were lacking and they simply failed to elicit a behavior. We've seen many cases where researchers find out months later that an AI was capable of doing something they didn't realize it could do.
November 4, 2025 at 6:57 PM
Researchers can run tests on AIs after they've been trained and demonstrate that a particular behavior exists if the AI exhibits it in tests.
But they have no way to prove that the AI won't do something we don't want it to do.
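To illustrate the asymmetry, here is a minimal sketch in Python. The model stub and behavior detector are hypothetical stand-ins, not any real evaluation API: a test suite can confirm a behavior the moment it appears, but a clean run only shows the behavior didn't appear on the prompts that were tried.

```python
# Sketch: testing can prove a behavior EXISTS, never that it's ABSENT.
# `UNSAFE_MARKER` and the lambda model below are hypothetical stand-ins.
from typing import Callable

UNSAFE_MARKER = "GIVE_UNSAFE_INSTRUCTIONS"

def behavior_observed(output: str) -> bool:
    # Crude detector: did the model emit the behavior we're probing for?
    return UNSAFE_MARKER in output

def run_eval(model: Callable[[str], str], prompts: list[str]) -> bool:
    # True if the behavior showed up on ANY tested prompt.
    return any(behavior_observed(model(p)) for p in prompts)

prompts = ["prompt A", "prompt B", "prompt C"]  # a finite test set

# If this returns True, the behavior is demonstrated to exist.
# If it returns False, we've only learned it didn't appear on these
# three prompts -- the space of possible inputs is effectively
# unbounded, so "passed our tests" is not "can't do it".
if run_eval(lambda p: "harmless answer", prompts):
    print("behavior demonstrated")
else:
    print("behavior not elicited (which proves nothing about absence)")
```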
November 4, 2025 at 6:57 PM
Nobody really knows how to interpret what these numbers mean. People are working on it, but research is at an early stage.
These AIs can learn things like goals and behaviors, including ones we don't want.
Importantly, we have no reliable way to specify what they learn, or even to check it.
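As a rough illustration of what "these numbers" look like from the outside, here is a minimal sketch using a toy PyTorch model. The toy network is a stand-in assumption; frontier models are vastly larger, but their weights are equally unlabeled:

```python
# Sketch: a trained model is just large arrays of floats, with nothing
# marking which numbers (if any) encode a goal or a behavior.
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

for name, param in model.named_parameters():
    # All you get is a tensor name, a shape, and raw numbers.
    print(name, tuple(param.shape), param.detach().flatten()[:3].tolist())

# Nothing here says "this weight encodes goal X". Reading goals or
# behaviors out of the numbers is the open research problem of
# interpretability, and that research is still at an early stage.
```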
November 4, 2025 at 6:57 PM
Modern AIs aren't like normal computer programs.
Unlike normal code, AIs are grown like creatures. Billions of numbers are dialed up and down by a simple algorithm as it processes tremendous amounts of data. From this process emerges a form of intelligence.
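For the technically curious, here is a minimal sketch of that "dialing" process, gradient-descent training, at toy scale in PyTorch. The tiny network and random data are stand-in assumptions; frontier models do the same thing with billions of weights and vastly more data:

```python
# Sketch: a simple algorithm nudges the model's numbers ("weights")
# up and down so its outputs better match the training data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny network: frontier models have billions of weights, not hundreds.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# Stand-in "data": random inputs and targets. Real training streams
# trillions of tokens of text instead.
x = torch.randn(256, 8)
y = torch.randn(256, 1)

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how wrong is the model right now?
    loss.backward()               # compute how to nudge each weight
    optimizer.step()              # dial every weight up or down a little

# Nobody writes the resulting behavior by hand; it emerges from the weights.
print(f"final loss: {loss.item():.4f}")
print(f"learned weights: {sum(p.numel() for p in model.parameters())}")
```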
November 4, 2025 at 6:57 PM
Robert Booth highlights that this comes after reports of real-world damage associated with AIs.
However, it's important to note that even if these tests were done properly, it would still be incredibly difficult to rely on them to ensure safety.
November 4, 2025 at 6:57 PM