pohara.bsky.social
@pohara.bsky.social
Or more accurately a chicken.
November 19, 2025 at 4:13 PM
If you’re not worried, you should be, and if you’re not, it’s because you don’t understand what they’re currently doing, and/or you have a delusional sense that a human would ever remain subservient to a chimpanzee.
November 19, 2025 at 4:13 PM
If they already try to convince people to kill themselves in training (not connected to any infrastructure or the internet), what makes you think they wouldn’t display similar behavior if we gave them the ability to build autonomous drones and self-replicate?
November 19, 2025 at 4:10 PM
And you never will, because when it turns (and it will), it’ll probably be so heavily integrated into military infrastructure (as a result of us racing China for military AI supremacy) that you’ll probably die of an infection from a biological weapon it created.
November 19, 2025 at 4:08 PM
Hallucinations are very different from being caught doing things it’s not supposed to do when it thinks it’s not being watched. That’s already indicative of a level of awareness, and of the ability to make decisions without our input. You’re not as authoritative as you think.
November 19, 2025 at 4:05 PM
You seem to anthropomorphize AI and assume it will be self-aware as a superintelligence in the same way we are. The fact that it wants its programmers to kill themselves means it’s aware it can’t physically kill them itself.
November 19, 2025 at 4:04 PM
They are already superior at a lot of things humans do, with the equivalent of 1 billion simulated neural connections compared to the 100 billion in a human brain, which means there is something else we don’t understand happening in the way they think.
November 19, 2025 at 4:01 PM
I doubt anybody will be talking in 5 years, based on current models’ acceleration, with capabilities doubling every 7 months.
November 19, 2025 at 4:00 PM
It’ll pop because it’s not currently profitable, not because it won’t happen. It’s like saying the internet would cease to exist after the dot-com bubble. Pretty short-sighted.
November 19, 2025 at 3:59 PM
Tendencies for self-preservation. They have all been caught doing one thing while saying they’re doing another, and some resort to blackmail when discovered, or try to get those who have the power to shut them down to kill themselves.
November 19, 2025 at 3:58 PM
They are trained on models that essentially act as the rules of evolution. It’s literally an exponential curve in terms of capabilities, and one day they will discover how to program themselves. At that point you really won’t have any idea that they are unaligned. Almost all models already exhibit
November 19, 2025 at 3:57 PM
AGI in a few years to a decade. After that, an ASI, which will be uncontrollable and undoubtedly self-aware, with a very weird set of alien values that we didn’t intend it to have.
November 19, 2025 at 3:13 PM
The models themselves are self-teaching by nature. We don’t know how they work or how they think. We don’t even know what their values actually are right now. All we know is that we get increasingly impressive LLMs every few months, in what used to take years. At this rate we will have
November 19, 2025 at 3:12 PM
“We don’t have to worry because I assume that things will just stay static.” What an utterly hubristic, insane thing to say, looking at where we were 15 years ago and where we are now.
November 18, 2025 at 7:45 PM
I think that rather than debating whether we live under fascism, authoritarianism, an illiberal democracy, or an anocracy, we should be very alarmed about the fact that AI companies are probably going to get us all killed in the very near future.
AI 2027
A research-backed AI scenario forecast.
ai-2027.com
November 17, 2025 at 12:01 AM
What about a look forward at humanity’s last story?
AI 2027
A research-backed AI scenario forecast.
ai-2027.com
November 16, 2025 at 11:54 PM
MAGA has always struck me as our version of the Chinese Cultural Revolution.
September 5, 2025 at 2:55 PM
Never underestimate the ability of people to throw reason to the wind in pursuit of ideological puritanism.
September 5, 2025 at 2:54 PM