AI Dystopia
@aidystopia.bsky.social
Cataloguing the greatest technological hoax of all time (non-exhaustive)
Pinned
Sure, AI is useful for a lot of stuff, but it’s massively overcapitalised and outrageously overhyped. I love technology, but AI has catastrophic environmental and social impacts. And I agree with these guys: AI is a bullshit machine podcasts.apple.com/au/podcast/b...
CZM Rewind: The Academics That Think ChatGPT Is BS
Podcast Episode · Better Offline · 13/08/2025 · 57m
podcasts.apple.com
Reposted by AI Dystopia
I know lawyers have to argue this stuff but saying “sorry we’re not responsible for telling your kid to suicide because we said you shouldn’t use it for suicide” is soooo crazy
Additionally, OpenAI argues it’s not liable because Raine, by using ChatGPT for self-harm, broke its terms of service
November 26, 2025 at 3:46 AM
Reposted by AI Dystopia
“LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.”
Grateful to The Verge for publishing my essay on why large-language models are not going to achieve general intelligence nor push the scientific frontier.

www.theverge.com/ai-artificia...
Is language the same as intelligence? The AI industry desperately needs it to be
The AI boom is based on a fundamental mistake.
www.theverge.com
November 26, 2025 at 10:25 AM
Reposted by AI Dystopia
Will installing a good steering wheel that doesn’t fly off while you are driving undermine the car company’s quest for growth?
November 23, 2025 at 12:35 PM
Reposted by AI Dystopia
Regret to report that there has been another good linkedin post
November 22, 2025 at 11:18 AM
Reposted by AI Dystopia
From government by consultant to government by consultants using generative AI.

Decades of degrading government capacity to actually govern have left it open to further degradation through AI hype and the perception that it enables further cost cutting.
Major N.L. healthcare report contains errors likely generated by A.I. – The Independent
$1.6 million Health Human Resources Plan from Deloitte cites research papers that don’t exist, making it the second major government policy paper called into question in as many months
theindependent.ca
November 22, 2025 at 10:09 PM
Reposted by AI Dystopia
Companies are pushing products “before their dangerous behaviors can be fully understood.” Copilot says new features should be turned on only “if you understand the security implications outlined,” because they’re so susceptible to malware.

Not hard to guess how that’s going to go.
November 20, 2025 at 12:38 AM
Reposted by AI Dystopia
A physician who was no longer at the hospital but was still on the email list had Otter.ai installed, and his bot “attended” the meeting, generated a transcript, and sent it to all 65 people on the email list, 12 of whom also no longer worked at the hospital.
AI bot recorded doctors’ meeting, sent patient info to current and former hospital staff, watchdog says
The transcription tool recorded the meeting on behalf of a physician who no longer worked at the hospital
www.theglobeandmail.com
November 19, 2025 at 5:54 PM
Reposted by AI Dystopia
The most comprehensive analysis I've read yet on the gyrations of Sam Altman and his ilk in their excruciating campaign to sell us green eggs and ham. www.thetimes.com/article/6616...
Does the world really want what Sam Altman is selling?
There is every sign that the hype around artificial intelligence companies like OpenAI is running ahead of economic reality. As with railways, booms tend to be punctuated by busts
www.thetimes.com
November 22, 2025 at 9:22 AM
Reposted by AI Dystopia
Whoops! Microsoft’s new Windows AI agent platform lets in malware

and you thought Windows was supposed to run software

www.youtube.com/watch?v=tAeN... - video
pivottoai.libsyn.com/20251119-who... - podcast

time: 4 min 27 sec
November 19, 2025 at 8:29 PM
Reposted by AI Dystopia
thank you vince gilligan
November 19, 2025 at 11:48 AM
Reposted by AI Dystopia
Way back in 1947, Alan Turing had thoughts on how AI would influence the demand for skilled labor.

(via Matteo Pasquinelli 2023 _The Eye of the Master_)
November 18, 2025 at 4:01 AM
Reposted by AI Dystopia
The only responsible thing is to keep pouring $ into this venture which seems yet to have a use case and which is artificially inflated in value because… wait a minute
WSJ: “.. If the AI market blows up, the blast radius would be wide, hitting not only Wall Street firms, but also pensions, mutual and exchange-traded funds and individual investors, because of how debt is often sliced and resold across the financial landscape.”

@wsj.com
www.wsj.com/finance/inve...
November 18, 2025 at 11:26 AM
Not long ago I quit a job where my employer - the CEO - was uploading clients’ personal, financial, and legally privileged material to ChatGPT without their consent. It’s frightening.
November 18, 2025 at 8:51 AM
Reposted by AI Dystopia
🎯 Anthony Moser put it perfectly:

But I am a hater, and I will not be polite. The machine is disgusting and we should break it. The people who build it are vapid shit-eating cannibals glorifying ignorance. I strongly feel that this is an insult to life itself.
anthonymoser.github.io/writing/ai/h...
I Am An AI Hater
I am an AI hater. This is considered rude, but I do not care, because I am a hater.
anthonymoser.github.io
November 14, 2025 at 7:21 PM
Nothing but negatives: massive consumption of energy and water and the destruction of thousands of livelihoods, for a technology that is socially, environmentally and economically corrosive
$1 billion data centre to maybe create 100 jobs. That's likely an overstatement.

It is truly not worth draining the living world for this vampiric tech, and people are starting to realize it.
www.wisn.com/article/meta...
Meta plans $1 billion data center in Beaver Dam
The facility is expected to be completed in 2027 and will support 100 jobs, according to state officials.
www.wisn.com
November 18, 2025 at 8:42 AM
Reposted by AI Dystopia
Absolute must read on the AI bailout-in-progress: apparently it's not enough that lifetimes of human ingenuity and creativity have been stolen and enclosed to create generative AI and balloon billionaire wealth - much more public looting is in store...
This week OpenAI walked back a call for the govt to backstop financing for its trillion-dollar investments in data centers. This was only the tip of the iceberg; a slow bailout for AI firms is already underway. Read more from @ambakak.bsky.social and me in @wsj.com: www.wsj.com/opinion/you-...
Opinion | You May Already Be Bailing Out the AI Business
Washington is treating the industry as if it’s too big to fail, even as the market sends lukewarm signals.
www.wsj.com
November 13, 2025 at 6:03 PM
Reposted by AI Dystopia
Why are tech billionaires fixated on doomsday?

Because "apocalypse capitalism" is the business model— existential threats as trillion-dollar opportunities.

New at the Nerd Reich: Silicon Valley Apocalypse Capitalism

www.thenerdreich.com/silicon-vall...
Silicon Valley Apocalypse Capitalism
“Maybe if you aren’t trying to destroy the world, you’re not trying hard enough.”
www.thenerdreich.com
November 14, 2025 at 5:58 PM
Reposted by AI Dystopia
Yeah, to spell it out, this is about generative AI trying to brute-force licensing!

Not cool machine learning tools that help you design a better reactor or something.

No, *generative AI* - the category that includes large language models and other content-spewing algos.
Problem: AI needs massive amounts of power to thrive. Nuclear makes lots of power. Nuclear takes a long long time to do safely.

Proposed solution that I'm sure will have no unpleasant consequences: Use AI to speed up the construction of new nuclear plants.
Power Companies Are Using AI To Build Nuclear Power Plants
Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.
www.404media.co
November 14, 2025 at 6:32 PM
Reposted by AI Dystopia
Might not be tomorrow, but it will happen. See: 3/3 ca.investing.com/news/stock-m...
Peter Thiel dumps entire Nvidia stake, slashes Tesla holdings amid bubble fears By Investing.com
Peter Thiel dumps entire Nvidia stake, slashes Tesla holdings amid bubble fears
ca.investing.com
November 17, 2025 at 6:37 AM
Reposted by AI Dystopia
The AI bubble may be about to burst.

Peter Thiel has sold all of his Nvidia stock.

We all need to say this very clearly:

NO BAILOUTS FOR THEFT-TECH!

Expropriate their asses instead.

They stole from all of us and fully plan to burn the planet.

They owe us - not the other way around. 1/3
November 17, 2025 at 6:37 AM
Reposted by AI Dystopia
Scientists who aren't compromised by industry keep saying these models can't be made safe, no matter what, but people keep assuming that because the concept of guardrails is mentioned, it must work. By definition, it doesn't. This is not something open to discussion, unless you're a paid shill.
"Powered by OpenAI’s GPT-4o model by default...tests repeatedly showed that the AI toy dropped its guardrails the longer a conversation went on, until hitting rock bottom on incredibly disturbing topics."
AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children
FoloToy says it's suspended sales of its AI-powered teddy bear after researchers found it gave wildly inappropriate and dangerous answers.
futurism.com
November 17, 2025 at 5:18 AM
Reposted by AI Dystopia
ICYMI: Microsoft’s charge “implies a more than $12 billion quarterly loss at OpenAI,” said Firoz Valliji, an analyst at Bernstein.

That “would mark one of the largest single-quarter losses for a tech company in history.”

@jessefelder.bsky.social $MSFT
www.wsj.com/livecoverage...
November 7, 2025 at 7:17 PM
Reposted by AI Dystopia
Everything about it is just pure poetry, down to the mangled curtain.
Russia presented its human-like AI robot. It fell down as it walked onto the stage.
November 12, 2025 at 9:51 AM
Reposted by AI Dystopia
> “If I’m lucky enough to be able to continue practicing before the appellate court, I’m not going to do it again,” Panichi told the court in July, just before getting hit with two more rounds of sanctions in August.
While the simplest approach would be to admit the AI use, act humble, and self-report the error to relevant legal associations, not every lawyer takes the path of least resistance.
You won’t believe the excuses lawyers have after getting busted for using AI
I got hacked; I lost my login; it was a rough draft; toggling windows is hard.
arstechnica.com
November 11, 2025 at 8:33 PM