Remmelt 🛑
@artificialbodies.net
artificialbodies.net
Stop Big Tech that:
- launders our data
- dehumanises workers
- lobbies for unsafe uses
- pollutes our environment

Short book on how AI corps get destructive:
https://artificialbodies.net/artificial-bodies-preface-7042453348de
Epic dismantling of Bostrom's silly ivory tower ideas.
A critique of Nick Bostrom's (I don't know how else to put it) incredibly dumb new paper in favor of artificial superintelligence. His conclusion is atrocious, dangerous, callous, arrogant, and genocidal, and his argument is patently flawed. I explain why below:
Nick Bostrom’s Pro-Superintelligence Paper Is an Embarrassment
(4,300 words)
www.realtimetechpocalypse.com
February 16, 2026 at 3:01 AM
Reposted by Remmelt 🛑
We need care, and to become skilful at caring.
February 15, 2026 at 9:16 AM
Institutions have failed, and it's not enough to turn against 'Big Tech' or the 'Deep State'.

We need to become clear about what we value, and make collective choices as communities. We need to choose better governance.
February 16, 2026 at 2:08 AM
Reposted by Remmelt 🛑
OpenAI “acknowledged in its own research that LLMs will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.”

You can’t trust chatbots.
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limi...
www.computerworld.com
February 15, 2026 at 8:25 PM
That’s pretty atrocious.

Why talk with people who don’t consider you a feeling human being? Your choice.
Here's an example of the reaction I'm describing, and I appreciate something this particular poster makes clear: they neither understand nor respect boundaries. There's no policing happening here, just my own decisions about who I will have a conversation with.
February 15, 2026 at 9:19 AM
Reposted by Remmelt 🛑
“This feature would pose a grave risk to privacy, safety, and civil liberties and would cause widespread harm to the public…It must not be allowed to reach the market.”
EPIC Urges FTC, States to Block Meta’s Facial Recognition Smart Glasses Plan
EPIC has sent a letter to the Federal Trade Commission and State Attorneys General urging them to quickly investigate and prevent Meta’s plan to add facial recognition and surveillance capabilities...
epic.org
February 14, 2026 at 12:12 AM
Reposted by Remmelt 🛑
The TESCREALists I wrote about (Goertzel), teaming up with pedophiles (Epstein), experimenting on five million Ethiopian children through "digital IDs" which they want for harvesting data to train "AI." My worlds colliding in the worst way.
February 13, 2026 at 1:33 AM
“What happens when atrocities go unnoticed, unpunished, or even tacitly accepted? Impunity does not end violence; it perpetuates it…

[A]ctive war has flared up again in Tigray in 2026.

This has raised the prospect of a renewed full-scale siege. This is evidenced by recent drone attacks”
February 13, 2026 at 5:27 PM
Reposted by Remmelt 🛑
“Largely untethered from sources, the output of AI bots has the capacity to rewrite history, and to therefore change our understanding of ourselves. What do we learn from history if history itself is a fiction?”
Ceci n’est pas une Coca Cola: The Treachery of Images ©2018 Neil Turkewitz
Who is the Pilot of Microsoft’s Copilot? A Cautionary Tale of AI Remaking the Past (& Shaping Our Future) by Neil Turkewitz At a friend’s urging, I asked Microsoft’s Copilot a question about …
medium.com
February 13, 2026 at 1:38 PM
Reposted by Remmelt 🛑
civil society and rights orgs that often keep tabs on and organise campaigns against these evil corporations are overstretched and underfunded. this internal memo from Meta is insidious and sickening
Facebook plans to put facial recognition in its glasses and they think we’re too stupid to fight back.

Their internal memo: “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
Meta Plans to Add Facial Recognition Technology to Its Smart Glasses
www.nytimes.com
February 13, 2026 at 3:08 PM
Covers Palantir’s anti-competitive approach.
Big Tech Plans to Move Fast and Break Democracy | Kairos.fm
In the wake of many ICE killings, Jacob and Igor discuss the value and importance of calling out techno-authoritarianism
kairos.fm
February 13, 2026 at 4:27 PM
Technology is a gift.

Humans wasted that gift on excess—comfort, entertainment, and power games.

We could use tech to serve nature and community.
February 12, 2026 at 9:10 AM
Reposted by Remmelt 🛑
Saying it doesn’t make it so. www.theverge.com/tech/876866/...
February 11, 2026 at 2:23 AM
Reposted by Remmelt 🛑
We are demanding that EU co-legislators reject attempts in the AI Omnibus to remove a key transparency safeguard from the AI Act.

We cannot open a loophole that would let providers exempt themselves from the AI Act’s high risk requirements with no transparency www.accessnow.org/press-releas...
Access Now - A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act
We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed...
www.accessnow.org
February 11, 2026 at 9:47 AM
Dario and Demis keep signalling how sciency they are.

Then they express ungrounded optimism about finding a way through. That ‘AI’ won’t do us in.

They haven’t rigorously reasoned it through at all.
February 11, 2026 at 5:47 PM
Reposted by Remmelt 🛑
But comms at Ring assured me it’s not mass surveillance 🙄
Here's a hidden secret of doorbell cameras, as revealed by Nancy Guthrie's kidnapping. They're recording even when you don't know it, and sending video to company servers even when you don't pay
February 11, 2026 at 5:23 PM
Reposted by Remmelt 🛑
It really sucks that the "merchants of doubt" around "AI" water usage are so good at spreading doubt. Because the water usage *is* bad, and even if data centers somehow magically created additional water, they'd still be bad for all the other reasons that data centers are bad
"Each data center can “drink” as much as an entire community. Yet Texas does not require data center operators to disclose projected water use or report actual consumption"

Again, the talking point that "there's no water issue" is dangerously wrong and if you repeat it, you're helping tech CEOs
Texas is rushing to build massive artificial intelligence infrastructure without a planning system capable of assessing industrial water needs, experts warn. There are currently over 400 data centers operating or under construction in the Lone Star State.
February 10, 2026 at 8:21 PM
Reposted by Remmelt 🛑
I do feel like part of the deal of neoliberalism was supposed to be that we'd accept declining quality of life and deepening inequality, in exchange for relatively less monstrousness from the people in power and for institutions to keep the monstrousness mostly in check. It's...not going great
Souls absolutely atrophied into weak pits of need that they tapped Epstein to fill up.

Pathetic, heartless creatures of vulgar appetite, happy to break bread and do worse with a man who was by all accounts a social and intellectual fraud.

Just to feed those ugly and deliberately harmful appetites.
February 11, 2026 at 2:07 AM
These techbros are equating ‘being human’ with virtual information-processing.

‘AI’ by its nature (given standardised hard parts through which light particles propagate as information signals) could do more virtual processing.

Therefore, ‘AI’ can be more human than us.

1/
February 11, 2026 at 4:11 PM
Reposted by Remmelt 🛑
Students said “they believe the monitoring software has created a sense of fear among students who worry looking something up online can get them into trouble at school, and even if their school district means well, the constant digital surveillance creates more stress, not less.”
Houston schools are using AI to watch student mental health. Experts are worried
Ed tech companies like GoGuardian and Lightspeed Systems offer AI software to detect self-harm. Experts and students are concerned about its accuracy and unintended consequences.
www.houstonchronicle.com
January 31, 2026 at 2:10 PM
Reposted by Remmelt 🛑
They have been steadily increasing the number of Palestinians they kill as the media continues to cover it as a "ceasefire".
January 31, 2026 at 3:10 PM
Reposted by Remmelt 🛑
It's like we're in one of these zombie movies and everyone has been bit by the virus or something.

How is this real life?
February 1, 2026 at 1:26 AM
Reposted by Remmelt 🛑
There is no choice presented here. Mozilla is using contrarian language to sell their own generative AI product.

This isn't rebelling. This isn't listening to the consumer. This is not coming from a humane perspective. This is an attempt at gaining investors and weaning critics away from anti-AI positions.
January 27, 2026 at 10:03 PM
The cheesiest of campaigns to let the rich extract from all of us.
February 2, 2026 at 5:29 AM