Anthony Moser
@anthonymoser.com
(He/Him) Folk Technologist • anthony.moser@gmail.com • N4EJ • http://www.BetterDataPortal.com • baker in The FOIA Bakery • http://publicdatatools.com • http://deseguys.com • #1 on hackernews when you search for "hater"
Pinned
I considered writing a long carefully constructed argument laying out the harms and limitations of AI, but instead I wrote about being a hater. Only humans can be haters.
I Am An AI Hater
I am an AI hater. This is considered rude, but I do not care, because I am a hater.
anthonymoser.github.io
"The judge issued an opinion" cool I issue opinions all the time. I'm issuing an opinion right now
Unless people choose to listen to them, or they can send somebody who will *make you* do what they say, a judge is just a person in a robe with opinions
February 10, 2026 at 9:35 PM
Human beings can understand many ways symbols may be related.

LLMs only model one: these things often appear together. This sequence is in some sense likely.

Likelihood is not sufficient
February 10, 2026 at 6:02 PM
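(A toy sketch of what that means: in a purely statistical word model, "likely" reduces to "these tokens often appear next to each other." The corpus, counts, and function below are made-up illustrations, not taken from the posts or from any real model, which works at vastly larger scale but on the same kind of signal.)

```python
from collections import Counter

# Made-up miniature corpus, purely for illustration.
corpus = ("the doctor is listening . the doctor is paying attention . "
          "the model is guessing").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def sequence_likelihood(tokens):
    """Score a sequence only by how often adjacent pairs co-occur in the corpus."""
    score = 1.0
    for prev, nxt in zip(tokens, tokens[1:]):
        # Estimated P(nxt | prev): pure co-occurrence frequency, nothing about meaning.
        score *= bigrams[(prev, nxt)] / unigrams[prev] if unigrams[prev] else 0.0
    return score

print(sequence_likelihood("the doctor is listening".split()))  # ~0.22: "likely"
print(sequence_likelihood("the listening is doctor".split()))  # 0.0: "unlikely"
```

Nothing in that score knows what a doctor is or what listening is; it only tracks which symbols tend to sit next to which.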
Also the study I originally quoted literally found you're better off just searching or trusting your own judgment
February 10, 2026 at 6:00 PM
I'm not blaming people who fall for it, but I'm saying it's a harmful trick and we should treat it as such.

It seems like you're saying the harmful trick is comparable to the experience of seeking medical care in our dysfunctional non-system
February 10, 2026 at 5:57 PM
But you're suggesting that we should validate that manipulation instead of exposing it as a harmful trick
February 10, 2026 at 5:55 PM
Reposted by Anthony Moser
This is key, and why relying on "lived experience" of these products is misguided & dangerous. We should listen to people's lived experience of oppression in healthcare, & of what their needs are. But that does not mean taking people's impressions of the cognitive hazard machine at face value
an LLM is effectively hijacking those cues.

people *feel* like it's listening.
it is not listening; it is a statistical word model.

people *feel* like it's paying attention.
it is not paying attention; it has no attention to pay.

it tricks them into trusting the output
February 10, 2026 at 5:44 PM
but an LLM is neither a community nor a connection to another human being so it cannot offer that value
February 10, 2026 at 5:33 PM
that's completely true, and in a human being, not shaming someone is an indicator that you have a good doctor paying attention to you

in a large language model it does not mean the same thing
February 10, 2026 at 5:29 PM
it is using the cues we associate with trustworthy humans to make people think the model is trustworthy

but those cues do not mean the same thing because the language model is not a person.

and the advice is often dangerously incoherent
My mom and Dr. DeepSeek
In China and around the world, the sick and lonely turn to AI.
restofworld.org
February 10, 2026 at 5:28 PM
an LLM is effectively hijacking those cues.

people *feel* like it's listening.
it is not listening; it is a statistical word model.

people *feel* like it's paying attention.
it is not paying attention; it has no attention to pay.

it tricks them into trusting the output
February 10, 2026 at 5:24 PM
the thing about not shaming and being patient is that in a human interaction, those are cues

they tell you somebody is paying attention to you, really listening, trying to understand with an open mind what you're experiencing.

those things are good because they are *indicators* of a good thing
February 10, 2026 at 5:24 PM
do you think the statistical models of word frequency now understand the concepts the words refer to?
February 10, 2026 at 4:15 PM
i am certain some of y'all who follow me are in the middle of this venn diagram
If anyone has a grade 1-5 teacher in your network who is a Luddite or AI Hater and who would be willing to be interviewed for Season 2 of the podcast, please put them in touch! Our DMs are open. The guest can be anonymous, as we know that it can be dangerous to speak out against AI in these times. 😞
February 10, 2026 at 4:08 PM
astounding
February 10, 2026 at 4:01 PM
i'm going live
where?
out there. in the world. you know, live. i'll be back later
February 10, 2026 at 3:47 PM
lol I had the same reaction
As improv comedy groups enter the operating room, reports arise of botched surgeries and misidentified body parts
“Researchers from Johns Hopkins, Georgetown and Yale universities recently found that 60 FDA-authorized medical devices using AI were linked to 182 product recalls, according to a research letter published in the JAMA Health Forum in August.”
February 10, 2026 at 3:37 PM
indeed it seems like it would be profoundly unhelpful for any question that isn't about how often words occur together
February 10, 2026 at 3:28 PM
Tbh I don't think we'd see this level of people using it for self diagnosis if the ELIZA effect wasn't so powerful.

People don't ask image generators to draw a picture of what's wrong with their body.

The reflexive assumption is that text = intention = mind = understanding
February 10, 2026 at 2:52 PM
I think it's still a bad approach, and usually paired with the assumption of constrained resources.

Stuff like Simple Wikipedia is a good example of trying to build an accessible context for making sense of things. And that's far better than statistical translation of jargon into plain language
February 10, 2026 at 2:48 PM
"Two of the robots - One and Two - were stationary, but a third named Three was making a delivery"
February 10, 2026 at 2:41 PM
I agree on that. Is the official style guide to refer to robots by name? Would they have done that if the bots were named 1 and 2?
February 10, 2026 at 2:39 PM
Canalbot
February 10, 2026 at 2:36 PM
this should be obvious
February 10, 2026 at 2:35 PM