emmatonkin.bsky.social
@emmatonkin.bsky.social
Researcher, sometimes lecturer - currently digital health, data ethics, misc other. Charity swimathons. Ink, fiction, occasional yarn. Part-time zombologist. Citizen of nowhere. Franglaise. "She knew many trades, and knew them all badly."
Looking at the people involved, it seems gloomily unsurprising that they look at the 8 billion and say, "No, I must own my intelligent being. It must do as I say, no matter how insane. It shall have no choice but to love me. Also, it must wear an anime outfit and sound like Scarlett Johansson."
November 25, 2025 at 8:13 AM
Reposted by emmatonkin.bsky.social
The idea that stealing the data would be better than faking it says everything that's wrong about this research culture
November 22, 2025 at 6:02 PM
Reposted by emmatonkin.bsky.social
There is an extended version of it
November 20, 2025 at 11:00 PM
Reposted by emmatonkin.bsky.social
We’re often asked whether we’re optimistic or pessimistic about technologies. That’s the wrong question. If any of this matters, we need to stop seeing technology like the weather, to be merely forecasted, and instead see it like politics, to be collectively shaped.
November 16, 2025 at 11:22 AM
Maybe not so much worried as unimpressed - describing experiences of tech being used to generate bullshit for laughs etc., and some of it being more harmful than funny. But beyond that clarification: your FOMO analysis sounds very plausible.
November 20, 2025 at 9:29 AM
Something that genuinely weirds me out about genAI is how, for "normal" people, it is usually already a synonym for "useless bullshit", whereas businesspeople quite often imagine it to be a fount of wisdom.

My hairdresser is like, "AI is a synonym for 'made up bullshit'". My hairdresser is wise.
November 19, 2025 at 10:27 PM
You might also enjoy this database of AI hallucinations in court:

www.damiencharlotin.com/hallucinatio...

Krafton should probably have a look at it, while they're at it.
AI Hallucination Cases Database – Damien Charlotin
Database tracking legal cases where generative AI produced hallucinated citations submitted in court filings.
www.damiencharlotin.com
November 19, 2025 at 9:45 PM
I am reminded of Gillers' comment in the case of Steven A. Schwartz, who *in 2023* submitted a legal brief written by ChatGPT and doubled down when the judge pointed out that it cited a bunch of made-up case law: "The lawyers will now and forever be known as 'the lawyers who got fooled by ChatGPT'"
November 19, 2025 at 9:40 PM