Craig Abbott
craigabbott.co.uk
Craig Abbott
@craigabbott.co.uk
Principal Accessibility Specialist at TetraLogical. Former Head of Accessibility at DWP. Writer. Speaker. Coder. Wildlife Photographer. Cat botherer. ADHD. Autistic. He/They. https://www.craigabbott.co.uk
I mean... Yeah... It is that bad.
November 7, 2025 at 6:57 PM
Wait... What? A Christmas tree... Now?
Alt: Ron Burgundy from Anchor Man, stood at a podium, in a suit and tie, shaking his head with the captions: Way too early for that, way too early.
media.tenor.com
November 7, 2025 at 6:49 PM
Haha. It is ridiculously fun to drive, so it’s really annoying that Elon has ruined it for me!
November 4, 2025 at 11:32 AM
These window stickers do make it a little more tolerable to drive though. 😄
November 4, 2025 at 7:39 AM
At this point, I can’t tell if I laugh because it’s hysterical, or if it’s me that’s hysterical!
October 28, 2025 at 7:43 AM
Well, this post just got way more terrifying. Now there is evidence AI can, and will, resist shutdown commands. 😅 bsky.app/profile/crai...
October 28, 2025 at 7:15 AM
I have reported it, but Apple provide no way to receive updates or track progress. So I guess I’ll have to just keep hoping with every update haha
October 22, 2025 at 7:34 PM
Thanks, but unfortunately this doesn't seem to work either. 😩
October 22, 2025 at 10:54 AM
So yeah, guess this means X continues to be a terrible place to get information, for both people and machines!

Sources:
llm-brain-rot.github.io
www.business-standard.com/technology/t...
cryptoslate.com/the-un-dead-...
www.indiatoday.in/technology/n...
LLMs Can Get Brain Rot
New finding: LLMs Can Get Brain Rot if being fed trivial, engaging Twitter/X content.
llm-brain-rot.github.io
October 22, 2025 at 6:26 AM
The research warns we’re headed towards a “Zombie Internet”, where LLMs are training on dodgy content, generated by other compromised LLMs, fully embracing the term “viral content”.
October 22, 2025 at 6:26 AM
What’s fascinating is that LLMs have been designed to be so… human… that they seem to experience the same brain rot and reasoning regression we’re seeing in people, who are often being manipulated and conditioned by the same awful content.
October 22, 2025 at 6:26 AM
Attempts to fix them by retraining them on high-quality data didn’t work. The viral posts caused permanent damage to their internal knowledge base, and fine-tuning doesn’t seem able to reverse it.
October 22, 2025 at 6:26 AM
What’s really concerning is that the models began “thought skipping”, meaning they cut corners when using reasoning capabilities. Monitoring showed they also started to score higher for narcissism and psychopathy traits, and lower on “agreeableness”.
October 22, 2025 at 6:26 AM
Using “viral” content from posts on X (obviously), models trained exclusively on low-accuracy but popular content dropped in accuracy from 74.9% to 57.2%, and their long-context understanding fell from 84.4% to 52.3%.
October 22, 2025 at 6:26 AM
Yeah. I have every setting in typing feedback toggled off ☹️
October 20, 2025 at 8:27 PM
The author, Terry DeYoung, argues that from a Christian perspective, a keystone of American culture, one that worships a god of justice and teaches people to stand with the oppressed, it’s difficult to understand how people can support the Trump administration in these actions.
October 9, 2025 at 6:08 AM