Alexander Slugworth
alexanderslugworth.bsky.social
I just want everybody to be okay.
I know I opened with "Neat!" but this is a pretty worrying state of affairs.

Tons of insufficiently conscientious people make decisions based 𝘦𝘯𝘵𝘪𝘳𝘦𝘭𝘺 upon advice from LLMs.

It's bad enough to have false content out there, but LLM citations lend it harmful authority. xkcd.com/978/

6/6
September 3, 2025 at 5:13 PM
For what it's worth, I sent my same initial prompt to GPT-5. It came across the exact same blog post, and evaluated the same false claim about Custom GPT Actions being deprecated.

Unlike Claude Opus 4.1, GPT-5 correctly identified OpenAI's documentation as the authoritative source.

5/6
September 3, 2025 at 5:13 PM
The citation it gave for its false claim was a blog post: www.lindy.ai/blog/custom-...

I clicked the link.

The content is immediately recognizable as having been written by an LLM. Its very first sentence, simply by describing the feature in the past tense, propagates a hallucinated falsehood.

4/6
September 3, 2025 at 5:13 PM
Claude's claim was false. Custom GPTs can currently perform actions (i.e., send API requests to third-party services).

I knew this, and was confused as to how Claude had managed to arrive at this incorrect conclusion even after searching. After all, it had even found OpenAI's official documentation!

3/6
September 3, 2025 at 5:13 PM
I was writing some content about Custom GPTs, and I asked Claude Opus 4.1 to validate the accuracy of my technical claims. I specified that it should use its search tool.

It searched for relevant information, and then flagged my content for describing a deprecated feature: Custom GPT actions.

2/6
September 3, 2025 at 5:13 PM
You're the third person switching from Twitter whose content I particularly like, and the first mutual among my tiny Twitter circle.

I want federated networks to gain steam. This small threshold was enough for me to join and do my small part in service of that goal.
November 14, 2024 at 3:50 PM