@chadanlo.bsky.social
Reposted
When a chatbot gets something wrong, it’s not because it made an error. It’s because on that roll of the dice, it happened to string together a group of words that, when read by a human, represents something false. But it was working entirely as designed. It was supposed to make a sentence & it did.
June 19, 2025 at 11:28 AM
Reposted
Chatbots (LLMs) do not know facts and are not designed to answer factual questions accurately. They are designed to find and mimic patterns of words, probabilistically. When they're "right," it's because correct things are often written down, so those patterns are frequent. That's all.
June 19, 2025 at 11:21 AM
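For readers who want the mechanism spelled out: the "roll of the dice" both posts describe is next-token sampling. A minimal sketch of the idea, using an invented toy vocabulary and made-up probabilities rather than any real model:

import random

# Toy illustration of the "roll of the dice" described above: a language model
# assigns a probability to each candidate next word and samples one at random.
# The words and probabilities here are invented purely for illustration.
next_word_probs = {
    "Paris": 0.72,   # frequent in training text, so it gets high probability
    "Lyon": 0.15,
    "Berlin": 0.08,  # plausible-looking but wrong completions still get some mass
    "Narnia": 0.05,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of France is"
for _ in range(5):
    print(prompt, sample_next_word(next_word_probs))

Most runs land on the "right" word only because it carries the most weight; a wrong completion comes from exactly the same procedure, just a different draw.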