@rafabotero.bsky.social
Reposted
I'd love to be proven wrong about LLMs. Can someone explain how an LLM answers a question correctly without an approximate answer in its training data or an external lookup to an existing knowledge base?
A lot of your takes pop up in my feed, and they're incorrect, like your complaint about agent frameworks.

There are studies on more advanced LLMs that test for knowledge and intelligence by specifically using tests that aren't in the training data.

So no, they’re not biased search engines.
March 24, 2025 at 11:28 PM