I'm still not sold on the whole concept of *reasoning* with LLMs
Hear me out: we have LLMs. They hallucinate. That's a fundamental thing they do, and no amount of alignment or training data sanitization will stop it. If I give you the 100 most well-researched articles from an encyclopedia