Newt
@claynewt.bsky.social
Reposted by Newt
June 19, 2025 at 1:19 AM
Our brains naturally impose meaning on coherent language, and LLMs leverage that. We equate confidence with competence, and risk mistaking polished output for understanding.
Related: Apple’s recent research paper, The Illusion of Thinking:

machinelearning.apple.com/research/ill...
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…
June 9, 2025 at 8:21 AM