John Hummel
jehummel.bsky.social
Professor of Psychology and Philosophy at the University of Illinois Urbana-Champaign. I build computational models and conduct experiments to understand how neural computing architectures give rise to symbolic thought.
Sorry. I didn't mean to imply the oldest is the most important. Usually, the oldest is the first to be proven wrong. I was just saying it's therefore worth knowing about if only to avoid bringing it back up as a "new" idea.
September 8, 2025 at 9:14 PM
Those who don't know history are doomed to repeat it. And for those who do know history, it is frustrating (to say the least) to see the same mistakes made over and over.
September 8, 2025 at 5:04 PM
Dude, _nothing_ is ever proven in science. Proof is strictly the purview of logic. More concretely: The limitations of transformers as models of human language use are still being revealed. There will be no time, _ever_, when we can definitively say they have all been demonstrated.
September 8, 2025 at 5:01 PM
This is a big part of why systems trained by back propagation will necessarily fail. To such systems "meaningful properties" are simply "statistical regularities in the training set", which are a degenerate case, in part because they cannot represent meaningful invariants: arxiv.org/pdf/2508.15082
September 7, 2025 at 4:58 PM
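The contrast between statistical regularities and meaningful invariants can be sketched in a few lines. This is a hypothetical toy, not anyone's actual model: a learner that only memorizes training-set statistics has nothing to say about novel items, while a learner that represents the relation itself (here, same vs. different) generalizes immediately.

```python
# Toy contrast (hypothetical): a purely statistical learner vs. a
# learner that represents the invariant "same vs. different".
from collections import Counter

train = [(("A", "A"), "same"), (("B", "B"), "same"),
         (("A", "B"), "diff"), (("B", "A"), "diff")]

# "Statistical" model: memorize label counts for each observed pair.
stats = Counter()
for pair, label in train:
    stats[(pair, label)] += 1

def stat_predict(pair):
    """Answer from training-set co-occurrence counts alone."""
    same, diff = stats[(pair, "same")], stats[(pair, "diff")]
    if same > diff:
        return "same"
    if diff > same:
        return "diff"
    return None  # no statistics for this pair

# Invariant-based model: represents the relation itself.
def rule_predict(pair):
    return "same" if pair[0] == pair[1] else "diff"

# Novel tokens never seen in training:
print(stat_predict(("C", "C")))  # None: no counts to fall back on
print(rule_predict(("C", "C")))  # "same": the invariant generalizes
```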
This is a big part of why systems based on back propagation will always fail. To them, "meaningful property" means "statistical regularity", but statistical regularities are a degenerate case of meaningful properties. This is why modern AI is barking up the entirely wrong tree. See arxiv.org/pdf/2508.15082
September 7, 2025 at 4:54 PM
And a distributed representation of a task may consist of localist representations of its components. The important question is not distributed vs. localist but whether the neurons code for meaningful properties of the system's universe. If yes, then you have biological intelligence; if no, then you have a deep net or an LLM.
September 7, 2025 at 4:50 PM
The question of whether a representation is distributed (d) or localist (l) is ill-posed. You have to specify d or l _with respect to what_. A "distributed" representation of an object may consist of neurons that locally code the object's properties.
September 7, 2025 at 4:49 PM
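The relativity point above can be sketched in a few lines (a hypothetical toy, with made-up units and properties): a pattern that is distributed with respect to objects can be built entirely from units that are localist with respect to properties.

```python
# Hypothetical sketch: whether a code is "distributed" or "localist"
# depends on what you ask the question with respect to.

# Each unit locally codes exactly ONE property
# (localist w.r.t. properties).
PROPERTY_UNITS = ["red", "round", "small", "shiny", "edible"]

def represent(properties):
    """Return an activation pattern over the property units."""
    return [1.0 if p in properties else 0.0 for p in PROPERTY_UNITS]

apple = represent({"red", "round", "small", "shiny", "edible"})
cherry = represent({"red", "round", "small", "shiny"})

# W.r.t. the OBJECT "apple", the very same code is distributed:
# many units are active, and those units are reused for "cherry".
shared = [a * c for a, c in zip(apple, cherry)]
print(sum(apple))   # 5.0 -> many active units per object
print(sum(shared))  # 4.0 -> units shared across objects
```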
I do not think this means what you seem to think it means. You seem to be celebrating the refutation of a claim absolutely no one makes. The "claim" you're refuting is disproved by every fMRI study that has ever been done. As Bowers noted, no one claims just a single neuron responds to anything.
September 4, 2025 at 6:16 PM