Without executive-control networks like those in the human brain, transformer attention alone can't deliver understanding or reasoning. 6/
However, if we prompt LLMs to read the word instead, performance is near 100% on the same stimuli.
So the failure isn't a matter of hitting context-length or image-processing limits. 5/
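If you want to poke at this yourself, here's a minimal sketch of a Stroop-style conflict probe. Assumptions on my part: the stimuli are rendered as text (the actual tests may use images), and ask_llm() is a placeholder that simply "reads the word", so the printed numbers only illustrate the scoring harness, not any real model's behavior.

```python
# Minimal sketch of a Stroop-style conflict probe (text-rendered stimuli,
# placeholder model call; not the original experimental setup).
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_stimulus():
    """One incongruent item: the word names one color, the 'ink' is another."""
    word = random.choice(COLORS)
    ink = random.choice([c for c in COLORS if c != word])
    return word, ink

def ask_llm(prompt: str) -> str:
    """Placeholder model call. This dummy always 'reads the word' (the quoted
    token in the prompt), mimicking the bottom-up bias described above.
    Swap in a real model API to run the probe for real."""
    return prompt.split('"')[1].lower()

def run_trials(n: int = 100, task: str = "ink") -> float:
    """task='ink' asks for the ink color (the conflict condition);
    task='word' asks the model to simply read the word (the easy condition)."""
    correct = 0
    for _ in range(n):
        word, ink = make_stimulus()
        stim = f'The word "{word.upper()}" is printed in {ink} ink.'
        if task == "ink":
            prompt, target = stim + " What color is the ink?", ink
        else:
            prompt, target = stim + " What word is printed?", word
        correct += int(target in ask_llm(prompt))
    return correct / n

print("read-the-word accuracy:", run_trials(task="word"))  # ~1.0 for this dummy
print("name-the-ink accuracy:", run_trials(task="ink"))    # ~0.0 for this dummy
```

Same stimuli, different instruction; that contrast is the point of the comparison above.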
In contrast, humans maintain roughly 97% accuracy across 1,500 words. 4/
Transformer attention FAILS at handling basic conflicting information:
This bottom-up flaw is a fundamental limitation & suggests that the current transformer architecture may be a dead end for AGI. 1/
How are infants using embodiment to categorize words into subjects, objects, and verbs without explicit instruction?