@suketupatel.bsky.social
While 'Attention is All You Need' revolutionized AI, our findings suggest we're missing a crucial piece of foundational architecture.

Without executive-control networks like those in the human brain, transformer attention alone can't deliver understanding or reasoning. 6/
January 24, 2025 at 8:05 PM
As the word-list length increases to 40 (incongruent), performance rapidly degrades to 15% for 4o and 24% for Sonnet 3.5.

However, if we prompt LLMs to read the word, performance is near 100% for the same stimuli.

The context length does not hit any image processing limits. 5/
January 24, 2025 at 8:05 PM
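The list-length effect above can be sketched in code. This is a minimal illustrative harness, not the preprint's actual evaluation: the stimulus generator and `score` function are hypothetical names I'm introducing. In the incongruent condition every printed color word differs from its ink color, so a model that "reads the word" scores 0% on ink naming but 100% on word reading, matching the dissociation described in this post.

```python
import random

COLORS = ["red", "green", "blue", "yellow", "purple", "orange"]

def make_stroop_list(n, condition="incongruent", seed=0):
    """Build n (word, ink_color) pairs for a Stroop-style stimulus list."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        word = rng.choice(COLORS)
        if condition == "congruent":
            ink = word  # word matches its ink color
        else:
            # incongruent: ink color always differs from the printed word
            ink = rng.choice([c for c in COLORS if c != word])
        pairs.append((word, ink))
    return pairs

def score(responses, pairs, task="name_ink"):
    """Accuracy against the ink color (default) or the printed word."""
    key = 1 if task == "name_ink" else 0
    correct = sum(r == p[key] for r, p in zip(responses, pairs))
    return correct / len(pairs)
```

A model that always reads the word instead of naming the ink would score `1.0` under `task="read_word"` and `0.0` under `task="name_ink"` on a fully incongruent list, which is the failure mode the thread reports.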
Claude Sonnet 3.5 even recognized the task without an explicit prompt, yet still reaches only 80% accuracy on a 10-word list with all mismatched color-word pairs.

In contrast, humans can maintain 97% for 1500 words. 4/
January 24, 2025 at 8:05 PM
The models have 100% accuracy on matching color-word pairs but plummet to 57-75% when every pair mismatches. Even neutral office terms trip them up (75%). But replace the words with strings of 'X's in the nonword-neutral condition and they're back to 100%. 3/
January 24, 2025 at 8:05 PM
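The four conditions in this post can be made concrete with a small stimulus builder. This is a sketch under my own assumptions: the condition names and the office-word list are hypothetical stand-ins for whatever stimuli the preprint actually used.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]
NEUTRAL_WORDS = ["desk", "chair", "stapler", "folder"]  # hypothetical office terms

def make_stimulus(condition, rng=None):
    """Return one (text, ink_color) pair for a given Stroop condition."""
    rng = rng or random.Random()
    ink = rng.choice(COLORS)
    if condition == "congruent":        # e.g. the word "red" printed in red
        return (ink, ink)
    if condition == "incongruent":      # a different color name than the ink
        return (rng.choice([c for c in COLORS if c != ink]), ink)
    if condition == "word_neutral":     # a non-color word, still meaningful
        return (rng.choice(NEUTRAL_WORDS), ink)
    if condition == "nonword_neutral":  # meaningless string of X's
        return ("XXXX", ink)
    raise ValueError(f"unknown condition: {condition}")
```

The pattern the post describes (perfect on congruent and nonword-neutral, degraded on incongruent and even word-neutral) suggests any meaningful text captures attention, not just conflicting color names.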
🚨Preprint: Deficient Executive Control in Transformer Attention

Transformer attention FAILS at handling basic conflicting information.

This bottom-up flaw is a fundamental limitation & suggests that the current transformer architecture may be a dead end for AGI. 1/
January 24, 2025 at 8:05 PM
Parts of speech are embodied concepts representing fundamental aspects of agency: pronouns identify actors, nouns categorize them, and verbs capture their intentions and actions. These components combine recursively to generate infinite expressions of agency.
November 27, 2024 at 7:53 PM
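The claim that these components "combine recursively to generate infinite expressions" can be illustrated with a toy context-free grammar. This is my own minimal example, not from the post: pronouns and nouns fill the actor slot, verbs fill the action slot, and a verb like "thinks that" re-embeds a whole sentence, which is what makes the space of expressions unbounded.

```python
import random

# Toy grammar: a sentence (S) is an actor (NP) plus an action (VP),
# and one VP option recursively embeds another full sentence.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["she"], ["he"], ["the child"], ["the dog"]],
    "VP": [["runs"], ["sleeps"], ["thinks that", "S"]],  # recursion here
}

def generate(symbol="S", rng=None, depth=0, max_depth=4):
    """Randomly expand `symbol`; cap recursion depth so generation halts."""
    rng = rng or random.Random(0)
    if symbol not in GRAMMAR:
        return symbol  # terminal word(s)
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # past the cap, drop expansions that would re-embed a sentence
        options = [o for o in options if "S" not in o] or options
    expansion = rng.choice(options)
    return " ".join(generate(s, rng, depth + 1, max_depth) for s in expansion)
```

Because the `S` rule can reappear inside `VP`, even this four-symbol grammar generates arbitrarily nested expressions of agency ("she thinks that the dog thinks that the child runs"), mirroring the recursive combination described above.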
How did we gradually accumulate the proto-language skills to then fully acquire grammar and lexicon?

How are infants using embodiment to categorize words into subjects, objects, and verbs without explicit instruction?
November 27, 2024 at 7:53 PM