Memory varies across people, but some items are intrinsically more memorable.
Jaegle et al. (2019) showed that a simple geometric property of representations, the L2 norm (vector magnitude), positively correlates with image memorability.
Using the massive THINGS dataset (>26k images, 13k participants), we replicated that the L2 norm of CNN representations predicts image memorability.
📊 Larger representational magnitude → higher memorability.
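In code, this analysis boils down to one norm per item and one rank correlation. A minimal sketch in Python, assuming precomputed CNN embeddings and per-image memorability scores; the file names and the choice of Spearman correlation are illustrative, not taken from the thread:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical inputs: one CNN embedding per image (e.g., a late layer),
# plus a behavioral memorability score per image.
embeddings = np.load("things_cnn_embeddings.npy")   # shape: (n_images, n_dims)
memorability = np.load("things_memorability.npy")   # shape: (n_images,)

# L2 norm (vector magnitude) of each image's representation:
# ||x||_2 = sqrt(sum_i x_i^2)
l2_norms = np.linalg.norm(embeddings, axis=1)

# Rank correlation between representational magnitude and memorability.
rho, p = spearmanr(l2_norms, memorability)
print(f"Spearman rho = {rho:.3f}, p = {p:.2g}")
```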
When we turned to words, the result was striking:
Across 3 big datasets, words with higher vector magnitude in their embeddings were consistently more memorable, revealing the same L2-norm principle.
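The word version is the same computation over a word-embedding space. A toy sketch assuming GloVe vectors loaded via gensim and a hypothetical word-to-memorability table; the model choice and the scores below are placeholders, not the thread's data:

```python
import numpy as np
import gensim.downloader as api
from scipy.stats import spearmanr

# Toy memorability scores (placeholder values, not real data).
word_mem = {"elephant": 0.91, "umbrella": 0.84, "thing": 0.55}

# Any static word-embedding space exposes a per-word L2 norm;
# GloVe is just one convenient choice.
glove = api.load("glove-wiki-gigaword-300")

words = [w for w in word_mem if w in glove]
norms = np.array([np.linalg.norm(glove[w]) for w in words])
scores = np.array([word_mem[w] for w in words])

rho, p = spearmanr(norms, scores)
print(f"Spearman rho = {rho:.3f} (n = {len(words)})")
```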
This effect held even after controlling for word frequency, valence, and size.
So representational magnitude is not just a proxy for familiar or emotionally loaded words.
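One way to run such a control analysis is a regression with the covariates in the model, so the norm only gets credit for variance the controls cannot explain. A sketch assuming a hypothetical per-word CSV with the relevant columns; statsmodels is my choice here, not necessarily what the authors used:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-word table with the control variables named above.
df = pd.read_csv("word_norms.csv")  # columns: memorability, l2_norm,
                                    # frequency, valence, size

# Z-score everything so coefficients are comparable across predictors.
z = (df - df.mean()) / df.std()

# If the l2_norm coefficient stays reliably positive with the controls
# in the model, magnitude is not just a proxy for those variables.
X = sm.add_constant(z[["l2_norm", "frequency", "valence", "size"]])
fit = sm.OLS(z["memorability"], X).fit()
print(fit.summary())
```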
Using a recent dataset with >600 voice clips, we tested whether wav2vec embeddings showed the same effect.
👉 They didn’t. No consistent link between L2 norm and voice memorability.
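For audio, the extra step is reducing each clip to a single vector before taking its norm. A sketch using the Hugging Face wav2vec 2.0 interface; the base checkpoint and mean-pooling over time are assumptions, since the thread does not specify which layer or pooling was used:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Checkpoint and pooling choices are illustrative assumptions.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def clip_l2_norm(waveform, sampling_rate=16000):
    """L2 norm of a time-averaged wav2vec embedding for one voice clip."""
    inputs = extractor(waveform, sampling_rate=sampling_rate,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, frames, 768)
    pooled = hidden.mean(dim=1).squeeze(0)          # average over time
    return pooled.norm(p=2).item()
```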
✅ Robust effect for images
✅ Robust effect for words
❌ No effect for voices
→ Memorability seems tied to how strongly items project onto meaningful representational dimensions, rather than holding across all sensory domains.
Memory has a geometry.
The magnitude of representations predicts memorability across vision and language, providing a new lens for understanding why some stimuli are memorable.
An item’s vector length in representational space predicts how likely it is to stick in your mind, at least for images and words.