David Amadeus Vogelsang
@davogelsang.bsky.social
Lecturer in Brain & Cognition at the University of Amsterdam
Thank you; and that is an interesting question. My prediction is that it may not work so well (would be fun to test)
September 18, 2025 at 3:56 PM
Thank you for your reply. Unfortunately, we did not examine within-category effects, but that would certainly be interesting to do
September 18, 2025 at 3:51 PM
Our takeaway:
Memory has a geometry.
The magnitude of representations predicts memorability across vision and language, providing a new lens for understanding why some stimuli are memorable.
September 18, 2025 at 10:00 AM
Think of memory as geometry:
An item’s vector length in representational space predicts how likely it is to stick in your mind — at least for images and words.
September 18, 2025 at 10:00 AM
So what did we learn?
✅ Robust effect for images
✅ Robust effect for words
❌ No effect for voices
→ Memorability seems tied to how strongly items project onto meaningful representational dimensions, but the link does not hold in every sensory domain.
September 18, 2025 at 9:59 AM
Then we asked: does this principle also apply to voices?
Using a recent dataset with >600 voice clips, we tested whether wav2vec embeddings showed the same effect.
👉 They didn’t. No consistent link between L2 norm and voice memorability.
September 18, 2025 at 9:59 AM
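A sketch of extracting one voice clip's wav2vec embedding and its L2 norm; the checkpoint, the mean-pooling over time, and the noise input are assumptions about a pipeline the thread does not spell out:

```python
# Sketch: one voice clip's wav2vec embedding and its L2 norm. The checkpoint,
# the mean-pooling over time, and the noise input are assumptions; the study's
# exact pipeline is not specified in the thread.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

waveform = torch.randn(16000 * 3)               # 3 s of stand-in audio at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, time, dim)
clip_vec = hidden.mean(dim=1).squeeze(0)        # pool over time to one clip vector
print(clip_vec.norm(p=2).item())                # the clip's representational magnitude
```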
And crucially:
This effect held even after controlling for word frequency, valence, and size.
So representational magnitude is not just a proxy for familiar or emotionally loaded words.
September 18, 2025 at 9:58 AM
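One way such a control analysis could be implemented is a multiple regression with the covariates entered alongside the norm; all values and column names below are hypothetical stand-ins, not the paper's actual variables:

```python
# Sketch: regress memorability on the embedding norm plus the covariates
# mentioned in the post. All values and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "memorability": rng.uniform(0.0, 1.0, n),
    "l2_norm":      rng.uniform(1.0, 10.0, n),
    "frequency":    rng.uniform(0.0, 7.0, n),  # e.g. log word frequency
    "valence":      rng.uniform(1.0, 9.0, n),  # rated emotional valence
    "size":         rng.uniform(1.0, 7.0, n),  # rated size
})
fit = smf.ols("memorability ~ l2_norm + frequency + valence + size", data=df).fit()
# A surviving l2_norm coefficient would indicate the effect is not a covariate proxy.
print(fit.params["l2_norm"], fit.pvalues["l2_norm"])
```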
Then we asked: is this just a visual trick, or is it present in other domains as well?
When we turned to words, the result was striking:
Across three large datasets, words with a higher vector magnitude in embedding space were consistently more memorable, revealing the same L2 norm principle.
September 18, 2025 at 9:58 AM
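The word analysis can be sketched the same way as the image one; the embeddings and memorability scores below are random stand-ins for pretrained word vectors and behavioral word-memory data:

```python
# Sketch: vector magnitude of word embeddings vs. word memorability.
# Embeddings and scores are random stand-ins; in practice these would come
# from pretrained word vectors and a behavioral word-memory dataset.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
embeddings = rng.standard_normal((5000, 300))  # stand-in word vectors, one row per word
memorability = rng.uniform(0.0, 1.0, 5000)     # stand-in word memorability scores

norms = np.linalg.norm(embeddings, axis=1)
rho, _ = spearmanr(norms, memorability)
print(f"Spearman rho = {rho:.3f}")             # repeated per dataset in the actual analysis
```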
In CNNs, the effect is strongest in later layers, where abstract, conceptual features are represented.
📊 Larger representational magnitude → higher memorability.
September 18, 2025 at 9:56 AM
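A sketch of how per-layer representational magnitudes could be read out; the untrained ResNet-18 backbone, the four chosen stages, and the random input are illustrative assumptions, not the study's actual network or layers:

```python
# Sketch: representational magnitude of one input at several CNN depths.
# ResNet-18 (untrained), the four residual stages, and the random input are
# illustrative assumptions, not the study's actual setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
acts = {}
for name in ("layer1", "layer2", "layer3", "layer4"):
    getattr(model, name).register_forward_hook(
        lambda mod, inp, out, name=name: acts.__setitem__(name, out.detach())
    )

img = torch.randn(1, 3, 224, 224)              # stand-in for a preprocessed image
with torch.no_grad():
    model(img)

for name, a in acts.items():
    print(name, a.flatten().norm(p=2).item())  # L2 norm of the layer's activation
```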
We first wanted to examine whether we could replicate this L2 norm effect as reported by Jaegle et al. (2019).
Using the massive THINGS dataset (>26k images, 13k participants), we replicated that the L2 norm of CNN representations predicts image memorability.
September 18, 2025 at 9:56 AM
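A hedged sketch of the correlational analysis described here; the features and memorability scores are random stand-ins for the THINGS images' CNN activations and behavioral scores, and the Spearman rank correlation is an assumed choice of statistic, not necessarily the paper's exact test:

```python
# Sketch: correlate per-image L2 norms of CNN representations with memorability.
# `features` and `memorability` are random stand-ins for THINGS CNN activations
# and behavioral memorability scores; real data would replace them.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features = rng.standard_normal((26000, 512))   # stand-in CNN activations, one row per image
memorability = rng.uniform(0.0, 1.0, 26000)    # stand-in memorability scores

norms = np.linalg.norm(features, axis=1)       # one representational magnitude per image
rho, p = spearmanr(norms, memorability)        # rank correlation (an assumed statistic)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```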
Why do we remember some things better than others?
Memory varies across people, but some items are intrinsically more memorable.
Jaegle et al. (2019) showed that a simple geometric property of representations, the L2 norm (vector magnitude), positively correlates with image memorability.
September 18, 2025 at 9:55 AM
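For reference, the L2 norm is just the Euclidean length of a feature vector. A minimal numpy sketch, with a random vector standing in for a real stimulus representation:

```python
# L2 norm (vector magnitude) of a representation: ||x||_2 = sqrt(sum_i x_i^2).
# The vector here is random, standing in for a real stimulus representation.
import numpy as np

x = np.random.randn(512)                       # hypothetical 512-d feature vector
l2_norm = np.sqrt(np.sum(x ** 2))              # the definition
assert np.isclose(l2_norm, np.linalg.norm(x))  # matches numpy's built-in norm
print(l2_norm)
```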