David Amadeus Vogelsang
davogelsang.bsky.social
Lecturer in Brain & Cognition at the University of Amsterdam
Then we asked: does this principle also apply to voices?
Using a recent dataset with >600 voice clips, we tested whether wav2vec embeddings showed the same effect.
👉 They didn’t. No consistent link between L2 norm and voice memorability.
September 18, 2025 at 9:59 AM
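A practical detail behind the voice test above: a wav2vec-style model emits one embedding per audio frame, so a clip-level vector has to be pooled over time before its L2 norm can be taken. The thread doesn't say how pooling was done; mean-pooling is one common choice, and the data below is simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated frame-level embeddings standing in for wav2vec output
# (~5 s of audio at ~50 frames/s; real features would come from the model).
n_frames, dim = 249, 768
frame_embeddings = rng.normal(size=(n_frames, dim))

# Mean-pool over time to get one vector per clip, then take its magnitude.
clip_vector = frame_embeddings.mean(axis=0)
clip_l2 = float(np.linalg.norm(clip_vector))
print(f"clip-level L2 norm = {clip_l2:.2f}")
```

Whatever the pooling, the resulting per-clip norms are what would be correlated with memorability, and in the thread's data that correlation came up empty.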
And crucially:
This effect held even after controlling for word frequency, valence, and size.
So representational magnitude is not just a proxy for familiar or emotionally loaded words.
September 18, 2025 at 9:58 AM
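The covariate control described above amounts to a partial correlation: regress both memorability and the L2 norm on the covariates, then correlate the residuals. A minimal sketch with simulated toy data (variable names and effect sizes are illustrative, not the thread's actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy covariates standing in for word frequency, valence, etc.
covariates = rng.normal(size=(n, 3))

# Simulate an L2-norm predictor and memorability scores that share
# variance beyond what the covariates explain.
l2_norm = covariates @ [0.5, -0.3, 0.2] + rng.normal(size=n)
memorability = 0.4 * l2_norm + covariates @ [0.2, 0.1, -0.2] + rng.normal(size=n)

def residualize(y, X):
    """Residuals of y after ordinary least squares on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

# Partial correlation: correlate the two sets of residuals.
r_partial = np.corrcoef(residualize(memorability, covariates),
                        residualize(l2_norm, covariates))[0, 1]
print(f"partial r = {r_partial:.2f}")
```

If the norm effect survives this residualization, it is not just a proxy for the covariates, which is the thread's point.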
Then we asked: is this just a visual trick, or is it present in other domains as well?
When we turned to words, the result was striking:
Across three large datasets, words with higher vector magnitude (L2 norm) in their embeddings were consistently more memorable, revealing the same principle.
September 18, 2025 at 9:58 AM
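For words, "vector magnitude" is just the L2 norm of each word's embedding. A toy sketch with made-up three-dimensional vectors (real analyses would use actual word embeddings, e.g. GloVe or fastText):

```python
import numpy as np

# Hypothetical word-embedding table; the vectors are invented for illustration.
emb = {
    "apple":   np.array([0.9, 1.2, -0.4]),
    "the":     np.array([0.1, 0.2,  0.0]),
    "volcano": np.array([1.5, -1.1, 0.8]),
}

# Representational magnitude of each word: the L2 norm of its vector.
norms = {w: float(np.linalg.norm(v)) for w, v in emb.items()}
for w, n in sorted(norms.items(), key=lambda kv: -kv[1]):
    print(f"{w:8s} {n:.2f}")
```

These per-word norms are the quantity that, per the thread, tracked memorability across the three word datasets.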
We first examined whether we could replicate the L2 norm effect reported by Jaegle et al. (2019).
Using the massive THINGS dataset (>26k images, 13k participants), we replicated that the L2 norm of CNN representations predicts image memorability.
September 18, 2025 at 9:56 AM
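The shape of the replication test above can be sketched in a few lines: take each image's feature vector, compute its L2 norm, and rank-correlate the norms with memorability scores. Everything below is simulated stand-in data, not THINGS features or real memorability scores.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_features = 1000, 512

# Stand-in for penultimate-layer CNN features of each image.
features = rng.normal(size=(n_images, n_features))

# Representational magnitude: the L2 norm of each image's feature vector.
l2_norms = np.linalg.norm(features, axis=1)

# Simulated memorability scores that depend partly on the L2 norm.
memorability = (0.5 * (l2_norms - l2_norms.mean()) / l2_norms.std()
                + rng.normal(size=n_images))

def rank(x):
    """Simple ranks (ties are unlikely with continuous data)."""
    r = np.empty_like(x)
    r[np.argsort(x)] = np.arange(len(x))
    return r

# Spearman correlation = Pearson correlation of the ranks.
rho = np.corrcoef(rank(l2_norms), rank(memorability))[0, 1]
print(f"Spearman rho = {rho:.2f}")
```

A positive rank correlation here is what "L2 norm predicts memorability" means operationally; the actual study ran this on real CNN activations for >26k images.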