Weijie Xu
@weijiexu.bsky.social
Psycholinguist @ UC Irvine
https://weijiexu-charlie.github.io/
Our account naturally implies that high-surprisal inputs demand more memory resources. This suggests that the tendency to devote more effort to processing surprising inputs may reflect a strategy of our cognitive system adapted to an efficiency problem.
November 6, 2025 at 11:31 PM
We also provide a mathematical derivation (in the Appendix) showing that the encoding precisions that minimize total retrieval error must assign higher precision to high-surprisal inputs than to low-surprisal ones.
November 6, 2025 at 11:31 PM
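A schematic version of this kind of argument, assuming Gaussian encoding noise, a Gaussian prior, and a hard precision budget (all notation below is illustrative; the Appendix's actual derivation may be set up differently):

```latex
% Toy setup (not the paper's): prior N(\mu_0, 1/\tau_0); trace
% x_i = \theta_i + \varepsilon_i with \varepsilon_i \sim N(0, 1/\lambda_i).
% The posterior-mean readout then has expected squared error
\[
  \mathrm{Err}_i(\lambda_i)
    = \frac{\lambda_i + \tau_0^{2} d_i}{(\lambda_i + \tau_0)^{2}},
  \qquad d_i = (\theta_i - \mu_0)^{2},
\]
% where d_i grows with the surprisal of \theta_i under a Gaussian prior.
% Minimizing \sum_i \mathrm{Err}_i subject to \sum_i \lambda_i \le \Lambda
% equalizes marginal returns across inputs (Lagrange multiplier \mu):
\[
  -\frac{\partial \mathrm{Err}_i}{\partial \lambda_i}
    = \frac{\lambda_i + 2\tau_0^{2} d_i - \tau_0}{(\lambda_i + \tau_0)^{3}}
    = \mu \quad \text{for every } i.
\]
% The left-hand side rises with d_i and, for \lambda_i large enough, falls
% with \lambda_i, so a more surprising input (larger d_i) must receive a
% larger optimal precision \lambda_i^{*}.
```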
High-surprisal inputs are less likely to be accurately reconstructed under the influence of the prior, resulting in higher retrieval error. Therefore, to minimize total retrieval error across multiple inputs, more surprising inputs need to be encoded with higher precision.
November 6, 2025 at 11:31 PM
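A toy numerical check of this point, assuming Gaussian encoding noise and a Gaussian prior (the prior parameters, the precision value, and the inputs below are made up for illustration): with precision held fixed, the Bayesian readout's expected squared error grows with an input's distance from the prior mean, i.e., with its surprisal.

```python
import numpy as np

mu0, tau0 = 0.0, 1.0                  # illustrative Gaussian prior (mean, precision)
lam = 2.0                             # the same encoding precision for every input

def expected_error(theta, lam):
    """Closed-form MSE of the posterior-mean estimate for a given input theta."""
    w = lam / (lam + tau0)            # shrinkage weight toward the prior mean
    # readout variance + squared bias from shrinkage toward mu0
    return w**2 / lam + (1 - w)**2 * (theta - mu0)**2

for theta in (0.0, 1.0, 2.0, 3.0):    # farther from mu0 = more surprising
    print(f"input {theta}: expected retrieval error {expected_error(theta, lam):.3f}")
```

With these numbers the error climbs from about 0.22 at the prior mean to about 1.22 three units away, so equal precision leaves the most surprising inputs worst off.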
First, memory representations are noisy, and the true state of a past input is inaccessible. Importantly, following previous work in psychophysics, we posit that allocating more memory resources at encoding leads to less representational uncertainty about the input.
November 6, 2025 at 11:31 PM
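A minimal simulation sketch of these two assumptions (the Gaussian prior, the precision values, and the function names are illustrative, not the paper's model): the stored trace is the input plus Gaussian noise whose variance shrinks as encoding precision grows, and retrieval reads the trace back through the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Gaussian prior over inputs (not from the paper).
mu0, tau0 = 0.0, 1.0           # prior mean and prior precision (1/variance)

def encode(theta, lam, n=10_000):
    """Store theta as a noisy trace; higher precision lam means less noise."""
    return theta + rng.normal(0.0, 1.0 / np.sqrt(lam), size=n)

def decode(trace, lam):
    """Bayesian readout: posterior mean shrinks the trace toward the prior."""
    w = lam / (lam + tau0)     # weight on the trace vs. the prior mean
    return w * trace + (1.0 - w) * mu0

for lam in (0.5, 2.0, 8.0):
    trace = encode(theta=1.0, lam=lam)
    err = np.mean((decode(trace, lam) - 1.0) ** 2)
    print(f"precision {lam:>4}: mean squared retrieval error {err:.3f}")
```

Running this shows the mean squared retrieval error falling as precision rises (roughly 0.67, 0.33, and 0.11 for the three settings), which is the posited resources-for-certainty tradeoff.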
[SRA 2.0] Glad to share that my new paper (with @futrell.bsky.social) titled "Strategic resource allocation in memory encoding: An efficiency principle shaping language processing" is now out in the Journal of Memory and Language:
www.sciencedirect.com/science/arti...
November 6, 2025 at 11:31 PM