Weijie Xu
@weijiexu.bsky.social
Psycholinguist @ UC Irvine
https://weijiexu-charlie.github.io/
The precursor of this work (SRA 1.0) was published earlier this year in JML, titled "Informativity enhances memory robustness against interference in sentence comprehension":
www.sciencedirect.com/science/arti...
November 6, 2025 at 11:31 PM
Our account naturally implies that high-surprisal inputs require more memory resources. This suggests that the tendency to put more effort into processing surprising inputs may reflect an evolved strategy of our cognitive system, one adapted to an efficiency problem.
November 6, 2025 at 11:31 PM
We also provide a mathematical derivation (in the Appendix) showing that the encoding precisions that minimize total retrieval error must assign higher precision to high-surprisal inputs than to low-surprisal ones.
November 6, 2025 at 11:31 PM
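The flavor of that result can be reproduced numerically. This is an illustration under my own toy Gaussian assumptions, not the paper's derivation: minimizing the summed closed-form retrieval error over a fixed total precision budget ends up assigning more precision to inputs farther from the prior mean.

```python
import numpy as np
from scipy.optimize import minimize

def expected_sq_error(precision, x, prior_precision=1.0):
    # Closed-form expected squared retrieval error for posterior-mean
    # decoding against a N(0, 1/prior_precision) prior: a variance term
    # plus a bias term that grows with the input's surprisal.
    w = precision / (precision + prior_precision)
    return w**2 / precision + (1.0 - w)**2 * x**2

xs = np.array([1.0, 2.0, 3.0])  # toy inputs, increasingly far from the prior mean
budget = 12.0                   # total encoding precision available (arbitrary)

res = minimize(
    lambda lams: sum(expected_sq_error(lam, x) for lam, x in zip(lams, xs)),
    x0=np.full(len(xs), budget / len(xs)),
    bounds=[(1e-6, None)] * len(xs),
    constraints={"type": "eq", "fun": lambda lams: lams.sum() - budget},
)
print(np.round(res.x, 2))  # precisions come out ordered: most for the most surprising input
```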
High-surprisal inputs are less likely to be reconstructed accurately under the influence of the prior, resulting in higher retrieval error. Therefore, to minimize total retrieval error across multiple inputs, more surprising ones need to be encoded with higher precision.
November 6, 2025 at 11:31 PM
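To see why in a toy Gaussian setup (my sketch, not the paper's model): with posterior-mean decoding, the expected squared error at a fixed encoding precision contains a bias term that grows with the input's distance from the prior mean, i.e. with its surprisal.

```python
def expected_sq_error(x, precision, prior_mean=0.0, prior_precision=1.0):
    # Trace m ~ N(x, 1/precision), decoded as the posterior mean. The error
    # splits into a variance term (noise in the trace) and a bias term
    # (shrinkage toward the prior) that grows with |x - prior_mean|.
    w = precision / (precision + prior_precision)
    return w**2 / precision + (1.0 - w)**2 * (x - prior_mean)**2

for x in (0.0, 1.0, 3.0):  # increasingly surprising inputs, same precision
    print(f"x={x}: expected error = {expected_sq_error(x, precision=4.0):.3f}")
```

At precision 4 the expected errors are 0.160, 0.200, and 0.520: identical encoding, but the more surprising input comes back worse.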
Second, since the true state of a past input is inaccessible, memory retrieval, or decoding, is effectively an inferential process: it reconstructs a past input under uncertainty, using the statistical structure of long-term knowledge.
November 6, 2025 at 11:31 PM
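A minimal sketch of retrieval-as-inference under toy assumptions (Gaussian encoding noise and a Gaussian prior standing in for long-term knowledge; both assumptions are mine, for illustration): the reconstruction is a precision-weighted compromise between the noisy trace and the prior's expectation.

```python
def decode(trace, precision, prior_mean=0.0, prior_precision=1.0):
    # Bayesian reconstruction of a past input from its noisy trace:
    # the posterior mean weights the trace by its encoding precision
    # and the prior mean by the prior's precision, so low-precision
    # traces are shrunk toward what long-term knowledge expects.
    w = precision / (precision + prior_precision)
    return w * trace + (1.0 - w) * prior_mean

print(decode(trace=2.0, precision=4.0))   # 1.6: mildly pulled toward the prior
print(decode(trace=2.0, precision=0.25))  # 0.4: a weak trace is mostly overwritten
```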
First, memory representations are noisy, and the true state of a past input is inaccessible. Importantly, following previous work in psychophysics, we posit that devoting more memory resources to encoding an input reduces representational uncertainty about it.
November 6, 2025 at 11:31 PM
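A quick simulation of that assumption; the Gaussian noise channel is my illustrative choice, in the spirit of the psychophysics models mentioned above, not necessarily the paper's exact encoding model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy encoding: a memory trace of input x is m ~ N(x, 1/precision),
# so more encoding resources (higher precision) mean a tighter trace
# distribution, i.e. less representational uncertainty about the input.
for precision in (0.5, 2.0, 8.0):
    traces = rng.normal(1.0, np.sqrt(1.0 / precision), size=100_000)
    print(f"precision={precision}: trace SD = {traces.std():.3f}")
```

The printed SDs track sqrt(1/precision): roughly 1.414, 0.707, 0.354.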
We propose that allocating more memory resources to more surprising linguistic units is a principled solution to a computational problem faced by working memory: how to maximize memory accuracy under the constraint of limited resources.
November 6, 2025 at 11:31 PM
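Stated as an optimization problem, that question might look like the sketch below. The notation is mine, not necessarily the paper's: lambda_i is the encoding precision allocated to input i, x_i its true value, x-hat_i its reconstruction at retrieval, and Lambda the total resource budget.

```latex
% Illustrative formulation of the working-memory allocation problem:
% choose per-input encoding precisions to minimize total expected
% retrieval error under a fixed resource budget.
\min_{\lambda_1,\dots,\lambda_n} \sum_{i=1}^{n}
  \mathbb{E}\!\left[ (\hat{x}_i - x_i)^2 \mid \lambda_i \right]
\quad \text{subject to} \quad \sum_{i=1}^{n} \lambda_i \le \Lambda
```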
Reposted by Weijie Xu
Fine-tuning a ggplot's aesthetic sensibilities is a time-honoured way to procrastinate from doing something you'd hate even more
October 10, 2025 at 1:55 PM