Thomas Hikaru Clark
@thomashikaru.bsky.social
MIT Brain and Cognitive Sciences
7/7 Check out our CogSci proceedings paper for more details, and stay tuned for updates! Thanks to all who provided their feedback :)
escholarship.org/uc/item/9kr1...
A Model of Approximate and Incremental Noisy-Channel Language Processing. Clark, Thomas; Vigly, Jacob Hoover; Gibson, Edward; Levy, Roger (eScholarship)
July 31, 2025 at 5:56 PM
6/7 We release our model's code on GitHub: github.com/thomashikaru...
GitHub: thomashikaru/noisy_channel_model
5/7 The model also returns incremental surprisals (quantified via mean particle weight; tested here on sentences from Ryskin et al., 2021 @ryskin.bsky.social), which can be compared against a baseline LM. "Explainable errors" tend to be less surprising under our model than under the baseline.
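One standard way to cash out "mean particle weight" as a surprisal is the SMC marginal-likelihood estimator: the surprisal of the current word is the negative log of the mean incremental importance weight across particles. A minimal sketch (our reading, not necessarily the paper's exact estimator; the function name is ours, and it assumes weights are uniform after each resampling step):

```python
import math

def incremental_surprisal(log_increments):
    """-log( (1/N) * sum_i exp(log_increments[i]) ), computed stably.
    `log_increments` holds each particle's log-weight increment for the
    current word; their mean estimates P(word | prefix so far)."""
    m = max(log_increments)
    mean_w = sum(math.exp(x - m) for x in log_increments) / len(log_increments)
    return -(m + math.log(mean_w))
```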
4/7 The model's rich, interpretable output includes posteriors over inferred errors at each word and over intended (latent) sentences. Its inferences are consistent with the human noisy-channel inferences implied by the results of Gibson et al. (2013).
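To illustrate what reading off such a posterior can look like: given weighted particles like those in the toy sketch under 3/7 below (each carrying a hypothesized intended prefix `.intended` and a log-weight `.logw`; representation ours, not the paper's), aggregating normalized weights by intended sentence yields a posterior:

```python
import math
from collections import defaultdict

def posterior_over_intended(particles):
    """Normalize particle log-weights and aggregate probability mass by
    hypothesized intended sentence. Assumes particles expose `.intended`
    (a word list) and `.logw`, as in the sketch under 3/7."""
    m = max(p.logw for p in particles)
    ws = [math.exp(p.logw - m) for p in particles]
    total = sum(ws)
    post = defaultdict(float)
    for p, w in zip(particles, ws):
        post[" ".join(p.intended)] += w / total
    return dict(sorted(post.items(), key=lambda kv: -kv[1]))
```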
3/7 We combine a generative model of noisy production (LM prior + symbolic error model) with approximate, incremental Sequential Monte Carlo (SMC) inference. This allows fine-grained control over both the error types under consideration and the inference dynamics.
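To make the moving parts concrete, here is a minimal, self-contained sketch of this kind of particle-filter inference over intended sentences. Everything in it (the unigram "LM", the substitution-only error channel, all function names) is a toy stand-in of ours, not the repo's actual API:

```python
import math
import random
from dataclasses import dataclass, field

# Toy stand-ins: a unigram "LM" prior and a substitution-only error channel.
# The actual model uses a neural LM prior and a richer symbolic error model.
LM = {"the": 0.2, "girl": 0.1, "went": 0.15, "sent": 0.02,
      "to": 0.2, "her": 0.13, "mother": 0.2}
VOCAB = list(LM)
P_NO_ERROR = 0.95  # probability the perceived word equals the intended word

def lm_logprob(word):
    return math.log(LM[word])

def error_loglik(perceived, intended):
    # log P(perceived word | intended word) under a substitution-only channel
    if perceived == intended:
        return math.log(P_NO_ERROR)
    return math.log((1.0 - P_NO_ERROR) / (len(VOCAB) - 1))

def proposal_logprob(word, perceived):
    # Mixture proposal: 50% copy the perceived word, 50% uniform over VOCAB
    q = 0.5 / len(VOCAB) + (0.5 if word == perceived else 0.0)
    return math.log(q)

@dataclass
class Particle:
    intended: list = field(default_factory=list)  # hypothesized intended prefix
    logw: float = 0.0                             # unnormalized log-weight

def smc_step(particles, perceived):
    """Process one perceived word: propose an intended word per particle,
    reweight by prior * likelihood / proposal, resample if ESS is low."""
    for p in particles:
        w = perceived if random.random() < 0.5 else random.choice(VOCAB)
        p.intended.append(w)
        p.logw += (lm_logprob(w) + error_loglik(perceived, w)
                   - proposal_logprob(w, perceived))
    return maybe_resample(particles)

def maybe_resample(particles, threshold=0.5):
    m = max(p.logw for p in particles)
    ws = [math.exp(p.logw - m) for p in particles]
    total = sum(ws)
    norm = [w / total for w in ws]
    ess = 1.0 / sum(w * w for w in norm)  # effective sample size
    if ess < threshold * len(particles):
        chosen = random.choices(particles, weights=norm, k=len(particles))
        particles = [Particle(intended=list(p.intended)) for p in chosen]
    return particles

particles = [Particle() for _ in range(500)]
for word in "the girl sent to her mother".split():  # "sent" may be an error for "went"
    particles = smc_step(particles, word)
```

Each particle tracks one hypothesis about the intended sentence, and the importance weight trades the prior off against the error likelihood, which is exactly the noisy-channel trade-off described in 2/7 below.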
2/7 According to noisy-channel theory, humans interpret utterances non-literally, combining linguistic priors with error likelihoods. However, how this works at the algorithmic level remains an open question, one that implemented computational models can help us explore.
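In equation form (notation ours), the noisy-channel comprehender's posterior over intended sentences s_i given a perceived sentence s_p is:

```latex
P(s_i \mid s_p) \;\propto\; \underbrace{P(s_i)}_{\text{linguistic prior}} \times \underbrace{P(s_p \mid s_i)}_{\text{error likelihood}}
```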