Ben Prystawski
@benpry.bsky.social
Cognitive science PhD student at Stanford, studying iterated learning and reasoning.
Reposted by Ben Prystawski
Now out in Cognition, work with the great @gershbrain.bsky.social @tobigerstenberg.bsky.social on formalizing self-handicapping as rational signaling!
📃 authors.elsevier.com/a/1lo8f2Hx2-...
September 19, 2025 at 3:46 AM
Reposted by Ben Prystawski
How do we predict what others will do next? 🤔
We look for patterns. But what are the limits of this ability?
In our new paper at CCN 2025 (@cogcompneuro.bsky.social), we explore the computational constraints of human pattern recognition using the classic game of Rock, Paper, Scissors 🗿📄✂️
August 12, 2025 at 10:56 PM
Reposted by Ben Prystawski
My final project from grad school is out now in Dev Psych! Mombasa County preschoolers were more accurate on object-based than picture-based vocabulary assessments, whereas Bay Area preschoolers were equally accurate on object-based and picture-based assessments.
psycnet.apa.org/doiLanding?d...
August 6, 2025 at 11:54 PM
Reposted by Ben Prystawski
In neuroscience, we often try to understand systems by analyzing their representations — using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
August 5, 2025 at 2:36 PM
When people form conventions in reference games, how easy are they for outsiders to interpret? (for values of "outsider" that include naïve humans and vision-language models) Check out @vboyce.bsky.social's poster today at #CogSci2025 to find out.
paper: escholarship.org/uc/item/16c4...
August 1, 2025 at 4:00 PM
How can we use modern NLP methods to get lots of granular data from think-aloud experiments? Watch @danielwurgaft.bsky.social explain how in the Reasoning session at 4pm this afternoon at #CogSci2025
paper: arxiv.org/abs/2505.23931
August 1, 2025 at 3:57 PM
How do people trade off between speed and accuracy in reasoning tasks without easy heuristics? Come to my talk, "Thinking fast, slow, and everywhere in between in humans and language models," in the Reasoning session this afternoon #CogSci2025 to find out!
paper: escholarship.org/uc/item/5td9...
August 1, 2025 at 3:49 PM
Reposted by Ben Prystawski
🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient?
Our work explains this & *predicts Transformer behavior throughout training* without its weights! 🧵
1/
June 28, 2025 at 2:35 AM
How can we combine the process-level insight that think-aloud studies give us with the large scale that modern online experiments permit? In our new CogSci paper, we show that speech-to-text models and LLMs enable us to scale up the think-aloud method to large experiments!
Excited to share a new CogSci paper co-led with @benpry.bsky.social!
Once a cornerstone for studying human reasoning, the think-aloud method declined in popularity as manual coding limited its scale. We introduce a method to automate analysis of verbal reports and scale think-aloud studies. (1/8)🧵
June 25, 2025 at 5:32 AM
Reposted by Ben Prystawski
Delighted to announce our CogSci '25 workshop at the interface between cognitive science and design 🧠🖌️!
We're calling it: 🏺Minds in the Making🏺
🔗 minds-making.github.io
June – July 2024, free & open to the public
(all career stages, all disciplines)
June 6, 2025 at 12:30 AM
Reposted by Ben Prystawski
the functional form of moral judgment is (sometimes) the nash bargaining solution
new preprint👇
May 20, 2025 at 3:08 PM
Reposted by Ben Prystawski
Despite the world being on fire, I can't help but be thrilled to announce that I'll be starting as an Assistant Professor in the Cognitive Science Program at Dartmouth in Fall '26. I'll be recruiting grad students this upcoming cycle—get in touch if you're interested!
May 7, 2025 at 10:08 PM
Reposted by Ben Prystawski
Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...
April 14, 2025 at 9:58 PM
Reposted by Ben Prystawski
Hello bluesky world :) excited to share a new paper on data visualization literacy 📈 🧠 w/ @judithfan.bsky.social, @arnavverma.bsky.social, Holly Huey, Hannah Lloyd, @lacepadilla.bsky.social!
📝 preprint: osf.io/preprints/ps...
💻 code: github.com/cogtoolslab/...
March 7, 2025 at 5:05 PM
Reposted by Ben Prystawski
AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind?
In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...
March 6, 2025 at 5:39 PM
Reposted by Ben Prystawski
1/13 New Paper!! We try to understand why some LMs self-improve their reasoning while others hit a wall. The key? Cognitive behaviors! Read our paper on how the right cognitive behaviors can make all the difference in a model's ability to improve with RL! 🧵
March 4, 2025 at 6:15 PM
Reposted by Ben Prystawski
New paper in Psychological Review!
In "Causation, Meaning, and Communication" Ari Beller (cicl.stanford.edu/member/ari_b...) develops a computational model of how people use & understand expressions like "caused", "enabled", and "affected".
📃 osf.io/preprints/ps...
📎 github.com/cicl-stanfor...
🧵
In "Causation, Meaning, and Communication" Ari Beller (cicl.stanford.edu/member/ari_b...) develops a computational model of how people use & understand expressions like "caused", "enabled", and "affected".
📃 osf.io/preprints/ps...
📎 github.com/cicl-stanfor...
🧵
February 12, 2025 at 6:25 PM
Reposted by Ben Prystawski
What counts as in-context learning (ICL)? Typically, you might think of it as learning a task from a few examples. However, we’ve just written a perspective (arxiv.org/abs/2412.03782) suggesting interpreting a much broader spectrum of behaviors as ICL! Quick summary thread: 1/7
December 10, 2024 at 6:17 PM
Reposted by Ben Prystawski
Do you want to understand how language models work, and how they can change language science? I'm recruiting PhD students at UBC Linguistics! The research will be fun, and Vancouver is lovely. So much cool NLP happening at UBC across both Ling and CS! linguistics.ubc.ca/graduate/adm...
November 18, 2024 at 7:43 PM
Reposted by Ben Prystawski
If you try to replicate a finding so you can build on it, but your study fails, what should you do? Should you follow up and try to "rescue" the failed rep, or should you move on? Boyce et al. tried to answer this question; in our sample, 5 of 17 rescue projects succeeded.
osf.io/preprints/ps...
October 18, 2024 at 3:51 PM
Reposted by Ben Prystawski
Preprint alert! After 4 years, I’m super excited to share work with @thecharleywu.bsky.social @gershbrain.bsky.social and Eric Schulz on the rise and fall of technological development in virtual communities in #OneHourOneLife #ohol
doi.org/10.31234/osf...
September 13, 2024 at 7:29 PM
Reposted by Ben Prystawski
How well can we understand an LLM by interpreting its representations? What can we learn by comparing brain and model representations? Our new paper highlights intriguing biases in learned feature representations that make interpreting them more challenging! 1/
May 23, 2024 at 6:58 PM
Reposted by Ben Prystawski
When a replication fails, researchers have to decide whether to make another attempt or move on. How should we think about this decision? Here's a new paper trying to answer this question, led by Veronica Boyce and featuring student authors from my class!
osf.io/preprints/ps...
May 6, 2024 at 7:23 PM