@gershbrain.bsky.social and @fierycushman.bsky.social!
Full paper: osf.io/preprints/ps...
When designing systems to reason socially, we shouldn’t assume full inference is always used — or always needed.
Humans strike a balance between accuracy and efficiency.
1. Full ToM: preference + belief (inferred from environment) → action
2. Correspondence bias: preference → action
3. Belief neglect: preference + environment (ignoring beliefs) → action
People flexibly switch depending on context! (A rough sketch of the three strategies is below.)
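To make the contrast concrete, here is a minimal Python sketch, not the paper's actual model: the gem/door setup, likelihood numbers, and function names are illustrative assumptions, and all that differs between the three observers is what they condition on.

```python
# Hypothetical setup: gem A sits behind a door that is actually openable,
# but the agent may falsely believe it is locked. Gem B sits in the open.
GEMS = ["A", "B"]
ACTIONS = ["go_A", "go_B"]

def action_likelihood(preferred, believes_door_openable):
    """P(action | preference, belief): the agent heads for its preferred gem
    only if it believes that gem is reachable; otherwise it settles for B."""
    if preferred == "A" and not believes_door_openable:
        return {"go_A": 0.05, "go_B": 0.95}
    target = "go_A" if preferred == "A" else "go_B"
    return {a: (0.9 if a == target else 0.1) for a in ACTIONS}

def infer_preference(action, strategy, door_actually_openable=True, p_false_belief=0.5):
    """Posterior over the agent's preferred gem under one of the three strategies."""
    posterior = {}
    for gem in GEMS:
        if strategy == "full_tom":
            # Marginalise over the agent's possibly false belief about the door.
            lik = (p_false_belief * action_likelihood(gem, False)[action]
                   + (1 - p_false_belief) * action_likelihood(gem, True)[action])
        elif strategy == "correspondence_bias":
            # Read the action directly as the preference; ignore the environment.
            lik = 0.9 if action == ("go_A" if gem == "A" else "go_B") else 0.1
        else:  # "belief_neglect"
            # Use the true environment, but assume the agent sees it correctly too.
            lik = action_likelihood(gem, door_actually_openable)[action]
        posterior[gem] = 0.5 * lik  # uniform prior over the two gems
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# An agent who falsely thinks the door is locked walks to gem B:
for s in ["full_tom", "correspondence_bias", "belief_neglect"]:
    print(s, infer_preference("go_B", s))
```

In this toy example only the full-ToM observer keeps alive the possibility that the agent actually wanted gem A but thought it was out of reach; the two shortcuts both read the walk to B as a preference for B.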
But without that training, belief neglect was common.
This suggests people adaptively allocate cognitive effort, depending on task structure.
People DO account for environmental constraints (e.g., locked doors).
But they skip reasoning about what the agent believes about the environment.
It’s a mid-level shortcut.
They infer preferences as if the agent’s beliefs were correct — even when they’re not.
This is what we call belief neglect.
Sometimes, gems are hidden behind doors. Participants were told that some agents falsely believed that they couldn't open these doors.
They then had to infer which gem the agents preferred.
And when do they fall back on faster heuristics?
It's often framed as Bayesian inverse planning: we observe a person's action, then infer their beliefs and desires.
But that kind of reasoning is computationally costly.
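For concreteness, here is a minimal, entirely illustrative enumeration version of that inverse-planning computation; the belief/desire labels and the toy policy are assumptions, not the paper's model.

```python
from itertools import product

# Candidate mental states for the observed agent (hypothetical labels).
BELIEFS = ["door_openable", "door_locked"]
DESIRES = ["wants_A", "wants_B"]

def policy(belief, desire):
    """P(action | belief, desire): a toy stand-in for the agent's planner."""
    if desire == "wants_A" and belief == "door_locked":
        return {"go_A": 0.05, "go_B": 0.95}  # A seems unreachable, so settle for B
    target = "go_A" if desire == "wants_A" else "go_B"
    return {a: (0.9 if a == target else 0.1) for a in ["go_A", "go_B"]}

def inverse_plan(action):
    """P(belief, desire | action) ∝ P(action | belief, desire) * P(belief) * P(desire)."""
    joint = {(b, d): 0.25 * policy(b, d)[action]  # uniform priors over beliefs and desires
             for b, d in product(BELIEFS, DESIRES)}
    z = sum(joint.values())
    return {bd: p / z for bd, p in joint.items()}

# Even this tiny example enumerates every belief x desire combination; the joint
# space (and the planning hidden inside `policy`) grows quickly with richer mental
# states, which is part of what makes full inverse planning costly.
print(inverse_plan("go_B"))
```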