Linas Nasvytis
linasnasvytis.bsky.social
PhD @Stanford studying cognitive science & AI

Prev: Pre-doc Fellow @Harvard, Econ & CS research with Paul Romer, Stats & ML @UniofOxford, Econ @Columbia
Shoutout again to the amazing advisor team of
@gershbrain.bsky.social and @fierycushman.bsky.social!

Full paper: osf.io/preprints/ps...
September 17, 2025 at 12:58 AM
This has implications for AI and cognitive modeling:

When designing systems to reason socially, we shouldn’t assume full inference is always used — or always needed.

Humans strike a balance between accuracy and efficiency.
September 17, 2025 at 12:58 AM
We model this in a Bayesian framework, comparing 3 hypotheses:
1. Full ToM: preference + belief (inferred from environment) → action
2. Correspondence bias: preference → action
3. Belief neglect: preference + environment (ignoring beliefs) → action

People flexibly switch depending on context!
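The three observer models above can be sketched in a toy Python example. This is an illustration, not the paper's actual model: the payoff values, softmax temperature, priors, and the 0.9/0.1 likelihoods are all assumptions.

```python
import math

def softmax_choice_prob(values, action, beta=3.0):
    """P(action | values) under a Boltzmann-rational agent."""
    exps = {a: math.exp(beta * v) for a, v in values.items()}
    return exps[action] / sum(exps.values())

# Toy world: gem A sits behind a door; gem B is in the open.
# An agent who believes the door is locked treats gem A as unreachable.
def agent_values(pref, believes_door_open):
    reachable_a = 1.0 if believes_door_open else 0.0
    return {"go_A": (1.0 if pref == "A" else 0.0) * reachable_a,
            "go_B": 1.0 if pref == "B" else 0.0}

def posterior_pref_A(action, observer):
    """P(pref = A | action) under the three observer models."""
    prior = {"A": 0.5, "B": 0.5}
    door_open_prior = 0.5  # observer's uncertainty over the agent's belief
    post = {}
    for pref in ("A", "B"):
        if observer == "full_tom":
            # marginalize over the agent's possible beliefs about the door
            lik = sum(p * softmax_choice_prob(agent_values(pref, b), action)
                      for b, p in ((True, door_open_prior),
                                   (False, 1 - door_open_prior)))
        elif observer == "correspondence_bias":
            # read preference straight off the action, ignoring the environment
            lik = 0.9 if action == f"go_{pref}" else 0.1
        else:  # belief_neglect: use the true environment (door actually opens),
               # as if the agent's beliefs matched reality
            lik = softmax_choice_prob(agent_values(pref, True), action)
        post[pref] = prior[pref] * lik
    return post["A"] / (post["A"] + post["B"])

for model in ("full_tom", "correspondence_bias", "belief_neglect"):
    print(model, round(posterior_pref_A("go_B", model), 3))
```

With these toy numbers, an agent heading for gem B yields the highest P(prefers A) under full ToM (the agent may simply believe A is locked away) and the lowest under belief neglect, which takes the detour as strong evidence against preferring A.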
September 17, 2025 at 12:58 AM
With minimal training, participants started engaging in full joint inference over beliefs and preferences.

But without that training, belief neglect was common.

This suggests people adaptively allocate cognitive effort, depending on task structure.
September 17, 2025 at 12:58 AM
Belief neglect is different from correspondence bias:

People DO account for environmental constraints (e.g., locked doors).

But they skip reasoning about what the agent believes about the environment.

It’s a mid-level shortcut.
September 17, 2025 at 12:58 AM
We find that, by default, people often neglect the agent’s beliefs.

They infer preferences as if the agent’s beliefs were correct — even when they’re not.

This is what we call belief neglect.
September 17, 2025 at 12:58 AM
In our task, participants watched agents navigate grid worlds to collect gems.

Sometimes, gems were hidden behind doors. Participants were told that some agents falsely believed they couldn't open these doors.

They then had to infer which gem the agents preferred.
September 17, 2025 at 12:58 AM
The question we ask is: When do people actually engage in full ToM reasoning?

And when do they fall back on faster heuristics?
September 17, 2025 at 12:58 AM
Theory of mind (ToM) — reasoning about others’ beliefs and desires — is central to human intelligence.

It's often framed as Bayesian inverse planning: we observe a person's action, then infer their beliefs and desires.

But that kind of reasoning is computationally costly.
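Bayesian inverse planning can be sketched in a few lines of Python. This is a minimal illustration under assumed numbers (uniform priors, 0.9/0.1 action likelihoods), not the paper's model: observe one action, then compute the joint posterior over the agent's desire and belief.

```python
from itertools import product

desires = ["A", "B"]                     # which gem the agent wants
beliefs = ["door_open", "door_locked"]   # what the agent thinks about the door

def likelihood(action, desire, belief):
    """P(action | desire, belief): the agent heads for gem A only if it
    wants A AND believes the door is open; otherwise it goes for B."""
    target = "go_A" if (desire == "A" and belief == "door_open") else "go_B"
    return 0.9 if action == target else 0.1

# Bayes: P(desire, belief | action) ∝ P(action | desire, belief) P(desire) P(belief)
action = "go_B"
joint = {(d, b): 0.25 * likelihood(action, d, b)
         for d, b in product(desires, beliefs)}
z = sum(joint.values())
posterior = {k: v / z for k, v in joint.items()}
print(posterior)
```

Even this tiny version shows why full inference is costly: the posterior lives over the product space of desires and beliefs, which grows multiplicatively with each mental-state variable added.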
September 17, 2025 at 12:58 AM