David Rose (davdrose.github.io) led this project on how children's understanding of causal language develops.
📃 (preprint): osf.io/preprints/ps...
📎: github.com/davdrose/cau...
houseofmirrors.substack.com/p/this-was-i...
In this new paper from the lab, Lorenzo Ciccione, Marie Lubineau, Theo Morfoisse and I show that 5- and 6-year-olds already possess intuitions of linearity, curvature, period and compositionality.
www.sciencedirect.com/science/arti...
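For readers unfamiliar with the four terms, here is a minimal illustration of the kinds of function families they refer to (hypothetical examples, not the study's stimuli or analysis):

```python
import numpy as np

# Hypothetical examples of the four properties named above,
# purely for illustration (not the study's stimuli).
x = np.linspace(0, 2 * np.pi, 200)

linear   = 2 * x + 1              # linearity: constant slope
curved   = x ** 2                 # curvature: slope changes with x
periodic = np.sin(3 * x)          # period: the pattern repeats every 2*pi/3
composed = 2 * x + np.sin(3 * x)  # compositionality: linear trend combined with a periodic part
```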
🎉 I’m moving to University College London @ucl.ac.uk to join the Experimental Psychology department in @uclpals.bsky.social! 🎉
The big move happens in spring/summer. So I’m already exploring recruiting staff & students at UCL for fall 2026!
Check it out, let us know what you think!
www.lnk.to/AQOSConsciou...
In the photo Ken Daniels (an expert indigenous sailor) is looking towards the horizon whilst wearing an fNIRS system.
Analysis underway!
www.nytimes.com/2025/11/18/s...
Computational modeling of error patterns during reward-based learning shows evidence that habit learning (value-free!) supplements working memory in 7 human data sets.
rdcu.be/eQjLN
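For intuition, here is a minimal sketch of the general idea of a value-free habit process working alongside working memory, assuming a simple choice-kernel habit and a perfect WM store for the last correct response (an illustration, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_actions = 3, 3

habit = np.zeros((n_stimuli, n_actions))  # value-free: tracks what was done, not what paid off
wm = {}                                   # working memory: last correct action for each stimulus
alpha_h, w_wm, beta = 0.1, 0.8, 5.0       # habit step size, WM weight, softmax inverse temperature

def choose(stim):
    # Blend a WM-based policy (if WM holds an answer for this stimulus) with habit strengths.
    wm_policy = np.zeros(n_actions)
    if stim in wm:
        wm_policy[wm[stim]] = 1.0
    net = w_wm * wm_policy + (1 - w_wm) * habit[stim]
    p = np.exp(beta * net) / np.exp(beta * net).sum()
    return rng.choice(n_actions, p=p)

def update(stim, action, correct):
    # The habit strengthens whatever was chosen, regardless of reward (value-free).
    habit[stim] += alpha_h * (np.eye(n_actions)[action] - habit[stim])
    # Working memory stores the correct response once feedback reveals it.
    if correct:
        wm[stim] = action
```

In a sketch like this, the telltale errors are choices that follow habit strength even when working memory could supply the correct response.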
@lauraaberner.bsky.social @kiante.bsky.social
www.washingtonpost.com/business/202...
Shared computations underlie how we acquire actions that are mutually beneficial, instrumentally harmful (benefit self at the expense of others), altruistic (benefit others at the expense of self), or mutually costly.
🧵 rdcu.be/eL8mZ
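One common way to formalize a shared computation across these four action types is a single prediction-error update over a weighted mix of outcomes for self and other; a minimal sketch under that assumption (the weights, payoffs, and single learning rate are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4               # e.g. mutually beneficial, instrumentally harmful, altruistic, mutually costly
Q = np.zeros(n_actions)     # learned action values
alpha, beta = 0.2, 3.0      # learning rate, softmax inverse temperature
w_self, w_other = 1.0, 0.5  # how much outcomes to self vs. other are weighted (illustrative)

def choose():
    # Softmax choice over the learned action values.
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()
    return rng.choice(n_actions, p=p)

def update(action, r_self, r_other):
    # Shared computation: one prediction error over a weighted combination of both parties' outcomes.
    outcome = w_self * r_self + w_other * r_other
    Q[action] += alpha * (outcome - Q[action])
```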
go.bsky.app/1K9Suh
Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.
Yet they often assign higher probability to ungrammatical strings than to grammatical strings.
How can both things be true? 🧵👇
www.pnas.org/doi/10.1073/...
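To see the tension concretely, one can score sentences with an off-the-shelf causal LM: raw string probability reflects word frequency, length, and plausibility, not grammaticality alone. A minimal sketch using Hugging Face transformers and GPT-2 (illustrative sentences; the paper's stimuli and scoring may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def total_logprob(sentence):
    # Sum of log-probabilities of each token given its left context.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item() * (ids.shape[1] - 1)  # loss is the mean NLL over predicted tokens

grammatical   = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(total_logprob(grammatical), total_logprob(ungrammatical))
```

Comparing such scores across unrelated strings, rather than within a minimal pair like this, is one place where probability and grammaticality can come apart.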