osf.io/preprints/ps...
There does not seem to be an effect of ghrelin on risky decision-making in probability discounting: not in behaviour, in the underlying computational processes, or in neural activity.
More details ⬇️
📄 "Elucidating attentional mechanisms underlying value normalization in human reinforcement learning"
👁️ We show that visual attention during learning causally shapes how values are encoded
w/ @sgluth.bsky.social & @stepalminteri.bsky.social
🔗 doi.org/10.31234/osf...
📄 "Elucidating attentional mechanisms underlying value normalization in human reinforcement learning"
👁️ We show that visual attention during learning causally shapes how values are encoded
w/ @sgluth.bsky.social & @stepalminteri.bsky.social
🔗 doi.org/10.31234/osf...
We investigated how the brain supports forward planning & structure learning during multi-step decision-making using fMRI 🧠
With A. Salvador, S. Hamroun, @mael-lebreton.bsky.social & @stepalminteri.bsky.social
📄 Preprint: submit.biorxiv.org/submission/p...
▶️ www.biorxiv.org/content/10.1...
#Neuroscience
📄 NORMARL: A multi-agent RL framework for adaptive social norms & sustainability.
📄 Selective Attention: When attention helps vs. hinders learning under uncertainty.
Grateful to my amazing co-authors! *-*
www.annualreviews.org/content/jour...
We unpack why psychological theories of generalization keep cycling between rigid rule-based models and flexible similarity-based ones, culminating in Bayesian hybrids. Let's break it down 👉 🧵
Admittedly a rather self-centered review, but with a broad introduction, broad conclusions, and very cool figures.
A few main takes will follow.
osf.io/preprints/ps...
The performance of standard reinforcement learning (RL) algorithms depends on the scale of the rewards they aim to maximize.
Inspired by human cognitive processes, we leverage a cognitive bias to develop scale-invariant RL algorithms: reward range normalization.
Curious? Have a read!👇
Achieving Scale-Invariant Reinforcement Learning Performance with Reward Range Normalization.
In which we show that findings from psychology can be useful for machine learning.
By the amazing
@maevalhotellier.bsky.social and Jeremy Perez.
doi.org/10.31234/osf...
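To give a feel for the idea, here is a minimal sketch of range normalization in a k-armed bandit: rewards are rescaled to [0, 1] using the running min/max observed so far, so learning becomes insensitive to the absolute reward scale. This is an illustrative toy, not the paper's actual algorithm; the helper name and all parameters are my own assumptions.

```python
import random

def range_normalized_bandit(reward_fns, n_steps=2000, alpha=0.1, eps=0.1, seed=0):
    """Toy k-armed bandit with range-normalized value updates (illustrative only).

    Each observed reward is rescaled to [0, 1] via the running min/max,
    so the learned values are invariant to the absolute reward scale.
    """
    rng = random.Random(seed)
    k = len(reward_fns)
    q = [0.5] * k                      # values live in the normalized [0, 1] space
    r_min, r_max = float("inf"), float("-inf")
    for _ in range(n_steps):
        # epsilon-greedy action selection
        a = rng.randrange(k) if rng.random() < eps else max(range(k), key=q.__getitem__)
        r = reward_fns[a](rng)
        # update the running reward range and normalize the reward into [0, 1]
        r_min, r_max = min(r_min, r), max(r_max, r)
        span = r_max - r_min
        r_norm = (r - r_min) / span if span > 0 else 0.5
        # delta-rule update on the normalized reward
        q[a] += alpha * (r_norm - q[a])
    return q

# Two bandits with the same structure but a 100x difference in reward scale:
# the normalized values, and hence the learned policy, are directly comparable.
small = range_normalized_bandit([lambda g: g.gauss(1, 0.1), lambda g: g.gauss(2, 0.1)])
large = range_normalized_bandit([lambda g: g.gauss(100, 10), lambda g: g.gauss(200, 10)])
```

Both runs identify the same best arm despite the scale difference, which is the scale-invariance property the paper targets.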