📄 "Elucidating attentional mechanisms underlying value normalization in human reinforcement learning"
👁️ We show that visual attention during learning causally shapes how values are encoded
w/ @sgluth.bsky.social & @stepalminteri.bsky.social
🔗 doi.org/10.31234/osf...
📄 "Elucidating attentional mechanisms underlying value normalization in human reinforcement learning"
👁️ We show that visual attention during learning causally shapes how values are encoded
w/ @sgluth.bsky.social & @stepalminteri.bsky.social
🔗 doi.org/10.31234/osf...
A rather self-centered review, but with a broad introduction and conclusions, and very cool figures.
A few main takeaways will follow
osf.io/preprints/ps...
Performance of standard reinforcement learning (RL) algorithms depends on the scale of the rewards they aim to maximize.
Inspired by human cognition, we leverage a cognitive bias, reward range normalization, to develop scale-invariant RL algorithms.
Curious? Have a read!👇
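For intuition, here is a minimal sketch of what reward range normalization might look like in a simple bandit learner. The parameters ALPHA and EPSILON, the running min/max range estimate, and the delta-rule update are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

ALPHA = 0.1      # learning rate (hypothetical value)
EPSILON = 0.1    # exploration rate (hypothetical value)
N_ARMS = 2

rng = np.random.default_rng(0)
q = np.zeros(N_ARMS)            # value estimates on a normalized scale
r_min, r_max = np.inf, -np.inf  # running estimates of the reward range

def choose(q):
    """Epsilon-greedy action selection."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ARMS))
    return int(np.argmax(q))

for t in range(1000):
    a = choose(q)
    # Rewards on an arbitrary scale; scale-invariance means the learner
    # behaves the same if these rewards are multiplied by, say, 1000.
    r = rng.normal(loc=[1.0, 2.0][a], scale=0.5)

    # Track the observed reward range and map the reward into [0, 1].
    r_min, r_max = min(r_min, r), max(r_max, r)
    r_norm = (r - r_min) / (r_max - r_min) if r_max > r_min else 0.5

    # Standard delta-rule update, applied to the normalized reward.
    q[a] += ALPHA * (r_norm - q[a])

print("Normalized value estimates:", q)
```

Because the update only ever sees rewards rescaled to [0, 1], the same learning rate works regardless of the raw reward magnitude.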
Achieving Scale-Invariant Reinforcement Learning Performance with Reward Range Normalization.
In which we show that discoveries from psychology can be useful for machine learning.
By the amazing @maevalhotellier.bsky.social and Jeremy Perez.
doi.org/10.31234/osf...