Andrew Glennerster
@ag3dvr.bsky.social
A corrigendum for this paper is out today:
doi.org/10.1016/j.ne...
(Some quite weird things happened to the text between submission and proofs.) Anyway, it is corrected now. I hope the oddities in the uncorrected version do not put people off.
October 9, 2025 at 3:18 PM
The set of saccades that take the fovea between points in the scene can be described either as an egocentric representation or a ‘policy’, a set of context-dependent actions. When the observer moves, the most enduring part of the representation is the set of angles between distant points.
September 2, 2025 at 7:07 AM
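The stability of angles between distant points can be shown with a toy numerical sketch (my own illustration, not from the posts; the scene geometry is made up): the angle subtended at the observer by two distant points barely changes as the observer moves, while an angle involving a nearby point changes a lot.

```python
# Toy sketch: compare how the subtended angle between two DISTANT points and
# an angle involving a NEARBY point change when the observer translates.
import numpy as np

def subtended_angle(obs, p, q):
    """Angle at `obs` between the directions to points p and q (radians)."""
    u = (p - obs) / np.linalg.norm(p - obs)
    v = (q - obs) / np.linalg.norm(q - obs)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

far_a = np.array([100.0, 0.0])   # distant landmark
far_b = np.array([0.0, 100.0])   # another distant landmark
near  = np.array([2.0, 1.0])     # nearby object

start = np.array([0.0, 0.0])
moved = np.array([1.0, 1.0])     # observer steps ~1.4 units

d_far  = abs(subtended_angle(moved, far_a, far_b) - subtended_angle(start, far_a, far_b))
d_near = abs(subtended_angle(moved, far_a, near)  - subtended_angle(start, far_a, near))

print(np.degrees(d_far), np.degrees(d_near))  # far-pair angle is far more stable
```

With these numbers the far-pair angle shifts by about a degree while the near-point angle shifts by tens of degrees, which is why the distant-point angles are the enduring part of the representation.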
One element of the argument concerns the way that we (and most animals) move. We fixate on a point and move relative to that. The dorsal stream is well set up to control movements in this way.
September 2, 2025 at 7:07 AM
A bit more detail here: www.youtube.com/watch?v=Q5XN... and in a related paper: doi.org/10.1098/rstb...
Practice talk, iNav 2024
December 18, 2024 at 9:50 AM
The solution is likely to involve abandoning 3D coordinate frames and transformations. Instead, egocentric and allocentric tasks can be solved in a space of images or sensory states. This is what is done in reinforcement learning, which is probably a better model for biology than SLAM.
December 18, 2024 at 9:50 AM
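A minimal sketch of what "solving the task in a space of sensory states" means in reinforcement learning (my own toy example, not from the post): tabular Q-learning where each state is an opaque observation ID, so a goal is reached without ever building a map or a 3D coordinate frame.

```python
# Toy sketch: Q-learning over opaque observation IDs in a 1-D corridor.
# No coordinates or map are built, only a policy: observation -> action value.
import random

random.seed(0)

N = 5                    # corridor of observations 0..N-1
GOAL = N - 1             # the goal observation
ACTIONS = (+1, -1)       # step right / left
alpha, gamma, eps = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for _ in range(300):
    s = random.randrange(N - 1)            # start anywhere except the goal
    for _ in range(100):                   # step cap so episodes terminate
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
print(policy)  # each non-goal observation should map to +1 (toward the goal)
```

The learned policy is exactly a set of context-dependent actions over sensory states, with nothing resembling a SLAM-style map anywhere in the loop.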
This review asks good questions and summarises well our lack of progress in answering them. The fundamental problem is the failure to conceptualise what a biologically plausible alternative to SLAM might look like.
December 18, 2024 at 9:50 AM
Not as daft as it might seem. At least, when discussing where the interesting complexity lies.
December 17, 2024 at 12:56 AM
Overall, my take is that signals relating to the current and upcoming retinal image, and the task, are highly relevant for both HPC and PPC, rather than ‘place’ _per se_. Also, differences between processing in retinotopic areas (PPC) versus HPC are less dramatic than has been supposed in the past.
December 6, 2024 at 5:06 PM
These points are relevant for hippocampal as well as PPC cells.
December 6, 2024 at 5:06 PM
Visual responses can be ‘erroneously interpreted as place codes’:
December 6, 2024 at 5:06 PM
Conclusion: Overall, PPC and HPC responses were remarkably similar. Certainly, the original adage that PPC encodes egocentric and HPC allocentric coordinate frames seems inconsistent with these results.
December 6, 2024 at 5:06 PM
Not really, though (maps)
December 1, 2024 at 9:23 PM
Muryy et al. (2020): doi.org/10.1016/j.vi...
November 27, 2024 at 1:02 PM
Other information changes rapidly and so should give a fine-scale refinement of the location estimate. This would tally with other types of biological representation. Is a coarse-to-fine hierarchy noticeable in policies built up with RL?
November 27, 2024 at 11:27 AM
In particular, I am interested in the extent to which there is a coarse-to-fine structure in embedding space. Some sensory information changes slowly as the agent moves (e.g. the angles between distant objects); this should give a coarse-scale location estimate.
November 27, 2024 at 11:27 AM
I have been out of the loop since then, but I know RL is used a lot in navigation, including for drones. Is there a similar analysis of the structure of the embedding space (e.g. t-SNE) underlying the policy in these more modern RL systems?
November 27, 2024 at 11:27 AM
By contrast, there was very little information about the camera location.
November 27, 2024 at 11:27 AM
Our lab looked in detail at one of the early papers using RL to learn to navigate to a target image (Zhu et al 2017, doi.org/10.1109/ICRA...). We showed that the feature vectors it had learned were clustered in the embedding space according to the target image.
Target-driven visual navigation in indoor scenes using deep reinforcement learning
November 27, 2024 at 11:27 AM
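The clustering that a t-SNE plot makes visible can also be quantified directly. A hypothetical sketch with synthetic data (not Zhu et al.'s actual feature vectors): compare mean within-target and between-target distances in the embedding space; clustering by target image means the former is much smaller.

```python
# Synthetic sketch: fake "feature vectors" drawn as one Gaussian blob per
# target image, then a within- vs between-target distance comparison.
import numpy as np

rng = np.random.default_rng(0)
n_targets, per_target, dim = 4, 20, 32

centres = rng.normal(0.0, 5.0, size=(n_targets, dim))
feats = np.concatenate([c + rng.normal(0.0, 0.5, size=(per_target, dim))
                        for c in centres])
labels = np.repeat(np.arange(n_targets), per_target)

d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)  # pairwise distances
diff = labels[:, None] != labels[None, :]   # pairs with different targets
same = ~diff
np.fill_diagonal(same, False)               # drop self-pairs

within, between = d[same].mean(), d[diff].mean()
print(within, between)  # clustering by target => within << between
```

A real analysis would substitute the network's learned feature vectors for the synthetic blobs; the within/between ratio then summarises numerically what the t-SNE scatter shows visually.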