Janis Keck
keckjanis.bsky.social
phd student in computational neuroscience, interested in geometric principles of biological & artificial learning || looking for postdoc positions
Symmetric learning rules might also explain why place cells in CA3 show less backward shift
(an indicator of the direction of prediction in the representation) than those in CA1, an empirical effect that we can reproduce in our model. 6/n
June 25, 2025 at 1:19 PM
This suggests symmetric learning rules provide a powerful inductive bias for problems with inherent symmetry, like navigation in metric spaces. Interestingly, in an asymmetric environment (directed graph), the classic SR performs better. 5/n
June 25, 2025 at 1:19 PM
Somewhat surprisingly, these symmetrized successor representations can still be used to solve navigation tasks.
Crucially, when we test these SRs on novel navigation targets without relearning, the symmetrized version generalizes better than the classical one. 4/n
June 25, 2025 at 1:19 PM
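The generalization test above can be sketched in a few lines: once an SR matrix M is learned, a value function for any new goal is just V = M @ r, with reward placed only at the goal, so no relearning is needed. This is a minimal illustrative sketch (the environment, names, and greedy read-out are my assumptions, not the paper's code):

```python
import numpy as np

def greedy_step_to_goal(M, adjacency, goal):
    """Derive behaviour for a brand-new goal from a learned SR M without
    relearning: V = M @ r with reward only at the goal, then step to the
    highest-valued neighbour. (Illustrative sketch, not the paper's code.)"""
    r = np.zeros(M.shape[0])
    r[goal] = 1.0
    V = M @ r  # value of each state under the new reward
    return {s: max(nbrs, key=lambda t: V[t]) for s, nbrs in adjacency.items()}

# Hypothetical environment: random walk on a 4-state path graph 0-1-2-3.
T = np.array([[0., 1., 0., 0.],
              [.5, 0., .5, 0.],
              [0., .5, 0., .5],
              [0., 0., 1., 0.]])
M = np.linalg.inv(np.eye(4) - 0.9 * T)  # closed-form SR of the walk
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
policy = greedy_step_to_goal(M, adjacency, goal=3)  # steps toward state 3
```

Swapping in a different `goal` reuses the same M, which is exactly the "novel targets without relearning" setting the post describes.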
As one might expect, such symmetric learning rules lead to successor representations that are themselves symmetrized. This means they reflect a transition structure different from the one the agent is actually experiencing. 3/n
June 25, 2025 at 1:19 PM
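One way to see the "different transition structure" point concretely: the classic SR has the closed form M = (I − γT)⁻¹, and symmetrizing learning amounts (under one plausible reading, which is my assumption here) to computing the SR of a symmetrized transition matrix. On a directed cycle, the resulting M becomes symmetric even though the agent only ever moves one way:

```python
import numpy as np

def successor_representation(T, gamma=0.9):
    """Closed-form SR of transition matrix T: M = (I - gamma * T)^-1."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

def symmetrize(T):
    """Average T with its transpose and renormalize rows -- one plausible
    reading of the symmetrized structure (illustrative assumption)."""
    S = (T + T.T) / 2
    return S / S.sum(axis=1, keepdims=True)

# Directed 3-cycle: the agent can only move 0 -> 1 -> 2 -> 0.
T = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
M_classic = successor_representation(T)             # asymmetric, as experienced
M_sym = successor_representation(symmetrize(T))     # symmetric, unlike experience
```

The symmetric M assigns predictive weight to predecessor states the agent never actually transitions into next, which is precisely a structure different from the one being experienced.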
We started with a simple question: What happens to successor representations if we learn them with a learning rule that is symmetric in time, that is, invariant to the order of inputs? This question is particularly relevant for hippocampal subregions like CA3, where such rules might be at play. 2/n
June 25, 2025 at 1:19 PM
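The thread doesn't spell out the update rule, but the idea of a temporally symmetric SR learning rule can be sketched with the standard TD update plus its time-reversed counterpart. The `symmetric` variant below is a minimal illustrative assumption, not necessarily the exact rule used in the paper:

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9, symmetric=False):
    """One TD update of the successor representation M on transition s -> s_next.

    Classic rule: only row s (the predecessor) is updated toward s_next.
    Symmetric variant (an illustrative assumption): the same update is also
    applied with the roles of s and s_next swapped, so learning is invariant
    to the temporal order of the two inputs.
    """
    n = M.shape[0]
    e = np.eye(n)
    M[s] += alpha * (e[s] + gamma * M[s_next] - M[s])
    if symmetric:
        M[s_next] += alpha * (e[s_next] + gamma * M[s] - M[s_next])
    return M
```

With `symmetric=False` only the predecessor's row changes after a transition; with `symmetric=True` both rows are pulled toward each other, which is the order-invariance the post attributes to CA3-like plasticity.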