Olivier Codol
@oliviercodol.bsky.social
Post-doc at Mila & U de Montréal in Guillaume Lajoie & Matt Perich's labs
Focus on neuroscience, RL for motor learning, neural control of movement, NeuroAI.
“Edge of chaos” dynamics have long been recognized as a computationally potent dynamical regime: one that avoids vanishing gradients during learning and affords a system greater memory and expressivity. This stark difference surprised us, and we think it can help explain our results on neural adaptation.
November 6, 2025 at 2:10 AM
Indeed, Lyapunov exponents for RL models largely stay near 0, showing these networks’ dynamics lie at the edge of chaos. SL models’ dynamics, in contrast, are contractive and orderly, keeping very little information in memory for long and showing stereotyped expressivity.
November 6, 2025 at 2:10 AM
Does this mean SL models are very orderly, while RL models lie at the interface between order and chaos? To confirm this formally, we looked at Lyapunov exponents, which tell us how fast nearby states diverge. Unlike Jacobians, they tell us about long-horizon, not just local, dynamics.
November 6, 2025 at 2:10 AM
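For intuition, the largest Lyapunov exponent of a toy recurrent network can be estimated with a Benettin-style procedure: carry a tiny perturbation along the trajectory and renormalize it at every step, averaging the log growth rates. This is only a sketch on a random tanh network — the `max_lyapunov_exponent` helper, the gains, and the sizes are illustrative choices, not the paper's actual SL/RL models:

```python
import numpy as np

def max_lyapunov_exponent(W, x0, n_steps=2000, eps=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent
    for the discrete map x_{t+1} = tanh(W @ x_t)."""
    rng = np.random.default_rng(0)
    x = x0.copy()
    d = rng.standard_normal(x.shape)
    d *= eps / np.linalg.norm(d)          # tiny initial perturbation
    log_growth = 0.0
    for _ in range(n_steps):
        x_next = np.tanh(W @ x)
        d_next = np.tanh(W @ (x + d)) - x_next
        norm = np.linalg.norm(d_next)
        log_growth += np.log(norm / eps)  # log of one-step expansion
        d = d_next * (eps / norm)         # renormalize the perturbation
        x = x_next
    return log_growth / n_steps

rng = np.random.default_rng(1)
n = 100
x0 = rng.standard_normal(n)
# Gain well below 1 -> contractive regime; gain ~1 -> edge of chaos.
W_orderly = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
W_edge = 1.0 * rng.standard_normal((n, n)) / np.sqrt(n)
lam_orderly = max_lyapunov_exponent(W_orderly, x0)
lam_edge = max_lyapunov_exponent(W_edge, x0)
print(lam_orderly)  # clearly negative: orderly, contractive dynamics
print(lam_edge)     # near zero: edge of chaos
```

A negative exponent means nearby states converge (information about a perturbation is lost); an exponent near 0 means perturbations neither die out nor explode, matching the edge-of-chaos picture described above.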
We looked at local dynamics around fixed points over time. This showed that SL models’ fixed points are indeed very stable, with nearly all modes of their eigenspectrum <1. RL models showed many more self-sustaining modes ≈1, again demonstrating isometric dynamics.
November 6, 2025 at 2:10 AM
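The local analysis described above can be sketched on a toy network: find a fixed point of a tanh map, linearize there, and inspect the eigenvalue magnitudes of the Jacobian relative to 1. The fixed-point solver and network here are hypothetical stand-ins, not the trained models from the paper:

```python
import numpy as np

def find_fixed_point(W, x0, n_iter=5000, lr=0.1):
    """Toy fixed-point search for x = tanh(W x) by damped iteration."""
    x = x0.copy()
    for _ in range(n_iter):
        x = (1 - lr) * x + lr * np.tanh(W @ x)
    return x

def jacobian_spectrum(W, x_star):
    """Eigenvalue magnitudes of the Jacobian of x -> tanh(W x) at x_star.
    Since d tanh(u)/du = 1 - tanh(u)^2, J = diag(1 - tanh(W x*)^2) @ W."""
    u = np.tanh(W @ x_star)
    J = (1 - u**2)[:, None] * W
    return np.abs(np.linalg.eigvals(J))

rng = np.random.default_rng(0)
n = 80
W = 0.6 * rng.standard_normal((n, n)) / np.sqrt(n)  # contractive regime
x_star = find_fixed_point(W, rng.standard_normal(n))
mags = jacobian_spectrum(W, x_star)
print(mags.max())  # all modes < 1: every direction decays, a stable fixed point
```

All eigenvalue magnitudes below 1 is the "SL-like" stable case; modes sitting at ≈1 would be self-sustaining, neither decaying nor growing, as reported for the RL models.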
A dynamical system could recover perfectly from a state perturbation, or it could expand following that perturbation. It turns out supervised learning (SL) models do the former, while reinforcement learning (RL) models do something in between: they act as isometric systems.
November 6, 2025 at 2:10 AM
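The contractive-versus-isometric distinction is easy to see in a minimal linear sketch (purely illustrative, not the paper's models): a map with all singular values below 1 shrinks a perturbation back to nothing, whereas an orthogonal (isometric) map carries it forward with its size unchanged.

```python
import numpy as np

def perturbation_norms(A, n_steps=50, seed=0):
    """Track ||d_t|| for d_{t+1} = A d_t, starting from a unit perturbation."""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(A.shape[0])
    d /= np.linalg.norm(d)
    norms = []
    for _ in range(n_steps):
        d = A @ d
        norms.append(np.linalg.norm(d))
    return np.array(norms)

rng = np.random.default_rng(1)
n = 20
# Orthogonal matrix scaled by 0.8: every direction shrinks ("SL-like").
A_contractive = 0.8 * np.linalg.qr(rng.standard_normal((n, n)))[0]
# Orthogonal matrix: norms are preserved exactly ("RL-like", isometric).
A_isometric = np.linalg.qr(rng.standard_normal((n, n)))[0]
print(perturbation_norms(A_contractive)[-1])  # ~0, perturbation forgotten
print(perturbation_norms(A_isometric)[-1])    # ~1.0, perturbation carried along
```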
But a biological brain receives an ever-changing stream of inputs, rarely ever reducing to steady-state inputs. Our models reflect that, and their inputs are time varying.
So we took a slightly different approach, and asked how fixed points evolved over time and over perturbed neural states.
November 6, 2025 at 2:10 AM
This similarity to NHP neural recordings held not only for geometric similarity metrics (CCA), but also for dynamical similarity. Importantly, it was only evident when our models were trained to control biomechanically realistic effectors.
November 6, 2025 at 2:10 AM
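The geometric comparison mentioned above can be sketched with a minimal CCA on synthetic data: two (time × units) activity matrices driven by shared latents score high canonical correlations, while an unrelated population does not. Everything here (the QR-based helper, shapes, noise levels) is an illustrative toy, not the paper's analysis pipeline:

```python
import numpy as np

def mean_cca_similarity(X, Y, n_components=10):
    """Mean of the top canonical correlations between two (time x units)
    activity matrices, via QR decomposition of the centered data."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # Singular values of Qx^T Qy are the canonical correlations.
    corrs = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return corrs[:n_components].mean()

rng = np.random.default_rng(0)
T, n_units = 200, 30
latent = rng.standard_normal((T, 5))
# Two "populations" driven by shared latents plus private noise.
X = latent @ rng.standard_normal((5, n_units)) + 0.1 * rng.standard_normal((T, n_units))
Y = latent @ rng.standard_normal((5, n_units)) + 0.1 * rng.standard_normal((T, n_units))
Z = rng.standard_normal((T, n_units))  # unrelated control population
sim_shared = mean_cca_similarity(X, Y)
sim_control = mean_cca_similarity(X, Z)
print(sim_shared)   # high: shared latent structure
print(sim_control)  # lower: only chance-level subspace overlap
```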
This paper even reports some tasks with an upper bound of 50 bits/sec. So 10 bits/sec isn't quite a "limit" even by that paper's count.
December 18, 2024 at 1:00 AM
Winter is here folks! Excited to do some awesome science with the best view ever, Montréal in the snow.
December 5, 2024 at 2:51 PM
Exciting morning for #NeuroAI at #MAIN2024!
Talk session on neuro foundation models chaired by none other than @glajoie.bsky.social
Looking forward to the upcoming panel discussion
October 24, 2024 at 3:32 PM
Finally, we show that the neural representations produced by RL have stabilizing properties when fine-tuning to new environmental dynamics. Unlike with supervised learning, this leads to representational reorganization that mirrors cortical plasticity in monkeys.
October 7, 2024 at 5:04 PM
A long-standing question in psych and neuro is what serves as a dominant "teaching signal" over which to optimize when learning new skills.
Instead of approaching this behaviourally, we compared monkey neural recordings to modelling predictions under the same objective function using MotorNet.
October 7, 2024 at 5:03 PM