Hugo Ninou
@hugoninou.bsky.social
I am a PhD student working at the intersection of neuroscience and machine learning.
My work focuses on learning dynamics in biologically plausible neural networks. #NeuroAI
9/10
Our work challenges the dominant view that learning must strictly follow a gradient. The diversity of plasticity rules in biology might not be a bug, but a feature—an evolutionary strategy leveraging non-gradient dynamics for more efficient and robust learning. 🌀
October 10, 2025 at 5:53 PM
8/10
In this latter regime, where only a single rule-flipped synapse is introduced in the readout layer, curl descent can even speed up learning in nonlinear networks. This result holds across a wide range of hyper-parameters.
October 10, 2025 at 5:53 PM
7/10
The story is completely different for the readout layer. Surprisingly, even when curl terms make the solution manifold unstable, the network is still able to find other low-error regions!
October 10, 2025 at 5:53 PM
6/10
But beware! If you add too many rule-flipped neurons in the hidden layer of a compressive network, the learning dynamics spiral into chaos, thereby destroying performance. The weights never settle, and the error stays high.
October 10, 2025 at 5:53 PM
5/10
How does this scale up? We used random matrix theory to find that stability depends critically on network architecture. Expansive networks (input layer > hidden layer) are much more robust to curl terms, maintaining stable solutions even with many rule-flipped neurons.
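Not the random-matrix calculation itself, but a crude numerical probe of the same effect under illustrative assumptions (sizes, learning rate, flip counts, and the single seed are mine, not the paper's; which regimes actually destabilize depends on the hyper-parameters):

```python
import numpy as np

def final_loss(n_in, n_hid, n_flip, n_out=2, lr=0.01, steps=4000, seed=0):
    """Curl descent on a linear teacher-student task, flipping the updates of
    all outgoing (readout) weights of n_flip rule-flipped hidden neurons."""
    rng = np.random.default_rng(seed)
    T = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)   # teacher's linear map
    W1 = rng.normal(size=(n_hid, n_in)) * 0.1            # student, small init
    W2 = rng.normal(size=(n_out, n_hid)) * 0.1
    s = np.ones(n_hid)
    s[:n_flip] = -1.0                                    # rule-flipped hidden units
    for _ in range(steps):
        E = W2 @ W1 - T                                  # end-to-end map error
        gW1 = W2.T @ E                                   # dL/dW1
        gW2 = E @ W1.T                                   # dL/dW2
        W1 -= lr * gW1                                   # standard gradient step
        W2 -= lr * gW2 * s[None, :]                      # sign-flipped per presynaptic unit
        if not (np.isfinite(W1).all() and np.isfinite(W2).all()):
            return np.inf                                # diverged: unstable regime
    return 0.5 * np.sum((W2 @ W1 - T) ** 2)

# Expansive (input > hidden) vs. compressive (input < hidden), illustrative sizes.
for n_in, n_hid, tag in [(20, 10, "expansive  "), (10, 20, "compressive")]:
    print(tag, [f"{final_loss(n_in, n_hid, k):.2e}" for k in (0, 2, 6)])
```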
October 10, 2025 at 5:53 PM
4/10
Toy example: In a tiny 2-synapse network, curl descent can escape saddle points and converge faster than gradient descent by temporarily ascending the loss function. But this comes at a cost: half of the optimal solutions become unstable.
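A rough sketch of one such toy, assuming a scalar chain y = w2*w1*x and a flip on w2's update (initial values and hyper-parameters here are illustrative, not the paper's):

```python
import numpy as np

def run(rule, w=(0.5, -0.5), t=1.0, lr=0.05, steps=5000):
    """Scalar chain y = w2*w1*x with loss 0.5*(w2*w1 - t)**2.
    rule='gradient' follows the gradient for both weights;
    rule='curl' flips the sign of w2's update."""
    w1, w2 = w
    s = -1.0 if rule == "curl" else 1.0
    for _ in range(steps):
        e = w2 * w1 - t
        # simultaneous update, with w2's step possibly sign-flipped
        w1, w2 = w1 - lr * w2 * e, w2 - lr * s * w1 * e
    return round(w1, 3), round(w2, 3), round(0.5 * e**2, 6)

# Initialized on the line w1 = -w2, plain gradient descent flows straight into
# the saddle at the origin and gets stuck with high loss, while the flipped
# update rotates the weights off that line and can settle on the stable half
# (|w2| > |w1|) of the solution hyperbola w2*w1 = t.
print("gradient:", run("gradient"))
print("curl    :", run("curl"))
```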
October 10, 2025 at 5:53 PM
3/10
But can networks with such non-gradient learning dynamics still support meaningful optimization? We answer this question with an analytically tractable teacher-student framework using 2-layer feedforward linear networks.
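A sketch of one way such a framework can be set up, assuming whitened Gaussian inputs so the population loss reduces to a Frobenius norm (sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 8, 12, 3                        # illustrative sizes

# A fixed random teacher defines the target linear map.
target = rng.normal(size=(n_out, n_hid)) @ rng.normal(size=(n_hid, n_in)) / n_in

# The student shares the architecture and must match the teacher's map.
W1 = rng.normal(size=(n_hid, n_in)) * 0.1
W2 = rng.normal(size=(n_out, n_hid)) * 0.1

def loss_and_grads(W1, W2):
    """With whitened Gaussian inputs, the population MSE of a linear student
    reduces to a squared Frobenius norm of the end-to-end map mismatch."""
    E = W2 @ W1 - target
    return 0.5 * np.sum(E**2), W2.T @ E, E @ W1.T    # loss, dL/dW1, dL/dW2
```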
October 10, 2025 at 5:53 PM
2/10
This is motivated by the diversity observed in the brain. A given weight-update signal can have opposite effects on a network's computation depending on the postsynaptic neuron (e.g., E/I), which is inconsistent with standard gradient descent.
October 10, 2025 at 5:53 PM
1/10
We define the curl descent learning rule by flipping the sign of the updates given by gradient descent for some weights. These weights are chosen at the start of learning depending on the nature (rule-flipped or not) of the presynaptic neuron.
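A minimal NumPy sketch of what such a rule could look like (the layer sizes, names, and 30% flip fraction are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 10, 5
W = rng.normal(size=(n_post, n_pre)) / np.sqrt(n_pre)

# Mark each presynaptic neuron as rule-flipped (or not) once, before learning.
# All outgoing weights of a flipped neuron then use sign-inverted updates.
flipped = rng.random(n_pre) < 0.3               # illustrative 30% flip fraction
signs = np.where(flipped, -1.0, 1.0)            # one sign per presynaptic neuron

def curl_descent_step(W, grad, lr=0.05):
    """Gradient step whose sign is inverted for the columns (outgoing weights)
    of rule-flipped presynaptic neurons."""
    return W - lr * grad * signs[None, :]
```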
October 10, 2025 at 5:53 PM
Unable to access it, but I would love to read this! Any full-text link 👀?
June 13, 2025 at 2:26 PM
9/9
A huge thank you to my co-first author @SharonIsraely, and to @OriRajchert, @LeeElmaleh, @RanHarel, @FirasMawase, @kadmonj.bsky.social, and @yifatprut.bsky.social for their invaluable contributions and support throughout this journey. 🙏
March 22, 2025 at 6:44 PM
8/n
1️⃣ Our study provides new insights into how cerebellar signals constrain cortical preparatory activity, promoting generalization and adaptation.
2️⃣ We demonstrate that in the absence of cerebellar signals, cortical mechanisms are harnessed to restore adaptation, albeit with reduced efficiency.
March 22, 2025 at 6:35 PM
7/n
⚫ The increased dimensionality under HFS was accompanied by a decrease in generalization performance at both the neural and behavioral levels.
March 22, 2025 at 6:35 PM
6/n
⚫ HFS led to higher-dimensional neural activity, indicating a loss of the structure in neural representations that is crucial for efficient motor learning and adaptation.
March 22, 2025 at 6:35 PM
5/n
⚫ This compensation involved an angular shift in neural activity, suggesting a "re-aiming" strategy to handle the force field in the absence of cerebellar control.
March 22, 2025 at 6:35 PM
4/n
⚫ Under high-frequency stimulation (HFS), we observed a larger difference between force-field (FF) and null-field (NF) neural activity, indicating a compensatory mechanism in the motor cortex that adapts to the perturbation.
March 22, 2025 at 6:35 PM
3/n
⚫ Under high-frequency stimulation (HFS), neural activity was altered in both target-dependent and target-independent ways, showing that cerebellar signals carry task-related information.
March 22, 2025 at 6:35 PM
2/n 🔍 Key Findings:

⚫ Cerebellar Block Impairs Adaptation: Blocking cerebellar outflow with high-frequency stimulation (HFS) of the superior cerebellar peduncle significantly impairs force-field (FF) adaptation, leading to increased motor noise and reduced error sensitivity.
March 22, 2025 at 6:35 PM