Shibhansh Dohare
@shibhansh.bsky.social
AI researcher at University of Alberta.
https://shibhansh.github.io/
The timescale for observing plasticity loss depends on the hyperparameters. Reducing the learning rate or the replay ratio reduces plasticity loss. But I think a good rule of thumb is that the more sample-efficient the hyperparameters are initially, the faster the network loses plasticity.
January 14, 2025 at 4:54 PM
Reposted by Shibhansh Dohare
But injecting noise into the weights of quiescent neurons can:

www.nature.com/articles/s41...

So a bit of random homeostatic plasticity should do the trick.
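A minimal sketch of this idea, assuming a simple per-unit activity criterion (the function name, threshold, and noise scale below are illustrative choices, not taken from the paper): noise is injected only into the incoming weights of units whose recent mean activation is near zero.

```python
import numpy as np

def inject_noise_quiescent(weights, activations, act_threshold=1e-3,
                           noise_std=0.01, rng=None):
    """Add Gaussian noise to the incoming weights of quiescent units.

    weights:     (in_dim, n_units) incoming weight matrix of a layer
    activations: (batch, n_units) recent activations of that layer
    A unit is treated as quiescent if its mean absolute activation
    over the batch falls below act_threshold.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean_act = np.abs(activations).mean(axis=0)   # per-unit activity
    quiescent = mean_act < act_threshold          # boolean mask over units
    noisy = weights.copy()
    # Perturb only the columns (incoming weights) of quiescent units.
    noisy[:, quiescent] += rng.normal(
        0.0, noise_std, size=(weights.shape[0], int(quiescent.sum()))
    )
    return noisy
```

Active units are left untouched, so learned features are preserved; only dead units get a random "kick" that can restart learning, which is the homeostatic-plasticity intuition in the post.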
Loss of plasticity in deep continual learning - Nature
November 27, 2024 at 3:00 PM