GerstnerLab
@gerstnerlab.bsky.social
The Laboratory of Computational Neuroscience @EPFL studies models of neurons, networks of neurons, synaptic plasticity, and learning in the brain.
Pinned
GerstnerLab
@gerstnerlab.bsky.social
· Aug 8
From Spikes To Rates
YouTube video by Gerstner Lab
youtu.be
Is it possible to go from spikes to rates without averaging?
We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!
Presented at Gatsby Neural Dynamics Workshop, London.
Reposted by GerstnerLab
This was a lot of fun! From my side, it started with a technical Q: what's the relation between the two-sided cavity method and path integrals? Turns out it's a fluctuation correction - and amazingly, this also enables the "O(N) rank" theory by @david-g-clark.bsky.social and @omarschall.bsky.social. 🤯
Now in PRX: Theory linking connectivity structure to collective activity in nonlinear RNNs!
For neuro fans: conn. structure can be invisible in single neurons but shape pop. activity
For low-rank RNN fans: a theory of rank=O(N)
For physics fans: fluctuations around DMFT saddle⇒dimension of activity
Connectivity Structure and Dynamics of Nonlinear Recurrent Neural Networks
The structure of brain connectivity predicts collective neural activity, with a small number of connectivity features determining activity dimensionality, linking circuit architecture to network-level...
journals.aps.org
November 5, 2025 at 9:15 AM
Lab members are at the Bernstein conference @bernsteinneuro.bsky.social with 9 posters! Here’s the list:
TUESDAY 16:30 – 18:00
P1 62 “Measuring and controlling solution degeneracy across task-trained recurrent neural networks” by @flavioh.bsky.social
September 30, 2025 at 9:29 AM
Reposted by GerstnerLab
New in @pnas.org: doi.org/10.1073/pnas...
We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”
Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.
#cogsci #neuroskyence
September 28, 2025 at 11:07 AM
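For readers unfamiliar with the "noisy-TV" problem, the numpy sketch below is a toy illustration of the effect the post describes (a cartoon, not the paper's task or novelty model): a stochastic state emits a fresh observation on every visit, so a count-based novelty bonus over observations never habituates there, while deterministic states do.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy setup (not the paper's 61-state task): three deterministic states that
# always emit the same observation, plus one "noisy-TV" state that emits a
# fresh random observation on every visit.
counts = Counter()  # visit counts per *observation*
novelty_log = {s: [] for s in ["det-A", "det-B", "det-C", "noisy-TV"]}

for t in range(400):
    for s in ["det-A", "det-B", "det-C"]:
        obs = s                        # deterministic: one fixed observation
        counts[obs] += 1
        novelty_log[s].append(1.0 / np.sqrt(counts[obs]))
    obs = f"tv-{rng.integers(10**9)}"  # noisy TV: essentially never repeats
    counts[obs] += 1
    novelty_log["noisy-TV"].append(1.0 / np.sqrt(counts[obs]))

for s, log in novelty_log.items():
    print(f"{s:8s} novelty after 400 visits: {log[-1]:.3f}")
# Deterministic states habituate (novelty -> 1/sqrt(400) = 0.05), while the
# noisy-TV's observations are always new (novelty stays at 1.0), so a
# count-based novelty bonus keeps drawing the agent back to it.
```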
Reposted by GerstnerLab
🎉 "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 🎉
Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.
www.biorxiv.org/content/10.1...
1/2
September 19, 2025 at 8:01 AM
🎉 "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 🎉
Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.
www.biorxiv.org/content/10.1...
1/2
Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.
www.biorxiv.org/content/10.1...
1/2
🧠 “You never forget how to ride a bike”, but how is that possible?
Our study proposes a bio-plausible meta-plasticity rule that shapes synapses over time, enabling selective recall based on context
Context selectivity with dynamic availability enables lifelong continual learning
“You never forget how to ride a bike” – but how is that possible? The brain is able to learn complex skills, stop practicing for years, learn other…
www.sciencedirect.com
September 4, 2025 at 4:00 PM
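The actual rule is in the linked article; purely as a cartoon of the general idea (every name and number below is hypothetical), the sketch gives each synapse a context-dependent "availability" that gates both learning and recall, so a skill learned in one context is shielded from overwriting in another.

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn, n_ctx = 100, 2

w = np.zeros(n_syn)  # synaptic weights
# Hypothetical availability gates: each context can modify/read only its own
# random subset of synapses (a cartoon, not the paper's dynamics).
avail = rng.random((n_ctx, n_syn)) < 0.5

def train(ctx, target, lr=0.5, steps=50):
    """Updates flow only through synapses available in this context."""
    global w
    for _ in range(steps):
        w += lr * avail[ctx] * (target - w)

def recall(ctx):
    """Read out only the synapses available in the current context."""
    return avail[ctx] * w

train(0, target=+1.0)  # learn a "skill" in context 0 (e.g. ride a bike)
train(1, target=-1.0)  # much later, learn something else in context 1
# Context 0's memory survives context 1's training except where gates overlap:
overlap = (avail[0] & avail[1]).mean()
print("context-0 recall mean:", recall(0).mean(), "| gate overlap:", overlap)
```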
Reposted by GerstnerLab
So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰
Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩🔬
August 25, 2025 at 4:18 PM
Reposted by GerstnerLab
Attending #CCN2025?
Come by our poster in the afternoon (4th floor, Poster 72) to talk about the sense of control, empowerment, and agency. 🧠🤖
We propose a unifying formulation of the sense of control and use it to empirically characterize the human subjective sense of control.
🧑🔬🧪🔬
August 13, 2025 at 8:40 AM
Is it possible to go from spikes to rates without averaging?
We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!
Presented at Gatsby Neural Dynamics Workshop, London.
From Spikes To Rates
YouTube video by Gerstner Lab
youtu.be
August 8, 2025 at 3:25 PM
Reposted by GerstnerLab
Excited to present at the PIMBAA workshop at #RLDM2025 tomorrow!
We study curiosity using intrinsically motivated RL agents and developed an algorithm to generate diverse, targeted environments for comparing curiosity drives.
Preprint (accepted but not yet published): osf.io/preprints/ps...
June 11, 2025 at 8:09 PM
Reposted by GerstnerLab
Stoked to be at RLDM! Curious how novelty and exploration are impacted by generalization across similar stimuli? Then don't miss my flash talk in the PIMBAA workshop (tmr at 10:30, E McNabb Theatre) or stop by my poster tmr (#74)! Looking forward to chatting 🤩
www.biorxiv.org/content/10.1...
Representational similarity modulates neural and behavioral signatures of novelty
Novelty signals in the brain modulate learning and drive exploratory behaviors in humans and animals. While the perceived novelty of a stimulus is known to depend on previous experience, the effect of...
www.biorxiv.org
June 11, 2025 at 8:41 PM
Reposted by GerstnerLab
Our new preprint 👀
The firing of neural populations is high-dim even if their subthreshold activity is low-dim! This work by @bio-emergent.bsky.social and @haydari.bsky.social shows how, with a solvable model, a data analysis technique, and data from mouse visual cortex: www.biorxiv.org/content/10.1...
High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model
Computation in recurrent networks of neurons has been hypothesized to occur at the level of low-dimensional latent dynamics, both in artificial systems and in the brain. This hypothesis seems at odds ...
www.biorxiv.org
June 9, 2025 at 7:32 PM
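The solvable model is in the preprint; the sketch below (a toy illustration of the headline effect, not the paper's model) projects a 2-dimensional latent trajectory into N "subthreshold" voltages, then adds a pointwise nonlinearity and Poisson spiking, and compares dimensionality via the participation ratio.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 5000, 200

# 2-D latent dynamics (a damped noisy rotation), projected to N neurons.
z = np.zeros((T, 2))
angle = 0.1
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
for t in range(1, T):
    z[t] = 0.98 * z[t - 1] @ R.T + 0.1 * rng.standard_normal(2)

U = rng.standard_normal((2, N)) / np.sqrt(2)
v = z @ U                    # "subthreshold" activity: exactly rank 2
rates = np.exp(v - 2.0)      # pointwise nonlinearity
spikes = rng.poisson(rates)  # spiking observations

def participation_ratio(x):
    lam = np.linalg.eigvalsh(np.cov(x.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

print("PR of subthreshold:", participation_ratio(v))       # ~2
print("PR of spike counts:", participation_ratio(spikes))  # much larger
```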
Reposted by GerstnerLab
Interested in high-dim chaotic networks? Ever wondered about the structure of their state space? @jakobstubenrauch.bsky.social has answers - from a separation of fixed points and dynamics onto distinct shells to a shared lower-dim manifold and linear prediction of dynamics.
(1/3) How to analyse a dynamical system? Find its fixed points, study their properties!
How to analyse a *high-dimensional* dynamical system? Find its fixed points, study their properties!
We do that for a chaotic neural network! Finally published: doi.org/10.1103/Phys...
Fixed point geometry in chaotic neural networks
Understanding the high-dimensional chaotic dynamics occurring in complex biological systems such as recurrent neural networks or ecosystems remains a conceptual challenge. For low-dimensional dynamics...
doi.org
June 10, 2025 at 7:45 PM
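As a minimal sketch of this kind of analysis (the standard chaotic rate network dx/dt = -x + J tanh(x), not the paper's full theory), one can hunt for fixed points with a root finder started from random states and count unstable directions from the Jacobian spectrum:

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(3)
N, g = 100, 2.0  # g > 1: chaotic regime
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

f = lambda x: -x + J @ np.tanh(x)  # dx/dt = -x + J tanh(x)

def jacobian(x):
    # d f_i / d x_j = -delta_ij + J_ij * (1 - tanh(x_j)^2)
    return -np.eye(N) + J * (1 - np.tanh(x) ** 2)

fixed_points = []
for _ in range(20):  # root-finding from random initial states
    sol = root(f, 4 * rng.standard_normal(N), method="hybr")
    if sol.success and not any(np.allclose(sol.x, y, atol=1e-4)
                               for y in fixed_points):
        fixed_points.append(sol.x)

for x in fixed_points:
    lam = np.linalg.eigvals(jacobian(x))
    print(f"|x| = {np.linalg.norm(x):6.2f}, "
          f"unstable directions: {(lam.real > 0).sum()}")
```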
Reposted by GerstnerLab
Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner
theoreticalneuroscience.no/thn22
John Hopfield received the 2024 Physics Nobel prize for his model published in 1982. What is the model all about? @icepfl.bsky.social
December 7, 2024 at 8:24 AM
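For listeners who want the one-screen version of the 1982 model, here is a standard textbook sketch (not from the episode): binary neurons, Hebbian outer-product weights, and asynchronous updates that descend an energy function until the state settles into a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 10
patterns = rng.choice([-1, 1], size=(P, N))  # memories to store

# Hebbian learning: W = (1/N) sum_mu xi^mu (xi^mu)^T, no self-connections.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(x, steps=10 * N):
    """Asynchronous sign updates; each flip never increases the energy
    E(x) = -0.5 x^T W x, so the state settles into an attractor."""
    x = x.copy()
    for i in rng.integers(0, N, size=steps):
        x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Cue with a corrupted version of pattern 0 (25% of bits flipped).
cue = patterns[0] * np.where(rng.random(N) < 0.25, -1, 1)
out = recall(cue)
print("overlap with stored pattern:", (out @ patterns[0]) / N)  # ~1.0
```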
Reposted by GerstnerLab
A cool EPFL News article was written about our recent neurotheory paper on spikes vs rates!
Super engaging text by science communicator Nik Papageorgiou.
actu.epfl.ch/news/brain-m...
Definitely more accessible than the original physics-style, 4.5-page letter 🤓
journals.aps.org/prl/abstract...
Brain models draw closer to real-life neurons
Researchers at EPFL have shown how rough, biological spiking neural networks can mimic the behavior of brain models called recurrent neural networks. The findings challenge traditional assumptions and...
actu.epfl.ch
January 22, 2025 at 4:05 PM
Reposted by GerstnerLab
Super excited to see my PhD thesis featured by EPFL! 🎓
actu.epfl.ch/news/learnin...
P.S.: There's even a French version of the article! It feels so fancy! 😎 👨🎨 🇫🇷
actu.epfl.ch/news/apprend...
Learning from the unexpected
A researcher at EPFL working at the crossroads of neuroscience and computational science has developed an algorithm that can predict how surprise and novelty affect behavior.
actu.epfl.ch
January 10, 2025 at 2:29 PM
Reposted by GerstnerLab
New round of spike vs rate?
The concentration of measure phenomenon can explain the emergence of rate-based dynamics in networks of spiking neurons, even when no two neurons are the same.
This is what's shown in the last paper of my PhD, out today in Physical Review Letters 🎉 tinyurl.com/4rprwrw5
Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons
Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each ne...
tinyurl.com
January 6, 2025 at 4:45 PM
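A toy numpy illustration of the concentration idea (a cartoon, not the paper's construction): give every neuron its own gain so that no two are the same, drive them with independent Poisson spiking, and the population-averaged synaptic input fluctuates around its deterministic value with relative spread shrinking like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(5)

def input_fluctuation(N, T=200, dt=1e-3):
    """Relative fluctuation of the population-averaged synaptic input in a
    population where no two neurons are the same (each has its own rate)."""
    gains = rng.uniform(20, 80, size=N)  # heterogeneous firing rates (Hz)
    # Independent Poisson spike counts per time bin, one train per neuron.
    spikes = rng.poisson(gains * dt, size=(T, N))
    h = spikes.mean(axis=1) / dt  # population input over time (Hz)
    return h.std() / h.mean()

for N in [100, 1000, 10000, 100000]:
    print(f"N = {N:6d}: relative fluctuation = {input_fluctuation(N):.4f}")
# The fluctuation shrinks ~1/sqrt(N): the summed input concentrates on a
# deterministic value, which is where an effective rate description comes from.
```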
Reposted by GerstnerLab
Pre-print 🧠🧪
Is mechanism modeling dead in the AI era?
ML models trained to predict neural activity fail to generalize to unseen opto perturbations. But mechanism modeling can solve that.
We say "perturbation testing" is the right way to evaluate mechanisms in data-constrained models
1/8
January 8, 2025 at 4:33 PM
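A toy numpy version of the failure mode described above (an illustration, not the paper's models): the recorded neuron is truly driven by x1, but x1 is unrecorded and correlated with a recorded neuron x2. A regression on x2 predicts observational activity well, yet predicts zero response to an "opto" perturbation of x1, which only the true mechanism gets right.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 5000

# Ground truth: y is driven by neuron x1. Observationally, x1 and x2 share
# an upstream drive, so they are strongly correlated.
shared = rng.standard_normal(T)
x1 = shared + 0.3 * rng.standard_normal(T)
x2 = shared + 0.3 * rng.standard_normal(T)
y = x1 + 0.1 * rng.standard_normal(T)

# Suppose x1 is unrecorded: a purely predictive model regresses y on x2
# and looks good on *observational* data.
w = (x2 @ y) / (x2 @ x2)
r2 = 1 - np.var(y - w * x2) / np.var(y)
print(f"observational fit: w = {w:.2f}, R^2 = {r2:.2f}")

# "Opto" perturbation: drive x1 up by +2 while its upstream input (and
# hence x2) stays put. The true response shifts; the prediction does not.
dy_true = 2.0        # mechanistic model: y follows x1 one-to-one
dy_pred = w * 0.0    # regression on x2 sees no change in its input
print("response shift under perturbation, truth vs prediction:",
      dy_true, "vs", dy_pred)
# Matching observational activity is not enough: only a model with the
# correct mechanism (y driven by x1) predicts the perturbed response.
```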