GerstnerLab
@gerstnerlab.bsky.social
The Laboratory of Computational Neuroscience @EPFL studies models of neurons, networks of neurons, synaptic plasticity, and learning in the brain.
Pinned
Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.
From Spikes To Rates
YouTube video by Gerstner Lab
youtu.be
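For readers who want the setup behind this claim, here is a minimal sketch of the two model classes being related. The forms below are the standard ones, assumed for illustration; the exact equations and the mapping itself are in the talk.

```latex
% Standard forms (assumed for illustration; see the talk for the exact map).
% A recurrent spiking network: membrane potentials driven by filtered spikes,
\tau \dot{u}_i = -u_i + \sum_{j=1}^{N} J_{ij}\,(\epsilon * S_j)(t),
\qquad S_j(t) = \sum_{f} \delta\!\left(t - t_j^{f}\right),
% versus a recurrent rate network with the same N neurons and weights,
\tau \dot{x}_i = -x_i + \sum_{j=1}^{N} J_{ij}\,\phi(x_j).
% The claim: an exact, neuron-by-neuron change of variables relates the two,
% with no averaging over time, trials, or populations.
```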
Reposted by GerstnerLab
This was a lot of fun! From my side, it started with a technical question: what's the relation between the two-site cavity method and path integrals? Turns out it's a fluctuation correction - and amazingly, this also enables the "O(N) rank" theory by @david-g-clark.bsky.social and @omarschall.bsky.social. 🤯
Now in PRX: Theory linking connectivity structure to collective activity in nonlinear RNNs!
For neuro fans: conn. structure can be invisible in single neurons but shapes pop. activity
For low-rank RNN fans: a theory of rank=O(N)
For physics fans: fluctuations around DMFT saddle⇒dimension of activity
Connectivity Structure and Dynamics of Nonlinear Recurrent Neural Networks
The structure of brain connectivity predicts collective neural activity, with a small number of connectivity features determining activity dimensionality, linking circuit architecture to network-level...
journals.aps.org
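For context, the dynamical mean-field theory (DMFT) saddle mentioned above is, in the classic unstructured case, the reduction of the network to a single self-consistent unit. This is a hedged sketch of that standard setup, not the paper's structured-connectivity generalization.

```latex
% Classic DMFT saddle for \dot{x}_i = -x_i + \sum_j J_{ij}\phi(x_j) with
% i.i.d. Gaussian J_{ij} of variance g^2/N: each unit reduces to
\dot{x}(t) = -x(t) + \eta(t),
\qquad \langle \eta(t)\,\eta(t') \rangle = g^2\, C_{\phi}(t,t'),
\qquad C_{\phi}(t,t') = \langle \phi(x(t))\,\phi(x(t')) \rangle .
% The paper works out fluctuations *around* this saddle and shows they
% carry the connectivity structure that sets the dimension of activity.
```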
November 5, 2025 at 9:15 AM
Lab members are at the Bernstein conference @bernsteinneuro.bsky.social with 9 posters! Here’s the list:

TUESDAY 16:30 – 18:00

P1 62 “Measuring and controlling solution degeneracy across task-trained recurrent neural networks” by @flavioh.bsky.social
September 30, 2025 at 9:29 AM
Reposted by GerstnerLab
New in @pnas.org: doi.org/10.1073/pnas...

We study how humans explore a 61-state environment with a stochastic region that mimics a “noisy-TV.”

Results: Participants keep exploring the stochastic part even when it’s unhelpful, and novelty-seeking best explains this behavior.

#cogsci #neuroskyence
September 28, 2025 at 11:07 AM
Reposted by GerstnerLab
🎉 "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 🎉

Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.

www.biorxiv.org/content/10.1...
1/2
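As background for the title, here is a generic latent-variable setup (an illustrative assumption, not the paper's exact model). With linear readouts, activity driven by d latents spans at most d dimensions, so getting genuinely high-dimensional activity out of low-dimensional latents is the puzzle a solvable model can address.

```latex
% Generic setup (illustrative assumption, not the paper's exact model):
% d-dimensional latent dynamics read out into N-dimensional activity,
\dot{z} = f(z), \quad z \in \mathbb{R}^{d},
\qquad
x_i(t) = g_i\big(z(t)\big) + \xi_i(t), \quad i = 1,\dots,N, \quad d \ll N .
% With linear readouts g_i(z) = w_i^{\top} z, activity spans at most d
% dimensions; nonlinear readouts can inflate the measured dimensionality.
```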
September 19, 2025 at 8:01 AM
🧠 “You never forget how to ride a bike”, but how is that possible?
Our study proposes a bio-plausible meta-plasticity rule that shapes synapses over time, enabling selective recall based on context
Context selectivity with dynamic availability enables lifelong continual learning
“You never forget how to ride a bike” – but how is that possible? The brain is able to learn complex skills, stop the practice for years, learn other…
www.sciencedirect.com
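A loudly hypothetical sketch of the gating idea suggested by the title: each synapse carries an "availability" variable, and only available synapses are plastic in a given context. All names and update rules below are illustrative assumptions, not the paper's rule.

```python
import numpy as np

# Hypothetical sketch of context-gated, availability-limited plasticity.
# The variable names and update rules are illustrative assumptions,
# not the meta-plasticity rule proposed in the paper.

rng = np.random.default_rng(0)
n_syn = 1000
w = rng.normal(0.0, 0.1, n_syn)         # synaptic weights
avail = np.ones(n_syn)                  # availability in [0, 1]

def update(w, avail, pre, post, context_gate, lr=0.01, decay=0.001):
    # Hebbian change, gated by context and scaled by availability:
    # consolidated (low-availability) synapses barely move, which
    # protects old skills while new contexts recruit available synapses.
    dw = lr * context_gate * avail * pre * post
    avail = np.clip(avail - decay * np.abs(dw), 0.0, 1.0)
    return w + dw, avail

pre = rng.random(n_syn)
post = rng.random(n_syn)
gate = (rng.random(n_syn) < 0.2).astype(float)  # context selects 20% of synapses
w, avail = update(w, avail, pre, post, gate)
```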
September 4, 2025 at 4:00 PM
Reposted by GerstnerLab
So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰

Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩‍🔬
Merits of Curiosity: A Simulation Study
Abstract: ‘Why are we curious?’ has been among the central puzzles of neuroscience and psychology in the past decades. A popular hypothesis is that curiosity is driven by intrinsically generated reward signals, which have evolved to support survival in complex environments. To formalize and test this hypothesis, we need to understand the enigmatic relationship between (i) intrinsic rewards (as drives of curiosity), (ii) optimality conditions (as objectives of curiosity), and (iii) environment structures.

Here, we demystify this relationship through a systematic simulation study. First, we propose an algorithm to generate environments that capture key abstract features of different real-world situations. Then, we simulate artificial agents that explore these environments by seeking one of six representative intrinsic rewards: novelty, surprise, information gain, empowerment, maximum occupancy principle, and successor-predecessor intrinsic exploration. We evaluate the exploration performance of these simulated agents regarding three potential objectives of curiosity: state discovery, model accuracy, and uniform state visitation.

Our results show that the comparative performance of each intrinsic reward is highly dependent on the environmental features and the curiosity objective; this indicates that ‘optimality’ in top-down theories of curiosity needs a precise formulation of assumptions. Nevertheless, we found that agents seeking a combination of novelty and information gain always achieve close-to-optimal performance on objectives of curiosity as well as in collecting extrinsic rewards. This suggests that novelty and information gain are two principal axes of curiosity-driven behavior. These results pave the way for the further development of computational models of curiosity and the design of theory-informed experimental paradigms.
dlvr.it
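Since the abstract singles out novelty and information gain as the two principal axes, here is a minimal tabular sketch of how those two intrinsic rewards are commonly computed. The count-based and Dirichlet formulas are standard choices assumed for illustration; the paper's exact definitions may differ.

```python
import numpy as np

# Minimal sketch of two of the six intrinsic rewards named in the abstract.
# Count-based novelty and Dirichlet information gain are standard choices,
# assumed here for illustration; the paper's definitions may differ.

n_states, n_actions = 61, 4
counts = np.zeros(n_states)                       # state visit counts
model = np.ones((n_states, n_actions, n_states))  # Dirichlet(1) transition counts

def novelty(s):
    # Rarely visited states yield larger intrinsic reward.
    return 1.0 / np.sqrt(1.0 + counts[s])

def information_gain(s, a, s_next):
    # KL divergence between the posterior-mean transition model
    # after and before observing the transition (s, a) -> s_next.
    before = model[s, a] / model[s, a].sum()
    after = model[s, a].copy()
    after[s_next] += 1.0
    after /= after.sum()
    return float(np.sum(after * np.log(after / before)))
```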
August 25, 2025 at 4:18 PM
Reposted by GerstnerLab
Attending #CCN2025?
Come by our poster in the afternoon (4th floor, Poster 72) to talk about the sense of control, empowerment, and agency. 🧠🤖

We propose a unifying formulation of the sense of control and use it to empirically characterize the human subjective sense of control.

🧑‍🔬🧪🔬
August 13, 2025 at 8:40 AM
Is it possible to go from spikes to rates without averaging?

We show how to exactly map recurrent spiking networks into recurrent rate networks, with the same number of neurons. No temporal or spatial averaging needed!

Presented at Gatsby Neural Dynamics Workshop, London.
From Spikes To Rates
YouTube video by Gerstner Lab
youtu.be
August 8, 2025 at 3:25 PM
Reposted by GerstnerLab
Excited to present at the PIMBAA workshop at #RLDM2025 tomorrow!
We study curiosity using intrinsically motivated RL agents and develop an algorithm to generate diverse, targeted environments for comparing curiosity drives.

Preprint (accepted but not yet published): osf.io/preprints/ps...
OSF
osf.io
June 11, 2025 at 8:09 PM
Reposted by GerstnerLab
Stoked to be at RLDM! Curious how novelty and exploration are impacted by generalization across similar stimuli? Then don't miss my flash talk in the PIMBAA workshop (tomorrow at 10:30, E McNabb Theatre) or stop by my poster tomorrow (#74)! Looking forward to chatting 🤩

www.biorxiv.org/content/10.1...
Representational similarity modulates neural and behavioral signatures of novelty
Novelty signals in the brain modulate learning and drive exploratory behaviors in humans and animals. While the perceived novelty of a stimulus is known to depend on previous experience, the effect of...
www.biorxiv.org
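One common way to let novelty generalize across similar stimuli is to replace raw visit counts with similarity-weighted pseudo-counts. The Gaussian kernel below is an illustrative assumption, not the preprint's model.

```python
import numpy as np

# Illustrative sketch: novelty that generalizes across similar stimuli
# via kernel-weighted pseudo-counts (an assumption, not the preprint's model).

def kernel_novelty(stimulus, past_stimuli, width=1.0):
    # Each past stimulus contributes to the pseudo-count in proportion
    # to its similarity to the current one, so a new stimulus that
    # resembles familiar ones is experienced as less novel.
    d2 = np.sum((past_stimuli - stimulus) ** 2, axis=1)
    pseudo_count = np.sum(np.exp(-d2 / (2.0 * width ** 2)))
    return 1.0 / np.sqrt(1.0 + pseudo_count)

past = np.random.default_rng(0).normal(size=(50, 3))  # 50 seen stimuli in 3-D
print(kernel_novelty(np.zeros(3), past))              # novelty of a new stimulus
```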
June 11, 2025 at 8:41 PM
Reposted by GerstnerLab
Our new preprint 👀
June 9, 2025 at 7:32 PM
Reposted by GerstnerLab
Interested in high-dim chaotic networks? Ever wondered about the structure of their state space? @jakobstubenrauch.bsky.social has answers - from a separation of fixed points and dynamics onto distinct shells to a shared lower-dim manifold and linear prediction of dynamics.
(1/3) How to analyse a dynamical system? Find its fixed points, study their properties!

How to analyse a *high-dimensional* dynamical system? Find its fixed points, study their properties!

We do that for a chaotic neural network! Finally published: doi.org/10.1103/Phys...
Fixed point geometry in chaotic neural networks
Understanding the high-dimensional chaotic dynamics occurring in complex biological systems such as recurrent neural networks or ecosystems remains a conceptual challenge. For low-dimensional dynamics...
doi.org
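A minimal sketch of the first step the thread describes: finding fixed points of the standard chaotic rate network dx/dt = -x + J tanh(x) by root-finding from random initial conditions. The network form and parameters are the usual ones, assumed here for illustration; the paper analyzes the geometry of the points found this way.

```python
import numpy as np
from scipy.optimize import fsolve

# Minimal sketch: find fixed points of the standard chaotic rate network
#   dx/dt = -x + J @ tanh(x)
# by Newton-type root-finding from random initial conditions.

N, g = 100, 2.0                                   # size and gain (chaotic for g > 1)
rng = np.random.default_rng(1)
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # i.i.d. Gaussian coupling

def velocity(x):
    return -x + J @ np.tanh(x)

fixed_points = []
for _ in range(20):
    x0 = rng.normal(0.0, 1.0, N)
    x_star, _, ok, _ = fsolve(velocity, x0, full_output=True)
    if ok == 1 and np.linalg.norm(velocity(x_star)) < 1e-8:
        fixed_points.append(x_star)

print(f"found {len(fixed_points)} candidate fixed points")
```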
June 10, 2025 at 7:45 PM
Reposted by GerstnerLab
Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner

theoreticalneuroscience.no/thn22

John Hopfield received the 2024 Physics Nobel prize for his model published in 1982. What is the model all about? @icepfl.bsky.social
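For listeners new to the model, the 1982 formulation is compact enough to state here: binary neurons, Hebbian weights storing P patterns, and an energy that asynchronous updates can only decrease, which makes the stored patterns attractors.

```latex
% The 1982 Hopfield model: N binary neurons s_i = \pm 1 storing P patterns \xi^\mu.
w_{ij} = \frac{1}{N} \sum_{\mu=1}^{P} \xi_i^{\mu} \xi_j^{\mu}, \qquad w_{ii} = 0,
\qquad
s_i \leftarrow \operatorname{sgn}\Big( \sum_{j} w_{ij} s_j \Big)
\quad \text{(asynchronous updates)} .
% Each update can only lower the energy
E = -\tfrac{1}{2} \sum_{i \neq j} w_{ij} s_i s_j ,
% so the dynamics settle into attractors near the stored patterns
% (for P not too large relative to N).
```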
December 7, 2024 at 8:24 AM
Reposted by GerstnerLab
A cool EPFL News article was written about our recent neurotheory paper on spikes vs rates!

Super engaging text by science communicator Nik Papageorgiou.
actu.epfl.ch/news/brain-m...

Definitely more accessible than the original physics-style, 4.5-page letter 🤓
journals.aps.org/prl/abstract...
Brain models draw closer to real-life neurons
Researchers at EPFL have shown how rough, biological spiking neural networks can mimic the behavior of brain models called recurrent neural networks. The findings challenge traditional assumptions and...
actu.epfl.ch
January 22, 2025 at 4:05 PM
Reposted by GerstnerLab
Super excited to see my PhD thesis featured by EPFL! 🎓
actu.epfl.ch/news/learnin...

P.S.: There's even a French version of the article! It feels so fancy! 😎 👨‍🎨 🇫🇷
actu.epfl.ch/news/apprend...
Learning from the unexpected
A researcher at EPFL working at the crossroads of neuroscience and computational science has developed an algorithm that can predict how surprise and novelty affect behavior.
actu.epfl.ch
January 10, 2025 at 2:29 PM
Reposted by GerstnerLab
New round of spike vs rate?

The concentration of measure phenomenon can explain the emergence of rate-based dynamics in networks of spiking neurons, even when no two neurons are the same.

This is what's shown in the last paper of my PhD, out today in Physical Review Letters 🎉 tinyurl.com/4rprwrw5
Emergent Rate-Based Dynamics in Duplicate-Free Populations of Spiking Neurons
Can spiking neural networks (SNNs) approximate the dynamics of recurrent neural networks? Arguments in classical mean-field theory based on laws of large numbers provide a positive answer when each ne...
tinyurl.com
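The contrast the post draws, sketched: classical mean-field arguments make population inputs deterministic by averaging over many identical copies of each neuron, whereas concentration of measure lets the same quantity settle to a deterministic value even when every neuron is different. The 1/N scaling below is the usual mean-field convention, assumed for illustration.

```latex
% Classical argument: the recurrent input to neuron i,
h_i(t) = \frac{1}{N} \sum_{j=1}^{N} J_{ij}\, S_j(t),
% becomes deterministic by the law of large numbers when neurons come in
% large groups of identical copies. The paper's point, as summarized above:
% concentration-of-measure arguments yield the same deterministic, rate-like
% population dynamics even when no two neurons are the same.
```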
January 6, 2025 at 4:45 PM
Reposted by GerstnerLab
Pre-print 🧠🧪
Is mechanism modeling dead in the AI era?

ML models trained to predict neural activity fail to generalize to unseen opto perturbations. But mechanism modeling can solve that.

We say "perturbation testing" is the right way to evaluate mechanisms in data-constrained models

1/8
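A minimal sketch of the evaluation logic described in the post: fit a predictive model on unperturbed activity, then score it on trials with an optogenetic-style input it never saw. The linear dynamics model and the additive perturbation below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Illustrative sketch of "perturbation testing": fit on unperturbed
# activity, evaluate on perturbed trials. The linear model and the
# additive perturbation are assumptions, not the paper's setup.

rng = np.random.default_rng(0)
N, T = 20, 500
A_true = 0.95 * np.linalg.qr(rng.normal(size=(N, N)))[0]  # stable dynamics

def simulate(T, pert=None):
    x = np.zeros((T, N))
    for t in range(1, T):
        drive = 0.0 if pert is None else pert[t]
        x[t] = A_true @ x[t - 1] + drive + 0.1 * rng.normal(size=N)
    return x

x_train = simulate(T)                                  # unperturbed recording
A_fit, *_ = np.linalg.lstsq(x_train[:-1], x_train[1:], rcond=None)

pert = np.zeros((T, N)); pert[100:200, :5] = 1.0       # "opto" input to 5 neurons
x_test = simulate(T, pert)
err = np.mean((x_test[1:] - x_test[:-1] @ A_fit) ** 2)
print(f"one-step prediction error under perturbation: {err:.3f}")
```

The fitted model predicts well on unperturbed stretches but mispredicts during the perturbation window, which is the failure mode the post says perturbation testing is designed to expose.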
January 8, 2025 at 4:33 PM