@bqian.bsky.social
If you’d like to learn more, swing by our poster on Friday (tomorrow) at 11am, East Exhibit Hall #3807, or at the NeuroAI workshop on Saturday at 3:30pm, West Ballroom B! (11/n, end)
December 12, 2024 at 7:51 PM
Our results illustrate the challenges inherent in accurately uncovering neural mechanisms from single-trial data, and suggest the need for new methods of validating data-constrained models for neural dynamics. (10/n)
With nonlinear student and teacher dynamics, mismatches can be even more extreme, leading to the spurious discovery of limit cycles and incorrectly identified stable fixed points. This occurs across a wide range of fitting methods. (9/n)
When the teacher network’s connectivity is non-normal, a data-constrained student under partial observation may spuriously fit transient dynamics using attractor-like dynamics. We show this analytically in the cases of a feedforward chain (top) and low-rank teacher connectivity (bottom). (8/n)
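The non-normal transient effect can be seen in a few lines. This is my own illustrative sketch, not the authors' analysis: it compares a normal (diagonal) matrix and a feedforward chain with the same eigenvalues (all equal to an assumed leak of 0.9, with an assumed chain weight of 2.0), and shows that only the chain produces a large transient hump, which a model fit to a finite window could mistake for attractor dynamics.

```python
import numpy as np

# Two connectivity matrices with identical spectra (every eigenvalue = lam):
# one normal (diagonal), one non-normal (a feedforward chain with strong links).
# Parameters here are illustrative assumptions, not from the paper.
N, lam, w = 8, 0.9, 2.0
W_normal = lam * np.eye(N)
W_chain = lam * np.eye(N) + w * np.diag(np.ones(N - 1), k=-1)

# Track how an impulse in the first unit evolves under each connectivity.
x_n = np.zeros(N); x_n[0] = 1.0
x_c = x_n.copy()
norms_n, norms_c = [], []
for _ in range(30):
    x_n, x_c = W_normal @ x_n, W_chain @ x_c
    norms_n.append(np.linalg.norm(x_n))
    norms_c.append(np.linalg.norm(x_c))

# The normal system decays monotonically from step one, while the non-normal
# chain shows large transient amplification before decaying, despite having
# exactly the same eigenvalues.
print(f"peak activity -- normal: {max(norms_n):.2f}, chain: {max(norms_c):.2f}")
```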
Then, in the analytically tractable setting of linear RNNs driven by white noise, we show that these mismatches arise even when the student and teacher networks have matching single-unit dynamics. (7/n)
As a motivating example, we show that fitting a low-dimensional linear dynamical system to simulated recordings of a feedforward chain performing an integration task leads to the spurious discovery of line attractor-like dynamics. (6/n)
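A toy version of this example can be sketched in a few lines of numpy. This is my own minimal reconstruction under assumed parameters (a 10-unit discrete-time leaky chain, white-noise input, a scalar student), not the authors' exact task or fitting procedure: all teacher eigenvalues equal the single-unit leak, yet a one-dimensional linear fit to the last unit recovers a much slower, near-attractor eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed teacher: a discrete-time feedforward chain of leaky units,
#   x[t+1, 0] = lam * x[t, 0] + u[t]
#   x[t+1, i] = lam * x[t, i] + x[t, i-1]   for i >= 1.
# The connectivity is non-normal; every eigenvalue equals lam.
N, T, lam = 10, 20000, 0.9
u = rng.standard_normal(T)               # white-noise input drive

x = np.zeros((T + 1, N))
for t in range(T):
    x[t + 1] = lam * x[t]
    x[t + 1, 0] += u[t]
    x[t + 1, 1:] += x[t, :-1]

# Partial observation: record only the last unit of the chain.
y = x[:, -1]

# Student: a 1-D linear system y[t+1] ~ a * y[t], fit by least squares.
a = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# Every teacher mode decays at rate lam = 0.9, but the fitted student mode
# is much slower (a close to 1): the chain's transient dynamics masquerade
# as near-line-attractor dynamics in the data-constrained model.
print(f"teacher eigenvalue: {lam},  fitted student eigenvalue: {a:.3f}")
```

The fitted coefficient exceeds the leak because the observed unit sits at the end of a cascade of filters, so its autocorrelation decays far more slowly than any individual teacher mode.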
Here we show that observing only a subset of neurons in a circuit can create mechanistic mismatches between a simulated teacher network and a data-constrained student. (5/n)
While simultaneous recordings of the activity of hundreds to thousands of neurons can now be obtained at high spatiotemporal resolution, such recordings still capture only a tiny fraction of the neurons in most cortical circuits. (4/n)
These data-constrained models are then dissected via dynamical systems analysis to arrive at conclusions about mechanisms underlying neural computations. How reliable are the conclusions derived from this procedure? (3/n)
An increasingly popular approach for understanding the dynamics of neural circuits has been to train models (e.g. RNNs, latent dynamical systems models) to reproduce experimental recordings of neural activity. (2/n)