Auguste Schulz
auschulz.bsky.social
CEO of KI macht Schule gGmbH
Previously @mackelab.bsky.social - machine learning in (neuro)science.
We apply our masked VAE approach to Drosophila 🪰 walking behavior, a macaque 🐒 reach task, and a synthetic dataset with access to the ground truth.

We propose calibration checks to evaluate the models' uncertainty estimates and avoid making confidently wrong predictions.
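The thread doesn't spell out the checks, but the basic idea of a calibration check for probabilistic predictions can be sketched in a few lines: compare the nominal coverage of a predictive interval against its empirical coverage on held-out data. Everything below (Gaussian intervals, the noise scales) is illustrative, not the paper's procedure.

```python
import numpy as np

def empirical_coverage(y_true, mu, sigma, z=1.96):
    """Fraction of targets falling inside the Gaussian predictive interval mu +/- z*sigma."""
    inside = np.abs(y_true - mu) <= z * sigma
    return inside.mean()

rng = np.random.default_rng(0)
mu = rng.normal(size=10_000)                      # model's point predictions
y = mu + rng.normal(scale=0.5, size=10_000)       # targets; true noise std = 0.5

cov_good = empirical_coverage(y, mu, sigma=0.5)        # close to the nominal 0.95
cov_overconfident = empirical_coverage(y, mu, sigma=0.1)  # far below 0.95: confidently wrong
```

A well-calibrated model lands near the nominal level; a large gap (as in the second call) is exactly the "confidently wrong" failure mode the checks are meant to catch.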

Check out the paper for more 🙂
April 17, 2025 at 8:50 PM
We use masked VAEs to address two desiderata in a single model:

1. jointly modeling conditional distributions that are commonly targeted in neuroscience (e.g., encoding 🐭 ➡️ 🧠 and decoding 🧠 ➡️ 🐭) and

2. accounting for low-dimensional dynamics underlying both neural activity and behavior. 🌀
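A minimal sketch of the masking idea, assuming (as the thread suggests) that modalities are randomly hidden during training so one network learns both conditional directions; the helper name and mask scheme are illustrative, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_modality_mask(batch, n_options=3):
    """Per sample, randomly hide the neural or the behavioral modality (or neither).

    Training the VAE to reconstruct both modalities from whatever is left visible
    makes a single model cover encoding (behavior -> neural) and decoding
    (neural -> behavior). Illustrative sketch, not the paper's masking scheme.
    """
    # 0 = mask neural, 1 = mask behavior, 2 = keep both
    choice = rng.integers(0, n_options, size=batch)
    mask_neural = (choice == 0)
    mask_behavior = (choice == 1)
    return mask_neural, mask_behavior

mask_neural, mask_behavior = sample_modality_mask(8)
```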
7) Classifying sequences of neural population activity revealed that SCs and SCim neurons differentiate self-generated vs. object-generated looming stimuli beyond what is already accounted for by other covariates, such as speed. 🏃🏼‍ ⏰ 🗺️
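The "beyond other covariates" control can be illustrated with a toy decoder comparison: classify the stimulus class from neural features versus from the speed covariate alone. The data and the nearest-centroid classifier below are synthetic and purely illustrative of the logic, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: two stimulus classes whose neural patterns differ even though
# the speed covariate is matched (i.e., uninformative) across classes.
n, d = 200, 20
labels = rng.integers(0, 2, size=n)
speed = rng.normal(size=(n, 1))                       # matched covariate
neural = rng.normal(size=(n, d)) + labels[:, None]    # class-separated activity

def centroid_accuracy(X, y):
    """Nearest-centroid classification accuracy (illustrative, not cross-validated)."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

acc_neural = centroid_accuracy(neural, labels)  # high: activity carries the label
acc_speed = centroid_accuracy(speed, labels)    # near chance: speed alone does not
```

The gap between the two accuracies is the sense in which neural activity differentiates the conditions "beyond" the covariate.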
February 3, 2025 at 7:19 PM
6) Finally, to assess how mice distinguish self-generated vs. object-generated stimuli, we replay image sequences generated by the animal's own movement from previous VR trials, thus decoupling the visual motion from their current behavior.
5) Surprisingly, behavioral responses to the VR environments emerged from the first day of exposure, with animals spontaneously slowing down as they approached the object. 🏃🏃🛑
4) We then looked at SC responses to looming stimuli generated when mice navigate the VR environment. We found vision-dominated responses in SCs while animal speed greatly influenced SCim responses.
3) First, we characterized responses to approaching objects at different speeds, targeting Superficial (SCs) and Intermediate (SCim) layers of the Superior Colliculus.
2) In a fun collaboration with the fantastic Stefano Zucca, @ppjgoncalves.bsky.social, @jakhmack.bsky.social, @amansaleem.bsky.social, and Sam Solomon, we assessed how neurons in the Superficial (SCs) and Intermediate (SCim) layers represent looming stimuli using an immersive Virtual Reality environment.
1) Some exciting science in turbulent times:

How do mice distinguish self-generated vs. object-generated looming stimuli? Our new study combines VR and neural recordings from superior colliculus (SC) 🧠🐭 to explore this question.

Check out our preprint doi.org/10.1101/2024... 🧵
11) LDNS is particularly promising for heterogeneous datasets without trial structure, which pose challenges for many LVMs.

LDNS successfully mimicked cortical data during attempted speech—a challenging task due to varying trial lengths.
December 11, 2024 at 7:43 AM
10) Colorful latents are just so nice to look at, so we were glad to see that the LDNS latent space preserves behavioral information.

Both the latents and their PCs reflect the reach direction used for conditioning.
9) LDNS allows for flexible conditioning on behavioral variables.

Diffusion models conditioned on either reach direction or velocity trajectories produce neural activity samples that are consistent with the queried behavior.
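A toy illustration of conditioning: the placeholder denoiser below simply pulls latents toward a conditioning code, whereas the real model injects the behavioral variable into a learned denoising network. All names and the update rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def denoise_step_conditional(z, cond):
    """Placeholder denoiser that sees the behavioral condition alongside the latent.

    The toy update pulls latents toward the conditioning code; a trained model
    would instead run a learned network on (z, cond) at each reverse step.
    """
    return z - 0.1 * (z - cond)

# e.g. an embedded reach-direction code, broadcast over time (illustrative)
reach_direction = np.full((100, 4), 0.5)

z = rng.normal(size=(100, 4))          # start from noise in latent space
for _ in range(50):
    z = denoise_step_conditional(z, reach_direction)
# After denoising, z is consistent with the queried behavior code.
```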
8) To increase the realism of generated spikes even further, we demonstrate how to equip LDNS with more expressive autoregressive observation models.

(this can be applied to any LVM trained with Poisson log-likelihood!)
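One common way to make the observation model autoregressive is to let each neuron's Poisson rate depend on its own recent spike history. A minimal single-neuron sketch follows; the kernel, rates, and function name are made up for illustration and are not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ar_poisson(log_rate, history_kernel):
    """Sample spikes whose rate mixes a latent-driven log-rate with spike history."""
    T, K = len(log_rate), len(history_kernel)
    spikes = np.zeros(T)
    for t in range(T):
        past = spikes[max(0, t - K):t][::-1]             # most recent bin first
        hist = float(history_kernel[:len(past)] @ past)  # history feedback term
        rate = np.exp(log_rate[t] + hist)
        spikes[t] = rng.poisson(rate)
    return spikes

log_rate = np.full(200, -1.0)              # latent-driven baseline, ~0.37 spikes/bin
refractory = np.array([-2.0, -1.0, -0.5])  # negative kernel suppresses bursting
spikes = sample_ar_poisson(log_rate, refractory)
```

The same trick applies to any LVM trained with a Poisson log-likelihood: keep the latent-driven rate and add a learned history term on top.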
7) We then move to a classic monkey reach task and show that LDNS samples are indistinguishable to the human eye from real cortical data and accurately capture population-level and single-neuron statistics.
6) We validate that LDNS does what it’s supposed to on simulated spiking data.

LDNS perfectly captures firing rates & underlying dynamics and can length-generalize, producing faithful samples 16 times the original training length.
5) But how does LDNS work?

The AE first maps spikes to time-aligned latents; flexible (un)conditional diffusion models are then trained on these smoothly varying latents, sidestepping the difficulty of running diffusion directly on discrete spike counts.
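The pipeline in this post can be sketched end to end with toy stand-ins for the trained networks. Shapes are the point here, not the weights: the real encoder/decoder are S4-based and the denoiser is a learned network, so every function body below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, D = 100, 16, 4              # time bins, neurons, latent dim (toy sizes)

# Toy stand-ins for the trained networks.
W_enc = rng.normal(size=(N, D)) * 0.1
W_dec = rng.normal(size=(D, N)) * 0.1

def encode(spikes):               # AE encoder: (T, N) spikes -> (T, D) latents
    return spikes @ W_enc

def decode(latents):              # AE decoder: latents -> nonnegative firing rates
    return np.exp(latents @ W_dec)

def denoise_step(z):              # placeholder for one learned reverse-diffusion step
    return 0.9 * z

# Training (sketch): the diffusion model is fit to encode(real_spikes),
# i.e. to smoothly varying continuous latents rather than discrete counts.
z_data = encode(rng.poisson(1.0, size=(T, N)).astype(float))

# Generation: start from noise in latent space, denoise, decode, sample spikes.
z = rng.normal(size=(T, D))
for _ in range(20):
    z = denoise_step(z)
rates = decode(z)
spikes = rng.poisson(rates)       # Poisson observation model on decoded rates
```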
4) Latent Diffusion for Neural Spiking data to the rescue ⛑️

LDNS combines 1) a regularized S4-based autoencoder (AE) with 2) diffusion in latent space, and can model diverse neural spiking data.

Here we consider 3 very different tasks:
1) With our @neuripsconf.bsky.social poster happening tomorrow, it's about time to introduce our Spotlight paper 🔦, co-led with @jkapoor.bsky.social:

Latent Diffusion for Neural Spiking data (LDNS), a latent variable model (LVM) which addresses 3 goals simultaneously: