Benjamin Cowley
@benjocowley.bsky.social
Assistant Professor in computational neuroscience at Cold Spring Harbor Laboratory. Think cortically, act neuronally.
cowleygroup.cshl.edu
Not sure orientation/spatial frequency/color is enough to characterize the diversity of V4 tuning. E.g., the V4 neuron below is a nonlinear mix of the three. Predicting V4 responses to pure sinusoidal gratings feels a bit like trying to predict the weather really well in Iowa while ignoring everywhere else.
October 15, 2025 at 2:44 PM
Need to choose a paper for journal club?

One paper that molded my research direction was "Sequential optimal design of neurophysiology experiments" by Lewi, Butera, & Paninski (2009).

The key idea is to let the model speak for itself by choosing its own stimuli in closed-loop experiments!
August 22, 2025 at 3:59 PM
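For anyone unfamiliar with the paper, here is a minimal sketch of the closed-loop idea: keep a posterior over a neuron's receptive-field weights and, on each trial, show the stimulus expected to be most informative about them. The Poisson-GLM setup, Gaussian posterior, and update rule below are my simplifications, not the paper's actual algorithm or code.

```python
# Minimal sketch of closed-loop "infomax" stimulus selection, in the spirit of
# Lewi, Butera & Paninski (2009). The Poisson GLM and Gaussian posterior are
# simplifying assumptions, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
dim = 20                                   # stimulus dimensionality
w_true = rng.normal(size=dim) / np.sqrt(dim)   # "neuron": ground-truth GLM weights

# Gaussian posterior over the GLM weights.
mu = np.zeros(dim)
cov = np.eye(dim)

def expected_info_gain(x, mu, cov):
    """Approximate info gain of showing stimulus x (Poisson GLM, exp link)."""
    s2 = x @ cov @ x                       # posterior variance of w.x
    rate = np.exp(x @ mu + 0.5 * s2)       # expected firing rate
    return 0.5 * np.log1p(rate * s2)       # ~ reduction in posterior entropy

for trial in range(200):
    # Propose a batch of candidate stimuli; show the most informative one.
    candidates = rng.normal(size=(500, dim))
    candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
    scores = [expected_info_gain(c, mu, cov) for c in candidates]
    x = candidates[int(np.argmax(scores))]

    # "Record" a spike count from the true neuron.
    y = rng.poisson(np.exp(x @ w_true))

    # Approximate (Laplace/EKF-style) rank-one posterior update.
    rate_hat = np.exp(x @ mu)
    cov_x = cov @ x
    cov = cov - np.outer(cov_x, cov_x) * rate_hat / (1 + rate_hat * (x @ cov_x))
    mu = mu + cov @ x * (y - rate_hat)

print("weight estimation error:", np.linalg.norm(mu - w_true))
```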
I like to think of our work as combining the predictive power of DNNs with the explainability of linear-nonlinear models.

Overall, our main hypothesis is that the DNN models we use in comp neuro are needlessly large. We should strive for predictive *and* explainable DNN models.
December 19, 2023 at 11:08 PM
We interrogated the model much like an experimentalist interrogates the brain---ablating and recording from every filter.

We found a simple mechanism for dot size selectivity:
Detect the four corners of the dot and inhibit dots with large edges.

200+ compact models to go!
December 19, 2023 at 11:07 PM
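A minimal sketch of the "ablate and record from every filter" idea from the post above: zero out one channel at a time with a forward hook and measure how much the model's output changes. The three-layer network here is a stand-in, not the actual compact model from the paper.

```python
# Zero out one filter at a time in a small CNN and measure the output change.
# The model below is a placeholder, not the paper's compact model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 7, padding=3), nn.ReLU(),
    nn.Conv2d(8, 8, 7, padding=3), nn.ReLU(),
    nn.Conv2d(8, 1, 7, padding=3),                 # "consolidation"-like readout
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
).eval()

images = torch.randn(64, 3, 64, 64)                # stand-in stimulus set

def ablate_channel(layer_idx, channel, images):
    """Zero one filter's output via a forward hook and re-run the model."""
    def hook(_module, _inputs, output):
        output = output.clone()
        output[:, channel] = 0.0
        return output
    handle = model[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        ablated = model(images)
    handle.remove()
    return ablated

with torch.no_grad():
    baseline = model(images)

for layer_idx in (0, 2):                           # conv layers with >1 filter
    for ch in range(model[layer_idx].out_channels):
        drop = (baseline - ablate_channel(layer_idx, ch, images)).abs().mean()
        print(f"layer {layer_idx}, filter {ch}: mean |Δresponse| = {drop:.3f}")
```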
This suggests that to understand a V4 neuron's processing, we only need to examine its consolidation step (after layer 3).

We put this to the test with my favorite compact model---a dot detector.  How does it work?
December 19, 2023 at 11:07 PM
After verifying the causal predictive power of the compact models, we looked under the hood to see how they worked.

Across all models, we found a common motif: The compact models share similar filters in early layers but then heavily specialize via a consolidation step.
December 19, 2023 at 11:07 PM
Because these maximizing stimuli were "punches" to V4 neurons, we tried something more subtle.

We used the compact models to identify adversarial images---small perturbations to the input that lead to large changes in V4 responses.

These adversarial images worked on V4 neurons!
December 19, 2023 at 11:06 PM
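A minimal sketch of how one can search for such adversarial images with a differentiable response model: take a single gradient step on the pixels, bounded to a small perturbation. This is a generic FGSM-style step on a stand-in model, not necessarily the exact procedure used in the paper.

```python
# Find a small, norm-bounded image perturbation that produces a large change
# in a response model's prediction. Stand-in model; FGSM-style step.
import torch
import torch.nn as nn

compact_model = nn.Sequential(                     # stand-in for a compact model
    nn.Conv2d(3, 8, 7, padding=3), nn.ReLU(),
    nn.Conv2d(8, 1, 7, padding=3),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
).eval()

image = torch.rand(1, 3, 64, 64)                   # a natural image would go here
epsilon = 2.0 / 255.0                              # small pixel-wise budget

image.requires_grad_(True)
response = compact_model(image).sum()              # model's predicted response
response.backward()

# Push each pixel a tiny amount in the direction that increases the predicted
# response; flip the sign to decrease it instead.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction:   ", compact_model(image).item())
    print("adversarial prediction:", compact_model(adversarial).item())
```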
For causal testing:
1. We trained compact models on previous sessions.
2. We probed their maximizing images (sketch below).
3. We recorded V4 responses to these images in a future session.

These 'preferred stimuli' did yield larger responses (red dots) than those to natural images (black dots).
December 19, 2023 at 11:05 PM
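A minimal sketch of the "probe their maximizing images" step, i.e., activation maximization: gradient ascent on the pixels of an image to maximize the trained model's predicted response. The tiny stand-in model and hyperparameters are placeholders, not the paper's.

```python
# Gradient ascent on image pixels to maximize a model's predicted response.
import torch
import torch.nn as nn

compact_model = nn.Sequential(                     # stand-in for a compact model
    nn.Conv2d(3, 8, 7, padding=3), nn.ReLU(),
    nn.Conv2d(8, 1, 7, padding=3),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
).eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = -compact_model(image).mean()                 # maximize the response
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)                              # keep a valid pixel range

maximizing_image = image.detach()                       # show this in the next session
```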
We immediately went to work interrogating these compact models. And things got weird.

For example, we found what appears to be a palm-tree-detecting V4 neuron---its maximizing natural and synthesized images were palm-tree-like.

To make sure this was real, we ran causal tests.
December 19, 2023 at 11:05 PM
And compact they were! Each compact model had ~10k params---5000x smaller than the deep ensemble and 500x smaller than the leading task-driven DNN---without losing much in prediction power.

We can display *all* the convolutional weights of a compact model in one figure!
December 19, 2023 at 11:05 PM
Our resulting deep ensemble model predicted neural responses ~50% more accurately than leading DNN models. But it was a behemoth---60M params.

We relied on two more ML tricks---knowledge distillation and pruning---to compress the deep ensemble model to obtain compact models.
December 19, 2023 at 11:04 PM
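A minimal sketch of that compression recipe: knowledge distillation (train a small student to match the big ensemble's predicted responses on unlabeled images), followed by magnitude pruning. Both networks and the pruning settings below are stand-ins, not the paper's architecture or pruning scheme.

```python
# Knowledge distillation + magnitude pruning, with placeholder networks.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

teacher = nn.Sequential(                          # stand-in for the 60M-param ensemble
    nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(),
    nn.Conv2d(64, 1, 7, padding=3),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
).eval()

student = nn.Sequential(                          # stand-in for a compact model
    nn.Conv2d(3, 4, 7, padding=3), nn.ReLU(),
    nn.Conv2d(4, 1, 7, padding=3),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(100):
    images = torch.rand(32, 3, 64, 64)            # unlabeled images are enough
    with torch.no_grad():
        targets = teacher(images)                 # teacher's predicted responses
    loss = nn.functional.mse_loss(student(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Magnitude pruning: zero out the smallest 80% of weights in each conv layer.
for module in student.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")            # make the pruning permanent

nonzero = sum(int((p != 0).sum()) for p in student.parameters())
print("nonzero student params:", nonzero)
```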
We focused on predicting V4 responses to natural images.

We first obtained a highly predictive, data-driven DNN by using every ML trick in the book---transfer learning, ensemble learning, active learning, etc.

We also collected lots of responses: 45 sessions, ~75k unique images.
December 19, 2023 at 11:04 PM
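A minimal sketch of the data-driven ensemble idea mentioned above: a frozen, pretrained feature backbone (transfer learning) with several response readouts trained on the same image/response data and averaged at test time. The backbone, readouts, and data below are placeholders, not the paper's actual models.

```python
# Average the predictions of several readouts trained on frozen features.
import torch
import torch.nn as nn

backbone = nn.Sequential(                          # stand-in for a pretrained backbone
    nn.Conv2d(3, 16, 7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)
readouts = [nn.Linear(16 * 4 * 4, 1) for _ in range(5)]   # ensemble members

images = torch.rand(100, 3, 64, 64)                # stand-in for the ~75k images
responses = torch.rand(100, 1)                     # stand-in V4 responses

with torch.no_grad():
    features = backbone(images)                    # frozen pretrained features

for readout in readouts:                           # train each member separately
    optimizer = torch.optim.Adam(readout.parameters(), lr=1e-2)
    for _ in range(50):
        loss = nn.functional.mse_loss(readout(features), responses)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

with torch.no_grad():                              # ensemble prediction = member average
    prediction = torch.stack([r(features) for r in readouts]).mean(0)
```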
New work!
Compact deep neural network models of visual cortex
B. Cowley, P. Stan, J. Pillow*, M. Smith*
tiny.cc/dmxhvz

Task-driven DNN models nicely predict neural responses but have millions of params---next to impossible to explain. Do they need to be so large?
December 19, 2023 at 11:04 PM
The fly brain’s anatomy also supports our finding of a distributed visual population code.

The LCs read out from multiple optic lobe neuron types, and downstream neuron types tend to read out from multiple LCs.
(from the newly released FlyWire connectome)
October 30, 2023 at 7:37 PM
We dissected our model to come up with the big picture:

→ Almost every neural channel encoded multiple visual features.
→ Multiple neural channels drove the same behavior.

In the end, our model suggests that the optic glomeruli form a distributed population code…
October 30, 2023 at 7:35 PM
We trained the model with behavior from 400+ perturbed flies.

The model *never* had access to neural activity. Even so, the model’s predicted activity matched well with real recorded responses!

(top plot: real LC11 neuron responses)
(bottom plot: model LC11 responses)
October 30, 2023 at 7:34 PM
To get this one-to-one mapping, we “knocked out” 22 different visual neuron types in the fruit fly and observed the resulting behavior.

We then devised "knockout training": silencing model units during training just as we silenced the real neurons.
October 30, 2023 at 7:33 PM
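A minimal sketch of the knockout-training idea: assign each model channel to one real neuron type, and on trials from flies with that type silenced, zero the matching model channel during the forward pass. The architecture, loss, and data below are placeholders, not the actual training code; the point is that knocking out a model unit must change the model's behavior the way silencing changed the fly's, which is what pushes the units toward a one-to-one mapping.

```python
# Knockout training with a placeholder vision-to-behavior model.
import torch
import torch.nn as nn

N_TYPES = 22                                       # silenced visual neuron types

class KnockoutModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.visual = nn.Sequential(               # visual input -> one channel per neuron type
            nn.Conv2d(1, N_TYPES, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.readout = nn.Linear(N_TYPES, 2)       # -> behavior (e.g., turning, song)

    def forward(self, scene, knockout_id=None):
        units = self.visual(scene)                 # (batch, N_TYPES) "neural" activity
        if knockout_id is not None:
            units = units.clone()
            units[:, knockout_id] = 0.0            # silence the knocked-out type
        return self.readout(units)

model = KnockoutModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Fake batch: visual scenes, measured behavior, and which neuron type was
    # silenced in that fly (None = unperturbed control fly).
    scenes = torch.rand(16, 1, 64, 64)
    behavior = torch.randn(16, 2)
    knockout_id = None if step % 2 == 0 else step % N_TYPES

    loss = nn.functional.mse_loss(model(scenes, knockout_id), behavior)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```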
We modeled the natural behavior of a male fruit fly chasing and singing to a female during courtship.

For the model’s input, we reconstructed the male’s visual scene as he chases the female.

(sorry, no wings—they were beyond my animation skills!)
October 30, 2023 at 7:31 PM
We've updated our preprint on finding one-to-one mappings between DNN neurons and real visual neurons of the fruit fly.

--> 2x silenced data with new LC31 neuron type
--> more LC neural recordings
--> FlyWire connectome
--> knockout training simulations

Enjoy!
tinyurl.com/5n7t6tpv
October 30, 2023 at 7:30 PM