Bryan M. Li
@bryanlimy.bsky.social
Encode Fellow at Imperial College London | Biomedical AI PhD at the University of Edinburgh. Working on #NeuroAI and #ML4Health. https://bryanli.io.
We present our preprint on ViV1T, a transformer for dynamic mouse V1 response prediction. We reveal novel response properties and confirm them in vivo.

With @wulfdewolf.bsky.social, Danai Katsanevaki, @arnoonken.bsky.social, @rochefortlab.bsky.social.

Paper and code at the end of the thread!

🧵1/7
September 19, 2025 at 12:37 PM

ViV1T, trained only on natural movies, captured well-known direction tuning and contextual modulation in V1. Despite having no built-in mechanism for modelling neuronal connectivity, the model predicted feedback-dependent contextual modulation, including the feedback onset delay (Keller et al. 2020).

2/7

ViV1T also revealed novel functional features: we found new movement- and contrast-dependent properties of contextual responses to surround stimuli in V1 neurons, and validated them in vivo.

3/7

Moving beyond gratings, we used ViV1T to generate centre-surround most exciting videos (MEVs) via the Inception Loop (Walker et al. 2019). Our in vivo experiments confirmed that MEVs elicit stronger contextual modulation than gratings, natural images and videos, and most exciting images (MEIs).

4/7

We compared our model against SOTA models from the Sensorium 2023 challenge and showed that ViV1T is the most performant while being more computationally efficient. We also evaluated the model's data efficiency by varying the number of training samples and neurons.

5/7
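The Inception Loop mentioned in the thread optimises a stimulus by gradient ascent so that a trained response model predicts a maximal response for a target neuron. Below is a minimal sketch of that idea in PyTorch; `TinyV1Model` and `most_exciting_video` are illustrative stand-ins, not the actual ViV1T architecture or the authors' pipeline.

```python
import torch
import torch.nn as nn


class TinyV1Model(nn.Module):
    """Placeholder response predictor: a video (T, H, W) -> per-neuron rates.
    The real ViV1T is a transformer; a linear readout suffices for the sketch."""

    def __init__(self, n_neurons=8, t=10, h=16, w=16):
        super().__init__()
        self.readout = nn.Linear(t * h * w, n_neurons)

    def forward(self, video):
        # Flatten the whole video into one feature vector, predict all neurons.
        return self.readout(video.flatten())  # shape: (n_neurons,)


def most_exciting_video(model, neuron, t=10, h=16, w=16, steps=200, lr=0.05):
    """Gradient ascent on the stimulus to maximise one neuron's predicted response."""
    video = torch.zeros(t, h, w, requires_grad=True)  # start from a grey screen
    opt = torch.optim.Adam([video], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(video)[neuron]  # minimise negative response = maximise response
        loss.backward()
        opt.step()
        with torch.no_grad():
            video.clamp_(-1.0, 1.0)  # keep pixels in a valid stimulus range
    return video.detach()


model = TinyV1Model()
mev = most_exciting_video(model, neuron=0)
```

In the actual experiments the optimised stimulus would additionally be constrained to the centre-surround layout described in the thread (e.g. optimising only the surround while fixing the centre); the loop above shows just the core optimisation.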