Sushrut Thorat
@sushrutthorat.bsky.social
Recurrent computations and lifelong learning.
Postdoc at IKW-UOS@DE with @timkietzmann.bsky.social
Prev. Donders@NL, CIMeC@IT, IIT-B@IN
Pinned
🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex
Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...
arxiv.org
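The post names the training signal but not the mechanics; as a minimal sketch (all names, shapes, and the recurrent update below are illustrative assumptions, not the preprint's architecture), a "predict the features of the next glimpse" objective can look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a scene is experienced as a sequence of glimpses
# (patches sampled along a scanpath), and the model is trained to predict
# the feature vector of the NEXT glimpse from what it has seen so far.
n_glimpses, feat_dim = 5, 8
glimpse_feats = rng.normal(size=(n_glimpses, feat_dim))  # stand-in encoder features

# Minimal recurrent predictor: state h integrates glimpses; W_out reads out
# a prediction of the upcoming glimpse's features.
W_in = rng.normal(scale=0.1, size=(feat_dim, feat_dim))
W_out = rng.normal(scale=0.1, size=(feat_dim, feat_dim))

def next_glimpse_loss(feats, W_in, W_out):
    """Mean squared error between predicted and actual next-glimpse features."""
    h = np.zeros(feats.shape[1])
    losses = []
    for t in range(len(feats) - 1):
        h = np.tanh(h + feats[t] @ W_in)   # integrate the current glimpse
        pred = h @ W_out                   # predict features of glimpse t + 1
        losses.append(np.mean((pred - feats[t + 1]) ** 2))
    return float(np.mean(losses))

loss = next_glimpse_loss(glimpse_feats, W_in, W_out)
print(loss >= 0.0)  # a scalar prediction error; training would minimize it
```

In the real setting the glimpse features would come from an image encoder sampling patches along human-like scanpaths, and the prediction error would be backpropagated to shape the network's scene representations.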
Reposted by Sushrut Thorat
1. 🧵 Thread: What happens to the visual brain after early transient blindness?
Our new Nature Communications paper examines a rare population: people born with dense bilateral cataracts—a short blindness occurring during a critical window of visual development.
🔗 rdcu.be/eQjMH
November 19, 2025 at 9:07 AM
Reposted by Sushrut Thorat
How does our brain excel at complex object recognition, yet get fooled by simple illusory contours? What unifying principle governs all Gestalt laws of perceptual organization?

We may have an answer: integration of learned priors through feedback. New paper with @kenmiller.bsky.social! 🧵
October 24, 2025 at 2:00 PM
Reposted by Sushrut Thorat
We went back to the drawing board to think about what information is available to the visual system upon which it could build scene representations.

The outcome: a self-supervised training objective based on active vision that beats the SOTA on NSD representational alignment. 👇
November 18, 2025 at 2:14 PM
🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social , @alexanderkroner.bsky.social , @carmenamme.bsky.social , @timkietzmann.bsky.social
🧵 1/14
Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex
Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...
arxiv.org
November 18, 2025 at 12:37 PM
Reposted by Sushrut Thorat
paper🚨
When we learn a category, do we learn the structure of the world, or just where to draw the line? In a cross-species study, we show that humans, rats & mice adapt optimally to changing sensory statistics, yet rely on fundamentally different learning algorithms.
www.biorxiv.org/content/10.1...
Different learning algorithms achieve shared optimal outcomes in humans, rats, and mice
Animals must exploit environmental regularities to make adaptive decisions, yet the learning algorithms that enable this flexibility remain unclear. A central question across neuroscience, cognitive science, and machine learning is whether learning relies on generative or discriminative strategies. Generative learners build internal models of the sensory world itself, capturing its statistical structure; discriminative learners map stimuli directly onto choices, ignoring input statistics. These strategies rely on fundamentally different internal representations and entail distinct computational trade-offs: generative learning supports flexible generalisation and transfer, whereas discriminative learning is efficient but task-specific. We compared humans, rats, and mice performing the same auditory categorisation task, where category boundaries and rewards were fixed but sensory statistics varied. All species adapted their behaviour near-optimally, consistent with a normative observer constrained by sensory and decision noise. Yet their underlying algorithms diverged: humans predominantly relied on generative representations, mice on discriminative boundary-tracking, and rats spanned both regimes. Crucially, end-point performance concealed these differences; only learning trajectories and trial-to-trial updates revealed the divergence. These results show that similar near-optimal behaviour can mask fundamentally different internal representations, establishing a comparative framework for uncovering the hidden strategies that support statistical learning.
www.biorxiv.org
November 17, 2025 at 7:18 PM
Reposted by Sushrut Thorat
🤖🧠I'll be considering applications for PhD students & postdocs to start at Yale in Fall 2026!

If you are interested in the intersection of linguistics, cognitive science, & AI, I encourage you to apply!

PhD link: rtmccoy.com/prospective_...
Postdoc link: rtmccoy.com/prospective_...
November 14, 2025 at 4:40 PM
Reposted by Sushrut Thorat
Happy to share our review "Investigating hierarchical critical periods in human neurodevelopment" in @npp-journal.bsky.social! We examine neurobiological, environmental & behavioral evidence for human critical periods in sensory and association cortex + discuss new research directions rdcu.be/eMkVU 🧵
November 11, 2025 at 8:01 PM
Reposted by Sushrut Thorat
We wrote the Strain on scientific publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...
November 11, 2025 at 11:52 AM
Lovely profit margins 😇 Knowing that we will keep publishing at Nature, etc. can we petition these companies to be publicly-traded so we can get some juicy dividends? 🥰 Who needs Wall Street when we have the amazing academic publishing system.
The solutions of the past 3 decades have failed to change the incentives of #PublishOrPerish. As a result, researcher funding, time, control, and trust have been lost.

The ONE CONSTANT in the wake of the serials crisis, #PlanS and #OpenAccess reform has been publisher profit margins.

2/n
November 11, 2025 at 3:09 PM
Reposted by Sushrut Thorat
My reviewing style has changed over time. Rather than litigating every little thing and pushing my own ideas, I focus on only 2 things:
(1) Are the claims interesting/important?
(2) Does the evidence support the claims?

Most of my reviews these days are short and focused.
November 8, 2025 at 11:22 AM
Reposted by Sushrut Thorat
I’m looking for interns to join our lab for a project on foundation models in neuroscience.

Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...).

Interested? See the details in the comments. (1/3)

🧠🤖
AI and Neuroscience | IVADO
ivado.ca
November 7, 2025 at 1:52 PM
Reposted by Sushrut Thorat
🌏 Come spend some time with us in Sydney! 🇦🇺

@marcsinstitute.bsky.social is offering International Visiting Scholarships for PhD students + postdocs.

Spend 1–3 months collaborating, exploring ideas, and building connections.

📅 Apply by 4 Dec
📍 Sydney, Australia

Curious or keen? DM or email me
November 5, 2025 at 10:15 PM
Reposted by Sushrut Thorat
INTERSTELLAR was released 11 years ago today. The 9th feature film of director Christopher Nolan, and one of the biggest science fiction epics of the 21st century, the story of how it was made will have you wondering at our place in the stars…

1/50
November 5, 2025 at 11:40 AM
LLMs have enabled interaction w/ various kinds of data (image/audio/math/action) through language—a true breakthrough of our times. Ofc, as neuroscientists we are curious if this extends to brain data. @initself.bsky.social's answer: yes, we can flexibly "read out" a lot! Limits remain to be seen.
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
November 3, 2025 at 4:08 PM
Reposted by Sushrut Thorat
A cool summary of our paper!
A core function of cortex is predicting what happens next given the world's state.

This recent paper from Oxford shows how cortical layers may use a delay trick to learn to predict.

A simple illustration can explain the idea.

A🧵with my toy model and notes:

#neuroskyence #compneuro #NeuroAI
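The "delay trick" in the quoted thread can be illustrated with an even smaller toy (my own sketch under simple assumptions, not the Oxford paper's circuit): the input arriving at time t serves as a delayed teaching signal for the prediction made one step earlier.

```python
import numpy as np

# Toy model: a linear unit learns to predict its input one step ahead.
# The teaching signal is simply the DELAYED input — at time t+1 the new
# input can be compared against the prediction made at time t.
rng = np.random.default_rng(1)
T = 2000
x = np.sin(np.linspace(0, 40 * np.pi, T + 1))  # predictable input stream

w = np.zeros(2)          # weights on [x(t), bias]
lr = 0.05
errors = []
for t in range(T):
    inp = np.array([x[t], 1.0])
    pred = w @ inp                 # prediction of x(t+1), made at time t
    err = x[t + 1] - pred          # delayed target arrives one step later
    w += lr * err * inp            # delta-rule update from the delayed error
    errors.append(err ** 2)

# Prediction error should shrink as the delayed-target rule learns the mapping.
early, late = np.mean(errors[:100]), np.mean(errors[-100:])
print(late < early)
```

The point of the illustration: no explicit "future" signal is needed — a delay line turns ordinary input into a self-supervised prediction target, which is the intuition the thread attributes to cortical layers.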
October 31, 2025 at 7:40 AM
Reposted by Sushrut Thorat
Very proud of this great piece of multi-lab work by Kabir & Husta, et al.

Want to measure attention across the visual field (without interfering with ongoing perceptual/attentional processes)? Use RIFT!

Here we share our how-to-RIFT knowledge, including analysis code, quantitative comparisons, ..
Planning on running a RIFT study? In a new manuscript, we put together the RIFT know-how accumulated over the years by multiple labs (@lindadrijvers.bsky.social, @schota.bsky.social, @eelkespaak.bsky.social, with Cecília Hustá and others).

Preprint: osf.io/preprints/ps...
OSF
osf.io
October 29, 2025 at 10:58 AM
So… who’s up for starting a revolution? (Rhetorical question)
Scientific Reports meanwhile on course to publish 40K papers = >$100M pubmed.ncbi.nlm.nih.gov?term=%22Scie...
October 29, 2025 at 9:06 AM
A hard ARC problem from Fig. 1 of www.nature.com/articles/s41...

Am I the only one who thinks in the test solution, the “overtaken” dots could be red?
October 29, 2025 at 8:49 AM
Reposted by Sushrut Thorat
Want the freedom of a fancy fellowship, but not the year-long wait or arduous application?

Come join my lab! Work on neuroscience and AI, explore your creativity, be independent or work closely with me, collaborate widely, and have a lot of fun!

my.corehr.com/pls/uoxrecru...
October 23, 2025 at 10:46 AM
Reposted by Sushrut Thorat
🚀 We’re hiring - Join our lab 🚀

🔍 Hiring: PhD (75% TV-L) & Postdoc (100% TV-L)
🧠 fMRI, VR, EEG, modelling

We combine a range of cognitive neuroscience methods to study flexible behaviour.

📅 Start: Feb 2026 or later | ⏳ Apply by Nov 3!

More details:
tinyurl.com/ms3a9ajt

#CognitiveNeuroscience
October 27, 2025 at 11:57 AM
Reposted by Sushrut Thorat
Apply to become a CSHL-Simons Fellow in Neuroscience!

Run your own lab, pursue bold ideas, join a highly collaborative community!

All areas including experimental or computational neuro, including NeuroAI & systems

PhD required; ≤~1 yr postdoc

www.cshl.edu/about-us/car...
Fellows Positions | Cold Spring Harbor Laboratory
CSHL Simons Fellow in NEUROSCIENCE Cold Spring Harbor Laboratory (CSHL) is seeking to fill a Cold Spring Harbor Laboratory Fellow position in the area of NEUROSCIENCE (experimental and/or computationa...
www.cshl.edu
October 24, 2025 at 7:16 PM
Reposted by Sushrut Thorat
How well do classifiers trained on visual activity actually transfer to non-visual reactivation?

#Decoding studies often rely on training in one (visual) condition and applying the classifier to another (e.g. rest/reactivation). But how well does this work? Show us what makes it work and win up to $1000!
IMAGINE-decoding-challenge
Predict which words participants were hearing, based on brain activity recorded while they visually saw these items.
www.kaggle.com
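As a sketch of what such a cross-condition transfer test involves (simulated data and a nearest-centroid decoder chosen for simplicity; the challenge's actual data, features, and metric will differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_trials, n_voxels = 4, 40, 50

# Simulated data: each item has a voxel pattern; the "auditory" condition
# reactivates a weaker, noisier version of the "visual" pattern.
item_patterns = rng.normal(size=(n_items, n_voxels))
labels = np.repeat(np.arange(n_items), n_trials)
visual = item_patterns[labels] + rng.normal(size=(labels.size, n_voxels))
auditory = 0.5 * item_patterns[labels] + rng.normal(size=(labels.size, n_voxels))

# "Train" on the visual condition: one mean pattern (centroid) per item.
centroids = np.stack([visual[labels == k].mean(axis=0) for k in range(n_items)])

# "Test" on the auditory condition: nearest-centroid classification.
dists = ((auditory[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
preds = dists.argmin(axis=1)
transfer_acc = (preds == labels).mean()
print(transfer_acc > 1 / n_items)  # does the visual decoder transfer above chance?
```

Transfer accuracy below chance, or far below within-condition accuracy, would indicate that the visually trained decoder relies on features the other condition does not reactivate — exactly the question the challenge poses.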
October 24, 2025 at 6:55 AM
Reposted by Sushrut Thorat
Deadline for applying with us at #ELLISPhD program is in 10 days (Oct 31). We're looking for highly motivated people working at the interface of machine learning and neuroscience.
October 21, 2025 at 6:32 AM
Reposted by Sushrut Thorat
Great point. I think there is some evidence that ICL emerges before RLHF and instruction tuning (eg arxiv.org/abs/2205.05055), though I can’t remember any controlled experiments that directly dissociate the effects.
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead...
arxiv.org
October 20, 2025 at 1:43 PM