Tahereh Toosi
@taherehtoosi.bsky.social
Associate Research Scientist at Center for Theoretical Neuroscience Zuckerman Mind Brain Behavior Institute
Kavli Institute for Brain Science
Columbia University Irving Medical Center
K99-R00 scholar @NIH @NatEyeInstitute
https://toosi.github.io/
This one sits right between Neuro and AI! I think 😅
bsky.app/profile/tahe...
How does our brain excel at complex object recognition, yet get fooled by simple illusory contours? What unifying principle governs all Gestalt laws of perceptual organization?

We may have an answer: integration of learned priors through feedback. New paper with @kenmiller.bsky.social! 🧵
November 20, 2025 at 10:23 AM
Same experience! Thanks, reviewers!
November 19, 2025 at 6:34 PM
Thank you for sharing these papers and also the results of your analyses. Certainly, there should be some end-to-end-trained dynamical models that can account for the hallmarks of generative inference in the brain (e.g., illusory pattern completion); it's fun to think about the missing ingredients!
November 18, 2025 at 11:20 PM
Why not the same? In both, the neurons seem to perform the same computations, no?
November 11, 2025 at 3:09 PM
I think priors in the brain are both evolutionarily scaffolded and refined through learning; there's strong evidence for activity-dependent pruning and plasticity early in development.
October 26, 2025 at 8:41 PM
Thanks! It looks similar to DeepDream or adversarial optimization because we also use PGD, but those methods optimize toward a preset label. In Generative Inference, the model instead implicitly increases its confidence by moving away from the least-likely class(es) identified in the first iteration.
Also,👇🏼
October 26, 2025 at 8:41 PM
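A minimal PyTorch sketch of that contrast, as I read it from the description above; `model` is a generic classifier, and `k`, the step size, and the iteration count are illustrative placeholders, not the paper's actual PGDD settings:

```python
import torch
import torch.nn.functional as F

def move_away_from_least_likely(model, x, k=3, step_size=1e-2, n_iters=20):
    # Sketch of the idea described above (not the paper's exact PGDD code):
    # unlike DeepDream / targeted PGD, no target label is preset. On the
    # first pass we find the k LEAST-likely classes, then take projected
    # gradient steps that push probability mass away from them, so
    # confidence in the remaining classes rises implicitly.
    x = x.clone().detach()
    least_likely = None
    for _ in range(n_iters):
        x.requires_grad_(True)
        log_probs = F.log_softmax(model(x), dim=-1)[0]  # assume batch of 1
        if least_likely is None:
            # classes to move away from, fixed on the first iteration
            least_likely = log_probs.topk(k, largest=False).indices
        loss = log_probs[least_likely].sum()  # mass on least-likely classes
        grad, = torch.autograd.grad(loss, x)
        # descend on that mass; clamping projects back onto valid images
        x = (x.detach() - step_size * grad.sign()).clamp(0.0, 1.0)
    return x
```

The key design difference from targeted PGD is that no class is chosen in advance: the objective is defined by what the network itself finds least likely on the first pass.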
Thanks so much!
Grossberg's BCS–FCS framework was one of the first to mechanistically model Gestalt grouping and illusory contours. Generative Inference shows that these same phenomena can emerge spontaneously in neural networks trained for object recognition; no specialized implementation is needed.
October 26, 2025 at 3:57 PM
Thank you!
October 26, 2025 at 11:10 AM
Thank you!
October 26, 2025 at 11:06 AM
Also, check out our demo on Hugging Face if you want hands-on experience with a few instances of generative inference. Use the Community tab to upload your creations! huggingface.co/spaces/ttoos... Thanks for reading this long thread! 😅
Generative Inference Demo - a Hugging Face Space by ttoosi
This application allows users to upload images and run generative inference to understand how neural networks perceive visual illusions and Gestalt principles. Users can load pre-configured example...
huggingface.co
October 24, 2025 at 2:04 PM
This work with my mentor Ken Miller @kenmiller.bsky.social was supported by an NEI K99 transition award and an ARNI grant. Grateful to colleagues at the theory center and beyond who attended talks and provided insights. Pre-print: www.biorxiv.org/content/10.1...
Generative inference unifies feedback processing for learning and perception in natural and artificial vision
We understand how neurons respond selectively to patterns in visual input to support object recognition; however, how these circuits support perceptual grouping, illusory percepts, and imagination is ...
www.biorxiv.org
October 24, 2025 at 2:03 PM
We're also developing models of psychiatric disorders that account for perceptual aberrations through abnormal values of interpretable parameters in models of healthy perception.

Clinical applications on the horizon! More on that soon!
October 24, 2025 at 2:03 PM
We also found surprising applications of PGDD in mechanistic interpretability—but that's another story for another time!
October 24, 2025 at 2:02 PM
Finally, we tested the other end of the perceptual continuum: purely prior-driven percepts, where imagination/hallucination occurs. Starting from noise (here, the same noise for both), PGDD and "increase confidence" each generate meaningful, distinct patterns.
October 24, 2025 at 2:01 PM
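A sketch of how the "increase confidence" variant from that post might look, assuming a generic PyTorch classifier; the objective (ascending the model's own top-class probability starting from noise) follows the post's description, while the function name and hyperparameters are placeholders:

```python
import torch
import torch.nn.functional as F

def hallucinate_from_noise(model, shape=(1, 3, 224, 224),
                           step_size=1e-2, n_iters=200, seed=0):
    # "Increase confidence" variant: start from pure noise and ascend the
    # model's own top-class probability, letting learned priors impose
    # structure on the input. All hyperparameters are illustrative.
    torch.manual_seed(seed)
    x = torch.rand(shape)  # the same seed/noise can initialize both variants
    for _ in range(n_iters):
        x.requires_grad_(True)
        confidence = F.softmax(model(x), dim=-1)[0].max()
        grad, = torch.autograd.grad(confidence, x)
        # ascend confidence in whichever class currently leads
        x = (x.detach() + step_size * grad.sign()).clamp(0.0, 1.0)
    return x
```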
We tested similar illusions: Ehrenstein illusion, Cornsweet illusion, Adelson's checker shadow, and the Confetti illusion. Neural recordings aren't available for these, but extensive human studies exist.

Again, generative inference via PGDD accounts for all of them.
October 24, 2025 at 2:01 PM
Our theory predicts: whatever algorithm the brain uses for learning hierarchical abstractions, the same feedback pathways that adjust synaptic weights during learning can adjust activations during inference for flexible computation.
(Makes continual learning much easier!)
October 24, 2025 at 2:02 PM
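That prediction has a compact reading in autodiff terms: the same backward sweep that produces weight gradients also produces activation gradients. A toy PyTorch illustration of this, assuming gradient-based feedback; the model, objective, and step sizes are arbitrary placeholders:

```python
import torch

# Toy illustration: one backward sweep through the same feedback (gradient)
# pathway yields BOTH weight gradients (learning) and activation/input
# gradients (inference).
model = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(), torch.nn.Linear(10, 5)
)
x = torch.randn(1, 10, requires_grad=True)  # input/activation to be adjusted
loss = -model(x).logsumexp(dim=-1).mean()   # any scalar objective

loss.backward()  # a single feedback pass computes both kinds of gradients

with torch.no_grad():
    for p in model.parameters():
        p -= 1e-3 * p.grad   # learning: adjust synaptic weights
    x -= 1e-1 * x.grad       # inference: adjust activations instead
```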
We tested Neon Color Spreading (recently studied in mice as well). The illusion: there is no blue disk, only blue lines! When we ran PGDD in the same network, the disk appeared! Try it in the demo link below.
October 24, 2025 at 2:01 PM
We showed identical stimuli to our model and ran PGDD. Result: PGDD accounts for all these instances!

Our conclusion: all Gestalt perceptual grouping principles may unify under one mechanism: integration of priors into sensory processing.
October 24, 2025 at 2:01 PM
They showed monkeys stimuli with grouping by continuation and similarity, then tested with cued attention.

Result: attentional modulation spread to other group segments with a delay—the same signature of prior integration we see everywhere!
October 24, 2025 at 2:01 PM
Kanizsa Square and Face-Vase illusions exemplify Gestalt principles of closure and figure-ground segregation, now unified through generative inference and prior integration.

What about other principles? Roelfsema lab ran a clever monkey study:
October 24, 2025 at 2:01 PM