Tahereh Toosi
@taherehtoosi.bsky.social
Associate Research Scientist at the Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute
Kavli Institute for Brain Science
Columbia University Irving Medical Center
K99-R00 scholar @NIH @NatEyeInstitute
https://toosi.github.io/
We're also developing models for psychiatric disorders that account for perceptual aberrations through abnormal values of interpretable parameters in healthy perception models.

Clinical applications on the horizon! More on that soon!
October 24, 2025 at 2:03 PM
We also found surprising applications of PGDD in mechanistic interpretability—but that's another story for another time!
October 24, 2025 at 2:02 PM
Finally, we tested the other end of the perceptual continuum: purely prior-driven percepts where imagination/hallucination occurs. Starting from (here, the same) noise, both PGDD and "increase confidence" generate meaningful (distinct) patterns.
October 24, 2025 at 2:01 PM
We tested similar illusions: Ehrenstein illusion, Cornsweet illusion, Adelson's checker shadow, and the Confetti illusion. Neural recordings aren't available for these, but extensive human studies exist.

Again, generative inference via PGDD accounts for all of them.
October 24, 2025 at 2:01 PM
We tested Neon Color Spreading (which was recently studied in mice as well). The illusion: there is no blue disk, only blue lines! When we ran PGDD in the same network, the disk appeared! Try it in the demo link below.
October 24, 2025 at 2:01 PM
We showed identical stimuli to our model and ran PGDD. Result: PGDD accounts for all these instances!

Our conclusion: all Gestalt perceptual grouping principles may unify under one mechanism: integration of priors into sensory processing.
October 24, 2025 at 2:01 PM
They showed monkeys stimuli with grouping by continuation and similarity, then tested with cued attention.

Result: attentional modulation spread to other group segments with a delay—the same signature of prior integration we see everywhere!
October 24, 2025 at 2:01 PM
Can other inference objectives arrive at similarly plausible explanations? We developed Prior-Guided Drift Diffusion (PGDD): it moves away from a noisy representation of the input image (drift) while adding a small diffusion noise, and importantly allows access to low-level-only priors.
October 24, 2025 at 2:01 PM
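A toy sketch of the PGDD update described above (this is an illustration, not the paper's implementation; the step size `eta`, noise scale `sigma`, and the simple pixel-space drift are all assumptions):

```python
import numpy as np

def pgdd_step(x, x_input_noisy, eta=0.05, sigma=0.01, rng=None):
    """One toy PGDD step: drift away from a noisy representation of the
    input while injecting a small diffusion noise. All parameters here
    are illustrative, not the paper's."""
    rng = rng if rng is not None else np.random.default_rng(0)
    drift = x - x_input_noisy                      # direction away from the noisy input
    noise = sigma * rng.standard_normal(x.shape)   # small diffusion term
    return x + eta * drift + noise

# Toy run: the representation gradually moves away from the noisy input
rng = np.random.default_rng(0)
x_in = rng.standard_normal(16)
x = x_in + 0.1 * rng.standard_normal(16)
d0 = np.linalg.norm(x - x_in)                      # initial distance from input
for _ in range(20):
    x = pgdd_step(x, x_in, rng=rng)
```

In the full model the drift would act on network activations, with access restricted to low-level priors as the post notes; here a plain vector stands in for those activations.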
We gave the face-vase image to two ResNet50 networks: one trained on ImageNet, the other on VGGFace2 (modeling face processing areas).

Running generative inference: the object network creates vase patterns, the face network creates facial features!
October 24, 2025 at 2:01 PM
Face-vase illusions evoke bistable perception in humans. Surprisingly, from early visual areas—supposedly just mirroring external reality—you can decode which percept the subject sees! (Consistent with the neural signature of prior integration.)
October 24, 2025 at 2:01 PM
Does generative inference depend on architecture? We tested VGG16-bn, ResNet18, and WideResNet50—all trained on ImageNet showed induced contours. But when identical architectures trained on faces or places? No pattern completion. Data priors matter!
October 24, 2025 at 2:01 PM
What did it change to? From "plectrum" (initial classification) to "lampshade" with higher confidence through generative inference!

Here are actual ImageNet samples of both classes. The network found its closest meaningful interpretation!
October 24, 2025 at 2:01 PM
Remember: this isn't just a Kanizsa illusion model—it's ResNet50, which does complex object recognition on natural images close to training data. But when stimuli are unfamiliar (like Kanizsa), generative inference tries to make sense of the input.
October 24, 2025 at 2:01 PM
For Kanizsa, the least likely class was "trolleybus"! By iteratively moving away from that prediction, the network gradually creates contours around the square area!

First time I ran this—I couldn't believe my eyes!
October 24, 2025 at 2:01 PM
The "increase confidence" objective moves activations away from the least likely classes identified in conventional inference.

The algorithm iteratively updates activations using gradients (errors) of this objective.
October 24, 2025 at 2:01 PM
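The objective above can be sketched with a toy linear readout (a sketch under assumptions: `W`, the learning rate, and the step count are all illustrative, and the real model operates on a deep network, not a single layer). The key point it shows: the weights stay frozen, and only the activations are updated by the gradient of the objective.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def increase_confidence(a, W, steps=100, lr=0.05):
    """Toy sketch of the 'increase confidence' objective: at each step,
    find the least likely class under conventional inference and take a
    gradient step on the activations (weights W stay frozen) that pushes
    its probability down."""
    n_classes = W.shape[0]
    for _ in range(steps):
        p = softmax(W @ a)
        k = int(np.argmin(p))  # least likely class under conventional inference
        # d p_k / d logits = p_k * (one_hot(k) - p)   (row of the softmax Jacobian)
        dlogits = p[k] * ((np.arange(n_classes) == k).astype(float) - p)
        a = a - lr * (W.T @ dlogits)  # activations move; weights do not
    return a

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 8)) * 0.5  # hypothetical fixed readout
a0 = rng.standard_normal(8)            # initial activations
a1 = increase_confidence(a0, W)
```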
Example: What happens when we show a robustly-trained ResNet50 a Kanizsa square?

Conventional inference → low confidence output. Makes sense: it was never trained on Kanizsa stimuli. But what if we use its (intrinsic learning) feedback (a.k.a. the backpropagation graph)?
October 24, 2025 at 2:00 PM
Theory: feedback errors, under certain conditions, approximate the steepest ascent toward naturalistic patterns (the score function from generative models). These errors act like a compass, guiding activations toward more plausible states under the data distribution.
October 24, 2025 at 2:00 PM
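In standard score-based notation (a sketch of the claim above using generic symbols, not necessarily the paper's): if the feedback error approximates the score function, then iterating a Langevin-style update climbs toward more probable patterns under the data distribution,

```latex
s(x) \;=\; \nabla_x \log p(x),
\qquad
x_{t+1} \;=\; x_t \;+\; \eta\, \hat{s}(x_t) \;+\; \sqrt{2\eta}\,\xi_t,
\quad \xi_t \sim \mathcal{N}(0, I),
```

where $\hat{s}$ is the feedback-error approximation to the score, $\eta$ a step size, and $\xi_t$ the diffusion noise. This is the standard Langevin sampler from score-based generative modeling, offered as context rather than the paper's exact dynamics.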
What if we let the same feedback that was used during learning (to tune the weights, e.g., for object recognition) update the activations according to the network's learned prior when needed? Obviously, we need novel inference objectives and a theory to link them to priors!
October 24, 2025 at 2:00 PM
Back to the board (and biology)! In machine learning, feedback's main purpose is to tune the weights (through backprop or biological alternatives). But during inference (test time/perception), that computational graph is not used. In the brain, feedback is used for both learning and inference.
October 24, 2025 at 2:00 PM
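The contrast can be made concrete with a toy one-layer readout (everything here is an illustrative assumption, not the paper's model): the same backpropagated output error either updates the weights (learning) or, with the weights frozen, feeds back through them to update the activations (inference).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5)) * 0.1   # toy readout weights
a = rng.standard_normal(5)              # toy activations
target = np.array([1.0, 0.0, 0.0])      # desired output (class 0)

def loss(W, a):
    z = W @ a
    p = np.exp(z - z.max()); p = p / p.sum()
    return -np.log(p[0])                # cross-entropy against class 0

def output_error(W, a):
    z = W @ a
    p = np.exp(z - z.max()); p = p / p.sum()
    return p - target                   # backpropagated error at the output

err = output_error(W, a)

# Learning: the error tunes the weights (activations held fixed)
W_learn = W - 0.05 * np.outer(err, a)

# Inference: the same error, fed back through W, updates the activations
a_infer = a - 0.05 * (W.T @ err)
```

Both updates reduce the same loss; the only difference is which quantity the feedback signal is allowed to change.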
First: does training recurrent networks for object recognition automatically create neural signatures of prior integration? We tested CORnet-S and PCN (predictive coding) on ImageNet. Result: end-to-end training of dynamics isn't the answer.
October 24, 2025 at 2:00 PM
It's not just Kanizsa. Figure-ground segregation, neon color spreading, and Rubin's vase illusion all show the same pattern.

Even Gestalt principles (grouping by similarity, continuation, etc.) exhibit the same neural signatures of delayed, feedback-dependent processing.
October 24, 2025 at 2:00 PM
Consider the Kanizsa illusion: decades of neural recordings and careful experiments show that neurons in early visual areas respond to illusory contours, even when nothing exists in their receptive fields!

This response is causally dependent on feedback (not local recurrence).
October 24, 2025 at 2:00 PM
How does our brain excel at complex object recognition, yet get fooled by simple illusory contours? What unifying principle governs all Gestalt laws of perceptual organization?

We may have an answer: integration of learned priors through feedback. New paper with @kenmiller.bsky.social! 🧵
October 24, 2025 at 2:00 PM