Farzaneh Olianezhad
@folianezhad.bsky.social
Vision Science | Systems & Computational Neuroscience
Reposted by Farzaneh Olianezhad
We're also developing models for psychiatric disorders that account for perceptual aberrations through abnormal values of interpretable parameters in healthy perception models.

Clinical applications on the horizon! More on that soon!
October 24, 2025 at 2:03 PM
Reposted by Farzaneh Olianezhad
We also found surprising applications of PGDD in mechanistic interpretability—but that's another story for another time!
October 24, 2025 at 2:02 PM
Reposted by Farzaneh Olianezhad
Our theory predicts: whatever algorithm the brain uses for learning hierarchical abstractions, the same feedback pathways that adjust synaptic weights during learning can adjust activations during inference for flexible computation.
(Makes continual learning much easier!)
October 24, 2025 at 2:02 PM
Reposted by Farzaneh Olianezhad
Our goal wasn't to solve illusions or Gestalt principles, but to understand why some brain functions are better explained by pattern recognition models and others by generative models.

Turns out: training for pattern recognition gives generative abilities!
October 24, 2025 at 2:02 PM
Reposted by Farzaneh Olianezhad
Finally, we tested the other end of the perceptual continuum: purely prior-driven percepts, where imagination/hallucination occurs. Starting from noise (here, the same noise for both), PGDD and "increase confidence" each generate meaningful but distinct patterns.
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
We tested similar illusions: Ehrenstein illusion, Cornsweet illusion, Adelson's checker shadow, and the Confetti illusion. Neural recordings aren't available for these, but extensive human studies exist.

Again, generative inference via PGDD accounts for all of them.
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
We tested Neon Color Spreading (which was recently also studied in mice). The illusion: there is no blue disc, only blue lines! When we ran PGDD in the same network, the disc appears! Try it in the demo link below.
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
We showed identical stimuli to our model and ran PGDD. Result: PGDD accounts for all these instances!

Our conclusion: all Gestalt perceptual grouping principles may unify under one mechanism: integration of priors into sensory processing.
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
Kanizsa Square and Face-Vase illusions exemplify Gestalt principles of closure and figure-ground segregation, now unified through generative inference and prior integration.

What about other principles? The Roelfsema lab ran a clever monkey study:
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
Can other inference objectives arrive at similarly plausible explanations? We developed Prior-Guided Drift Diffusion (PGDD): it moves away from a noisy representation of the input image (drift) while adding a small diffusion noise, importantly allowing access to low-level-only priors.
October 24, 2025 at 2:01 PM
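Based only on the one-sentence description above, here is a hedged sketch of what one PGDD-style update could look like in PyTorch. The drift objective, the use of an early ResNet layer to restrict feedback to low-level priors, and all names and step sizes are my assumptions, not the authors' implementation.

```python
import torch

def pgdd_step(model, x, x_noisy, drift_lr=0.05, noise_scale=0.01):
    """One illustrative PGDD-style update (a sketch, not the published algorithm).

    Drift: push the current image away from a noisy representation of the input,
    measured in an early (low-level) layer of a torchvision ResNet so that only
    low-level priors shape the feedback. Diffusion: add a small amount of noise.
    """
    x = x.clone().requires_grad_(True)
    low_level = model.conv1(x)                      # early-layer features (assumption)
    low_level_ref = model.conv1(x_noisy).detach()   # noisy representation of the input
    distance = (low_level - low_level_ref).pow(2).mean()
    grad, = torch.autograd.grad(distance, x)
    with torch.no_grad():
        x = x + drift_lr * grad                     # drift away from the noisy reference
        x = x + noise_scale * torch.randn_like(x)   # small diffusion noise
    return x.detach()
```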
Reposted by Farzaneh Olianezhad
We gave the face-vase image to two ResNet50 networks: one trained on ImageNet, the other on VGGFace2 (modeling face processing areas).

Running generative inference: the object network creates vase patterns, the face network creates facial features!
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
Face-vase illusions evoke bistable perception in humans. Surprisingly, from early visual areas—supposedly just mirroring external reality—you can decode which percept the subject sees! (Consistent with a neural signature of prior integration.)
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
Does generative inference depend on architecture? We tested VGG16-bn, ResNet18, and WideResNet50: all, when trained on ImageNet, showed induced contours. But when identical architectures were trained on faces or places? No pattern completion. Data priors matter!
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
What did it change to? From "plectrum" (initial classification) to "lampshade" with higher confidence through generative inference!

Here are actual ImageNet samples of both classes. The network found its closest meaningful interpretation!
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
Remember: this isn't just a Kanizsa illusion model—it's ResNet50, which does complex object recognition on natural images close to training data. But when stimuli are unfamiliar (like Kanizsa), generative inference tries to make sense of the input.
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
For Kanizsa, the least likely class was "trolleybus"! By iteratively moving away from that prediction, the network gradually creates contours around the square area!

First time I ran this—I couldn't believe my eyes!
October 24, 2025 at 2:01 PM
Reposted by Farzaneh Olianezhad
The "increase confidence" objective moves activations away from the least likely classes identified in conventional inference.

The algorithm iteratively updates activations using gradients (errors) of this objective.
October 24, 2025 at 2:01 PM
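For readers who want the mechanics, here is a minimal PyTorch sketch of an "increase confidence"-style loop, assuming for illustration that the updates are applied to the input pixels (the thread updates activations, which could equally be an intermediate layer); the weights, step size, iteration count, and number of least-likely classes are placeholders.

```python
import torch
import torchvision.models as models

# Standard torchvision weights stand in for the robustly trained ResNet50
# mentioned in the thread; the pixel representation stands in for whatever
# activations are actually updated.
model = models.resnet50(weights="IMAGENET1K_V1").eval()

def increase_confidence(x, n_steps=200, lr=0.05, k_least=10):
    # Least likely classes under conventional (feedforward) inference.
    with torch.no_grad():
        log_probs = torch.log_softmax(model(x), dim=1)
    least_likely = log_probs.topk(k_least, dim=1, largest=False).indices
    x = x.clone().requires_grad_(True)
    for _ in range(n_steps):
        log_probs = torch.log_softmax(model(x), dim=1)
        # Move away from the least likely classes: push their log-probabilities
        # down, which raises confidence in the remaining interpretations.
        objective = -log_probs.gather(1, least_likely).sum()
        grad, = torch.autograd.grad(objective, x)
        with torch.no_grad():
            x = x + lr * grad                       # ascend the objective
        x = x.detach().requires_grad_(True)
    return x.detach()
```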
Reposted by Farzaneh Olianezhad
We don't retrain weights on this new image. Instead, we let the network use its learned priors about natural image distributions from training.

Our inference objective? Simple and intuitive: Increase confidence!
October 24, 2025 at 2:00 PM
Reposted by Farzaneh Olianezhad
Example: What happens when we show a robustly-trained ResNet50 a Kanizsa square?

Conventional inference → low confidence output. Makes sense: it was never trained on Kanizsa stimuli. But what if we use its (intrinsic learning) feedback (aka the backpropagation graph)?
October 24, 2025 at 2:00 PM
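For concreteness, conventional feedforward inference on such a stimulus would look like the sketch below (the image tensor is a stand-in, and standard torchvision weights replace the robustly trained ResNet50 from the thread); the low maximum softmax probability is what motivates reusing the feedback graph.

```python
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
kanizsa = torch.rand(1, 3, 224, 224)   # placeholder for a preprocessed Kanizsa square
with torch.no_grad():
    probs = torch.softmax(model(kanizsa), dim=1)
print(probs.max().item(), probs.argmax().item())   # expect a low top-class probability
```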
Reposted by Farzaneh Olianezhad
Theory: feedback errors, under certain conditions, approximate the steepest ascent toward naturalistic patterns (the score function from generative models). These errors act like a compass, guiding activations toward more plausible states under the data distribution.
October 24, 2025 at 2:00 PM
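In symbols, as I read the claim: the feedback error behaves, under the stated conditions, approximately like the score of the data distribution the network has learned, so iterating the activation updates is a form of steepest ascent on the data log-density. Schematically (a sketch of the idea, not the paper's exact statement), with step size $\eta$ and feedback error $\delta_t$:

$$x_{t+1} \;=\; x_t + \eta\,\delta_t, \qquad \delta_t \;\approx\; \nabla_{x}\log p(x_t).$$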
Reposted by Farzaneh Olianezhad
What if we let the same feedback that was used during learning (to tune the weights for, e.g., object recognition) update the activations according to the network's learned prior when needed? Obviously, we need novel inference objectives and a theory to link them to priors!
October 24, 2025 at 2:00 PM
Reposted by Farzaneh Olianezhad
Back to the drawing board (and biology)! In machine learning, feedback's main purpose is to tune the weights (through backprop or biological alternatives). But during inference (test time/perception), that computational graph is not used. In the brain, feedback is used for both: learning and inference.
October 24, 2025 at 2:00 PM
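A small, self-contained PyTorch illustration of that distinction (my own toy example, not from the thread): the same backward graph can return gradients for the weights, used during learning, or gradients for the input/activations, which is what reusing feedback at inference time would exploit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy model and input; everything here is illustrative.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
x = torch.randn(1, 10, requires_grad=True)
target = torch.tensor([2])
loss = F.cross_entropy(model(x), target)

# (1) Learning: feedback as gradients w.r.t. the weights (what backprop normally tunes).
weight_grads = torch.autograd.grad(loss, list(model.parameters()), retain_graph=True)

# (2) Inference-time reuse: the same graph yields gradients w.r.t. the input
# (or any intermediate activation), which can update the representation itself.
input_grad, = torch.autograd.grad(loss, x)
```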
Reposted by Farzaneh Olianezhad
It seems that there is a generative process in the visual cortex during perception when a stimulus is incomplete, ambiguous, or noisy. We call it Generative Inference. But how can we have generative inference in neural networks?
October 24, 2025 at 2:00 PM
Reposted by Farzaneh Olianezhad
First: does training recurrent networks for object recognition automatically create neural signatures of prior integration? We tested CORnet-S and PCN (predictive coding) on ImageNet. Result: end-to-end training of dynamics isn't the answer.
October 24, 2025 at 2:00 PM