David Richter
@davidrichter.bsky.social
Cognitive Neuroscientist | Predictive Processing & Perception Researcher.
At: CIMCYC, Granada. Formerly: VU Amsterdam & Donders Institute.
https://www.richter-neuroscience.com/
Pinned
High-level visual surprise is rapidly integrated during perceptual inference!

🚨 New paper 🚨 out now in @cp-iscience.bsky.social with @paulapena.bsky.social and @mruz.bsky.social

www.cell.com/iscience/ful...

Summary 🧵 below 👇
Rapid computation of high-level visual surprise
www.cell.com
Congratulations Peter! Amazing news and well deserved!
December 11, 2025 at 4:53 PM
Thanks Juan!
December 9, 2025 at 11:08 AM
If you’re interested in more details, check out the full paper:
doi.org/10.1016/j.is...
December 5, 2025 at 2:37 PM
Taken together, our findings show that high-level visual predictions are rapidly integrated during perceptual inference, suggesting that the brain's predictive machinery is finely tuned to utilize expectations abstracted away from low-level sensory details to facilitate perception.
December 5, 2025 at 2:37 PM
We also found a small decrease in neural responses with semantic (word-based) surprise. Notably, low-level visual surprise had no detectable effect, even though stimuli were predictable all the way down to the pixel level.
December 5, 2025 at 2:37 PM
Then we turned to the key questions: When and what kind of surprise drives visually evoked responses?
Neural responses ~190 ms post-stimulus onset over parieto-occipital electrodes were selectively increased by high-level visual surprise!
December 5, 2025 at 2:37 PM
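A common way to test such a selective effect is a trial-wise regression of single-trial ERP amplitude on the surprise regressors. A minimal toy sketch (all numbers simulated for illustration; the effect size, trial count, and noise level are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 500

# Simulated single-trial surprise regressors (hypothetical values).
high_surprise = rng.standard_normal(n_trials)
low_surprise = rng.standard_normal(n_trials)

# Simulated ERP amplitude at a parieto-occipital electrode: it responds
# to high-level surprise only, mimicking the selective effect above.
erp = 0.8 * high_surprise + 0.0 * low_surprise + rng.standard_normal(n_trials)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_trials), high_surprise, low_surprise])
beta, *_ = np.linalg.lstsq(X, erp, rcond=None)
print(beta)  # beta[1] near 0.8, beta[2] near 0
```

The recovered coefficients separate the two surprise types even though both regressors vary on every trial, which is the point of fitting them jointly.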
As a sanity check, we first used RSA to show that the CNN and other models of interest (semantic and task models) robustly explained the EEG responses independently of surprise.
December 5, 2025 at 2:37 PM
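The RSA logic in the post can be illustrated with a toy simulation (everything here is simulated: `mixing`, the channel count, and the noise level are invented stand-ins, not the study's parameters or pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_feat, n_chan = 20, 50, 64

# Hypothetical model features for 20 images (e.g. one CNN layer) and
# EEG patterns that are a noisy linear readout of those features.
model_feats = rng.standard_normal((n_items, n_feat))
mixing = rng.standard_normal((n_feat, n_chan))
eeg_patterns = model_feats @ mixing + rng.standard_normal((n_items, n_chan))

# Representational dissimilarity matrices (condensed form): one
# correlation distance per image pair.
model_rdm = pdist(model_feats, metric="correlation")
eeg_rdm = pdist(eeg_patterns, metric="correlation")

# RSA model fit: rank-correlate the model RDM with the EEG RDM.
rho, _ = spearmanr(model_rdm, eeg_rdm)
print(f"model-EEG RDM correlation: rho = {rho:.2f}")
```

Because the simulated EEG inherits the model's representational geometry, the rank correlation between the two RDMs comes out clearly positive; in the real analysis this is computed per time point to ask when each model explains the EEG response.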
We investigated these questions using EEG and a visual CNN. Participants viewed object images that were probabilistically predicted by preceding cues. We then quantified surprise trial-by-trial at low levels (early CNN layers) and high levels (late CNN layers) of visual feature abstraction.
December 5, 2025 at 2:37 PM
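A minimal sketch of this kind of layer-wise surprise measure (not the paper's actual pipeline: the activations here are random stand-ins, and `layer_surprise` is a hypothetical helper using correlation distance between the cue-predicted and the actually evoked activation pattern in one layer):

```python
import numpy as np

def layer_surprise(expected_act, observed_act):
    """Surprise as the correlation distance between the expected and
    observed activation pattern in one CNN layer: near 0 = fully
    expected, larger values = more surprising."""
    e = expected_act.ravel()
    o = observed_act.ravel()
    return 1.0 - np.corrcoef(e, o)[0, 1]

rng = np.random.default_rng(0)
expected = rng.standard_normal((8, 8, 64))  # e.g. an early-layer feature map
observed_expected_img = expected + 0.1 * rng.standard_normal(expected.shape)
observed_other_img = rng.standard_normal(expected.shape)

low = layer_surprise(expected, observed_expected_img)   # predicted image
high = layer_surprise(expected, observed_other_img)     # unexpected image
print(low, high)
```

Applying the same measure to early versus late layers is what separates low-level from high-level visual surprise on each trial.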
Predictive processing theories propose that the brain continuously generates predictions about incoming sensory input.
But what exactly does the brain predict? Low-level (edges, contrasts) and/or high-level visual features (textures, objects)?
And when do these predictions shape neural responses?
December 5, 2025 at 2:37 PM
Reposted by David Richter
🧠 Regularization, Action, and Attractors in the Dynamical “Bayesian” Brain

direct.mit.edu/jocn/article...

(still uncorrected proofs, but they should post the corrected one soon--also OA is forthcoming, for now PDF at brainandexperience.org/pdf/10.1162-...)
Regularization, Action, and Attractors in the Dynamical “Bayesian” Brain
Abstract. The idea that the brain is a probabilistic (Bayesian) inference machine, continuously trying to figure out the hidden causes of its inputs, has become very influential in cognitive (neuro)sc...
direct.mit.edu
October 22, 2025 at 8:59 AM
Reposted by David Richter
@dotproduct.bsky.social's first first-author paper is finally out in @sfnjournals.bsky.social! Her findings show that content-specific predictions fluctuate with alpha frequencies, suggesting a more specific role for alpha oscillations than we may have thought. With @jhaarsma.bsky.social. 🧠🟦 🧠🤖
Contents of visual predictions oscillate at alpha frequencies
Predictions of future events have a major impact on how we process sensory signals. However, it remains unclear how the brain keeps predictions online in anticipation of future inputs. Here, we combin...
www.jneurosci.org
October 21, 2025 at 11:05 AM
If you’re into predictive processing and curious about the ‘what & when of visual surprise’, come see me at #CCN2025 in Amsterdam!

Poster B23 · Wednesday at 1:00 pm · de Brug.
August 11, 2025 at 3:46 PM
Reposted by David Richter
Hi, we will have three NeuroAI postdoc openings (3 years each, fully funded) to work with Sebastian Musslick (@musslick.bsky.social), Pascal Nieters and myself on task-switching, replay, and visual information routing.

Reach out if you are interested in any of the above, I'll be at CCN next week!
August 9, 2025 at 8:13 AM
Reposted by David Richter
We are recruiting a new PI at the FIL @imagingneuroucl.bsky.social, Associate or Full Professor. This is an amazing place to do cognitive neuroscience, in the heart of London. If you or someone you know might be interested, please pass it on. #neuroskyence

www.ucl.ac.uk/work-at-ucl/...
UCL – University College London
UCL is consistently ranked as one of the top ten universities in the world (QS World University Rankings 2010-2022) and is No.2 in the UK for research power (Research Excellence Framework 2021).
www.ucl.ac.uk
August 8, 2025 at 10:52 AM
Reposted by David Richter
If you are interested in pursuing a PhD in cognitive neuroscience, especially targeting conscious vs. unconscious processing, contact me. We are recruiting 🙏🧠 please RT
July 21, 2025 at 11:50 AM
Reposted by David Richter
🚨 We’re hiring a postdoc!
Join the FLARE project @cimcyc.bsky.social to study sudden perceptual learning using fMRI, RSA, and DNNs.
🧠 2 years, fully funded, flexible start
More info 👉 gonzalezgarcia.github.io/postdoc/

DMs or emails welcome! Please share!
Postdoc Position – FLARE Project
gonzalezgarcia.github.io
July 18, 2025 at 11:17 AM
Reposted by David Richter
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168
July 8, 2025 at 1:04 PM
Taken together, our findings demonstrate that high-level visual predictions are rapidly integrated during perceptual inference. This suggests that the brain's predictive machinery is finely tuned to utilize expectations abstracted away from low-level sensory details, likely to facilitate perception.
June 26, 2025 at 10:22 AM
We also found a curious decrease in ERP amplitude with semantic (word-based) surprise. Critically, we found no modulation by low-level visual surprise, even though stimuli were predictable all the way down to the pixel level.
June 26, 2025 at 10:22 AM
Next, we turned to the key questions: when and what kind of surprise drives visually evoked responses? Results showed that neural responses around 200 ms post-stimulus onset over parieto-occipital electrodes were selectively enhanced by high-level visual surprise.
June 26, 2025 at 10:22 AM
First, as a sanity check, we used RSA to show that the DNN and other models of interest (a semantic word-based model and a task model) explained the EEG response well, irrespective of surprise.
June 26, 2025 at 10:22 AM
We investigated these questions using EEG and a visual DNN. Participants viewed object images that were probabilistically predicted by preceding cues. We then quantified trial-by-trial surprise at low levels (early DNN layers) and high levels (late DNN layers) of visual feature abstraction.
June 26, 2025 at 10:22 AM
Predictive processing holds that the brain continuously generates predictions about incoming sensory information. But at what level of abstraction does the brain predict – edges & contrasts or high-level textures & objects? And which stages of visual processing do such predictions modulate?
June 26, 2025 at 10:22 AM