Adrien Doerig
@adriendoerig.bsky.social
Cognitive computational neuroscience, machine learning, psychophysics & consciousness.

Currently Professor at Freie Universität Berlin, also affiliated with the Bernstein Center for Computational Neuroscience.
Pinned
🚨 Finally out in Nature Machine Intelligence!!
"Visual representations in the human brain are aligned with large language models"
🔗 www.nature.com/articles/s42...
High-level visual representations in the human brain are aligned with large language models - Nature Machine Intelligence
Doerig, Kietzmann and colleagues show that the brain’s response to visual scenes can be modelled using language-based AI representations. By linking brain activity to caption-based embeddings from lar...
www.nature.com
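For readers curious about the mechanics behind this paper, here is a minimal sketch of the kind of caption-embedding encoding analysis the abstract describes: each image's caption is embedded with a language model, and a regularised linear map predicts voxel responses from those embeddings. Everything below (data sizes, variable names, the random stand-in embeddings and fMRI responses) is an illustrative assumption, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_dims, n_voxels = 200, 384, 1000

# Stand-in for caption embeddings: in the paper's approach each image's caption
# is passed through a language model; random vectors keep this sketch runnable offline.
caption_embeddings = rng.normal(size=(n_images, n_dims))

# Stand-in fMRI responses (images x voxels) with a weak simulated linear dependence,
# so the encoding model has something to recover.
weights = 0.1 * rng.normal(size=(n_dims, n_voxels))
fmri = caption_embeddings @ weights + rng.normal(size=(n_images, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    caption_embeddings, fmri, test_size=0.2, random_state=0)

# Linear encoding model: ridge regression from caption embeddings to voxel responses,
# evaluated on held-out images.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 10)).fit(X_train, Y_train)
print("held-out R^2:", round(encoder.score(X_test, Y_test), 3))
```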
Reposted by Adrien Doerig
Is the “standard workflow” holding back fMRI analysis?

Mass-univariate analysis is still the bread-and-butter: intuitive, fast… and chronically overfitted. Add harsh multiple-comparison penalties, and we patch the workflow with statistical band-aids. No wonder the stringency debates never die.
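For context, the "standard workflow" being criticised boils down to something like the toy sketch below: one independent statistical test at every voxel, followed by a harsh correction for the enormous number of tests. The data, sizes, and choice of Bonferroni correction are illustrative assumptions, not anything taken from the post.

```python
import numpy as np
from scipy import stats

# Toy stand-in for voxel-wise beta estimates (trials x voxels).
rng = np.random.default_rng(0)
n_trials, n_voxels = 20, 50_000
betas_a = rng.normal(size=(n_trials, n_voxels))          # condition A
betas_b = rng.normal(size=(n_trials, n_voxels)) + 0.02   # condition B, tiny true effect

# Mass-univariate step: one independent paired t-test per voxel.
t_vals, p_vals = stats.ttest_rel(betas_a, betas_b, axis=0)

# The harsh multiple-comparison penalty: Bonferroni across all voxels,
# which a small effect like this one almost never survives.
alpha = 0.05
survives = p_vals < alpha / n_voxels
print(f"{int(survives.sum())} of {n_voxels} voxels survive Bonferroni at alpha={alpha}")
```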
November 18, 2025 at 10:13 PM
Reposted by Adrien Doerig
Excited to share my first paper: Model–Behavior Alignment under Flexible Evaluation: When the Best-Fitting Model Isn’t the Right One (NeurIPS 2025). Link below.
November 20, 2025 at 2:05 PM
Very cool work, go check it out!

Rich, brain-like scene representations built from active vision
November 18, 2025 at 2:53 PM
This seems very cool. @singerjohannes.bsky.social decidedly did loads of cool stuff before I got lucky and he joined our group
New preprint led by @pablooyarzo.bsky.social together with @kohitij.bsky.social, Diego Vidaurre & Radek Cichy.

Using EEG + fMRI, we show that when humans recognize images that feedforward CNNs fail on, the brain recruits cortex-wide recurrent resources.

www.biorxiv.org/content/10.1... (1/n)
www.biorxiv.org
November 7, 2025 at 12:56 PM
Ask a brain anything!

This is incredible work led by Victoria Bosch @initself.bsky.social at the Kietzmann Lab @timkietzmann.bsky.social, developing interactive and flexible brain decoding.

I'm excited to see where this goes next.
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan 🧠💬

1/n
November 3, 2025 at 3:48 PM
Reposted by Adrien Doerig
New preprint!

"Non-commitment in mental imagery is distinct from perceptual inattention, and supports hierarchical scene construction"

(by Li, Hammond, & me)

link: doi.org/10.31234/osf...

-- the title's a bit of a mouthful, but the nice thing is that it's a pretty decent summary
October 14, 2025 at 1:22 PM
Reposted by Adrien Doerig
Over the past year, my lab has been working on fleshing out theory + applications of the Platonic Representation Hypothesis.

Today I want to share two new works on this topic:

Eliciting higher alignment: arxiv.org/abs/2510.02425
Unpaired learning of unified reps: arxiv.org/abs/2510.08492

1/9
October 10, 2025 at 10:13 PM
Reposted by Adrien Doerig
🧠 New preprint: we show that model-guided microstimulation can steer monkey visual behavior.

Paper: arxiv.org/abs/2510.03684

🧵
October 7, 2025 at 3:22 PM
Reposted by Adrien Doerig
Awesome work by @jorge-morales.bsky.social and team, using LLMs to suggest that propositional reasoning might be enough to solve classic imagery tasks! ✨
Imagine an apple 🍎. Is your mental image more like a picture or more like a thought? In a new preprint led by Morgan McCarty—our lab's wonderful RA—we develop a new approach to this old cognitive science question and find that LLMs excel at tasks thought to be solvable only via visual imagery. 🧵
Artificial Phantasia: Evidence for Propositional Reasoning-Based Mental Imagery in Large Language Models
This study offers a novel approach for benchmarking complex cognitive behavior in artificial systems. Almost universally, Large Language Models (LLMs) perform best on tasks which may be included in th...
arxiv.org
October 1, 2025 at 6:32 AM
Reposted by Adrien Doerig
So excited to see this preprint released from the lab into the wild.

Charlotte has developed a theory for how learning curriculum influences learning generalization.
Our theory makes straightforward neural predictions that can be tested in future experiments. (1/4)

🧠🤖 🧠📈 #MLSky
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
September 30, 2025 at 2:35 PM
Reposted by Adrien Doerig
I wanted to add some thoughts to this excellent blog post, not detailed, maybe wrong, maybe useful:
1. Unique variance is easy to interpret as a lower bound of what a variable explains (the upper bound being either what the variable explains alone or what the other variables cannot explain uniquely)
Variance partitioning is used to quantify the overlap of two models. Over the years, I have found that this can be a very confusing and misleading concept. So we finally decided to write a short blog to explain why.
@martinhebart.bsky.social @gallantlab.org
diedrichsenlab.org/BrainDataSci...
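To make the lower-bound point above concrete, here is a small worked example with two correlated toy predictors: the unique variance of x1 (the full model's R² minus the other predictor's R² alone) is a lower bound on what x1 explains, what x1 explains by itself is an upper bound, and the gap is the shared portion that cannot be attributed uniquely. Data and variable names are made up for illustration and are not from the blog post.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)       # correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)   # outcome driven by both

def r2(X, y):
    """R-squared of an ordinary least-squares fit."""
    return LinearRegression().fit(X, y).score(X, y)

r2_full = r2(np.column_stack([x1, x2]), y)
r2_x1 = r2(x1.reshape(-1, 1), y)
r2_x2 = r2(x2.reshape(-1, 1), y)

unique_x1 = r2_full - r2_x2        # lower bound on what x1 explains
alone_x1 = r2_x1                   # upper bound: what x1 explains by itself
shared = r2_x1 + r2_x2 - r2_full   # variance that cannot be attributed uniquely

print(f"unique to x1: {unique_x1:.2f}, x1 alone: {alone_x1:.2f}, shared: {shared:.2f}")
```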
September 12, 2025 at 1:57 PM
Reposted by Adrien Doerig
Variance partitioning is used to quantify the overlap of two models. Over the years, I have found that this can be a very confusing and misleading concept. So we finally decided to write a short blog to explain why.
@martinhebart.bsky.social @gallantlab.org
diedrichsenlab.org/BrainDataSci...
September 10, 2025 at 4:58 PM
Reposted by Adrien Doerig
🧠 New preprint: Why do deep neural networks predict brain responses so well?
We find a striking dissociation: it’s not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics.
📊 Study: n=57, 624k trials, 5 models doi.org/10.1101/2025...
September 8, 2025 at 6:32 PM
Reposted by Adrien Doerig
What determines where specialisation for sensory information occurs in the cortex? We observe a spatial competition between primary sensory areas and the Default Mode Network, using our new Spatial Component Decomposition method. Work by @ulysse-klatzmann.bsky.social, with Bazin & Daniel Margulies
Spatial layout of visual specialization is shaped by competing default mode and sensory networks https://www.biorxiv.org/content/10.1101/2025.09.08.674858v1
September 9, 2025 at 8:51 AM
Reposted by Adrien Doerig
Preprint alert! 🚨
1/ How does deep sleep reshape our memories? Our new study shows that slow-wave sleep (SWS) reorganises episodic memory networks, shifting recall from the parietal cortex to the anterior temporal lobe (ATL). With Polina Perzich and @bstaresina.bsky.social. A thread below 👇
Slow wave sleep supports the reorganisation of episodic memory networks https://www.biorxiv.org/content/10.1101/2025.03.24.644966v1
March 25, 2025 at 5:49 PM
Reposted by Adrien Doerig
A transformation from vision to imagery in the human brain. Intriguing new preprint by Roy & Naselaris et al. for anyone interested in mental imagery!
www.biorxiv.org/content/10.1...
A transformation from vision to imagery in the human brain
Extensive work has shown that the visual cortex is reactivated during mental imagery, and that models trained on visual data can predict imagery activity and decode imagined stimuli. These findings ma...
www.biorxiv.org
September 4, 2025 at 9:28 AM
Reposted by Adrien Doerig
🧠 “You never forget how to ride a bike”, but how is that possible?
Our study proposes a bio-plausible meta-plasticity rule that shapes synapses over time, enabling selective recall based on context
Context selectivity with dynamic availability enables lifelong continual learning
“You never forget how to ride a bike” – but how is that possible? The brain is able to learn complex skills, stop the practice for years, learn other…
www.sciencedirect.com
September 4, 2025 at 4:00 PM
Reposted by Adrien Doerig
New paper with @ManuKirberg 💭
Is “unconscious mental imagery” real? The evidence is weaker than it seems. We explain why—and how to move the debate forward.
🔗 www.sciencedirect.com/science/arti...
Aphantasia and the unconscious imagery hypothesis
Until recently, mental imagery has largely been regarded as an exclusively conscious phenomenon. However, recent empirical results suggest that mental…
www.sciencedirect.com
September 4, 2025 at 5:12 AM
Reposted by Adrien Doerig
🔍 Large language models, similar to those behind ChatGPT, can predict how the human brain responds to visual stimuli

New study by @adriendoerig.bsky.social @freieuniversitaet.bsky.social with colleagues from Osnabrück, Minnesota and @umontreal-en.bsky.social

Read the whole story 👉 bit.ly/3JXlYmO
September 2, 2025 at 7:01 AM
Reposted by Adrien Doerig
What makes humans similar to or different from AI? In a paper out in @natmachintell.nature.com led by @florianmahner.bsky.social & @lukasmut.bsky.social, w/ Umut Güclü, we took a deep look at the factors underlying their representational alignment, with surprising results.

www.nature.com/articles/s42...
Dimensions underlying the representational alignment of deep neural networks with humans - Nature Machine Intelligence
An interpretability framework that compares how humans and deep neural networks process images has been presented. Their findings reveal that, unlike humans, deep neural networks focus more on visual ...
www.nature.com
June 23, 2025 at 8:03 PM
Reposted by Adrien Doerig
Our target discussion article is out in Cognitive Neuroscience! It will be followed by peer commentary and our responses. If you would like to write a commentary, please reach out to the journal! 1/18 www.tandfonline.com/doi/full/10.... @cibaker.bsky.social @susanwardle.bsky.social
August 29, 2025 at 6:43 PM
Reposted by Adrien Doerig
So happy to see this work out! 🥳
Huge thanks to our two amazing reviewers who pushed us to make the paper much stronger. A truly joyful collaboration with @lucasgruaz.bsky.social, @sobeckerneuro.bsky.social, and Johanni Brea! 🥰

Tweeprint on an earlier version: bsky.app/profile/modi... 🧠🧪👩‍🔬
Merits of Curiosity: A Simulation Study
Abstract: ‘Why are we curious?’ has been among the central puzzles of neuroscience and psychology in the past decades. A popular hypothesis is that curiosity is driven by intrinsically generated reward signals, which have evolved to support survival in complex environments. To formalize and test this hypothesis, we need to understand the enigmatic relationship between (i) intrinsic rewards (as drives of curiosity), (ii) optimality conditions (as objectives of curiosity), and (iii) environment structures. Here, we demystify this relationship through a systematic simulation study. First, we propose an algorithm to generate environments that capture key abstract features of different real-world situations. Then, we simulate artificial agents that explore these environments by seeking one of six representative intrinsic rewards: novelty, surprise, information gain, empowerment, maximum occupancy principle, and successor-predecessor intrinsic exploration. We evaluate the exploration performance of these simulated agents regarding three potential objectives of curiosity: state discovery, model accuracy, and uniform state visitation. Our results show that the comparative performance of each intrinsic reward is highly dependent on the environmental features and the curiosity objective; this indicates that ‘optimality’ in top-down theories of curiosity needs a precise formulation of assumptions. Nevertheless, we found that agents seeking a combination of novelty and information gain always achieve a close-to-optimal performance on objectives of curiosity as well as in collecting extrinsic rewards. This suggests that novelty and information gain are two principal axes of curiosity-driven behavior. These results pave the way for the further development of computational models of curiosity and the design of theory-informed experimental paradigms.
dlvr.it
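As a concrete toy illustration of one of the intrinsic rewards the abstract lists, the sketch below implements a count-based novelty bonus on a small ring of states and scores the run on the state-discovery objective. The environment, the form of the bonus, and the greedy policy are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

# Toy ring of states; a count-based novelty bonus rewards rarely visited states,
# and the run is scored on the 'state discovery' objective named in the abstract.
n_states, n_steps = 25, 500
rng = np.random.default_rng(0)
visit_counts = np.zeros(n_states)
state = 0

for _ in range(n_steps):
    visit_counts[state] += 1
    novelty = 1.0 / np.sqrt(visit_counts + 1.0)          # less visited -> larger bonus
    neighbours = [(state - 1) % n_states, (state + 1) % n_states]
    # Greedy-in-novelty move, with a tiny random tie-breaker.
    state = max(neighbours, key=lambda s: novelty[s] + 1e-6 * rng.random())

print("states discovered:", int((visit_counts > 0).sum()), "of", n_states)
```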
August 25, 2025 at 4:18 PM
Reposted by Adrien Doerig
🚨 We believe this is a major step forward in how we study hippocampus function in healthy humans.

Using novel behavioral tasks, fMRI, RL & RNN modeling, and transcranial ultrasound stimulation (TUS), we demonstrate the causal role of hippocampus in relational structure learning.
August 28, 2025 at 2:00 PM
Reposted by Adrien Doerig
Very happy to see this preprint out! The amazing @danwang7.bsky.social was on fire sharing this work at #ECVP2025, gathering loads of attention, and here you can find the whole thing!
Using RIFT, we reveal how the competition between top-down goals and bottom-up saliency unfolds within visual cortex.
August 28, 2025 at 10:09 AM
Reposted by Adrien Doerig
Job alert 🚨 Fully funded PhD position available in our Maastricht lab! Are you interested in predictive processing, individual differences, and computational modelling of behavioural and neural data? Please apply! #NeuroJobs vacancies.maastrichtuniversity.nl/job/Maastric...
PhD Candidate: cognitive computational neuroscience of individual differences
vacancies.maastrichtuniversity.nl
August 28, 2025 at 9:11 AM