Heleen Slagter
@haslagter.bsky.social
Professor Vrije Universiteit Amsterdam, Director Institute Brain and Behavior Amsterdam, human brain and mind, attention, predictive processing, action, consciousness, meditation
www.heleenslagter.com
Pinned
There is also my talk! "Meditation and the scientific study of consciousness: from consciousness to pure awareness to complete cessation of awareness" youtu.be/mQBKjPOgatw?... via @YouTube
Reposted by Heleen Slagter
Modeling non-dual awareness via constraint closure: a reinterpretation of groundlessness url: academic.oup.com/nc/article/2...
Modeling non-dual awareness via constraint closure: a reinterpretation of groundlessness
Abstract. Non-dual awareness (NDA) refers to a shift in consciousness in which the usual distinction between subject and object dissolves, and experience i
academic.oup.com
January 23, 2026 at 9:52 PM
Reposted by Heleen Slagter
🚨 New paper out in Science Advances 🚨
With @suryagayet.bsky.social and @peelen.bsky.social, in two fMRI studies we investigate mental object rotations that are driven by the scene context, rather than purely by cognitive operations. 🧵 www.science.org/doi/10.1126/...
January 23, 2026 at 3:16 PM
Reposted by Heleen Slagter
Cost of being female lead/corresponding author in biomedical sciences: "[T]he median amount of time spent under review is 7.4%–14.6% longer for female-authored articles than for male-authored articles" even in disciplines where women are well-represented. #AcademicSky

journals.plos.org/plosbiology/...
Biomedical and life science articles by female researchers spend longer under review
Women are underrepresented in academia, especially in STEMM fields, at top institutions, and in senior positions. This study analyzes millions of biomedical and life science articles, revealing that f...
journals.plos.org
January 21, 2026 at 2:38 PM
Reposted by Heleen Slagter
New preprint with Nicolai Wolpert and Catherine Tallon-Baudry !

Reaction times across three distinct perceptual tasks (total N = 90) varied with the electrical rhythm of the stomach.

#neuroskyence
Perceptual reaction times are coupled to the gastric electrical rhythm https://www.biorxiv.org/content/10.64898/2026.01.18.700150v1
January 22, 2026 at 8:44 AM
Reposted by Heleen Slagter
Sharing our new paper published today in Nature Communications. In my view, this is our clearest demonstration to date that something profoundly changes in how infants encode the world around them before and after the emergence of self-representation. www.nature.com/articles/s41...
The self-reference memory bias is preceded by an other-reference bias in infancy - Nature Communications
A classic feature of human memory is that we remember information better when it refers to ourselves. Here, the authors show that before the emergence of self-concept, infants instead remember informa...
www.nature.com
July 9, 2025 at 3:59 PM
Reposted by Heleen Slagter
We can use past experience to make predictions about the future. How do predictions affect our memory for the present? My own work (tinyurl.com/42kyukch) suggests that predictions compete with memory. But other recent work (tinyurl.com/2ekd4wr6) found the opposite--cooperation! What's going on here?
PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...
tinyurl.com
January 20, 2026 at 9:45 PM
Reposted by Heleen Slagter
EEG-Based Decoding of Color and Visual Category Representations Is Reliable Within and Across Sessions https://www.biorxiv.org/content/10.64898/2026.01.18.699677v1
January 22, 2026 at 3:15 AM
Reposted by Heleen Slagter
Computational neural dynamics of goal-directed visual attention in macaques https://www.biorxiv.org/content/10.64898/2026.01.18.700191v1
January 22, 2026 at 5:15 AM
Reposted by Heleen Slagter
I just created a series of seven deep-dive videos about AI, which I've posted to youtube and now here. 😊

Targeted to laypeople, they explore how LLMs work, what they can do, and what impacts they have on learning, well-being, disinformation, the workplace, the economy, and the environment.
Part 1: How do LLMs work?
YouTube video by Andrew Perfors
www.youtube.com
January 22, 2026 at 12:45 AM
Reposted by Heleen Slagter
Our new paper is out in @natmed.nature.com 😱! A thread:

Can our thoughts and feelings directly affect our physical well-being? Our pre-registered, double-blind RCT investigated this by testing if modulating the brain's reward system could enhance immune responses to vaccination.
January 21, 2026 at 5:55 PM
Reposted by Heleen Slagter
Check out our *preprint* for some cool correlations with behavior (for oblique effect fans). For now, I’m just happy that these fun data are out in the world. It’s been a minute since Chaipat Chunharas & I ventured to dissociate allocentric and retinocentric reference frames (7+ years ago?! 🤫)... 10/n
Visual representations in the human brain rely on a reference frame that is in between allocentric and retinocentric coordinates
Visual information in our everyday environment is anchored to an allocentric reference frame – a tall building remains upright even when you tilt your head, which changes the projection of the building on your retina from a vertical to a diagonal orientation. Does retinotopic cortex represent visual information in an allocentric or retinocentric reference frame?

Here, we investigate which reference frame the brain uses by dissociating allocentric and retinocentric reference frames via a head tilt manipulation combined with electroencephalography (EEG). Nineteen participants completed between 1728 and 2880 trials during which they briefly viewed (150 ms) and then remembered (1500 ms) a randomly oriented target grating. In interleaved blocks of trials, the participant’s head was either kept upright, or tilted by 45º using a custom rotating chinrest.

The target orientation could be decoded throughout the trial (using both voltage and alpha-band signals) when training and testing within head-upright blocks, and within head-tilted blocks. Importantly, we directly addressed the question of reference frames via cross-generalized decoding: if target orientations are represented in a retinocentric reference frame, a decoder trained on head-upright trials would predict a 45º offset in decoded orientation when tested on head-tilted trials (after all, a vertical building becomes diagonal on the retina after head tilt). Conversely, if target representations are allocentric and anchored to the real world, no such offset should be observed.

Our analyses reveal that from the earliest stages of perceptual processing all the way throughout the delay, orientations are represented in between an allocentric and retinocentric reference frame. These results align with previous findings from physiology studies in non-human primates, and are the first to demonstrate that the human brain does not rely on a purely allocentric or retinocentric reference frame when representing visual information.
Competing Interest Statement: The authors have declared no competing interest. Funding: NIH Common Fund (https://ror.org/001d55x84), NEI R01-EY025872, NIMH R01-MH087214
www.biorxiv.org
January 21, 2026 at 12:45 PM
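The cross-generalized decoding logic in that abstract (train a decoder on head-upright trials, test it on head-tilted trials, and look for a 45º shift) can be illustrated with a toy simulation. This is a sketch of the general idea only, not the authors' pipeline: the channel model, noise level, and linear decoder are all my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_trials = 32, 400
W = rng.normal(size=(n_channels, 2))  # each channel mixes sin/cos orientation tuning

def responses(thetas, retinal_offset=0.0):
    """Simulated channel responses; orientation is 180º-periodic, hence the 2*theta."""
    retinal = thetas + retinal_offset
    feats = np.c_[np.sin(2 * retinal), np.cos(2 * retinal)]
    return feats @ W.T + 0.1 * rng.normal(size=(len(thetas), n_channels))

thetas = rng.uniform(0, np.pi, n_trials)
X_upright = responses(thetas)                          # upright: retinal == world
X_tilted = responses(thetas, retinal_offset=np.pi / 4) # toy *retinocentric* code, 45º head tilt

# Train a linear decoder of (sin, cos) of world orientation on upright trials only.
Y = np.c_[np.sin(2 * thetas), np.cos(2 * thetas)]
B, *_ = np.linalg.lstsq(X_upright, Y, rcond=None)

# Test on tilted trials: a purely retinocentric code yields a ~45º decoding offset;
# a purely allocentric code would yield ~0º.
pred = X_tilted @ B
decoded = 0.5 * np.arctan2(pred[:, 0], pred[:, 1])
offset_deg = np.degrees(np.angle(np.mean(np.exp(2j * (decoded - thetas)))) / 2)
```

The preprint's in-between result would correspond to an offset reliably between these two extremes, rather than at either one.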
Reposted by Heleen Slagter
Here’s a thought that might make you tilt your head in curiosity: With every movement of your eyes, head, or body, the visual input to your eyes shifts! Nevertheless, it doesn't feel like the world suddenly tilts sideways whenever you tilt your head. How can this be? TWEEPRINT ALERT! 🚨🧵 1/n
a husky puppy is laying on the floor with its tongue out and wearing a blue collar .
media.tenor.com
January 21, 2026 at 12:28 PM
Reposted by Heleen Slagter
Thanks to The Transmitter for reaching out to me for comments on this methodological challenge to lesion network mapping. Scientific debate is critical to methodological advancement - so let the debate begin!

www.thetransmitter.org/brain-imagin...
Methodological flaw may upend network mapping tool
The lesion network mapping method, used to identify disease-specific brain networks for clinical stimulation, produces a nearly identical network map for any given condition, according to a new study.
www.thetransmitter.org
January 18, 2026 at 2:00 AM
Reposted by Heleen Slagter
With electrodes on or in the motor cortex, BCIs can already achieve amazing results, but what if this is exactly the damaged part of the brain? We propose to integrate intentions into BCI frameworks using ideomotor theory: linkinghub.elsevier.com/retrieve/pii/S1364661325003523
January 15, 2026 at 9:49 AM
Reposted by Heleen Slagter
Wanna know how to infer the presence or absence of consciousness in artificial systems? Check out my new preprint: philarchive.org/rec/WIEITP #PhilMind #PhilConsc #Consciousness
Wanja Wiese, Inferring the presence (or absence) of consciousness in artificial systems - PhilArchive
How should we assess which artificial systems could be conscious? Given uncertainty about the nature and distribution of consciousness, it is promising to look for indicators of consciousness that pro...
philarchive.org
January 16, 2026 at 1:44 PM
Reposted by Heleen Slagter
📆 updated for 2026!

list of summer schools & short courses in the realm of (computational) neuroscience or data analysis of EEG / MEG / LFP: 🔗 docs.google.com/spreadsheets...
various computational neuroscience / MEEG / LFP short courses and summer schools
docs.google.com
December 19, 2025 at 4:37 PM
Reposted by Heleen Slagter
still one of the best explanations of principal component analysis (pca), explained at different levels from layperson to the more math-inclined stats.stackexchange.com/a/140579/132...
Making sense of principal component analysis, eigenvectors & eigenvalues
In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly li...
stats.stackexchange.com
January 13, 2026 at 3:51 PM
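For anyone who wants the mechanics alongside that explanation, here is a minimal PCA sketch via eigendecomposition of the covariance matrix (NumPy only; the data and variable names are illustrative, not from the linked answer):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data with very different variance along each axis (std ~3, ~1, ~0.1).
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)                  # 1. center the data
C = Xc.T @ Xc / (len(Xc) - 1)            # 2. sample covariance matrix
evals, evecs = np.linalg.eigh(C)         # 3. eigendecomposition (eigh: ascending order)
order = np.argsort(evals)[::-1]          # 4. sort components by explained variance
evals, evecs = evals[order], evecs[:, order]
scores = Xc @ evecs                      # 5. project data onto the principal axes
```

The eigenvalues are exactly the variances of the projected scores, which is the link between "directions of maximal variance" and "eigenvectors of the covariance matrix" that the linked answer builds up intuitively.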
Reposted by Heleen Slagter
I am happy to share that our preprint “𝗪𝗼𝗿𝗸𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗖𝗶𝗿𝗰𝘂𝗹𝗮𝗿 𝗗𝗮𝘁𝗮: 𝗔 𝗧𝘂𝘁𝗼𝗿𝗶𝗮𝗹 𝗳𝗼𝗿 𝗖𝗼𝗴𝗻𝗶𝘁𝗶𝘃𝗲 𝗮𝗻𝗱 𝗕𝗲𝗵𝗮𝘃𝗶𝗼𝗿𝗮𝗹 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵” is now out.

Huge thanks to @bayslab.org, Julie de Falco, Zahara, @cjungerius.bsky.social, @ivntmc.bsky.social, Adam, and Xiaolu for the lovely collaboration.

doi.org/10.31234/osf...
OSF
doi.org
January 12, 2026 at 2:43 PM
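As a taste of why circular data need dedicated tools (I have not seen the tutorial's code; this NumPy sketch is my own illustration): the arithmetic mean of angles breaks at the wrap-around point, while the vector-based circular mean does not.

```python
import numpy as np

def circular_mean(angles_rad):
    """Mean direction: average the unit vectors, then take the angle of the resultant."""
    return np.angle(np.mean(np.exp(1j * np.asarray(angles_rad))))

# Two angles that both point near 0º:
a = np.radians([350.0, 10.0])
np.degrees(a.mean())           # ~180.0 -- arithmetic mean points the opposite way
np.degrees(circular_mean(a))   # ~0.0   -- circular mean gets it right
```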
Reposted by Heleen Slagter
We are officially in search of a postdoc to join the Visual Attention Lab at BWH, with an affiliation with HMS, under PI Dr. Jeremy Wolfe!

Please see attached link for more details and post around! We are excited to hear from you!

massgeneralbrigham.wd1.myworkdayjobs.com/MGBExternal/...
Postdoctoral Fellow Visual Attention Lab
Site: The Brigham and Women's Hospital, Inc. Mass General Brigham relies on a wide range of professionals, including doctors, nurses, business people, tech experts, researchers, and systems analysts t...
massgeneralbrigham.wd1.myworkdayjobs.com
January 12, 2026 at 3:19 PM
Reposted by Heleen Slagter
Recently posted: A video of this talk, intended to trigger conversations.

Please consider watching as a group, followed by a discussion of next steps for psychiatry-related research with your community. Discussions like these are crucial for progress.

mediacentral.princeton.edu/media/Nicole...
Nicole Rust
Nicole Rust, Director of the Visual Memory Lab, Department of Psychology, University of Pennsylvania. "A New-New Intellectual Framework for Psychiatry". In 1998, Eric Kandel published the brilliant essay...
mediacentral.princeton.edu
January 13, 2026 at 7:40 AM
Reposted by Heleen Slagter
New preprint: Confidence-accuracy dissociations in perceptual decision making. A review I was supposed to write 3 years ago for my VSS Young Investigator Award. Better late than never 😅 I tried to organize the literature and explore the likely mechanisms. Feedback welcome!

osf.io/preprints/ps...
OSF
osf.io
January 13, 2026 at 6:13 PM
Reposted by Heleen Slagter
How does the brain generate predictive models of one's own actions?

We will soon open a **Postdoc position** to address this question in my lab. If you are interested, please write to moritz.wurm@unitn.it.
January 13, 2026 at 11:13 AM