Rosanne Rademaker
@rademaker.bsky.social
Max Planck group leader at ESI Frankfurt | human cognition, fMRI, MEG, computation | sciences with the coolest (PhD) students et al. | she/her
Pinned
Here’s a thought that might make you tilt your head in curiosity: With every movement of your eyes, head, or body, the visual input to your eyes shifts! Nevertheless, it doesn't feel like the world suddenly tilts sideways whenever you tilt your head. How can this be? TWEEPRINT ALERT! 🚨🧵 1/n
[GIF: a husky puppy lying on the floor with its tongue out, wearing a blue collar]
Reposted by Rosanne Rademaker
Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14
February 10, 2026 at 3:56 PM
Reposted by Rosanne Rademaker
New preprint with @SamJung @timbrady.bsky.social and @violastoermer.bsky.social: osf.io/preprints/ps.... Here we uncover what might be driving the “meaningfulness benefit” in visual working memory. Studies show that real objects are remembered better in VWM tasks than abstract stimuli. But why? 1/
February 9, 2026 at 9:06 PM
The best part though: Working with amazing graduate student *Maria Servetnik* from our lab, who did all the heavy lifting. Not to mention lots of inspiration & input from @mjwolff.bsky.social. I am one very lucky PI 😊 n/n
January 21, 2026 at 12:47 PM
Check out our *preprint* for some cool correlations with behavior (for oblique effect fans). For now, I’m just happy that these fun data are out in the world. It’s been a minute since Chaipat Chunharas & I ventured to dissociate allocentric and retinocentric reference frames (7+ years ago?! 🤫)... 10/n
Visual representations in the human brain rely on a reference frame that is in between allocentric and retinocentric coordinates
Visual information in our everyday environment is anchored to an allocentric reference frame – a tall building remains upright even when you tilt your head, which changes the projection of the building on your retina from a vertical to a diagonal orientation. Does retinotopic cortex represent visual information in an allocentric or retinocentric reference frame? Here, we investigate which reference frame the brain uses by dissociating allocentric and retinocentric reference frames via a head tilt manipulation combined with electroencephalography (EEG). Nineteen participants completed 1728–2880 trials during which they briefly viewed (150 ms) and then remembered (1500 ms) a randomly oriented target grating. In interleaved blocks of trials, the participant’s head was either kept upright, or tilted by 45° using a custom rotating chinrest. The target orientation could be decoded throughout the trial (using both voltage and alpha-band signals) when training and testing within head-upright blocks, and within head-tilted blocks. Importantly, we directly addressed the question of reference frames via cross-generalized decoding: If target orientations are represented in a retinocentric reference frame, a decoder trained on head-upright trials would predict a 45° offset in decoded orientation when tested on head-tilted trials (after all, a vertical building becomes diagonal on the retina after head tilt). Conversely, if target representations are allocentric and anchored to the real world, no such offset should be observed. Our analyses reveal that from the earliest stages of perceptual processing all the way throughout the delay, orientations are represented in between an allocentric and retinocentric reference frame. These results align with previous findings from physiology studies in non-human primates, and are the first to demonstrate that the human brain does not rely on a purely allocentric or retinocentric reference frame when representing visual information.
www.biorxiv.org
January 21, 2026 at 12:45 PM
No matter the exact time point, no matter how we quantified the shift, no matter if we looked at decoding or at representational geometry – the reference frame used by the brain to represent orientations was always smack dab in between retinocentric and allocentric. 9/n
January 21, 2026 at 12:41 PM
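To make that quantification concrete, here is a minimal sketch (illustrative only, not the authors' analysis code) of one simple way to measure a signed shift between decoded and true orientations: the circular mean of the errors, computed on the 180-degree circle that orientations live on.

```python
# Hedged sketch (not the authors' code): quantify a reference-frame shift
# as the circular mean of decoding errors on the 180-degree orientation circle.
import numpy as np

def circular_shift_deg(decoded, true):
    """Mean signed offset (deg) between decoded and true orientations."""
    err = np.deg2rad(2 * (np.asarray(decoded) - np.asarray(true)))  # wrap at 180
    return np.rad2deg(np.angle(np.mean(np.exp(1j * err)))) / 2

# Errors clustering near +22.5 deg would land halfway between the
# allocentric (0 deg) and retinocentric (45 deg) predictions:
print(circular_shift_deg([23.0, 21.5, 112.0], [0.0, 0.0, 90.0]))  # ~22.2
```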
Well, throughout perception (when the orientation is on the screen) as well as the entire memory delay (when the orientation is held in mind), we discovered a reference frame that is in between retinocentric and allocentric coordinates! 8/n
January 21, 2026 at 12:35 PM
Conversely, if representations are allocentric and anchored to the real world, no such shift should be observed. In other words: Cross-generalized decoding to the rescue! If you had to guess… What reference frame do you think visual cortex uses for visual processing? 7/n
[GIF: two German shepherds lying on the floor in front of a fireplace]
January 21, 2026 at 12:35 PM
The trick? If orientations are represented in a retinocentric reference frame, a decoder trained on head-upright trials would predict a 45° shift in decoded orientation when tested on head-tilted trials (after all, a vertical building becomes diagonal on the retina after head tilt). 6/n
January 21, 2026 at 12:34 PM
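Here is a minimal end-to-end sketch of that cross-generalization logic on simulated, EEG-like data. Everything in it (the `simulate_trials` helper, channel counts, noise levels, the encoding model) is a hypothetical illustration of the idea, not the authors' pipeline: train a linear orientation decoder on "upright" trials, test it on "tilted" trials, and read off the shift.

```python
# Minimal sketch of cross-generalized decoding on simulated data
# (hypothetical shapes and encoding model, not the authors' pipeline).
import numpy as np

rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 32))                       # fixed "channel tuning"

def simulate_trials(n_trials, frame_shift_deg):
    """Fake trial patterns whose neural code is shifted by frame_shift_deg."""
    ori = rng.uniform(0, 180, n_trials)                    # orientation on screen
    theta = np.deg2rad(2 * (ori + frame_shift_deg))        # 180-deg-periodic code
    X = np.column_stack([np.cos(theta), np.sin(theta)]) @ basis
    return X + 0.5 * rng.standard_normal(X.shape), ori

def fit_decoder(X, ori):
    """Least-squares map from patterns to (cos, sin) of the orientation labels."""
    theta = np.deg2rad(2 * ori)
    W, *_ = np.linalg.lstsq(X, np.column_stack([np.cos(theta), np.sin(theta)]),
                            rcond=None)
    return W

def decode(X, W):
    pred = X @ W
    return np.rad2deg(np.arctan2(pred[:, 1], pred[:, 0])) / 2 % 180

# Train on "head upright" (no shift), test on "head tilted". A purely
# retinocentric code would shift by the full head tilt (45 deg), a purely
# allocentric code by 0 deg; here we simulate an in-between code of 22.5.
X_up, ori_up = simulate_trials(2000, frame_shift_deg=0.0)
X_tilt, ori_tilt = simulate_trials(2000, frame_shift_deg=22.5)

W = fit_decoder(X_up, ori_up)
err = (decode(X_tilt, W) - ori_tilt + 90) % 180 - 90       # wrap to [-90, 90)
shift = np.rad2deg(np.angle(np.mean(np.exp(1j * np.deg2rad(2 * err))))) / 2
print(f"estimated reference-frame shift: {shift:.1f} deg")  # ~22.5
```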
Now, even if the pattern *completely shifts* with head tilt, standard decoding (training and testing within the same condition) can only ever infer the exact same labels! After all, we as researchers do not know the underlying shift, only the orientation (and hence the label) that was on the screen. 5/n
January 21, 2026 at 12:33 PM
We want to decode visual orientation from the EEG signal to uncover the reference frame used by the brain. But we have a problem… A decoder only learns the association between a label (e.g., 45°) and a pattern of brain activity. Presented with a new pattern of activity, the label is inferred. 4/n
January 21, 2026 at 12:32 PM
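And a companion sketch of the problem described in posts 4–5/n, again on simulated data with hypothetical shapes (not the authors' code): when training and testing share the same (here, deliberately shifted) code, decoding succeeds and the hidden shift never surfaces.

```python
# Companion sketch: train AND test within the same "tilted" condition,
# whose simulated code is hidden-shifted by a full 45 deg (retinocentric).
import numpy as np

rng = np.random.default_rng(1)
n, hidden_shift = 2000, 45.0

ori = rng.uniform(0, 180, n)                          # labels = screen orientation
theta = np.deg2rad(2 * (ori + hidden_shift))          # the brain's shifted code
X = np.column_stack([np.cos(theta), np.sin(theta)]) @ rng.standard_normal((2, 32))
X += 0.5 * rng.standard_normal(X.shape)

# Fit on the first half, evaluate on the second half of the SAME condition.
phi = np.deg2rad(2 * ori)
Y = np.column_stack([np.cos(phi), np.sin(phi)])
W, *_ = np.linalg.lstsq(X[:n // 2], Y[:n // 2], rcond=None)
pred = X[n // 2:] @ W
dec = np.rad2deg(np.arctan2(pred[:, 1], pred[:, 0])) / 2 % 180

err = (dec - ori[n // 2:] + 90) % 180 - 90
print(f"mean |decoding error|: {np.mean(np.abs(err)):.1f} deg")  # small
# Decoding "works", yet the hidden 45-deg shift is invisible: the decoder
# simply relearned the shifted pattern-label association. Hence the trick in 6/n.
```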
Do visual parts of the brain represent visual information in an allocentric or retinocentric reference frame? We used a simple orientation recall task while measuring electroencephalography (EEG) signals from human visual cortex. People had their heads upright 😀 or tilted 🫠! 3/n
January 21, 2026 at 12:31 PM
Visual information in our environment is anchored to an allocentric reference frame – a tall building remains upright even when you tilt your head. But head tilt changes the retinal projection of the building from vertical to diagonal. The building is diagonal in a retinocentric reference frame. 2/n
January 21, 2026 at 12:29 PM
Here’s a thought that might make you tilt your head in curiosity: With every movement of your eyes, head, or body, the visual input to your eyes shifts! Nevertheless, it doesn't feel like the world suddenly tilts sideways whenever you tilt your head. How can this be? TWEEPRINT ALERT! 🚨🧵 1/n
[GIF: a husky puppy lying on the floor with its tongue out, wearing a blue collar]
January 21, 2026 at 12:28 PM
As someone who once tried to recruit Natalie, I can of course only recommend hiring this extremely smart scientist!!
main goal for this year: find a new job! 🙂

looking for a role with fun & complex technical challenges within a great community. my main expertise is in signal processing/EEG/MEG, but topic-wise I am quite flexible.

science/industry both great! starting mid-year. nschawor.github.io/cv
January 16, 2026 at 11:44 AM
Reposted by Rosanne Rademaker
🌍 Applications are open! The IBRO Exchange Fellowships give early career #neuroscientists the opportunity to conduct lab visits with several expenses covered during the exchange.

🗓 Apply by 15 Apr: https://ibro.org/grant/exchange-fellowships/

#grant #IBROinAsiaPacific #IBROinUSCanada #IBROinAfrica #IBROinLatAm
January 15, 2026 at 12:01 PM
Reposted by Rosanne Rademaker
#BrainMeeting 🧠 Alert! 🎺

This Friday, January 16th, the Brain Meeting speaker will be Janneke Jehee giving a talk entitled "Uncertainty in perceptual decision-making"

In person or online. For more information:
www.fil.ion.ucl.ac.uk/event
January 12, 2026 at 9:14 AM
Reposted by Rosanne Rademaker
Please spread the word🔊My lab is looking to hire two international postdocs. If you want to do comp neuro, combine machine learning and awesome math to understand neural circuit activity, then come work with us! Bonn is such a cool place for neuroscience now, you don't want to miss out.
January 10, 2026 at 5:39 PM
Reposted by Rosanne Rademaker
New preprint from the lab!
January 6, 2026 at 9:53 PM
Reposted by Rosanne Rademaker
What if we could tell you how well you’ll remember your next visit to your local coffee shop? ☕️

In our new Nature Human Behaviour paper, we show that the 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝗼𝗳 𝗮 𝘀𝗽𝗮𝘁𝗶𝗮𝗹 𝗿𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 can be measured with neuroimaging – and 𝘁𝗵𝗮𝘁 𝘀𝗰𝗼𝗿𝗲 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝘀 𝗵𝗼𝘄 𝘄𝗲𝗹𝗹 𝗻𝗲𝘄 𝗲𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲𝘀 𝘄𝗶𝗹𝗹 𝘀𝘁𝗶𝗰𝗸.
January 5, 2026 at 6:43 PM
This is very cool! The link between spikes and LFPs is something that comes up frequently in our (human neuroimaging) lab. Nice to learn more about it!
January 5, 2026 at 8:47 PM
Reposted by Rosanne Rademaker
The ADAM lab is hiring a Research Specialist to join us! This role involves conducting human subjects research (EEG experiments on attention + working memory) and assisting with the execution and administration of ongoing projects.

Job posting: emdz.fa.us2.oraclecloud.com/hcmUI/Candid...
Research Specialist
The Attention, Distractions, and Memory (ADAM) Lab at Rice University is recruiting a full-time Research Specialist (Research Specialist I). The ADAM Lab (PI: Kirsten Adam) conducts cognitive neurosci...
emdz.fa.us2.oraclecloud.com
January 2, 2026 at 3:21 PM
Reposted by Rosanne Rademaker
@shansmann-roth.bsky.social and I finally finished our paper confirming a unique prediction of the Demixing Model (DM): inter-item biases in #visualworkingmemory depend on the _relative_ noise of targets and non-targets, potentially going in opposing directions. 🧵1/9
www.biorxiv.org/content/10.6...
Noise in Competing Representations Determines the Direction of Memory Biases
Our memories are reconstructions, prone to errors. Historically treated as a mere nuisance, memory errors have recently gained attention when found to be systematically shifted away from or towards no...
www.biorxiv.org
December 26, 2025 at 4:39 PM
Reposted by Rosanne Rademaker
The neural basis of working memory has been debated. What we like to call “The Standard Model” of working memory posits that persistent discharges generated by neurons in the prefrontal cortex constitute the neural correlate of working memory (2/10)
December 29, 2025 at 2:41 PM
Reposted by Rosanne Rademaker
🚨 New paper in @pnas.org to end 2025 with a bang!🚨

Behavioral, experiential, and physiological signatures of mind blanking
www.pnas.org/doi/10.1073/...

with Esteban Munoz-Musat, @arthurlecoz.bsky.social @corcorana.bsky.social, Laouen Belloli and Lionel Naccache

Illustration: Ana Yael.

1/n
December 29, 2025 at 10:11 AM
Reposted by Rosanne Rademaker
No exciting plans for year-end yet?

Why not gear up for your next grant proposal? 💸

Check out our website for recurring and one-time funding lines, awards, and programs! 👉 bernstein-network.de/en/newsroom/...
December 23, 2025 at 8:01 AM