Christoph Strauch
@cstrauch.bsky.social
Assistant professor @utrechtuniversity.bsky.social studying spatial attention, eye-movements, pupillometry, and more. Co-PI @attentionlab.bsky.social
#ECVP2025 starts with a fully packed room!

I'll show data demonstrating that synesthetic perception is perceptual, automatic, and effortless.
Join my talk (Thursday, early morning, Color II) to learn how the qualia of synesthesia can be inferred from pupil size.
Join and/or say hi!
August 24, 2025 at 4:28 PM
Gaze heatmaps are popular, especially among eye-tracking beginners and in many applied domains. How many participants should be tested?
It depends, of course, but our guidelines help you navigate this in an informed way (toy sketch below).

Out now in BRM (free) doi.org/10.3758/s134...
@psychonomicsociety.bsky.social
July 29, 2025 at 7:37 AM
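Not from the paper, just a toy sketch of the kind of question the guidelines address, under simple assumptions (hypothetical fixation data, a Gaussian-smoothed heatmap, heatmap-to-heatmap correlation as the similarity measure): how similar do heatmaps from two independent groups of N participants become as N grows?

```python
# Toy sketch (not the paper's procedure): how similar are gaze heatmaps
# built from two independent groups of N participants as N grows?
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
W, H, SIGMA = 120, 90, 3                      # hypothetical screen grid & smoothing

def heatmap(fixations):
    """Gaussian-smoothed fixation density over a W x H grid."""
    grid = np.zeros((H, W))
    for x, y in fixations:
        grid[int(y), int(x)] += 1
    return gaussian_filter(grid, SIGMA)

# Hypothetical data: 40 participants, 50 fixations each, clustered near the centre.
participants = [
    np.column_stack([
        np.clip(rng.normal(W / 2, W / 6, 50), 0, W - 1),
        np.clip(rng.normal(H / 2, H / 6, 50), 0, H - 1),
    ])
    for _ in range(40)
]

for n in (2, 5, 10, 20):
    map_a = heatmap(np.vstack(participants[:n]))
    map_b = heatmap(np.vstack(participants[n:2 * n]))
    r = np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
    print(f"two groups of N = {n:2d}: heatmap correlation r = {r:.2f}")
```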
We often rely on the external world rather than fully loading memory – at least if external information is easy to access.

@tianyingq.bsky.social's new PBR meta-analysis across 28 experiments shows: increases in access cost reliably push toward internal WM use.

doi.org/10.3758/s134...

#VisionScience 🧪
December 5, 2024 at 8:09 AM
Do you care about pupillometry? We need to rewrite its history in psychology. Now in TINS:

A rediscovery of the forgotten early wave of pupillometry research – effort, covert attention, imagery all made visible by 1900 in a fascinating literature.
authors.elsevier.com/a/1jKdtbotq3...
June 27, 2024 at 3:22 PM
How is the intensity of tactile stimulation processed?
Pupils index the intensity of tactile stimulation (same location) and different sensitivities across body parts (same intensity) – one might build a homunculus model with it.

Out in Psychophysiology
doi.org/10.1111/psyp...
February 20, 2024 at 8:43 AM
Indeed, participants made fewer saccades and even cut out especially costly saccade directions. This means that we actively weigh costs and flexibly adjust eye-movement behavior in light of task demands!
February 8, 2024 at 8:24 AM
Are the cognitive resources used for higher-level tasks the same as those used for eye movements? The same cost? If so, eye movements should change as soon as cognitive demand (or available resources) changes! To test this, participants counted or ignored auditory numbers.
February 8, 2024 at 8:23 AM
Does this principle play a role in natural settings? We checked during search in natural scenes (2 experiments), whilst throwing every possible control variable at the effect. Whatever we did, saccade costs predicted where participants looked.
February 8, 2024 at 8:23 AM
Cost minimization should show up as a negative correlation between these maps. Participants indeed preferred saccading to affordable directions! This means: saccade costs drive where we look! (Toy sketch below.)
February 8, 2024 at 8:22 AM
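The analysis described here can be pictured with a toy example (illustrative numbers only, not data from the study): a cost map of mean pre-saccadic pupil size per direction and a preference map of how often each direction was chosen; cost minimization predicts a negative correlation.

```python
# Illustrative numbers only (not data from the study): cost map vs. preference
# map across eight saccade directions; cost minimization predicts rho < 0.
import numpy as np
from scipy.stats import spearmanr

directions = ["right", "upper-right", "up", "upper-left",
              "left", "lower-left", "down", "lower-right"]

cost = np.array([0.10, 0.30, 0.22, 0.28, 0.11, 0.35, 0.24, 0.31])        # pupil dilation (a.u.)
preference = np.array([0.22, 0.07, 0.12, 0.08, 0.21, 0.05, 0.17, 0.08])  # choice proportions

rho, p = spearmanr(cost, preference)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # negative: cheaper directions are preferred
```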
Now to where people prefer to look. Same participants, very similar task: participants had to choose between any two of the previous directions, with no further instruction. Horizontal saccades were preferred; saccades to the bottom left less so!
February 8, 2024 at 8:22 AM
Efficient organisms minimize cost & weigh it: saccade costs should predict where we look! We measured pupil size just before saccades across directions: larger pupil = larger cost! Costs differ quite a bit: horizontal is affordable, diagonal is costly.
February 8, 2024 at 8:21 AM
Where do we look?
It’s not only what we see, it’s the most frequent human decision. We show that the predictors saliency, goals & scene knowledge need extension:
Eye movements are tuned to minimize effort that comes with planning & making them.
Check 🧵 & www.biorxiv.org/content/10.1...
February 8, 2024 at 8:21 AM
All those steps together show how Open-DPSM outperforms existing work. Best thing: you can just download and run it (.exe) or use its Python functions. Full code and documentation online: github.com/caiyuqing/Op...

Very proud of Yuqing for her first PhD paper.
December 11, 2023 at 8:29 PM
3) We weight the contribution of visual events differently across visual-field regions – regions in the fovea drive pupil responses more strongly than regions in the periphery
December 11, 2023 at 8:27 PM
1) Visual events are extracted between frames – Open-DPSM uses gaze position & movie frames to extract visual changes across the visual field. 2) Visual events are multiplied with a response function (how the pupil responds to changes). Brightness and contrast are modeled separately. (A rough conceptual sketch follows below.)
December 11, 2023 at 8:26 PM
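Read together with step 3 above, the idea could be sketched roughly like this. This is a conceptual toy model, not the Open-DPSM code: it only models luminance change (no separate contrast channel), folds the foveal weighting into the event signal, and combines events with an assumed response function by convolution.

```python
# Conceptual toy model (not the Open-DPSM code): gaze-contingent visual events,
# weighted more strongly near the fovea, drive a sluggish pupil response function.
import numpy as np

FPS = 30  # assumed video frame rate

def visual_events(frames, gaze_xy, fovea_radius=50, fovea_weight=3.0):
    """Mean frame-to-frame luminance change, up-weighted around gaze."""
    h, w = frames[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    events = []
    for t in range(1, len(frames)):
        change = np.abs(frames[t].astype(float) - frames[t - 1].astype(float))
        gx, gy = gaze_xy[t]
        dist = np.hypot(xx - gx, yy - gy)
        weight = np.where(dist < fovea_radius, fovea_weight, 1.0)  # fovea > periphery
        events.append((change * weight).mean())
    return np.array(events)

def pupil_response_function(duration_s=4.0):
    """Sluggish, gamma-like impulse response (illustrative parameters)."""
    t = np.arange(0, duration_s, 1 / FPS)
    h = t ** 2 * np.exp(-t / 0.5)
    return h / h.max()

def predicted_pupil(frames, gaze_xy):
    """Combine the weighted event signal with the response function."""
    return np.convolve(visual_events(frames, gaze_xy), pupil_response_function())

# Hypothetical input: 60 random grayscale frames and a fixed central gaze sample.
frames = [np.random.rand(90, 120) for _ in range(60)]
gaze_xy = [(60, 45)] * 60
print(predicted_pupil(frames, gaze_xy)[:5].round(3))
```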
Pupil size changes are a great marker of attention, BUT this requires very controlled visual input. The new model Open-DPSM (t.ly/_onqK), spearheaded by PhD candidate Yuqing Cai, might help here. #psynomBRM
You can use it easily: load video + eye-tracking data & wait for the model to produce its output
December 11, 2023 at 8:26 PM
Our data also allowed us to check predictions across fixations: models perform differently well over time! New models could incorporate this and make finer predictions, depending on whether early or late fixations are of interest (toy sketch below).
November 18, 2023 at 7:55 AM
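What such a split could look like in a toy example (not the paper's code or metric choice): assume fixations tagged with their ordinal position and score a hypothetical saliency map with NSS, i.e. the z-scored map value at fixated pixels, separately for early and late fixations.

```python
# Toy example (not the paper's code): score one saliency map separately on
# early vs. late fixations with NSS (mean z-scored map value at fixated pixels).
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency at the given (x, y) fixation locations."""
    z = (saliency - saliency.mean()) / saliency.std()
    return np.mean([z[int(y), int(x)] for x, y in fixations])

rng = np.random.default_rng(1)
H, W = 90, 120
saliency = rng.random((H, W))                              # hypothetical model prediction

# Hypothetical fixations as (x, y, ordinal position within a trial).
fixations = [(rng.integers(0, W), rng.integers(0, H), i % 8) for i in range(400)]

early = [(x, y) for x, y, i in fixations if i < 3]         # first three fixations
late = [(x, y) for x, y, i in fixations if i >= 3]         # later fixations

print(f"NSS, early fixations: {nss(saliency, early):.2f}")
print(f"NSS, late fixations:  {nss(saliency, late):.2f}")
```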
Now the things to work on: models made better predictions for women and for people aged 18-29. This reflects, all too well, the benchmark data that models have been trained and tested on. The fix? All our data is freely available: osf.io/sk4fr/ – more data will be added
November 18, 2023 at 7:54 AM
So what did we do? Visitors viewed an image in a museum and donated their data (gaze + age, gender) to us. The good news first: 20/21 saliency models did better than just guessing that people are looking at the middle.
November 18, 2023 at 7:54 AM
Predictions of eye movements when viewing images should work well for everyone. Do they? We studied this with gaze data from >2,000 participants collected in the NEMO science museum in Amsterdam! Out in Communications Psychology.
t.ly/Y1-Ty TL;DR: models do well, especially if you are a psychology student. More:
November 18, 2023 at 7:53 AM