3+2-year, 100% TV-L 13 position in '26: open topic at the intersection of combined EEG and eye tracking, statistical methods, cognitive modelling, VR/mobile EEG, vision ...
Apply via the Max Planck IMPRS-IS program by 2025-11-16: imprs.is.mpg.de
Read: www.s-ccs.de/philosophy
I'm open to discussing the topic, but of course there are many possible ideas!!
In a new paper @jocn.bsky.social, we show that pupil size independently tracks breadth and load.
doi.org/10.1162/JOCN...
Say hi, and show your colleagues that you're one of the dedicated ones by getting up early on the last day!
I'll show data demonstrating that synesthetic perception is perceptual, automatic, and effortless.
Join my talk (Thursday, early morning, Color II session) to learn how the qualia of synesthesia can be inferred from pupil size.
Join and/or say hi!
@attentionlab.bsky.social @ecvp.bsky.social
Preprint for more details: www.biorxiv.org/content/10.1...
It depends, of course, but our guidelines help you navigate this in an informed way.
Out now in BRM (free) doi.org/10.3758/s134...
@psychonomicsociety.bsky.social
www.rug.nl/about-ug/wor...
Congratulations, Dr. Alex, super proud of your achievements!!!
The dissertation is available here: doi.org/10.33540/2960
Open Access link: doi.org/10.3758/s134...
Presentations today!
R2, 15:00
@chrispaffen.bsky.social:
Functional processing asymmetries between nasal and temporal hemifields during interocular conflict
R1, 17:15
@dkoevoet.bsky.social:
Sharper Spatially-Tuned Neural Activity in Preparatory Overt than in Covert Attention
We show: EEG decoding dissociates preparatory overt from covert attention at the population level:
doi.org/10.1101/2025...
OK, I might be biased, but still: reading out different cognitive strategies by combining EEG data, our reaction-time decomposition method, and a deep learning sequence model is super cool!
Congrats Rick for your first preprint 🥳
If you decide to click on this URL too quickly, you might just skip a cognitive operation! We combine Hidden Multivariate Pattern analysis (HMP) and deep learning (Mamba/state space models) to detect cognitive operations at the single-trial level from EEG data.
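The core idea (segmenting a single-trial multivariate EEG epoch into successive cognitive operations by locating transient multivariate patterns in time) can be sketched with a toy example. This is NOT the authors' HMP/Mamba pipeline; the template-matching detector, array shapes, and simulated "events" below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_t = 32, 600                 # channels x samples, one toy trial

# Two simulated "cognitive events": each is a brief burst of a fixed
# channel topography at some latency, buried in noise.
topos = rng.standard_normal((2, n_ch))
true_onsets = [150, 400]
burst = np.hanning(50)              # 50-sample transient time course

eeg = 0.5 * rng.standard_normal((n_ch, n_t))
for topo, t0 in zip(topos, true_onsets):
    eeg[:, t0:t0 + 50] += np.outer(topo, burst)

def detect_onset(data, topo, width=50):
    """Project the data onto a topography template and cross-correlate
    with the expected burst shape; return the best-matching onset.
    A crude stand-in for HMP's probabilistic onset estimation."""
    proj = topo @ data                                   # (n_t,)
    score = np.convolve(proj, np.hanning(width)[::-1], mode="valid")
    return int(np.argmax(score))

est = [detect_onset(eeg, t) for t in topos]
print(est)   # each estimate lands within a few samples of its true onset
```

The real method replaces these fixed templates with learned patterns and a sequence model over time, but the trial-level question is the same: when does each operation start?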
eLife's digest:
elifesciences.org/digests/9776...
& the 'convincing & important' paper:
elifesciences.org/articles/97760
I consider this my coolest ever project!
#VisionScience #Neuroscience
We present two very large eye-tracking datasets of museum visitors (4-81 y.o.!) who free-viewed (n=1248) or searched for a +/x (n=2827) in a single feature-rich image.
We invite you to (re)use the dataset and provide suggestions for future versions 📋
osf.io/preprints/os...
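If you want a feel for reusing tabular fixation data like this, a minimal pandas sketch might look as follows. The column names here are invented for illustration; check the preprint/OSF repository for the dataset's actual file layout.

```python
import pandas as pd

# Toy stand-in for a fixation table (one row per fixation).
# Real column names/units will differ -- see the OSF materials.
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3],
    "age":         [7, 7, 34, 34, 61],
    "task":        ["freeview", "freeview", "search", "search", "search"],
    "fix_dur_ms":  [220, 180, 250, 140, 300],
})

# Per-task fixation count and mean duration.
summary = (df.groupby("task")["fix_dur_ms"]
             .agg(n_fixations="count", mean_dur="mean"))
print(summary)
```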
Pupillometry typically struggles with complex stimuli. We introduce a method to study covert attention allocation in complex video stimuli:
effects of top-down attention, bottom-up attention, and pseudoneglect could all be recovered.
doi.org/10.1111/psyp.70036
Using the pupil light response, our paper shows that presaccadic attention shifts both upward and downward.
doi.org/10.1111/psyp.70047
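The logic behind pupil-light-response measures of covert attention: the pupil constricts more when attention covertly lands on a bright region than on a dark one, so mean pupil size can reveal the attended location without eye movements. Here is a toy simulation of that inference; the luminances, gain, and noise level are made-up numbers, not the paper's model or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

lum = {"bright": 0.9, "dark": 0.1}   # hypothetical region luminances

def pupil_trace(attended, n=200, baseline=5.0, gain=1.2):
    """Simulated pupil samples: attending a brighter region constricts
    the pupil (smaller values), plus measurement noise."""
    return baseline - gain * lum[attended] + 0.3 * rng.standard_normal(n)

def infer_attended(trace, baseline=5.0, gain=1.2):
    """Invert the toy model: pick the luminance that best explains the
    observed mean pupil size."""
    m = trace.mean()
    return min(lum, key=lambda k: abs((baseline - gain * lum[k]) - m))

print(infer_attended(pupil_trace("bright")))  # "bright"
print(infer_attended(pupil_trace("dark")))    # "dark"
```

With enough samples the two means separate cleanly, which is what makes the pupil a usable covert "attention pointer".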
academic.oup.com/brain/articl...