Benedikt Ehinger
@benediktehinger.bsky.social
Comp-#CogSci TT-Prof - follow.me @ @benediktehinger@scholar.social

🧠, #vision, #eyetracking, #cognition, VR/mobile #EEG, methods, design (www.thesis-art.de), teaching & supervising

our lab mainly develops in #julialang
Pinned
S-CCS Lab PhD Position

3+2 year 100% TVL-13 position in '26 - open topic on the intersection of combined EEG-EyeTracking, Statistical Methods, Cognitive Modelling, VR/Mobile EEG, Vision ...

Apply via Max-Planck IMPRS-IS program until 2025-11-16 imprs.is.mpg.de

Read: www.s-ccs.de/philosophy
Reposted by Benedikt Ehinger
> We archived around 86M music files, representing around 99.6% of listens. It’s a little under 300TB. This is the largest music metadata database that is publicly available.

Many interesting 📊 charts on this page that can only be made by having this scale of data.

annas-archive.li/blog/backing...
December 21, 2025 at 11:05 AM
Reposted by Benedikt Ehinger
are you looking for someone with expertise in analysis of neuroimaging data (fMRI/DTI/EEG)? see ⬇️
I’ve mostly kept this space professional, but today I need to share something and ask for your help 🙏🏻

I’ve dedicated my entire career to neuroscience research in Russia. Through challenging times, my colleagues and I have done our best to continue the work we love and believe in...(1/4)
December 19, 2025 at 3:24 PM
Reposted by Benedikt Ehinger
📆 updated for 2026!

list of summer schools & short courses in the realm of (computational) neuroscience or data analysis of EEG / MEG / LFP: 🔗 docs.google.com/spreadsheets...
various computational neuroscience / MEEG / LFP short courses and summer schools
docs.google.com
December 19, 2025 at 4:37 PM
Interesting study, quite related to recent work with @theneurocookies.bsky.social on modelling event durations: direct.mit.edu/imag/article...
The Role of Temporal Factors in Processing Rapid Serial Visual Presentations https://www.biorxiv.org/content/10.64898/2025.12.15.694535v1
December 19, 2025 at 7:01 AM
Reposted by Benedikt Ehinger
I’m excited to share our new paper, “Filling in the Blanks of Ganzfeld Art”, now published in Psychology of Aesthetics, Creativity, and the Arts.
December 17, 2025 at 2:57 PM
Reposted by Benedikt Ehinger
happy that our article about mu & alpha rhythm waveform shape in development is now finally out in the open: doi.org/10.1162/jocn...

oscillation frequency changes across development (one of the most robust findings in the oscillation world). in this work, we also look at waveform shape changes.
June 23, 2025 at 3:36 PM
Reposted by Benedikt Ehinger
If you analyse time-resolved data (M/EEG, iEEG, pupillometry, force recordings…) and feel limited by cluster-based permutation tests (CBPTs), especially when trying to determine when an effect starts or ends, you may want to try our new R package: lnalborczyk.github.io/neurogam/
#rstats #brms #EEG
Modelling time-resolved electrophysiological data with Bayesian generalised additive multilevel models
Providing utility functions for fitting Bayesian generalised additive multilevel models (BGAMMs) to time-resolved data (e.g., M/EEG, pupillometry, mouse-tracking, etc) and identifying clusters.
lnalborczyk.github.io
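For readers new to this model class, here is a minimal, hypothetical sketch of the kind of Bayesian GAMM the post describes, written directly in brms with simulated data; the variable names (eeg_long, amplitude, time, condition, subject) are made up and this is not neurogam's own interface.

```r
## Hypothetical sketch of a Bayesian GAMM for time-resolved data (not neurogam's API)
library(brms)

## Simulated toy data: 10 subjects x 2 conditions x 50 time points (long format)
set.seed(1)
eeg_long <- expand.grid(
  subject   = factor(1:10),
  condition = factor(c("A", "B")),
  time      = seq(0, 0.5, length.out = 50)
)
eeg_long$amplitude <- with(
  eeg_long,
  as.numeric(condition == "B") * sin(2 * pi * 4 * time) + rnorm(nrow(eeg_long), sd = 0.5)
)

## Condition effect that varies smoothly over time, plus subject-level intercepts
fit <- brm(
  amplitude ~ condition + s(time, by = condition) + (1 | subject),
  data   = eeg_long,
  family = gaussian(),
  chains = 4, cores = 4
)

## Inspect the estimated smooths; the posterior difference between conditions over
## time is what lets you ask when an effect credibly starts or ends
conditional_smooths(fit)
```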
December 11, 2025 at 11:38 AM
Reposted by Benedikt Ehinger
Agree with a lot of this. Many neuroscientists think they deserve to be heard by ML, but this has to be earned.

I'd add that my impression is that recently ML research seems to have lost some creative edge, in the race to scale LLMs. Resource constraints might bring the fields closer soon.
Why isn’t modern AI built around principles from cognitive science or neuroscience? Starting a substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question: as part of a first series of posts giving my current thoughts on the relation between these fields. 1/3
Why isn’t modern AI built around principles from cognitive science?
First post in a series on cognitive science and AI
infinitefaculty.substack.com
December 16, 2025 at 8:34 PM
Reposted by Benedikt Ehinger
Modeling Speed–Accuracy Trade-Offs in the Stopping Rule for Confidence Judgments! Now out in #PsychologicalReview (aka we can finally say we do comp models)! Led by @stefherregods.bsky.social @lucvermeylen.bsky.social @pierreledenmat.bsky.social

Paper: desenderlab.com/wp-content/u... Thread ↓↓↓
desenderlab.com
December 16, 2025 at 3:52 PM
Reposted by Benedikt Ehinger
fMRI signals “up,” but neural metabolism might be going “down.”

In our @natneuro.nature.com paper, we demonstrate that about 40% of voxels with robust BOLD responses exhibit opposite oxygen metabolism, revealing two distinct hemodynamic modes.

rdcu.be/eUPO8
funds @erc.europa.eu
#neuroskyence 🧵:
December 16, 2025 at 3:43 PM
Reposted by Benedikt Ehinger
New preprint!
www.biorxiv.org/content/10.6...
As you are silently reading this, you may experience a little voice in your head. How is it represented in the brain, and what purpose does it serve? Our new study answers these questions.

Together with @adriendoerig.bsky.social and Radek Cichy. (1/8)
Auditory representations of words during silent visual reading
Silent visual reading is accompanied by the phenomenological experience of an inner voice. However, the temporal dynamics and functional role of the underlying neural representations remain unclear. H...
www.biorxiv.org
December 15, 2025 at 3:10 PM
Reposted by Benedikt Ehinger
Steve isn’t here anymore, but he always posted this picture on Hanukkah so I thought we could carry on his tradition and remember him.
Happy Hanukkah to our friends who celebrate. Rachel Posner, a rabbi’s wife in Kiel, Germany, took this photograph in 1931 -- a potent reminder that fascism must be fought in every generation, even if it's wrapped in an American flag and a red hat.
December 14, 2025 at 11:45 PM
Reposted by Benedikt Ehinger
The EEGManyPipelines Dataset: Metascientific Data on 168 Independent Analyses of a Single EEG Dataset: https://osf.io/c4xwg
December 3, 2025 at 5:48 PM
Reposted by Benedikt Ehinger
All travelers from ESTA countries (yes the ones on visa waiver programs) will have to disclose 5 years of social media + huge amounts of personal data to enter the US now. All US academic associations should now meet outside the US if we want to meet our international colleagues.
This is INSANE www.nytimes.com/2025/12/09/t...

and these are our closest allies!
December 10, 2025 at 4:27 AM
Reposted by Benedikt Ehinger
Passionate about women's mental health?

Interested in brain stimulation?

Excited by cutting edge neurotech?

Come do a PhD with me!

www.findaphd.com/phds/project...

(thread)
ALT: a woman says we have to work together in front of a Wentworth sign
media.tenor.com
December 4, 2025 at 5:24 PM
Reposted by Benedikt Ehinger
Putting the 'Spotlight' on a recent article by @fannycazettes.bsky.social et al. by discussing how “brain leakage” can expose cognitive computations in bodily movements – in turn opening new ways to track cognition. Now out in @cp-trendscognsci.bsky.social. authors.elsevier.com/a/1mDCZ4sIRv...
authors.elsevier.com
December 8, 2025 at 10:35 AM
I'll probably need a week or so to fully digest this paper.

Amazing work by Mohr, Geuzebroek and @spk3lly.bsky.social on disentangling contributions of V1, V2, and V3 to ERPs+SSVEPs

Beautiful illustrations throughout!

doi.org/10.1162/imag...
December 9, 2025 at 11:01 AM
Can we stop our #EEG experiments after 5 trials?

Thought-provoking paper: www.arxiv.org/abs/2511.23162

tl;dr: on average, fits based on n=5 trials gave better cross-validated R² (cvR²) than fits using 100% of the trials

Kudos to the authors who went far beyond many other EEG papers in terms of additional tests & baselines (openreview.net/forum?id=c6L...)
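In case cvR² is unfamiliar: it is cross-validated R², i.e. the variance explained in held-out data rather than in the data used for fitting. A generic sketch with toy data and a plain linear model (not the paper's pipeline):

```r
## Generic illustration of cross-validated R^2 (cvR^2); toy data, not the paper's code
set.seed(1)
n <- 200
X <- matrix(rnorm(n * 3), n, 3)                     # predictors (e.g. stimulus features)
y <- as.vector(X %*% c(1, -0.5, 0.2) + rnorm(n))    # simulated responses

folds <- sample(rep(1:5, length.out = n))           # 5-fold assignment
pred  <- numeric(n)
for (k in 1:5) {
  train <- folds != k
  fit   <- lm.fit(cbind(1, X[train, ]), y[train])   # fit on the training folds only
  pred[folds == k] <- cbind(1, X[folds == k, ]) %*% fit$coefficients  # predict held-out fold
}

## cvR^2 = 1 - SS_res / SS_tot, both evaluated on the out-of-fold predictions
cvR2 <- 1 - sum((y - pred)^2) / sum((y - mean(y))^2)
cvR2
```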
December 8, 2025 at 8:46 AM
Reposted by Benedikt Ehinger
High-level visual surprise is rapidly integrated during perceptual inference!

🚨 New paper 🚨 out now in @cp-iscience.bsky.social with @paulapena.bsky.social and @mruz.bsky.social

www.cell.com/iscience/ful...

Summary 🧵 below 👇
Rapid computation of high-level visual surprise
Health sciences
www.cell.com
December 5, 2025 at 2:37 PM
Reposted by Benedikt Ehinger
The best brain-machine interface remains the mouth. Evolution spent 4B years of R&D developing the device, so I guess it's not that surprising. Yet it still rarely appears as a baseline in evaluations of new devices.
December 1, 2025 at 1:28 PM
Reposted by Benedikt Ehinger
Everyday visual experience tunes neural processing. Using fixation-related EEG, this new work shows how the N1 preview benefit depends on Chinese readers' prior experience with left-to-right vs. up-down reading. @umaurer.bsky.social

www.authorea.com/doi/full/10....
November 24, 2025 at 12:25 PM
Reposted by Benedikt Ehinger
🚨 New paper at #NeurIPS2025!

A systematic fixation-level comparison of a performance-optimized DNN scanpath model and a mechanistic cognitive model reveals behaviourally relevant mechanisms that can be added to the mechanistic model to substantially improve performance.

🧵👇
November 30, 2025 at 9:24 PM
Reposted by Benedikt Ehinger
Last month, I found out I have hypermobile Ehlers Danlos Syndrome, after decades of chronic pain and medical disinterest.

I've decided to write publicly about this, not just about hypermobility and its health impacts, but about how it feels when doctors don't care:

medium.com/p/4fea6398b8ba
Welcome to my body
After twenty years of pain and repeated medical dead ends, a stranger sent me a message on Instagram. It led to a diagnosis all the doctors…
medium.com
November 26, 2025 at 5:20 PM
Reposted by Benedikt Ehinger
That image is from 1961 and is an idealization. Here is an actual trajectory of fixational eye movements. The dots are 2 ms apart. If a midget ganglion cell, with a single-cone receptive field, fires at 100 Hz, then every spike reports on a different cone. How can we ever read anything?
November 7, 2025 at 6:23 PM
Reposted by Benedikt Ehinger
Our response to "Visual attention in crisis" by @ruthrosenholtz.bsky.social is here www.cambridge.org/core/journal...
And here is the author's response to all comments: www.cambridge.org/core/journal... I found the target paper very thought-provoking, and Ruth's responses are insightful. But...
Attention is doing just fine! Just don’t take it too seriously | Behavioral and Brain Sciences | Cambridge Core
Attention is doing just fine! Just don’t take it too seriously - Volume 48
www.cambridge.org
November 26, 2025 at 10:41 AM