Kristof Strijkers
@strijkers.bsky.social
CNRS scientist (neurolinguistics). LPL, ILCB & Aix-Marseille University.

ERC Consolidator Laureate (2025) for the project ‘LaDy’ (‘Language in the Dyad’)

Husband of Elin. Dad of Scott & Alessio.

#science #cycling & #wine
(personal account)
Our results strongly support Integration Models, which posit that word representations are shared across language modalities. /end

A fitting salute to my friend - a giant in the neurobiology of language 🫡

@cnrs.fr @univ-amu.fr @ilcb.bsky.social #LPL @agencerecherche.bsky.social @erc.europa.eu
November 4, 2025 at 5:32 PM
Results were clear: The same brain networks were activated in production and comprehension.

Topographically in the motor cortex (with stronger lip activity for bilabial words and stronger tongue activity for alveolar words), and distributed in the temporal cortex. /4
November 4, 2025 at 5:32 PM
To test this, we scanned (fMRI) participants (n=37) while they both named objects aloud and passively listened to the same words. All words were minimal pairs, differing only in their first phoneme (bilabial words like ‘Monkey’ vs. alveolar words like ‘Donkey’). /3
November 4, 2025 at 5:32 PM
We asked whether the brain uses the same phonological representations when speaking vs. understanding words.

While most neurobiological language models predict asymmetries, a model like Friedemann’s suggests they’re shared across modalities. /2
November 4, 2025 at 5:32 PM
🫶
October 22, 2025 at 8:24 PM
Thanks Riccardo! That means a lot coming from you 🙏
October 15, 2025 at 1:10 PM
🙏
October 10, 2025 at 6:15 PM
Yes! We hope to finish those analyses in the coming months 🤞
October 10, 2025 at 6:15 PM
And myself 😉
(And wine posts, though that’s more about me having a conversation with myself 😂)
October 10, 2025 at 6:14 PM
Oh wow, that’s awesome! Thanks so much Laurel!
October 10, 2025 at 5:38 PM
💡This study confirms hypotheses of predictive dialog models and theories of joint action.

More generally, this work demonstrates that language in interaction can rely on processing dynamics that differ from, or are even absent in, the dominant individualistic research tradition! /7
October 10, 2025 at 5:12 PM
💡Our data show that during interactions we predict not only what we are about to say, but also the words our partner may utter.

This dyadic prediction effect is not linear but ‘incremental’ (perhaps advance planning, as in chess?), and requires ‘interpersonal synergy’. /6
October 10, 2025 at 5:12 PM
⏱️Importantly, in an individual control experiment where everything was identical except that one of the interlocutors was replaced with a loudspeaker, the typical prediction effect for picture naming remained the same, but the prediction effect for replying with a related word disappeared completely! /5
October 10, 2025 at 5:12 PM
⏱️In predictable contexts picture naming was faster (the typical prediction effect), but interestingly, the reply given by the partner was sped up even more: twice (!) as much as the interlocutor who named the picture, even though the predictable content actually applied to that picture (a dyadic prediction effect). /4
October 10, 2025 at 5:12 PM
💬 In a simple semantic association game, one interlocutor names a picture (e.g. dog), and the partner needs to reply with a semantically related word (e.g. cat).

Prior to this interaction, the context could be predictable for the upcoming picture (‘man’s best friend is a…’) or not (‘outside there is a…’). /3
October 10, 2025 at 5:12 PM
❔Prediction helps us process language faster. We use it to anticipate words, understand meaning, and plan what to say next. But nearly all studies have tested this within individuals. What happens when we actually talk to each other?

We explored this question with a new dyadic paradigm: /2
October 10, 2025 at 5:12 PM