Jean-Rémi King
@jeanremiking.bsky.social
Researcher in Neuroscience & AI

CNRS, Ecole Normale Supérieure, PSL
currently on secondment at Meta
🧠 How does the hierarchy of speech representations unfold in the human brain?

Our latest work, led by @lauragwilliams.bsky.social, together with Alec Marantz and @davidpoeppel.bsky.social, is now out in PNAS:

www.pnas.org/doi/10.1073/...
October 22, 2025 at 7:56 AM
Overall, the training of DINOv3 mirrors some striking aspects of brain development: late-acquired representations map onto cortical areas with, e.g., greater expansion and slower timescales, suggesting that DINOv3 spontaneously captures part of the neuro-developmental trajectory.
September 3, 2025 at 5:18 AM
→ 2nd factor: data type: even models trained only on satellite or cellular images significantly capture brain signals, but the same model trained on standard images better encodes all brain regions.
September 3, 2025 at 5:18 AM
So what are the factors that lead DINOv3 to become brain-like?
→ 1st factor: model size: bigger models become brain-like faster during training and reach higher brain scores, especially in high-level brain regions.
September 3, 2025 at 5:18 AM
Third, the representations of the visual cortex are typically acquired early on in the training of DINOv3.
By contrast, it requires much more training to learn representations similar to those of the prefrontal cortex.
September 3, 2025 at 5:18 AM
Surprisingly, these encoding, spatial and temporal scores all emerge across training, but at different speeds.
September 3, 2025 at 5:18 AM
Second, DINOv3 learns a representational hierarchy which corresponds to the spatial and temporal hierarchies in the brain.
September 3, 2025 at 5:18 AM
First, we observe that, with training, DINOv3 learns representations that progressively align with those of the human brain.
September 3, 2025 at 5:18 AM
To evaluate how data type, data quantity and model size each lead DINOv3 to more or less brain-like activations, we trained and tested several variants:
September 3, 2025 at 5:18 AM
We compare the activations of DINOv3 (ai.meta.com/dinov3/), a SOTA self-supervised computer vision model trained on natural images,
to the activations of the human brain in response to the same images, using both fMRI (naturalscenesdataset.org) and MEG (openneuro.org/datasets/ds0...).
September 3, 2025 at 5:18 AM
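For readers who want a concrete picture of this kind of model-to-brain comparison, below is a minimal sketch of a linear encoding analysis on synthetic data: image embeddings are mapped to voxel responses with cross-validated ridge regression, and the resulting "brain score" is the correlation between predicted and held-out responses. The array shapes and the use of scikit-learn's RidgeCV are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a linear encoding model / brain-score analysis.
# Synthetic data stand in for DINOv3 image embeddings and fMRI responses;
# this illustrates the general approach, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 1000, 768, 500

X = rng.standard_normal((n_images, n_features))        # model activations per image
W = rng.standard_normal((n_features, n_voxels)) * 0.1
Y = X @ W + rng.standard_normal((n_images, n_voxels))  # simulated voxel responses

scores = np.zeros(n_voxels)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])
    Y_pred = ridge.predict(X[test])
    # Pearson correlation between predicted and measured responses, per voxel
    Yp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
    Yt = (Y[test] - Y[test].mean(0)) / Y[test].std(0)
    scores += (Yp * Yt).mean(0) / 5

print(f"mean brain score across voxels: {scores.mean():.3f}")
```

In the thread above, this kind of per-voxel (or per-sensor, per-time-point) score is what is being compared across brain regions, model variants and training checkpoints.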
Can self-supervised learning help us understand how the brain learns to see the world?

Our latest study, led by Josephine Raugel (FAIR, ENS), is now out:

📄 arxiv.org/pdf/2508.18226
🧵 thread below
September 3, 2025 at 5:18 AM
We’re very happy to share 3 highlights from our Brain and AI team for #CCN2025 week:

1. 🏆 1st place in the Algonauts competition: paper, thread and code below

2. 🗣 Keynote: Language in the Brain: 2025.ccneuro.org/k-and-t-lang...

3. 🚀 Tutorial: scale your decoding pipeline in the notebook
August 11, 2025 at 11:45 AM
🔎 We're looking for volunteers for a brain study:
- Native English speaker?
- 🇫🇷 In Paris?
- 🧠 Want to participate in a brain imaging experiment?
- 💶 8 sessions of 2 hours, paid 80€ each
- 📩 Contact: julie.bonnaire@cea.fr

Please RT :)
May 28, 2025 at 2:03 PM
🚀 Dynadiff achieves state-of-the-art image reconstruction from time-resolved fMRI.
✂️ It significantly simplifies the training pipeline, eliminating complex multi-stage processes.
🧠 It uniquely reveals the precise evolution of visual representations in the brain.
May 22, 2025 at 1:55 PM
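Dynadiff itself reconstructs full images from fMRI (paper and code in the post below). As a much simpler illustration of what decoding time-resolved fMRI means, here is a toy sketch in which a separate linear classifier is fitted at each time point and its accuracy traced over the trial. The data, shapes and classifier are synthetic assumptions, not Dynadiff's method.

```python
# Toy illustration of time-resolved decoding: fit one decoder per time point
# and track how decodable the stimulus is as brain activity evolves.
# Synthetic data; this is NOT Dynadiff's image-reconstruction approach.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_times = 200, 300, 10
y = rng.integers(0, 2, n_trials)                  # two stimulus classes

# Simulated BOLD responses whose class information rises then falls over the trial
signal = np.sin(np.linspace(0, np.pi, n_times))
X = rng.standard_normal((n_trials, n_voxels, n_times))
pattern = rng.standard_normal(n_voxels)
for t in range(n_times):
    X[:, :, t] += np.outer(2 * y - 1, pattern) * signal[t]

# One cross-validated decoder per fMRI time point
accuracy = [
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
]
for t, acc in enumerate(accuracy):
    print(f"time point {t}: decoding accuracy = {acc:.2f}")
```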
Our latest brain-to-image decoding model is now available on HuggingFace:

"Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI",

led by Marlène Careil and Yohann Benchetrit:

- Paper: arxiv.org/pdf/2505.14556
- GitHub: github.com/facebookrese...
- Thread: 👇
May 22, 2025 at 1:55 PM
Together, these findings reveal the maturation of language representations in the developing brain and show that modern AI systems provide a promising tool to model the neural bases of language acquisition, and thus help both fundamental and clinical neuroscience.
May 15, 2025 at 4:00 PM
Remarkably, this neuro-developmental trajectory is spontaneously captured by large language models: with training, these AI models learned representations that can only be identified in the adult human brain.
May 15, 2025 at 4:00 PM
Crucially, these language representations evolve with age: while fast phonetic features are already present in the superior temporal gyrus of the youngest individuals, slower word-level representations only emerge in the associative cortices of older individuals.
May 15, 2025 at 4:00 PM
We find that a hierarchy of linguistic features (phonemes, words) is robustly represented across the cortex, even in 2–5-year-olds.
May 15, 2025 at 4:00 PM
Here, we study neural activity recorded from over 7,400 electrodes clinically implanted in the brains of 46 patients, aged from 2 years old to adulthood, as they listened to an audiobook version of “The Little Prince”.
May 15, 2025 at 4:00 PM
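As a rough sketch of how such linguistic features can be related to neural recordings, here is a minimal time-lagged encoding example on synthetic data: phoneme- and word-onset regressors are expanded into lagged predictors and mapped onto one electrode's activity with ridge regression. The sampling rate, lag window and feature choices are illustrative assumptions, not the study's actual analysis.

```python
# Toy time-lagged encoding model: predict one electrode's activity from
# phoneme- and word-level regressors. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
sfreq, duration = 100, 600                 # 100 Hz, 10 minutes of audiobook
n_samples = sfreq * duration

# Binary event regressors: phoneme onsets (fast) and word onsets (slower)
phoneme_onsets = (rng.random(n_samples) < 0.08).astype(float)
word_onsets = (rng.random(n_samples) < 0.025).astype(float)
features = np.stack([phoneme_onsets, word_onsets], axis=1)

# Time-lagged design matrix (0 to 400 ms after each event)
lags = np.arange(0, int(0.4 * sfreq))
X = np.hstack([np.roll(features, lag, axis=0) for lag in lags])

# Simulated electrode activity driven by both feature types, plus noise
true_weights = rng.standard_normal(X.shape[1]) * 0.2
y = X @ true_weights + rng.standard_normal(n_samples)

split = int(0.8 * n_samples)
model = RidgeCV(alphas=np.logspace(-1, 5, 7)).fit(X[:split], y[:split])
r = np.corrcoef(model.predict(X[split:]), y[split:])[0, 1]
print(f"held-out encoding correlation for this electrode: {r:.3f}")
```

Keeping separate regressors for fast (phoneme-level) and slower (word-level) features makes it possible to ask where, and at which ages, each level of the hierarchy emerges, as described in the thread above.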
The human brain is a remarkable learner:
A few million words suffice for children to acquire language.
Yet, the brain architecture underlying this unique ability remains poorly understood.
May 15, 2025 at 4:00 PM
I'm very pleased to share our latest study:
‘Emergence of Language in the Developing Brain’,
by L Evanson, P Bourdillon et al:
- Paper: ai.meta.com/research/pub...
- Blog: ai.meta.com/blog/meta-fa...
- Thread below 👇
May 15, 2025 at 4:00 PM