- Novices use domain-specific networks to produce relatively unstructured, domain-general (not tuned to the task) codes.
- Experts repurpose domain-general control networks to encode highly structured, domain-specific knowledge.
9/n
🧠 Experts rely more on
- working-memory systems
- navigation-related regions
- memory-retrieval networks
👁️ Novices rely more on
- early visual cortex
- face/object regions
- language areas
👉 A shift from domain-specific → domain-general control networks.
8/n
Using manifold dimensionality (Participation Ratio), we find lower-dimensional, more compressed neural codes in experts. And these compressed manifolds carry more task-relevant information.
👉 Experts pack more information into fewer dimensions.
7/n
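For intuition, the Participation Ratio can be sketched in a few lines of numpy: PR = (Σλ)² / Σλ², computed over the eigenvalues of the pattern covariance. The data below is synthetic and purely illustrative, not the paper's fMRI patterns.

```python
import numpy as np

def participation_ratio(X):
    """PR of a conditions x voxels pattern matrix: (sum eig)^2 / sum(eig^2)
    of the voxel covariance. Low PR = compressed, low-dimensional code."""
    cov = np.cov(X, rowvar=False)          # voxel-by-voxel covariance
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)  # guard tiny negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
# rank-1 data: all variance on one axis -> PR near 1
low_d = rng.normal(size=(100, 1)) @ rng.normal(size=(1, 20))
# isotropic data: variance spread over all 20 axes -> PR near 20
iso = rng.normal(size=(100, 20))
print(participation_ratio(low_d), participation_ratio(iso))
```

A low PR alone doesn't mean a better code; the point in the thread is that experts' codes are both lower-dimensional and more informative.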
Brain–model RSA shows the same shift: both groups encode visual features, but only experts encode high-level, task-relevant structure.
In other words,
👉 expertise changes WHAT is represented: from low-level, surface features to high-level, relational structure.
4/n
Behavioral RSA shows that experts organize their value judgments around relational and goal-relevant structure. Visual similarity barely plays a role.
Novices, meanwhile, show much less structured preferences.
3/n
**Enter: Representational Similarity Analysis (RSA).**
2/n
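The core RSA logic fits in a short numpy sketch: build a representational dissimilarity matrix (RDM) for each pattern set, then correlate their upper triangles. Everything below is synthetic toy data, not the study's actual brain or model patterns.

```python
import numpy as np

def rdm(patterns):
    """RDM of a conditions x features matrix: 1 - Pearson correlation
    between every pair of condition patterns."""
    return 1 - np.corrcoef(patterns)

def rsa(p1, p2):
    """Second-order similarity: correlate the two RDMs' upper triangles."""
    iu = np.triu_indices(p1.shape[0], k=1)
    return np.corrcoef(rdm(p1)[iu], rdm(p2)[iu])[0, 1]

rng = np.random.default_rng(1)
brain = rng.normal(size=(10, 50))                     # 10 conditions x 50 voxels
model_same = brain + 0.1 * rng.normal(size=(10, 50))  # similar geometry -> high RSA
model_diff = rng.normal(size=(10, 50))                # unrelated geometry -> RSA near 0
print(rsa(brain, model_same), rsa(brain, model_diff))
```

The appeal of RSA is exactly this second-order step: brains, models, and behavior live in different spaces, but their RDMs are directly comparable.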
We asked a simple-but-big question:
What changes in the brain when someone becomes an expert?
Using chess ♟️ + fMRI 🧠 + representational geometry & dimensionality 📈, we ask:
1️⃣ WHAT information is encoded?
2️⃣ HOW is it structured?
3️⃣ WHERE is it expressed?
1/n
We 👀 into how #chess experts represent the board, and how the content, structure, and location of these representations shift with expertise. ⬇️
So there seems to be a local path within V1 and a route from higher-level visual areas—though not all info survives the trip.
In contrast, higher-level areas like FFA and LOC robustly decoded those semantic details.
If foveal V1 encodes perceptual details, we’d expect it to align more with TDANN’s predictions than with CLIP’s, and vice versa.
We used MVPA to see whether activation patterns reflected perceptual or categorical info.
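A minimal MVPA sketch: a leave-one-out nearest-centroid decoder on synthetic voxel patterns. This stands in for the idea only; real pipelines typically use linear classifiers (e.g. SVMs) with proper cross-validation, and the data here is made up.

```python
import numpy as np

def decode_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding of labels y from patterns X."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), dtype=bool)
        train[i] = False
        # class centroids from all patterns except the held-out one
        cents = {c: X[train & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 20)                          # two conditions, 20 trials each
X = rng.normal(size=(40, 30)) + y[:, None] * 1.0   # class-dependent pattern shift
print(decode_accuracy(X, y))  # above chance (0.5) when patterns carry class info
```

Above-chance accuracy is the evidence that a region's activation patterns carry the decoded information, which is how the perceptual-vs-categorical contrast is tested.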