Rodrigo Braga
@rodbraga.bsky.social
Assistant Professor at Northwestern University, neuroscience, brain imaging, networks
We also propose that these MTL connections may influence the emergence/separation of the 2 networks, DN-A and DN-B, during development/evolution:

Early spontaneous patterned activity within the MTL (e.g., traveling waves?) could 'tether' connected cortical regions into distinct networks:
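As a purely conceptual illustration (not an analysis from the paper), here is a toy numpy sketch of the tethering idea: two MTL sources with independent spontaneous activity pull their cortical targets apart into two separable networks.

```python
# Toy sketch (not from the paper): two MTL sources with independent spontaneous
# activity "tether" their cortical targets into two separable networks.
import numpy as np

rng = np.random.default_rng(0)
T = 2000                                    # timepoints of spontaneous activity
mtl_a = rng.standard_normal(T)              # e.g., a parahippocampal source
mtl_b = rng.standard_normal(T)              # e.g., an amygdala source

# 20 cortical regions: half driven mostly by source A, half mostly by source B
drive_a = np.r_[np.full(10, 0.8), np.full(10, 0.2)]
cortex = (np.outer(drive_a, mtl_a) + np.outer(1 - drive_a, mtl_b)
          + 0.5 * rng.standard_normal((20, T)))

# Grouping regions by which source they correlate with recovers two networks
r_a = np.corrcoef(cortex, mtl_a)[:20, -1]
r_b = np.corrcoef(cortex, mtl_b)[:20, -1]
print((r_a > r_b).astype(int))              # ten 1s then ten 0s: two distinct networks
```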
October 31, 2025 at 8:12 PM
Recent work has demonstrated that these two networks are connected to distinct portions of the medial temporal lobe (MTL):

DN-A is connected to the parahippocampal cortex
DN-B is connected to the amygdala

Both networks appear to be represented in the anterior hippocampus, subiculum, and entorhinal cortex
October 31, 2025 at 8:03 PM
Group-level estimates of the default network (DN) have long argued that it serves many introspective functions, including mentalizing (thinking about other people's thoughts, feelings & beliefs) and recollection/prospection.

Often noted was the heterogeneity but also overlap of functions in the DN:
October 31, 2025 at 7:56 PM
We propose that thinking of this region as a “nexus”, or hand-off point, between visual and transmodal language systems is useful for understanding why it is so critical for reading.
October 7, 2025 at 10:21 PM
AND: we replicated that familiar (Roman) letter strings activated the whole orthographic stream, whereas strings of numbers activated only the more posterior regions!

This was very surprising – numbers are also highly learned, yet their activity is like the foreign scripts map above:
October 7, 2025 at 10:18 PM
Using the NSD fLOC task data, we replicated that the anterior part of the orthographic stream (teal) converges on that basal LANG region:
October 7, 2025 at 10:16 PM
We next replicated the key results in the Natural Scenes Dataset (NSD; Allen et al. 2018).

In three subjects, the FC map of LANG confirmed the ventral temporal region:

(there was a lot more dropout in this non-multi-echo data, so the other subs were inconclusive).
October 7, 2025 at 10:15 PM
We saw that the whole stream activated for stimuli in a familiar (Roman) script, replicating what we saw with pseudowords.

BUT: the foreign (Japanese) scripts only activated the more posterior regions – specifically missing the anterior region that is within LANG! 😱
October 7, 2025 at 10:13 PM
To test this further, we had the same subs view Pseudowords again, but also Real Words, Consonant Strings, Symbols and Line Drawings.

Some of these involved a familiar (Roman) script, whereas the Symbols category had similar visual features but was in an unfamiliar script (modified Japanese).
October 7, 2025 at 10:11 PM
So: the results suggest that the visual stream for reading converges precisely on a region of the distributed LANG network.

The LANG network connections therefore predict functional differences within the visual stream.
October 7, 2025 at 10:10 PM
Indeed, this anterior orthographic stream region also activates during listening to speech (but the Face and Scene streams don’t overlap with speech regions as consistently; see graph).

In almost all cases, the overlap was right where the FC-defined basal LANG region was (black lines).
October 7, 2025 at 10:08 PM
Intriguingly, in many subjects the orthographic stream (teal) extends in an arc that ends exactly where the basal LANG region is (see black lines)!

The other streams for Faces (blue) & Scenes (purple) didn’t overlap with LANG as reliably.
October 7, 2025 at 10:06 PM
In each subject, viewing Pseudowords activated multiple regions extending from near the occipital pole to the middle of the ventral surface.

We called it an ‘orthographic stream’, nodding to the “multiple VWFAs” of:

Yeatman 2021: doi.org/10.1146/annu...
Woolnough 2021: doi.org/10.1038/s415...
October 7, 2025 at 10:05 PM
If this region is transmodal, what is its contribution to reading? Does it have different properties to the VWFA? Is it the VWFA?

In the same subjects, we next mapped category-preferring visual areas using a classic visual categories task:
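For readers who want to see the general shape of such a localizer analysis, here is a minimal sketch using nilearn (not mentioned in the paper); the filenames, TR, and condition names are hypothetical placeholders, not the actual design.

```python
# Minimal nilearn sketch of a category-localizer contrast. Filenames, TR, and
# condition names are hypothetical placeholders, not the paper's actual design.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Events table with onset, duration, trial_type columns
events = pd.read_csv("sub-01_task-categories_events.tsv", sep="\t")
model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=2.0)
model = model.fit("sub-01_task-categories_bold.nii.gz", events=events)

# Category-preference map, e.g., pseudowords versus faces (z-scored contrast)
z_map = model.compute_contrast("pseudowords - faces", output_type="z_score")
z_map.to_filename("sub-01_pseudowords_vs_faces_zmap.nii.gz")
```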
October 7, 2025 at 10:03 PM
Because this region is connected to LANG, it should have a transmodal function, despite being near the visual hierarchy.

And it does! Listening to speech (red-yellow) activates it.

(This is known but v. under-appreciated).
doi.org/10.1093/brai...
doi.org/10.1093/brai...
doi.org/10.1162/imag...
October 7, 2025 at 10:02 PM
Using precision fMRI, we first mapped LANG using functional connectivity (FC) in 8 individuals.

We observed that LANG reliably contains a basal temporal region that is not a “classic” language region:

(We used multi-echo to overcome dropout)
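As a rough illustration of how a network mask can be defined from connectivity (not the paper's exact procedure), a seed-FC map can be binarized at a cutoff; the filenames and the z(r) threshold below are arbitrary placeholders.

```python
# Illustrative sketch: binarize a seed-FC map into a subject-level LANG mask.
# The input filename and the 0.2 cutoff are placeholders, not the paper's values.
from nilearn import image

lang_mask = image.math_img("(img > 0.2).astype('int8')",
                           img="sub-01_LANG_fcmap.nii.gz")
lang_mask.to_filename("sub-01_LANG_mask.nii.gz")
```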
October 7, 2025 at 10:00 PM
And that’s what we saw: the LANG network boundaries capture transmodal language responses during reading (pink), and nicely separate those from unimodal auditory responses (blue).

Salvo, Anderson et al., 2025: doi.org/10.1162/IMAG...
October 7, 2025 at 9:56 PM
This distributed network shape is characteristic of an association circuit & very different from the tightly clustered (unimodal) sensory networks (e.g., visual or auditory).

This means that the network shape of LANG predicts a transmodal, not sensory, function.
October 7, 2025 at 9:55 PM
Here’s a seed-based FC map of LANG. It is distributed across multiple association zones, extending beyond the “classic” frontal and lateral temporal language areas.

(from Braga et al. 2020. doi.org/10.1152/jn.0... )

(subtle foreshadow: Note the basal LANG region in the inferior temporal cortex…)
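For orientation, a generic seed-based FC map can be computed along these lines with nilearn (an assumption on my part); the seed coordinate and filenames are placeholders, and the published analyses are individual-specific and more involved.

```python
# Minimal volumetric sketch of a seed-based FC map (placeholder seed coordinate
# and filenames; not the paper's actual, individual-specific pipeline).
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker

bold = "sub-01_task-rest_bold.nii.gz"

# Seed time series from a small sphere around a hypothetical LANG coordinate
seed_masker = NiftiSpheresMasker(seeds=[(-54, -48, 10)], radius=6, standardize=True)
seed_ts = seed_masker.fit_transform(bold)          # (n_timepoints, 1)

# Whole-brain voxel time series
brain_masker = NiftiMasker(standardize=True)
brain_ts = brain_masker.fit_transform(bold)        # (n_timepoints, n_voxels)

# Seed-to-voxel correlations, written back out as a brain map
r = brain_ts.T @ seed_ts[:, 0] / seed_ts.shape[0]
fc_img = brain_masker.inverse_transform(r)
fc_img.to_filename("sub-01_LANG_fcmap.nii.gz")
```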
October 7, 2025 at 9:54 PM
This core language network (LANG) can be defined using resting-state functional connectivity (FC).

Further, the LANG network defined by FC (see black borders below) predicts task responses common to both reading and listening (brown):

Salvo, Anderson et al., 2024: doi.org/10.1162/IMAG...
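One generic way to quantify how well FC-defined borders capture task responses is the Dice overlap between the network mask and a thresholded task map; the sketch below uses placeholder filenames and thresholds, not the paper's exact approach.

```python
# Illustrative sketch: Dice overlap between an FC-defined LANG mask and a
# thresholded task map (filenames and the z > 3.1 cutoff are placeholders).
import nibabel as nib
import numpy as np

lang = nib.load("sub-01_LANG_mask.nii.gz").get_fdata() > 0
task = nib.load("sub-01_read_and_listen_zmap.nii.gz").get_fdata() > 3.1

dice = 2 * np.logical_and(lang, task).sum() / (lang.sum() + task.sum())
print(f"Dice overlap = {dice:.2f}")
```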
October 7, 2025 at 9:54 PM
🧠 Language involves “transmodal” cognitive functions: the same core network is engaged whether the language input is auditory (as in speech) or visual (as in text/reading).

Mesulam 1998 on "transmodal": doi.org/10.1093/brai...

Image from Scott et al. 2017: doi.org/10.1080/1758...
October 7, 2025 at 9:52 PM
📣 New preprint from the Braga Lab! 📣

The ventral visual stream for reading converges on the transmodal language network

Congrats to Dr. Joe Salvo for this epic set of results

Big Q: What brain systems support the translation of writing to concepts and meaning?

Thread 🧵 ⬇️
October 7, 2025 at 9:51 PM
Overall, we think this paper lays important groundwork for the use of non-invasive, individualized brain network mapping for targeting intracranial stimulation to modulate specific networks.

This could help improve the efficacy of electrical stimulation (ES) and reduce collateral effects.

Here is our proposed framework:
August 5, 2025 at 3:51 PM
The two types of maps overlapped a lot, but sites where high-frequency electrical stimulation (HFES) led to network-related behavioral effects were significantly closer to the individualized maps than to the group-defined maps:

Sorry, @bttyeo.bsky.social :)
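A generic version of this comparison is to measure, for each stimulation site, its distance to the nearest voxel of the individualized vs. group-defined network map; the sketch below is illustrative, with a hypothetical electrode coordinate and filenames, not the paper's exact metric.

```python
# Illustrative sketch (not the paper's exact metric): distance from a stimulation
# site (mm coordinate) to the nearest voxel of a binary network map.
import nibabel as nib
import numpy as np

def min_distance_mm(mask_path, site_mm):
    img = nib.load(mask_path)
    ijk = np.argwhere(img.get_fdata() > 0)              # network voxel indices
    xyz = nib.affines.apply_affine(img.affine, ijk)     # voxel indices -> mm
    return np.linalg.norm(xyz - np.asarray(site_mm), axis=1).min()

site = (-52.0, -10.0, -8.0)                             # hypothetical electrode coordinate
print("individualized:", min_distance_mm("sub-01_LANG_mask.nii.gz", site))
print("group-defined: ", min_distance_mm("yeo17_language_mask.nii.gz", site))
```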
August 5, 2025 at 3:50 PM
As a bonus analysis, we compared results using individualized (“PFM”, precision functional mapping) vs. group-defined Yeo 17 maps.

Note that in some cases it’s hard to pick a matching Yeo network, e.g., “Default B” was the closest network to LANG as defined in individuals:
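A simple way to pick the “closest” group network is to take the Yeo-17 label with the largest spatial (Dice) overlap with the individualized network; the sketch below is illustrative, with placeholder filenames, and assumes both images are in the same space and resolution.

```python
# Illustrative sketch: pick the Yeo-17 network with the largest Dice overlap
# with an individualized network mask (assumes both are in the same space).
import nibabel as nib
import numpy as np

indiv = nib.load("sub-01_LANG_mask.nii.gz").get_fdata() > 0
yeo = nib.load("yeo17_parcellation.nii.gz").get_fdata()      # labels 1..17

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

scores = {label: dice(indiv, yeo == label) for label in range(1, 18)}
best = max(scores, key=scores.get)
print(f"closest Yeo-17 network: label {best}, Dice = {scores[best]:.2f}")
```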
August 5, 2025 at 3:50 PM