Sam Nastase
@samnastase.bsky.social
assistant professor of psychology at USC丨he/him丨semiprofessional dungeon master丨https://snastase.github.io/
I'm recruiting PhD students to join my new lab in Fall 2026! The Shared Minds Lab at @usc.edu will combine deep learning and ecological human neuroscience to better understand how we communicate our thoughts from one brain to another.
October 1, 2025 at 10:39 PM
Finally, we developed a set of interactive tutorials for preprocessing and running encoding models to get you started. Happy to hear any feedback or field any questions about the dataset! hassonlab.github.io/podcast-ecog...
July 7, 2025 at 9:00 PM
We validated both the data and stimulus features using encoding models, replicating previous findings showing an advantage for LLM embeddings.
July 7, 2025 at 9:00 PM
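A minimal sketch of this kind of encoding-model validation, assuming stimulus embeddings X already time-aligned to neural responses Y; the variable names, cross-validation scheme, and ridge penalties are illustrative, not necessarily those used in the paper:

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_performance(X, Y, n_splits=5):
    # X: (n_samples, n_features) stimulus embeddings aligned to the neural data
    # Y: (n_samples, n_electrodes) neural activity (e.g., high-gamma power)
    r = np.zeros(Y.shape[1])
    for train, test in KFold(n_splits=n_splits).split(X):
        model = RidgeCV(alphas=np.logspace(0, 6, 7)).fit(X[train], Y[train])
        Y_hat = model.predict(X[test])
        # Correlate predicted and actual activity per electrode, averaged over folds
        for ch in range(Y.shape[1]):
            r[ch] += np.corrcoef(Y_hat[:, ch], Y[test, ch])[0, 1] / n_splits
    return r

Running this once with low-level acoustic features and once with LLM embeddings, then comparing the resulting correlations, is the kind of head-to-head comparison described above.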
We also provide word-level transcripts and stimulus features ranging from low-level acoustic features to large language model embeddings.
July 7, 2025 at 9:00 PM
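A sketch of extracting word-level LLM embeddings from a transcript, assuming a GPT-2-style model via Hugging Face transformers; the model choice and layer are illustrative:

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
model = AutoModel.from_pretrained("gpt2").eval()

def word_embeddings(words, layer=8):
    # Tokenize pre-split words so subword tokens can be mapped back to words
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        states = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    ids = enc.word_ids()
    # Average subword token states within each word: one vector per word
    return torch.stack([states[[i for i, w in enumerate(ids) if w == k]].mean(0)
                        for k in range(len(words))])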
We recorded ECoG data in nine subjects while they listened to a 30-minute story. We provide a minimally preprocessed derivative of the raw data, ready for analysis.
July 7, 2025 at 9:00 PM
Check out Zaid's open "Podcast" ECoG dataset for natural language comprehension (w/ Hasson Lab). The paper is now out at Scientific Data (nature.com/articles/s41...) and the data are available on OpenNeuro (openneuro.org/datasets/ds0...).
July 7, 2025 at 9:00 PM
We then tested the extent to which each of these 58 languages can predict the brain activity of our participants. We found that the more similar a language is to the listener's native language, the better the prediction:
June 30, 2025 at 8:56 PM
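A toy sketch of this similarity-predicts-performance relationship, using hypothetical placeholder values (the real analysis spans all 57 translated languages):

from scipy.stats import spearmanr

# Hypothetical values for illustration only: each language's embedding
# similarity to the listener's native language, and its encoding performance
similarity = {"de": 0.71, "fr": 0.58, "zh": 0.22, "sw": 0.15}
performance = {"de": 0.19, "fr": 0.16, "zh": 0.09, "sw": 0.07}

langs = sorted(similarity)
rho, p = spearmanr([similarity[l] for l in langs],
                   [performance[l] for l in langs])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")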
What about multilingual models? We translated the story from English into 57 other languages spanning 14 families and extracted embeddings for each from multilingual BERT. We visualized the dissimilarity matrix using MDS and found clusters corresponding to language families.
June 30, 2025 at 8:56 PM
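The MDS step can be sketched in a few lines with scikit-learn, assuming a precomputed language-by-language dissimilarity matrix D (random placeholder here):

import numpy as np
from sklearn.manifold import MDS

# Placeholder 58 x 58 dissimilarity matrix (e.g., 1 - correlation between
# language-wise mBERT embedding geometries); symmetric with zero diagonal
rng = np.random.default_rng(0)
D = rng.random((58, 58))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)  # 2-D coordinates for plotting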
We found that models trained to predict neural activity for one language generalize to different subjects listening to the same content in a different language, across high-level language and default-mode regions.
June 30, 2025 at 8:56 PM
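A sketch of this cross-language generalization test, with placeholder arrays standing in for real embeddings and parcel time series (shapes are illustrative):

import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
# Placeholder data: same-story embeddings and listeners' parcel time series
X_en, Y_en = rng.standard_normal((300, 768)), rng.standard_normal((300, 100))
X_fr, Y_fr = rng.standard_normal((300, 768)), rng.standard_normal((300, 100))

# Train on English listeners, then test on French listeners hearing the
# same story; above-chance correlations indicate shared conceptual structure
model = RidgeCV(alphas=np.logspace(0, 4, 5)).fit(X_en, Y_en)
Y_hat = model.predict(X_fr)
r = np.array([np.corrcoef(Y_hat[:, i], Y_fr[:, i])[0, 1]
              for i in range(Y_fr.shape[1])])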
We extracted embeddings from three unilingual BERT models (each trained on an entirely different language) and found that, with a rotation, they converge onto similar embeddings, especially in the middle layers:
June 30, 2025 at 8:56 PM
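The rotation here can be sketched as an orthogonal Procrustes alignment between two embedding matrices for the same words; the synthetic data below just demonstrates the mechanics:

import numpy as np
from scipy.linalg import orthogonal_procrustes, qr

rng = np.random.default_rng(0)
Q, _ = qr(rng.standard_normal((300, 300)))           # a hidden random rotation
X = rng.standard_normal((500, 300))                  # embeddings from model A
Y = X @ Q + 0.01 * rng.standard_normal((500, 300))   # rotated + noisy model B

# Find the rotation R minimizing ||X @ R - Y||_F, then compare aligned spaces
R, _ = orthogonal_procrustes(X, Y)
fit = np.corrcoef((X @ R).ravel(), Y.ravel())[0, 1]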
We used naturalistic fMRI and language models (LMs) to identify neural representations of the shared conceptual meaning of the same story as heard by native speakers of three languages: English, Chinese, and French.
June 30, 2025 at 8:56 PM
We hypothesized that the brains of native speakers of different languages would converge on the same supra-linguistic conceptual structures when listening to the same story in their respective languages:
June 30, 2025 at 8:56 PM
Previous research has found that language models trained on different languages learn embedding spaces with similar geometry. This suggests that the internal geometries of different languages may converge on similar conceptual structures:
June 30, 2025 at 8:56 PM
How do different languages converge on a shared neural substrate for conceptual meaning? Happy to share a new preprint led by Zaid Zada that specifically addresses this question:
June 30, 2025 at 8:56 PM
Taking a slightly different approach, we assess how well specific model features capture larger-scale patterns of connectivity. We find that feature-specific model connectivity partly recapitulates stimulus-driven cortical network configuration.
June 24, 2025 at 11:25 PM
We observe a clear progression of feature-specific connectivity from early auditory to lateral temporal areas, advancing from acoustic-driven connectivity to speech- and finally language-driven connectivity.
June 24, 2025 at 11:25 PM
We show that early auditory areas are coupled to intermediate language areas via lower-level acoustic and speech features. In contrast, higher-order language and default-mode regions are predominantly coupled through more abstract language features.
June 24, 2025 at 11:25 PM
We developed a model-based framework for quantifying stimulus-driven, feature-specific connectivity between regions. We used parcel-wise encoding models to align feature-specific embeddings to brain activity and then evaluated how well these models generalize to other parcels.
June 24, 2025 at 11:25 PM
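A simplified sketch of this framework, assuming TR-aligned feature embeddings X and parcel time series Y; the split-half evaluation stands in for whatever cross-validation the paper uses:

import numpy as np
from sklearn.linear_model import RidgeCV

def model_connectivity(X, Y):
    # X: (n_TRs, n_features) feature-specific embeddings
    # Y: (n_TRs, n_parcels) parcel-averaged BOLD time series
    n_parcels = Y.shape[1]
    half = X.shape[0] // 2
    conn = np.zeros((n_parcels, n_parcels))
    for a in range(n_parcels):
        # Fit an encoding model for parcel a on the first half of the story
        model = RidgeCV(alphas=np.logspace(0, 4, 5)).fit(X[:half], Y[:half, a])
        pred = model.predict(X[half:])
        # Feature-specific connectivity: how well parcel a's model
        # predicts every parcel's held-out time series
        for b in range(n_parcels):
            conn[a, b] = np.corrcoef(pred, Y[half:, b])[0, 1]
    return conn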
Following the logic of intersubject correlation (ISC) analysis, intersubject functional connectivity (ISFC) isolates stimulus-driven connectivity between regions (e.g., in response to naturalistic stimuli)—but is agnostic to the content of the stimulus shared between regions.
June 24, 2025 at 11:25 PM
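For reference, leave-one-subject-out ISFC can be sketched as follows (a standard formulation, not code from the paper):

import numpy as np

def isfc(data):
    # data: (n_subjects, n_TRs, n_regions) responses to a shared stimulus
    n_subjects, _, n_regions = data.shape
    conn = np.zeros((n_regions, n_regions))
    for s in range(n_subjects):
        # Average all other subjects; correlating across brains means only
        # stimulus-locked signal survives, unlike within-subject FC
        others = data[np.arange(n_subjects) != s].mean(0)
        for i in range(n_regions):
            for j in range(n_regions):
                conn[i, j] += np.corrcoef(data[s, :, i], others[:, j])[0, 1]
    return conn / n_subjects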
We used fMRI data collected while subjects listened to naturalistic spoken stories and extracted three types of linguistic embeddings for the same stimuli from the Whisper speech and language model. We call these "acoustic", "speech", and "language" embeddings.
June 24, 2025 at 11:25 PM
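One guess at how such embeddings might be pulled from Whisper via Hugging Face transformers; the mapping of encoder/decoder layers to "acoustic", "speech", and "language" is an assumption here, and the paper's exact recipe may differ:

import numpy as np
import torch
from transformers import WhisperProcessor, WhisperModel

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperModel.from_pretrained("openai/whisper-tiny").eval()

audio = np.zeros(16000 * 30, dtype=np.float32)  # placeholder 30 s waveform
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    out = model(inputs.input_features, decoder_input_ids=decoder_ids,
                output_hidden_states=True)

acoustic = out.encoder_hidden_states[1]   # early encoder layer (assumed)
speech = out.encoder_hidden_states[-1]    # late encoder layer (assumed)
language = out.decoder_hidden_states[-1]  # decoder layer (assumed)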
Really excited to share our new preprint led by @ahmadsamara.bsky.social with Zaid Zada, @vanderlab.bsky.social, and Uri Hasson titled "Cortical language areas are coupled via a soft hierarchy of model-based linguistic features" doi.org/10.1101/2025...
June 24, 2025 at 11:25 PM
Happy to see this work led by Zaid Zada now published in Neuron! We use LLM embeddings to capture word-by-word linguistic content transmitted from the speaker's brain to the listener's brain in real-time, face-to-face conversations: www.cell.com/neuron/fullt...
August 2, 2024 at 10:47 PM
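A toy sketch of word-level speaker-listener coupling with lagged encoding models, on placeholder data; the window sizes, lags, and shapes are all illustrative:

import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 768))          # one embedding per spoken word
speaker = rng.standard_normal((500, 9, 50))  # words x lags x electrodes
listener = rng.standard_normal((500, 9, 50))

half = 250
for role, Y in [("speaker", speaker), ("listener", listener)]:
    for lag in range(Y.shape[1]):
        m = RidgeCV(alphas=np.logspace(0, 4, 5)).fit(X[:half], Y[:half, lag])
        pred = m.predict(X[half:])
        r = np.mean([np.corrcoef(pred[:, e], Y[half:, lag, e])[0, 1]
                     for e in range(Y.shape[2])])
        print(role, lag, round(r, 3))
# In the paper's account, coupling peaks before word onset in the speaker
# and after word onset in the listener; here r is just noise on placeholder data.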