Yalda Mohsenzadeh
@yaldamohsenzadeh.bsky.social
Assistant Professor in Computer Science and the Brain and Mind Institute at Western University; Faculty member at the Vector Institute for AI; cognitive computational neuroscience; perception and memory; computer vision and AI
1/ The human brain seamlessly integrates distinct sensory information to create a coherent percept. This paper dives deep into where and when different types of audiovisual information are processed in the brain, using naturalistic stimuli.

2/ Using EEG and fMRI, we discovered early asymmetrical cross-modal interactions: acoustic information was represented in both early visual and auditory regions, while visual information was identified only in visual cortices.

3/ Visual & auditory features were processed with similar onsets but different temporal dynamics. Auditory information takes longer to peak than visual, likely due to the temporal nature of sounds.

4/ High-level semantic & categorical information emerges only later, in high-level auditory, visual, and multisensory areas.

5/ A two-branch deep neural network (DNN) model trained on audiovisual data couldn't replicate the early cross-modal integration seen in the brain, underscoring the need for early fusion in models (see the architecture sketch below).

6/ By fusing EEG (temporal precision) with fMRI (spatial precision), this study offers a spatiotemporally resolved map of how audiovisual stimuli unfold across the brain (see the fusion sketch below).

Posted January 29, 2025 at 1:10 AM
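The thread doesn't give the model's details, so here is a minimal PyTorch sketch of the contrast it describes, assuming standard late- vs. early-fusion designs: a two-branch network keeps audio and visual processing separate until a final merge, while an early-fusion variant lets even the first layer mix the modalities. All class names, layer sizes, and input dimensions below are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch only; not the paper's exact model.
import torch
import torch.nn as nn

class TwoBranchLateFusion(nn.Module):
    """Audio and visual streams stay separate until the final merge."""
    def __init__(self, audio_dim=128, visual_dim=512, hidden=256, n_classes=10):
        super().__init__()
        self.audio_branch = nn.Sequential(
            nn.Linear(audio_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.visual_branch = nn.Sequential(
            nn.Linear(visual_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Cross-modal interaction can happen only here, after both branches.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio, visual):
        fused = torch.cat([self.audio_branch(audio),
                           self.visual_branch(visual)], dim=-1)
        return self.head(fused)

class EarlyFusion(nn.Module):
    """Modalities are concatenated at the input, so the very first layer
    can already learn cross-modal interactions."""
    def __init__(self, audio_dim=128, visual_dim=512, hidden=256, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, audio, visual):
        return self.net(torch.cat([audio, visual], dim=-1))
```

The late-fusion design can express cross-modal interactions only after both branches finish, which is one way to see why such a model would miss the early integration observed in the brain data.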
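The thread also doesn't spell out how EEG and fMRI were fused, but a common approach in this literature is representational similarity analysis (RSA): build a condition-by-condition dissimilarity matrix (RDM) from the EEG patterns at each timepoint and from the fMRI patterns in each region, then correlate the two; a high correlation at time t in region R suggests that region's representational geometry is expressed at that moment. A minimal NumPy/SciPy sketch under that assumption (all function names, ROI labels, and array shapes are hypothetical):

```python
# Hypothetical RSA-based EEG-fMRI fusion sketch; the actual method is
# an assumption, as the thread does not specify it.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition-by-condition dissimilarities (1 - Pearson r), condensed."""
    return pdist(patterns, metric="correlation")

def fusion_timecourses(eeg, fmri_rois):
    """eeg: (n_times, n_conditions, n_channels) evoked response patterns.
    fmri_rois: dict mapping ROI name -> (n_conditions, n_voxels) patterns.
    Returns ROI name -> (n_times,) Spearman correlation time course."""
    fmri_rdms = {roi: rdm(p) for roi, p in fmri_rois.items()}
    eeg_rdms = [rdm(eeg[t]) for t in range(eeg.shape[0])]
    return {
        roi: np.array([spearmanr(e, f)[0] for e in eeg_rdms])
        for roi, f in fmri_rdms.items()
    }

# Toy usage: 20 conditions, 100 EEG timepoints, two hypothetical ROIs.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((100, 20, 64))
rois = {"A1": rng.standard_normal((20, 200)),
        "V1": rng.standard_normal((20, 200))}
tc = fusion_timecourses(eeg, rois)  # peaks mark when a region's geometry emerges
```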