Soroush Mirjalili
@soroushmirjalili.bsky.social
Postdoc in the Kuhl Lab at the University of Oregon, PhD from UT Austin. Episodic Memory | Computational Neuroscience | Cognitive Neuroscience | Machine Learning. 🌿 -> 🐝 -> 🐂 -> 🦆, he/him
soroushmirjalili.com
Finally, dimension-specific input-output functions in CA3/DG strikingly mirrored the sequential pattern observed in behavior: CA3/DG inverted each similarity dimension when it contributed to memory interference but preserved the dimension when it didn't contribute to interference.
October 14, 2025 at 4:48 PM
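To make the idea of a dimension-specific input-output function concrete, here is a minimal sketch under my own assumptions (not the authors' pipeline): regress pairwise neural pattern similarity, e.g. from CA3/DG voxel patterns, onto pairwise stimulus similarity along one dimension. A negative slope would correspond to "inversion" of that dimension, a positive slope to "preservation". All names, shapes, and the toy data are hypothetical.

```python
import numpy as np

def input_output_slope(stim_sim, neural_patterns):
    """Estimate the input-output function for one similarity dimension.

    stim_sim        : (n_pairs,) stimulus similarity along one dimension
    neural_patterns : (n_items, n_voxels) item-level neural patterns (e.g., CA3/DG)
    Returns the slope of neural pattern similarity regressed on stimulus similarity:
    a negative slope ~ 'inversion', a positive slope ~ 'preservation'.
    """
    n_items = neural_patterns.shape[0]
    iu = np.triu_indices(n_items, k=1)                  # unique item pairs
    neural_sim = np.corrcoef(neural_patterns)[iu]       # pairwise pattern similarity
    slope, _ = np.polyfit(stim_sim, neural_sim, deg=1)  # linear input-output function
    return slope

# toy example with random data (hypothetical shapes)
rng = np.random.default_rng(0)
patterns = rng.normal(size=(20, 100))        # 20 items x 100 voxels
stim = rng.normal(size=(20 * 19) // 2)       # one similarity value per item pair
print(input_output_slope(stim, patterns))
```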
Among the 10 dimensions, the first two similarity dimensions strongly predicted memory interference errors. However, their influence on behavior changed sharply with experience: one dimension drove interference early in learning, whereas the other drove interference later.
October 14, 2025 at 4:48 PM
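One way the early-vs-late switch described above could be tested (a sketch under assumed data, not necessarily the authors' model) is a logistic regression of interference errors on the two similarity dimensions with a dimension-by-learning-stage interaction. The file name and columns below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical trial-level data: one row per retrieval attempt
# dim1, dim2   : similarity of the probed item to its competitor along dimensions 1 and 2
# late         : 0 = early in learning, 1 = late in learning
# interference : 1 = interference error, 0 = correct
df = pd.read_csv("trials.csv")  # assumed file and columns, for illustration only

# does each dimension's effect on errors depend on learning stage?
model = smf.logit("interference ~ dim1 * late + dim2 * late", data=df).fit()
print(model.summary())  # dim1:late and dim2:late terms index the early-vs-late switch
```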
First, we generated a set of natural scene images from two visual categories and rigorously characterized their similarity using a wide array of methods. We then applied PCA to the resulting similarity matrices to identify orthogonal components (dimensions) of similarity across the 10 metrics.
October 14, 2025 at 4:48 PM
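A rough sketch of this PCA step, assuming each of the 10 metrics yields an item-by-item similarity matrix: vectorize the unique pairs of each matrix, stack them as features, and run PCA to obtain orthogonal similarity dimensions. Shapes and variable names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def similarity_dimensions(sim_matrices, n_components=10):
    """sim_matrices: list of (n_items, n_items) similarity matrices, one per metric.

    Returns pairwise scores on orthogonal similarity dimensions and the
    variance explained by each dimension.
    """
    n_items = sim_matrices[0].shape[0]
    iu = np.triu_indices(n_items, k=1)                    # unique item pairs
    X = np.column_stack([m[iu] for m in sim_matrices])    # (n_pairs, n_metrics)
    X = StandardScaler().fit_transform(X)                 # put metrics on a common scale
    pca = PCA(n_components=n_components)
    return pca.fit_transform(X), pca.explained_variance_ratio_

# toy example: 10 random "metrics" over 50 items (hypothetical)
rng = np.random.default_rng(0)
mats = [np.corrcoef(rng.normal(size=(50, 30))) for _ in range(10)]
scores, evr = similarity_dimensions(mats)
print(scores.shape, evr.round(2))
```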
We also found that neural evidence for perception, sustained attention, and selective attention was higher for events preceded by a hit than for events preceded by a miss. Similar results were found when comparing events based on the next event's encoding success. 8/11
March 24, 2025 at 7:04 PM
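The comparison above could, in principle, be set up as a within-participant paired test: average each source's classifier evidence separately for events preceded by a hit versus a miss, then compare across participants. A generic sketch with made-up array layouts, not the authors' statistics.

```python
import numpy as np
from scipy import stats

def paired_hit_miss_test(evidence, prev_hit):
    """evidence : list of (n_trials,) arrays of one source's classifier evidence, one per participant
    prev_hit : list of (n_trials,) boolean arrays, True if the preceding event was a hit
    Compares mean evidence for preceded-by-hit vs preceded-by-miss events across participants."""
    hit_means = np.array([ev[h].mean() for ev, h in zip(evidence, prev_hit)])
    miss_means = np.array([ev[~h].mean() for ev, h in zip(evidence, prev_hit)])
    return stats.ttest_rel(hit_means, miss_means)  # within-participant paired t-test
```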
We investigated whether the sources' involvement fluctuated depending on how long the participant had been performing the encoding task. We found that as time-on-task increased, the level of perception, sustained attention, and selective attention significantly decreased for both hits and misses. 7/11
March 24, 2025 at 7:04 PM
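A simple way to picture the time-on-task analysis (a sketch under my own assumptions, not the authors' code): regress each source's trial-level evidence on trial position within the encoding task, separately for hits and misses; a negative slope corresponds to the reported decline.

```python
import numpy as np

def time_on_task_slope(evidence, trial_index):
    """Slope of source evidence over time-on-task for one participant.

    evidence    : (n_trials,) classifier evidence for one source (e.g., perception)
    trial_index : (n_trials,) position of each trial in the encoding task
    """
    slope, _ = np.polyfit(trial_index, evidence, deg=1)
    return slope  # negative slope = evidence decreases with time-on-task

# applied separately to hit trials and miss trials, the per-participant
# slopes could then be compared to zero at the group level
```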
By leveraging the sources in a stepwise manner, we could quantify the extent to which each source improved memory classification performance. While not independent of one another, each of the three sources explained unique additional variance in encoding-related activity. 5/11
March 24, 2025 at 7:04 PM
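One reading of "leveraging the sources in a stepwise manner" is nested model comparison: add each source's evidence to the memory classifier one block at a time and track how cross-validated performance grows. The sketch below uses scikit-learn logistic regression with hypothetical feature blocks; it illustrates the idea rather than reproducing the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def stepwise_auc(X_sources, y):
    """X_sources: ordered dict mapping source name -> (n_trials, n_features) evidence features.
    y          : (n_trials,) remembered (1) vs forgotten (0).
    Returns cross-validated AUC after each source is added; the increase from one
    step to the next indexes that source's unique contribution."""
    auc_by_step, blocks = {}, []
    for name, X in X_sources.items():
        blocks.append(X)
        stacked = np.column_stack(blocks)
        auc_by_step[name] = cross_val_score(
            LogisticRegression(max_iter=1000), stacked, y,
            cv=5, scoring="roc_auc").mean()
    return auc_by_step
```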
We found that this multidimensional treatment of memory decoding improved prediction performance compared to traditional, unidimensional methods. 4/11
March 24, 2025 at 7:04 PM
Using a machine learning approach known as “transfer learning”, we leveraged visual perception, sustained attention, and selective attention brain states (i.e., the "sources") to better predict episodic memory performance from trial-to-trial encoding electroencephalography (EEG) activity. 3/11
March 24, 2025 at 7:04 PM
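A rough sketch of the transfer-learning idea, under my own assumptions about the setup: train separate classifiers on EEG from perception and attention tasks (the "sources"), then apply them to the encoding-task EEG so that their trial-by-trial outputs become extra features for predicting subsequent memory. Function names, feature shapes, and the use of scikit-learn are illustrative; the original work may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def source_evidence(source_X, source_y, encoding_X):
    """Train a source classifier (e.g., sustained attention) on its own task,
    then transfer it to the encoding task by scoring each encoding trial."""
    clf = LogisticRegression(max_iter=1000).fit(source_X, source_y)
    return clf.predict_proba(encoding_X)[:, 1]        # per-trial source evidence

def memory_decoder(encoding_X, source_tasks, remembered):
    """source_tasks: list of (source_X, source_y) pairs, one per source."""
    evid = np.column_stack([source_evidence(sx, sy, encoding_X)
                            for sx, sy in source_tasks])
    features = np.column_stack([encoding_X, evid])    # EEG features + transferred evidence
    return LogisticRegression(max_iter=1000).fit(features, remembered)
```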