Cogan Lab
@coganlab.bsky.social
The Cogan Lab at Duke University: Investigating speech, language, and cognition using invasive human neural electrophysiology
http://coganlab.org
Stop by to say hello and see some great science!
#SfN2025 #Neuroscience #neuroskyence
November 10, 2025 at 8:55 PM
Lastly (not least):

Wed. Nov 19 8am-12pm: 411.11 / MM10

Sensory-motor mechanisms for verbal working memory*

Postdoc Baishen Liang will be presenting his work on sensory-motor transformations for vWM
@gregoryhickok.bsky.social

*Also presenting at APAN
November 10, 2025 at 8:55 PM
Next:

Mon. Nov 17 8am-12pm: 173.10 / S11

Multimodal sensory-motor transformations for speech

@dukeengineering.bsky.social PhD Student Areti Majumdar will be presenting her work on multimodal sensory-motor transformations for speech
November 10, 2025 at 8:55 PM
Then:

Sun. Nov 16 1pm-5pm: 142.11 / LL17

Computational hierarchies of intrinsic neural timescales for speech perception and production

Former CRS @nicoleliddle.bsky.social (now at UCSD Cog Sci) will be presenting her work on intrinsic timescales and speech perception/production
November 10, 2025 at 8:55 PM
Next:

Sun. Nov 16 1pm-5pm: 142.06 / LL12

Hierarchical Speech Encoding in Non-Primary Auditory Regions*

Postdoc Nanlin Shi will be presenting his work on speech encoding in non-canonical areas

*Also presenting at APAN
November 10, 2025 at 8:55 PM
Then:

Sun. Nov 16 1pm-5pm: 142.05 / LL11

Verbal working memory is subserved by distributed network activity between temporal and frontal lobes

Former Neurosurgery Resident Daniel Sexton (now at @stanfordnsurg.bsky.social) will be presenting his work on network decoding of verbal WM
November 10, 2025 at 8:55 PM
Next:

Sun. Nov 16 1pm-5pm: 137.10 / HH2

Intracranial EEG Correlates of Concurrent Demands on Cognitive Stability and Flexibility

Undergraduate Erin Burns and CNAP PhD Student Jim Zhang will present work from our lab and @tobiasegner.bsky.social Lab on cognitive control
November 10, 2025 at 8:55 PM
First up:

Sun. Nov 16 1pm-5pm: 126.20 / T11

Automated speech annotation achieves manual-level accuracy for neural speech decoding

@dukeengineering.bsky.social PhD Student Zac Spalding and Duke Kunshan undergrad Ahmed Hadwan will present work on validating automated speech alignment for BCI
November 10, 2025 at 8:55 PM
Saturday Sept. 13 11am-12:30pm, Poster Session C

C54: Baishen Liang (Postdoctoral Associate) will be presenting his work on sensory-motor mechanisms for verbal working memory.

Hope to see you all there!
September 10, 2025 at 8:18 PM
Friday Sept. 12 4:30pm-6:00pm, Poster Session B

B70: Yuchao Wang (Rotation CNAP PhD Student) will be presenting his work on auditory pseudoword decoding in the hippocampus.
September 10, 2025 at 8:18 PM
❔3️⃣: In Figs. 4 and 5, do you obtain similar results if you operate directly on the spike trains instead of on the PCA-reduced spike trains? Why is PCA necessary first?

Thank you to the authors for your work!
cc: Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheynst
September 9, 2025 at 2:51 PM
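For context on this question, here is a minimal sketch (ours, not the authors' code) of the kind of preprocessing at issue: bin/smooth spike trains into firing rates, then apply PCA before any latent modeling. All shapes and parameter values below are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import PCA

# Hypothetical data: spike counts, shape (n_timebins, n_neurons).
rng = np.random.default_rng(0)
spikes = rng.poisson(lam=0.5, size=(1000, 120)).astype(float)

# Smooth each neuron's spike train with a Gaussian kernel (sigma in bins).
rates = gaussian_filter1d(spikes, sigma=5, axis=0)

# PCA denoises and caps the ambient dimension the manifold lives in,
# which is presumably why it precedes the model; the question is what
# changes if the model sees `rates` directly.
latents = PCA(n_components=10).fit_transform(rates)
print(latents.shape)  # (1000, 10)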
If separate animals were treated as separate manifolds with an embedding-agnostic MARBLE, would you still expect an informative latent space to be learned without any need for post-hoc alignment?
September 9, 2025 at 2:51 PM
❔2️⃣: It seems that a linear transformation between MARBLE representations of different animals was necessary because the same information is present in the latent space but not necessarily with the same ordering... (cont'd)
September 9, 2025 at 2:51 PM
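To make "linear transformation" concrete, here is a least-squares sketch (our illustration, not the paper's pipeline) of the post-hoc map in question, assuming two animals' latents are sampled at matched task timepoints:

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical MARBLE latents for two animals, rows = matched timepoints.
z_a = rng.normal(size=(500, 10))
mix = rng.normal(size=(10, 10))
z_b = z_a @ mix + 0.1 * rng.normal(size=(500, 10))  # same info, mixed axes

# Fit W minimizing ||z_a @ W - z_b||^2.
W, *_ = np.linalg.lstsq(z_a, z_b, rcond=None)
rel_err = np.linalg.norm(z_a @ W - z_b) / np.linalg.norm(z_b)
print(f"relative alignment error: {rel_err:.3f}")

If an embedding-agnostic variant learned a genuinely shared space, W should come out near a permutation of the identity rather than a dense mixing matrix.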
❔1️⃣: It is stated that non-neighbors (both within and across manifolds) are negative samples (mapped far) during the contrastive learning step. Does treating non-neighbors within and across manifolds as similarly “distant” lead to less interpretability of larger distances in latent space?
September 9, 2025 at 2:51 PM
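A toy margin loss (our illustration; MARBLE's actual objective may differ) shows where the worry comes from: every negative pair past the margin contributes zero loss, so nothing in training calibrates how far "far" is.

import numpy as np

def contrastive_loss(anchor, other, is_neighbor, margin=1.0):
    # Neighbors are pulled together; non-neighbors are pushed apart,
    # but only until they clear the margin.
    d = np.linalg.norm(anchor - other)
    if is_neighbor:
        return d ** 2
    return max(0.0, margin - d) ** 2

z = np.zeros(3)
for dist in (1.5, 5.0, 50.0):
    print(dist, contrastive_loss(z, z + [dist, 0.0, 0.0], is_neighbor=False))
# All three print 0.0: the objective does not distinguish a non-neighbor
# at distance 5 from one at distance 50, hence the interpretability question.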
🤍3️⃣: The comparisons to state-of-the-art latent dynamical systems models are great for properly contextualizing the performance of MARBLE.
September 9, 2025 at 2:51 PM
🤍1️⃣: The initial proximity graph is a clever way to define distance and neighborhoods between inputs that can be used for downstream training.
🤍2️⃣: The rotation invariance is important and likely useful for extracting shared latent representations from systems with minor differences.
September 9, 2025 at 2:51 PM
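As a concrete illustration of 🤍1️⃣, a k-nearest-neighbor proximity graph of the sort we have in mind takes a few lines (k and shapes are our illustrative choices, not the paper's):

import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(2)
points = rng.normal(size=(200, 10))  # hypothetical input samples

# Sparse adjacency: edge weights are Euclidean distances to each point's
# k nearest neighbors; everything without an edge is a "non-neighbor".
graph = kneighbors_graph(points, n_neighbors=15, mode="distance")
print(graph.shape, graph.nnz)  # (200, 200) 3000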
They find that MARBLE successfully decomposes complex dynamical activity from spike trains into informative and easily decodable latent representations. This 🧵 explores our thoughts (🤍 & ❔). www.nature.com/articles/s41...
MARBLE: interpretable representations of neural population dynamics using geometric deep learning - Nature Methods (www.nature.com)
MARBLE uses geometric deep learning to map dynamics such as neural activity into a latent representation, which can then be used to decode the neural activity or compare it across systems.
September 9, 2025 at 2:51 PM
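For anyone curious what "easily decodable" means operationally, a minimal sketch: fit a linear decoder from latent representations to a behavioral variable and score it held out. Everything here (shapes, ridge regression as the decoder) is our assumption, not the paper's pipeline.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
latents = rng.normal(size=(1000, 10))  # stand-in for MARBLE output
# Synthetic 2-D behavior driven by the first two latent dimensions.
behavior = latents[:, :2] @ rng.normal(size=(2, 2)) + 0.1 * rng.normal(size=(1000, 2))

z_tr, z_te, y_tr, y_te = train_test_split(latents, behavior, random_state=0)
decoder = Ridge(alpha=1.0).fit(z_tr, y_tr)
print(f"held-out R^2: {decoder.score(z_te, y_te):.2f}")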