Cogan Lab
@coganlab.bsky.social
The Cogan Lab at Duke University: Investigating speech, language, and cognition using invasive neural human electrophysiology http://coganlab.org
Stop by to say hello and see some great science!
#Sfn2025 #Neuroscience #neuroskyence
Last (but not least):

Wed. Nov 19 8am-12pm: 411.11 / MM10

Sensory-motor mechanisms for verbal working memory*

Postdoc Baishen Liang will be presenting his work on sensory-motor transformations for vWM, with @gregoryhickok.bsky.social

*Also presenting at APAN
Next:

Mon. Nov 17 8am-12pm: 173.10 / S11

Multimodal sensory-motor transformations for speech

@dukeengineering.bsky.social PhD Student Areti Majumdar will be presenting her work on multimodal sensory-motor transformations for speech
Then:

Sun. Nov 16 1pm-5pm: 142.11 / LL17

Computational hierarchies of intrinsic neural timescales for speech perception and production

Former CRS @nicoleliddle.bsky.social (now at UCSD Cog Sci) will be presenting her work on intrinsic timescales and speech perception/production
Next:

Sun. Nov 16 1pm-5pm: 142.06 / LL12

Hierarchical Speech Encoding in Non-Primary Auditory Regions*

Postdoc Nanlin Shi will be presenting his work on speech encoding in non-canonical areas

*Also presenting at APAN
Then:

Sun. Nov 16 1pm-5pm: 142.05 / LL11

Verbal working memory is subserved by distributed network activity between temporal and frontal lobes

Former Neurosurgery Resident Daniel Sexton (now at @stanfordnsurg.bsky.social ) will be presenting his work on network decoding of verbal WM
Next:

Sun. Nov 16 1pm-5pm: 137.10 / HH2

Intracranial EEG Correlates of Concurrent Demands on Cognitive Stability and Flexibility

Undergraduate Erin Burns and CNAP PhD Student Jim Zhang will present work from our lab and @tobiasegner.bsky.social Lab on cognitive control
First up:

Sun. Nov 16 1pm-5pm: 126.20 / T11

Automated speech annotation achieves manual-level accuracy for neural speech decoding

@dukeengineering.bsky.social PhD Student Zac Spalding and Duke Kunshan undergrad Ahmed Hadwan will present work on validating automated speech alignment for BCI
Coming to San Diego for SfN and/or APAN? Come check out the intracranial work from the lab (7 posters)! There's a bit of everything this year, so come say hello!
#Sfn2025 #Neuroscience #neuroskyence
@dukebrain.bsky.social @dukeneurosurgery.bsky.social @dukeengineering.bsky.social
Come by tomorrow morning to hear about verbal working memory!
Saturday Sept. 13 11am-12:30pm, Poster Session C

C54: Baishen Liang (Postdoctoral Associate) will be presenting his work on sensory-motor mechanisms for verbal working memory.

Hope to see you all there!
Stop by this afternoon to see some intracranial speech decoding in the hippocampus and to say hello!
Friday Sept 12 4:30pm-6:00pm, Poster Session B

B70: Yuchao Wang (Rotation CNAP PhD Student) will be presenting his work on auditory pseudoword decoding in the hippocampus.
Coming to DC for SNL later this week?

Come check out our posters on speech decoding and verbal working memory using intracranial recordings!

@snlmtg.bsky.social
#SNL2025
❔3️⃣: In Figs. 4 and 5, do you obtain similar results if you operate directly on the spike trains instead of on the PCA-reduced spike trains? Why is PCA necessary first?

Thank you to the authors for your work!
cc: Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheynst
❔2️⃣: It seems that a linear transformation between MARBLE representations of different animals was necessary because the same information is present in the latent space but not necessarily with the same ordering. If separate animals were treated as separate manifolds with an embedding-agnostic MARBLE, would you still expect an informative latent space to be learned without any need for post-hoc alignment?
❔1️⃣: It is stated that non-neighbors (both within and across manifolds) are negative samples (mapped far) during the contrastive learning step. Does treating non-neighbors within and across manifolds as similarly “distant” lead to less interpretability of larger distances in latent space?
🤍1️⃣: The initial proximity graph is a clever way to define distance and neighborhoods between inputs that can be used for downstream training.
🤍2️⃣: The rotation invariance is important and likely useful for extracting shared latent representations from systems with minor differences.
🤍3️⃣: The comparisons to state-of-the-art latent dynamical systems models are great for properly contextualizing the performance of MARBLE.
Reposted by Cogan Lab
Exciting work by PhD student Zac Spalding @zspald.bsky.social and multiple labs across departments at Duke! A combination of human electrophysiology, modeling/decoding, and cognitive concepts to improve BCI for speech in patients. 👏 🧠📈
We’re happy to present @zspald.bsky.social 's work on shared neural representations of speech production across individuals! We find that patient-specific data can be aligned to a shared space that preserves speech information, enabling cross-patient speech BCIs.
www.biorxiv.org/content/10.1...
From Cogan Lab Journal Club with @zspald.bsky.social
these decomposition acronyms are getting out of hand!