Neurospeech Team
@neurospeech.bsky.social
The "Neural coding and neuroengineering of human speech functions" team (Dir. Anne-Lise Giraud & Sophie Bouton) studies speech perception and production, to design new therapies for speech disorders. Part of Institut de l'Audition and IHU reConnect (Paris)
Pinned
The NeuroSpeech team studies, from a fundamental perspective, the computations that enable our brain to perceive and produce speech, in order to design targeted therapies for neurodevelopmental and acquired speech disorders.

More info about us and our projects ⬇️
Anne-Lise Giraud - Sophie Bouton - Neural coding and neuroengineering of human speech functions | Research - Institut Pasteur
research.pasteur.fr
Our review "Neuro-oscillatory models of cortical speech processing", authored by Olesia Dogonasheva, has finally been published. Check it out below ⬇️

#compneuro #neuroskyence #EEG
Neuro-oscillatory models of cortical speech processing
In this review, we examine computational models that explore the role of neural oscillations in speech perception, spanning from early auditory proces…
www.sciencedirect.com
October 15, 2025 at 12:27 PM
⬇️ If you haven't yet, check out our new Nat. Comput. Sci. article describing our Brain Rhythm-based Inference Model (BRyBI), which explores how neural oscillations can provide a critical computational framework for speech perception.

Below, an overview by @scienmag.bsky.social

#compneuro #neuroskyence
October 15, 2025 at 12:23 PM
Reposted by Neurospeech Team
🧠 What if sounds could help cure dyslexia?
- #MyPhDattheInstitutPasteur

Meet Olesia, a PhD student at the Institut de l'Audition, who is exploring an innovative approach to improving reading in dyslexic children: stimulating the brain with sounds.

More info in the comments 💡
🧠 Et si des sons pouvaient aider à guérir la dyslexie ? | My PhD at the Institut Pasteur
YouTube video by Institut Pasteur EDUCATION
youtube.com
October 14, 2025 at 8:35 AM
Check out a new article co-authored by our own Ludovica Veggiotti, exploring size-action and number-action associations in infancy ⬇️
Exploring size-action and number-action associations in infancy
In the last decades, a growing body of research has assessed the link between numerical and action processing. However, this relationship has not been…
www.sciencedirect.com
September 30, 2025 at 3:02 PM
Reposted by Neurospeech Team
🎉New Paper Alert!🎉

Really excited that this collaboration is finally out! It shows that combining oscillatory dynamics with hierarchical predictive frameworks yields speech perception that is robust to temporal distortions, comparable to human behavior and more robust than current ASR models.
September 29, 2025 at 10:01 AM
💡 Check out our new Point of View paper on the recent advances and future challenges of implantable neural speech decoders, aimed at restoring communication abilities in locked-in syndrome patients.

✒️S. Jhilal, @sissymarche.bsky.social, B. Thirion, B. Soudrie, A-L Giraud & E. Mandonnet
Implantable Neural Speech Decoders: Recent Advances, Future Challenges - Soufiane Jhilal, Silvia Marchesotti, Bertrand Thirion, Brigitte Soudrie, Anne-Lise Giraud, Emmanuel Mandonnet, 2025
The social life of locked-in syndrome (LIS) patients is significantly impacted by their difficulties to communicate. Consequently, researchers have started to e...
doi.org
September 23, 2025 at 11:58 AM
Two amazing half-days during the NeuroDecoder workshop at the Institut de l'Audition! We thank C. Herff, F. Guenther, G. Richard, A. Bittar, A. Gramfort, @oiwi3000.bsky.social, @adeenflinker.bsky.social & @juliaberezutskaya.bsky.social for their enriching presentations and discussion.
September 19, 2025 at 3:18 PM
Proud to co-organize the NeuroDecoder workshop at the Hearing Institute on September 18-19. We will discuss advanced methods for neural decoding of speech and auditory signals with some brilliant speakers. Info here: dim-cbrains.fr/en/news/news...
DIM C-BRAINS
Cognition and Brain Revolutions: Artificial Intelligence, Neurogenomics, Society
dim-cbrains.fr
September 16, 2025 at 2:43 PM
Reposted by Neurospeech Team
The Institut Pasteur has opened the call for applications to its doctoral program. There are 5 projects focused on hearing, and my team is involved in two of them!

1) Neural oscillatory dynamics in the temporal processing of speech

2) Algorithms for speech intelligibility in cochlear implants

DM me if interested!

www.pasteur.fr/ppu
Pasteur-Paris University International doctoral program​ (PPU)
Overview: In 2009, the Institut Pasteur, the world-leading biomedical research institute founded by Louis Pasteur in 1887, inaugurated the Pasteur Paris-University (PPU) international doctoral program i...
www.pasteur.fr
September 1, 2025 at 2:20 PM
Reposted by Neurospeech Team
We are deeply saddened to share that our friend and colleague Jim Hudspeth passed away on Saturday. We will remember and continue to be inspired by Jim’s integrity, his humility, and his unwavering commitment to discovery.
A. James Hudspeth, neuroscientist who unlocked secrets of hearing, has died - News
A. James Hudspeth, a Rockefeller neuroscientist who discovered how sound waves are converted into electrical signals in the ear's cochlea, died Saturday at his home in Manhattan. A pioneering scientist and dedicated mentor, he was the university's F.M. ...
www.rockefeller.edu
August 18, 2025 at 7:59 PM
Reposted by Neurospeech Team
New paper in Imaging Neuroscience by Marta Xavier, Patrícia Figueiredo, et al:

Consistency of resting-state correlations between fMRI networks and EEG band power

doi.org/10.1162/IMAG...
June 25, 2025 at 10:14 PM
Reposted by Neurospeech Team
Light propofol anaesthesia for non-invasive auditory EEG recording in unrestrained non-human primates. https://www.biorxiv.org/content/10.1101/2025.03.24.644890v1
March 24, 2025 at 4:17 PM
Reposted by Neurospeech Team
@scone-neuro.bsky.social has a great review led by @annekeitel.bsky.social. We summarize major oscillatory mechanisms and discuss open questions across a range of subfields in cog neuro. The preprint should be out soon.
July 21, 2025 at 5:03 PM
Reposted by Neurospeech Team
/1 We took our sweet time (~3 yrs) to put this into its final shape - but happy to say that the preprint of an extensive review of brain rhythms in cognition - from a cogneuro perspective - is now available. Please let us know what you think. #neuroskyence doi.org/10.48550/arX...
Brain rhythms in cognition -- controversies and future directions
Brain rhythms seem central to understanding the neurophysiological basis of human cognition. Yet, despite significant advances, key questions remain unresolved. In this comprehensive position paper, w...
doi.org
July 22, 2025 at 12:31 PM
Reposted by Neurospeech Team
Congratulations to Brice Bathellier (Institut Pasteur and @cnrs.fr), head of the “Auditory System Dynamics & Multisensory Perception” team at the Hearing Institute, for securing the ERC Proof of Concept 2025 for his project BRAINCODER! 👏

@bathellierlab.bsky.social

#Neuroscience
July 15, 2025 at 1:19 PM
Reposted by Neurospeech Team
🚨 Only 1 week left to apply! Hearing: from mechanisms to restoration technologies 🎧

Final call for this advanced Pasteur Course on hearing science and auditory restoration.

🎯For Master’s students, PhD candidates, clinicians & hearing professionals.

Apply now 👉 bit.ly/HearingCourse
July 8, 2025 at 11:44 AM
New preprint from our team!

People with tinnitus often complain about attention difficulties, so we had them perform several tasks (n=200). After correcting for comorbidities, we observed no deficits in selective attention or executive functions, but participants exhibited poorer control of arousal.
Tinnitus perception is linked to arousal system dysfunction
Tinnitus, the perception of sound in the absence of an external source, affects 14% of the population and is often associated with concentration and emotional difficulties. However, the characterizati...
doi.org
July 3, 2025 at 2:24 PM
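A minimal sketch of what the comorbidity correction described above could look like in practice. This is not the preprint's actual pipeline; the data file and the column names (attention_score, group, anxiety, hearing_loss, age) are illustrative assumptions.

    # Hypothetical covariate-adjusted group comparison; variable names are
    # illustrative assumptions, not the preprint's actual measures.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("tinnitus_tasks.csv")  # hypothetical per-participant data

    # Does tinnitus group membership still predict task performance once
    # comorbidities (anxiety, hearing loss) and age are accounted for?
    model = smf.ols(
        "attention_score ~ C(group) + anxiety + hearing_loss + age",
        data=df,
    ).fit()
    print(model.summary())  # inspect the adjusted group coefficient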
💡 Check out our new preprint (by @sophiebouton.bsky.social, with @valerian-chambon.bsky.social @nargolestani.bsky.social). We organize modeling approaches along two axes (covariance and temporal dependencies) that shape how they align with specific scientific goals in language neuroscience.
OSF
doi.org
June 30, 2025 at 4:39 PM
Reposted by Neurospeech Team
No poster photo this time, but @fens.org Regional Meeting in Oslo was great! #FRM2025 Thank you @exppsychsoc.bsky.social for supporting me with a travel grant!
June 27, 2025 at 1:34 PM
Reposted by Neurospeech Team
👏 Congratulations to Anne-Lise Giraud, Director of the Institut de l'Audition – IHU reConnect, awarded the 2025 CNRS silver medal! A well-deserved recognition of her work in cognitive neuroscience on hearing and language. @cnrsbiologie.bsky.social ↘️ www.insb.cnrs.fr/fr/personne/...
June 18, 2025 at 7:10 AM
Reposted by Neurospeech Team
🧬 The Institut Pasteur is getting a fresh look!

New brand platform, new visual identity, new tagline: "Pour chaque vie, la science agit" ("For every life, science acts") 💙

#InstitutPasteur
June 12, 2025 at 7:53 AM
Reposted by Neurospeech Team
Very happy to share a new preprint, written with people I have learned a lot from: Paola Cerrito, Carel van Schaik, Judith Burkart, Anne-Lise Giraud (@neurospeech.bsky.social), Daphné Bavelier, @balthasarbickel.bsky.social: how the communication system of Neanderthals may have differed from ours
🧪🧵
Pleistocene origins of cultural and linguistic diversification: how Homo sapiens and Neanderthals differed
ecoevorxiv.org
June 2, 2025 at 6:12 AM
Reposted by Neurospeech Team
"mood, sleep, neuroticism, and hearing emerged as key modifiable factors influencing how tinnitus is experienced, offering actionable targets for clinical intervention." From: www.news-medical.net/news/2025050... #tinnitus #keeplistening
AI reveals how hearing, mood, and sleep predict who suffers most from tinnitus
Researchers used machine learning and UK Biobank data to identify predictors of subjective tinnitus presence and severity. Hearing health was the strongest factor, but mood, sleep, and neuroticism sha...
www.news-medical.net
May 13, 2025 at 4:01 PM
Reposted by Neurospeech Team
Our new @NatureComms paper: We used UK Biobank data (n≈193K) to build ML models predicting tinnitus presence (driven by hearing health) and severity (influenced by mood, neuroticism & sleep). A simple 6-item POST questionnaire forecasts 9-yr outcomes. shorturl.at/4AF4P
Tinnitus risk factors and its evolution over time - Nature Communications
Improving tinnitus prevention and clinical management by identifying key associated risk factors is crucial. Here, the authors use machine learning in a large cohort to identify key predictors of tinn...
shorturl.at
May 9, 2025 at 5:22 PM
Check out the new publication in @natcomms.nature.com by L. Hobeika & S. Samson (in collaboration with @evp82.bsky.social). ML models trained on a large database reveal the risk factors for tinnitus presence and severity and helped design a 6-item questionnaire to predict clinical outcomes. #tinnitus
Tinnitus risk factors and its evolution over time - Nature Communications
Improving tinnitus prevention and clinical management by identifying key associated risk factors is crucial. Here, the authors use machine learning in a large cohort to identify key predictors of tinn...
www.nature.com
May 20, 2025 at 9:15 AM
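As a rough illustration of the two-model setup described in the tinnitus posts above (one model for presence, one for severity), here is a hedged sketch. The data file, feature names, and gradient-boosting estimators are assumptions, not the study's actual UK Biobank pipeline.

    # Hypothetical sketch: separate models for tinnitus presence and severity.
    # Feature names and estimator choices are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("biobank_subset.csv")  # hypothetical data extract
    features = ["hearing_difficulty", "mood_score", "neuroticism", "sleep_quality", "age"]

    # Presence (binary): expected to be driven mostly by hearing health.
    presence_auc = cross_val_score(
        GradientBoostingClassifier(), df[features], df["tinnitus_present"],
        cv=5, scoring="roc_auc",
    ).mean()

    # Severity (continuous, among those reporting tinnitus): expected to depend
    # more on mood, neuroticism, and sleep.
    with_tinnitus = df[df["tinnitus_present"] == 1]
    severity_r2 = cross_val_score(
        GradientBoostingRegressor(), with_tinnitus[features], with_tinnitus["severity"],
        cv=5, scoring="r2",
    ).mean()

    print(f"presence AUC: {presence_auc:.2f}, severity R2: {severity_r2:.2f}")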