Léo Varnet
@leovarnet.bsky.social
CNRS researcher at École normale supérieure Paris. Auditory perception, psycholinguistics, hearing loss.
Mastodon: @LeoVarnet@fediscience.org
Hello everyone!
A master's student on my team has developed a very nice vowel-perception experiment, applying the concept of Markov chain Monte Carlo to human behavior.
We are looking for a few volunteers to take part in a short (~15-20 min) pilot test. If you're willing to help, the
July 22, 2025 at 1:00 PM
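The "Markov chain Monte Carlo applied to human behavior" idea above can be sketched as a toy simulation, in the spirit of "MCMC with People": the participant's two-alternative choice acts as the chain's acceptance step. Everything below (the simulated Gaussian listener, the 1-D "formant" axis, all parameter values) is an illustrative assumption, not a detail of the actual experiment.

```python
import math
import random

# Toy "MCMC with People" sketch: on each trial the participant chooses
# between the current stimulus and a proposed one. That choice plays the
# role of a Metropolis-style acceptance rule, so the stimuli the chain
# visits end up distributed like the listener's internal vowel category.

def listener_chooses_proposal(proposal, current, mu=500.0, sigma=50.0):
    """Simulated listener: prefers the stimulus more typical of the category
    (a Gaussian over one acoustic dimension, in Hz), with Luce-choice
    (Barker) probabilities."""
    p_prop = math.exp(-((proposal - mu) ** 2) / (2 * sigma**2))
    p_cur = math.exp(-((current - mu) ** 2) / (2 * sigma**2))
    return random.random() < p_prop / (p_prop + p_cur)

def mcmcp_chain(n_trials=5000, start=300.0, step=30.0, seed=1):
    random.seed(seed)
    x, visited = start, []
    for _ in range(n_trials):
        proposal = x + random.gauss(0.0, step)
        if listener_chooses_proposal(proposal, x):
            x = proposal  # the chosen stimulus becomes the chain's new state
        visited.append(x)
    return visited

samples = mcmcp_chain()[1000:]      # drop burn-in before summarizing
print(sum(samples) / len(samples))  # close to the category mean (500 Hz here)
```

With Barker acceptance, the chain's stationary distribution is exactly the simulated listener's category distribution, which is why the visited stimuli reveal its mean without ever asking the participant to describe it.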
Yesterday I defended my 'Habilitation à diriger des Recherches'! Thanks to those who attended! And thank you to a wonderful jury: J. Gervain, A. Calcus, I. Chitoran, J.J. Aucouturier, E. Gaudrain, C. Lorenzi.
May 27, 2025 at 4:46 AM
Not only did we reimplement and replicate a 50-year-old study within a fully #opensource framework (github.com/LeoVarnet/fa...), but we also analyzed the data with modern computational tools to reveal the "mental representation" of a tone in noise at different SNRs
May 14, 2025 at 2:31 PM
The paper is now online and #openaccess on @hal_fr : "50 Years of Reverse Correlation: Replicating Ahumada et al.’s Pioneering Study" hal.science/hal-05060148
May 14, 2025 at 2:31 PM
We successfully replicated the main finding, but were unable to reproduce some secondary results, such as the general performance levels. This is likely due to the presence of an intensive training session in the original study, a methodological detail not mentioned in the paper.
May 14, 2025 at 2:31 PM
Last year I decided to conduct a replication of a milestone study in psychoacoustics, as I needed an example to showcase the capabilities of the toolbox we are developing. I turned it into a small internship project.
May 14, 2025 at 2:31 PM
Today we held a day of presentations on the research conducted within our department, aimed at our technical and administrative staff (ITAs). A great "internal communication" initiative.
My talk: "La voix humaine: from Jean Cocteau to psycholinguistics"
March 25, 2025 at 12:56 PM
A great turnout for #standupforscience Paris
March 7, 2025 at 6:17 PM
Today I enjoyed some free time before the workshop and visited the Hundertwasser Museum. Did you know that Hundertwasser was one of the first lithographers to keep an exact record of the different versions of his prints? In this detail from "10002 Nights Homo Humus Come Va How Do You Do", the
February 16, 2025 at 4:45 PM
In a nutshell, we found that, as expected, each listener relied on multiple cues, but there was substantial variability in the specific cues used for a given phoneme across individuals. [12/X]
November 15, 2024 at 4:32 PM
By analyzing participants' responses, we generated spectrotemporal maps of the information used to identify each phoneme. These maps are highly detailed and individualized, showing which features each listener relies on. [11/X]
November 15, 2024 at 4:32 PM
We then analyzed how any particular instance of noise could mislead the participant into hearing /aba/ or /ada/ more often. This approach allowed us to identify what information listeners are "attending to," as those features will be most sensitive to the presence of noise. [10/X]
November 15, 2024 at 4:32 PM
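The logic of these posts can be illustrated with a toy reverse-correlation analysis: average the noise patterns separately by response and take the difference. This is a deliberately minimal sketch with a simulated listener and made-up dimensions, not the published analysis pipeline.

```python
import numpy as np

# Toy reverse correlation: wherever the response-conditioned mean noise
# differs, noise energy systematically pushed the simulated listener toward
# one percept, i.e. the listener was "attending" to that time-frequency cell.

rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 2000, 16, 10

# Simulated listener weighs a single time-frequency cell (illustrative).
template = np.zeros((n_freq, n_time))
template[8, 4] = 1.0

noises = rng.normal(size=(n_trials, n_freq, n_time))
# Respond "ada" (1) when the noise energy in the cue region is positive.
responses = (np.tensordot(noises, template) > 0).astype(int)

# Classification image: difference of the mean noise per response.
ci = noises[responses == 1].mean(axis=0) - noises[responses == 0].mean(axis=0)
peak = np.unravel_index(np.abs(ci).argmax(), ci.shape)
print(peak)  # recovers the cue cell the simulated listener used
```

The same difference-of-means map, computed per participant on real noise spectrograms, is what gives the "highly detailed and individualized" maps described above.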
We had participants categorize French stop consonants in noisy conditions. For example, a listener would hear the sounds /aba/ or /ada/ at high noise levels and report which target they perceived. [9/X]
November 15, 2024 at 4:32 PM
The status quo in the phonetic literature is quite confusing. Introductory courses often teach that b-d-g perception is driven by the F2 transition, i.e., acoustic information in the mid-frequency range. [3/X]
November 15, 2024 at 4:32 PM
I will tell you more about this research when it gets published, but here's a glimpse of our main finding. This colorful figure shows the acoustic cues listeners rely on to differentiate the sounds /aba/, /ada/, /aga/, /apa/, /ata/, /aka/.
June 29, 2024 at 2:22 PM
There's a LOT more in the paper: the link with phonetic features, simulations using an auditory model, and a reference to our toolbox for replicating or reproducing any part of this work, from collecting the data to plotting the figures. The article will soon be available in open access; I'll share the link here! (7/X)
February 19, 2024 at 8:50 AM
While we usually think of noise as merely confusing the listener, resulting in random errors, our study presents a different perspective: we revealed that approximately 10%-20% of these errors are not random but rather predictable and reproducible. (6/X) 🧪
February 19, 2024 at 8:26 AM
This revealed a "systematic effect of noise". Although two instances of white noise are nearly indistinguishable, they can affect comprehension differently due to subtle acoustic differences. Depending on the distribution of noise energy, they will consistently bias perception towards /aba/ or /ada/. (5/X)
February 19, 2024 at 8:13 AM
Unlike conventional analysis methods, we opted against averaging results across numerous speech-in-noise trials. Instead, we focused on predicting the individual listener's response on a trial-by-trial basis. (4/X)
February 19, 2024 at 7:59 AM
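The trial-by-trial idea can be sketched with a toy logistic-regression fit: predict each single-trial response from the noise features, and read the fitted weights as the listener's cue map. The simulated data, feature count, and plain gradient-ascent fit are all illustrative assumptions, not the paper's actual (regularized GLM) analysis.

```python
import numpy as np

# Toy trial-by-trial analysis: instead of averaging across trials, fit a
# logistic model of each single-trial response from the noise features.
# The weight vector then shows which noise feature drives the responses.

rng = np.random.default_rng(1)
n_trials, n_feat = 4000, 40
true_w = np.zeros(n_feat)
true_w[7] = 2.0                       # one informative noise feature (illustrative)

X = rng.normal(size=(n_trials, n_feat))             # noise features per trial
p = 1 / (1 + np.exp(-X @ true_w))
y = rng.random(n_trials) < p                        # simulated /ada/ responses

w = np.zeros(n_feat)
for _ in range(300):                                # logistic fit by gradient ascent
    grad = X.T @ (y - 1 / (1 + np.exp(-X @ w))) / n_trials
    w += 2.0 * grad

print(int(np.abs(w).argmax()))                      # recovers the informative feature
```

Because the model is scored on individual trials rather than trial averages, it can quantify how much of each "error" is actually a reproducible response to the specific noise instance.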
Usually, scientists consider that auditory masking occurs when some parts of the speech signal are no longer audible because of the noise. Here we showed that this is not the whole story. (3/X)
February 19, 2024 at 7:56 AM
Our aim was to investigate the impact of background noise on speech comprehension. We used a minimalistic experimental design: two speech sounds (/aba/ and /ada/) embedded in a meaningless noise (e.g. white noise) (2/X)
February 19, 2024 at 7:50 AM
It was a long process (more than two years since we wrote the preregistration document), but our new study was finally published in JASA yesterday! The paper itself is rather technical, but the central idea is worth sharing in a thread. (1/X)
pubmed.ncbi.nlm.nih.gov/38364046/
February 19, 2024 at 7:47 AM
Following the media coverage of our article on inclusive writing, I have received many questions and comments. It seemed useful to answer the most frequent ones in the form of an FAQ: dbao.leo-varnet.fr/2023/12/19/r...
December 29, 2023 at 1:38 PM