Magdalena Kachlicka
@mkachlicka.bsky.social
Postdoctoral Researcher @unibe.ch https://neuro.inf.unibe.ch & Honorary Research Fellow @birkbeckpsychology.bsky.social @audioneurolab.bsky.social | speech + sounds + brains 🧠 cogsci, audio, neuroimaging, language, methods https://mkachlicka.github.io
These results suggest that perceptual strategies are shaped by the reliability of encoding at early stages of the auditory system. 🧵5/5
February 7, 2026 at 8:56 AM
We find that neural tracking of pitch is linked to pitch cue weighting during word emphasis and lexical stress perception. Specifically, higher pitch weighting is linked to increased tracking of pitch at early latencies within the neural response, from 15 to 55 ms. 🧵4/5
February 7, 2026 at 8:56 AM
Here, we tested the hypothesis that the reliability of early auditory encoding of a given dimension is linked to the weighting placed on that dimension during speech categorization. We examined this in 60 first-language (L1) speakers of Mandarin learning English as a second language. 🧵3/5
February 7, 2026 at 8:55 AM
Linguistic categories are conveyed in speech by many acoustic cues at the same time, but not all of them are equally important. There are clear and replicable individual differences in how people use those cues during speech perception, but the underlying mechanisms are unclear. 🧵2/5
February 7, 2026 at 8:55 AM
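For readers curious how the brain-behaviour link described in the thread above could be quantified, here is a minimal sketch in Python. It assumes a temporal response function (TRF) analysis with MNE's ReceptiveField over the 15-55 ms latency window mentioned in the thread; the thread does not state the actual pipeline, and all data, variable names, and parameters below are placeholders.

# Minimal sketch (not the authors' pipeline): relate neural tracking of a
# pitch (F0) contour, estimated with a TRF over 15-55 ms lags, to
# behavioural pitch cue weights across participants.
# All data below are random placeholders.
import numpy as np
from scipy.stats import pearsonr
from mne.decoding import ReceptiveField

sfreq = 128                                    # EEG sampling rate (Hz), assumed
n_times, n_channels, n_subjects = sfreq * 60, 32, 60

tracking = []
for _ in range(n_subjects):
    pitch = np.random.randn(n_times, 1)        # stimulus F0 contour (placeholder)
    eeg = np.random.randn(n_times, n_channels) # EEG response (placeholder)
    trf = ReceptiveField(tmin=0.015, tmax=0.055, sfreq=sfreq,
                         estimator=1.0, scoring='corrcoef')
    trf.fit(pitch, eeg)                        # in practice, cross-validate
    tracking.append(trf.score(pitch, eeg).mean())

pitch_weights = np.random.rand(n_subjects)     # behavioural cue weights (placeholder)
r, p = pearsonr(tracking, pitch_weights)       # brain-behaviour correlation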
Reposted by Magdalena Kachlicka
📜🎉 Our project on aperiodic neural activity during sleep, led by the wonderful @mosameen.bsky.social, is now published!

This project shows how time-resolved measures of aperiodic neural activity track changes in sleep stages, plus lots of other analyses in iEEG & EEG!

www.nature.com/articles/s44...
Temporally resolved analyses of aperiodic features track neural dynamics during sleep - Communications Psychology
Sleep involves dynamic changes in brain activity that unfold over time, reflected in the brain’s aperiodic EEG patterns. Incorporating the spectral ‘knee’—a bend in the EEG power spectrum—reveals stag...
www.nature.com
November 20, 2025 at 5:07 PM
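As a rough illustration of the spectral 'knee' parameterization described in the reposted paper above, here is a minimal sketch in Python using the specparam/FOOOF toolbox in sliding windows. The window length, step, frequency range, and data are illustrative assumptions, not the paper's actual settings.

# Minimal sketch (illustrative only): time-resolved aperiodic fit with a
# spectral knee on one EEG channel, computed in sliding windows.
import numpy as np
from scipy.signal import welch
from fooof import FOOOF

fs = 250                                  # sampling rate (Hz), assumed
win, step = 30 * fs, 10 * fs              # 30 s windows, 10 s steps, assumed
eeg = np.random.randn(600 * fs)           # placeholder for one EEG channel

knees, exponents = [], []
for start in range(0, len(eeg) - win, step):
    freqs, psd = welch(eeg[start:start + win], fs=fs, nperseg=4 * fs)
    fm = FOOOF(aperiodic_mode='knee', verbose=False)
    fm.fit(freqs, psd, freq_range=[1, 45])
    offset, knee, exponent = fm.get_params('aperiodic_params')
    knees.append(knee)
    exponents.append(exponent)
# The resulting knee/exponent time courses can then be compared with
# hypnogram (sleep-stage) labels.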
Together, these results suggest that the precision with which people perceive and remember sound patterns plays a major role in how well they understand accented speech, and that auditory training may help listeners who struggle. 🧵5/5
February 3, 2026 at 9:44 AM
Native English speakers who were better at understanding the accent were also better at detecting pitch differences, remembering sound patterns, and attending to pitch. Musical training helped as well. Better speech perception was also linked to stronger neural encoding of speech harmonics. 🧵4/5
February 3, 2026 at 9:44 AM
In this study, we asked L1 English speakers to listen to the prosody of Mandarin-accented English. We found that some listeners are better at understanding accented speech than others. 🧵3/5
February 3, 2026 at 9:42 AM
Non-native speakers of English speak with varying degrees of accent. So far, research has focused mainly on factors that help learners communicate more effectively. But what about the listeners? Are there factors that make it easier for native listeners to understand accented speech? 🧵2/5
February 3, 2026 at 9:42 AM
🚨New paper🚨 about accented speech perception doi.org/10.1016/j.ba... by the brilliant Amir Ghooch Kanloo (an MSc student at the time!), together with myself, Kazuya Saito and @adamtierney.bsky.social, from fun times at @audioneurolab.bsky.social @birkbeckpsychology.bsky.social 🧵1/5
February 3, 2026 at 9:40 AM
Reposted by Magdalena Kachlicka
"The Human Insula Reimagined: Single Neurons Respond to Simple Sounds during Passive Listening"

Single neuron activity in the insula
#iEEG

in #JNeurosci @sfnjournals.bsky.social

www.jneurosci.org/content/46/4...
The Human Insula Reimagined: Single Neurons Respond to Simple Sounds during Passive Listening
The insula is critical for integrating sensory information from the body with that arising from the environment. Although previous studies suggested that posterior insula is sensitive to sounds, these...
www.jneurosci.org
January 29, 2026 at 9:58 AM
Reposted by Magdalena Kachlicka
New work from our lab showing the human frontal lobe receives fast, low-level speech information in **parallel** with early speech areas!

🧠🗣️

doi.org/10.1038/s414...
Parallel encoding of speech in human frontal and temporal lobes - Nature Communications
Whether high-order frontal lobe areas receive raw speech input in parallel with early speech areas in the temporal lobe is unclear. Here, the authors show that frontal lobe areas get fast low-level sp...
doi.org
January 22, 2026 at 2:27 AM
Reposted by Magdalena Kachlicka
If you haven't, you should, it's brilliant!
Have you heard about the Night Science Podcast, where we talk about the actual creative process of doing science? We explore this in discussions with brilliant scientists, as well as philosophers and artists, to figure out the tricks of the creative scientific trade.
podcasts.apple.com/us/podcast/n...
November 18, 2025 at 10:03 AM
Reposted by Magdalena Kachlicka
New preprint by Mika Nash and others on how selective attention affects neural tracking of prediction during ecologically valid music listening: www.biorxiv.org/content/10.1...
Neural tracking of melodic prediction is pre-attentive
Music’s ability to modulate arousal and manipulate emotions relies upon formation and violation of predictions. Music is often used to modulate arousal and mood while individuals focus on other tasks,...
www.biorxiv.org
November 4, 2025 at 4:09 PM
Reposted by Magdalena Kachlicka
As it's hiring season again I'm resharing the NeuroJobs feed. Add #NeuroJobs to your post if you're recruiting or looking for an RA, PhD, Postdoc, or faculty position in Neuro or an adjacent field.

bsky.app/profile/did:...
September 3, 2025 at 3:25 PM
Reposted by Magdalena Kachlicka
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
August 19, 2025 at 1:12 AM
Reposted by Magdalena Kachlicka
My PhD student Yue Li is looking for L1 speakers of Chinese and Spanish for her online English experiment! Please see below for details!
🎓 Call for Participants – Paid Online English Study

We are looking for: native speakers of Spanish or Chinese with an advanced level of English

💸 Compensation provided

✅ Check the flyer for eligibility
📲 Scan the QR code to get in touch.

Feel free to share this news!

#linguistics #paidstudy
August 14, 2025 at 3:01 PM
Reposted by Magdalena Kachlicka
Can you think of examples of books, films, TV shows, etc. featuring earworms or other types of imagined music? Please share them here! musicinmyhead.org/inner-music-...
Inner Music in Fiction and Biography - The Inner Music and Wellbeing Network
Inner Music in Fiction and Biography ‘Inner music’ or ‘musical imagery’ refers to the music that one hears in one’s own head. For example, an ‘earworm’ is a catchy piece of music that is stuck in one’...
musicinmyhead.org
August 6, 2025 at 7:45 PM
Reposted by Magdalena Kachlicka
🎧 Join us for some fun listening tasks!

🧠 Researchers at the University of Manchester want to recruit normal-hearing volunteers aged 18-50 who are native English speakers to take part in research that will help us understand different aspects of listening in noise.

#hearinghealth #research
July 23, 2025 at 1:11 PM
Reposted by Magdalena Kachlicka
A ✨bittersweet✨ moment – after 5 years at UCL, my final first-author project with @smfleming.bsky.social is ready to read as a preprint! 🥲
Distinct neural representations of perceptual and numerical absence in the human brain: https://doi.org/10.31234/osf.io/zyrdk_v1
July 25, 2025 at 9:23 AM
Reposted by Magdalena Kachlicka
Nice review, but why "controversies"? Evidence isn’t controversial. Like "epiphenomenon," it often just means "doesn’t fit my hypothesis." That’s ad hominem science.

Brain rhythms in cognition -- controversies and future directions
arxiv.org/abs/2507.15639
#neuroscience
arxiv.org
July 25, 2025 at 3:25 PM
Reposted by Magdalena Kachlicka
Delighted to have our newest paper out in #Jneurosci ! We looked at how much a single cell contributes to an auditory-evoked EEG signal. Big thanks to my co-authors Ira Kraemer, Christine Köppl, Catherine Carr and Richard Kempter (all not in Bsky). Here’s how: (1/13)
bsky.app/profile/sfnj...
#JNeurosci: @paulakuokkanen.bsky.social et al. isolated scalp signals from single neurons in the 1st processing stage of the barn owl auditory pathway, finding that single neurons' contributions to the scalp signal were unexpectedly large, and time-locked to the 2nd peak.
vist.ly/3n7ycdj
June 28, 2025 at 2:18 PM