Neil Kirk
@drspeakmind.bsky.social
Reader in Cognitive (and MIND) Psychology with an interest in all things dialect and voice-y. Other interests include: the X-men, AI, the gym and doomscrolling. 🏳️‍🌈
Very grateful to @the-sipr.bsky.social for funding this important work. 11/11
July 21, 2025 at 3:02 PM
💡 Why it matters: This could have real-world implications for designing public awareness campaigns and scam prevention messages. 10/11
July 21, 2025 at 3:02 PM
🏠 Take-Home Message: Simply telling people that AI voices can speak with a Scottish accent/dialect was far more effective than warning them to be vigilant. 9/11
July 21, 2025 at 3:02 PM
However, an explicit vigilance-based nudge warning about the dangers of AI voices and urging listeners “if in doubt, think AI” had no effect on its own; it only worked when paired with the capability message about AI’s linguistic abilities. 8/11
July 21, 2025 at 3:02 PM
A positively framed nudge highlighting AI’s capability to reproduce underrepresented accents and dialects significantly reduced this bias – in other words, changing their MINDSET made them more vigilant towards AI voices using these varieties. 7/11
July 21, 2025 at 3:02 PM
In this manuscript, I investigate whether simple informational nudges can shift these assumptions and reduce the bias for responding “Human”. Across two experiments, participants categorised voices as either Human or AI. 6/11
July 21, 2025 at 3:02 PM
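For readers unfamiliar with “bias for responding Human”: below is a minimal sketch of one standard way such a bias can be quantified in a two-alternative categorisation task, assuming a signal-detection analysis. The thread does not say which analysis the paper actually uses, and the trial counts here are made up purely for illustration.

```python
# Hypothetical illustration of quantifying a "Human" response bias in a
# voice-categorisation task. Signal detection theory separates sensitivity
# (can listeners tell human from AI voices?) from response bias (how willing
# they are to say "Human"). This is NOT taken from the paper itself.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Treat 'responded Human to a human voice' as a hit and
    'responded Human to an AI voice' as a false alarm."""
    # Log-linear correction keeps rates away from 0 and 1 so z-scores stay finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # negative c = bias towards "Human"
    return d_prime, criterion

# Made-up counts for one listener: 40 human-voice and 40 AI-voice trials.
print(sdt_measures(hits=34, misses=6, false_alarms=28, correct_rejections=12))
```

On these invented numbers the criterion comes out negative, i.e. a bias towards answering “Human”; a nudge that reduces the bias would push the criterion back towards zero without necessarily changing sensitivity.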
Yet that assumption could be putting some language communities at greater risk of AI voice-based deception if they believe a voice speaking that way must be a real person. 5/11
July 21, 2025 at 3:02 PM
In my new paper, I introduce the concept of MINDSET: Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology. It reflects the idea that people assume AI can’t convincingly reproduce underrepresented ways of speaking. 4/11
July 21, 2025 at 3:02 PM
I also suspect this is not unique to Scotland, but part of a global pattern affecting communities whose voices have historically been excluded from these systems. 3/11
July 21, 2025 at 3:02 PM
My previous work showed that listeners were more likely to believe an AI voice was a real human when it spoke in a local dialect. I think this happens because we’re not used to speech technology understanding these varieties - never mind speaking them! 2/11
July 21, 2025 at 3:02 PM
… or take Monday off. (Yes, that was me).
June 23, 2025 at 6:52 PM
Reposted by Neil Kirk
This might make some language communities more vulnerable to AI voice-based deception. Luckily I’ve been given some funding to investigate this further, so watch this space!
April 18, 2025 at 6:00 PM
Do they factor self-indulgent puntastic titles into REF scores? I sure hope so! 😄
April 18, 2025 at 6:06 PM