Women in AI Research - WiAIR
@wiair.bsky.social
WiAIR is dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our goal is to empower early-career researchers, especially women, to pursue their passion for AI and make an impact in this exciting field.
👇 Watch the trailer + subscribe so you don’t miss the full episode!

youtu.be/Bh3bT-r4aH8
November 17, 2025 at 5:03 PM
🎧 Coming soon — don’t miss it!

🎬 YouTube - youtube.com/@WomeninAIRe...
🎙️ Spotify - open.spotify.com/show/51RJNlZ...
🎧 Apple Podcasts - t.co/IZSYvx3YlI
November 14, 2025 at 4:01 PM
Annie's research focuses on language diversity, multilinguality, low-resource languages and multicultural fairness in AI systems.
November 14, 2025 at 4:01 PM
🌍 Models perform best on North American & European images and worst on images from East Asia, South Asia, Africa & Southeast Asia, highlighting cultural bias.
(7/8🧵)
November 10, 2025 at 4:15 PM
📊 Beyond accuracy, diversity@k measures the cultural range in top-k retrieved results.
It reveals cultural gaps that accuracy alone doesn’t show.
(6/8🧵)
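The post doesn't spell out the formula, so here is a minimal sketch of one plausible reading, where diversity@k is the fraction of distinct cultures among the top-k retrieved images; the function name and normalization are assumptions, not necessarily GLOBALRG's exact definition:

```python
def diversity_at_k(retrieved_cultures: list[str], k: int) -> float:
    """Fraction of distinct cultures among the top-k retrieved images.

    `retrieved_cultures[i]` is the culture label of the i-th ranked image.
    One plausible reading of diversity@k, not the paper's exact formula:
    1.0 means every top-k image comes from a different culture.
    """
    top_k = retrieved_cultures[:k]
    return len(set(top_k)) / k

# Example: a query for "wedding" whose top-5 results skew Western.
ranked = ["USA", "USA", "UK", "India", "USA", "Nigeria"]
print(diversity_at_k(ranked, k=5))  # 0.6 -> 3 distinct cultures in top 5
```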
November 10, 2025 at 4:14 PM
🔎 Cultural Visual Grounding Task
Tests whether models identify culture-specific elements (e.g., molinillo, knafeh, aarti thali) within images.
Performance varies across regions.
(5/8🧵)
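Grounding is typically scored by overlap between a predicted region and an annotated one; a generic sketch assuming box annotations, where the (x1, y1, x2, y2) format and the 0.5 threshold are common conventions rather than details taken from the paper:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# e.g. the model's predicted box for "molinillo" vs. the annotated one;
# IoU >= 0.5 is a common (assumed) cutoff for a correct grounding.
pred, gold = (40, 30, 120, 110), (50, 35, 130, 115)
print(iou(pred, gold) >= 0.5)  # True
```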
November 10, 2025 at 4:14 PM
📸 Universal Concepts Task
Looks at how models retrieve images for concepts like wedding, breakfast, & funeral.
Findings show limited cultural variation: retrieved images often lean Western.
(4/8🧵)
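For readers who want to poke at this, a minimal sketch of the retrieval step with a CLIP-style dual encoder, assuming image and text embeddings have already been computed (which encoder GLOBALRG uses is not stated here):

```python
import numpy as np

def retrieve_top_k(text_emb: np.ndarray, image_embs: np.ndarray, k: int = 5):
    """Rank images by cosine similarity to a text query embedding.

    Assumes `text_emb` (d,) and `image_embs` (n, d) come from the same
    CLIP-style dual encoder.
    """
    text_emb = text_emb / np.linalg.norm(text_emb)
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = image_embs @ text_emb  # cosine similarity per image
    return np.argsort(-scores)[:k]  # indices of the top-k images

# Toy usage: a "breakfast" query against 100 random image embeddings.
rng = np.random.default_rng(0)
top = retrieve_top_k(rng.normal(size=512), rng.normal(size=(100, 512)))
print(top)  # the 5 highest-scoring image indices
```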
November 10, 2025 at 4:14 PM
🧪 The study introduces GLOBALRG, a benchmark across 50 countries & 10 regions, with two tasks:
🔸 Cross-cultural image retrieval
🔸 Cultural visual grounding (culture-specific elements from 15 countries)
(3/8🧵)
November 10, 2025 at 4:13 PM
📄 From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models
The paper examines whether VLMs represent universal concepts in culturally diverse ways, rather than defaulting to one dominant view.
(2/8🧵)
November 10, 2025 at 4:13 PM
🎧 Don’t miss this #WiAIR episode with Dr. Vered Shwartz!
🎬 YouTube: www.youtube.com/watch?v=RKIv...
🎙 Spotify: open.spotify.com/episode/3IvN...
🍎 Apple: podcasts.apple.com/ca/podcast/w...
(6/6🧵)
Why AI Doesn’t Understand Your Culture? Dr. Vered Shwartz on Cultural Bias in LLMs
November 7, 2025 at 4:09 PM
📊 INFOGAP exposes how Wikipedia’s “neutral point of view” shifts by language and culture, revealing hidden editorial biases in multilingual knowledge. (5/6🧵)
November 7, 2025 at 4:08 PM
💬 Across all language pairs, facts with polarizing connotations (positive or negative) are omitted more often than neutral ones. Russian LGBT biographies share disproportionately more negative facts with their English counterparts than non-LGBT bios do, revealing selective editorial patterns. (4/6🧵)
November 7, 2025 at 4:07 PM
🧩 Using 2,700 LGBT biographies, the study finds massive asymmetries:
Only 35% of English facts appear in French, and only 23% in Russian.
English pages, by contrast, cover far more of the facts found on the French and Russian pages. (3/6🧵)
November 7, 2025 at 4:07 PM
🔍 INFOGAP combines cross-lingual fact alignment using multilingual embeddings with LLM-based entailment verification to determine what's shared, omitted, or reframed between English, Russian, and French articles. (2/6🧵)
📄 Paper: arxiv.org/abs/2410.04282
Locating Information Gaps and Narrative Inconsistencies Across Languages: A Case Study of LGBT People Portrayals on Wikipedia
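A condensed sketch of that two-stage idea: pair facts by multilingual-embedding similarity, then verify with an entailment check. The encoder choice, similarity threshold, and the llm_entails placeholder are assumptions, not INFOGAP's exact pipeline:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any multilingual sentence encoder works here; this particular model
# is an assumption, not necessarily the one used by INFOGAP.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def llm_entails(premise: str, hypothesis: str) -> bool:
    # Placeholder: INFOGAP verifies alignments with LLM-based entailment.
    # Swap in an NLI model or LLM prompt; always-True reduces this sketch
    # to pure embedding matching.
    return True

def find_gaps(en_facts: list[str], fr_facts: list[str], threshold: float = 0.6):
    """Flag English facts with no entailed counterpart in French."""
    en = encoder.encode(en_facts, normalize_embeddings=True)
    fr = encoder.encode(fr_facts, normalize_embeddings=True)
    sims = en @ fr.T  # cosine similarity matrix (n_en, n_fr)
    gaps = []
    for i, fact in enumerate(en_facts):
        j = int(np.argmax(sims[i]))  # closest French candidate
        if sims[i, j] < threshold or not llm_entails(fr_facts[j], fact):
            gaps.append(fact)  # omitted or reframed in French
    return gaps
```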
November 7, 2025 at 4:06 PM
🎧 Listen to our conversation with Dr. Shwartz:
YouTube: www.youtube.com/watch?v=RKIv...
Spotify: open.spotify.com/episode/3IvN...
Apple: podcasts.apple.com/ca/podcast/w...
(7/7🧵)
Why AI Doesn’t Understand Your Culture? Dr. Vered Shwartz on Cultural Bias in LLMs
November 5, 2025 at 4:25 PM
This also connects with our recent Women in AI Research episode with Dr. Vered Shwartz, where we explore how AI systems trained mainly on Western data can shape language, norms, and expectations worldwide. 🌍✨
(6/7🧵)
November 5, 2025 at 4:25 PM
Responsible AI isn’t only about avoiding harmful outputs — it also requires careful thinking about how AI interacts with people and the role it takes in their lives. 🔍
(5/7🧵)
November 5, 2025 at 4:24 PM
This influence becomes especially concerning when people turn to AI for emotional support. Strong safeguards and transparency are needed to protect users. ⚠️
(4/7🧵)
November 5, 2025 at 4:24 PM