Women in AI Research - WiAIR
@wiair.bsky.social
WiAIR is dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our goal is to empower early career researchers, especially women, to pursue their passion for AI and make an impact in this exciting field.
🎙️ New #WiAIR episode coming soon!

We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social & @utoronto.ca) about multilingual AI, inclusion in research - and proving you can build an amazing career while raising a family.

#wiairpodcast
November 17, 2025 at 5:03 PM
We're pleased to feature Dr. Annie En-Shiun Lee, Asst Prof at @ontariotechu.bsky.social and status-only at @utoronto.ca, in the next @wiair.bsky.social episode.
November 14, 2025 at 4:01 PM
AI models are built on human values - but whose values, exactly? 🌍

Vered Shwartz highlights that diverse teams - across gender, culture, and discipline - are essential for building fair and trustworthy AI systems.

#llms #wiair #wiairpodcast
November 13, 2025 at 4:02 PM
🌍 Can AI represent “universal concepts” in ways that reflect cultural variation?
We hosted Dr. Vered Shwartz on WiAIR to discuss how culture shapes AI’s understanding of language & visuals. We also discussed an EMNLP 2024 paper examining multicultural understanding in VLMs.
(1/8🧵)
November 10, 2025 at 4:12 PM
🌍 Can we trust Wikipedia to tell the same story across languages?
In “Locating Information Gaps and Narrative Inconsistencies Across Languages”, Dr. Vered Shwartz (@veredshwartz.bsky.social) and collaborators introduce INFOGAP, a method to detect fact-level gaps across Wikipedias. (1/6🧵)
November 7, 2025 at 4:06 PM
🤖 Can LLMs respect culture and facts?

We want AI systems that understand diverse cultures 𝘢𝘯𝘥 stay grounded in factual truth.
But can we really have both?

Vered Shwartz explains this core challenge of modern LLMs.

#llms #wiair #wiairpodcast
November 3, 2025 at 5:03 PM
🎙 In our new episode, we spoke with @veredshwartz.bsky.social (Assistant Professor of Computer Science at The University of British Columbia) and highlighted her book Lost in Automatic Translation: Navigating Life in English in the Age of Language Technologies. (1/7🧵)
October 31, 2025 at 4:09 PM
🎙️ New episode of Women in AI Research (WiAIR) out now!

We sit down with @veredshwartz.bsky.social (Asst Prof and CIFAR AI Chair) to talk about a key challenge in AI: cultural bias. 🌍

#nlproc #wiair #wiairpodcast

/1
October 29, 2025 at 4:03 PM
LLMs are shaping hiring, healthcare, and law — but can they truly understand users from every culture?

In our latest #WiAIRpodcast episode, Dr. Vered Shwartz explores how cultural bias impacts fairness and inclusivity in AI.

🎧 Watch here
👉 www.youtube.com/watch?v=9x2Q...

#wiair
October 27, 2025 at 4:03 PM
We’re excited to feature @veredshwartz.bsky.social, Asst Prof at @cs.ubc.ca, CIFAR AI Chair at @vectorinstitute.ai, and author of lostinautomatictranslation.com, in the next @wiair.bsky.social episode.
October 24, 2025 at 4:04 PM
"Trust only exists when there's risk." - Ana Marasović
Trust isn't about certainty - it's about risk acceptance.
Full conversation: youtu.be/xYb6uokKKOo
October 22, 2025 at 4:06 PM
🧠 Can large language models build the very benchmarks used to evaluate them?
In “What Has Been Lost with Synthetic Evaluation”, Ana Marasović (@anamarasovic.bsky.social) and collaborators ask what happens when LLMs start generating the datasets used to test their reasoning. (1/6🧵)
October 20, 2025 at 4:01 PM
AI academia and industry aren’t rivals - they’re partners. 🤝
As Ana Marasović says, innovation flows both ways: academic research trains the next generation who power real-world AI, and industry problems feed back into research.

🎓🤖 www.youtube.com/@WomeninAIRe...
October 17, 2025 at 4:07 PM
👉 Do large language models really reason the way their chains of thought suggest?
This week on #WiAIRpodcast, we talk with Ana Marasović (@anamarasovic.bsky.social) about her paper “Chain-of-Thought Unfaithfulness as Disguised Accuracy.” (1/6🧵)
📄 Paper: arxiv.org/pdf/2402.14897
October 15, 2025 at 4:06 PM
✈️🤖 AI Safety Like Aviation: Too Ambitious or Absolutely Necessary?

Can AI ever be as safely regulated as aviation?
Ana Marasović shares her vision for the future of AI governance — where safety principles and regulation become the default, not an afterthought.

www.youtube.com/@WomeninAIRe...
October 13, 2025 at 4:49 PM
How do we really know when and how much to trust large language models? 🤔
In this week’s #WiAIRpodcast, we talk with Ana Marasović (Asst Prof @ University of Utah; ex @ Allen AI, UWNLP) about explainability, trust, and human–AI collaboration. (1/8🧵)
October 10, 2025 at 7:04 PM
Our new guest at #WiAIRpodcast is @anamarasovic.bsky.social
(Asst Prof @ University of Utah, ex @ Allen AI). We'll talk with her about faithfulness, trust, and robustness in AI.
The episode is coming soon; don't miss it:
www.youtube.com/@WomeninAIRe...

#WiAIR #NLProc
October 3, 2025 at 4:02 PM
"Inclusivity is about saying: Come sit with us!" 💡

Valentina Pyatkin reminds us that AI research isn’t just about models and benchmarks - it’s about building a community where everyone feels welcome.

#AI #Inclusivity #WomenInAI
October 1, 2025 at 3:31 PM
🤔 How do we know if a reward model is truly good? In our last #WiAIR episode, Valentina Pyatkin (AI2 & University of Washington) introduced RewardBench 2—a harder, cleaner benchmark for reward models in post-training. (1/8🧵)
September 29, 2025 at 4:14 PM
💥 Behind every success is a story of rejection.
Persistence, curiosity, and resilience are what truly drive AI careers. 🚀

Don't miss the full episode:
🎬 YouTube: youtube.com/watch?v=DPhq...
🎙 Spotify: open.spotify.com/episode/7aHP...
September 26, 2025 at 3:06 PM
💡 Are LLMs truly good at precise instruction following, or just overfitting to benchmarks?
In our latest WiAIR episode, we sit down with Valentina Pyatkin (@valentinapy.bsky.social) from @ai2.bsky.social and UW to discuss her paper: “Generalizing Verifiable Instruction Following”. (1/7🧵)
September 24, 2025 at 4:19 PM
Tülu 3 isn’t just a model - it’s a full ecosystem: data, recipes, benchmarks, and RLVR (reinforcement learning with verifiable rewards).
Valentina Pyatkin breaks down how smart data mixing & filtering shaped its performance.

Don't miss the full episode:
🎬 YouTube: youtube.com/watch?v=DPhq...
🎙 Spotify: open.spotify.com/episode/7aHP...
September 22, 2025 at 3:03 PM
🚀 Can open science beat closed AI? Tülu 3 makes a powerful case. In our new #WiAIRpodcast, we speak with Valentina Pyatkin (@valentinapy.bsky.social) of @ai2.bsky.social and the University of Washington about a fully open post-training recipe—models, data, code, evals, and infra. #WomenInAI 1/8🧵
September 19, 2025 at 4:13 PM
🚀 New WiAIR Podcast Episode!
Can open-source LLMs really outperform closed ones like Claude 3.5? 🤔

We asked Valentina Pyatkin (AI2, UW), and you'll want to hear her answers.

#NLProc #WiAIR #WiAIRpodcast
September 17, 2025 at 4:00 PM
🔥 Ready for the new episode?
Here’s a sneak peek of what’s coming next on the #WiAIR Podcast

✨ Full episode drops soon - don't miss it!

🔔 Subscribe here:
🎬 YouTube: www.youtube.com/@WomeninAIRe...
🎙️ Spotify: open.spotify.com/show/51RJNlZ...
🎧 Apple Podcasts: podcasts.apple.com/ca/podcast/w...
September 15, 2025 at 3:03 PM