Women in AI Research - WiAIR
@wiair.bsky.social
WiAIR is dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our goal is to empower early career researchers, especially women, to pursue their passion for AI and make an impact in this exciting field.
We're pleased to feature Dr. Annie En-Shiun Lee, Asst Prof at @ontariotechu.bsky.social with a status-only appointment at @utoronto.ca, in the next @wiair.bsky.social episode.
November 14, 2025 at 4:01 PM
AI models are built on human values - but whose values, exactly? 🌍

Vered Shwartz highlights that diverse teams - across gender, culture, and discipline - are essential for building fair and trustworthy AI systems.

#llms #wiair #wiairpodcast
November 13, 2025 at 4:02 PM
🌍 Can AI represent “universal concepts” in ways that reflect cultural variation?
We hosted Dr. Vered Shwartz on WiAIR to discuss how culture shapes AI’s understanding of language & visuals. We also discussed an EMNLP 2024 paper examining multicultural understanding in VLMs.
(1/8🧵)
November 10, 2025 at 4:12 PM
Reposted by Women in AI Research - WiAIR
𝙒𝙚'𝙧𝙚 𝙝𝙞𝙧𝙞𝙣𝙜 𝙣𝙚𝙬 𝙛𝙖𝙘𝙪𝙡𝙩𝙮 𝙢𝙚𝙢𝙗𝙚𝙧𝙨!

KSoC: utah.peopleadmin.com/postings/190... (AI broadly)

Education + AI:
- utah.peopleadmin.com/postings/189...
- utah.peopleadmin.com/postings/190...

Computer Vision:
- utah.peopleadmin.com/postings/183...
November 7, 2025 at 11:35 PM
🌍 Can we trust Wikipedia to tell the same story across languages?
In “Locating Information Gaps and Narrative Inconsistencies Across Languages”, Dr. Vered Shwartz (@veredshwartz.bsky.social) and collaborators introduce INFOGAP, a method to detect fact-level gaps across Wikipedias. (1/6🧵)
November 7, 2025 at 4:06 PM
The recent UBC interview with Dr. Vered Shwartz (@veredshwartz.bsky.social), Assistant Professor at the University of British Columbia and CIFAR AI Chair at the Vector Institute, shares important reflections on how AI chatbots can influence users’ decisions and behaviour. 🤖💬
(1/7🧵)
Dr. Vered Shwartz, UBC professor of computer science and author of "Lost in Automatic Translation", on the potential for #AI chatbots to cause harm to users as well as potential safeguards for the future. @cs.ubc.ca

science.ubc.ca/news/2025-10...
Can AI persuade you to go vegan—or harm yourself?
Large language models are more persuasive than humans: UBC computer scientist Dr. Vered Shwartz discusses safeguards for the future of AI.
November 5, 2025 at 4:23 PM
🤖 Can LLMs respect culture and facts?

We want AI systems that understand diverse cultures 𝘢𝘯𝘥 stay grounded in factual truth.
But can we really have both?

Vered Shwartz explains this core challenge of modern LLMs.

#llms #wiair #wiairpodcast
November 3, 2025 at 5:03 PM
Reposted by Women in AI Research - WiAIR
I'm super excited to update that "Lost in Automatic Translation" is now available as an audiobook! 🔊📖

It's currently on Audible:
www.audible.ca/pd/B0FXY8VQX5

Stay tuned (lostinautomatictranslation.com) for more retailers, including Amazon, iTunes, etc., and public libraries! 📚
October 28, 2025 at 1:01 AM
🎙 In our new episode, we spoke with @veredshwartz.bsky.social (Assistant Professor of Computer Science at The University of British Columbia) and highlighted her book Lost in Automatic Translation: Navigating Life in English in the Age of Language Technologies. (1/7🧵)
October 31, 2025 at 4:09 PM
Reposted by Women in AI Research - WiAIR
"NEO might not fold my shirt perfectly, but if an arm is kind of half hanging out of the shirt, it's OK, it's robotic slop" 🤣 Not sure what to think about this. Ideally, the housekeepers wouldn't need to leave their families in their home country and could work at the same wages. Realistically...
Why would someone pay $20k for a robot controlled by a human in a remote location to do things more slowly and clumsily when the median wage for a maid or housekeeper is $33k per year, which is typically spread across 10-20 households?
October 30, 2025 at 5:56 AM
🎙️ New episode of Women in AI Research (WiAIR) out now!

We sit down with @veredshwartz.bsky.social (Asst Prof and CIFAR AI Chair) to talk about an important challenge in AI — cultural bias. 🌍

#nlproc #wiair #wiairpodcast

/1
October 29, 2025 at 4:03 PM
LLMs are shaping hiring, healthcare, and law — but can they truly understand users from every culture?

In our latest #WiAIRpodcast episode, Dr. Vered Shwartz explores how cultural bias impacts fairness and inclusivity in AI.

🎧 Watch here
👉 www.youtube.com/watch?v=9x2Q...

#wiair
October 27, 2025 at 4:03 PM
We’re excited to feature @veredshwartz.bsky.social, Asst Prof at @cs.ubc.ca, CIFAR AI Chair @vectorinstitute.ai, and author of lostinautomatictranslation.com, in the next @wiair.bsky.social episode.
October 24, 2025 at 4:04 PM
"Trust only exists when there's risk." - Ana Marasović
Trust isn't about certainty - it's about risk acceptance.
Full conversation: youtu.be/xYb6uokKKOo
October 22, 2025 at 4:06 PM
🧠 Can large language models build the very benchmarks used to evaluate them?
In “What Has Been Lost with Synthetic Evaluation”, Ana Marasović (@anamarasovic.bsky.social) and collaborators ask what happens when LLMs start generating the datasets used to test their reasoning. (1/6🧵)
October 20, 2025 at 4:01 PM
AI academia and industry aren’t rivals — they’re partners. 🤝
As Ana Marasović says, innovation flows both ways: research trains the next generation who power real-world AI.

🎓🤖 www.youtube.com/@WomeninAIRe...
October 17, 2025 at 4:07 PM
👉 Do large language models really reason the way their chain-of-thought explanations suggest?
This week on #WiAIRpodcast, we talk with Ana Marasović (@anamarasovic.bsky.social) about her paper “Chain-of-Thought Unfaithfulness as Disguised Accuracy.” (1/6🧵)
📄 Paper: arxiv.org/pdf/2402.14897
October 15, 2025 at 4:06 PM
✈️🤖 AI Safety Like Aviation: Too Ambitious or Absolutely Necessary?

Can AI ever be as safely regulated as aviation?
Ana Marasović shares her vision for the future of AI governance — where safety principles and regulation become the default, not an afterthought.

www.youtube.com/@WomeninAIRe...
October 13, 2025 at 4:49 PM
How do we really know when and how much to trust large language models? 🤔
In this week’s #WiAIRpodcast, we talk with Ana Marasović (Asst Prof @ University of Utah; ex @ Allen AI, UWNLP) about explainability, trust, and human–AI collaboration. (1/8🧵)
October 10, 2025 at 7:04 PM
🎙️ New Women in AI Research episode out now!
This time, we sit down with @anamarasovic.bsky.social to unpack some of the toughest questions in AI explainability and trust.

🔗 Watch here → youtu.be/xYb6uokKKOo
October 8, 2025 at 4:03 PM
🎙️ New #WiAIR episode coming soon!

We sat down with Ana Marasović to talk about the uncomfortable truths behind AI trust.
When can we really trust AI explanations?

Watch the trailer youtu.be/GBghj6S6cic
Then subscribe on YouTube to catch the full episode when it drops.
October 6, 2025 at 4:02 PM
Our new guest at #WiAIRpodcast is @anamarasovic.bsky.social
(Asst Prof @ University of Utah, ex @ Allen AI). We'll talk with her about faithfulness, trust, and robustness in AI.
The episode is coming soon, don't miss:
www.youtube.com/@WomeninAIRe...

#WiAIR #NLProc
October 3, 2025 at 4:02 PM
"Inclusivity is about saying: Come sit with us!" 💡

Valentina Pyatkin reminds us that AI research isn’t just about models and benchmarks - it’s about building a community where everyone feels welcome.

#AI #Inclusivity #WomenInAI
October 1, 2025 at 3:31 PM
🤔 How do we know if a reward model is truly good? In our last #WiAIR episode, Valentina Pyatkin (AI2 & University of Washington) introduced RewardBench 2—a harder, cleaner benchmark for reward models in post-training. (1/8🧵)
September 29, 2025 at 4:14 PM
💥 Behind every success is a story of rejection.
Persistence, curiosity, and resilience are what truly drive AI careers. 🚀

Don't miss the full episode:
🎬 YouTube: youtube.com/watch?v=DPhq...
🎙 Spotify: open.spotify.com/episode/7aHP...
September 26, 2025 at 3:06 PM