Vera Liao
@qveraliao.bsky.social
Researcher@MSR, incoming Associate Prof@UMich. Studying human-AI interaction
Reposted by Vera Liao
♦️ Our next #AI & #Society Salon is soon 🎙️ Join us on 11 June 17:00 CET for a Salon with Marco Donnarumma, performance artist and researcher.

We will discuss the human body, tech, and power.

Register: www.eventbrite.com/e/regaining-...
June 4, 2025 at 4:35 PM
Thanks for coming and sharing 😀
May 14, 2025 at 1:33 PM
Reposted by Vera Liao
Wonderful talk by @qveraliao.bsky.social on bridging the socio-technical gap in AI.
May 14, 2025 at 10:01 AM
Reposted by Vera Liao
Tomorrow (Wednesday) I am presenting my TOCHI work with Microsoft, “Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions”, at ~11:55am in G301 :) should be fun 🥳

work done w/ @jennwv.bsky.social @qveraliao.bsky.social @adamfourney.bsky.social @gaganbansal.bsky.social 🤩
4. On Wednesday, @helenasresearch.bsky.social will present the TOCHI paper she led on the potential of uncertainty highlighting for fostering appropriate reliance on AI-powered code completion tools.

Program link: programs.sigchi.org/chi/2025/pro...
April 29, 2025 at 2:56 AM
Happy to see this out at #CHI2025! Another effort to push for a more central role for designers in the development of LLM-powered applications through designerly adaptation, enabling mutual shaping of UX design and LLM adaptation (prompting). And we made a Figma widget for it👇
🤖LLMs are being integrated everywhere, but how do we ensure they're delivering meaningful user experiences?

In our #chi2025 paper, we empower designers to think about this via 🎨designerly adaptation🎨 of LLMs and built a Figma widget to help!

📜 arxiv.org/abs/2401.09051
🧵👇
April 25, 2025 at 4:56 PM
Reposted by Vera Liao
Mon April 28: I'll be presenting our 🏅 paper on fostering appropriate reliance on LLMs (w/ @jennwv.bsky.social, @qveraliao.bsky.social, @tanialombrozo.bsky.social, Olga Russakovsky) in the 4:20-5:50pm paper session (G303)

🧵 bsky.app/profile/sunn...
📌 programs.sigchi.org/chi/2025/pro...
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
April 25, 2025 at 12:09 AM
Congratulations!!
"Fostering Appropriate Reliance on LLMs" received an Honorable Mention at #CHI2025!

This work is also the last chapter of my dissertation, so the recognition feels more special🏅🎓😊

🎉 to the team @jennwv.bsky.social @qveraliao.bsky.social @tanialombrozo.bsky.social Olga Russakovsky
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
March 27, 2025 at 11:59 PM
Reposted by Vera Liao
📢 Looking for current research on #HCI + #AI? Here's a collection of 200+ #CHI2025 preprints, collected via arXiv and your suggestions: medium.com/human-center...
CHI’25 Preprint Collection
Looking for current research on HCI + AI? Here’s a list.
medium.com
March 10, 2025 at 1:36 PM
Reposted by Vera Liao
NEW from my team: a framework that walks AI product teams step-by-step through understanding and mitigating the risk of overreliance on AI. This happens when ppl accept incorrect AI outputs, b/c we …

learn.microsoft.com/en-us/ai/pla...
Overreliance on AI: Risk Identification and Mitigation Framework
This article describes a framework that helps product teams identify, assess, and mitigate overreliance risk in AI products.
learn.microsoft.com
March 10, 2025 at 3:45 PM
Another very cool work led by the very cool @sunniesuhyoung.bsky.social, coming out at #CHI2025. Check it out 👇
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
February 28, 2025 at 3:53 PM
Reposted by Vera Liao
Our next #AI & #Society Salon is soon 🎙️ Join us on 18 February at 18:00 CET for a Salon with @carolinesinders.bsky.social, artist and researcher.

Register here: www.eventbrite.com/e/regaining-...
February 11, 2025 at 12:37 PM
Reposted by Vera Liao
As #AI takes up a growing space in creation and art, how are public discourses on AI in the arts shaping creative work?

That's what we investigate in a new paper with @katecrawford.bsky.social, @qveraliao.bsky.social, Gonzalo Ramos and Jenny Williams: arxiv.org/abs/2502.03940
February 7, 2025 at 10:42 AM
Reposted by Vera Liao
Bumping this up 🔉 If interested in interning with me or my colleagues, apply by Friday, Jan 10 for full consideration! We are especially looking for candidates interested in responsible and ethical AI considerations related to human agency, human control, anthropomorphic AI systems, and measurement.
📣 📣 Interested in an internship on human-centred AI, human agency, AI evaluation & the impacts of AI systems? Our team/FATE MLT (Su Lin Blodgett, @qveraliao.bsky.social & I) is looking for a few summer interns 🎉 Apply by Jan 10 for full consideration: jobs.careers.microsoft.com/global/en/jo...
January 8, 2025 at 2:46 PM
Reposted by Vera Liao
The Human-centered Evaluation and Auditing of Language Models (HEAL) workshop is back for #CHI2025, with this year's special theme: “Mind the Context”! Come join us on this bridge between #HCI and #NLProc!

Workshop submission deadline: Feb 17 AoE
More info at heal-workshop.github.io.
December 16, 2024 at 10:07 PM
It is that time of year again: we are looking for summer 2025 interns at FATE Montreal. Apply!
📣 📣 Interested in an internship on human-centred AI, human agency, AI evaluation & the impacts of AI systems? Our team/FATE MLT (Su Lin Blodgett, @qveraliao.bsky.social & I) is looking for a few summer interns 🎉 Apply by Jan 10 for full consideration: jobs.careers.microsoft.com/global/en/jo...
December 5, 2024 at 8:19 PM
Had a lot of fun teaching a tutorial on Human-Centered Evaluation of Language Technologies at #EMNLP2024, w/ @ziangxiao.bsky.social, Su Lin Blodgett, and Jackie Cheung

We just posted the slides on our tutorial website: human-centered-eval.github.io
Human-Centered Eval@EMNLP24
human-centered-eval.github.io
November 26, 2024 at 8:55 PM
Join us for another Regaining Power of AI Salon with Linda Dounia Rebeiz on December 4 👇
We are organizing a third Salon with Linda Dounia Rebeiz to talk about her work and vision of #AI, #Art and #Technocapitalism.
🗓️ Dec 4, 5pm CET
📻 Online, info and registration here: www.eventbrite.com/e/regaining-...
👥 w/ Gonzalo Ramos, Jenny Williams, @katecrawford.bsky.social & Vera Liao
November 22, 2024 at 7:07 PM
Reposted by Vera Liao
I’m putting together a starter pack for researchers working on human-centered AI evaluation. Reply or DM me if you’d like to be added, or if you have suggestions! Thank you!

(It looks NLP-centric at the moment, but that’s due to the current limits of my own knowledge 🙈)

go.bsky.app/G3w9LpE
November 21, 2024 at 3:56 PM