Sunnie S. Y. Kim ☀️
@sunniesuhyoung.bsky.social
Responsible AI & Human-AI Interaction

Currently: Research Scientist at Apple

Previously: Princeton CS PhD, Yale S&DS BSc, MSR FATE & TTIC Intern

https://sunniesuhyoung.github.io/
Our Responsible AI team at Apple is looking for spring/summer 2026 PhD research interns! Please apply at jobs.apple.com/en-us/detail... and email rai-internship@group.apple.com. Don't send extra info (e.g., a CV); just drop us a line so we can find your application in the central pool!
Machine Learning / AI Internships - Jobs - Careers at Apple
Apply for a Machine Learning / AI Internships job at Apple. Read about the role and find out if it’s right for you.
jobs.apple.com
October 10, 2025 at 2:28 AM
Reposted by Sunnie S. Y. Kim ☀️
We're happy to officially announce the location of #FAccT2026!

Next year's conference will be held in Montreal, Canada 🇨🇦

Su Lin Blodgett and Zeerak Talat will be General Chairs, and Michael Madaio will be PC Chair 🎉

(thanks to MindView for the photo!)
June 30, 2025 at 11:12 AM
Reposted by Sunnie S. Y. Kim ☀️
Our #FAccT2025 proceedings are out!! 🎉

Read all of the published papers here: dl.acm.org/doi/proceedi...
June 24, 2025 at 5:53 AM
Reposted by Sunnie S. Y. Kim ☀️
Looking for posts about #FAccT2025? Check out our 🦋 custom feed 🦋 which is already lively and full of papers, events, and attendees for this year's conference in Athens!

Click the pin 📌 in the upper right-hand corner to keep this feed quickly accessible.

bsky.app/profile/mari...
June 21, 2025 at 6:25 AM
Reposted by Sunnie S. Y. Kim ☀️
We are thrilled to welcome an incredible lineup of invited speakers to the 4th Explainable AI for Computer Vision (XAI4CV) Workshop at #CVPR2025, which kicks off next week (Wednesday, June 11 to Sunday, June 15) in Nashville, TN!
June 5, 2025 at 12:59 PM
Commencement 🐯🎓🎉
May 30, 2025 at 3:53 PM
📢 I successfully defended my PhD dissertation! Huge thanks to my committee (Olga @andresmh.com @jennwv.bsky.social @qveraliao.bsky.social @parastooabtahi.bsky.social) & everyone who supported me ❤️

📢 Next I'll join Apple as a research scientist in the Responsible AI team led by @jeffreybigham.com!
May 7, 2025 at 8:46 PM
Reposted by Sunnie S. Y. Kim ☀️
Exciting news!!! This just got into @icmlconf.bsky.social as a position paper!!! 🎉 More updates to come as we work on the camera-ready version!!!
Remember this @neuripsconf.bsky.social workshop paper? We spent the past month writing a newer, better, longer version!!! You can find it online here: arxiv.org/abs/2502.00561
May 3, 2025 at 8:59 PM
#CHI2025 = friends, friends, and friends (many not in the pics) 🥰 Research can be lonely, so moments like these are special! Thank you all for an amazing week of learning and fun. Hope to be back soon!!
May 1, 2025 at 6:24 PM
I'll be at #CHI2025 🌸 to share 1 paper and 2 LBWs on fostering appropriate user understanding of and reliance on AI (LLMs and beyond). Let's catch up and connect!

What I've been up to: job market & research (responsible AI, human-centered evaluation, overreliance, anthropomorphism) 🤓
April 25, 2025 at 12:09 AM
Reposted by Sunnie S. Y. Kim ☀️
Check out Indu Panigrahi’s LBW at #CHI2025: “Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations.”

🔗 Project Page: ind1010.github.io/interactive_XAI
📄 Extended Abstract: arxiv.org/abs/2504.10745
April 18, 2025 at 9:14 PM
"Fostering Appropriate Reliance on LLMs" received an Honorable Mention at #CHI2025!

This work is also the last chapter of my dissertation, so the recognition feels all the more special 🏅🎓😊

🎉 to the team @jennwv.bsky.social @qveraliao.bsky.social @tanialombrozo.bsky.social Olga Russakovsky
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
March 27, 2025 at 11:57 PM
Reposted by Sunnie S. Y. Kim ☀️
I've recently put together a "Fairness FAQ": tinyurl.com/fairness-faq. If you work in non-fairness ML and you've heard about fairness, perhaps you've wondered things like what the best definitions of fairness are, and whether we can train algorithms that optimize for it.
March 17, 2025 at 2:39 PM
Reposted by Sunnie S. Y. Kim ☀️
NEW from my team: a framework that walks AI product teams step-by-step through understanding and mitigating the risk of overreliance on AI. This happens when people accept incorrect AI outputs, because we …

learn.microsoft.com/en-us/ai/pla...
Overreliance on AI: Risk Identification and Mitigation Framework
This article describes a framework that helps product teams identify, assess, and mitigate overreliance risk in AI products.
learn.microsoft.com
March 10, 2025 at 3:45 PM
Reposted by Sunnie S. Y. Kim ☀️
Trying something new:
A 🧵 on a topic I find many students struggle with: "why do their 📊 look more professional than my 📊?"

It's *lots* of tiny decisions that aren't the defaults in many libraries, so let's break down 1 simple graph by @jburnmurdoch.bsky.social

🔗 www.ft.com/content/73a1...
November 20, 2024 at 5:09 PM
Reposted by Sunnie S. Y. Kim ☀️
New research on the role of explanations, sources, and inconsistencies in fostering appropriate reliance, led by the always amazing @sunniesuhyoung.bsky.social! 👇👇👇
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
February 28, 2025 at 3:48 PM
Reposted by Sunnie S. Y. Kim ☀️
Another very cool work led by the very cool @sunniesuhyoung.bsky.social, coming out at #CHI2025. Check it out 👇
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
February 28, 2025 at 3:53 PM
Reposted by Sunnie S. Y. Kim ☀️
Already a time capsule, but back in January a bunch of us working at the intersection of the humanities and AI/ML came together to sketch out eight provocations from the humanities for genAI research. Here's a 🧵 1/

arxiv.org/abs/2502.19190
Provocations from the Humanities for Generative AI Research
This paper presents a set of provocations for considering the uses, impact, and harms of generative AI from the perspective of humanities researchers. We provide a working definition of humanities res...
arxiv.org
March 3, 2025 at 3:08 PM
Reposted by Sunnie S. Y. Kim ☀️
Super fun collaboration across cogsci fields! We find that when an LLM response is accompanied by an explanation, users are more likely to accept the response. But the quality of the explanation matters: users rely less on responses with explanations when the explanations contain inconsistencies.
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
February 28, 2025 at 4:35 PM
Reposted by Sunnie S. Y. Kim ☀️
Submit your latest work (papers, demos) in #XAI to the 4th Explainable AI for Computer Vision (XAI4CV) Workshop at #CVPR2025!

Details: xai4cv.github.io/workshop_cvp...
Submission Site: cmt3.research.microsoft.com/XAI4CV2025

@cvprconference.bsky.social @xai-research.bsky.social
March 2, 2025 at 6:53 PM
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance?

In our #CHI2025 paper, we explore these questions through two user studies.

1/7
February 28, 2025 at 3:21 PM