Sauvik Das
sauvik.me
Sauvik Das
@sauvik.me
I work on human-centered {security|privacy|computing}. Associate Professor (w/o tenure) at @hcii.cmu.edu. Director of the SPUD (Security, Privacy, Usability, and Design) Lab. Non-Resident Fellow @cendemtech.bsky.social
Pinned
I created a starter pack for researchers who work at the nexus of HCI & cybersecurity / privacy here.

Please do let me know if you would like to be added to the list! I'm sure I've missed many folks.

go.bsky.app/RGsu5jn
In short: Quantifying privacy risks can help users make more informed decisions—but the UX needs to present risks in a manner that is interpretable and actionable to truly *empower* users, rather than scare them.

Thanks @NSF for supporting this work!
February 10, 2026 at 6:07 PM
(1) Pair risk flags with actionable guidance (how to preserve intent, reduce risk)
(2) Explain plausible attacker exploits (not just “risk: high”)
(3) Communicate risk without pushing unnecessary self-censorship
(4) Use intuitive language/visuals; avoid jargon
February 10, 2026 at 6:07 PM
Interestingly, no single UI for presenting PREs to users “won”.

Participants didn’t show a strong overall preference across the five designs (though “risk by disclosure” tended to be liked more; the meter less).

So what *should* PRE designs do? 4 design recommendations:
February 10, 2026 at 6:07 PM
…but sometimes PREs encouraged self-censorship.

A meaningful chunk of reflections ended with deleting the post, not posting at all, or even leaving the platform.
February 10, 2026 at 6:07 PM
Finding #2: PREs drove action (often good!).
In 66% of reflections, participants envisioned the user editing the post.

Most commonly: “evasive but still expressive” edits (changing details, generalizing, or removing pinpointing specifics).
February 10, 2026 at 6:07 PM
Finding #1: PREs often *shifted perspective*.
In ~74% of reflections, participants expected higher privacy awareness / risk concern.

…but awareness came with emotional costs.
Many participants anticipated anxiety, frustration, or feeling stuck about trade-offs.
February 10, 2026 at 6:07 PM
The 5 concepts ranged from raw scores to richer, more interpretable framings:

(1) raw k-anonymity score
(2) a re-identifiability “meter”
(3) low/med/high simplified risk
(4) threat-specific risk
(5) “risk by disclosure” (which details contribute most)
February 10, 2026 at 6:07 PM
Method: speculative design + design fictions.

We storyboarded 5 PRE UI concepts using comic-boards (different ways to show risk + what’s driving it).
February 10, 2026 at 6:07 PM
The core design question:

How should PREs be presented so they help people make better disclosure decisions… *without* nudging them into unnecessary self-censorship?

We don't want people to stop posting — we want them to make informed disclosure decisions accounting for risks.
February 10, 2026 at 6:07 PM
This paper explores how to present “population risk estimates” (PREs): an AI-driven estimate of how uniquely identifiable you are based on your disclosures.

Smaller "k" (the number of people who match everything you've disclosed) means you're more identifiable: k=1 means only 1 person matches.
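
For intuition, here's a minimal sketch (in Python) of the k idea over a toy, structured population; the attribute names and data are hypothetical, and the paper's AI-driven estimator works over free-text disclosures rather than a table like this. The leave-one-out helper also illustrates the "risk by disclosure" concept from the design list above.

```python
from typing import Dict, List

def k_anonymity(disclosed: Dict[str, str], population: List[Dict[str, str]]) -> int:
    """Count how many people in the population match *all* disclosed attributes.
    k = 1 means the disclosures single you out; larger k means you blend in."""
    return sum(
        all(person.get(attr) == value for attr, value in disclosed.items())
        for person in population
    )

def risk_by_disclosure(disclosed: Dict[str, str], population: List[Dict[str, str]]) -> Dict[str, int]:
    """Leave-one-out: how much would k recover if a given detail were withheld?
    A large jump means that detail is driving most of the re-identification risk."""
    return {
        attr: k_anonymity({a: v for a, v in disclosed.items() if a != attr}, population)
        for attr in disclosed
    }

# Hypothetical toy example
population = [
    {"city": "Pittsburgh", "job": "professor", "age_band": "30s"},
    {"city": "Pittsburgh", "job": "professor", "age_band": "40s"},
    {"city": "Pittsburgh", "job": "barista",   "age_band": "30s"},
]
disclosed = {"city": "Pittsburgh", "job": "professor", "age_band": "30s"}

print(k_anonymity(disclosed, population))         # 1 -> uniquely identifying
print(risk_by_disclosure(disclosed, population))  # withholding "job" or "age_band" raises k to 2
```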
February 10, 2026 at 6:07 PM
This paper is the latest in a productive collaboration between my lab, @cocoweixu, and @alan_ritter.

ACL'24 -> a SOTA self-disclosure detection model
CSCW'25 -> a human-AI collaboration study of disclosure risk mitigation
NeurIPS'25 -> a method to quantify self-disclosure risk
February 10, 2026 at 6:07 PM
📣 New at #CHI2026
People share sensitive things “anonymously”… but anonymity is hard to reason about.

What if we could quantify re-identification risk with AI? How should we present those AI-estimated risks to users?

Led by my student Isadora Krsek

Paper: www.sauvik.me/papers/70/s...
February 10, 2026 at 6:07 PM
Check out the paper! It's one of the coolest papers from my lab in that it includes both a fully working system *and* a very comprehensive mixed-methods evaluation. Still had a reviewer who wanted even more, but c'est la vie 😂

www.sauvik.me/papers/69/s...

Thanks for the support @NSF!
February 9, 2026 at 7:13 PM
Thus, even though LLM assistance improved outputs, it also raised practitioners' expectations of what the AI would handle for them and made the manual work they *did* have to do feel extra burdensome. A stark design tension for the future of AI-assisted work.
February 9, 2026 at 7:13 PM
A surprising aside: we added a number of design frictions to Privy-LLM to encourage critical thinking. As a result, some practitioners rated Privy-LLM as *less helpful* than other participants rated the plain static template, even though the template left them to do much more of the work manually.
February 9, 2026 at 7:13 PM
Key detail: experts also rated the LLM-condition mitigations as especially good “conversation starters”—i.e., credible enough to bring to a product team and use to kick off real mitigation planning.

This could help bring privacy and product teams closer together.
February 9, 2026 at 7:13 PM
But outputs from the LLM-supported version were rated higher quality overall: clearer, more correct, flagging more relevant and severe risks, and with mitigation plans that experts saw as more effective and more product-specific.
February 9, 2026 at 7:13 PM
Both versions enabled practitioners to produce strong privacy impact assessments (as judged by experts). So the scaffolding itself mattered, irrespective of the AI support provided.
February 9, 2026 at 7:13 PM
We recruited 24 industry practitioners to use one of the two versions (between-subjects). Their assessments were then rated by 13 independent privacy experts across multiple quality dimensions.
February 9, 2026 at 7:13 PM
We made two versions:
A) an interactive LLM-assisted Privy (w/ intentional design frictions to encourage critical thinking)
B) a structured worksheet modeled after existing PIAs

Same underlying workflow—one with AI support and one without.
February 9, 2026 at 7:13 PM
Privy then helps folks:
• articulate how each risk could show up in this specific product
• prioritize what is most relevant and severe
• draft mitigations that protect people without flattening the feature’s utility.
February 9, 2026 at 7:13 PM
In a vacuum, it's hard to anticipate how a product will lead to privacy risks.

Privy guides folks through a workflow to articulate:
• who uses the product + who’s affected
• what the AI can do
• what data it needs / produces

→ then maps that to the AI privacy taxonomy.
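
To make the workflow concrete, here's a rough sketch of the kind of structured intake it implies; the field names, taxonomy label, and example product are illustrative placeholders, not Privy's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProductDescription:
    """What a practitioner articulates before any risk mapping."""
    users_and_affected_parties: List[str]
    ai_capabilities: List[str]
    data_inputs: List[str]
    data_outputs: List[str]

@dataclass
class RiskEntry:
    """One candidate risk, mapped from the product description to the taxonomy."""
    taxonomy_category: str          # a category from the AI privacy risk taxonomy (label illustrative)
    product_specific_scenario: str  # how the risk could show up in *this* product
    severity: str                   # prioritization, e.g. "low" / "medium" / "high"
    mitigation: str                 # draft mitigation that preserves the feature's utility

# Hypothetical example: an AI meeting-summarizer feature
description = ProductDescription(
    users_and_affected_parties=["employees in the meeting", "people mentioned but not present"],
    ai_capabilities=["transcribe audio", "summarize and attribute statements"],
    data_inputs=["meeting audio", "participant names"],
    data_outputs=["searchable transcripts", "per-person summaries"],
)

risks = [
    RiskEntry(
        taxonomy_category="exposure (illustrative label)",
        product_specific_scenario="Summaries attribute off-hand remarks to named people who never consented.",
        severity="high",
        mitigation="Redact or generalize mentions of non-participants before summaries are stored.",
    ),
]
```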
February 9, 2026 at 7:13 PM
In prior work, we introduced a taxonomy of AI privacy risks (CHI'24 best paper) and found that practitioners face significant awareness, motivation, and ability barriers when engaging in AI privacy work (USENIX SEC distinguished paper).

Privy is a follow-up to this line of work.
February 9, 2026 at 7:12 PM
📣 New at #CHI2026

Developing a new AI product? How would you figure out what the privacy risks are?

Privy helps non-privacy-expert practitioners create high-quality privacy impact assessments for early-stage AI products.

Led by @hankhplee.bsky.social
Paper: www.sauvik.me/papers/69/s...
February 9, 2026 at 7:12 PM
Reposted by Sauvik Das
New @acm.org U.S. Tech Policy Committee response to DHS proposed visa rule collecting social media data identifies serious risks to U.S. competitiveness in computer science research:
www.acm.org/binaries/con...

#TechPolicy
January 21, 2026 at 10:20 PM