Please do let me know if you would like to be added to the list! I'm sure I've missed many folks.
go.bsky.app/RGsu5jn
Thanks @NSF for supporting this work!
(2) Explain plausible attacker exploits (not just “risk: high”)
(3) Communicate risk without pushing unnecessary self-censorship
(4) Use intuitive language/visuals; avoid jargon
Participants didn’t show a strong overall preference across the five designs (though “risk by disclosure” tended to be liked more; the meter less).
So what *should* PRE designs do? 4 design recommendations:
A meaningful chunk of reflections ended with deleting the post, not posting at all, or even leaving the platform.
In 66% of reflections, participants envisioned the user editing the post.
Most commonly: “evasive but still expressive” edits (change details, generalize, remove a pinpointing detail).
In ~74% of reflections, participants expected higher privacy awareness / risk concern.
…but awareness came with emotional costs.
Many participants anticipated anxiety, frustration, or feeling stuck about trade-offs.
(1) raw k-anonymity score
(2) a re-identifiability “meter”
(3) low/med/high simplified risk
(4) threat-specific risk
(5) “risk by disclosure” (which details contribute most)
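To make these five formats concrete, here's a rough sketch of how one underlying estimate (a k-anonymity count, explained further down the thread) could be rendered as formats (1)–(3). The thresholds, population size, and function name are all invented for illustration; this is not the designs' actual logic.

```python
# Hypothetical sketch: one identifiability estimate (k) rendered three ways.
# Thresholds and population size are invented for illustration only.

def present_risk(k: int, population_size: int = 100_000) -> dict:
    # (2) meter: 0.0 = blends into the crowd, 1.0 = effectively unique
    meter = 1.0 - min(k, population_size) / population_size
    # (3) simplified buckets (cutoffs are made up)
    level = "high" if k <= 5 else "medium" if k <= 50 else "low"
    return {
        "raw_k": k,                # (1) raw k-anonymity score
        "meter": round(meter, 3),  # (2) re-identifiability meter
        "simplified": level,       # (3) low/med/high
    }

print(present_risk(k=3))  # {'raw_k': 3, 'meter': 1.0, 'simplified': 'high'}
```

Formats (4) and (5) would need extra context beyond a single number: an assumed attacker, and per-detail contributions (one plausible way to get the latter is sketched below with the k-anonymity example).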
We storyboarded 5 PRE UI concepts using comic-boards (different ways to show risk + what’s driving it).
How should PREs be presented so they help people make better disclosure decisions… *without* nudging them into unnecessary self-censorship?
We don't want people to stop posting — we want them to make informed disclosure decisions accounting for risks.
Smaller “k” means you're more identifiable (e.g., k=1 means only 1 person matches everything you have disclosed)
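A minimal sketch of that intuition (toy population and made-up attribute names; not the paper's actual estimator). The leave-one-out loop at the end is one plausible way to get “risk by disclosure”: drop each detail and see how much k recovers.

```python
# Toy k-anonymity sketch -- the population and attributes are fabricated.
def k_anonymity(disclosed: dict, population: list[dict]) -> int:
    """How many people match *everything* disclosed? Smaller k = more identifiable."""
    return sum(
        all(p.get(attr) == val for attr, val in disclosed.items())
        for p in population
    )

population = [
    {"city": "Atlanta", "job": "nurse",   "age_band": "30s"},
    {"city": "Atlanta", "job": "nurse",   "age_band": "40s"},
    {"city": "Atlanta", "job": "teacher", "age_band": "30s"},
]
disclosed = {"city": "Atlanta", "job": "nurse", "age_band": "30s"}
print(k_anonymity(disclosed, population))  # 1 -> uniquely identifiable here

# One plausible "risk by disclosure" signal: remove each detail and see
# how much k recovers. Details whose removal raises k most contribute most.
for attr in disclosed:
    rest = {a: v for a, v in disclosed.items() if a != attr}
    print(attr, "-> k without it:", k_anonymity(rest, population))
```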
ACL'24 -> a SOTA self-disclosure detection model
CSCW'25 -> a human-AI collaboration study of disclosure risk mitigation
NeurIPS'25 -> a method to quantify self-disclosure risk
People share sensitive things “anonymously”… but anonymity is hard to reason about.
What if we could quantify re-identification risk with AI? How should we present those AI-estimated risks to users?
Led by my student Isadora Krsek
Paper: www.sauvik.me/papers/70/s...
www.sauvik.me/papers/69/s...
Thanks for the support @NSF!
This could help bring privacy and product teams closer together.
A) an interactive LLM-assisted Privy (w/ intentional design friction to encourage critical thinking)
B) a structured worksheet modeled after existing privacy impact assessments (PIAs)
Same underlying workflow: one with AI support and one without.
• articulate how each risk could show up in this specific product
• prioritize what is most relevant and severe
• draft mitigations that protect people without flattening the feature’s utility
Privy guides folks through a workflow to articulate:
• who uses the product + who’s affected
• what the AI can do
• what data it needs / produces
→ then maps that to the AI privacy taxonomy.
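Roughly, you can think of the workflow's output as a structured product description that gets matched against taxonomy categories. A sketch of that shape, where the field names, risk categories, and trigger rules are placeholders I made up, not Privy's actual schema:

```python
from dataclasses import dataclass

# Placeholder schema -- fields and risk categories are invented for
# illustration; the real taxonomy comes from the prior work cited below.
@dataclass
class ProductDescription:
    users_and_affected: list[str]  # who uses the product + who's affected
    ai_capabilities: list[str]     # what the AI can do
    data_in: list[str]             # what data it needs
    data_out: list[str]            # what data it produces

TAXONOMY_TRIGGERS = {
    "surveillance": lambda p: any("location" in d.lower() for d in p.data_in),
    "exposure":     lambda p: any("generate" in c.lower() for c in p.ai_capabilities),
}

def map_to_taxonomy(product: ProductDescription) -> list[str]:
    return [risk for risk, applies in TAXONOMY_TRIGGERS.items() if applies(product)]

demo = ProductDescription(
    users_and_affected=["drivers", "pedestrians"],
    ai_capabilities=["generate route summaries"],
    data_in=["GPS location traces"],
    data_out=["route summaries"],
)
print(map_to_taxonomy(demo))  # ['surveillance', 'exposure']
```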
Privy is a follow-up to this line of work.
Developing a new AI product? How would you figure out what the privacy risks are?
Privy helps non-privacy-expert practitioners create high-quality privacy impact assessments for early-stage AI products.
Led by @hankhplee.bsky.social
Paper: www.sauvik.me/papers/69/s...
www.acm.org/binaries/con...
#TechPolicy