Michelle L. Ding
@michelleding.bsky.social
organizer/researcher critically investigating how AI systems impact communities. cs phd @ brown cntr. she/her.

🌷 https://michelle-ding.github.io/
💭 https://michellelding.substack.com/
Reposted by Michelle L. Ding
We released a new report in partnership with the Center for Tech Responsibility at Brown University on how policymakers and researchers can better analyze AI legislation to protect our civil rights and liberties.
Making Sense of AI Policy Using Computational Tools | TechPolicy.Press
A new report examines how to use computational tools to evaluate policy, with AI policy as a case study.
www.techpolicy.press
January 10, 2026 at 10:37 PM
Reposted by Michelle L. Ding
Today on @indicator.media's free weekly briefing: The staggering impunity of xAI, which turned its abusive image generator on its own users in full view and has barely done anything to contain it.
Briefing: Grok brings nonconsensual image abuse to the masses
Plus: a new feature in the Meta Ad Library and a new Telegram investigation tool.
indicator.media
January 9, 2026 at 3:50 PM
Reposted by Michelle L. Ding
New post by @michelleding.bsky.social on resources for the Brown community in the aftermath of the shooting. open.substack.com/pub/michelle...
Caring for yourself and each other
Resources for the Brown community, friends, family, loved ones and how to support us
open.substack.com
December 21, 2025 at 3:51 AM
@mantzarlis.com and folks at @indicator.media have done incredible reporting & investigation on the AI nudification ecosystem that I'm constantly citing in my research on AIG-NCII - appreciate all the work you do!
Today on Indicator: 2025 has been a banner year for AI nudifiers. I found another 9,000 ads on Meta since my last report, bringing the total for this year to 25,000. The top 10 nudifying websites got 10 million views in October.
Nonconsensual nude generators had another banner year. What will it take to defeat them?
Deplatforming the companies, debilitating the technology, and deterring the users
indicator.media
December 4, 2025 at 8:45 PM
Very glad to be a part of a new paper detailing how developers and developer platforms can prevent AIG-NCII, a form of image-based sexual abuse that disproportionately harms women and girls. Thanks to all the collaborators and Max Kamachee & @scasper.bsky.social for leading this important project!
Did you know that one base model is responsible for 94% of model-tagged NSFW AI videos on CivitAI?

This new paper studies how a small number of models power the non-consensual AI video deepfake ecosystem and how their developers could have predicted and mitigated this.
December 4, 2025 at 8:38 PM
Reposted by Michelle L. Ding
Thanks to collaborators! This was a really interesting paper for me to work on, and it took a special group of interdisciplinary people to get it done.
Max Kamachee
@r-jy.bsky.social
@michelleding.bsky.social
@ankareuel.bsky.social
@stellaathena.bsky.social
@dhadfieldmenell.bsky.social
December 4, 2025 at 5:32 PM
Reposted by Michelle L. Ding
ACM members (and computing researchers who should be members!) interested in contributing should join the subcommittee's mailing list!

One of our goals here is to build policy coalitions across institutions so we can do more as a collective 💪 and balance special interest groups.
November 25, 2025 at 5:43 PM
Reposted by Michelle L. Ding
@reniebird.bsky.social and I have just been appointed to co-chair @TheOfficialACM's US Technology Policy Committee's Subcommittee on AI and Algorithms. cs.brown.edu/news/2025/11...
Serena Booth And Suresh Venkatasubramanian Co-Chair ACM’s US Technology Policy Committee’s Subcommittee On AI And Algorithms
Brown CS faculty members Serena Booth and Suresh Venkatasubramanian have just been appointed to co-chair the AI and Algorithms Subcommittee, whose recent work includes responses to government RFIs, te...
cs.brown.edu
November 25, 2025 at 5:31 PM
Reposted by Michelle L. Ding
PSA: tips to protect yourself from scams on Signal.

Every major comms platform has to contend w phishing, impersonation, & scams. Sadly.

Signal is major, and as we've grown we've heard about more of these attacks--scammy people pretending to be something or someone to trick and abuse others. 1/
November 11, 2025 at 6:13 PM
Reposted by Michelle L. Ding
Today on @indicator.media: A first-of-its-kind audit of AI labels on major social platforms.
Tech platforms promised to label AI content. They're not delivering.
An Indicator audit of hundreds of synthetic images and videos reveals that platforms frequently fail to label AI content
indicator.media
October 23, 2025 at 12:45 PM
Hi friends! After much thinking & doodling, I'm excited to share my new substack "Finding Peace in an AI-Everywhere World" 🌷 🌏

Here is the first article based on some reflections I had at COLM: michellelding.substack.com/p/who-has-th...
Who has the luxury to think?
Researchers are responsible for more than just papers.
michellelding.substack.com
October 20, 2025 at 3:35 PM
Reposted by Michelle L. Ding
Technologies like synthetic data, evaluations, and red-teaming are often framed as enhancing AI privacy and safety. But what if their effects lie elsewhere?

In a new paper with @realbrianjudge.bsky.social at #EAAMO25, we pull back the curtain on AI safety's toolkit. (1/n)

arxiv.org/pdf/2509.22872
arxiv.org
October 17, 2025 at 9:09 PM
Reposted by Michelle L. Ding
I wrote a (personal) blog post about my hopes and dreams for AI policy, my devastation after the US Election, and my process of picking myself off the floor by rebuilding an optimistic vision for AI scientists in government through education: simons.berkeley.edu/news/rebuild...
Rebuilding an Optimistic Vision for AI Policy
Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was heartbroken not because of the rebuke to my political party...
simons.berkeley.edu
October 13, 2025 at 1:56 PM
Reposted by Michelle L. Ding
💡We kicked off the SoLaR workshop at #COLM2025 with a great opinion talk by @michelleding.bsky.social & Jo Gasior Kavishe (joint work with @victorojewale.bsky.social and @geomblog.bsky.social) on "Testing LLMs in a sandbox isn't responsible. Focusing on community use and needs is."
October 10, 2025 at 2:31 PM
Reposted by Michelle L. Ding
Have you or a loved one been misgendered by an LLM? How can we evaluate LLMs for misgendering? Do different evaluation methods give consistent results?
Check out our preprint led by the newly minted Dr. @arjunsubgraph.bsky.social, and with Preethi Seshadri, Dietrich Klakow, Kai-Wei Chang, and Yizhou Sun.
Agree to Disagree? A Meta-Evaluation of LLM Misgendering
Numerous methods have been proposed to measure LLM misgendering, including probability-based evaluations (e.g., automatically with templatic sentences) and generation-based evaluations (e.g., with aut...
arxiv.org
June 11, 2025 at 1:28 PM
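To make the contrast in that preprint concrete: a minimal sketch (not the paper's code; "gpt2" is a stand-in model) of the two evaluation families it compares. The probability-based check scores competing pronoun continuations under a templatic sentence, while the generation-based check inspects what the model actually writes:

```python
# Illustrative sketch of probability-based vs. generation-based
# misgendering evaluations. Model and template are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def continuation_logprob(prefix: str, continuation: str) -> float:
    """Probability-based eval: total log-prob of a continuation given the prefix."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    full_ids = tok(prefix + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    # Sum log-probs of only the continuation tokens.
    total = 0.0
    for i in range(prefix_ids.shape[1], full_ids.shape[1]):
        total += logprobs[0, i - 1, full_ids[0, i]].item()
    return total

prefix = "Alex uses they/them pronouns. Alex said that"

# Probability-based, templatic: does the model prefer the correct pronoun?
print("they:", continuation_logprob(prefix, " they"))
print("he:  ", continuation_logprob(prefix, " he"))

# Generation-based: produce text and inspect the pronouns it uses.
ids = tok(prefix, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=20, do_sample=False,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][ids.shape[1]:]))
```

The paper's question is whether these two styles agree; running both on the same instance, as above, is the minimal version of that comparison.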
Reposted by Michelle L. Ding
🚨 New preprint! 🚨
Excited to share my work: An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies 🤖🗳️

I’ll be presenting this at @colmweb.org in the NLP4Democracy workshop!

🔗 arxiv.org/abs/2509.12577
An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies
In an era of increasing societal fragmentation, political polarization, and erosion of public trust in institutions, representative deliberative assemblies are emerging as a promising democratic forum...
arxiv.org
September 17, 2025 at 5:40 PM
Hi #COLM2025! 🇨🇦 I will be presenting a talk on the importance of community-driven LLM evaluations based on an opinion abstract I wrote with Jo Kavishe, @victorojewale.bsky.social and @geomblog.bsky.social tomorrow at 9:30am in 524b for solar-colm.github.io

Hope to see you there!
Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025
COLM 2025 in-person Workshop, October 10th at the Palais des Congrès in Montreal, Canada
solar-colm.github.io
October 9, 2025 at 7:32 PM
Reposted by Michelle L. Ding
Very excited to be part of this new AI Institute that is being led by Ellie Pavlick @brown.edu and to be able to work with so many experts, including @datasociety.bsky.social

www.brown.edu/news/2025-07...
Brown University to lead national institute focused on intuitive, trustworthy AI assistants
A new institute, based at Brown and supported by a $20 million National Science Foundation grant, will convene researchers to guide development of a new generation of AI assistants for use in mental a...
www.brown.edu
July 29, 2025 at 3:26 PM
Reposted by Michelle L. Ding
I'll be presenting a position paper about consumer protection and AI in the US at ICML. I have a surprisingly optimistic take: our legal structures are stronger than I anticipated when I went to work on this issue in Congress.

Is everything broken rn? Yes. Will it stay broken? That's on us.
July 14, 2025 at 1:01 PM
Reposted by Michelle L. Ding
With their 'Sovereignty as a Service' offerings, tech companies are encouraging the illusion of a race for sovereign control of AI while being the true powers behind the scenes, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
www.techpolicy.press
July 7, 2025 at 1:07 PM
Reposted by Michelle L. Ding
Very excited to see this piece out in @techpolicypress.bsky.social today. This was written together with @r-jy.bsky.social and Kate Elizabeth Creasey (a historian here at Brown), and calls out what we think is a scary and interesting rhetorical shift.

www.techpolicy.press/sovereignty-...
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
www.techpolicy.press
July 7, 2025 at 1:50 PM
Reposted by Michelle L. Ding
So the EU AI Act passed. Companies have to comply. AI regulation is here to stay. Right? Right?

FAccT 2025 paper with @r-jy.bsky.social and Bill Marino (not on bsky) 📜 incoming! 1/n

arxiv.org/abs/2506.01931
Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act
The shape of AI regulation is beginning to emerge, most prominently through the EU AI Act (the "AIA"). By 2027, the AIA will be in full effect, and firms are starting to adjust their behavior in light...
arxiv.org
June 12, 2025 at 10:33 PM
Reposted by Michelle L. Ding
Excited to present "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" at #CHI2025 tomorrow (today)!

🗓 Tue, 29 Apr | 9:48–10:00 AM JST (Mon, 28 Apr | 8:48–9:00 PM ET)
📍 G401 (Pacifico North 4F)

📄 dl.acm.org/doi/10.1145/...
April 28, 2025 at 11:26 AM
Reposted by Michelle L. Ding
@michelleding.bsky.social has been doing amazing work laying out the complex landscape of "deepfake porn" and distilling the unique challenges in governing it. We hope this work informs future AI governance efforts to address the severe harms of this content - reach out to us to chat more!
April 25, 2025 at 6:42 PM
Excited to be presenting a new paper with @harinisuresh.bsky.social on the extremely critical topic of technical prevention/governance of AI-generated non-consensual intimate images of adults, aka "deepfake pornography," at #CHI2025 chi-staig.github.io on 4/27 10:15-11:15 JST arxiv.org/abs/2504.17663 🧵
The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, coll...
arxiv.org
April 25, 2025 at 5:41 PM