Michelle L. Ding
michelleding.bsky.social
organizer/researcher critically investigating how AI systems impact communities. cs phd @ brown cntr. she/her.

🌷 https://michelle-ding.github.io/
💭 https://michellelding.substack.com/
Reposted by Michelle L. Ding
Today on @indicator.media: A first-of-its-kind audit of AI labels on major social platforms.
Tech platforms promised to label AI content. They're not delivering.
An Indicator audit of hundreds of synthetic images and videos reveals that platforms frequently fail to label AI content
indicator.media
October 23, 2025 at 12:45 PM
Hi friends! After much thinking & doodling, I'm excited to share my new substack "Finding Peace in an AI-Everywhere World" 🌷 🌏

Here is the first article based on some reflections I had at COLM: michellelding.substack.com/p/who-has-th...
Who has the luxury to think?
Researchers are responsible for more than just papers.
michellelding.substack.com
October 20, 2025 at 3:35 PM
Reposted by Michelle L. Ding
Technologies like synthetic data, evaluations, and red-teaming are often framed as enhancing AI privacy and safety. But what if their effects lie elsewhere?

In a new paper with @realbrianjudge.bsky.social at #EAAMO25, we pull back the curtain on AI safety's toolkit. (1/n)

arxiv.org/pdf/2509.22872
arxiv.org
October 17, 2025 at 9:09 PM
Reposted by Michelle L. Ding
I wrote a (personal) blog post about my hopes and dreams for AI policy, my devastation after the US Election, and my process of picking myself off the floor by rebuilding an optimistic vision for AI scientists in government through education: simons.berkeley.edu/news/rebuild...
Rebuilding an Optimistic Vision for AI Policy
Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. I was heartbroken not because of the rebuke to my political party...
simons.berkeley.edu
October 13, 2025 at 1:56 PM
Reposted by Michelle L. Ding
💡We kicked off the SoLaR workshop at #COLM2025 with a great opinion talk by @michelleding.bsky.social & Jo Gasior Kavishe (joint work with @victorojewale.bsky.social and @geomblog.bsky.social) on "Testing LLMs in a sandbox isn't responsible. Focusing on community use and needs is."
October 10, 2025 at 2:31 PM
Reposted by Michelle L. Ding
Have you or a loved one been misgendered by an LLM? How can we evaluate LLMs for misgendering? Do different evaluation methods give consistent results?
Check out our preprint led by the newly minted Dr. @arjunsubgraph.bsky.social, with Preethi Seshadri, Dietrich Klakow, Kai-Wei Chang, and Yizhou Sun
Agree to Disagree? A Meta-Evaluation of LLM Misgendering
Numerous methods have been proposed to measure LLM misgendering, including probability-based evaluations (e.g., automatically with templatic sentences) and generation-based evaluations (e.g., with aut...
arxiv.org
June 11, 2025 at 1:28 PM
Reposted by Michelle L. Ding
🚨 New preprint! 🚨
Excited to share my work: An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies 🤖🗳️

I’ll be presenting this at @colmweb.org in the NLP4Democracy workshop!

🔗 arxiv.org/abs/2509.12577
An AI-Powered Framework for Analyzing Collective Idea Evolution in Deliberative Assemblies
In an era of increasing societal fragmentation, political polarization, and erosion of public trust in institutions, representative deliberative assemblies are emerging as a promising democratic forum...
arxiv.org
September 17, 2025 at 5:40 PM
Hi #COLM2025! 🇨🇦 I will be presenting a talk on the importance of community-driven LLM evaluations based on an opinion abstract I wrote with Jo Kavishe, @victorojewale.bsky.social and @geomblog.bsky.social tomorrow at 9:30am in 524b for solar-colm.github.io

Hope to see you there!
Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025
COLM 2025 in-person Workshop, October 10th at the Palais des Congrès in Montreal, Canada
solar-colm.github.io
October 9, 2025 at 7:32 PM
Reposted by Michelle L. Ding
Very excited to be part of this new AI Institute that is being led by Ellie Pavlick @brown.edu and to be able to work with so many experts, including @datasociety.bsky.social

www.brown.edu/news/2025-07...
Brown University to lead national institute focused on intuitive, trustworthy AI assistants
A new institute, based at Brown and supported by a $20 million National Science Foundation grant, will convene researchers to guide development of a new generation of AI assistants for use in mental a...
www.brown.edu
July 29, 2025 at 3:26 PM
Reposted by Michelle L. Ding
I'll be presenting a position paper about consumer protection and AI in the US at ICML. I have a surprisingly optimistic take: our legal structures are stronger than I anticipated when I went to work on this issue in Congress.

Is everything broken rn? Yes. Will it stay broken? That's on us.
July 14, 2025 at 1:01 PM
Reposted by Michelle L. Ding
With their 'Sovereignty as a Service' offerings, tech companies are encouraging the illusion of a race for sovereign control of AI while being the true powers behind the scenes, write Rui-Jie Yew, Kate Elizabeth Creasey, and Suresh Venkatasubramanian.
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
www.techpolicy.press
July 7, 2025 at 1:07 PM
Reposted by Michelle L. Ding
Very excited to see this piece out in @techpolicypress.bsky.social today. This was written together with @r-jy.bsky.social and Kate Elizabeth Creasey (a historian here at Brown), and calls out what we think is a scary and interesting rhetorical shift.

www.techpolicy.press/sovereignty-...
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
www.techpolicy.press
July 7, 2025 at 1:50 PM
Reposted by Michelle L. Ding
So the EU AI Act passed. Companies have to comply. AI regulation is here to stay. Right? Right?

FAccT 2025 paper with @r-jy.bsky.social and Bill Marino (not on bsky) 📜 incoming! 1/n

arxiv.org/abs/2506.01931
Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act
The shape of AI regulation is beginning to emerge, most prominently through the EU AI Act (the "AIA"). By 2027, the AIA will be in full effect, and firms are starting to adjust their behavior in light...
arxiv.org
June 12, 2025 at 10:33 PM
Reposted by Michelle L. Ding
Excited to present "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" at #CHI2025 tomorrow (today)!

🗓 Tue, 29 Apr | 9:48–10:00 AM JST (Mon, 28 Apr | 8:48–9:00 PM ET)
📍 G401 (Pacifico North 4F)

📄 dl.acm.org/doi/10.1145/...
April 28, 2025 at 11:26 AM
Reposted by Michelle L. Ding
@michelleding.bsky.social has been doing amazing work laying out the complex landscape of "deepfake porn" and distilling the unique challenges in governing it. We hope this work informs future AI governance efforts to address the severe harms of this content - reach out to us to chat more!
April 25, 2025 at 6:42 PM
Excited to be presenting a new paper with @harinisuresh.bsky.social on the critical topic of technical prevention/governance of adult AI-generated non-consensual intimate images, aka "deepfake pornography," at #CHI2025 chi-staig.github.io on 4/27 10:15-11:15 JST arxiv.org/abs/2504.17663 🧵
The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, coll...
arxiv.org
April 25, 2025 at 5:41 PM
Reposted by Michelle L. Ding
Independent Bookstore Day - Saturday
April 25, 2025 at 3:42 AM
Extremely proud to finally launch the SRC Handbook: a project I began with @geomblog.bsky.social and Julia Netter a year ago to bring topics of AI governance, privacy, and accessibility into Brown's CS courses. We now have an interdisciplinary team of 22 students on product/research! 🌷
April 25, 2025 at 1:43 AM
Reposted by Michelle L. Ding
The 23andMe bankruptcy shows why data protection is important. But for genetic data, the problems are even more serious. Genetic data is used in so many places and collected so widely that there are dangerous leaks everywhere. So much so that we wrote a paper on it. arxiv.org/abs/2502.09716 1/n
arxiv.org
April 2, 2025 at 1:32 PM
Reposted by Michelle L. Ding
Excited to be joining a great lineup of speakers at the Technical AI Governance workshop in Vancouver this summer

If you are working on AI governance, definitely consider submitting!
#ICML2025
📣We’re thrilled to announce the first workshop on Technical AI Governance (TAIG) at #ICML2025 this July in Vancouver! Join us (& this stellar list of speakers) in bringing together technical & policy experts to shape the future of AI governance! www.taig-icml.com
April 1, 2025 at 5:05 PM
Reposted by Michelle L. Ding
We are excited to announce our 2025 Annual Subscriptions! Don't miss out on your chance to save on this year's titles.

For the first time, we have introduced an annual subscription specifically designed for North America!

www.tiltedaxispress.com/store/2025-uk-print-subscription
February 17, 2025 at 3:28 PM
Reposted by Michelle L. Ding
🚨Call for Presenters! 🚨
Last semester at the Center for Tech Responsibility, we had a speaker series consisting of grad students presenting their ongoing work on sociotechnical computing (broadly conceived). It was fantastic: a relaxed environment, brief presentations, and lots of discussion. 1/4
January 16, 2025 at 6:42 PM