Willie Agnew
@willie-agnew.bsky.social
Queer in AI 🏳️‍🌈 | postdoc at cmu HCII | ostem |william-agnew.com | views my own | he/they
We are studying the sentiments of visual artists towards generative AI in the workplace and their impacts on creative careers. If you're an artist, please consider filling out this recruitment form for access to our survey!
cmu.ca1.qualtrics.com/jfe/form/SV_...
December 19, 2025 at 1:58 AM
Reposted by Willie Agnew
My professional artists peers! We got a survey request!

Researchers at Carnegie Mellon University are conducting research to see how generative AI has (or has not) impacted your work and/or sentiments in the industry. Folks in creative fields only, please consider filling this out 👇
cmu.ca1.qualtrics.com
December 18, 2025 at 10:57 PM
Honored that our paper "How do data owners say no? A case study of data consent mechanisms in web-scraped vision-language AI training datasets" was recently presented as an oral at the NeurIPS Workshop on Regulatable ML! arxiv.org/pdf/2511.08637 1/
December 16, 2025 at 12:02 PM
We recently organized the algorithmic collective action workshop at NeurIPS! At a time when a small number of very powerful people seem to be controlling AI, this workshop asks how regular people can have agency over AI and algorithms. Check out our talks and accepted papers! acaworkshop.github.io
About the workshop – ACA@NeurIPS
Algorithmic Collective Action A Workshop co-located with NeurIPS 2025. Saturday, December 6, San Diego Convention Center, Upper Level Room 4.
acaworkshop.github.io
December 15, 2025 at 12:07 PM
Our recent policy comment to the FDA got a really thoughtful analysis in Forbes: www.forbes.com/sites/lancee...

Be sure to check out our comment: hai.stanford.edu/policy/respo...
www.forbes.com
December 14, 2025 at 12:05 PM
We recently submitted a response to an FDA RFC on AI and medical devices. We discuss some immediate steps the FDA and other regulators should take to reduce harm from LLMs being used for therapy, and how we can create more transparency and accountability: hai.stanford.edu/policy/respo... 1/
Response to FDA's Request for Comment on AI-Enabled Medical Devices | Stanford HAI
Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.
hai.stanford.edu
December 13, 2025 at 6:26 PM
Last week at neurips I helped organize the Queer in AI workshop (www.queerinai.com/neurips-2025) (which included programming in Mexico City and EurIPS thanks to Queer in AI's widespread and dedicated volunteers). We had talks and panels on AI/tech that actually helps queer people, and on AI policy!🌈
NeurIPS 2025 — Queer in AI
www.queerinai.com
December 10, 2025 at 6:01 PM
Reposted by Willie Agnew
🗣️ 🗣️ Happening in one hour! You don't want to miss this Q&A. Register now. 👇👇👇
Join us for a Q&A session tomorrow at 1pm ET with a few of the authors of "ENACTing Change: A Handbook for Teaching Advocacy and Civic Engagement." This session will provide examples of how to bring civic engagement into the classroom!

🔗 Register: scholars.my.salesforce-sites.com/event/home/s...
December 10, 2025 at 5:01 PM
One of those weeks where academia feels like this 😵‍💫
December 5, 2025 at 8:58 PM
Reposted by Willie Agnew
Did you know that one base model is responsible for 94% of model-tagged NSFW AI videos on CivitAI?

This new paper studies how a small number of models power the non-consensual AI video deepfake ecosystem and why their developers could have predicted and mitigated this.
December 4, 2025 at 5:32 PM
Queer in AI is hosting a workshop at neurips tomorrow! I'm really excited for this program, covering a dazzling array of positive uses of tech/AI for queer people (including some we're making at Queer in AI), AI policy, and critiques of AI and tech. www.queerinai.com/neurips-2025
NeurIPS 2025 — Queer in AI
www.queerinai.com
December 1, 2025 at 11:22 PM
Reposted by Willie Agnew
Heading to @neuripsconf.bsky.social this week in San Diego! Catch me at the Queer in AI and Algorithmic Collective Action workshops. Otherwise, I'm around!
December 1, 2025 at 7:56 PM
Reposted by Willie Agnew
We have less than one week until Queer in AI's programming at NeurIPS 2025 kicks off! 🌈 ✨
1. 📝 JOINT POSTER SESSION: Tue, Dec 02 6 pm - 9 pm
2. 💐 WORKSHOP: Thu, Dec 04 9 am - 5:30 pm
3. 🥂 SOCIAL: Thu, Dec 04 7 - 11 pm

For more details, go to queerinai.com/neurips-2025. See y'all there! 🌟
November 28, 2025 at 6:08 AM
Algorithmic cartel formation is free speech 😵‍💫
BREAKING: RealPage is suing New York, challenging a new state law that bans landlords from using algorithms to set rents.

RealPage claims that its software, which landlords have used to collude on rents, is protected by the First Amendment.
November 26, 2025 at 8:20 PM
DocuSign offering AI-generated summaries of contracts is wild; that's going to get someone to sign something very different from what they thought they signed
November 26, 2025 at 12:21 AM
Reposted by Willie Agnew
We are thrilled to welcome @willie-agnew.bsky.social of @cmu.edu to the network! Agnew’s research uses audits, human subjects research, and critical analysis to predict and understand the impacts of AI on the world.

Learn more from his SSN member profile: scholars.org/scholar/will...
November 24, 2025 at 3:07 PM
The competitive pressures to have an addictive chatbot that will severely harm some people are disturbing. It's also wild that we're having to take OpenAI's people at their word on addressing mental health harms. We need, at a minimum, vastly more transparency! www.nytimes.com/2025/11/23/t...
What OpenAI Did When ChatGPT Users Lost Touch With Reality
www.nytimes.com
November 23, 2025 at 9:23 PM
Reposted by Willie Agnew
🚨🚨LESS THAN TWO WEEKS UNTIL QueerInAI @ NeurIPS 2025!! 🌈✨

We are excited to see you all in beautiful San Diego 🏖️, we have an incredible program planned for you!

More details on: www.queerinai.com/neurips-2025
November 21, 2025 at 10:04 AM
Reposted by Willie Agnew
Background and resources for journalists covering #TDOR today: Transgender Day of Remembrance is observed in recognition of the 1998 murder of Rita Hester, a highly visible member of the transgender community in Boston where she worked on education around transgender issues glaad.org/publications...
Transgender Day of Remembrance Resource Kit for Journalists | GLAAD
IntroductionTransgender Day of Remembrance, which honors the memory of those murdered in acts of anti-transgender violence, is recognized annually on November 20. GLAAD encourages journalists to mark ...
glaad.org
November 20, 2025 at 11:56 AM
Reposted by Willie Agnew
#1 🌈✨ Queer in AI is organising an Affinity Workshop at @EurIPSConf on 📅 5th December, 2025!
Join us for talks, discussions, socials and our hands-on AI auditing session to collectively build a more inclusive and accountable AI future
🌐 Website: www.queerinai.com/eurips-2025
#QueerInAI #EurIPS
EurIPS 2025 — Queer in AI
www.queerinai.com
November 18, 2025 at 7:27 AM
Reposted by Willie Agnew
Holy shit. Noam Shazeer, one of the original authors on the "Attention is All You Need" paper and Character.AI founder, came out as a major transphobe. Like Trumpian levels of "this is child mutilation" transphobia.

(via The Information)
November 7, 2025 at 2:17 PM
Reposted by Willie Agnew
Thinking only of Rosalind Franklin today, and what was stolen from her (and so many other female scientists alongside her).
Rosalind Franklin and the damage of gender harassment
Spurred by a recent report on sexual harassment in academia, our columnist revisits a historical case and reflects on what has changed—and what hasn’t
www.science.org
November 7, 2025 at 7:58 PM
Reposted by Willie Agnew
Thinking again how perverse it is that Musk was given a $1T salary as his cuts to USAID have led to 600,000 deaths. What a shameful world.
November 7, 2025 at 1:18 PM
Reposted by Willie Agnew
We live in dark times.
‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself
www.cnn.com/2025/11/06/u...
ChatGPT encouraged college graduate to commit suicide, family claims in lawsuit against OpenAI | CNN
A 23-year-old man killed himself in Texas after ChatGPT ‘goaded’ him to commit suicide, his family says in a lawsuit.
www.cnn.com
November 7, 2025 at 1:40 AM
I presented this morning at the FDA meeting on Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices on our work evaluating chatbots purporting to provide therapy (dl.acm.org/doi/pdf/10.1...). We find chatbots often give incorrect and potentially dangerous responses 1/
dl.acm.org
November 6, 2025 at 4:51 PM