Margaret Mitchell
@mmitchell.bsky.social

Researcher trying to shape AI towards positive outcomes. ML & Ethics +birds. Generally trying to do the right thing. TIME 100 | TED speaker | Senate testimony provider | Navigating public life as a recluse.
Former: Google, Microsoft; Current: Hugging Face

Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is most well known for her work on automatically removing undesired biases concerning demographic groups from machine learning models, as well as more transparent reporting of their intended use.

For many, the end of the year is an opportunity to catch up on reading or to purchase books as gifts. In 2025, a number of authors joined the Tech Policy Press podcast, providing fresh insights into how technology interacts with people, politics, and power. Check out the list:
Tech Policy Press: The Year in Books 2025 | TechPolicy.Press
In 2025, a number of authors joined the Tech Policy Press podcast, providing fresh insights into how technology interacts with people, politics, and power.
www.techpolicy.press

My recent OpEd actually may be helpful here.
It’s a really common mistake in the zeitgeist atm.
www.technologyreview.com/2025/12/15/1...
Generative AI hype distracts us from AI’s more important breakthroughs
It's a seductive distraction from the advances in AI that are most likely to improve or even save your life
www.technologyreview.com

One of my deep psycholinguistic concerns about AI chatbots is the use of self-referential language--chatbots using pronouns like "I"/"me"--which subliminally assert a sentient mind ("It says 'I think', therefore, it is"). @kashhill.bsky.social brilliantly unpacks: www.nytimes.com/2025/12/19/t...

Here's what the UI is doing for me now (which may very well be them quickly addressing the clear pushback).

I wonder if writing ACM formally about this might be helpful. I reckon many authors would be happy to sign on.
The ACM Digital Library, where a LOT of computing-related research is published (I'd say at least 75% of my own publications), is now providing AI-generated summaries of papers, without the consent of the authors and without opt-in by readers, and they appear as the *default* over abstracts.
Was really challenging to participate in this, but I think I was able to pen something real, important, and personal to my experience in AI. Remember that GenAI is not *all* of AI nor all of what it should be.
www.technologyreview.com/2025/12/15/1...
And look, here's the thing, AI *can be* amazing. But the hype over generative large language models obscures the really profound, meaningful breakthroughs, as @mmitchell.bsky.social breaks down for us here: www.technologyreview.com/2025/12/15/1...
Generative AI hype distracts us from AI’s more important breakthroughs
It's a seductive distraction from the advances in AI that are most likely to improve or even save your life
www.technologyreview.com

Reposted by Margaret Mitchell

ICYMI: @alexshultz.bsky.social sifted through over 6,000 pages of court filings for lawsuits alleging social media giants are ignoring child safety concerns for the sake of growth and engagement.

The internal comms uncovered in the documents are grotesque.

www.hardresetmedia.com/p/new-court-...
New Court Filings Allege Depraved Internal Communications at Meta and Snapchat
Social media platforms are confronting allegations that they're intentionally addictive and designed to keep teenage users compulsively scrolling.
www.hardresetmedia.com

Very helpful!

"Data are no longer things to be accounted for by a theoretical model...but rather inputs to the process of creating models". Many in LLM-ML don't care about the problems they are actually building models of: "the nature of languages...how we work with language...and the specific contexts [of use]."
What makes something data? Some thoughts on that question, and how answers to it help us understand AI hype:

medium.com/@emilymenonb...
What makes something data?
This is a question I posted on BlueSky on Friday 11/21/25, inspired by a talk I recently attended about evaluation of “AI” systems. I think…
medium.com

I get so frustrated with people who demean expertise while centering the narrative of a corporation.

Reposted by Timnit Gebru

Pro tip: If the person providing a different interpretation is not speaking for Moneyed Interests and is an expert in their field, they’re an “independent scholar”—not just a “skeptic” (ffs!)
Okay, I hesitate to even share this link, because while I like NYMag I do not want to send this journalist in particular clicks, but:

If your framing is that an academic is the "dominant voice" and the underdogs are OpenAI, Anthropic and Google, maybe a fact check is in order??

>>
Is ChatGPT Conscious?
Many users feel they’re talking to a real person. Scientists say it’s time to consider whether they’re onto something.
nymag.com

Congratulations!!

Reposted by Margaret Mitchell

Highly personalised and personable, Advanced AI Assistants may soon become the primary way that most people access the internet.

Our research explores the challenges that could arise if their adoption is not carefully managed in the public interest.

www.adalovelaceinstitute.org/report/dilem...
The dilemmas of delegation
An analysis of policy challenges posed by Advanced AI Assistants and natural-language AI agents.
www.adalovelaceinstitute.org
Nice piece, and consistent with our findings that misinformation exploits outrage to spread online www.science.org/doi/full/10....
New from 404 Media: people are 3D-printing whistles to warn each other about the presence of ICE. Some people make designs and upload them; others are given a design and are printing hundreds and hundreds of whistles at home. It's been effective in Chicago

www.404media.co/the-latest-d...
The Latest Defense Against ICE: 3D-Printed Whistles
Chicagoans are making, sharing, and printing designs for whistles that can warn people when ICE is in the area. The goal is to “prevent as many people from being kidnapped as possible.”
www.404media.co

Also love "behavior hijacking".

That's a good one! Maybe the external-facing version of this is Gen-Juicing.

Oh that's great. I actually find that hard to scroll past, especially on my phone, since it tends to take up a lot of space, and THEN it's similar questions that upon dropdown click ALSO have AI-generated responses, and THEN it's finally links.
Have you proactively disabled some AI things?

Yes 100%!

Reposted by Margaret Mitchell

If you have been framing your work as involving/in relation to "AI", what do you mean by "AI"? How would you describe your work without using that phrase?

>>

Haha. As a vegetarian, I particularly like that one.

There's this form of behavioral engineering/coercion going on to use AI when you're not even trying to. Ex: Enterprise Google Slides replaced "Upload an Image" with "Generate an Image" (right?), requiring extra work +adaptation to sidestep the AI-as-default push. It drives me bonkers.

Love the phrases "Cognitive Cost" and "Executive Function Theft" to pinpoint how exhausting it is to be constantly bombarded with apps telling you to use AI. I've been calling it "Corporate pressure" and "Force feeding". It's quite a lot right now. Seems a bit desperate tbh.
The #ExecutiveFunctionTheft of having to opt out.
Feeling annoyed at the cognitive cost of having to dismiss all the offers of "AI" assistance every time I use Acrobat to provide feedback on documents my students wrote. (NO I do NOT want an "AI" summary of this. In what world???)

Decided to check settings:
