Emily M. Bender
@emilymbender.bsky.social
Reposted by Emily M. Bender
A brilliant question to ask grad students in CS, HCI, and other fields where all the money for their current paychecks and future faculty lines is tied to "AI" right now 👇
If so, why? Are you under pressure to do so? Where does that pressure come from? What would happen if you resist?

>>
November 10, 2025 at 11:46 PM
Reposted by Emily M. Bender
This is a good thread on narrative framing. How we talk about technology matters, particularly because hype is one of Silicon Valley’s most powerful weapons
Just read a talk announcement about "AI" and K-12 education that looked like it maybe, maybe embedded a critical perspective but was still dressed up in the language of AI hype, presupposing both job market & education "reshaped by AI".

🧵>>
November 10, 2025 at 11:57 PM
Some thoughts about the power we have as academics and the impact of our choices around the framing of our work.
Just read a talk announcement about "AI" and K-12 education that looked like it maybe, maybe embedded a critical perspective but was still dressed up in the language of AI hype, presupposing both job market & education "reshaped by AI".

🧵>>
November 10, 2025 at 8:44 PM
Just read a talk announcement about "AI" and K-12 education that looked like it maybe, maybe embedded a critical perspective but was still dressed up in the language of AI hype, presupposing both job market & education "reshaped by AI".

🧵>>
November 10, 2025 at 8:32 PM
Reposted by Emily M. Bender
Love the phrases "Cognitive Cost" and "Executive Function Theft" to pinpoint how exhausting it is to be constantly bombarded with apps telling you to use AI. I've been calling it "Corporate pressure" and "Force feeding". It's quite a lot right now. Seems a bit desperate tbh.
The #ExecutiveFunctionTheft of having to opt out.
Feeling annoyed at the cognitive cost of having to dismiss all the offers of "AI" assistance every time I use Acrobat to provide feedback on documents my students wrote. (NO I do NOT want an "AI" summary of this. In what world???)

Decided to check settings:
November 10, 2025 at 6:53 PM
Reposted by Emily M. Bender
The #ExecutiveFunctionTheft of having to opt out.
Feeling annoyed at the cognitive cost of having to dismiss all the offers of "AI" assistance every time I use Acrobat to provide feedback on documents my students wrote. (NO I do NOT want an "AI" summary of this. In what world???)

Decided to check settings:
November 10, 2025 at 6:39 PM
Feeling annoyed at the cognitive cost of having to dismiss all the offers of "AI" assistance every time I use Acrobat to provide feedback on documents my students wrote. (NO I do NOT want an "AI" summary of this. In what world???)

Decided to check settings:
November 10, 2025 at 6:37 PM
If your post about your research starts with "Breaking:" ... well, I'm surprised you expect other researchers to take you seriously.
November 10, 2025 at 6:32 PM
Reposted by Emily M. Bender
I appreciate the work of these authors to show that this problem not only is still here but has grown:

www.nbcnews.com/tech/tech-ne...

But it is also quite frustrating 🧵>>
AI's capabilities may be exaggerated by flawed tests, according to new study
A study from the Oxford Internet Institute analyzed 445 tests used to evaluate AI models.
www.nbcnews.com
November 9, 2025 at 2:59 PM
Reposted by Emily M. Bender
This NeurIPS workshop claims that LLMs "provide an important foundation for exploring human cognition, emotion, and social interaction"

This is flawed logic, as @lmesseri.bsky.social and I argue here:
www.sciencedirect.com/science/arti...
November 9, 2025 at 3:23 PM
Okay, that was only 4 years ago. Still, one would hope to have been listened to.
Five years ago, at NeurIPS, in Raji et al. 2021 we told them so:

datasets-benchmarks-proceedings.neurips.cc/paper/2021/h...
November 9, 2025 at 3:19 PM
I appreciate the work of these authors to show that this problem not only is still here but has grown:

www.nbcnews.com/tech/tech-ne...

But it is also quite frustrating 🧵>>
AI's capabilities may be exaggerated by flawed tests, according to new study
A study from the Oxford Internet Institute analyzed 445 tests used to evaluate AI models.
www.nbcnews.com
November 9, 2025 at 2:59 PM
Reposted by Emily M. Bender
Beautiful tree on my walk this morning
November 9, 2025 at 1:17 AM
Beautiful tree on my walk this morning
November 9, 2025 at 1:17 AM
Reposted by Emily M. Bender
We have taken products off the market for being far less dangerous. We also have a clear signal that all the money we’re dumping into AI could be better spent addressing a serious mental health crisis.
Panera’s moderately caffeinated lemonade was loosely associated with 2 deaths before it was taken off the market.

This article alone has 4 examples of ChatGPT encouraging young people to commit suicide, and OpenAI’s own public stats estimate over a million users discuss suicide with ChatGPT each week.
I fully believe in the corporate death penalty and believe the world would be better if OpenAI lost its corporate charter and was forcibly dissolved.

www.cnn.com/2025/11/06/u...
November 8, 2025 at 3:37 PM
Reposted by Emily M. Bender
‘The ouroboros of hype’
My review for @eyemagazine.bsky.social
Packed with information, The AI Con pours cold water on ‘AI’ hysteria, while drawing attention to the tech’s environmental and ethical implications.
www.eyemagazine.com/blog/post/th...
@emilymbender.bsky.social
@alexhanna.bsky.social
November 8, 2025 at 7:00 PM
Reposted by Emily M. Bender
"This long-running war on knowledge and expertise has sown the ground for the narratives widely used by AI companies and the CEOs adopting it. Human labor, inquiry, creativity, and expertise is spurned in the name of “efficiency.” With AI, there is no need for human expertise"
AI Is Supercharging the War on Libraries, Education, and Human Knowledge
"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."
www.404media.co
November 7, 2025 at 4:20 PM
Reposted by Emily M. Bender
Dear @rweingarten.bsky.social and @aft.org: Please cancel your partnership with this evil company and stop pushing “A.i.” on teachers and students.
There are no words for how evil this is
November 7, 2025 at 5:56 AM
"This long-running war on knowledge and expertise has sown the ground for the narratives widely used by AI companies and the CEOs adopting it. Human labor, inquiry, creativity, and expertise is spurned in the name of “efficiency.” With AI, there is no need for human expertise"
AI Is Supercharging the War on Libraries, Education, and Human Knowledge
"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."
www.404media.co
November 7, 2025 at 4:20 PM
Reposted by Emily M. Bender
‘The ouroboros of hype’

Packed with information, The AI Con pours cold water on ‘AI’ hysteria, while drawing attention to the tech’s environmental and ethical implications.

Review by @johnlwalters.bsky.social

@emilymbender.bsky.social
@alexhanna.bsky.social
www.eyemagazine.com/blog/post/th...
November 7, 2025 at 3:15 PM
Reposted by Emily M. Bender
Holy shit. Noam Shazeer, one of the original authors on the "Attention is All You Need" paper and Character.AI founder, came out as a major transphobe. Like Trumpian levels of "this is child mutilation" transphobia.

(via The Information)
November 7, 2025 at 2:17 PM
Reposted by Emily M. Bender
me at the end of class: here's a little speculative exercise; imagine you wake up from cryosleep in 2085. what's the kind of tech-society relationship you'd like to see around you?

students: no AI

I honestly think students' views are missing from the 'should AI be integrated in classrooms' discussion
November 6, 2025 at 6:32 AM
Reposted by Emily M. Bender
okay, now let's hear from people who don't have millions of dollars in tech company stocks still vesting
November 6, 2025 at 5:27 PM
Reposted by Emily M. Bender
Make your 2025 words-of-the-year nominations for the only vote that matters! bit.ly/2025WOTYNOMS
November 6, 2025 at 2:20 PM