Megan McIntyre
@rcmeg.bsky.social
Director, Program in Rhet/Comp, U of Arkansas
English prof
Writing about #WPALife and writing pedagogy
Loves dogs

Before: Sonoma State English & Dartmouth Institute for Writing & Rhetoric
(views only ever mine, obv)
she/her
Pinned
I'm genuinely begging folks to read both @timnitgebru.bsky.social and Torres' "TESCREAL Bundle" and @adambecker.bsky.social's _More Everything Forever_ and connect these billionaires' eugenicist dreams to the GenAI products they're pushing onto every educational institution, K-college.
Reposted by Megan McIntyre
Just how bad is Grok's nonconsensual porn problem? Musk's AI chatbot is generating another sexualized image of someone without their permission every single minute:
Grok Is Generating About 'One Nonconsensual Sexualized Image Per Minute'
Elon Musk’s chatbot Grok keeps churning out nonconsensual images of women and minors in bikinis and lingerie, outraging users and regulators
www.rollingstone.com
January 6, 2026 at 10:43 PM
Reposted by Megan McIntyre
"We're not pressuring you to integrate AI into your courses. We're just inviting you to play with it!"
This is an intentional rhetorical move designed to obscure the very real harms of LLMs. You're not consuming water or exploiting workers and artists; you're just playing!
January 6, 2026 at 8:41 PM
Reposted by Megan McIntyre
Teach the ways AI and its political economy are an extension of a longer history of environmental racism.
NEW: After a White Town Rejected a Data Center, Developers Targeted a Black Area

Four million Americans live within 1 mile of a data center. The communities closest to them are “overwhelmingly” non-white.

capitalbnews.org/data-center-...
capitalbnews.org
January 6, 2026 at 7:42 PM
Reposted by Megan McIntyre
I may have, uh, suggested that this should…have some discussion with parents and guardians. I truly don’t want the kid in question to start thinking in ai prompts when their brain IS STILL LEARNING HOW TO HUMAN. This is easily as important to me as permission for sex ed classes.
January 6, 2026 at 9:47 PM
Reposted by Megan McIntyre
This is a catastrophe.

1. Devices are as important to regulate as pharmaceuticals.

2. The market is notoriously bad at determining safety, accuracy, and which technologies are in the public interest.

3. Even when the market does tell us that a tech is bad, it does so *after it has caused harm*.
“FDA Commissioner Marty Makary indicated that one of the agency’s priorities is fostering an environment that’s good for investors… The new approach appears to open the door to the unregulated use of generative AI products for certain medical tasks, such as summarizing a radiologist’s findings.”
FDA announces sweeping changes to oversight of wearables, AI-enabled devices
The FDA will ease regulation of digital health products, aiming to deregulate AI and promote its widespread use.
www.statnews.com
January 6, 2026 at 9:48 PM
Reposted by Megan McIntyre
Feel like it shld be front page, 34-sized font headline news that the person who gave ~$250 million to Trump campaign and then was co-president for a few months, dismantling agencies and stealing Americans' data, built an AI tool that is being used for child porn.

This Sept. piece is instructive
January 5, 2026 at 7:23 PM
Reposted by Megan McIntyre
“ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, ‘Hell yes—let’s go full trippy mode’”
www.sfgate.com/tech/article...
A Calif. teen trusted ChatGPT for drug advice. He died from an overdose.
"Who on earth gives that advice?"
www.sfgate.com
January 5, 2026 at 6:28 PM
Reposted by Megan McIntyre
If you have to train students to recognize when a *university provided* resource is lying to them, maybe the university should not provide that resource.
December 12, 2025 at 10:54 AM
Reposted by Megan McIntyre
A reminder that the US killed 80 people in Venezuela, and it would be nice if the US media cared enough to think that the life of a grandmother in Caracas whose building is destroyed by a US bomb matters as much as the life of a person in the US.
January 5, 2026 at 5:54 PM
Reposted by Megan McIntyre
Gee, seems like universities should not be foisting this flawed product on their students, right?
January 4, 2026 at 7:40 PM
Reposted by Megan McIntyre
For all that the media has spent years freaking out about what social media does to our brains, I don’t understand why there appears to be little effort to grapple with the fact that these chatbots are, for some not insignificant portion of the population, literally psychosis machines
January 4, 2026 at 1:32 PM
Reposted by Megan McIntyre
OpenAI is now facing a total of 8 wrongful death lawsuits from grieving families, who claim that ChatGPT, in particular the GPT-4o version, drove their loved ones to suicide. Soelberg’s complaint also alleges that company executives knew the chatbot was defective before they pushed it to the public.
January 4, 2026 at 2:06 PM
Reposted by Megan McIntyre
“To some extent, it moves stalking and harassment more easily from online to the real world, which is always the problem with wearables…”
Meta smart glasses pose a threat to women, campaigners say
Reports of covert filming have prompted privacy fears as experts are concerned that the devices would be altered to ‘nudify’ women without their consent
www.thetimes.com
January 2, 2026 at 11:07 PM
Reposted by Megan McIntyre
Parker's piece nails why this matters. "Tech companies... would rather not answer for their products’ failures. Every headline that says “Grok apologizes” or “Grok admits” or “Grok says” creates a world where the chatbot takes the fall while Musk and his executives face no scrutiny whatsoever."
January 2, 2026 at 7:32 PM
Reposted by Megan McIntyre
I've been tracking the spread of nonconsensual deepfakes on X for more than two years. Here's a timeline of how Musk's leadership allowed the practice to flourish from a once-underground market to a viral trend, with little recourse for victims or legal enforcement.
spitfirenews.com/p/grok-csam-...
How Grok's sexual abuse hit a tipping point
Nonconsensual deepfakes on X are nothing new, but now it's built into the platform.
spitfirenews.com
January 2, 2026 at 7:47 PM
Reposted by Megan McIntyre
This is a thread of major media outlets falsely anthropomorphising the "Grok" chatbot program and in doing so, actively and directly removing responsibility and accountability from individual people working at X who created a child pornography generator (Elon Musk, Nikita Bier etc)

#1: Reuters
January 2, 2026 at 8:02 PM
Reposted by Megan McIntyre
just so it's clear: this is the exact core function of genAI

Fascist government and climate deniers love it because it can produce the aesthetics of knowledge without any actual tendency towards truth. It is automated denialism.

Shame on every climate scientist promoting its use (there are lots)
Bob and I were among the hundreds of researchers that were supposed to conduct the 6th U.S. National Climate Assessment. Now it looks like they're gonna produce it with a few people and Grok? Communities need rigorous and accurate information about climate change. This will put communities at risk.
January 1, 2026 at 7:11 AM
Reposted by Megan McIntyre
Stickers! I'll be bringing these to the #MLA26 panel @caramartamessina.com & I put together on Making Space for GenAI Refusal.

Credit to @betterimagesofai.bsky.social (Jamillah Knowles & Digit / Clarote & AI4Media) for the two images, to Bonnie Lenore Kyburz for inspiring mansplAIning, +
December 29, 2025 at 10:28 PM
Reposted by Megan McIntyre
I don't know how many times it needs to be said that you absolutely should not let fucking "AI" anywhere near your legal contracts, either in creating them or evaluating them, but apparently it needs to be said at least one more time, so allow me to say it again, here, right now
December 28, 2025 at 5:27 PM
Reposted by Megan McIntyre
By any reasonable historical standard — including that of the technology industry! — ChatGPT should be pulled from the market and its product managers and executives held accountable for creating a product that ROUTINELY tells teens to kill themselves. This is a basic, common sense standard.
December 28, 2025 at 6:21 PM
Reposted by Megan McIntyre
Thanks for sharing this document, @rweingarten.bsky.social.

I read it, and I believe it's inexcusably inadequate and will not prepare teachers or schools to confront the very real dangers of the "A.i." products pushed by your partners, including OpenAI.

A few thoughts for your consideration...
Read about Commonsense Guardrails for Using Advanced Technology in Schools aiinstruction.org/sites/defaul...
aiinstruction.org
December 24, 2025 at 3:43 AM
Reposted by Megan McIntyre
Wildly different things, tasks, techniques, and subspecialties are being lumped into "AI" and then conflated with each other, which doesn't help. Different types of models vs the techniques to train them vs the tasks they are supposed to accomplish, all under "AI".
December 22, 2025 at 5:21 PM
Reposted by Megan McIntyre
New publication out: "Weathering the Rhetorical Climates of AI." I'm really excited that it's open access!
publicationsncte.org/content/jour...
December 20, 2025 at 1:14 PM
Reposted by Megan McIntyre
Academics are making the decision to shove unpublished original research that isn’t even theirs into the plagiarism machine. I can’t.
I find this very alarming. AI is being used in explicitly prohibited ways. No doubt this will soon be the norm. The consequences of this will be dramatic. Over the holidays, the publishers and the whole of academic community are doing nothing else but working on a solution to this, right? RIGHT?!
More than half of researchers now use AI for peer review — often against guidance
A survey of 1,600 academics found that more than 50% have used artificial-intelligence tools while peer reviewing manuscripts.
www.nature.com
December 18, 2025 at 10:27 PM