Anissa Wren
anissawren.bsky.social
software engineer | tinkerer | she/her | views are my own
👩🏻‍🦽🏳️‍🌈😷
Reposted by Anissa Wren
I don’t know who is ready to hear this but people don’t have to add value

Like people should also be allowed to exist for free
May 27, 2025 at 3:04 AM
Reposted by Anissa Wren
This is the lighter flavor of how existential-risk AI operates as AI hype. Espionage and hacking campaigns that have the feel of being "autonomous" and yet are executed by a state actor are one step removed from "the system ran away from us and now we're all fucked."
November 14, 2025 at 3:47 PM
Reposted by Anissa Wren
We have upgraded our geomagnetic forecast today (12 November 2025) to the highest intensity level amid an ongoing solar storm.

Current predictions suggest that the activity will result in potentially the largest solar storm to hit our planet in over two decades.
November 12, 2025 at 2:55 PM
I'm giving a talk today called "A brief history of artificial intelligence and what it can teach us"

You can find the slides and supplemental reading suggestions at github.com/anissa111/history-of-ai
November 7, 2025 at 7:31 PM
I got tins of butter cookies from Costco without looking too closely and the pictures are ai and ugh I'm so mad.
I wanted to keep the tins for craft supplies, but I'll need to do something about the lids because I can't not see how ai they are
November 4, 2025 at 9:53 PM
Reposted by Anissa Wren
your timely reminder that AGI is neither clearly defined/described nor amenable to scientific or engineering principles... it remains an ideology that is rooted in white supremacy, racism and eugenics
October 24, 2025 at 8:18 PM
Reposted by Anissa Wren
“People [in tech jobs] worry that not being seen as uncritical AI cheerleaders will be a career-limiting move…for those who aren't insiders in the tech industry, it's vital that you understand that you've been presented with an extremely distorted view about what tech workers really think about AI”
Okay, for the folks who asked: here's the majority AI view, writing up the reasonable, thoughtful view on AI that the vast majority of people in tech hold, that gets overshadowed by the bluster and hype of the tycoons trying to shill their nonsense. anildash.com/2025/10/17/t... Please share!
The Majority AI View - Anil Dash
A blog about making culture. Since 1999.
anildash.com
October 18, 2025 at 3:59 PM
Reposted by Anissa Wren
I really think somebody needs to sit the tech bros down and explain to them that they talked to the computer in Star Trek because that made for better tv than typing, not because it's actually a good UI strategy
I don't want to talk to my laptop! I like using a mouse and keypad shortcuts. I don't want anything multi-modal or AI. I want something that's basically a fancy typewriter that also lets me watch movies and play The Sims. I wish Windows would leave me the fuck alone with all this extra shit.
August 16, 2025 at 11:12 PM
Reposted by Anissa Wren
Responsible AI is also becoming a lucrative industry. The Big 5 consultancies are now offering (and monetising) Ethical AI audits, alongside a cottage industry of start-ups. Responsible AI has produced a slew of frameworks and checklists, many of which have never been meaningfully tested.
June 11, 2025 at 7:00 PM
Reposted by Anissa Wren
This story unfolded alongside a growing trend: “Responsible AI”, a constellation of think tanks, academics, non-profits, and multinational institutions purporting to make algorithmic systems fair, accountable and transparent.
June 11, 2025 at 7:00 PM
Reposted by Anissa Wren
Imagine studying a technology whose presence in the classroom is so detrimental to the development of writing and research skills (including even the will to know the sources behind claims!) that mitigating its effects becomes a central goal of course design, and concluding with tips on adopting it.
October 9, 2025 at 11:53 AM
Reposted by Anissa Wren
A striking thing about articles I’ve read claiming to “study the effects” of generative AI on student writing skills and consumption of information is that (1) they nearly always find the effects are negative and (2) most “conclusions” are still written assuming that we must use AI, for some reason.
October 9, 2025 at 11:49 AM
Reposted by Anissa Wren
Anthropomorphizing a computer demonstrates how little you care for your own and other people’s humanity. You have taken a thing and put it ahead of dehumanized peoples.
Define alive.
October 5, 2025 at 7:36 PM
Reposted by Anissa Wren
The current paradigm of "AI" encodes & recapitulates eugenicist, fascist, and generally bigoted tendencies— but previous paradigms did, too, & if these facts had been addressed, then, in the culture of technology specifically and our sociotechnical culture writ large, it might not still be like that
October 8, 2025 at 1:22 AM
This is not a fact! If you click through the link, it's just Microsoft's CTO saying so! It's a quote!

How do you get from dude who is a CTO of a company heavily invested in AI says AI is going to be everywhere to "in fact, [what that dude said]"

github.blog/ai-and-ml/th...
October 7, 2025 at 6:54 PM
Reposted by Anissa Wren
Our collective belief in our own inadequacy is required to sustain the AI project—a metaeugenic worship of intelligence and a belief that most people do not possess enough of it.
September 16, 2025 at 4:43 PM
Reposted by Anissa Wren
The desire for AI has been expressed as “taking away the tasks we hate.” But who designates contemptible tasks? And what does our contempt for this work say about how we regard the people who do it?
September 16, 2025 at 4:43 PM
Reposted by Anissa Wren
Disability accommodation represents a failure to plan and build accessible infrastructure —work structures, public spaces, events.

We force people through appointments, paperwork, meetings with their boss and HR… because of structural failures. The work of being included too often falls to individual disabled people.
September 25, 2025 at 5:01 PM
I feel like I'm not living in the same reality as a lot of folks who are excited about the use of generative AI in earth systems science, particularly regarding its use as a public interface.

Using LLMs to do emergency weather communications WILL kill people.
September 16, 2025 at 4:48 PM
Possibly a niche concern here, but this is a massive reason I'm very against integrating LLM "chats" into climate model/software documentation
This piece also touches on a really crucial aspect of the AI boom across industries: it's going to encourage people to distrust others even more than they already do.
September 10, 2025 at 4:29 PM
Reposted by Anissa Wren
"To meet that need, AI companies ingested digital copies of practically every published work on the planet, without getting the permission of the copyright holders. That was probably the only practical option they had."

"i stole it because it was my only option" is not an acceptable argument
September 8, 2025 at 4:23 PM
Reposted by Anissa Wren
I've spent my entire life being told that everything I do is useless, mocked and being told no one should be paid for "underwater basket weaving," and now they're like, "I must steal all the underwater basket weaving lore because it's imperative for my survival"

Eat farts, my dudes.
September 8, 2025 at 4:25 PM
Reposted by Anissa Wren
I saw a thing the other day that basically said “no matter what prompt you give it, the question an LLM is answering is ‘what would a plausible response to this prompt look like?’”
September 9, 2025 at 1:54 AM
Reposted by Anissa Wren
LLM responses which do not reflect consensus reality & facts are produced via the Same Process which generate responses which *Do* broadly conform to consensus reality & facts.

The Same Processes.

The Same Ones.

"Hallucination" is just a word to distance yourself from the outputs you don't like.
How many times, in how many contexts, from how many internal and external researchers, or from how many CEOs are people going to have to receive this message before they believe it:

"Hallucinations" are an inherent part of the large language model architecture.
www.nature.com/articles/d41...
Can researchers stop AI making up citations?
OpenAI’s GPT-5 hallucinates less than previous models do, but cutting hallucination completely might prove impossible.
www.nature.com
September 8, 2025 at 6:20 PM