Evan Selinger
@evanselinger.bsky.social

Prof. Philosophy at RIT. Contributing writer at Boston Globe Ideas. Tech (yes, AI), ethics, privacy, policy, and power. http://www.eselinger.org/

Pinned
Despite extensive criticism, the much-maligned concepts of “techno-fix” + “techno-solutionism” remain poorly defined and readily conflated. In this open-access article, @spillteori.bsky.social and I do a bit of conceptual engineering to add further clarity.

link.springer.com/article/10.1...
Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism - Science and Engineering Ethics
Can technology resolve social problems by reducing them to engineering challenges? In the 1960s, Alvin Weinberg answered yes, popularizing the term “techno-fix” in the process. The concept was immedia...
link.springer.com

For a long time, I’ve been trying to put my finger on what AI fundamentally can’t offer, even when it generates compelling writing. The answer is “existential solidarity”: the comfort of hearing from others who can speak to struggles we identify with and care about. blog.apaonline.org/2025/11/06/s...
Seeking Existential Solidarity in the Age of AI | Blog of the APA
To say the least, it’s not a great time to be a writer. Historian and philosopher Yuval Noah Harari claims AI is already a “better storyteller” than we are. This ability is troubling, he insists, beca...
blog.apaonline.org

Reposted by Evan Selinger

British friends, excited that UK preorders for my 1st book are live! @evanselinger.bsky.social and I explain why the DOGE (move fast & break things) approach is broken and how we can fix huge problems by moving slow and upgrading. You can get 25% off until Friday! www.waterstones.com/book/move-sl...
Thrilled that “Move Slow and Upgrade,” my new Cambridge University Press book written with Albert Fox Cahn, is available for pre-order this week at 25% off.

www.waterstones.com/campaign/oct...

Reposted by Evan Selinger

I have a new article out: "Do artifacts have political economy?" It's a riff on an old argument by Langdon Winner about the embedding of politics in technology.

#STS #sociology #technoscience #technology #innovation

journals.sagepub.com/doi/10.1177/...
Do Artifacts Have Political Economy? - Kean Birch, 2025
Harking back to Langdon Winner's now classic essay “Do artifacts have politics?,” my aim in this article is to ask a very similar question—namely, do artif...
journals.sagepub.com
It's out! You can now access The Cambridge Handbook of the Law, Ethics and Policy of #AI: www.cambridge.org/core/books/t...

20 #openaccess chapters covering AI, ethics, philosophy, legal domains, and sectoral applications.

Huge thanks to all the authors who made this possible!

Mary! It’s been too long.

I’ll put a version online next week and send you the URL.

Thomas Carroll and I put our heads together to articulate the main ethical concerns with using AI to address the empathy crisis in medicine. “The Ethics of Empathetic AI in Medicine” is now out in IEEE Transactions on Technology and Society.

ieeexplore.ieee.org/document/110...
The Ethics of Empathetic AI in Medicine
The expression of empathy is an important part of effective and humane medical care. Modern medicine faces a significant challenge in this area, at least in part due to the ever-increasing demands on ...
ieeexplore.ieee.org

Wrote about why Robin Williams’s wisdom from Good Will Hunting is worth revisiting in the AI age: books and bots offer words that move us, but they’re not offering caring relationships.

www.bostonglobe.com/2025/07/07/o...
A chatbot can never truly be your friend - The Boston Globe
AI relationships may be useful and even enjoyable. But only a fellow human can offer the depth of understanding that real closeness comes from.
www.bostonglobe.com

Not much on social media these days. But if anyone is interested in why I think the entire paradigm of human-like AI is wrong, here’s a short post at the APA Public Philosophy blog. They leaned into the “shit on a stick” story for the cover art. 😆

blog.apaonline.org/2025/07/01/t...
The Precautionary Approach to AI: Less Human, More Honest
Have you ever caught yourself thanking Siri or saying please to ChatGPT? If so, you’re not alone. Evolutionary forces, social norms, and design features all make us naturally inclined to treat these t...
blog.apaonline.org

Reposted by Evan Selinger

"The profit-driven nature of life outside the classroom makes students wonder why we even bother discussing the ethics of technology in class." @evanselinger.bsky.social reads Darryl Campbell's "Fatal Abstraction" with the realities his students face in mind. lareviewofbooks.org/article/what...
What Can Enlightened Coders Really Do? | Los Angeles Review of Books
Evan Selinger reads Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software” with the realities his students face in mind.
lareviewofbooks.org

Can enlightened altruistic coders save us from the oppressive tyranny of corporate managerialism? Alas, I don’t think so. My argument in a review essay of Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software.” lareviewofbooks.org/article/what...
🗓️ Nice, March 13 & 14.
To all the passionate and the curious, the experienced and the novices: an international conference on facial recognition and surveillance technologies. 12 countries represented, 30 experts, and fascinating discussions ahead 👇
droit.univ-cotedazur.fr/law-enforcem...

Wish I could! But it’s only in-person. There isn’t a link.

The attack on universities mirrors our blind spot with supply chains. Because both operate invisibly, misconceptions abound. The profound contributions of universities often go unnoticed—and, just like supply chains, we risk only recognizing their value when they're diminished or fail.

The main thesis is that the hermeneutic circle (no, they don't use this phrase!) haunts AI consciousness claims. Our theories of mind are built on pre-theoretical experience of consciousness. And yet companies insist they can replicate what they can't even define independently of that experience.
Some AI enthusiasts fantasize about chatbots' potential future suffering. But David McNeill and Emily Tucker say there are many good reasons to reject the claim that contemporary AI research is on its way toward creating genuinely intelligent, much less conscious, machines.
Suffering is Real. AI Consciousness is Not. | TechPolicy.Press
Probabilistic generalizations based on internet content are not steps toward algorithmic moral personhood, write David McNeill and Emily Tucker.
buff.ly

Congrats!

Reposted by Evan Selinger

Important warnings from @gaiabernstein.bsky.social about the threat of AI companions to kids: keenon.substack.com/p/episode-22...
Episode 2241: Gaia Bernstein on the Threat of AI Companions to Children
AI Companions as the New Frontier on Kids' Screen Addiction
keenon.substack.com

Reposted by Evan Selinger

Last opportunity to register for Seton Hall Law School's AI Companions online symposium tomorrow, Tuesday, Feb. 18, 12 pm-2:30 pm EST. You can register here: bit.ly/40Ztl2j

Growing up in the 80s makes me a sucker for underdog stories. I loved reliving Karate Kid vibes with Cobra Kai!

Question—

Does celebrating beating the odds risk minimizing how stacked the deck is?

Or is that view overblown b/c life poses many challenges, and we need many inspirational stories?

Reposted by Evan Selinger

“Brain capacity is also being squeezed. Our mental lives are more fragmented and scattered than ever before”
www.ft.com/content/c288...
The human mind is in a recession
Technology strains our brain health, capacity and skills
www.ft.com

Narrating counterfactuals is necessary to make the invisible legible. Tragically, though, I suspect many will find such stories too abstract and hypothetical to resonate. When people are hurting, it's hard to point out that things could have been worse, and much is taken for granted.
Government’s wins are often invisible: Systems that avoid plane crashes; alliances that avert war; surveillance that prevents pandemics.

Government wins are often *the avoidance of loss.*

So how do we tell the story of the destruction of government? The story of future losses *not* averted?

It’s hard for some to appreciate this because, tragically, they only associate governance with one thing: a scolding headshake.

“The second fallacy we’ve heard is that AI requires a tradeoff – between safety and progress, between competition and collaboration, and between rights and innovation.”
At the Paris AI Action Summit, Dr. Alondra Nelson was an invited speaker at a private dinner at the Elysée Palace hosted by French President Emmanuel Macron. Here are her remarks on “three fundamental misconceptions in the way we think about artificial intelligence.”
Three Fallacies: Alondra Nelson's Remarks at the Elysée Palace on the Occasion of the AI Action Summit | TechPolicy.Press
Dr. Nelson was an invited speaker at a dinner hosted by French President Emmanuel Macron at the Palais de l'Élysée on February 10, 2025.
www.techpolicy.press

Can you explain the ketamine part to me? When I was a grad student a billion years ago, I made a couple of trips with some Danish friends to visit Manuel DeLanda. Not sure if you’ve heard of him, but DeLanda was a self-taught Deleuze scholar who wrote a bunch of weird and interesting books. They revolved around the idea—which he got from Deleuze—that you could explain all kinds of social phenomena through the lens of structures like soap bubble formation. I never really understood it. But he was clear to us that experiences with ketamine helped unlock his key insights. And if you disagreed with his take, his response was basically, “Well, I guess you don’t get math.”

Ha! But is that idea—that everything can be explained with the right mathematical take—the same perspective being advocated for here and also explicitly linked to ketamine epiphanies?