Evan Selinger
@evanselinger.bsky.social
Prof. Philosophy at RIT. Contributing writer at Boston Globe Ideas. Tech (yes, AI), ethics, privacy, policy, and power. http://www.eselinger.org/
Pinned
Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism - Science and Engineering Ethics
Can technology resolve social problems by reducing them to engineering challenges? In the 1960s, Alvin Weinberg answered yes, popularizing the term “techno-fix” in the process. The concept was immedia...
link.springer.com
Despite extensive criticism, the much-maligned concepts of “techno-fix” + “techno-solutionism” remain poorly defined and readily conflated. In this open-access article, @spillteori.bsky.social and I do a bit of conceptual engineering to add further clarity.
link.springer.com/article/10.1...
For a long time, I’ve been trying to put my finger on what AI fundamentally can’t offer, even when it generates compelling writing. The answer is “existential solidarity”: the comfort of hearing from others who can speak to struggles we identify with and care about. blog.apaonline.org/2025/11/06/s...
Seeking Existential Solidarity in the Age of AI | Blog of the APA
To say the least, it’s not a great time to be a writer. Historian and philosopher Yuval Noah Harari claims AI is already a “better storyteller” than we are. This ability is troubling, he insists, beca...
blog.apaonline.org
November 6, 2025 at 6:24 PM
Reposted by Evan Selinger
British friends, excited that UK preorders for my 1st book are live! @evanselinger.bsky.social and I explain why the DOGE (move fast & break things) approach is broken and how we can fix huge problems by moving slow and upgrading. You can get 25% off until Friday! www.waterstones.com/book/move-sl...
October 14, 2025 at 1:40 PM
Thrilled that “Move Slow and Upgrade,” my new Cambridge University Press book written with Albert Fox Cahn, is available for pre-order this week at 25% off.
www.waterstones.com/campaign/oct...
October 13, 2025 at 2:07 PM
Reposted by Evan Selinger
I have a new article out: "Do artifacts have political economy?" It's a riff on an old argument by Langdon Winner about the embedding of politics in technology
#STS #sociology #technoscience #technology #innovation
journals.sagepub.com/doi/10.1177/...
Do Artifacts Have Political Economy? - Kean Birch, 2025
Harking back to Langdon Winner's now classic essay “Do artifacts have politics?,” my aim in this article is to ask a very similar question—namely, do artif...
journals.sagepub.com
August 5, 2025 at 1:15 PM
Thomas Carroll and I put our heads together to articulate the main ethical concerns with using AI to address the empathy crisis in medicine. “The Ethics of Empathetic AI in Medicine” is now out in IEEE Transactions on Technology and Society.
ieeexplore.ieee.org/document/110...
The Ethics of Empathetic AI in Medicine
The expression of empathy is an important part of effective and humane medical care. Modern medicine faces a significant challenge in this area, at least in part due to the ever-increasing demands on ...
ieeexplore.ieee.org
July 20, 2025 at 2:13 PM
Wrote about why Robin Williams’s wisdom from Good Will Hunting is worth revisiting in the AI age: books and bots offer words that move us, but they’re not offering caring relationships.
www.bostonglobe.com/2025/07/07/o...
A chatbot can never truly be your friend - The Boston Globe
AI relationships may be useful and even enjoyable. But only a fellow human can offer the depth of understanding that real closeness comes from.
www.bostonglobe.com
July 7, 2025 at 3:42 PM
Not much on social media these days. But if anyone is interested in why I think the entire paradigm of human-like AI is wrong, here’s a short post at the APA Public Philosophy blog. They leaned into the “shit on a stick” story for the cover art. 😆
blog.apaonline.org/2025/07/01/t...
The Precautionary Approach to AI: Less Human, More Honest
Have you ever caught yourself thanking Siri or saying please to ChatGPT? If so, you’re not alone. Evolutionary forces, social norms, and design features all make us naturally inclined to treat these t...
blog.apaonline.org
July 1, 2025 at 5:22 PM
Reposted by Evan Selinger
"The profit-driven nature of life outside the classroom makes students wonder why we even bother discussing the ethics of technology in class." @evanselinger.bsky.social reads Darryl Campbell's "Fatal Abstraction" with the realities his students face in mind. lareviewofbooks.org/article/what...
What Can Enlightened Coders Really Do? | Los Angeles Review of Books
Evan Selinger reads Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software” with the realities his students face in mind.
lareviewofbooks.org
April 10, 2025 at 6:23 PM
Can enlightened, altruistic coders save us from the oppressive tyranny of corporate managerialism? Alas, I don’t think so. That’s my argument in a review essay of Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software.” lareviewofbooks.org/article/what...
What Can Enlightened Coders Really Do? | Los Angeles Review of Books
Evan Selinger reads Darryl Campbell’s “Fatal Abstraction: Why the Managerial Class Loses Control of Software” with the realities his students face in mind.
lareviewofbooks.org
April 10, 2025 at 6:19 PM
Reposted by Evan Selinger
What a great event on law enforcement use of #FRT with fantastic presentations & provocative keynotes by @petefussey.bsky.social @hartzog.bsky.social @evanselinger.bsky.social & closing words of Karen Yeung! Many thanks to wonderful @clequesne.bsky.social for the impeccable organisation! #AI
🗓️ Nice, March 13 & 14.
For enthusiasts and the curious, experts and novices alike: an international conference on facial recognition and surveillance technologies. 12 countries represented, 30 experts, and fascinating exchanges ahead👇
droit.univ-cotedazur.fr/law-enforcem...
March 15, 2025 at 9:36 AM
The attack on universities mirrors our blind spot with supply chains. Because both operate invisibly, misconceptions abound. The profound contributions of universities often go unnoticed—and, just like supply chains, we risk only recognizing their value when they're diminished or fail.
February 25, 2025 at 7:48 PM
The main thesis is that the hermeneutic circle (no, they don't use this phrase!) haunts AI consciousness claims. Our theories of mind are built on pre-theoretical experience of consciousness. And yet companies insist they can replicate what they can't even define independently of that experience.
Some AI enthusiasts fantasize about chatbots' potential future suffering. But David McNeill and Emily Tucker say there are many good reasons to reject the claim that contemporary AI research is on its way toward creating genuinely intelligent, much less conscious, machines.
Suffering is Real. AI Consciousness is Not. | TechPolicy.Press
Probabilistic generalizations based on internet content are not steps toward algorithmic moral personhood, write David McNeill and Emily Tucker.
buff.ly
February 19, 2025 at 11:55 PM
Reposted by Evan Selinger
Please tune in to my conversation with Andrew Keen on the risks of AI companions.
keenon.substack.com/p/episode-22... important warnings by @gaiabernstein.bsky.social about threat of AI companions to kids
Episode 2241: Gaia Bernstein on the Threat of AI Companions to Children
AI Companions as the New Frontier on Kids' Screen Addiction
keenon.substack.com
February 18, 2025 at 9:26 PM
Reposted by Evan Selinger
Last opportunity to register for Seton Hall Law School's AI Companions online symposium tomorrow, Tuesday, Feb. 18, 12 pm–2:30 pm EST. You can register here: bit.ly/40Ztl2j
February 17, 2025 at 11:16 PM
Growing up in the 80s makes me a sucker for underdog stories. I loved reliving Karate Kid vibes with Cobra Kai!
Question—
Does celebrating beating the odds risk minimizing how stacked the deck is?
Or is that view overblown b/c life poses many challenges, and we need many inspirational stories?
February 17, 2025 at 10:18 PM
Reposted by Evan Selinger
“Brain capacity is also being squeezed. Our mental lives are more fragmented and scattered than ever before”
www.ft.com/content/c288...
www.ft.com/content/c288...
The human mind is in a recession
Technology strains our brain health, capacity and skills
www.ft.com
February 17, 2025 at 2:43 PM
Reposted by Evan Selinger
It's out! You can now access The Cambridge Handbook of the Law, Ethics and Policy of #AI: www.cambridge.org/core/books/t...
20 #openaccess chapters covering topics on AI, ethics, philosophy, legal domains and sectoral applications.
Huge thanks to all the authors who made this possible!
February 17, 2025 at 9:01 AM
Narrating counterfactuals is necessary to make the invisible legible. Tragically, though, I suspect many will find such stories too abstract and hypothetical to resonate. When people are hurting, it's hard to point out that things could have been worse, and much is taken for granted.
Government’s wins are often invisible: systems that avoid plane crashes; alliances that avert war; surveillance that prevents pandemics.
Government wins are often *the avoidance of loss.*
So how do we tell the story of the destruction of government? The story of future losses *not* averted?
February 16, 2025 at 4:31 PM
It’s hard for some to appreciate this because, tragically, they only associate governance with one thing: a scolding headshake.
“The second fallacy we’ve heard is that AI requires a tradeoff – between safety and progress, between competition and collaboration, and between rights and innovation.”
At the Paris AI Action Summit, Dr. Alondra Nelson was an invited speaker at a private dinner at the Elysée Palace hosted by French President Emmanuel Macron. Here are her remarks on “three fundamental misconceptions in the way we think about artificial intelligence.”
Three Fallacies: Alondra Nelson's Remarks at the Elysée Palace on the Occasion of the AI Action Summit | TechPolicy.Press
Dr. Nelson was an invited speaker at a dinner hosted by French President Emmanuel Macron at the Palais de l'Élysée on February 10, 2025.
www.techpolicy.press
February 15, 2025 at 9:14 PM
Reposted by Evan Selinger
The list of the top 100 legal scholars of 2024 is in, and a woman made number 1. So happy to appear alongside @ariezra.bsky.social @klonick.bsky.social @hartzog.bsky.social @lsolum.bsky.social @micahschwartzman.bsky.social @richschragger.bsky.social @meganstevenson.bsky.social @uvalaw.bsky.social Whoos!
February 15, 2025 at 3:04 PM
Reposted by Evan Selinger
Big news: From February 20 to 28, I’ll be on a North American tour with talks in Toronto, Yale, and Harvard about my latest book, Waiting for Robots. Register here to join 👉
www.casilli.fr/2025/02/10/u...
Upcoming North American Book Tour: Four Talks on My New Book Waiting for Robots | Antonio A. Casilli
I'm excited to announce my upcoming North American book tour for Waiting for Robots: The Hidden Hands of Automation. Throughout February 2025, I’ll be giving talks between Toronto and Boston, discussi...
www.casilli.fr
February 11, 2025 at 11:30 AM
Lack of trust in universities has opened the door to the current political attempts to control research.
To add to @ibogost.bsky.social’s brilliant insights, for too long too many scientists & engineers acted as if only folks in the humanities had something to prove to the public.
February 10, 2025 at 11:08 PM
Companies, organizations, and politicians rushing to AI and blaming "human inefficiency" often ignore an inconvenient truth. Poor working conditions limit human potential. In so many situations, the solution to better work is investing in people, not replacing them.
February 10, 2025 at 9:14 PM
Reposted by Evan Selinger
Yes, sadly they do. It’s been built and enabled over decades: techno-social engineering of humans and society at scale, as described at length in the 2018 book Re-Engineering Humanity. @evanselinger.bsky.social
Tech billionaires have an unprecedented ability—and incentive—to manipulate the public's perception of reality, Adam Serwer argues.
Americans Are Trapped in an Algorithmic Cage
The private companies in control of social-media networks possess an unprecedented ability to manipulate and control the populace.
www.theatlantic.com
February 9, 2025 at 4:31 PM