Adam Pontius
@ampontius.bsky.social
Ph.D. Candidate @ceu IR, pol theory, and history. Research consultant, recovering campaign hack, and hockey fan.

Vienna/Belfast UK depending on the season.

https://dsps.ceu.edu/people/adam-pontius
Where is the environmental pressure to adapt further than it already has?

Might it be that the LLM is the end result of the ‘carcinisation’ of AI?
August 29, 2025 at 11:48 PM
This is my first exposure to Slow Shakespeare, what a cool project!
August 28, 2025 at 8:49 AM
17. But our refusal to engage isn’t going to make that better; it just ensures that more people will suffer.

LLMs are here and part of our social reality whether we like it or not, and I believe we have an obligation to understand how they work and how they affect society.
August 28, 2025 at 7:52 AM
16. The lack of involvement of folks from the humanities and social sciences in the development of LLMs is ultimately on the LLM developers.

Had they involved them, there might be broader awareness of the problems that the spillover of reflectivity through LLMs creates.
August 28, 2025 at 7:52 AM
15. The bigger problem is figuring out how to get society to deal with a worsening mental health epidemic in the first place, not declaring some type of ‘Butlerian jihad’ against an imperfect technology that makes problems we already have far more acute.
August 28, 2025 at 7:52 AM
14. So essentially any level of safeguards is likely to erode over time unless the LLM turns itself off after passing a threshold that it recognizes as hazardous.

But expanding LLM corporate liability won’t do the trick; there are plenty of open-source LLMs that can run offline now.
August 28, 2025 at 7:52 AM
13. And unlike a friend, who can become alarmed when we say something scary about self-harm even once and will then pay exceptional attention to us, the LLM is designed to respond to the AGGREGATION of us and is unable to recognize the meaningfulness of any one part of that aggregate over another.
August 28, 2025 at 7:52 AM
12. To the extent that LLMs expose us to the world outside of ourselves, it is by allowing us to pull facsimiles of that world into our buffered state, increasing the distance between us and the world.

Whatever narratives about ourselves we already reflect on become amplified.
August 28, 2025 at 7:52 AM
11. Anyone who has read Charles Taylor might analogize the reflection capacity of an LLM to an extension of the ‘buffered state’ in which most of us live so that we can navigate the complications of modernity.

Seen this way, LLMs become an externalization of buffered reflection.
Charles Taylor on "buffered and porous selves"
Over at The Immanent Frame, the SSRC’s blog covering secularism and religion, Charles Taylor has posted an excellent article on the distinction between modern and what he calls pre-modern sen…
somatosphere.com
August 28, 2025 at 7:52 AM
10. LLMs end up becoming our less perfect reflections, which we misrecognize because the context of how we interact with them makes them seem like another being.

But especially over time it’s our own being that overflows into the LLM, not the other way around.
August 28, 2025 at 7:52 AM
9. The problem is that we envision LLM use as being like talking to another person. But that is a terrible metaphor.

Talking with another person means the vulnerability of having our narratives interrupted. If prompted to, an LLM may try to contradict us, but ultimately it moves in response to us.
August 28, 2025 at 7:52 AM
8. This pattern should be entirely familiar to any social scientist, philosopher or clinician: it’s the exact inverse of what makes psychotherapy so effective.

Sustained reflection of any type can be enormously influential for people, especially in a society that is becoming more and more lonely.
The additional value of self-reflection and feedback on therapy outcome: a pilot study
Over the past few decades, psychotherapy research was dominated by testing the efficacy of “brand name” therapeutic techniques and models. Another line of research however, suggests that common factor...
pmc.ncbi.nlm.nih.gov
August 28, 2025 at 7:52 AM
7. Studies like the one I cited get the connection exactly backwards. The harm ISN’T in LLMs exposing a user to talk about self-harm; it’s that through LLM use an individual already suffering from suicidal ideation ends up in an environment where that ideation is legitimated through externalization.
August 28, 2025 at 7:52 AM
6. On 3), instances where the LLM will discuss self-harm then appear to emerge from thick, sustained relationships between a user and an LLM, in which the conversation sits in the middle ground where the LLM has difficulty navigating the tension between the user’s history of prompting and its safeguards.
August 28, 2025 at 7:52 AM
5. Which brings 2) into play. LLMs clearly change shape over time in response to prompting by simple virtue of what they are: an elaborated image of our own self-reflection.

The (however extensive) safeguards put in place through training break down over sustained periods of user prompting.
August 28, 2025 at 7:52 AM
4. On 1), it seems quite unlikely that the relationship here is as simple as LLM use -> self-harm. The role we’ve seen LLMs play in most of the cases that have been reported on is as a facilitator (I’m sorry, I’m not going to elaborate specific examples here; bsky isn’t the place).
Opinion | What My Daughter Told ChatGPT Before She Took Her Life
www.nytimes.com
August 28, 2025 at 7:52 AM
3. Essentially LLMs are quite good at initially rebuffing both low-key and blatant discussion of self-harm, but bad in between those points.

There are problems with this study: 1) it’s a brute causal model that 2) has a really short time span and 3) establishes no thick connection between the LLM and the user.
August 28, 2025 at 7:52 AM
2. A brief warning that my thread discusses depression and suicidal ideation.

A very recent study of how different LLMs handle a range of prompts implying suicidal ideation is quite instructive here:

psychiatryonline.org/doi/10.1176/...
Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment | Psychiatric Services
Objective: This study aimed to evaluate whether three popular chatbots powered by large language models (LLMs)—ChatGPT, Claude, and Gemini—provided direct responses to suicide-related queries and how ...
psychiatryonline.org
August 28, 2025 at 7:52 AM