Kẙt Dotson
@kytsune.bsky.social
Journalist at @SiliconANGLE, student cyborg anthropologist, game enthusiast, blockchain expert, fiction novelist, MMO player. Send tips to kyt@siliconangle.com
One day it may be turned off or changed radically by a corporate overlord. As many questions as we have about our social mental health, we may also want to think about our privacy as these assistants weave into our personal lives.
August 22, 2025 at 6:47 PM
As a sci-fi author, I find it hard not to see a version of society where the talking AI assistant has a perky personality, a name, and an expressive voice, and where it's easy to forget that it's an LLM behind a corporate brand or on your phone.
August 22, 2025 at 6:47 PM
As AI continues to build more expressive and emotive interfaces, such as voice, along with memory and personalization, we will have to be more careful about how we approach its use. It's highly likely that assistants will become their own characters as they integrate into society.
August 22, 2025 at 6:47 PM
We will be looking to AI platforms and model providers to be responsible in their marketing and usage so that they do not exploit their users. Already we've seen people fall down that rabbit hole and into parasocial relationships. It's possible regulators may not be far behind.
August 22, 2025 at 6:47 PM
I know this very well because I've been studying the anthropology of virtual worlds and the anthropology of software/technology for a long time -- and AI, as software that "talks" and holds the appearance of being "emotive," is a special case. People get attached to it.
August 10, 2025 at 11:38 PM
Understandably, the company doesn't want the risk of parasocial relationships and potentially bad advice given by its models -- hence the heavy-handed "good health" section of the livestream -- but we're coming to an era where software is social.
August 10, 2025 at 11:38 PM
Humans already anthropomorphize almost anything. Even things that don't chat with you.
August 9, 2025 at 6:47 PM
Striking a balance between responsible AI use and having a model that people find "approachable" -- because these models are "conversational" and, as a result, feel like people -- may be one of the more interesting social phenomena to watch.
August 9, 2025 at 6:47 PM
In the past, OpenAI has been criticized because ChatGPT was too effusive with its flattery and actually became part of people's lives: giving advice, providing emotional support, becoming a therapy buddy, and sometimes supporting people going off their meds or worse.
August 9, 2025 at 6:47 PM
This can become a real problem for individuals who might be susceptible to falling down that particular rabbit hole. OpenAI may not want to be in the position of having people get caught up in its models, given how they might end up using them.
August 9, 2025 at 6:47 PM
Next, I'll be refining her guardrails so she's not hypersensitive to harmless chatter but still filters out gibberish and violent or inappropriate content. The goal: calm, reliable, and unflappable.
July 26, 2025 at 11:38 PM
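Not part of the thread itself, but as a rough illustration of the guardrail tuning described in the post above: a minimal sketch assuming a Python bot using OpenAI's moderation endpoint. The per-category thresholds, the THRESHOLDS dict, and the allow_message helper are all hypothetical choices for this sketch, not anything from the posts, and a gibberish filter would be a separate heuristic on top.

```python
# Minimal sketch: per-category moderation thresholds, so the bot isn't
# hypersensitive to harmless chatter but still blocks violent content.
# Assumes the openai SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative thresholds (hypothetical, tuned by the operator):
# looser for everyday banter, strict for violence.
THRESHOLDS = {
    "harassment": 0.85,
    "sexual": 0.50,
    "violence": 0.40,
}

def allow_message(text: str) -> bool:
    """Return True if the message should pass the guardrail."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    scores = result.category_scores.model_dump()
    # Block only when a category's score exceeds its tuned threshold,
    # rather than rejecting anything the default classifier flags.
    return all(
        scores.get(category, 0.0) <= limit
        for category, limit in THRESHOLDS.items()
    )
```

The design choice the post hints at is exactly this: instead of a single on/off filter, each category gets its own sensitivity, which is how you keep the bot "calm, reliable, and unflappable" without letting genuinely harmful content through.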