Dr Grace, Amidst Monsters
seifely.bsky.social
Artist. Professional. AI PhD/Psych MSc. I like dogs, dry wit and dice-based tabletop games. Hiker and cat parent 🌸 Bi/Poly, she/her.
If it's the same pathway as we use to evaluate other people (even to a partial degree) then is it all that reliable? Is our interaction with it biased and flawed in new ways? (Yes). (Also I'm just going to start saying The Vibes Are Off every time Excel does something unexpected and upsetting).
August 12, 2025 at 4:56 PM
... To have to build up an impression of something shapeless through multiple interactions to determine your sense of its accuracy, your trust in it, your desire to use it, and the usefulness of the technology is uniquely weird. Uniquely superstitious. Activating a different type of mental analysis.
August 12, 2025 at 4:56 PM
Wondering if there is any quantifiable difference in the social tone of these evaluations compared to other software. It clearly links to model transparency and black-boxiness, but I only ever use "feel" to evaluate a UI. I reckon this is a bit deeper than that - sure, text comm is the UI here, but ...
August 12, 2025 at 4:56 PM
Even more interesting is evaluation beyond benchmarks. I've seen a bunch of responses to the model release that highlight just how curious the language is around model attachment. "The vibes are off", "responses feel so different", "feels weird/jagged", "I can sense it was trained on safe responses"
August 12, 2025 at 4:56 PM
But of course, why would anyone trust a for-profit service provider? Ever? (Even one who is ostensibly not so?)
August 12, 2025 at 4:56 PM
Trusting the provider to provide a good experience tailored for you by choice of model best suited to each query, even mid-conversation, should theoretically be the best end-user case (though people will still be picky and superstitious and think they know best).
August 12, 2025 at 4:56 PM
Anyway, no conclusions here just yet. Just interesting thoughts about the specificity of this question given the uniqueness of the context. Children talking about their fears to teddies and learning to feel could be a fun comparator (though Teddy isn't owned by Capitalist Megacorp, of course). 🐻
August 11, 2025 at 12:16 PM
And friends, too, to be honest. Romantic relationships are a unique thing of their own, I think, & probably warrants separate analysis. The automatic trust involved with an artificial system in any relationship style is also super interesting! I suppose it's the lack of threat that enhances bonding.
August 11, 2025 at 12:16 PM
Anthropic suggest that Claude "rarely pushes back" in counselling-style conversations (which aligns with that article about Replika etc., but interacts interestingly with ChatGPT's recent sycophancy issues). But anyone who's engaged with therapy genuinely knows that's half the point of a counsellor.
August 11, 2025 at 12:16 PM
I don't think it's as easy to define affective skill improvement or degradation as it is coding or writing capacity. Even with an artificial and unrealistic conversational partner, can this be translated into social improvement? Especially if a more stable emotional state is achieved generally?
August 11, 2025 at 12:16 PM
I'm also thinking about this "lonely" portion of users. Those on Reddit talking about AI companionship saying that they feel warm and fuzzy each day from even thinking about using the system. Are these interactions clearly improving or degrading affective skills?

www.reddit.com/r/lonely/com...
August 11, 2025 at 12:16 PM
My queries with this are around benefits to users, surprisingly. We're beginning to better outline the decline of authentic human skills with greater AI use, but so far I don't think anyone has considered the bleed effects of companionship use. And I'm not just thinking about misvalidation of delusions...
August 11, 2025 at 12:16 PM
That line's getting blurrier with the introduction of Grok's companion mode - "I'm clocking off for the day, let's let the Misa Misa skin for my agent out of her box". You can even automate the growth of your relationship, if you need her to take her clothes off ASAP.

vchavcha.com/en/free-reso...
Grok AI Companion Ani Complete Guide to Affection and Interactions - vchavcha.com
Elon Musk’s AI startup xAI has launched “Companion Mode” for its chatbot Grok, featuring virtual avatars for a more immersive and interactive experience. The most popular character, Ani, resembles Mis...
August 11, 2025 at 12:16 PM
- just a month after Anthropic were discussing how affective interactions supposedly made up less than 3% of Claude conversations.

www.anthropic.com/news/how-peo...

Of course, products like Replika and Character AI are sold on a largely different premise to Claude. They're specifically for personal discourse.
How people use Claude for support, advice, and companionship
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
August 11, 2025 at 12:16 PM
I'm going to be posting every day this week (commitment!) to break the ice, because I've been avoiding posting here ever since I migrated and enough is enough. I've built up a bank of thoughts about recent AI goings-on and I need to let them out! I may even review some papers, too. ☺️✌️
August 10, 2025 at 3:04 PM
Right now I'm working for the NHS and I currently have two very sweet partners, @theeuphemism.bsky.social and @jmfgd.bsky.social. I also have a cat named after a naughty elf from the Silmarillion...a subject (elves) I particularly like to draw over at my art account (@morgul.bsky.social). 🎨
August 10, 2025 at 3:04 PM
He started ranting about being the "comely" woman in the pub, I feel like I wasn't communicating properly...
February 6, 2025 at 9:07 PM