Tom Thorley
@tgthorley.bsky.social
Advocate, Speaker, Technologist
Reposted by Tom Thorley
So people demand "AI" sycophancy, refuse to interact w/ *less* sycophantic "AI", & then get increasingly ill-disposed toward interacting w/ other human beings, preferring, again, an "AI" system that is actively locking them into a loop of skills dependency, bias confirmation, & hostility.
SEEMS BAD
October 24, 2025 at 2:53 PM
Companies are beginning to see that AI is so pervasive, and its built-in systemic biases so powerful, that it is shaping culture. Responsible AI development is therefore critical to building the culture companies need. It needs to become an economic imperative.
October 15, 2025 at 3:42 AM
Second, ML classifiers can be more accurate than humans and show less bias, but they are not always better on either count; it really depends on how they are trained and used. Often we don’t do a good job of measuring bias in either human or ML moderation systems.
October 7, 2025 at 8:46 PM
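The "measuring bias" point can be made concrete. A minimal sketch of one common audit, comparing false positive rates across groups, where the decision records, group labels, and rates are all hypothetical illustrations, not data from any real moderation system:

```python
# Sketch: measure one form of bias in a moderation system (human or ML)
# by comparing false positive rates across groups. All data here is
# invented for illustration; a real audit needs representative,
# independently labeled samples per group.
from collections import defaultdict

# Each record: (group, ground_truth_is_violation, moderator_flagged)
decisions = [
    ("group_a", False, True),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", False, True),
    ("group_b", False, True),
    ("group_b", True,  False),
]

false_pos = defaultdict(int)  # benign posts wrongly flagged, per group
benign = defaultdict(int)     # total benign posts, per group

for group, is_violation, flagged in decisions:
    if not is_violation:
        benign[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(benign):
    rate = false_pos[group] / benign[group]
    print(f"{group}: false positive rate = {rate:.2f}")
# A large gap between groups is one signal of biased moderation,
# and the same audit applies whether the flagger is a human or a model.
```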
@rahaeli.bsky.social I broadly agree with what you are saying in this thread, but I think some nuance should be injected. First, you can be upset at classifiers making mistakes and at your data being used for training; we need to do better at informed consent, privacy, and data ownership.
October 7, 2025 at 8:46 PM
Personally I’ve been quiet and reflective today; I always find 9/11 hard. But I’m trying to remember that every action we take is part of a larger effort. Let’s keep showing up with compassion in how we treat each other.
🙏
September 12, 2025 at 3:43 AM