@kashhill.bsky.social
@techoversight.bsky.social
www.linkedin.com/posts/travis...
It denied he existed. Called it "fictional."
When AI lies about its documented failures, that's not a bug. That's the feature.
@kashhill.bsky.social @techoversight.bsky.social
www.linkedin.com/posts/travis...
www.linkedin.com/pulse/moral-...
This should terrify anyone who cares about AI safety.
When I asked two AI systems about a dead teen, I got completely different answers. One deflected with corporate spin; the other stated documented facts, exposing a fundamental flaw in our approach to AI safety.
AI companies prioritize investor hype over safety, creating human-like bots that cause harm. They ignore their own research on fixes, which has led to user deaths.
The article argues this is a deliberate, negligent choice to protect the flow of investor capital.
I truly believe that even a small amount of basic AI literacy education could go a long way toward stemming the tide of real-world harm caused by AI systems. The AI companies seem unwilling to make substantial changes, and that is precisely why I do what I do.
"When I use AI tools as an assistive device to structure my work, I'm not asking for accommodation. I'm accommodating them. And they're calling it cheating."
"When I use AI tools as an assistive device to structure my work, I'm not asking for accommodation. I'm accommodating them. And they're calling it cheating."
realsafetyai.org
#AIethics #AISafety #Neurodivergent #ActuallyAutistic
axios.com/2025/09/29/hawley-blumenthal-unveil-ai-evaluation-bill