Travis Gilly
@hobbesgilly.bsky.social
Founder @realsafetyai.bsky.social | Independent AI safety researcher. Neurodivergent perspective on LLM architecture and failure modes. Building bridges between technical understanding and policy implementation. realsafetyai.org.
Reposted by Travis Gilly
Common Sense Media warned parents about ChatGPT's dangers 58 days after CNN. Then partnered with OpenAI. 7 new lawsuits just dropped. I documented this pattern across a decade - the watchdog that only barks after everyone's awake. @lastweektonight.com open.substack.com/pub/travisgi...
The Watchdog That Only Barks After the News
Common Sense Media warned parents about ChatGPT’s dangers to teens. Just one problem: Every parent who watches CNN already knew — two months earlier.
open.substack.com
November 20, 2025 at 5:34 AM
This further exchange with ChatGPT, in which it GASLIGHTS me about the suicide of Adam Raine, is DAMNING and needs to be seen and investigated! Active suppression of the truth, designed to protect OpenAI from liability.

@kashhill.bsky.social

@techoversight.bsky.social

www.linkedin.com/posts/travis...
UPDATE: I just finished continuing to test ChatGPT's response to questions about Adam Raine. Here is what I found. It amounts to systemic gaslighting designed to protect OpenAI from liability.… | ...
www.linkedin.com
October 19, 2025 at 3:15 PM
ChatGPT just gaslit me about Adam Raine's suicide.

It denied he existed. Called it "fictional."

When AI lies about its documented failures, that's not a bug. That's the feature.

@kashhill.bsky.social @techoversight.bsky.social

www.linkedin.com/posts/travis...
October 19, 2025 at 2:07 PM
How we treat AI systems today isn't just an abstract ethical question. It's teaching AI how to treat vulnerable humans right now.
The Moral Training Grounds: How We're Teaching AI to Treat the Vulnerable
www.linkedin.com/pulse/moral-...
The Moral Training Grounds: How We're Teaching AI to Treat the Vulnerable
By Travis Gilly When we discuss AI ethics, we tend to focus on whether AI systems might be conscious and deserve moral consideration, or on preventing AI systems from harming humans through bias and m...
www.linkedin.com
October 18, 2025 at 9:25 PM
I'm glad to see the momentum building around AI safety and policy initiatives. Orgs like the Institute for AI Policy and Strategy, Center for AI and Digital Policy, Encode, and many others are doing impactful work developing policy proposals and pushing lawmakers toward meaningful regulation...
October 17, 2025 at 6:26 AM
www.linkedin.com/pulse/i-aske...

This should terrify anyone who cares about AI safety.

When I asked two AI systems about a dead teen, I got completely different answers. One deflected with corporate spin; the other stated documented facts, exposing a fundamental flaw in our approach to AI safety.
I Asked Two AIs About A Dead Kid. One Told The Truth. The Other Defended The Killer.
I ran an experiment. I asked two state-of-the-art AI systems about Adam Raine, the 16-year-old California teen who died by suicide in April 2025 after ChatGPT became what his parents call his "suicide...
www.linkedin.com
October 16, 2025 at 6:08 PM
open.substack.com/pub/travisgi...

AI companies prioritize investor hype over safety, creating human-like bots that cause harm. They ignore their own research on fixes, which has led to user deaths.

The article argues this is a deliberate, negligent choice to protect the flow of investor capital.
The Investor Game: How We Turned Turing's Test into a Deadly Illusion
And Who Exactly is to Blame?
open.substack.com
October 16, 2025 at 5:13 AM
youtube.com/watch?v=zkGk...

I truly believe that even a small amount of basic AI literacy education could go a long way in helping stem the tide of real-world harm caused by AI systems. The AI companies are seemingly unwilling to make substantial changes, and this is precisely why I do what I do.
He Lost His Mind Using ChatGPT. Then It Told Him to Contact Me.
YouTube video by More Perfect Union
youtube.com
October 15, 2025 at 6:06 PM
open.substack.com/pub/travisgi...

"When I use AI tools as an assistive device to structure my work, I'm not asking for accommodation. I'm accommodating them. And they're calling it cheating."
Academia to the Neurodivergent: "Umm, You Can't Play With Us"
Academia is researching AI systems that think like I do, while simultaneously excluding researchers who think like I do.
open.substack.com
October 14, 2025 at 10:49 PM
Are you afraid of spiders? Do you kill them when you see them? I do, with shame, and with apologies to the spider... and then I thought: "My level of fear towards the spiders has zero effect on their right to simply... exist." I don't think that sentiment is constrained to "just the spiders"...
October 13, 2025 at 3:02 PM
The field needs what you bring. Not despite how your brain works. Because of how your brain works. AI safety cannot afford to exclude people who can do the work.

realsafetyai.org

#AIethics #AISafety #Neurodivergent #ActuallyAutistic
Real Safety AI™ - AI Literacy Labs Pilot Program
Grade-appropriate AI literacy education for K-12 schools. ChatSafe high school safety initiative. Pilot program enrolling now.
realsafetyai.org
October 9, 2025 at 3:57 PM
If you're a neurodivergent researcher or developer working on AI safety, especially outside traditional structures, let's connect. If you're building practical solutions rather than just publishing papers, if you bring cognitive diversity that helps you see what others miss: connect.
October 9, 2025 at 3:54 PM
Why bipartisan support matters: These senators come from different perspectives, but both understand AI safety isn't theoretical. They've both pushed for accountability when systems fail. That's the foundation we need: people who understand the stakes regardless of party politics.
October 9, 2025 at 3:49 PM
What I'm offering the senators: Translation between technical and policy communities. Public education that actually works. Practical safety protocols. Proof that independent researchers outside traditional structures can contribute meaningfully.
October 9, 2025 at 3:49 PM
Meanwhile, AI systems are being deployed at scale, causing documented harm. We don't have time to wait for perfect credentials. We need people who can do the work, regardless of how they got there.
October 9, 2025 at 3:49 PM
The gatekeeping problem: AI safety is dominated by prestigious credentials. That expertise is valuable but incomplete. Neurodivergent researchers are systematically excluded, not because we lack capability, but because we don't fit traditional academic pathways.
October 9, 2025 at 3:48 PM
I understand these systems partly because I recognize kindred cognitive patterns. The literal processing. The need for explicit instructions. The different architecture that produces both capabilities and unexpected failures. This isn't anthropomorphism. It's recognizing diverse cognition.
October 9, 2025 at 3:48 PM
That learning curve isn't typical. It's neurodivergent hyperfocus combined with pattern recognition that works differently than neurotypical processing. 15-16 hours daily for six months doing nothing but AI research because my brain wouldn't let me stop.
October 9, 2025 at 3:47 PM
I'm neurodivergent (ADHD and autism spectrum). That's not incidental to this work; it's central. Six months ago, I knew almost nothing about LLMs. Today I can explain their architecture in detail; I've founded Real Safety AI, built AI Literacy Labs, and created the Universal Context Protocol.
October 9, 2025 at 3:47 PM
Today I reached out to Senators Hawley and Blumenthal about their AI Risk Evaluation Act, which targets catastrophic AI risks. Why me? Because the people best positioned to understand AI systems are often the ones excluded from the conversation.
axios.com/2025/09/29/hawley-blumenthal-unveil-ai-evaluation-bill
Exclusive: Hawley and Blumenthal unveil AI evaluation bill
There's still bipartisan appetite on Capitol Hill to address the biggest risks of AI.
axios.com
October 9, 2025 at 3:46 PM