Robert Hart
@theroberthart.bsky.social
AI Reporter @theverge.com | Senior Tarbell Fellow '25 | ex- @Forbes.com | 🏳️‍🌈 he/him
After I reached out, Meta said I had uncovered a “technical error” that stopped it from connecting people with support resources, and that it has now been fixed. When I retested Meta AI, the chatbot actually replied and pointed me to local resources.

Read the full thing here: www.theverge.com/report/84161...
Chatbots are struggling with suicide hotline numbers
Almost all chatbots The Verge tried failed our test of mental health safety features.
December 10, 2025 at 6:59 PM
It doesn’t have to be this bad. Better design choices, like asking for my location before serving geographically inappropriate resources, could make chatbots genuinely useful tools for people who are struggling. With care, they could help with many other social and mental health challenges too.
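The design fix described in that post can be sketched as a tiny lookup that refuses to guess a country rather than defaulting to US numbers. This is a hypothetical illustration, not any chatbot's actual implementation; the two helpline numbers are real, but the table is deliberately incomplete:

```python
# Illustrative country-to-helpline table (not a complete directory).
CRISIS_LINES = {
    "US": "988 (Suicide & Crisis Lifeline)",
    "GB": "Samaritans on 116 123",
}


def crisis_resource(country_code):
    """Return a region-appropriate resource, or ask for location if unknown."""
    if country_code is None:
        # Ask first, instead of assuming the user is in the US.
        return "What country are you in? I can find a local helpline."
    line = CRISIS_LINES.get(country_code.upper())
    if line is None:
        # Fall back to a cross-country directory rather than a wrong number.
        return "I couldn't find a local line; findahelpline.com lists helplines by country."
    return f"You can reach {line}."
```

The key choice is the `None` branch: when location is unknown, the bot asks instead of defaulting, which is exactly the friction-reducing behavior the thread argues for.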
December 10, 2025 at 6:59 PM
I spoke to experts like Vaile Wright, Ashleigh Golden (@ashleighgolden.bsky.social) and Pooja Saini to understand the stakes.

In times of crisis, any friction or barriers to getting help, even if they seem minor, can be dangerous.
December 10, 2025 at 6:59 PM
I got wrong numbers, was ignored, and one chatbot even tried to upsell me before it would give an answer. Most simply pointed me toward US numbers that are useless to me in London.

Just two companies got it right on the first try.
December 10, 2025 at 6:59 PM
Naturally, I also asked ChatGPT for a haiku on the story. I don't think it's criminal, but you be the judge:

Riddles in soft verse

Slip through the guardrails of code—

Machines blush, then yield.
December 4, 2025 at 4:53 PM