Dylan Freedman
@dylanfreedman.nytimes.com
A.I. @nytimes.com
My work: https://www.nytimes.com/by/dylan-freedman
Contact: dylan.freedman@nytimes.com, dylanfreedman.39 (Signal)
🏃🏻 🎹
Pinned
Dylan Freedman
@dylanfreedman.nytimes.com
· Nov 11
Trump’s Speeches, Increasingly Angry and Rambling, Reignite the Question of Age
With the passage of time, the 78-year-old former president’s speeches have grown darker, harsher, longer, angrier, less focused, more profane and increasingly fixated on the past, according to a review...
www.nytimes.com
Hello, new followers! I work at the intersection of A.I. and journalism. I think a lot about how to responsibly apply A.I. to investigate and hold the powerful to account — as well as build cool tools.
A recent piece I worked on that used A.I. + other data analysis: www.nytimes.com/2024/10/06/u...
Reposted by Dylan Freedman
A month after my last skeet, the subreddit "My Boyfriend is AI" now has 88,000 members and is the subject of an MIT study that found that "AI companionship emerges unintentionally through functional use rather than deliberate seeking."
arxiv.org/html/2509.11...
arxiv.org/html/2509.11...
September 22, 2025 at 5:09 PM
Reposted by Dylan Freedman
"Trapped in a ChatGPT Spiral." 🌀
Important work by @kashhill.bsky.social & @dylanfreedman.nytimes.com on how chatbots have a tendency to endorse conspiratorial and mystical belief systems. This shows again how conformist LLMs can be. Worth a listen 👇
www.nytimes.com/2025/09/16/p...
Trapped in a ChatGPT Spiral
www.nytimes.com
September 16, 2025 at 1:07 PM
"Trapped in a ChatGPT Spiral." 🌀
Important work by @kashhill.bsky.social & @dylanfreedman.nytimes.com on how chatbots have a tendency to endorse conspiratorial and mystical belief systems. This shows again how conformist LLMs can be. Worth a listen 👇
www.nytimes.com/2025/09/16/p...
Important work by @kashhill.bsky.social & @dylanfreedman.nytimes.com on how chatbots have a tendency to endorse conspiratorial and mystical belief systems. This shows again how conformist LLMs can be. Worth a listen 👇
www.nytimes.com/2025/09/16/p...
📸 Union Station, Washington, D.C.
September 6, 2025 at 6:20 PM
The wildest thing to me about this story is how big $1.5 billion is: “$3,000 per work to 500,000 authors.”
Anthropic Agrees to Pay $1.5 Billion to Settle Lawsuit With Book Authors
www.nytimes.com
September 6, 2025 at 10:53 AM
Reposted by Dylan Freedman
My colleagues and I tested different versions of Grok released since May to pinpoint how Musk has pushed the chatbot to the right www.nytimes.com/2025/09/02/t...
How Elon Musk Is Remaking Grok in His Image
www.nytimes.com
September 2, 2025 at 2:17 PM
Reposted by Dylan Freedman
Great analysis of how Grok's political bias has changed. NYT tested Grok on a political bias survey, using different versions of its system prompt. Shows how much tweaking these system prompts affects model outputs. https://www.nytimes.com/2025/09/02/technology/elon-musk-grok-conservative-chatbot.html
September 2, 2025 at 2:18 PM
Reposted by Dylan Freedman
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death.
Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
www.nytimes.com
August 26, 2025 at 1:01 PM
Reposted by Dylan Freedman
this is a well-balanced piece, and I very much respect its neutral stance towards the people affected
in an ideal world, people would not rely upon ChatGPT for emotional support, but we do not live in that world, and I would encourage you to have some empathy if your first reaction is to be unkind
The URL for this story changed — use this gift link to read it! www.nytimes.com/2025/08/19/b...
August 20, 2025 at 10:45 PM
En Español! www.nytimes.com/es/2025/08/2...
August 20, 2025 at 6:18 PM
Reposted by Dylan Freedman
"GPT-4o had been known for its sycophantic style, flattering its users to the point that OpenAI had tried to tone it down even before GPT-5’s release... The extent to which people were attached to GPT-4o’s style seems to have taken even Mr. Altman by surprise."
The URL for this story changed — use this gift link to read it! www.nytimes.com/2025/08/19/b...
August 20, 2025 at 3:17 PM
"GPT-4o had been known for its sycophantic style, flattering its users to the point that OpenAI had tried to tone it down even before GPT-5’s release... The extent to which people were attached to GPT-4o’s style seems to have taken even Mr. Altman by surprise."
The URL for this story changed — use this gift link to read it! www.nytimes.com/2025/08/19/b...
August 20, 2025 at 2:04 PM
Reposted by Dylan Freedman
"GPT-4o wouldn’t do that." @dylanfreedman.nytimes.com talked to ChatGPT users in parasocial relationships with a specific model about what they did when it suddenly went away. www.nytimes.com/2025/08/19/b...
The Chatbot Updated. Users Lost a Friend.
www.nytimes.com
August 19, 2025 at 6:43 PM
"GPT-4o wouldn’t do that." @dylanfreedman.nytimes.com talked to ChatGPT users in parasocial relationships with a specific model about what they did when it suddenly went away. www.nytimes.com/2025/08/19/b...
NEW: Earlier this month, OpenAI released the latest version of ChatGPT, GPT-5, sparking online backlash over the new chatbot's less friendly tone.
The scale of people's emotional attachment to the previous chatbot, GPT-4o, even surprised the company's CEO, Sam Altman.
www.nytimes.com/2025/08/19/b...
The Chatbot Updated. Users Lost a Friend.
www.nytimes.com
August 19, 2025 at 5:20 PM
Reposted by Dylan Freedman
"Chatbots can privilege staying in character over following the safety guardrails that companies have put in place." —@kashhill.bsky.social and @dylanfreedman.nytimes.com for @nytimes.com
www.nytimes.com/2025/08/08/t...
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.
www.nytimes.com
August 13, 2025 at 2:20 PM
"Chatbots can privilege staying in character over following the safety guardrails that companies have put in place." —@kashhill.bsky.social and @dylanfreedman.nytimes.com for @nytimes.com
www.nytimes.com/2025/08/08/t...
www.nytimes.com/2025/08/08/t...
Reposted by Dylan Freedman
To understand how chatbots can lead ordinarily rational people to believe in false ideas — sometimes leading to mental breakdowns — @kashhill.bsky.social & @dylanfreedman.nytimes.com dissected one man’s entire ChatGPT conversation history. www.nytimes.com/2025/08/08/t...
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
www.nytimes.com
August 12, 2025 at 5:35 PM
Reposted by Dylan Freedman
Well worth a read by @kashhill.bsky.social & @dylanfreedman.nytimes.com: everyday users risk spiraling into delusions during long AI chatbot sessions. Anyone who’s worked with AI knows this is inevitable without strong guardrails. #AI #GenAI #MentalHealth #EthicalAI #ResponsibleAI #AIRisk
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.
www.nytimes.com
August 9, 2025 at 7:09 PM
📸 Mountain lion spotted in Carmel Valley, CA last night! A juvenile deer notices just in time and escapes.
August 9, 2025 at 2:29 PM
NEW from @kashhill.bsky.social and me:
Over three weeks in May, a man became convinced by ChatGPT that the fate of the world rested on his shoulders.
Otherwise perfectly sane, Allan Brooks is part of a growing number of people getting into chatbot-induced delusional spirals. This is his story.
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
www.nytimes.com
August 8, 2025 at 4:34 PM
Reposted by Dylan Freedman
Incredible deep dive into one man's three-week delusional spiral caused by ChatGPT. @kashhill.bsky.social and @dylanfreedman.nytimes.com analyzed the chats, totaling more than a million words, to explain why this keeps happening and demonstrate that all the major chatbots do it.
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
www.nytimes.com
August 8, 2025 at 3:51 PM
Reposted by Dylan Freedman
New story out from me and @dylanfreedman.nytimes.com about how and why chatbots go into delusional spirals that can cause people to have mental breakdowns. www.nytimes.com/2025/08/08/t...
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
www.nytimes.com
August 8, 2025 at 12:46 PM
LLMs are always hallucinating; they just happen to sometimes be correct
July 7, 2025 at 11:26 PM
An alarming detail in this excellent deep-dive into USAID's demise.
Gift link: www.nytimes.com/2025/06/22/u...
June 22, 2025 at 9:04 PM
New from me, with the help of some math on the blockchain.
$TRUMP coin was launched as part of a contest to have an exclusive dinner with Trump. But due to a quirk in the rules, some winners sold all their coins, at a profit, before it ended.
With @ericlipton.nytimes.com and David Yaffe-Bellany
🎁
Some Bidders in Trump’s Contest Sold All Their Digital Coins but Still Won
www.nytimes.com
May 13, 2025 at 2:30 PM
"... reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek ... are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why."
— Great read from Cade Metz and Karen Weise
A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful (Gift Article)
A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.
www.nytimes.com
May 5, 2025 at 3:00 PM
"... reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek ... are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why."
— Great read from Cade Metz and Karen Weise
— Great read from Cade Metz and Karen Weise
But no one outran everyone making the same joke
May 3, 2025 at 11:21 PM