Google’s AI Overview says that Chuck Wendig has multiple pets that don’t really exist. Among other things.

gen-ai-lies.org/2025/12/09/c...

#GenAILies
Chuck Wendig’s cat – Generative AI Lies
gen-ai-lies.org
December 10, 2025 at 6:40 AM
For the past couple of years, I’ve been posting examples of generative AI making stuff up. I’ve been using the hashtag #GenAILies for those posts.

But it has become difficult to find all of those posts, so I recently decided to make a website for them.

Introducing:

gen-ai-lies.org

1/
Generative AI Lies – Examples of generative AI making stuff up
gen-ai-lies.org
December 8, 2025 at 6:46 PM

“A California attorney must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT.

[…] 21 of 23 quotes from cases cited in the attorney’s opening brief were made up.”

calmatters.org/economy/tech...

#GenAILies
California issues historic fine over lawyer’s ChatGPT fabrications
The court of appeals issued an historic fine after 21 of 23 quotes in the lawyer's opening brief were fake. Courts want more AI regulations.
calmatters.org
September 22, 2025 at 10:34 PM
The line in question is from an Anna Russell piece. It doesn’t appear in the Roethke poem. It doesn’t appear in any of the Roethke pages that Google links to in support of its claim. The only thing that the line and the Roethke poem have in common is that both date from 1953.

#GenAILies

2/2
September 5, 2025 at 5:26 PM
Reporter: The FDA has a new AI tool that's intended to speed up drug approvals. But several FDA employees say the new AI helper is making up studies that do not exist. One FDA employee telling us, 'Anything that you don't have time to double check is unreliable. It hallucinates confidently.'
July 28, 2025 at 9:55 PM
That thing where lawyers (and others) use generative AI in court filings, and the AI makes stuff up? Now there’s a list of such situations: the AI Hallucination Cases database.

Includes over 200 cases so far.

www.damiencharlotin.com/hallucinatio...

#GenAILies
AI Hallucination Cases Database – Damien Charlotin
Database tracking legal cases where generative AI produced hallucinated citations submitted in court filings.
www.damiencharlotin.com
July 7, 2025 at 10:09 PM
“when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study.”

(Article from April.)

(Indirectly via Aliette.)

royalsocietypublishing.org/doi/10.1098/...

#GenAILies
Generalization bias in large language model summarization of scientific research | Royal Society Open Science
Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex sc...
royalsocietypublishing.org
May 31, 2025 at 6:09 AM
“Chicago Sun-Times prints summer reading list full of fake books”

“Reading list [created by generative AI] in advertorial supplement contains 66% made up books by real authors.”

arstechnica.com/ai/2025/05/c...

#GenAILies
ok so apparently:
1) Chicago Sun-Times did not hire this guy, they bought a mass-produced supplement from a company that is "strategic partners" with openai

support local news, boycott openai
"'"I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it because it's so obvious. No excuses,'" Buscaglia said."
1)THIS IS NOT BACKGROUND! THIS IS MAKING UP ENTIRE BOOKS! WHICH ARE THE ENTIRE POINT OF THE ARTICLE!
May 20, 2025 at 10:23 PM
Generative AI company Anthropic tests its “Chain-of-Thought” “reasoning models” to see whether they’re “faithful”—that is, to see whether the models accurately report the steps that they’re following. Turns out that they don’t.

(Article from April.)

www.anthropic.com/research/rea...

#GenAILies
Reasoning models don't always say what they think
Research from Anthropic on the faithfulness of AI models' Chain-of-Thought
www.anthropic.com
May 16, 2025 at 9:37 PM
Yet another example of the kinds of falsehoods produced by LLMs: a post from early 2024 about Google Bard’s incorrect bio of Deirdre Saoirse Moen.

deirdre.net/2024/limits-...

#GenAILies
May 2, 2025 at 10:45 PM
Thread.

#GenAILies
I just read the order to show cause in the MyPillow guy’s case where his lawyers used generative AI and it did the thing AI always does (assembled words in a certain order without any understanding of sourcing) because you have to take your joy where you find it.
April 28, 2025 at 5:09 AM
The little kid shouted, “The emperor has no clothes!”

The other citizens all glared at the kid. “In the future, the emperor’s clothes will be awesome!” they said. “So awesome that they will solve all of our problems!”

This is a post about generative AI.

#GenAILies
April 4, 2025 at 10:01 PM
“A federal court judge has thrown out expert testimony from a Stanford University artificial intelligence and misinformation professor[, Jeff Hancock], saying his submission of fake information made up by an AI chatbot ‘shatters’ his credibility.”

www.mercurynews.com/2025/01/15/s...

#GenAILies
www.mercurynews.com
January 16, 2025 at 7:11 PM
If your ChatGPT prompt includes the names of certain humans, ChatGPT says “I'm unable to produce a response.”

Turns out that those are the names of people who have prominently reported that ChatGPT was making up lies about them.

arstechnica.com/information-...

#GenAILies
Certain names make ChatGPT grind to a halt, and we know why
Filter resulting from subject of settled defamation lawsuit could cause trouble down the road.
arstechnica.com
December 5, 2024 at 5:36 PM
Researchers asked ChatGPT’s search tool to identify the source of excerpts from a couple hundred online articles.

The result: ChatGPT made up answers. (Not always, but often.)

Gasp! Shock! Surprise!

www.cjr.org/tow_center/h...

#GenAILies
How ChatGPT Search (Mis)represents Publisher Content
ChatGPT search—which is positioned as a competitor to search engines like Google and Bing—launched with a press release from OpenAI touting claims that the company had “collaborated extensively with t...
www.cjr.org
December 5, 2024 at 5:25 PM
I just tried it, and yep, if you do a Google search for [salt pork substitute kosher], the AI Overview tells you to try pancetta or bacon as a kosher substitute for salt pork.

Yet another example of why you should never believe anything that generative AI tells you.

#GenAILies
A friend told me about this one and I didn’t believe her until I tried it myself
November 18, 2024 at 1:00 AM