Woodrow Hartzog
@hartzog.bsky.social

Andrew R. Randall Professor of Law at Boston University School of Law. Author of "Privacy's Blueprint" (2018) and co-author of "Breached!" (2022). Posting mainly about privacy, tech and the law.

Pinned
I've just posted my latest essay w/ @jessicasilbey.bsky.social, titled "How AI Destroys Institutions." We argue that AI systems are designed in ways that degrade & are likely to destroy our crucial civic institutions like the rule of law, universities & a free press. papers.ssrn.com/sol3/papers....
How AI Destroys Institutions
Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies
papers.ssrn.com
@garymarcus.bsky.social: LLMs still hallucinate and continue to make boneheaded errors, and this Caltech / Stanford paper is yet another suggesting that this is inherent. Even LLMs marketed as offering "reasoning" systems have major problems with reasoning. garymarcus.substack.com/p/breaking-l...
BREAKING: LLM “reasoning” continues to be deeply flawed
A new review underscores the breadth of the problem, and shows that close to a trillion dollars hasn’t changed that
garymarcus.substack.com
we need to talk about that Ring Super Bowl ad
"Crying girls will make you rich."

That's the actual pitch from a marketer selling packages of emotional videos that brands use to flood TikTok with undisclosed ads.

I just published an investigation into industrial UGC campaigns. What I found is troubling. 🧵 indicator.media/p/crying-gir...
“Crying girls will make you rich”: How marketers trick users with mass-produced "organic" content
Indicator analyzed UGC campaigns that generated over 2 billion TikTok views and found rampant violations of disclosure rules.
indicator.media
⚠️ Despite all the hype, chatbots still make terrible doctors. Out today is the largest user study of language models for medical self-diagnosis. We found that chatbots provide inaccurate and inconsistent answers, and that people are better off using online searches or their own judgment.

AI continues to go just great.

Reposted by Woodrow Hartzog

one chatbot told a participant in this study to go lie down in the dark when he was presenting with a brain hemorrhage... what are we even doing

www.404media.co/chatbots-hea...
Chatbots Make Terrible Doctors, New Study Finds
Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”
www.404media.co
Here's that Ring #SuperBowl commercial:
“These surveillance tools are an authoritarian’s dream…”
ICE’s growing surveillance state - The Boston Globe
ICE has constructed a digital dragnet that captures and retains massive amounts of data about all of us, citizens and noncitizens alike.
www.bostonglobe.com

Reposted by Woodrow Hartzog

Generative and other kinds of AI have numerous harmful effects. In a recent paper, Boston University professors @hartzog.bsky.social and @jessicasilbey.bsky.social identify and highlight a new concern: AI is sabotaging democracy by destroying the institutions that undergird it.
Researchers: How AI is undermining democracy
Paper argues tech threatens democratic foundations.
www.sfexaminer.com
“The company launched its Search Party feature in September and, despite some misgivings about it being enabled by default (and concerns around privacy and relationships with law enforcement), the company is taking a victory lap with a Super Bowl commercial.”
Now anyone can tap Ring doorbells to search for lost dogs
Ring says Search Party helps find more than one missing pet a day.
www.theverge.com

Exactly.

This is, by far, the best thing I have read on the topic: discovery.ucl.ac.uk/id/eprint/10...
discovery.ucl.ac.uk
Sure, it was the model that was ‘reckless.’
OpenAI says it's going to retire GPT-4o. Again.

This time, the move to sunset 4o comes amid a wave of lawsuits alleging that the sycophantic model pushed users into delusional and suicidal spirals, upending lives and causing psychological harm, self-injury, and death.

futurism.com/artificial-i...
Amid Lawsuits, OpenAI Says It Will Retire "Reckless" Model Linked to Deaths
OpenAI will retire GPT-4o, an especially sycophantic version of its chatbot linked in lawsuits to multiple deaths, next month.
futurism.com

This is no way to live.

Reposted by Woodrow Hartzog

😬
1. The thing about science that these jokers don't understand is that science cannot be vibe-coded.

Whatever its flaws, the point with vibe coding is that you're trying to quickly make something that sorta works, where you can immediately sorta see if it sorta works and then sorta use it.
“The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors.

It’s vibe coding, but for science.”
OpenAI’s latest product lets you vibe code science
Prism is a ChatGPT-powered text editor that automates much of the work involved in writing scientific papers.
www.technologyreview.com

Just saw this. It's fantastic, James. Thank you and it's helpful not just for juniors!

Reposted by Woodrow Hartzog

The UK is planning to drastically increase its use of live facial recognition systems—upping the number of facial recognition vans from 10 to 50
Facial recognition technology to be rolled out nationally and police to also get AI support
The home secretary has set out the government's long-awaited reforms to policing in England and Wales, which aim to reduce the amount of time officers will spend behind their desks - instead of being ...
news.sky.com
The @usdot.bsky.social, which oversees the safety of airplanes, cars and pipelines, plans to use Google Gemini to draft new regulations. “We don’t need the perfect rule,” said DOT’s top lawyer. “We want good enough.”
Government by AI? Trump Administration Plans to Write Regulations Using Artificial Intelligence
The Transportation Department, which oversees the safety of airplanes, cars and pipelines, plans to use Google Gemini to draft new regulations. “We don’t need the perfect rule,” said DOT’s top lawyer....
www.propublica.org

NOT TODAY, SATAN.
Today in luxury surveillance.
Your house knows it’s you: inside SF’s era of sentient doors and psychic windows
SF smart homes use facial recognition doors and AI security.
sfstandard.com

Reposted by Woodrow Hartzog

The real AI extinction threat, and far more imminent — the extinction of society. Yes, “absolutely everyone should read.” Via @hartzog.bsky.social and @jessicasilbey.bsky.social, via @garymarcus.bsky.social. open.substack.com/pub/garymarc...
How Generative AI is destroying society
An astonishingly lucid new paper that should be read by all
open.substack.com

Reposted by Woodrow Hartzog

@hartzog.bsky.social & @jessicasilbey.bsky.social argue that GenAI, by its very design, undercuts the institutions that are core to democracy. It's "anathema to the kind of cooperation, transparency, accountability, and evolution that give vital institutions their purpose and sustainability."
download.ssrn.com
“…the endgame is to record and analyze everything in your life, and that's not hyperbole.”
CES 2026 laid out a Black Mirror future of wearable AI that's always listening, watching, ready to help, and 'knows everything about you.' I'm not enthusiastic
With new AI assistants like Lenovo's Qira, tech brands are laying out the helpful-surveillance endgame that started with smart glasses.
www.androidcentral.com

Reposted by Woodrow Hartzog

This new paper by @hartzog.bsky.social and @jessicasilbey.bsky.social makes "one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions."

Grim but important reading:
papers.ssrn.com/sol3/papers....
How AI Destroys Institutions
Civic institutions—the rule of law, universities, and a free press—are the backbone of democratic life. They are the mechanisms through which complex societies
papers.ssrn.com

Reposted by Woodrow Hartzog

Ads are starting to show up in chatbot conversations. It's easy to imagine how this "could turn unwelcome or creepy" in exchanges "that feel like intimate personal spaces and that also save gobs of your private data." www.washingtonpost.com/technology/2...
Analysis | Here comes the advertising in AI chatbots
Can you trust AI advice if it comes with a promotional pitch for soda?
www.washingtonpost.com

Reposted by Woodrow Hartzog

Quite. Resharing my related piece from yesterday >>>

"The fact that Grok can be used to defile images of women and children is not an unfortunate accident of technological progress, but the direct consequence of decision-making, and of an ideology"

www.prospectmagazine.co.uk/ideas/techno...

Reposted by Woodrow Hartzog

“Women and girls are far more reluctant to use AI. This should be no surprise to any of us. Women don’t see this as exciting new technology, but as simply new ways to harass and abuse us and try and push us offline.”
Use of AI to harm women has only just begun, experts warn
While Grok has introduced belated safeguards to prevent sexualised AI imagery, other tools have far fewer limits
www.theguardian.com
An invaluable new report. A "premortem" is exactly what we need.
The risks of AI in schools outweigh the benefits, report says
A new report warns that AI poses a serious threat to children's cognitive development and emotional well-being.
www.npr.org