Sandra Wachter
swachter.bsky.social
Professor of Technology and Regulation, Oxford Internet Institute, University of Oxford https://tinyurl.com/3rkmbmsf

Humboldt Professor of Technology & Regulation, Hasso Plattner Institute https://tinyurl.com/47rkrt6c

Governance of Emerging Technologies
Pinned
My new work on AI:

Limitations & Loopholes in the EU AI Act & AI Liability Directives tinyurl.com/277c5xpe

Do large language models have a legal duty to tell the truth? tinyurl.com/3kzs777b

To curb hallucinations & protect science, we must use LLMs as zero-shot translators tinyurl.com/44m3h2p2
Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond | Yale Journal of Law & Technology
tinyurl.com
Reposted by Sandra Wachter
⚠️ AI is seen by many as a promise of salvation for humanity. But what if this technology of the future hides authoritarian fantasies of power behind its shiny facade? Join this keynote by Rainer Mühlhoff (Uni Osnabrück) 👉 Register now: buff.ly/DNNizzn
November 4, 2025 at 3:01 PM
Super excited to see my work on GenAI and subtle hallucinations, or what we term "careless speech", cited in the @coe.int report. We need to think about the cumulative, long-term risks of "careless speech" to science, education, media & shared social truth in democratic societies.

tinyurl.com/44pjvrjp
The human line: safeguarding rights and democracy in the AI era
Strasbourg 20/10/2025
tinyurl.com
October 27, 2025 at 9:39 AM
Reposted by Sandra Wachter
So that's teaching wrapped up for another year, next class start of March. Marking then writing then Kenya then writing... Will the bubble burst while I am drafting?
October 23, 2025 at 8:05 AM
Reposted by Sandra Wachter
“Medical staff did not give her any food, water, or pain medication for several hours. Much later that evening, after a significant loss of blood, Lucia was transported to an emergency room approximately an hour away, with her arms and legs shackled.” www.nbcnews.com/news/us-news...
Pregnant women describe miscarrying and bleeding out while in ICE custody, advocates say
The ACLU and other groups are pressing for ICE to identify and release all pregnant women in custody and to stop detaining anyone known to be pregnant, postpartum or nursing.
www.nbcnews.com
October 23, 2025 at 2:39 AM
Reposted by Sandra Wachter
EPA scientists linked PFNA with developmental, liver and reproductive harms.

Their final report was ready in mid-April, according to an internal document reviewed by ProPublica, but it has yet to be released by the Trump administration.

By @fastlerner.bsky.social
Scientists Completed a Toxicity Report on This Forever Chemical. The EPA Hasn’t Released It.
Agency scientists found that PFNA could cause developmental, liver and reproductive harms. Their final report was ready in mid-April, according to an internal document reviewed by ProPublica, but the ...
www.propublica.org
October 22, 2025 at 4:26 PM
Reposted by Sandra Wachter
Super excited to see my work on the dangers of “careless speech”, subtle hallucinations & GenAI for science, academia & education or any areas where truth & detail matter w/ @bmittelstadt.bsky.social @cruss.bsky.social ft @elsevierconnect.bsky.social Mitch Leslie
tinyurl.com/4ms2mkub
Scientists Increasingly Using AI to Help Write Papers—for Better or Worse
tinyurl.com
October 22, 2025 at 6:26 AM
Reposted by Sandra Wachter
Prof @swachter.bsky.social @oii.ox.ac.uk comments about the worrying consequences that can arise if people are more likely to engage in unethical behaviour when using AI.

Read @the-independent.com article: ⬇️

www.independent.co.uk/news/uk/home...
Why AI could make people more likely to lie
A new study has revealed that people feel much more comfortable being deceitful when using AI
www.independent.co.uk
October 16, 2025 at 3:22 PM
Reposted by Sandra Wachter
“Concerns over an AI bubble bursting have grown lately, with analysts recently finding that it’s 17 times the size of the dotcom-era bubble and four times bigger than the 2008 financial crisis.”

Hang onto your butts. This “correction” is gonna hurt.
futurism.com/artificial-i...
Bank of England Warns of Impending AI Disaster
The Bank of England has sounded the alarm, warning of an intensifying risk of a "sudden correction" due to an AI spending frenzy.
futurism.com
October 10, 2025 at 3:45 AM
Interesting work!
September 20, 2025 at 5:37 AM
Reposted by Sandra Wachter
Why AI could make people more likely to lie

Coverage of our recent paper by The Independent, with nice commentary by @swachter.bsky.social

www.independent.co.uk/news/uk/home...
Why AI could make people more likely to lie
A new study has revealed that people feel much more comfortable being deceitful when using AI
www.independent.co.uk
September 18, 2025 at 4:38 PM
Reposted by Sandra Wachter
LLMs produce responses that are plausible but contain factual inaccuracies. It's time for accountability! Precedents already establish that companies are liable for the answers they provide, e.g. the 2013 German Google case. Thanks @financialtimes.com @johnthornhill.bsky.social for featuring my work on.ft.com/46deNjr
How chatbots are changing the internet
As artificial and human intelligence becomes harder to tell apart, do we need new rules of engagement?
on.ft.com
September 15, 2025 at 5:06 AM
Reposted by Sandra Wachter
It is unsurprising to me that models produce different results, but that doesn't make the harm go away.

GenAI is a popular tool for people to inform themselves; tech companies have a responsibility to ensure that their content is not harmful. With big tech comes big responsibility tinyurl.com/3zwwnr7y
AI models are struggling to identify hate speech, study finds
A new study has found that AI content moderators are evaluating statements of hate speech differently which is a “critical issue for the public”, according to the researcher
tinyurl.com
September 17, 2025 at 5:04 AM
Thanks to the Independent & Harriette Boucher for including me, @oii.ox.ac.uk @socsci.ox.ac.uk
It is unsurprising to me that models produce different results, but that doesn't make the harm go away.

GenAI is a popular tool for people to inform themselves; tech companies have a responsibility to ensure that their content is not harmful. With big tech comes big responsibility tinyurl.com/3zwwnr7y
AI models are struggling to identify hate speech, study finds
A new study has found that AI content moderators are evaluating statements of hate speech differently which is a “critical issue for the public”, according to the researcher
tinyurl.com
September 17, 2025 at 5:06 AM
Reposted by Sandra Wachter
I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/
Recruitment
uqtmiller.github.io
September 16, 2025 at 4:34 AM
Reposted by Sandra Wachter
New! Prof @swachter.bsky.social, @oii.ox.ac.uk explains how AI chatbots don't always speak the truth and why we all need to be more vigilant in distinguishing fact from fiction. Read the full @financialtimes.com article by @johnthornhill.bsky.social: bit.ly/47KQhHA.
September 15, 2025 at 3:18 PM
Reposted by Sandra Wachter
One person tried to get on a plane with a shoe bomb and we all had to take our shoes off at airports for decades

We've had YEARS of reports on generative "AI" programs telling people to self-harm, leave their spouses, and engage in other dangerous behavior - without repercussions

Shut it down
I got the complaint in the horrific OpenAI self-harm case that the NY Times reported on today

This is way way worse even than the NYT article makes it out to be

OpenAI absolutely deserves to be run out of business
August 27, 2025 at 5:52 AM
time to implement it into healthcare systems www.reddit.com/r/ChatGPT/co...
From the ChatGPT community on Reddit: ChatGPT asked if I wanted a diagram of what’s going on inside my pregnant belly.
Explore this post and more from the ChatGPT community
www.reddit.com
August 26, 2025 at 5:40 AM
Reposted by Sandra Wachter
ICYMI: The OII's @swachter.bsky.social was quoted in @wired.com's coverage of AI-generated YouTube videos "rage baiting" viewers:

www.wired.com/story/cheapf...
August 21, 2025 at 11:04 AM