Stanford CIS
@stanfordcis.bsky.social
Stanford Center for Internet & Society. See also @vanschewick.bsky.social
AI-hallucinated case citations have exploded from novelty to major court burden—712 decisions globally, 90% of them in 2025, per @stanfordhai.bsky.social's @riana.bsky.social. Judges say fake cases waste resources. Sanctions are rising: one lawyer fined $15.5K, one firm $59.5K news.bloomberglaw.com/legal-ops-an...
AI-Faked Cases Become Core Issue Irritating Overworked Judges
AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.
news.bloomberglaw.com
January 5, 2026 at 7:38 PM
Stanford CIS Affiliate @riana.bsky.social discusses her research on AI-generated CSAM—examining how "nudify" apps targeting students have created new harms, and how educators, platforms, law enforcement, and legislators are responding. Recorded Dec 3, 2025.
youtu.be/ewS6RacTWGI?...
Riana Pfefferkorn: Student Misuse of AI-Powered “Undress” Apps
YouTube video by Stanford HAI
youtu.be
December 17, 2025 at 6:44 PM
Join @daniel-solove.bsky.social and @rcalo.bsky.social as they discuss Calo's new book "Law and Technology: A Methodical Approach," exploring how law can channel technology toward human flourishing. Wed, Dec 17, 2 PM ET.
teachprivacy.com/video-dealin...
Video: Dealing with Technology's Hazards
Dealing with Technology's Hazards Wed, Dec 17, at 2 PM ET Daniel Solove and Ryan Calo (U. Washington Law) will discuss Calo’s new book, Law and
teachprivacy.com
December 16, 2025 at 12:53 PM
Trump's AI executive order aims to preempt state regulation, but exempts child safety laws. States retain authority over AI-CSAM and chatbot protections despite federal pressure.
Analysis by @riana.bsky.social: cyberlaw.stanford.edu/blog/2025/12...
Well, At Least the Anti-States’ Rights AI EO Spares AI-CSAM Laws
On December 11, 2025, President Trump signed an executive order (EO) that purports to deprive states of the ability to regulate artificial intelligence (AI) – to the modest extent possible given the l...
cyberlaw.stanford.edu
December 15, 2025 at 2:43 PM
CIS Affiliate Giancarlo Frosio argues Munich court's GEMA v OpenAI ruling misunderstands AI training by treating memorization as reproduction. The decision conflates training stages and ignores that model weights are lossy compression, not copies. legalblogs.wolterskluwer.com/copyright-bl...
Copyright in Formaldehyde: How GEMA v OpenAI Freezes Doctrine and Chills AI – Part 1
legalblogs.wolterskluwer.com
December 11, 2025 at 4:52 PM
CIS Affiliate Christopher Sprigman argues the Supreme Court's Warhol decision opens the door for antitrust competition analysis in copyright fair use cases, bridging two related legal fields. www.law.nyu.edu/news/ideas/c...
Christopher Jon Sprigman explains what copyright can learn from its antitrust cousin
When the Supreme Court ruled in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith (2023) that the legendary artist’s transformation of a photographer’s shot of the musician Prince didn’t c...
www.law.nyu.edu
December 10, 2025 at 3:20 PM
As people worldwide worry about data collection, this film explores 25 years of privacy evolution and the profession that emerged to protect it. Includes CIS Affiliate @hartzog.bsky.social
youtu.be/EqZOzwVaZp8?...
Privacy People (full documentary)
YouTube video by B Team Films
youtu.be
December 9, 2025 at 2:20 PM
Latest paper from @hartzog.bsky.social, Neil M. Richards & @jordfran.bsky.social "Privacy's Autonomy Thicket: Disentangling Choice, Consent, and Control" argues "choice," "consent," and "control" are conflated in privacy law, weakening individual autonomy papers.ssrn.com/sol3/papers....
Privacy's Autonomy Thicket: Disentangling Choice, Consent, and Control
When it comes to talking about autonomy, privacy law could use a little clarity. Its discourse uses terms like “choice,” “consent,” and “control” to evoke au
papers.ssrn.com
December 1, 2025 at 6:43 PM
CIS Affiliate @kingjen.bsky.social testified to Congress on AI chatbot privacy risks, highlighting how users share sensitive health data with unregulated platforms. She urges action on: data privacy design, transparency in AI training, and safety metrics. hai.stanford.edu/policy/jen-k...
November 19, 2025 at 2:35 PM
Former White House attorney Ty Cobb warns of rule of law erosion in this @hearsayculture.bsky.social interview with Dave Levine. He discusses threats to judiciary independence and federal agencies, urging lawyers to serve as ethical guardians. youtu.be/Y-k7BiP1D0Y?...
Ty Cobb | Hearsay Culture Radio | October 15, 2025 | KZSU-FM (Stanford)
YouTube video by Hearsay Culture Network
youtu.be
November 13, 2025 at 9:45 PM
As AI shapes what we see and believe, truth is under strain. @daniellecitron.bsky.social explores accountability in the AI age—how data systems amplify inequality and distort trust. See her at Datapalooza 11/14 hosted by @uvadatascience.bsky.social datascience.virginia.edu/events/datap...
Datapalooza 2025: Truth and Accountability in the Age of AI — School of Data Science
The UVA School of Data Science presents Datapalooza 2025: Truth and Accountability in the Age of AI, a signature fall event open to all.
datascience.virginia.edu
November 11, 2025 at 2:08 PM
Silicon Flatirons hosts Professor Harry Surden and CIS Affiliate Scholar David Levine today on AI's impact on legal work: enhancing efficiency while raising questions about hiring and training junior lawyers. A panel discussion with local experts follows. siliconflatirons.org/events/the-f...
November 10, 2025 at 4:57 PM
Austrian/German NGOs filed a complaint against Deutsche Telekom for creating paid fast lanes, violating EU net neutrality. @vanschewick.bsky.social says ISPs can't treat traffic differently for commercial reasons. euobserver.com/digital/ar5d...
Deutsche Telekom case shines light on 'two-speed' internet
A group of NGOs have filed a complaint against Deutsche Telekom over practices they see violating the EU’s net neutrality laws.
euobserver.com
November 7, 2025 at 11:35 PM
Micromobility isn't new—bikes, scooters & skates have fought for street space for 100+ years. US laws remain a patchwork mess, classifying devices inconsistently. CIS Affiliate @bwalkersmith.bsky.social writes in his latest post cyberlaw.stanford.edu/blog/2025/11...
Micromobility Vehicles in the Park
“Micromobility” refers to a diverse set of transportation modes that, at least on the ground, fall somewhere between traveling by foot and traveling by car: “bicycles, scooters, electric-assist bicycl...
cyberlaw.stanford.edu
November 6, 2025 at 4:59 PM
"It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI" says @riana.bsky.social in latest @thenation.com article: Our Racist, Terrifying Deepfake Future Is Here www.thenation.com/article/soci...
Our Racist, Terrifying Deepfake Future Is Here
A faked viral video of a white CEO shoplifting is one thing. What happens when an AI-generated video incriminates a Black suspect? That’s coming, and we’re completely unprepared.
www.thenation.com
November 5, 2025 at 1:56 PM
CIS Affiliate @daphnek.bsky.social examines three researcher categories under the DSA: vetted academics, public data collectors, and everyone else. Many valuable projects fall outside DSA protections, facing legal ambiguity and risks from the AI data wars.
www.techpolicy.press/determining-...
Determining Which Researchers Can Collect Public Data Under the DSA | TechPolicy.Press
The DSA opens important opportunities for researchers collecting publicly available data, but leaves key questions unresolved, writes Daphne Keller.
www.techpolicy.press
October 30, 2025 at 2:59 PM
CIS Affiliate @rcalo.bsky.social argues law should proactively shape tech rather than just react to it. His book proposes a methodical approach: define tech carefully, assess impacts, analyze legal implications, and recommend solutions www.techpolicy.press/ryan-calo-wa...
Ryan Calo Wants to Change the Relationship Between Law and Technology | TechPolicy.Press
Calo is the author of Law and Technology: A Methodical Approach, published by Oxford University Press.
www.techpolicy.press
October 29, 2025 at 3:00 PM
BU's @morganweiland.bsky.social discusses the Communication Research Center's (CRC) survey showing 74% of Americans oppose government censorship of media, despite Trump admin pressure on ABC over Kimmel. The public supports the First Amendment across political lines. sites.bu.edu/crc/2025/10/...
Letter from the Director: October 2025 | Communication Research Center
sites.bu.edu
October 28, 2025 at 2:15 PM
OpenAI received its first known warrant seeking ChatGPT user data in a child exploitation case. @riana.bsky.social warns this opens the door to "reverse prompt warrants" like those Google has faced. AI companies must limit the data they collect on their users cyberlaw.stanford.edu/blog/2025/10...
Eight (or so) Questions to Ask about the ChatGPT Warrant
Earlier this week, the indefatigable Thomas Brewster at Forbes, a journalist who’s been covering the digital surveillance beat for years, reported on a search warrant to OpenAI seeking to unmask a par...
cyberlaw.stanford.edu
October 27, 2025 at 3:06 PM
Recent upheavals at X and Meta stem from oligarchic ownership by Musk and Zuckerberg, who directly control content policies. CIS Affiliate @pjleerssen.bsky.social examines how these moguls influence digital governance through ideological or economic motives.
journals.sagepub.com/doi/10.1177/...
Sage Journals: Discover world-class research
Subscription and open access journals from Sage, the world's leading independent academic publisher.
journals.sagepub.com
October 24, 2025 at 4:18 PM
@hartzog.bsky.social and @daniel-solove.bsky.social explore The Great Scrape: how AI's massive data scraping violates privacy principles like fairness, consent, and transparency. Though scrapers treat public data as free for the taking, privacy law protects it. www.californialawreview.org/print/great-...
The Great Scrape: The Clash Between Scraping and Privacy — California Law Review
Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping”—the automated extraction of large amounts of data from the internet. A great deal of scraped dat...
www.californialawreview.org
October 22, 2025 at 5:01 PM
How do we protect young people online without sacrificing privacy? Justice Shannon talks with @stanfordhai.bsky.social fellow @kingjen.bsky.social about age assurance and verification practices. Essential listening for policymakers and technologists www.ilpfoundry.us/podcast/s6e2...
S6E2: Can Age Assurance Respect Our Privacy? - The Foundry
How do we protect young people online without sacrificing privacy and autonomy? In our latest episode of the Tech Policy Grind, Justice Shannon sits down with Dr. Jennifer King, Privacy […]
www.ilpfoundry.us
October 21, 2025 at 8:41 PM
Privacy defies a single definition, per the debate between @daniel-solove.bsky.social's taxonomy view and @rcalo.bsky.social/Angel's critique. Privacy pros face uncertainty in roles and budgets; impact matters more than definitions. By @chuckcosson.bsky.social
Defining privacy — An academic debate that's not just academic
REI's Chuck Cosson explores the debate among privacy academics on just what "privacy" means.
iapp.org
October 20, 2025 at 1:02 PM
@kingjen.bsky.social & her Stanford team found AI developers' privacy policies concerning: long data retention, training on children's data, and poor transparency. Users should carefully consider what they share w/ AI chatbots and opt out of data training if possible hai.stanford.edu/news/be-care...
Be Careful What You Tell Your AI Chatbot | Stanford HAI
A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.
hai.stanford.edu
October 17, 2025 at 12:20 PM
CIS Affiliate @riana.bsky.social analyzed 114 US cases from the AI Hallucination Cases database: 90% involve solo/small firms, 56% are plaintiffs, and ChatGPT is the most common tool. The real issue: the majority are pro se litigants, who rely on AI most but are failed by it.
cyberlaw.stanford.edu/blog/2025/10...
Who’s Submitting AI-Tainted Filings in Court?
It seems like every day brings another news story about a lawyer caught unwittingly submitting a court filing that cites nonexistent cases hallucinated by AI. The problem persists despite courts’ stan...
cyberlaw.stanford.edu
October 16, 2025 at 3:57 PM