Stanford CIS
@stanfordcis.bsky.social
Stanford Center for Internet & Society. See also @vanschewick.bsky.social
Silicon Flatirons hosts Professor Harry Surden and CIS Affiliate Scholar David Levine today on AI's impact on legal work: enhancing efficiency while raising questions about hiring and training junior lawyers. Panel discussion follows with local experts siliconflatirons.org/events/the-f...
November 10, 2025 at 4:57 PM
Austrian/German NGOs filed a complaint against Deutsche Telekom for creating paid fast lanes, violating EU net neutrality. @vanschewick.bsky.social says ISPs can't treat traffic differently for commercial reasons. euobserver.com/digital/ar5d...
Deutsche Telekom case shines light on 'two-speed' internet
A group of NGOs have filed a complaint against Deutsche Telekom over practices they see violating the EU’s net neutrality laws.
euobserver.com
November 7, 2025 at 11:35 PM
Micromobility isn't new—bikes, scooters & skates have fought for street space for 100+ years. US laws remain a patchwork mess, classifying devices inconsistently. CIS Affiliate @bwalkersmith.bsky.social writes in his latest post cyberlaw.stanford.edu/blog/2025/11...
Micromobility Vehicles in the Park
“Micromobility” refers to a diverse set of transportation modes that, at least on the ground, fall somewhere between traveling by foot and traveling by car: “bicycles, scooters, electric-assist bicycl...
cyberlaw.stanford.edu
November 6, 2025 at 4:59 PM
"It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI" says @riana.bsky.social in latest @thenation.com article: Our Racist, Terrifying Deepfake Future Is Here www.thenation.com/article/soci...
Our Racist, Terrifying Deepfake Future Is Here
A faked viral video of a white CEO shoplifting is one thing. What happens when an AI-generated video incriminates a Black suspect? That’s coming, and we’re completely unprepared.
www.thenation.com
November 5, 2025 at 1:56 PM
"It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI" says @riana.bsky.social in latest @thenation.com article: Our Racist, Terrifying Deepfake Future Is Here www.thenation.com/article/soci...
CIS Affiliate @daphnek.bsky.social examines three researcher categories under the DSA: vetted academics, public data collectors, and everyone else. Many valuable projects fall outside DSA protections, facing legal ambiguity and risks from the AI data wars.
www.techpolicy.press/determining-...
Determining Which Researchers Can Collect Public Data Under the DSA | TechPolicy.Press
The DSA opens important opportunities for researchers collecting publicly available data, but leaves key questions unresolved, writes Daphne Keller.
www.techpolicy.press
October 30, 2025 at 2:59 PM
CIS Affiliate @rcalo.bsky.social argues law should proactively shape tech rather than just react to it. His book proposes a methodical approach: define tech carefully, assess impacts, analyze legal implications, and recommend solutions www.techpolicy.press/ryan-calo-wa...
Ryan Calo Wants to Change the Relationship Between Law and Technology | TechPolicy.Press
Calo is the author of Law and Technology: A Methodical Approach, published by Oxford University Press.
www.techpolicy.press
October 29, 2025 at 3:00 PM
BU's @morganweiland.bsky.social discusses the Communication Research Center's (CRC) survey showing 74% of Americans oppose government censorship of media, despite Trump admin pressure on ABC over Kimmel. The public supports the First Amendment across political lines. sites.bu.edu/crc/2025/10/...
Letter from the Director: October 2025 | Communication Research Center
sites.bu.edu
October 28, 2025 at 2:15 PM
OpenAI received its first known warrant seeking ChatGPT user data in a child exploitation case. @riana.bsky.social warns this opens the door to "reverse prompt warrants" like Google faced. AI companies must limit the data they collect on their users cyberlaw.stanford.edu/blog/2025/10...
Eight (or so) Questions to Ask about the ChatGPT Warrant
Earlier this week, the indefatigable Thomas Brewster at Forbes, a journalist who’s been covering the digital surveillance beat for years, reported on a search warrant to OpenAI seeking to unmask a par...
cyberlaw.stanford.edu
October 27, 2025 at 3:06 PM
Recent upheavals at X and Meta stem from oligarchic ownership by Musk and Zuckerberg, who directly control content policies. CIS Affiliate @pjleerssen.bsky.social examines how these moguls influence digital governance through ideological or economic motives.
journals.sagepub.com/doi/10.1177/...
Sage Journals: Discover world-class research
Subscription and open access journals from Sage, the world's leading independent academic publisher.
journals.sagepub.com
October 24, 2025 at 4:18 PM
@hartzog.bsky.social and @daniel-solove.bsky.social explore The Great Scrape: how AI's massive data scraping violates privacy principles like fairness, consent, and transparency. Despite scrapers treating public data as free, privacy law protects it. www.californialawreview.org/print/great-...
The Great Scrape: The Clash Between Scraping and Privacy — California Law Review
Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping”—the automated extraction of large amounts of data from the internet. A great deal of scraped dat...
www.californialawreview.org
October 22, 2025 at 5:01 PM
How do we protect young people online without sacrificing privacy? Justice Shannon talks with @stanfordhai.bsky.social fellow @kingjen.bsky.social about age assurance and verification practices. Essential listening for policymakers and technologists www.ilpfoundry.us/podcast/s6e2...
S6E2: Can Age Assurance Respect Our Privacy? - The Foundry
How do we protect young people online without sacrificing privacy and autonomy? In our latest episode of the Tech Policy Grind, Justice Shannon sits down with Dr. Jennifer King, Privacy […]
www.ilpfoundry.us
October 21, 2025 at 8:41 PM
Privacy defies a single definition, per the debate between @daniel-solove.bsky.social's taxonomy view & @rcalo.bsky.social/Angel's critique. Privacy pros face uncertainty in roles/budgets. Impact matters more than definitions. By @chuckcosson.bsky.social
Defining privacy — An academic debate that's not just academic
REI's Chuck Cosson explores the debate among privacy academics on just what "privacy" means.
iapp.org
October 20, 2025 at 1:02 PM
@kingjen.bsky.social & her Stanford team found AI developers' privacy policies concerning: long data retention, training on children's data, and poor transparency. Users should carefully consider what they share w/ AI chatbots and opt out of data training if possible hai.stanford.edu/news/be-care...
Be Careful What You Tell Your AI Chatbot | Stanford HAI
A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.
hai.stanford.edu
October 17, 2025 at 12:20 PM
CIS Affiliate @riana.bsky.social analyzed 114 US cases from the AI Hallucination Cases database: 90% involve solo/small firms, 56% are plaintiffs, ChatGPT most common. Real issue: majority are pro se litigants who rely on AI most but get failed by it.
cyberlaw.stanford.edu/blog/2025/10...
Who’s Submitting AI-Tainted Filings in Court?
It seems like every day brings another news story about a lawyer caught unwittingly submitting a court filing that cites nonexistent cases hallucinated by AI. The problem persists despite courts’ stan...
cyberlaw.stanford.edu
October 16, 2025 at 3:57 PM
NHTSA launched its sixth Tesla self-driving probe after incidents including red-light violations and crashes. But regulators can only react to problems, not approve tech beforehand. "I call it regulatory whack-a-mole," says CIS Affiliate BW Smith
edition.cnn.com/2025/10/13/b...
Tesla’s self-driving tech keeps being investigated for safety violations. So why is it allowed? | CNN Business
Federal safety regulators are once again looking into Tesla’s self-driving mode, the latest in a seemingly endless stream of investigations into the safety of the technology.
edition.cnn.com
October 15, 2025 at 2:02 PM
Europe's Digital Services Act lets users challenge platform moderation decisions through independent certified bodies. Article 21 dispute settlements cover takedowns, suspensions & more. @pjleerssen.bsky.social interviewed by @techpolicypress.bsky.social
www.techpolicy.press/what-we-can-...
What We Can Learn from the First Digital Services Act Out-of-Court Dispute Settlements? | TechPolicy.Press
Ramsha Jahangir spoke to two experts to unpack what the early wave of disputes tells us about how the out-of-court system is working under the DSA.
www.techpolicy.press
October 14, 2025 at 1:23 PM
Reposted by Stanford CIS
Join us tomorrow for Prof. @hartzog.bsky.social's (@bostonu.bsky.social Law) talk: "Against AI Half Measures"
Tuesday, October 14, 2025 - 12:10PM-1:30PM - SLB 128
DM for Zoom Details
Cosponsored by YJoLT
October 13, 2025 at 10:29 PM
New paper by CIS Affiliate Giancarlo Frosio: Automated copyright enforcement needs rights-driven governance to avoid "algorithmic enclosure." License first, block only clear infringements, require human oversight & transparency.
papers.ssrn.com/sol3/papers....
Algorithmic Enclosure? Reclaiming a Human-Centred Governance Model for Online Creativity
This chapter argues that the real risk of 'algorithmic enclosure' arises not from using automation, but from using it without rights-driven, human-centred gover
papers.ssrn.com
October 13, 2025 at 12:18 PM
CIS Affiliate Giancarlo Frosio "offers a clear, comparative map of algorithmic enforcement in the IP domain with emphasis on copyright and trade marks online" in his latest Constitutionalising Algorithmic Enforcement paper papers.ssrn.com/sol3/papers....
Constitutionalising Algorithmic Enforcement
This chapter offers a clear, comparative map of algorithmic enforcement in the IP domain, with emphasis on copyright and trade marks online. In short, the ch
papers.ssrn.com
October 10, 2025 at 1:21 PM
Reposted by Stanford CIS
Speakers include @genevievelakier.bsky.social, Derek Bambauer, and moderator @daphnek.bsky.social. They’ll discuss post-Vullo developments, emerging vectors of influence, and how to balance government communication with protection from coercion. Register to attend:
The Future of Speech Online 2025: The Age of Constitutional Evasion: Jawboning and Other Forms of Government Pressure to Control Private Speech
Dates: Tuesday and Wednesday, October 28-29, 2025 Times: Day One: noon-3:00 pm ET / 9:00 am-noon PT, Day Two: noon-3:00 pm ET / 9:00 am-noon PT Governments have always worked to shape the speech of private parties and these efforts have always threatened overcensorship of dissent and governmentally disfavored viewpoints. The current Administration in the […]
cdt.org
October 8, 2025 at 5:30 PM
Omer Tene & Lars Oleson explore privacy by design for facial recognition in their latest @iapp.bsky.social post. They suggest the technology shouldn't leak personal information any more than your dog, who also recognizes you but doesn't sell out your privacy iapp.org/news/a/recog...
IAPP
iapp.org
October 8, 2025 at 3:47 PM
Richard Forno at @us.theconversation.com: The Oct 1 shutdown furloughed 2/3 of CISA staff as a key cyber threat-sharing law expired. The agency already lost ~1,000 employees in 2025 and faces big 2026 cuts—amid active attacks like Salt Typhoon theconversation.com/federal-shut...
Federal shutdown deals blow to already hobbled cybersecurity agency
The triple whammy of deep staff cuts, shutdown furloughs and the expiration of an information-sharing law leaves national cybersecurity in a perilous state.
theconversation.com
October 7, 2025 at 8:10 PM
After 6 months, an Indiana federal court granted @riana.bsky.social's motion to unseal the search warrants for fired Indiana University cybersecurity professor XiaoFeng Wang's homes. The unsealed warrants reveal new details about this strange and perplexing situation www.ipm.org/news/2025-10...
Newly unsealed warrant shows search of fired IU professor's homes part of federal funding fraud investigation
The records show that U.S. agents searching Wang's homes were looking for funding applications, including drafts, and submissions to the National Science Foundation. An inventory of 42 seized items li...
www.ipm.org
October 6, 2025 at 7:28 PM
New research from @daniellecitron.bsky.social @ariezra.bsky.social examines growing concerns around content moderators — people tasked with keeping social media free of explicit content. Their upcoming paper adds to mounting warnings about this challenging job www.techpolicy.press/is-trust-saf...
Is Trust & Safety Dead, or Just Evolving? | TechPolicy.Press
Tech Policy Press contributing editor Dean Jackson considers the conclusions of a new paper by Danielle Keats Citron and Ari Ezra Waldman.
www.techpolicy.press
September 29, 2025 at 1:36 PM
“Generative AI models offered by major AI companies are used by tens of millions of people every day, and we should encourage them to make their models as safe as they possibly can” says @stanfordhai.bsky.social Tech Policy Fellow @riana.bsky.social via @techpolicypress.bsky.social bit.ly/46E9oRn
How Congress Could Stifle The Onslaught of AI-Generated Child Sexual Abuse Material | TechPolicy.Press
Cleaning training data might not be enough to hinder a model from creating CSAM, writes Jasmine Mithani.
www.techpolicy.press
September 26, 2025 at 4:25 PM