Laura Bartley
laurabartley13.bsky.social
Tech policy & digital safety advocate | Currently mastering the art of dodging tourists on Amsterdam bike lanes 🚲 🇳🇱
Formerly T&S Policy at TikTok, Analyst at Global Disinformation Index, Uni of Glasgow, Dublin City Univ, Washington Ireland Programme
Reposted by Laura Bartley
Today on @indicator.media: A first-of-its-kind audit of AI labels on major social platforms.
Tech platforms promised to label AI content. They're not delivering.
An Indicator audit of hundreds of synthetic images and videos reveals that platforms frequently fail to label AI content
indicator.media
October 23, 2025 at 12:45 PM
Reposted by Laura Bartley
Couldn't doom-scroll earlier today? Yeah, me neither, because of our dangerous reliance on hyperscaler clouds

#AWS us-east-1 outage took down #Signal and chunks of the internet. The culprit was too many critical operations crammed into a single AWS data center cluster in Northern Virginia.
October 20, 2025 at 2:00 PM
Reposted by Laura Bartley
The use of AI generated content in political messaging has become a feature in today’s communication ecosystem.

How often is it used in the 🇳🇱 election campaign, and by whom?

Check out colleague @favstats.eu and @meinungsfuehrer.bsky.social’s AI campaign tracker #tk2025

www.campaigntracker.nl/en/
Dashboard
www.campaigntracker.nl
October 12, 2025 at 8:38 AM
Reposted by Laura Bartley
Thread - I've been thinking a lot about Walter Lippmann’s Public Opinion lately, especially in the context of Musk’s role in dismantling the federal workforce through DOGE. It’s a perfect example of what Lippmann warned about over a century ago.
February 23, 2025 at 3:24 PM
Reposted by Laura Bartley
There's obviously not enough to worry about right now, so I thought, hey why not drop an episode about deepfakes...

Fortunately, what @samgregory.bsky.social told me did make me feel better. We discuss what technologists can do to help protect our shared reality:

open.spotify.com/episode/6HE6...
AI, Deepfakes, and How Technologists Can Help Us Trust What We See and Hear | Sam Gregory (Exec. Dir. of WITNESS)
CRAFTED. · Episode
open.spotify.com
February 12, 2025 at 3:07 PM
Reposted by Laura Bartley
This is huge in terms of the fair use piece.
February 11, 2025 at 9:20 PM
Mixed emotions from Paris today, but immensely enjoyed the rich conversations at the side events I had the chance to attend. Most focused on the power of deliberative democracy, trust and safety, the benefits of AI regulation, the potential of open source and themes of sustainability and inclusion.
February 11, 2025 at 8:23 PM
Great discussion from @f-mb.bsky.social, Catherine Régis, @ginasue.bsky.social and Célia Zolynski on their recommended actions for AI and elections at the Sorbonne.
February 10, 2025 at 8:39 PM
En route to Paris to join some of the discussions on AI safety happening at the AI Action Summit. Very happy to be returning to the City of Light almost a decade after studying at SciencesPo!
February 10, 2025 at 1:24 PM
Reposted by Laura Bartley
DeepSeek forces a rethink of the compute & energy costs of AI models. In a new paper with @sashamtl.bsky.social @strubell.bsky.social, we look at the full environmental impacts of AI – both direct and indirect – and what Jevons Paradox means for AI and climate. A thread 🧵
January 29, 2025 at 5:54 PM
Visited de Koepel in Haarlem over the weekend, a former panopticon-style prison turned cultural space (slightly strange, with the questionable inclusion of an escape room...). Stark reminder of the role of surveillance practices and their extension into our daily lives.
January 30, 2025 at 12:12 PM
Great & timely research that provides rich insights into safety issues with some open VLMs and across different languages 👇
Today, we are releasing MSTS, a new Multimodal Safety Test Suite for vision-language models!

MSTS is exciting because it tests for safety risks *created by multimodality*. Each prompt consists of a text + image that *only in combination* reveal their full unsafe meaning.

🧵
January 24, 2025 at 4:20 PM
Reposted by Laura Bartley
thereader.mitpress.mit.edu/the-staggeri... It's not one that discusses scale, but it's by far the text that has ignited the most engagement on AI and the environment among technical undergrads that I've seen
The Staggering Ecological Impacts of Computation and the Cloud
Anthropologist Steven Gonzalez Monserrate draws on five years of research and ethnographic fieldwork in server farms to illustrate some of the diverse environmental impacts of data storage.
thereader.mitpress.mit.edu
January 7, 2025 at 4:34 AM
Reposted by Laura Bartley
Setting the scene for a head-on clash between Facebook and its EU regulators (where enforcement is split between Ireland/EU Commission).
Zuckerberg says Meta plans to eliminate its fact-checking program, move content moderators to Texas to address claims of bias, raise the bar for enforcing its terms and lift certain “restrictions” on immigration and gender talk.

Huge shift in platform policy.
January 7, 2025 at 12:58 PM
Reposted by Laura Bartley
The risk assessments are coming in #DSA #commsky #eusky

Check this overview ⬇️

X, Facebook, Instagram, Google, YouTube, TikTok are there

docs.google.com/spreadsheets...
DSA: Risk Assessment & Audit Database
docs.google.com
November 28, 2024 at 5:17 PM
Had already stepped away from the noise of X/Twitter for quite some time, but on days like today, I'm seeking meaningful connection. So, hello Bluesky - here to listen, share & hopefully find thoughtful discourse.

Here's to brighter skies ahead...
November 6, 2024 at 5:41 PM
Reposted by Laura Bartley
In our latest investigation, Bellingcat researcher @koltai.bsky.social reveals the hidden network behind OpenDream, an AI art generator platform. She found several instances of child sexual abuse material being generated and featured on the site. www.bellingcat.com/news/2024/10...
AI Site OpenDream Let Users Generate CSAM
For months, OpenDream displayed synthetic child sexual abuse imagery online. It now denies responsibility for how people are using its platform.
www.bellingcat.com
October 14, 2024 at 8:26 AM
Reposted by Laura Bartley
From Toni Morrison's 1995 lecture "Racism and Fascism"
November 6, 2024 at 11:07 AM