David Sullivan
@davidsullivan.bsky.social
Tech policy, human rights, trust & safety. Raised in Brooklyn, based in Boulder. @david_msullivan elsewhere. Posts auto delete.
I'm on Germ DM 🔑
https://ger.mx/A1Y8CYx9TUtTxirjHuY_ibg6kbSF7BF5inQ7c-04zOPv#did:plc:sqd3wsj7e6w6do6i26nn62r2
Pinned
David Sullivan
@davidsullivan.bsky.social
· Oct 23
Periodic reminder that I made a #trustandsafety starter pack. Let me know about new arrivals who should be added to this list!
All of the copyright experts are here in Boulder today, with @pamelasamuelson.bsky.social giving a treatise on disruptive tech, copyright, and genAI
November 7, 2025 at 4:58 PM
Reposted by David Sullivan
📊 The #DSA data access portal is live, and VLOPs/VLOSEs have begun publishing their data catalogues.
Trying to collect the links again: docs.google.com/spreadsheets...
October 29, 2025 at 3:40 PM
looking forward to the AI and Copyright jamboree that Blake Reid is bringing to Boulder next month!
AI and the Future of Copyright Politics | Silicon Flatirons
siliconflatirons.org
October 28, 2025 at 1:34 PM
Zero regrets unsubscribing from the Pod Save cinematic universe after the 2024 election.
October 23, 2025 at 3:08 PM
Reposted by David Sullivan
Belated happy DSA Data Access Day to all who celebrate!! October 2 was the first day that national regulators could start formally reviewing researcher applications for Art. 40.4 access to platforms' internally held data. (Bc that was 3 months after the Delegated Act came out).
FAQs: DSA data access for researchers
Under article 40 of the Digital Services Act (DSA), vetted researchers will be able to request data from very large online platforms (VLOPs) and search engines (VLOSEs) to conduct research on systemic...
algorithmic-transparency.ec.europa.eu
October 13, 2025 at 3:24 PM
Reposted by David Sullivan
Today's Lawfare Daily is a @scalinglaws.bsky.social episode where @kevintfrazier.bsky.social spoke to @davidsullivan.bsky.social and Ravi Iyer about the evolution of the Trust & Safety field and its relevance to ongoing conversations about how best to govern AI. youtu.be/6irBgNfwmdc?...
October 10, 2025 at 1:31 PM
in this conversation with Kevin and Ravi, I used "clippy" as a verb
October 10, 2025 at 2:19 PM
Currently overwhelmed with UNGA humblebrags.
Remind me to deactivate my account 2 weeks after the next Davos!
October 9, 2025 at 5:04 PM
👇
an interesting note re: scaling image moderation. in the past 24 hours:
at least 500 thousand images in posts (not considering # of images, just if an image was in the post)
86 thousand avatars
500 thousand url thumbnails
58 thousand videos.
that's a lot to moderate!
October 8, 2025 at 5:07 PM
Reposted by David Sullivan
i desperately want everyone involved in the destruction of USAID to have to, at the very least, answer to the american people for the suffering and misery they have caused apnews.com/article/myan...
Starving children screaming for food as US aid cuts unleash devastation and death across Myanmar
U.S. Secretary of State Marco Rubio has repeatedly said “no one has died" because of his government’s decision to gut its foreign aid program.
apnews.com
October 8, 2025 at 2:59 PM
Reposted by David Sullivan
To build a safer AI product, where should companies focus their governance efforts?
On @lawfaremedia.org's Scaling Laws podcast DTSP ED @davidsullivan.bsky.social emphasizes how AI companies must break down the silos between AI safety and Trust & Safety teams.
www.lawfaremedia.org/article/scal...
Scaling Laws: AI Safety Meet Trust & Safety with Ravi Iyer and David Sullivan
Discussing trust & safety's relevance to artificial intelligence.
www.lawfaremedia.org
October 8, 2025 at 1:03 PM
Reposted by David Sullivan
Want to work on open source full time? The @roost.tools engineering team is starting to hatch! Come build OSS tools making a difference in Trust & Safety.
This is a fully remote role, though some schedule overlap with North American time zones is expected.
www.linkedin.com/jobs/view/43...
ROOST.tools hiring Staff Software Engineer in United States | LinkedIn
ROOST is a community effort to build scalable and resilient safety infrastructure for…
www.linkedin.com
October 7, 2025 at 10:14 PM
Reminder: *everyone* uses AI for content moderation.
Much more about how this works here:
October 7, 2025 at 7:42 PM
This was a fun conversation, thanks @kevintfrazier.bsky.social for having me on the pod!
There's a real chance for mutual gains when responsible AI and T&S folks work together to focus on safer products.
October 7, 2025 at 3:56 PM
Reposted by David Sullivan
What can AI safety learn from Trust & Safety?
@davidsullivan.bsky.social & Ravi Iyer join @kevintfrazier.bsky.social to tackle this important question.
Listen here: podcasts.apple.com/in/podcast/a...
AI Safety Meet Trust & Safety with Ravi Iyer and David Sullivan
Podcast Episode · Scaling Laws · 07/10/2025 · 47m
podcasts.apple.com
October 7, 2025 at 11:40 AM
seems like a great opportunity for the authors of the twitter files
Great piece from @dell.bsky.social at @wired.com about ICE's newest foray into social media monitoring: the agency is putting out feelers for contractors who can do on-site, open source intel 24/7 for ICE's Targeting Operations Division (TOD). 1/ www.wired.com/story/ice-so...
ICE Wants to Build Out a 24/7 Social Media Surveillance Team
Documents show that ICE plans to hire dozens of contractors to scan X, Facebook, TikTok, and other platforms to target people for deportation.
www.wired.com
October 6, 2025 at 9:02 PM
Whoops—Ohio Accidentally Excludes Most Major Porn Platforms From Anti-Porn Law
File this under “lawmakers trying to regulate tech and the internet without understanding tech and the internet."
reason.com/2025/10/06/w...
Whoops—Ohio accidentally excludes most major porn platforms from anti-porn law
Ohio lawmakers set out to block minors from viewing online porn. They messed up.
reason.com
October 6, 2025 at 7:50 PM
Reposted by David Sullivan
Whoops—Ohio Accidentally Excludes Most Major Porn Platforms From Anti-Porn Law
File this under “lawmakers trying to regulate tech and the internet without understanding tech and the internet."
reason.com/2025/10/06/w...
Whoops—Ohio accidentally excludes most major porn platforms from anti-porn law
Ohio lawmakers set out to block minors from viewing online porn. They messed up.
reason.com
October 6, 2025 at 4:34 PM
I too just read this report and can say that Mike is not exaggerating this even slightly.
It would be impressive if Ted Cruz could figure out who was President from 2018 to 2020, because he seems to think it was Biden.
He also seems to have difficulty understanding what words mean, because he thinks "monitor and correct" false statements is "censorship."
Senator Cruz Figure Out Who Was President From 2018 To 2020 Challenge; Impossible
I have a simple question for Senator Ted Cruz: Who was president in 2018? How about 2020? I ask because Cruz just released a “bombshell” report claiming that the Biden administration "converted" CISA into "the…
October 6, 2025 at 7:38 PM
i am forever grateful to be a juvenile Gen Xer, having lived through the time when finding out stuff took going to very specific places and working with tactile things.
I have been trying to explain a microfiche machine to one of my dear, brilliant, talented, but clearly too young to be alive collaborators. And it is taking the last of my soul.
“Micro…fish?? I have never heard that word in my life.” I recorded the timestamp so it can be put on my tombstone.
October 6, 2025 at 7:31 PM
Reposted by David Sullivan
Alex Givens & Karen Kornbluh make a compelling case that "AI Must not Ignore Human Rights." As they note, existing frameworks for responsible business conduct such as GNI's already exist & "leading AI companies that are not yet part of GNI could benefit from its framework and network."
AI Must Not Ignore Human Rights
Alexandra Reeve Givens & Karen Kornbluh worry that the industry is not being held to the same standards as others.
www.project-syndicate.org
October 3, 2025 at 2:13 PM
Jason Isbell, Cast Iron Skillet, 100% of the time
Name a Song that makes you Cry.
October 3, 2025 at 2:01 PM
ICYMI, highly recommend this human rights impact assessment of @wikimediafoundation.org AI/ML work by @royapak.bsky.social (w/ @farzdusa.bsky.social and david liu)
upload.wikimedia.org
October 2, 2025 at 8:32 PM
it physically hurts to see emails in my inbox that use the DoW acronym
October 2, 2025 at 8:06 PM