Critical AI Lab
@jwicriticalai.bsky.social
Research on data, algorithmic systems, and ethics @weizenbauminstitut.bsky.social
Also on YouTube (http://youtube.com/channel/UCytCD), Twitter (@JWI_CriticalAI), and Mastodon (@jwicriticalai.bsky.social)
Reposted by Critical AI Lab
Who is Really Fueling your #AI? Join us on September 17 to discuss precarization and resistance in #datawork, with @dataworkersinquiry.bsky.social, @milamiceli.bsky.social, and @superrrnetwork.bsky.social. Don't miss your chance to meet some of the shadow workforce behind AI. buff.ly/dHMHXPV
September 9, 2025 at 3:46 PM
Reposted by Critical AI Lab
Huge congrats to @milamiceli.bsky.social 🌟

This is recognition for all of us working on the Data Workers’ Inquiry project!
In times of scienticide and obscurantism in my country, I am proud to be the first Argentine scientist recognized by TIME magazine as one of the 100 most influential people in the world in the field of artificial intelligence.
time.com/collections/...
August 28, 2025 at 12:54 PM
Reposted by Critical AI Lab
The event is in-person:

📅 𝗪𝗲𝗱𝗻𝗲𝘀𝗱𝗮𝘆, 𝗦𝗲𝗽𝘁𝗲𝗺𝗯𝗲𝗿 𝟭𝟳𝘁𝗵.
⏱️ 𝟭𝟳:𝟬𝟬 – 𝟭𝟵:𝟯𝟬
📍@weizenbauminstitut.bsky.social, Hardenbergstrasse 32, in Berlin.
🌐 𝗥𝗲𝗴𝗶𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻: www.weizenbaum-institut.de/news/detail/...

For those unable to attend, we'll be live-streaming here: www.youtube.com/live/jA3wSMe...
Who is Really Fueling your AI? Precarization and Resistance in Data Work.
Join us for an evening with the hidden workforce behind AI, get insights into how workers organize against its harsh realities, and learn more about our research project, The Data Workers’ Inquiry.
www.weizenbaum-institut.de
August 27, 2025 at 10:40 AM
Reposted by Critical AI Lab
📣 𝗦𝗔𝗩𝗘 𝗧𝗛𝗘 𝗗𝗔𝗧𝗘!
We're excited to co-organize a transnational data worker assembly together with @superrrnetwork.bsky.social on September 16-17 in Berlin.
While the first day is closed-door, we’ll host a public event on the 17th.
👉 Program and registration: www.weizenbaum-institut.de/news/detail/...
August 27, 2025 at 10:40 AM
Reposted by Critical AI Lab
Thrilled to welcome 13 data workers to the transnational assembly co-hosted by @superrrnetwork.bsky.social & @dataworkersinquiry.bsky.social, and honored to give a keynote at the public event on the 17th.
Join us @weizenbauminstitut.bsky.social!
Registration: tickets.weizenbaum-institut.de/jl89e/
August 27, 2025 at 10:47 AM
Reposted by Critical AI Lab
Who is really fueling your #AI? 🤔 It's not just code & algorithms. Behind every LLM are millions of people, often in invisible roles. Join us, @dataworkersinquiry.bsky.social, @milamiceli.bsky.social, and @superrrnetwork.bsky.social on Sept. 17 to hear directly from data workers!
🔗 buff.ly/2ug32Bx
August 27, 2025 at 10:30 AM
Reposted by Critical AI Lab
@milamiceli.bsky.social and @alexhanna.bsky.social at the Weizenbaum Institute yesterday to discuss Hanna’s and @emilymbender.bsky.social’s new book “The AI Con”.
July 1, 2025 at 2:15 PM
Reposted by Critical AI Lab
🚨NEW INQUIRY!

@joan1k.bsky.social from @datalabelers.bsky.social discusses beginnings, gratitude, and partnerships in this piece.

@dataworkersinquiry.bsky.social is an active partner of the DLA, and we remain united in the fight for the rights of data workers worldwide!

data-workers.org/DLA/
Organizing Across Borders, by Joan Kinyua
The Data Labelers Association is developing mutual support structures and fighting for better working conditions. This inquiry recounts our path to founding it and acknowledges our ongoing partnership...
data-workers.org
July 1, 2025 at 3:09 PM
Reposted by Critical AI Lab
Join us next week to talk about THE #AI CON! Alex Hanna unpacks the myths and marketing that surround today’s #AI discourse, and @milamiceli.bsky.social brings in critical insights from @dataworkersinquiry.bsky.social about the invisible labor behind AI systems. June 30, 6pm 👉 buff.ly/47fMwFf
June 24, 2025 at 9:30 AM
Next Monday, we are thrilled to welcome @alexhanna.bsky.social to the @weizenbauminstitut.bsky.social for a conversation about her new book, THE AI CON: How to Fight Big Tech's Hype and Create the Future We Want (thecon.ai)

🔗Register here: tickets.weizenbaum-institut.de/sgvbf/
THE AI CON
How to Fight Big Tech's Hype and Create the Future We Want
thecon.ai
June 23, 2025 at 9:39 AM
📢NEW INQUIRY‼️

Data workers see the internet’s worst, so the rest of us don’t have to.

Former content moderator and clinical psychologist Kauna Malgwi collaborated with DWI to design a mental health plan tailored to data workers' needs: data-workers.org/kauna/
A Mental Health Intervention for Data Workers, by Kauna Ibrahim Malgwi
A scalable mental health intervention designed for data workers, grounded in hands‑on insight and evidence‑based practice from my dual perspective as a former content moderator and a registered clinic...
data-workers.org
June 5, 2025 at 3:13 PM
Taking content moderators' suffering and mental health seriously requires evidence-based intervention. Kauna has a concrete proposal!
This is more than a critique; it’s a call to action. Kauna urges tech platforms like Meta to replace performative “resilience” programs with real, trauma-informed care. These workers are critical to the integrity of our digital world; it’s time they were treated as such.
June 5, 2025 at 3:12 PM
Reposted by Critical AI Lab
We’re working on securing funds to make Kauna’s much-needed program a reality. Reach out if you’re in a position to help!

Trigger warning!
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
This is more than a critique; it’s a call to action. Kauna urges tech platforms like Meta to replace performative “resilience” programs with real, trauma-informed care. These workers are critical to the integrity of our digital world; it’s time they were treated as such.
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
She presents a scalable, evidence-based intervention designed to address the unique mental health challenges these workers face.
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
In her report “A Mental Health Intervention for Data Workers”, Kauna, a former content moderator and now a clinical psychologist, offers a solution grounded in both personal experience and professional expertise.
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
Our brand NEW INQUIRY just dropped!

Every day, data workers such as content moderators, annotators, and labellers confront some of the internet’s most harmful content to keep digital spaces safe for others.
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
🚨New Inquiry!! Today, we present Kauna’s mental healthcare proposal tailored specifically to data workers, addressing PTSD, burnout, depression, and more. Her plan offers a way forward for those affected and a call for change. data-workers.org/kauna/
#MentalHealthInTech #DataWorkersInquiry
A Mental Health Intervention for Data Workers, by Kauna Ibrahim Malgwi
A scalable mental health intervention designed for data workers, grounded in hands‑on insight and evidence‑based practice from my dual perspective as a former content moderator and a registered clinic...
data-workers.org
June 5, 2025 at 1:54 PM
1/4 Exciting News!
David Hartmann et al.'s paper, "Lost in Moderation: How Commercial Content Moderation APIs Over- and Under-Moderate Group-Targeted Hate Speech and Linguistic Variations," has been accepted at CHI 2025.
March 5, 2025 at 3:51 PM
Let's take care of the Internet! @dawiet.bsky.social was recently featured on the "Internet Governance Explained" podcast. Dive into the episode to learn more about the risks of using AI in content moderation and the working conditions of content moderators. open.spotify.com/episode/3u2i...
February 3, 2025 at 12:59 PM
Just published with a Best Paper award! Adriana Alvarado, Tianling Yang & @milamiceli.bsky.social examine the discursive strategies in papers that combine social media data and computational methods.
doi.org/10.1145/3701...
What Knowledge Do We Produce from Social Media Data and How? | Proceedings of the ACM on Human-Computer Interaction
HCI and CSCW research that uses social media data to make inferences about individuals and communities has proliferated in the last decade. Previous studies have elaborated on methodological concerns ...
doi.org
January 23, 2025 at 9:26 AM

New article from Dr. Fatma Elsafoury: dive into the different facets of discrimination and bias in AI algorithms, focusing on the growing deepfake industry. Stay informed: read and share widely! www.bpb.de/lernen/beweg... (English: efatmae.github.io/posts/2024/1...)
Diskriminierung
Algorithms are no more neutral than the people who program them or whose data they are based on. This is also evident in the everyday use of AI applications, e.g. deepfakes.
www.bpb.de
January 8, 2025 at 3:20 PM
New publication by @dawiet.bsky.social et al.!
Their research exposes a critical regulatory gap in the AI Act & DSA, calling for a diverse AI audit ecosystem that empowers civil society and affected groups to ensure oversight and accountability: doi.org/10.1007/s436...
Addressing the regulatory gap: moving towards an EU AI audit ecosystem beyond the AI Act by including civil society - AI and Ethics
The European legislature has proposed the Digital Services Act (DSA) and Artificial Intelligence Act (AIA) to regulate platforms and Artificial Intelligence (AI) products. We review to what extent thi...
doi.org
December 18, 2024 at 8:37 AM