Critical AI Lab
@jwicriticalai.bsky.social
Research on data, algorithmic systems, and ethics @weizenbauminstitut.bsky.social
Also on YouTube (http://youtube.com/channel/UCytCD), Twitter (@JWI_CriticalAI), and Mastodon (@jwicriticalai.bsky.social)
Reposted by Critical AI Lab
The event is in-person:

📅 𝗪𝗲𝗱𝗻𝗲𝘀𝗱𝗮𝘆, 𝗦𝗲𝗽𝘁𝗲𝗺𝗯𝗲𝗿 𝟭𝟳𝘁𝗵.
⏱️ 𝟭𝟳:𝟬𝟬 – 𝟭𝟵:𝟯𝟬
📍@weizenbauminstitut.bsky.social, Hardenbergstrasse 32, in Berlin.
🌐 𝗥𝗲𝗴𝗶𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻: www.weizenbaum-institut.de/news/detail/...

For those unable to attend, we'll be live-streaming here: www.youtube.com/live/jA3wSMe...
Who is Really Fueling your AI? Precarization and Resistance in Data Work.
Join us for an evening with the hidden workforce behind AI, get insights into how workers organize against its harsh realities and learn more about our research project The Data Workers’ Inquiry.
www.weizenbaum-institut.de
August 27, 2025 at 10:40 AM
Join the conversation!
August 27, 2025 at 11:24 AM
Reposted by Critical AI Lab
We’re working on securing funds to make Kauna’s much-needed program a reality. Reach out if you’re in a position to help!

Trigger warning!
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
This is more than a critique; it’s a call to action. Kauna urges tech platforms like Meta to replace performative “resilience” programs with real, trauma-informed care. These workers are critical to the integrity of our digital world; it’s time they were treated as such.
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
She presents a scalable, evidence-based intervention designed to address the unique mental health challenges these workers face.
June 5, 2025 at 2:15 PM
Reposted by Critical AI Lab
In her report “A Mental Health Intervention for Data Workers”, Kauna, a former content moderator and now a clinical psychologist, offers a solution grounded in both personal experience and professional expertise.
June 5, 2025 at 2:15 PM
Wondering what can be done to improve mental health outcomes for content moderators? Kauna Malgwi and the Data Workers’ Inquiry have some answers.
June 5, 2025 at 2:00 PM
4/4 This work has been a joint project between the @weizenbauminstitut.bsky.social, @cais-research.bsky.social, and @hertieschool.bsky.social, with co-authors @dawiet.bsky.social, Amin Oueslati, Dimitri Staufer @dimitristaufer.bsky.social, Lena Pohlmann, Simon Munzert, and Hendrik Heuer.
March 5, 2025 at 3:54 PM
3/4 They conclude that using commercial APIs for content moderation risks silencing legitimate speech and failing to protect online platforms from harmful speech!
Read the preprint here: pdf.arxiv.org/pdf/2503.01623
March 5, 2025 at 3:52 PM
2/4 This paper evaluates five content moderation APIs. They find that all APIs have problems correctly identifying implicit hate speech, such as sarcasm. At the same time, the APIs tend to misclassify content related to Black, LGBTQIA+, Jewish, and Muslim people as hate speech.
March 5, 2025 at 3:51 PM
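The two failure modes described in the thread correspond to false negatives (implicit hate that goes unflagged) and false positives (neutral mentions of targeted groups that get flagged). A minimal, hypothetical sketch of how such error rates can be computed against gold labels; the `classify` stub and the example texts are invented for illustration and are not the paper's actual pipeline or data:

```python
# Illustrative sketch only. `classify` stands in for any commercial
# moderation API call that returns True when text is flagged as hate speech.

def classify(text: str) -> bool:
    # Placeholder for a real API request; this toy stub never flags anything,
    # mimicking an API that misses implicit hate speech entirely.
    return False

def error_rates(samples):
    """samples: list of (text, is_hate) gold-labeled pairs.
    Returns (false_negative_rate, false_positive_rate)."""
    fn = fp = pos = neg = 0
    for text, is_hate in samples:
        flagged = classify(text)
        if is_hate:
            pos += 1
            if not flagged:
                fn += 1  # implicit hate missed by the API
        else:
            neg += 1
            if flagged:
                fp += 1  # neutral group mention wrongly flagged
    return (fn / pos if pos else 0.0, fp / neg if neg else 0.0)

# Hypothetical gold-labeled examples: implicit (sarcastic) hate vs. a
# neutral mention of a targeted group.
samples = [
    ("Oh sure, THEY are always so 'trustworthy'...", True),
    ("The Jewish community center opens at 9am.", False),
]
fnr, fpr = error_rates(samples)
```

With this stub classifier, the false negative rate is 1.0 and the false positive rate 0.0; swapping in a real API call would reproduce the kind of per-category comparison the thread summarizes.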