Simon Munzert
simonsaysnothin.bsky.social
Professor of Data Science and Public Policy | Hertie School Data Science Lab | Elections, Public Opinion, Data
Putting out evidence isn’t enough. Public engagement is critical to drive climate–health progress. Breaking misinformation silos and rebuilding trust in science has become a public health priority. 4/4
October 31, 2025 at 8:43 AM
Yet there’s hope: the Lancet Countdown shows that clean energy growth already boosts jobs, GDP, and health. A just, health-centered transition can still prevent millions of deaths. The evidence is clear—climate policy is health policy. 3/4
October 31, 2025 at 8:43 AM
Overall, it's a grim read: 2024 saw global temps exceed 1.5 °C for the first time, alongside record heat deaths, food insecurity, and disease spread. Political backsliding and fossil fuel expansion are putting millions at risk. Climate inaction is a health crisis. 2/4
October 31, 2025 at 8:43 AM
@medem.bsky.social be sure to check this out!
October 12, 2025 at 10:14 AM
Congrats Andreu, this is so cool!! 🎉
September 16, 2025 at 6:08 PM
A shout-out to the people who did all the hard work for this study - in particular @dawiet.bsky.social and Amin Oueslati, who paved the way in his master's thesis he wrote @hertieschool.bsky.social - super proud of him!
May 12, 2025 at 8:25 PM
Finally, black-box audits are not ideal. Researchers need better access to the models behind those services. More implications, and more findings, in the paper!
May 12, 2025 at 8:25 PM
Multiply that by the number of calls made to those APIs every day, and the decisions they inform. (We don't know that number, but it's probably no less than ~500k calls over the minute you've engaged with this thread.)
May 12, 2025 at 8:25 PM
What's more, those decisions probably most affect people who are not perpetrators but members of the attacked groups.
May 12, 2025 at 8:25 PM
But our results indicate that if your moderation pipeline largely builds on automated decisions informed by those services, you're going to produce A LOT of questionable decisions.
May 12, 2025 at 8:25 PM
Hate speech moderation is hard, and we wouldn't expect any model to do a perfect job. It's also not straightforward to agree on what constitutes hate speech in the first place, which is why even our benchmark datasets are not beyond doubt.
May 12, 2025 at 8:25 PM
Main finding #2: Group markers drive over-moderation. Words like "muslim", "gay", or "jews" make mis-classifying non-hate speech as hate speech more likely.
May 12, 2025 at 8:25 PM
Some APIs seem overly sensitive (in particular Google's NL API) while others tend to under-moderate (Perspective and Microsoft).
May 12, 2025 at 8:25 PM