Yannis Theocharis
@yannistheocharis.bsky.social
Professor and Chair of Digital Governance | Department of Governance | Technical University of Munich. Info: https://www.hfp.tum.de/en/digitalgovernance/home/
Reposted by Yannis Theocharis
This paper tested my resilience in ways I can't quite remember (or I have given up). Massive thanks to the amazing co-authors Stiene Praet, Sebastian Popa, @yannistheocharis.bsky.social, @zoltanfazekas.bsky.social, Pablo Barbera, @jatucker.bsky.social and to the many dedicated reviewers!!
July 30, 2025 at 9:23 AM
Reposted by Yannis Theocharis
🔗📃 Link to the paper, now published in the ICWSM conference proceedings (ojs.aaai.org/index.php/IC...), co-authored with @nilsweidmann.bsky.social, @friederikeq.bsky.social, @sebnagel.bsky.social, @yannistheocharis.bsky.social & Molly Roberts
Written for Lawyers or Users? Mapping the Complexity of Community Guidelines | Proceedings of the International AAAI Conference on Web and Social Media
June 25, 2025 at 4:18 PM
This raises a key question:

Is publishing rules enough if users can't understand them?
Our team, led by @friederikeq.bsky.social, answers this question with two innovative experimental studies on user comprehension.

More on that soon!
June 10, 2025 at 1:30 PM
Our key findings:
-- The largest platforms are most likely to publish guidelines
-- But those guidelines tend to be longer and more complex
-- Guidelines have grown more complex under new regulation (like the DSA)
June 10, 2025 at 1:30 PM
COMPARE includes 132 moderation policies and 89 community guidelines.

We analyzed them for:
-- Length
-- Readability
-- Semantic complexity
June 10, 2025 at 1:30 PM
What do 132 of the most popular social media platforms globally actually say in their community guidelines?

And more importantly — can regular users understand those rules?

We built a new dataset (COMPARE) to find out. Access it on GitHub: github.com/transparency...
GitHub - transparency-in-content-moderation/COMPARE
June 10, 2025 at 1:30 PM
As debates on platform regulation continue, this report offers new, data-driven insights for policymakers, platforms & civil society.

Download and read the full report: osf.io/s3kcw
February 11, 2025 at 2:09 PM
This research wouldn’t have been possible without an incredible team:

🎓 @spyroskosmidis.bsky.social
🎓 @janzilinsky.bsky.social
🎓 @friederikeq.bsky.social
🎓 @franziskapradel.bsky.social

Led by the Chair of Digital Governance @tum.de + the Content Moderation Lab (a TUM & @ox.ac.uk collaboration)
February 11, 2025 at 2:09 PM
Our data show:

🔹 Most people want some level of content moderation—but opinions vary on who should be responsible.

🔹 There’s strong public concern about misinformation, toxicity, and online harm.

🔹 Many users feel platforms aren’t doing enough—but they also worry about overreach.
February 11, 2025 at 2:09 PM
What’s inside?

🔹 Who do people think should moderate online content?
🔹 How do they balance free speech vs. harm prevention?
🔹 What concerns them most—toxicity, misinformation, platform power?

Below is a glimpse of some of the many key findings in the report.
February 11, 2025 at 2:09 PM
What do people really think about online speech, moderation & platform responsibility? Despite claims that users want fewer restrictions, there's surprisingly little empirical evidence on public opinion, especially beyond the U.S. Our study fills this gap with representative samples from 10 countries.
February 11, 2025 at 2:09 PM
🌐 How do citizens view the trade-off between tackling harmful content and preserving free expression online?

Stay tuned for the upcoming report with @janzilinsky.bsky.social, @friederikeq.bsky.social, @spyroskosmidis.bsky.social & @franziskapradel.bsky.social. 🚀
January 13, 2025 at 9:22 AM
40% of respondents agreed:

"We should be free to express ourselves, even if it hurts, offends, shocks, or disturbs others."

But where do we draw the line between free expression and harmful speech?

In an upcoming report, we dive into this critical trade-off 📊
January 13, 2025 at 9:22 AM
Only 17% of respondents across 10 countries support allowing offensive content targeting certain groups on social media.

In the U.S., support rises to just 29%.

There seems to be very little tolerance for hateful speech.
January 13, 2025 at 9:22 AM