Sami Nenno
saminenno.bsky.social
Researcher at Synosys TU Dresden & Associate at Humboldt Institute for Internet and Society

css-synosys.github.io
Finally, a big thanks to my co-authors! From my team at @tudresden.bsky.social: @lorenzspreen.bsky.social and Kamil Fuławka, and from @zemki.bsky.social : @cbpuschmann.bsky.social 8/8
November 3, 2025 at 10:13 AM
To end with some good news: this also means there are plenty of topics and actors where misinformation is almost absent. We should focus on high-prevalence contexts, but can remain (cautiously) optimistic about those with low prevalence. 7/8
November 3, 2025 at 10:13 AM
Overall, misinformation makes up just over 1% of posts. Yet for some parties on certain topics, the probability of misinformation can reach 10%. So while the overall rate is low, misinformation isn't uniformly rare: it clusters in specific contexts. 6/8
November 3, 2025 at 10:13 AM
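The "low overall, high in specific cells" pattern can be illustrated with a toy calculation. Everything below is made up: party names, topics, and counts are placeholders chosen only to mimic the ~1% overall / ~10% per-cell figures from the thread, not the study's actual data.

```python
# Toy illustration of overall vs. per-context misinformation rates.
# All data is invented; it only mimics the pattern the thread describes.
from collections import defaultdict

# (party, topic, is_misinformation) — hypothetical labels
posts = (
    [("A", "migration", True)] * 10 + [("A", "migration", False)] * 90
    + [("A", "economy", False)] * 200
    + [("B", "economy", False)] * 300
    + [("B", "climate", True)] * 1 + [("B", "climate", False)] * 399
)

totals = defaultdict(int)
hits = defaultdict(int)
for party, topic, is_misinfo in posts:
    totals[(party, topic)] += 1
    hits[(party, topic)] += is_misinfo

overall = sum(hits.values()) / len(posts)          # ~1% overall
rates = {k: hits[k] / totals[k] for k in totals}   # up to 10% per cell
```

Here `overall` comes out at 1.1% while party A's migration posts hit a 10% rate, which is the kind of clustering the averaged number hides.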
Now to our study’s core idea: recent research has focused on the overall spread of misinformation. But is that the right question?
What if misinformation isn’t evenly distributed, but instead concentrated among specific actors and topics? 5/8
November 3, 2025 at 10:13 AM
The misinformation rate is highest for BSW, followed by AfD and then CDU/CSU. This pattern is fairly stable across platforms.
However, each party's contribution to overall misinformation differs: on Facebook, X, and TikTok, the AfD accounts for most misinformation, while on Instagram it's the CDU. 4/8
November 3, 2025 at 10:13 AM
Only about 4% of posts by German politicians on Facebook, Instagram, X, and TikTok include news links, but over 90% contain text. Our text-level method, which matches fact-checks and community notes with posts, detects about ten times more misinformation than the news-domain approach. 3/8
November 3, 2025 at 10:13 AM
Previous studies identified misinformation via link sharing to news domains. However, news sharing has always been just a fraction of social media activity, has declined in recent years, and isn’t possible on all platforms. This is reflected in the numbers… 2/8
November 3, 2025 at 10:13 AM
I’m happy to share my first publication after my PhD, during my move from @hiigberlin.bsky.social to the CSS group led by @lorenzspreen.bsky.social. 🧵10/10
September 23, 2025 at 9:15 AM
This study has many limitations. That is why I call it exploratory rather than representative. But take a look yourself! 🧵9/10
September 23, 2025 at 9:15 AM
But the difference lies in targets: CDU/CSU use misinformation mainly against political opponents. AfD uses it to undermine democratic institutions. 🧵8/10
September 23, 2025 at 9:15 AM
I also found a pattern on the supply side of misinformation: in sheer quantity, AfD and CDU/CSU politicians spread about the same amount (though conservatives are invited more often).
🧵7/10
September 23, 2025 at 9:15 AM
That matters: research shows that failing to challenge radical claims makes audiences believe they are publicly accepted. This may also apply to misinformation.
🧵6/10
September 23, 2025 at 9:15 AM
The talk show results aren't surprising: in live, heated debates it's hard to stop guests (often politicians) from spreading misinformation. Sometimes moderators challenged the claims, but in many cases they didn't. 🧵5/10
September 23, 2025 at 9:15 AM
Result: about 12% of talk shows contained at least one match, compared to just over 2% of news programs. 🧵4/10
September 23, 2025 at 9:15 AM
I analyzed subtitles from four news and six talk shows over one year. Too much material to check manually. So I used vector embeddings and LLMs to match fact-checks with subtitle segments. If a claim could be debunked with a fact-check, I counted it as misinformation.
🧵3/10
September 23, 2025 at 9:15 AM
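The matching step above can be sketched roughly as follows. This is my reconstruction, not the study's code: bag-of-words cosine similarity stands in for the neural embeddings, a fixed threshold stands in for the LLM verification step, and all the texts are invented.

```python
# Rough sketch of fact-check-to-subtitle matching (NOT the study's code).
# Stand-ins: bag-of-words cosine similarity instead of sentence embeddings,
# and a fixed threshold instead of an LLM verification step.
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Lowercased bag-of-words counts (toy stand-in for an embedding)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(c * b[t] for t, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(segment: str, fact_checks: list[str], threshold: float = 0.35):
    """Return (fact_check, score) for the closest fact-check, or None
    if nothing clears the threshold (the study used an LLM at this step)."""
    score, fc = max((cosine(bow(segment), bow(f)), f) for f in fact_checks)
    return (fc, score) if score >= threshold else None

# Invented examples
fact_checks = [
    "Claim that vaccines contain microchips is false",
    "Claim that the election was rigged is unfounded",
]
segments = [
    "One guest repeated that vaccines contain microchips",
    "The hosts discussed rising energy prices",
]
```

With these toy inputs, `best_match(segments[0], fact_checks)` retrieves the vaccine fact-check, while the energy-prices segment matches nothing and returns `None`; a real pipeline would embed far larger corpora and have the LLM judge whether the retrieved fact-check actually debunks the claim.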
I examined German public broadcasting news (e.g., Tagesschau) and talk shows (e.g., Markus Lanz). Are they gatekeepers or amplifiers of misinformation? Spoiler: the truth might be somewhere in between. 🧵2/10
September 23, 2025 at 9:15 AM