Philipp Lorenz-Spreen
@lorenzspreen.bsky.social
Heading the Computational Social Science group within SynoSys & ScaDS.AI at TU Dresden, where we study how online information environments impact public discourse and develop alternatives that benefit democracy.

CSS-Group: https://css-synosys.github.io/
Reposted by Philipp Lorenz-Spreen
We’re hiring a Postdoctoral Researcher to help lead my interdisciplinary team at FZI!
Work with us on the automated analysis and countering of #disinformation — at the intersection of AI, computational social science, and #democracy.

👉 Please apply and share! karriere.fzi.de/Vacancies/11...
November 10, 2025 at 11:32 AM
Reposted by Philipp Lorenz-Spreen
⏰ Last chance to register for #CDSM2025!

Don't miss your chance to join us Nov 12–13 for two days of talks & debates at the intersection of causality, data science & AI.

💻 Online | 🎟️ Free
👉 causalscience.org
November 8, 2025 at 9:05 AM
Reposted by Philipp Lorenz-Spreen
NEW: In a recent Oxford-led study by @mmosleh.bsky.social, researchers analysed millions of social media posts containing links to news stories, across seven different social platforms.

Read the full research paper here: www.pnas.org/doi/10.1073/...
October 31, 2025 at 1:24 PM
Reposted by Philipp Lorenz-Spreen
Thank you @techpolicypress.bsky.social and @ramshajahangir.bsky.social for giving me the space to rant about the “end” of political advertising on social media in the EU!
Meta and Google’s decision to ban political ads in the EU isn’t just a regulatory adjustment; it’s a setback for accountability and election integrity, writes Fabio Votta. Drawing on five years of tracking political ads, he explores the ban’s impact on transparency.
buff.ly/BRbLgUw
What Data Reveals About Meta and Google’s Political Ad Ban in the EU | TechPolicy.Press
The ad ban is a complete failure of platform responsibility that will make elections less transparent and more vulnerable to algorithms, writes Fabio Votta.
www.techpolicy.press
November 3, 2025 at 2:13 PM
How prevalent is misinformation really? Many detection methods work at the URL/source level, but that misses the large share of posts that don't include links.

In a new preprint from the group, led by @saminenno.bsky.social, we analyze 10x more posts from German politicians on four platforms.

thread below👇
November 3, 2025 at 5:06 PM
Reposted by Philipp Lorenz-Spreen
Happy to share our new preprint “Content-based detection of misinformation expands its scope across politicians and platforms.” We analyzed misinformation at the text level in posts by German politicians on Facebook, Instagram, X, and TikTok.
osf.io/preprints/so... 🧵1/8
OSF
osf.io
November 3, 2025 at 10:13 AM
Reposted by Philipp Lorenz-Spreen
It is happening 👇
📊 The #DSA data access portal is live, and VLOPs/VLOSEs have begun publishing their data catalogues.
Trying to collect the links again: docs.google.com/spreadsheets...
October 29, 2025 at 5:03 PM
Reposted by Philipp Lorenz-Spreen
Selective causal focus: research produced or funded by tech companies often frames problems as user-driven, or frames solutions as the obligation of users (e.g., community notes). This distracts us from their design, business model, interface, and other causes, steering attention away from their profit model.
October 24, 2025 at 12:12 AM
Had a great time @css-lmu.bsky.social in Munich, leaving with many ideas and connections. Thanks for having me!
This week, we have @lorenzspreen.bsky.social visiting our lab. He presented his work on disinformation classification and toxicity in group discussions on Reddit. He also has a new CSS group, check it out: css-synosys.github.io
October 30, 2025 at 8:08 PM
Reposted by Philipp Lorenz-Spreen
Across social media sites, political posting is tightly linked to affective #polarization - the most partisan users post the most

As casual users disengage & polarized partisans remain vocal, the online public sphere grows smaller, sharper, and more ideologically extreme
arxiv.org/abs/2510.25417
October 30, 2025 at 5:49 PM
Reposted by Philipp Lorenz-Spreen
This week, we have @lorenzspreen.bsky.social visiting our lab. He presented his work on disinformation classification and toxicity in group discussions on Reddit. He also has a new CSS group, check it out: css-synosys.github.io
October 30, 2025 at 1:53 PM
Reposted by Philipp Lorenz-Spreen
Shortly, at the Franckesche Stiftungen in Halle, we will be discussing media, misinformation, and freedom of expression in a town-hall ("Unterhaus") format, with @lorenzspreen.bsky.social and @tobiasrothmund.bsky.social, among others.
October 30, 2025 at 4:18 PM
Very timely and fitting article by @kakape.bsky.social on the recent DSA developments around data access (with contributions from myself and many great colleagues):

www.science.org/content/arti...
Meta and TikTok are obstructing researchers’ access to data, European Commission rules
Data are needed to study how social media spreads misinformation and influences elections, scientists say
www.science.org
October 29, 2025 at 8:24 PM
Reposted by Philipp Lorenz-Spreen
I am deeply honored to be awarded the Lagrange Prize 🏆, the premier award in the field of Complex Systems.

I'd like to share this moment with all my current and past students, research team members, and collaborators over the years.

Thank you CRT Foundation & @isi.it for this honor
🎉 ISI Foundation is thrilled to announce that the Lagrange Prize – CRT Foundation Edition 2025 has been awarded to Professor Iyad Rahwan @iyadrahwan.bsky.social, Director of the Max Planck Institute for Human Development in Berlin!
October 27, 2025 at 6:11 PM
Reposted by Philipp Lorenz-Spreen
My lab is looking for a Senior Scientist (= PostDoc with option of permanency)!

We are looking for someone interested in doing cutting-edge computational social science + helping us with data & software engineering 🤓.

See job ad for details jobs.uni-graz.at/en/jobs/7d14...
Universität Graz
jobs.uni-graz.at
October 28, 2025 at 7:02 AM
Reposted by Philipp Lorenz-Spreen
I think this Facebook page is a terrifying case study in how gen AI slop has enabled the creation of an automated hate machine, churning out endless caricatures of made-up enemies, immigrants, and politicians, optimized for engagement. In many ways, it embodies the very nightmare many of us feared.
October 27, 2025 at 7:59 AM
Reposted by Philipp Lorenz-Spreen
Attention on social media depends far more on how you express yourself (49% of variance) than on who you are (10%)!

Expressing emotions is more influential than gender, education, family background or personality traits, according to an analysis of 2.1 million posts.
www.nature.com/articles/s41...
October 27, 2025 at 6:04 PM
Great to be back where I studied: this week I'm in Munich at @lmumuenchen.bsky.social, moving from the Faculty of Physics to the Faculty of Social Sciences for a Computational Social Science Fellowship, kindly hosted by @valeriehase.bsky.social

www.sw.lmu.de/en/faculty-a...
October 27, 2025 at 9:26 AM
Very good overview of the current state of platform data access under the DSA by @daphnek.bsky.social

verfassungsblog.de/dsa-platform...
Using the DSA to Study Platforms
verfassungsblog.de
October 27, 2025 at 9:23 AM
Let’s not be naive: we need to be very aware of industry influence in computational social science! Through their control of data (and experiment) access, the situation is even more problematic than in other industries, which makes the data access rights under DSA Article 40 particularly important!
1. We ( @jbakcoleman.bsky.social, @cailinmeister.bsky.social, @jevinwest.bsky.social, and I) have a new preprint up on the arXiv.

There we explore how social media companies and other online information technology firms are able to manipulate scientific research about the effects of their products.
October 24, 2025 at 2:30 PM
Reposted by Philipp Lorenz-Spreen
1. We ( @jbakcoleman.bsky.social, @cailinmeister.bsky.social, @jevinwest.bsky.social, and I) have a new preprint up on the arXiv.

There we explore how social media companies and other online information technology firms are able to manipulate scientific research about the effects of their products.
October 24, 2025 at 12:47 AM
Reposted by Philipp Lorenz-Spreen
Three weeks remaining: Call for Papers for the Cambridge Disinformation Summit, 8-10 April 2026, at the University of Cambridge.

Research on systemic risks from technology that affects information streams or the amplification or monetization of disinformation.

www.jbs.cam.ac.uk/events/cambr...
Cambridge disinformation summit (2026) - Cambridge Judge Business School
Connect with Cambridge Judge! Explore our full calendar of upcoming events, learn about our programmes and connect with world-class speakers.
www.jbs.cam.ac.uk
October 24, 2025 at 5:50 AM
Reposted by Philipp Lorenz-Spreen
Yet again, we can't afford to let LLMs become a source of epistemic grounding for society.
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory
An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC
www.bbc.co.uk
October 24, 2025 at 5:21 AM