Petter Törnberg
@pettertornberg.com
Assistant Professor in Computational Social Science at the University of Amsterdam

Studying the intersection of AI, social media, and politics.

Polarization, misinformation, radicalization, digital platforms, social complexity.
Pinned
Misinformation isn't random - it's strategic. 🧵

In the first cross-national comparative study of its kind, we examine 32M tweets from politicians.

We find that misinformation is not a general condition: it is driven by populist radical right parties.

with @julianachueri.bsky.social
doi.org/10.1177/1940...
LLMs are now widely used in social science as stand-ins for humans, on the assumption that they can produce realistic, human-like text.

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think. 🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...
arxiv.org
November 7, 2025 at 11:13 AM
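One minimal way to operationalize such a test, assuming a simple classifier-based setup (an illustration, not necessarily the paper's own pipeline): if an off-the-shelf classifier can separate human posts from LLM-generated ones well above chance, the two are systematically different.

```python
# Illustrative sketch only -- not the paper's pipeline. The idea: measure how
# detectable LLM-generated text is by training a simple human-vs-LLM classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def detection_accuracy(human_texts: list[str], llm_texts: list[str]) -> float:
    """Cross-validated accuracy of a classifier that tells human from LLM text."""
    texts = human_texts + llm_texts
    labels = [0] * len(human_texts) + [1] * len(llm_texts)
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    # Accuracy near 0.5 = indistinguishable to this model;
    # accuracy well above 0.5 = systematic differences between the corpora.
    return cross_val_score(clf, texts, labels, cv=5, scoring="accuracy").mean()
```

Here `human_texts` and `llm_texts` are placeholders for matched corpora of human posts and LLM imitations; richer features (style, affect, toxicity) would make the comparison more informative.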
Most people study what misinformation says.

We decided to study how it looks.

Using novel multi-modal AI methods, we studied 17,848 posts by top climate denial accounts - and uncovered a new front in the misinformation war.

Here's what it means 🧵

www.tandfonline.com/doi/full/10....
November 4, 2025 at 8:48 PM
Is social media dying? How much has Twitter changed as it became X? Which party now dominates the conversation?

Using nationally representative ANES data from 2020 & 2024, I map how the U.S. social media landscape has transformed.

Here are the key takeaways 🧵

arxiv.org/abs/2510.25417
October 30, 2025 at 8:09 AM
@jerusalem.bsky.social Will The Argument bring back 'Good on Paper' in some form? Please do, its loss has left a huge hole in my podcast app!

And there are a lot of persuasive papers out there!
September 30, 2025 at 6:33 PM
Reposted by Petter Törnberg
🚨 New #SingularityFM 🎙

What do Facebook, Google & TikTok really see when they look at us — and what do they miss?

My latest interview with @pettertornberg.com explores algorithmic tyranny, digital modernity & the future of power.

👉 snglrty.co/4mDn1Gb
Petter Törnberg: Algorithmic Tyranny & Digital Modernity
See this interview to discover Petter Törnberg’s insights on algorithmic tyranny, digital modernity, and how platforms reshape power.
snglrty.co
September 30, 2025 at 12:22 PM
Reposted by Petter Törnberg
Not just one but two podcasts featuring @pettertornberg.com, who wrote some of the most interesting papers in recent years.

1: @seanmcarroll.bsky.social, below
2: Singularity Weblog: Algorithmic Tyranny, the Rise of Digital Modernity and Seeing Like a Platform www.youtube.com/watch?v=lJrV...
September 29, 2025 at 3:37 PM
Such a crazy and amazing experience to be on a podcast of which I'm a huge fan!
Mindscape 330 | Petter Törnberg @pettertornberg.com on the Dynamics of (Mis)Information. #MindscapePodcast

www.preposterousuniverse.com/podcast/2025...
September 29, 2025 at 2:25 PM
Reposted by Petter Törnberg
www.flamman.se/valfardsstat...

It was great to talk to @flamman.se about the link between weak welfare state protection and the expansion of online platform work in Europe. @pettertornberg.com
Welfare state out, gig jobs in
Gig jobs and a dismantled welfare state go hand in hand
www.flamman.se
September 27, 2025 at 12:30 PM
And I now received an automatic request to review a revised version.

This is depressing. Academia is becoming AI slop.
I flagged to the editor that the paper I reviewed looked AI-generated.

Immediately received a generic AI-generated email from the editor.

The AI future is here!
September 19, 2025 at 11:57 AM
Reposted by Petter Törnberg
New article!

The aesthetics of climate misinformation: computational multimodal framing analysis with BERTopic and CLIP by Anton Törnberg & Petter Törnberg / @pettertornberg.com

doi.org/10.1080/0964...
September 17, 2025 at 6:12 AM
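The article names BERTopic and CLIP; one rough way to combine the two for multimodal framing analysis could look like the sketch below (an illustrative assumption, not the authors' exact pipeline): BERTopic clusters the post texts into topical frames, while CLIP zero-shot scores each image against candidate visual frames.

```python
# Illustrative sketch of combining BERTopic (text) and CLIP (images) for
# multimodal framing analysis. Not the paper's exact pipeline; `posts`,
# `images`, and `candidate_frames` are placeholders you supply.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer, util


def text_topics(posts: list[str]):
    """Cluster post texts into topics (textual frames) with BERTopic."""
    topic_model = BERTopic()
    topics, _ = topic_model.fit_transform(posts)
    return topic_model, topics


def image_frame_scores(images, candidate_frames: list[str]):
    """Zero-shot score each image (PIL.Image) against candidate visual frames."""
    clip = SentenceTransformer("clip-ViT-B-32")
    img_emb = clip.encode(images)            # image embeddings
    txt_emb = clip.encode(candidate_frames)  # text embeddings of frame labels
    # Cosine similarity matrix: rows = images, columns = candidate frames.
    return util.cos_sim(img_emb, txt_emb)
```

Candidate frames here would be short natural-language descriptions of imagery (for instance "a chart or graph" or "a snowy landscape"), which is only one of several ways the visual side could be coded.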
The one-plot summary of what Elon Musk did to Twitter.
This is where it gets wild 👇

In 2020, Twitter use was highest among people who loved Democrats and disliked Republicans.

By 2024, it had completely flipped: the more you love Republicans and dislike Democrats, the more you use Twitter/X.

From blue stronghold → red megaphone. All for just $44 billion.
September 16, 2025 at 11:40 AM
How much did Elon's takeover reshape Twitter/X? How did the partisan tilt of social media use change from 2020 to 2024?

The ANES 2024 data is out — and this thread answers all your burning questions! 🔥
September 16, 2025 at 11:20 AM
I was interviewed on the New Books Network podcast with Nicholas McCay about Justus's and my book "Seeing Like a Platform" (available open access!)

newbooksnetwork.com/seeing-like-...
Petter Törnberg and Justus Uitermark, "Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity" (Taylor & Francis, 2025) - New Books Network
newbooksnetwork.com
September 10, 2025 at 12:46 PM
I was interviewed on Justin Hendrix's TechPolicy podcast about 'Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity', which sets out to address the "entanglement of epistemology, technology, and politics in digital modernity."

www.techpolicy.press/seeing-like-...
'Seeing Like a Platform' — A Conversation with Petter Törnberg | TechPolicy.Press
With Justus Uitermark, Törnberg is one of the authors of the new book Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity.
www.techpolicy.press
September 10, 2025 at 11:31 AM
Reposted by Petter Törnberg
Algorithms are not necessary for the creation of echo chambers.

All that is required is people muting or leaving when they have a bad interaction. Then an echo chamber arises even with minimal preferences.

From @pettertornberg.com

#psych

arxiv.org/pdf/2508.10466
September 9, 2025 at 1:32 PM
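The mechanism is simple enough to illustrate in a toy simulation (a stylized sketch of the idea, not the model in the paper): agents hold one of two opinions, follow everyone at the start, and merely mute an account after a bad interaction, which is assumed to be somewhat more likely across opinion lines. Feeds drift toward homogeneity anyway.

```python
# Toy illustration of echo-chamber emergence from mute/leave behavior alone:
# no ranking algorithm and no preference for like-minded accounts.
# A stylized sketch; the parameters are arbitrary assumptions.
import random

N, ROUNDS = 100, 50_000
P_BAD_CROSS, P_BAD_SAME = 0.30, 0.10  # chance an interaction feels "bad"

random.seed(42)
opinion = [random.choice([-1, 1]) for _ in range(N)]
following = [set(range(N)) - {i} for i in range(N)]  # everyone follows everyone

for _ in range(ROUNDS):
    i = random.randrange(N)
    if not following[i]:
        continue
    j = random.choice(tuple(following[i]))  # i reads a post from someone it follows
    p_bad = P_BAD_CROSS if opinion[i] != opinion[j] else P_BAD_SAME
    if random.random() < p_bad:
        following[i].discard(j)             # bad interaction -> mute

# Feed homogeneity: share of each agent's remaining sources sharing its opinion.
per_agent = [
    sum(opinion[j] == opinion[i] for j in following[i]) / len(following[i])
    for i in range(N) if following[i]
]
print(f"Average feed homogeneity: {sum(per_agent) / len(per_agent):.2f} (start ~0.50)")
```

Because cross-opinion accounts are muted at a higher rate, feeds end up well above the 50/50 starting point even though no agent ever seeks out like-minded peers.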
Reposted by Petter Törnberg
Are #Musk & Co. destabilizing liberal democracy through the algorithms of social networks?

The matter is not that simple, as @maiklarooij.nl and @pettertornberg.com of the University of Amsterdam show. Fascinating to see how social research is evolving through #KI (AI).

www.faz.net/aktuell/feui...
Study simulates a network: Is social media antisocial by nature?
Two researchers create a social media channel with 500 users and watch what happens. Things play out just as they do on existing platforms. That could have to do with the fact that the users...
www.faz.net
September 5, 2025 at 2:03 PM
🥳🥳🥳 Huge achievement and an amazing project on AI & the politics of welfare!

Follow her work if you're interested in how we can manage the mess that's coming!
Super excited to share that I got the ERC Starting Grant! 🎉

Over the next five years, I’ll be studying how AI is reshaping the politics of the welfare state.

A huge thank you to all the friends and colleagues who supported me along the way.

vu.nl/en/news/2025...

#ERCStG
ERC Starting Grant for political science research on the consequences of AI - Vrije Universiteit Amsterdam
Political scientist Juliana Chueri has received a Starting Grant from the European Research Council (ERC).
vu.nl
September 4, 2025 at 5:24 PM
I flagged to the editor that the paper I reviewed looked AI-generated.

Immediately received a generic AI-generated email from the editor.

The AI future is here!
September 1, 2025 at 8:23 AM
The "digital" never exists in isolation.
The platform society is always part and parcel of deeper transformations in capitalism.
Gig labor is often understood as the expression of digital disruption.

But our new study suggests a deeper story:

👉 The rise of platform labor is inseparable from the retreat of the welfare state.
🧵
w/ @pettertornberg.com

journals.sagepub.com/doi/pdf/10.1...
journals.sagepub.com
August 31, 2025 at 5:31 PM
🚨 PhD Position at the University of Amsterdam 🚨

Join my team as a computer scientist / computational social scientist working on LLMs, social media, and politics.

We offer freedom, impact, and an inspiring environment at one of Europe's leading universities.

🔗 werkenbij.uva.nl/en/vacancies...
Vacancy — PhD Position on Improving Social Media Using Large Language Models
The Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam is inviting applications for a fully funded PhD position in the NWO VIDI project "Improving Social Media Using L...
werkenbij.uva.nl
August 25, 2025 at 8:34 AM
Reposted by Petter Törnberg
Don’t blame the algorithm: Polarization may be inherent in social media | Science | AAAS www.science.org/content/arti... @pettertornberg.com
Don’t blame the algorithm: Polarization may be inherent in social media
In simulations, AI-generated users of stripped-down social media without content algorithms still split into polarized echo chambers
www.science.org
August 20, 2025 at 9:57 PM
Somehow, Marx is suddenly popular with tech bros talking about AI.

I just wish they would actually read him.

Because Marx did have a lot to say about automation -- and it's eerily relevant to today’s debates. 🧵
August 20, 2025 at 12:34 PM
In the literature, there are two competing explanations for "echo chambers":
1️⃣ Algorithms curate what we see (“filter bubbles”)
2️⃣ People choose like-minded peers (“selective exposure”)

Our new study suggests something surprising:

both explanations might be wrong. 🧵

arxiv.org/abs/2508.10466
Online Homogeneity Can Emerge Without Filtering Algorithms or Homophily Preferences
Ideologically homogeneous online environments - often described as "echo chambers" or "filter bubbles" - are widely seen as drivers of polarization, radicalization, and misinformation. A central debat...
arxiv.org
August 15, 2025 at 7:46 AM
I'm really stoked about this paper!

Builds on amazing MSc thesis work by @maiklarooij.nl
We built the simplest possible social media platform. No algorithms. No ads. Just LLM agents posting and following.

It still became a polarization machine.

Then we tried six interventions to fix social media.

The results were… not what we expected.

arxiv.org/abs/2508.03385
Can We Fix Social Media? Testing Prosocial Interventions using Generative Social Simulation
Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions?...
arxiv.org
August 6, 2025 at 8:28 AM
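For readers wondering what "just LLM agents posting and following" can look like in code, here is a bare-bones sketch of a generative social simulation: persona-conditioned LLM agents taking turns on a strictly chronological, unranked timeline. The model name, prompts, and action space are simplified assumptions of mine, not the setup in the paper.

```python
# Bare-bones sketch of a generative social simulation: LLM agents with personas
# posting to a chronological timeline. No ranking, no ads. Assumes an
# OpenAI-compatible API with OPENAI_API_KEY set; the model choice is a placeholder.
from dataclasses import dataclass, field

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model


@dataclass
class Agent:
    name: str
    persona: str                                    # e.g. leaning, interests
    following: set[str] = field(default_factory=set)


@dataclass
class Post:
    author: str
    text: str


def feed_for(agent: Agent, timeline: list[Post], k: int = 5) -> list[Post]:
    """Chronological feed: the k most recent posts from followed accounts."""
    return [p for p in timeline if p.author in agent.following][-k:]


def act(agent: Agent, timeline: list[Post]) -> Post:
    """Ask the LLM to write the agent's next post, given persona and feed."""
    feed = "\n".join(f"@{p.author}: {p.text}" for p in feed_for(agent, timeline))
    prompt = (
        f"You are {agent.name}, a social media user. Persona: {agent.persona}\n"
        f"Your feed (newest last):\n{feed or '(empty)'}\n"
        "Write your next post (max 40 words)."
    )
    reply = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return Post(agent.name, reply.choices[0].message.content.strip())


if __name__ == "__main__":
    agents = [
        Agent("alice", "progressive, cares about climate", {"bob"}),
        Agent("bob", "conservative, cares about the economy", {"alice"}),
    ]
    timeline: list[Post] = []
    for step in range(6):
        timeline.append(act(agents[step % len(agents)], timeline))
    for p in timeline:
        print(f"@{p.author}: {p.text}")
```

A full simulation would add many more agents, repost and follow decisions, and measures of polarization over the emerging follower network.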