Fil Menczer
fil.bsky.social
Researcher on social media misinformation and manipulation, director of the Observatory on Social Media (OSoMe.iu.edu, pronounced “awesome”) at Indiana University
ICYMI -- Delighted that the Handbook of Computational Social Science is finally out. Amazing cast of coauthors, with special thanks to @tahayasseri.bsky.social for leading the effort. Happy Holidays!

www.elgaronline.com/edcollbook/b...
December 26, 2025 at 11:47 PM
Reposted by Fil Menczer
In the time since I first posted this thread, this TikTok spam network has grown in size and added new types of repetitive content to its lineup.

Additionally, many of the accounts have pivoted to hawking dubious dietary supplements.
In recent days, oddly similar AI-generated videos depicting nonexistent Black people accompanied by captions such as “dear white people” and “hit follow if you want peace” have proliferated on TikTok. Here's a look at the spam network posting the videos.
www.conspirator0.com/p/one-ai-gen...
One AI-generated human race
None of these people exist, but that hasn’t stopped them from posting profusely on TikTok
www.conspirator0.com
December 14, 2025 at 7:37 PM
OpenAI's Sora 2 ultrarealistic (but fake) AI videos used by Russian disinformation operations, who could have predicted it?!?
www.nbcnews.com/tech/social-...
As war with Russia drags on, ultrarealistic AI videos attempt to portray Ukrainian soldiers in peril
A series of AI-generated deepfakes and videos, many made with OpenAI's Sora, appears to show Ukrainian soldiers apologizing to the Russian people and blaming their government for the war.
www.nbcnews.com
December 15, 2025 at 1:56 AM
2024-25 has been an academic year of challenges and opportunities. We have been working on exciting problems, such as the exploitation of AI for manipulation at scale and tools to promote a healthier information environment. Read all about it in our latest annual report:
osome.iu.edu/research/blo...
OSoMe Annual Report 2024-2025
Our annual report is now available. The stories, achievements, and insights in this report are a testament to the dedication of our team and the support of...
osome.iu.edu
December 13, 2025 at 9:25 PM
Pretty much all the worst-case scenarios we have been predicting through our research in the last 10-15 years are coming true. Sad but excellent year-in-review by @craigsilverman.bsky.social and @mantzarlis.com
I've been investigating digital deception for ~15yrs and 2025 was the worst year.

VC-backed bot farms, endless AI slop, industrial level scams, abusive AI nudifiers, Meta paying $ for hoaxes... Deception was legitimized, monetized & shoved down the public’s throat:

indicator.media/p/2025-the-y...
2025: The year tech embraced fakeness
This year, powerful people, companies, and institutions welcomed digital deception like never before. The rest of us faced the consequences.
indicator.media
December 10, 2025 at 7:21 AM
Increasingly Aligned Russian and Chinese Disinformation Threatens U.S. Citizens
www.americansecurityproject.org/increasingly...
Increasingly Aligned Russian and Chinese Disinformation Threatens U.S. Citizens
www.americansecurityproject.org
December 7, 2025 at 11:28 PM
Reposted by Fil Menczer
The European Commission announced its first non-compliance decision under the Digital Services Act, fining X €120 million for deceptive practices and lack of transparency. 1/

ec.europa.eu/commission/p...
Commission fines X €120 million under the Digital Services Act
Today, the Commission has issued a fine of €120 million to X for breaching its transparency obligations under the Digital Services Act (DSA).
ec.europa.eu
December 5, 2025 at 12:15 PM
Reposted by Fil Menczer
Don't let anyone tell you that the Commission's DSA enforcement against X is about speech or censorship.

That would, indeed, be interesting. But this is just the EU enforcing some normal, boring laws that would get bipartisan support in the U.S. (I bet similar bills *have* had that support.) 1/
December 5, 2025 at 2:58 PM
We apologize for the interruption of today's OSoMe Awesome Speaker talk due to an IU power outage. We will restart as soon as power comes back.
December 3, 2025 at 5:44 PM
Reposted by Fil Menczer
It represents the first detailed study of how a social platform transitions from being invitation-only to open to the public, with a focus on user activities and the evolution of the platform.
December 1, 2025 at 12:14 PM
Reposted by Fil Menczer
I am pleased to announce that our work “A longitudinal analysis of misinformation, polarization, and toxicity on Bluesky after its public launch” has been accepted at #OSNEM.
The paper is an extension of our previous work presented at #ASONAM.
Don't miss it: www.sciencedirect.com/science/arti...
A longitudinal analysis of misinformation, polarization and toxicity on Bluesky after its public launch
Bluesky is a decentralized, Twitter-like social media platform that has rapidly gained popularity. Following an invite-only phase, it officially opene…
www.sciencedirect.com
December 1, 2025 at 12:14 PM
Reposted by Fil Menczer
Even though our website is down, registration for our next OSoMe Awesome Speaker is open!

📅 Dec 3 @ 12pm ET
🎤 James Evans (UChicago)
📖 Information Laundering: How Misinformation Gets Cleaned and Dirty Across Digital and Policy Ecosystems

Register directly through Zoom: iu.zoom.us/meeting/regi...
December 1, 2025 at 5:10 PM
Reposted by Fil Menczer
Exploring GPT citation patterns...

Cited sources were mostly fact-checking outlets, mainstream news, and government sites.

They have high reliability scores (NewsGuard) and tend to align with the political left.
November 29, 2025 at 10:06 PM
Reposted by Fil Menczer
Reasoning didn’t help much.

Web search improved GPT models, but Gemini saw no benefit—likely because it failed to return sources for most queries.

GPT models often return citations, and many point to the very PolitiFact article containing the fact check.

Again, curated info helps a lot.
November 29, 2025 at 10:06 PM
Reposted by Fil Menczer
"Standard" models—i.e., models that do not leverage reasoning or web-search abilities—perform poorly when predicting PolitiFact's fact-checking label, with macro F1 typically ranging from 0.1 to 0.3.

However, when we give models curated fact-checking evidence, performance improves dramatically.
November 29, 2025 at 10:06 PM
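The macro F1 metric used in the thread above averages per-label F1 so that rare labels count as much as common ones. A minimal sketch in pure Python, assuming PolitiFact's six-point rating scale as the label set (label names here are illustrative, not taken from the paper):

```python
LABELS = ["true", "mostly-true", "half-true",
          "mostly-false", "false", "pants-fire"]

def macro_f1(y_true, y_pred, labels=LABELS):
    """Unweighted average of per-label F1 scores.

    Each label contributes equally, so poor performance on a rare
    label (e.g. "pants-fire") drags the score down as much as poor
    performance on a common one.
    """
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

With six classes, a model that always predicts one label gets near-zero macro F1 even at decent accuracy, which is why the 0.1-0.3 range reported above signals poor label-level discrimination.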
Reposted by Fil Menczer
🚨 New working paper 🚨

Can LLMs with reasoning + web search reliably fact-check political claims?

We evaluated 15 models from OpenAI, Google, Meta, and DeepSeek on 6,000+ PolitiFact claims (2007–2024).

Short answer: Not reliably—unless you give them curated evidence.

arxiv.org/abs/2511.18749
November 29, 2025 at 10:06 PM
Reposted by Fil Menczer
Happy to announce the 2nd edition of our Summer School in Computational Social Science, which will take place in the beautiful Villa del Grumello on Lake Como, June 22-26, 2026!

*** DEADLINE FOR APPLICATION: February 15, 2026 (firm deadline) ***

More details here:
css2.lakecomoschool.org
November 27, 2025 at 5:21 PM
I don't think the fight against disinformation and foreign malign influence campaigns is *compatible* with free speech. I think it is *necessary* to protect speech.
www.nytimes.com/2025/11/17/u...
France Steps Up Fight Against Disinformation as U.S. Pulls Back, Official Says
www.nytimes.com
November 24, 2025 at 2:52 AM
IU shuts down all DEI activities and scrubs DEI mentions from websites and courses, despite no laws requiring it.

iubaaup.org/2025/10/23/e...
November 17, 2025 at 11:25 PM
Don't miss our OSoMe Awesome Speaker Michelle Amazeen this Wednesday! Register at iu.zoom.us/meeting/regi...
November 17, 2025 at 3:53 PM