Walter Quattrociocchi
walter4c.bsky.social
Full Professor of Computer Science @Sapienza University of Rome.
Data Science, Complex Systems
Grokipedia is not the problem.
It’s the signal.
What we’re seeing isn’t about AI or neutrality — it’s the rise of the post-epistemic web.
The question isn’t: is it true?
The question is: who made the model?
October 29, 2025 at 8:05 AM
Together, these papers suggest a transformation:
→ Knowledge is no longer verified, but simulated
→ Platforms no longer host views, they shape belief architectures
→ Truth is not disappearing. It’s being automated, fragmented, and rebranded
October 29, 2025 at 8:05 AM
Paper 2 — Ideological Fragmentation of the Social Media Ecosystem
We analyzed 117M posts from 9 platforms (Facebook, Reddit, Parler, Gab, etc).
Some now function as ideological silos — not just echo chambers, but echo platforms.
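The chamber-vs-platform distinction can be sketched in a few lines. This is a toy illustration with invented numbers, not the paper's method: an echo chamber is ideological homogeneity inside one community, while an echo platform is homogeneity across the platform's whole user base.

```python
# Toy sketch of the echo-chamber vs echo-platform distinction.
# All platform names, leanings, and the 0.9 threshold are invented.

def dominant_share(leanings: list[str]) -> float:
    """Fraction of users holding the most common leaning."""
    return max(leanings.count(l) for l in set(leanings)) / len(leanings)

platforms = {
    "mixed_platform": ["left"] * 48 + ["right"] * 52,   # hypothetical
    "echo_platform":  ["right"] * 95 + ["left"] * 5,    # hypothetical
}

for name, users in platforms.items():
    tag = "echo platform" if dominant_share(users) > 0.9 else "mixed"
    print(f"{name}: dominant-leaning share {dominant_share(users):.0%} -> {tag}")
```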
www.nature.com/articles/s41...
Ideology and polarization set the agenda on social media - Scientific Reports
October 29, 2025 at 8:05 AM
Paper 1 — The Simulation of Judgment in LLMs
We benchmarked 6 large language models against experts and humans.
They often agree on outputs — but not on how they decide.
Models rely on lexical shortcuts, not reasoning.
We called this epistemia.
www.pnas.org/doi/10.1073/...
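What a "lexical shortcut" can look like, as a toy illustration only: the cue lists, headlines, and human labels below are invented for demonstration and are not the paper's benchmark. The point is that a judge keyed to surface vocabulary can agree with human labels on easy items without ever engaging in reasoning.

```python
# Toy "judge" that scores reliability purely from lexical cues.
# Hypothetical cue lists; real lexical shortcuts would be learned, not hand-coded.
SENSATIONAL = {"shocking", "exposed", "secret", "they", "truth"}
SOBER = {"study", "data", "according", "report", "analysis"}

def lexical_judge(headline: str) -> str:
    """Label a headline 'unreliable' or 'reliable' from word lists alone."""
    words = set(headline.lower().split())
    return "unreliable" if len(words & SENSATIONAL) > len(words & SOBER) else "reliable"

headlines = [  # invented (headline, human label) pairs
    ("shocking secret they don't want exposed", "unreliable"),
    ("new study reports data on vaccination rates", "reliable"),
    ("analysis of economic report released today", "reliable"),
]

# The shortcut can match human labels on easy items while deciding nothing like a human does.
agreement = sum(lexical_judge(h) == label for h, label in headlines) / len(headlines)
print(f"agreement with human labels: {agreement:.0%}")
```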
October 29, 2025 at 8:05 AM
We studied both in two recent papers, on @PNASNews and @PNASNexus:
Epistemia — the illusion of knowledge when LLMs replace reasoning with surface plausibility
Echo Platforms — when whole platforms, not just communities, become ideologically sealed
October 29, 2025 at 8:05 AM
Two structural shifts are unfolding right now:
Platforms are fragmenting into echo platforms — entire ecosystems aligned around ideology.
LLMs are being used to simulate judgment — plausible, fluent, unverifiable.
October 29, 2025 at 8:05 AM
We don’t know your approach.
Ours assumes that to understand the perturbation, you first need to operationalize the task and compare how humans and models diverge.
That’s the empirical ground — not a belief about what LLMs “are.”
October 21, 2025 at 5:52 AM
Coming from misinfo/polarization,
we’re not asking what LLMs are.
We’re asking: what happens when users start trusting them as if they were search engines?
We compare LLMs and humans on how reliability and bias are judged.
That’s where the illusion epistemia begins.
October 19, 2025 at 2:45 PM
Yes, we include recent works on evaluation heuristics and bias in LLMs.
Our focus is on how LLM outputs simulate judgment.
We compare LLMs and humans directly, under identical pipelines, on the same dataset.
“May rely” is empirical caution.
The illusion of reasoning is the point (not the premise).
October 19, 2025 at 2:29 PM
Absolutely, we build on that line.
What we address is how these dynamics unfold now, at scale, where reliability is operationalized.
The novelty isn’t saying “LLMs aren’t agents.”
It’s showing how and when humans treat them as if they were.
Plausibility replacing reliability. Epistemia.
October 19, 2025 at 2:15 PM
Thank you for sharing.
We explore the perturbation introduced when judgment is delegated to LLMs.
We study how the concept of reliability is operationalized in practice (moderation, policy, ranking).
Epistemia is a name for judgment without grounding.
IMHO it is already here.
(a new layer of the infodemic).
October 18, 2025 at 1:15 PM
6/ Curious about the details?
Read the full paper here: link.springer.com/article/10.1...

We hope this sparks new conversations about the value of attention in the digital age.

Let us know your thoughts! 💬
Evaluating the effect of viral posts on social media engagement - Scientific Reports
January 3, 2025 at 2:04 PM
5/ 💡 What does this mean?
In the attention economy, chasing virality is risky. Instead, building consistent, sustained engagement is key to forming lasting connections with users.
January 3, 2025 at 2:04 PM
4/ Rapid viral effects fade quickly, while slower, gradual processes last longer.
This suggests that collective attention is elastic and influenced by pre-existing engagement trends.

A "like" or viral post is often fleeting—it doesn’t guarantee long-term impact.
January 3, 2025 at 2:04 PM
3/ Key findings:

Viral events rarely lead to sustained growth in engagement.
We identified two types of virality:
1️⃣ "Loaded" virality: The final burst after a growth phase, followed by a decline.
2️⃣ "Sudden" virality: Unexpected events that briefly reactivate user attention.
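The two burst types above can be sketched with a crude rule of thumb. This is illustrative only: the paper's classification is statistical, while here the series, the 3-day window, and the 0.5 threshold are all invented.

```python
# Toy classifier: "loaded" virality follows a growth phase; "sudden" virality
# erupts from a flat baseline. Window size and threshold are invented.

def classify_virality(series: list[float]) -> str:
    """Crude rule: if the window before the peak already shows strong growth,
    call the burst 'loaded'; otherwise 'sudden'."""
    peak = series.index(max(series))
    if peak < 3:
        return "sudden"  # no room for a growth phase before the burst
    pre = series[peak - 3:peak]
    # fraction of day-over-day rises in the run-up to the peak
    rises = sum(b > a for a, b in zip(pre, pre[1:]))
    return "loaded" if rises / max(1, len(pre) - 1) > 0.5 else "sudden"

loaded = [10, 14, 19, 27, 80, 40, 20]   # steady growth, final burst, decline
sudden = [10, 11, 9, 10, 90, 30, 12]    # flat baseline, out-of-nowhere burst
print(classify_virality(loaded))  # loaded
print(classify_virality(sudden))  # sudden
```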
January 3, 2025 at 2:04 PM
2/ 📊 We analyzed over 1000 European news outlets on Facebook & YouTube (2018-2023), using a Bayesian structural time series model.

Our goal: Understand the impact of viral posts on user engagement, from short-term spikes to long-term trends.
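The intervention logic can be sketched in a drastically simplified form. The study uses a Bayesian structural time series model; this toy replaces it with a plain least-squares trend on invented engagement numbers, just to show the counterfactual idea: project the pre-event trend forward and measure the viral post's excess over it.

```python
# Simplified stand-in for the intervention analysis (NOT the paper's model):
# fit a linear trend to pre-event engagement, project it over the post-event
# window, and measure the excess over that counterfactual. Numbers are invented.

def linear_fit(y: list[float]) -> tuple[float, float]:
    """Ordinary least-squares slope and intercept for y over t = 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    slope = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y)) / \
            sum((t - t_mean) ** 2 for t in range(n))
    return slope, y_mean - slope * t_mean

pre = [100, 104, 103, 108, 110, 113, 115]   # daily engagement before the viral post
post = [180, 150, 135, 128, 130]            # observed engagement afterwards

slope, intercept = linear_fit(pre)
forecast = [intercept + slope * t for t in range(len(pre), len(pre) + len(post))]
excess = [o - f for o, f in zip(post, forecast)]
print(f"excess engagement per day: {[round(e, 1) for e in excess]}")
# the excess shrinks toward zero: the burst fades back to the pre-existing trend
```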
January 3, 2025 at 2:04 PM