They skip straight to the answer.
No evaluation.
No grounding.
Just fluent output.
When generation bypasses judgment, knowledge becomes a performance.
Welcome to Epistemia.
PNAS commentary ⬇️
www.pnas.org/doi/10.1073/...
In delegating, are we confusing linguistic plausibility with epistemic reliability?
The age of "epistemia"
www.pnas.org/doi/epdf/10....
www.nature.com/articles/s43...
"driven by lexical and statistical associations rather than deliberative reasoning"
LLMs drop the cost of “knowledge-like” content to zero.
Result: Epistemia — when language merely sounds like knowledge.
Outsourcing shifts decisions from evidence → plausibility.
PNAS: https://www.pnas.org/doi/10.1073/pnas.1517441113
It’s the signal.
What we’re seeing isn’t about AI or neutrality — it’s the rise of the post-epistemic web.
The question isn’t: is it true?
The question is: who made the model?
→ Knowledge is no longer verified, but simulated
→ Platforms no longer host views, they shape belief architectures
→ Truth is not disappearing. It’s being automated, fragmented, and rebranded
We analyzed 117M posts from 9 platforms (Facebook, Reddit, Parler, Gab, etc).
Some now function as ideological silos — not just echo chambers, but echo platforms.
www.nature.com/articles/s41...
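A minimal sketch of what the "echo platform" idea means operationally, with made-up leaning scores (the study's actual data and measures differ): an echo platform shows ideological concentration across the whole platform, not just inside one community.

```python
# Hedged sketch, not the paper's pipeline. Assume each post gets a
# political-leaning score in [-1, 1] (e.g., from the news source it links to).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-platform distributions of post leanings.
platforms = {
    "platform_a": rng.normal(0.0, 0.45, 10_000),   # mixed audience
    "platform_b": rng.normal(0.7, 0.12, 10_000),   # concentrated on one side
}

for name, leanings in platforms.items():
    leanings = np.clip(leanings, -1, 1)
    # Echo-platform signature: high mean |leaning| with low spread,
    # i.e., the entire ecosystem sits on one side, not just a silo within it.
    print(f"{name}: mean={leanings.mean():+.2f}, std={leanings.std():.2f}")
```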
We benchmarked 6 large language models against experts and humans.
They often agree on outputs — but not on how they decide.
Models rely on lexical shortcuts, not reasoning.
We called this epistemia.
www.pnas.org/doi/10.1073/...
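A toy illustration of the two levels of comparison (the texts, labels, and the bag-of-words probe below are illustrative assumptions, not our benchmark): agreement on outputs can coexist with a purely lexical decision process.

```python
# Hedged sketch with synthetic data. Two checks:
# 1) output-level agreement between experts and an LLM;
# 2) whether a surface-features-only probe can reproduce the LLM's labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

texts = [
    "peer-reviewed study with cited data",
    "shocking secret they don't want you to know",
    "official statistics released by the agency",
    "miracle cure doctors hate, share before deleted",
]
human_labels = [1, 0, 1, 0]  # expert reliability judgments (1 = reliable)
llm_labels = [1, 0, 1, 0]    # the LLM's judgments on the same items

# Output-level agreement can look high...
print("kappa(expert, LLM):", cohen_kappa_score(human_labels, llm_labels))

# ...while a bag-of-words probe explains the LLM's labels from lexical
# features alone, consistent with shortcuts rather than deliberative reasoning.
X = TfidfVectorizer().fit_transform(texts)
probe = LogisticRegression(max_iter=1000).fit(X, llm_labels)
print("lexical probe accuracy:", probe.score(X, llm_labels))
```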
@PNASNews and @PNASNexus:
Epistemia — the illusion of knowledge when LLMs replace reasoning with surface plausibility.
Echo Platforms — when whole platforms, not just communities, become ideologically sealed.
Platforms are fragmenting into echo platforms — entire ecosystems aligned around ideology.
LLMs are being used to simulate judgment — plausible, fluent, unverifiable.
An AI-built encyclopedia, pitched as a “neutral” alternative to Wikipedia.
But neutrality is not the point.
What happens underneath is.
👇
Ours assumes that to understand the perturbation, you first need to operationalize the task and compare how humans and models diverge.
That’s the empirical ground — not a belief about what LLMs “are.”
Of course. That was never the point.
The point is: we’re already using them as if they do —
to moderate, to classify, to prioritize, to decide.
That’s not a model problem.
It’s a systemic one.
The shift from verification to plausibility is real.
Welcome to Epistemia.
We’re not asking what LLMs are.
We’re asking: what happens when users start trusting them as if they were search engines?
We compare LLMs and humans on how reliability and bias are judged.
That’s where the illusion we call epistemia begins.
Our focus is on how LLM outputs simulate judgment.
We compare LLMs and humans directly, under identical pipelines, on the same dataset.
“May rely” is empirical caution.
The illusion of reasoning is the point (not the premise).
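A minimal sketch of what "identical pipelines" means in practice; judge_with_llm and the item list are placeholders, not the paper's code.

```python
# Hedged sketch: same items, same instructions, same label schema
# for humans and models, so any divergence reflects the judging process.
ITEMS = [
    {"id": 1, "text": "article text ..."},
    {"id": 2, "text": "another article ..."},
]
INSTRUCTIONS = "Rate the source's reliability: high or low."

def judge_with_llm(text: str) -> str:
    # Placeholder: send INSTRUCTIONS + text to any chat model and parse
    # the reply into the same label schema the human annotators used.
    return "high"

human_labels = {1: "high", 2: "low"}  # expert annotations, same schema
model_labels = {item["id"]: judge_with_llm(item["text"]) for item in ITEMS}

# Agreement is measured on identical inputs and instructions,
# so a gap points at how the judgment is made, not at task framing.
matches = [human_labels[k] == model_labels[k] for k in human_labels]
print("agreement rate:", sum(matches) / len(matches))
```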
What we address is how these dynamics unfold now, at scale, where reliability is operationalized.
The novelty isn’t saying “LLMs aren’t agents.”
It’s showing how and when humans treat them as if they were.
Plausibility replacing reliability. Epistemia.
We explore the perturbation introduced when judgment is delegated to LLMs.
We study how the concept of reliability is operationalized in practice (moderation, policy, ranking).
Epistemia is a name for judgment without grounding.
IMHO it is already here.
(a new layer of the infodemic).
⛓️💥 For online attendees, please register here: bit.ly/3FomgkF
Read the full paper here: link.springer.com/article/10.1...
We hope this sparks new conversations about the value of attention in the digital age.
Let us know your thoughts! 💬