David Rand
@dgrand.bsky.social

Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology.
https://www.DaveRand.org/

David G. Rand is the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at Massachusetts Institute of Technology.

Source: Wikipedia
Pinned
🚨In Science🚨
Conspiracy beliefs famously resist correction, ya?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers
www.science.org/doi/10.1126/...
1/
[X->BSky repost]

Reposted by David G. Rand

Interesting new paper in Political Psychology from @benmtappin.bsky.social and Ryan McKay investigating party cues

onlinelibrary.wiley.com/doi/10.1111/...

Reposted by David G. Rand

Does a motivation to persuade someone of a view we do not hold cause us to deceptively self-persuade to shift our view?

No, research by Zhang & @dgrand.bsky.social suggests—simple preferential exposure to information has the same effect:

buff.ly/vB18poi

Reposted by David G. Rand

our open model proving out specialized RAG LMs over scientific literature has been published in Nature ✌🏻

congrats to our lead @akariasai.bsky.social & team of students and Ai2 researchers/engineers

www.nature.com/articles/s41...

Reposted by Mohsen Mosleh

Grok fact-checks our paper on Grok fact-checking - and it approves!

Reposted by David G. Rand

Stay tuned for another paper digging deep into the fact-checking performance of a bunch of different API models

Reposted by David G. Rand

Grateful as always to amazing coauthors @thomasrenault.bsky.social @mmosleh.bsky.social
and you can check out other papers from my group on human-AI interaction here: docs.google.com/document/d/1...

SUMMARY:
📌AI fact-checking on X is widespread
📌Models are reasonably accurate, and likely to improve
📌But usage and response are highly polarized
📌First indication that AI is heading in the direction of other media: “different political tribes, different AI referees”

In a survey exp (N=1,592 US adults), LLM fact-checks meaningfully shift beliefs in the direction of the fact-check - BUT responses to Grok fact-checks become polarized by partisanship when the model identity is disclosed.
Similarly, trust in Grok is highly polarized

Compared to professional fact-checkers on a 100-tweet sample:
Grok bot agrees 55%
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%

So: signal, but not perfect
BUT the Grok-4 API agrees 64% - as good as inter-fact-checker agreement! Promising for AI fact-checking...
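For readers curious how a headline number like "agrees 55%" is typically derived, here is a minimal Python sketch (not from the paper; the verdict labels and data are hypothetical) of simple percent agreement between two sets of ratings on the same tweets:

```python
# Minimal sketch (not the paper's code): percent agreement between
# two raters' verdicts on the same items. Labels and data are made up.

def percent_agreement(verdicts_a, verdicts_b):
    """Share of items where two raters give the same verdict."""
    assert len(verdicts_a) == len(verdicts_b)
    matches = sum(a == b for a, b in zip(verdicts_a, verdicts_b))
    return matches / len(verdicts_a)

# Hypothetical verdicts on five tweets ("true" / "false" / "unverifiable")
grok_bot      = ["false", "true", "false", "unverifiable", "true"]
fact_checkers = ["false", "true", "true",  "false",        "true"]

print(f"{percent_agreement(grok_bot, fact_checkers):.0%}")  # 60% on this toy sample
```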

Usage is polarized: Reps are 59% more likely to use Grok, Dems 16% more likely to use Perplexity. BUT Reps are ~2x more likely to be targeted by fact-check requests, and Rep posts are rated as false more often - even by Grok. This extends prior results on partisan asymmetry in misinformation

We examine *ALL* English tags of Grok+Perplexity on X Feb–Sep 2025
First finding: Fact-checking is not a niche use case - fact-check requests make up ~7.6% of all direct interactions with these LLM bots on X. The primary focus is on politics and current events
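The filtering step behind a count like this is conceptually simple; below is a minimal, hypothetical Python sketch of how one might heuristically flag fact-check-style requests among replies that tag a bot. The phrase list, handles, and data are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch, not the authors' pipeline: heuristically flag
# fact-check-style requests among replies that tag an AI bot on X.
# Phrase list, handles, and example data are assumptions.
import re

FACTCHECK_PATTERNS = re.compile(
    r"\bis (this|that|it) (true|real|accurate)\b|\bfact[- ]?check\b",
    re.IGNORECASE,
)

def is_factcheck_request(reply_text: str) -> bool:
    """Does a reply tagging the bot read like a fact-check request?"""
    return bool(FACTCHECK_PATTERNS.search(reply_text))

replies = [
    "@grok is this true?",
    "@grok write me a poem about cats",
    "@AskPerplexity fact check this please",
]
share = sum(map(is_factcheck_request, replies)) / len(replies)
print(f"{share:.0%} of these replies look like fact-check requests")  # 67%
```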
🚨New WP "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
📌Usage is polarized, Grok users more likely to be Reps
📌BUT Rep posts rated as false more often—even by Grok
📌Bot agreement with factchecks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...
Please contact Nina if you're interested in working with us! Much of this work is also with @dgrand.bsky.social & @tomcostello.bsky.social, and others! Very fun collaborative environment. And Nina is wonderful to work with!! (She is also the coolest among us, FWIW)

New on @indicator.media: "@grok is this true" was the single most frequent reply tagging X's AI chatbot in the six months following its launch.
@Grok is this true: How X’s chatbot performs as a fact-checking tool
New research explores whether the chatbot might replace the crowdsourced fact-checking program – and what that might mean for getting to the truth on X
indicator.media

Reposted by David G. Rand

If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...

‘LLMs can effectively convince people to believe conspiracies’

But telling the AI not to lie might help.

Details in thread