Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology.
https://www.DaveRand.org/
David G. Rand is the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at Massachusetts Institute of Technology.
Conspiracy beliefs famously resist correction, ya?
WRONG: We show brief convos w GPT-4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers
www.science.org/doi/10.1126/...
1/
[X->BSky repost]
Reposted by David G. Rand
onlinelibrary.wiley.com/doi/10.1111/...
Reposted by David G. Rand
No: research by Zhang & @dgrand.bsky.social suggests that simple preferential exposure to information has the same effect:
buff.ly/vB18poi
Reposted by David G. Rand, Brian Keegan
congrats to our lead @akariasai.bsky.social & team of students and Ai2 researchers/engineers
www.nature.com/articles/s41...
Reposted by Mohsen Mosleh
and you can check out other papers from my group on human-AI interaction here: docs.google.com/document/d/1...
📌AI fact-checking on X is widespread
📌Models are reasonably accurate, and likely to improve
📌But usage and response are highly polarized
📌First indication that AI is heading in the direction of other media: “different political tribes, different AI referees”
Similarly, trust in Grok is highly polarized
Grok bot agrees with fact-checkers 55% of the time
Perplexity bot agrees 58%
Fact-checkers agree with each other 64%
So: signal, but not perfect
BUT the Grok-4 API agrees 64% - as good as inter-fact-checker agreement! Promising for AI fact-checking...
First finding: Fact-checking is not a niche use case - fact-check requests make up ~7.6% of all direct interactions with these LLM bots on X. The primary focus is politics and current events
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
📌Usage is polarized: Grok users are more likely to be Reps
📌BUT Rep posts are rated as false more often, even by Grok
📌Bot agreement with fact-checks is OK but not great; the APIs match fact-checker-level agreement
osf.io/preprints/ps...
Reposted by David G. Rand, Gordon Pennycook, Mark J. Brandt
‘LLMs can effectively convince people to believe conspiracies’
But telling the AI not to lie might help.
Details in thread