Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology.
https://www.DaveRand.org/
David G. Rand was previously the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at the Massachusetts Institute of Technology.
Conspiracy beliefs famously resist correction, right?
WRONG: We show brief conversations with GPT-4 reduce conspiracy beliefs by ~20%!
- Lasts over 2 months
- Works on entrenched beliefs
- Tailored AI responses rebut the specific evidence offered by believers
www.science.org/doi/10.1126/...
1/
1) Groups underrepresented in science (women, African Americans, lower SES, rural) have been less trusting of science.
2) If you improve representation in science, you improve trust among those groups.
www.nature.com/articles/s41...
(@cornellbowers.bsky.social)
news.cornell.edu/stories/2025...
76k UK participants, 707 political issues, 19 different LLMs
What makes LLMs persuasive?
• Model size + personalization matter a bit
• BUT post-training + info-dense prompting increase persuasion far more
• More persuasion → lower factual accuracy
Plus, AI is getting more persuasive as models grow bigger & persuasion effects lasted over a month.
www.science.org/doi/epdf/10....