David Rand
@dgrand.bsky.social

Prof at Cornell studying how human-AI dialogues can correct inaccurate beliefs, why people share falsehoods, and ways to reduce political polarization and promote cooperation. Computational social science + cognitive psychology.
https://www.DaveRand.org/

David G. Rand is the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at Massachusetts Institute of Technology.

Source: Wikipedia
Pinned
🚨In Science🚨
Conspiracy beliefs famously resist correction, ya?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers
www.science.org/doi/10.1126/...
1/
[X->BSky repost]
Recently accepted by #QJE, “Marginal Returns to Public Universities,” by Jack Mountjoy: doi.org/10.1093/qje/...
Marginal Returns to Public Universities
Abstract. This paper studies the returns to enrolling in American public universities by comparing the long-term outcomes of barely admitted versus barely
doi.org

Reposted by David G. Rand

New paper out in @science.org! We unveil the online manipulation market with the Cambridge Online Trust & Safety Index (COTSI). We show in real time the cost of purchasing fake accounts across every social platform around the world - so they can be held accountable

www.science.org/doi/10.1126/...
Mapping the online manipulation economy
A market perspective on digital manipulation may help improve online trust and safety
www.science.org
Paper out today in @nathumbehav.nature.com:

1) Groups that are underrepresented in science (women, African Americans, lower SES, rural) have been less trusting of science.

2) If you improve representation in science, you improve trust among those groups.

www.nature.com/articles/s41...
Representation in science and trust in scientists in the USA - Nature Human Behaviour
Druckman et al. document gaps in trust in scientists in the USA. People from groups less represented among scientists (for example, women and those with lower economic status) are less trusting. Incre...
www.nature.com

Our new study provides rare causal evidence about NYC’s speed camera program. We find large reductions in collisions (30%) and injuries (16%) near intersections with cameras. www.pnas.org/doi/abs/10.1... @astagoff.bsky.social @brendenbeck.bsky.social 🧪
Can speed cameras make streets safer? Quasi-experimental evidence from New York City | PNAS
Each year, approximately 40,000 people die in vehicle collisions in the United States, generating $340 billion in economic costs. To make roads saf...
www.pnas.org

Reposted by David G. Rand

Yes for sure, they have a proprietary interest in keeping the prompts hidden. But perhaps regulation could force them to reveal their prompts? There's also the technical question of whether there is a way to make prompt reveals credible (ie prevent lying about the prompt a model uses)

Reposted by David G. Rand

I don't think it's about thinking the AI is unbiased - in the conspiracy context, we show that we get just as much debunking if people think it's a human expert (academic.oup.com/pnasnexus/ar...) or if we tell them the AI has political bias
academic.oup.com

Regulation is not my area of expertise, but IMO there is little hope of regulating the companies that make AI because powerful open source models are already out there. Focusing on deployers seems key: e.g., transparency about model prompts, so that users can understand the biases, is promising

Reposted by Alexander Wuttke

Yes I totally agree, but to me that's the key learning point that I want laypeople to take away from this - depending on what the model is instructed to do, it can sway you either way. So it's critical to know who is instructing any model you're talking to and have some sense of their agenda

Ya I think this is a reasonable read of the findings...

Reposted by David G. Rand

AI chatbots can sway voters – in either direction: “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said @dgrand.bsky.social.
(@cornellbowers.bsky.social)
news.cornell.edu/stories/2025...
AI chatbots can effectively sway voters – in either direction | Cornell Chronicle
A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds.
news.cornell.edu
A set of large-scale experiments in the UK, US & Poland where people chatted with LLMs about political topics found that AI is very good at persuasion, primarily by providing lots of fact-based claims

Plus, AI is getting more persuasive as models grow bigger, & persuasion effects lasted over a month.

Thanks for the Perspective!!

Reposted by David G. Rand

In Science:
www.science.org/doi/epdf/10....

76k UK participants, 707 political issues, 19 different LLMs

What makes LLMs persuasive?
• Model size + personalization matter a bit
• BUT post-training + info-dense prompting increase persuasion far more
• More persuasion → lower factual accuracy