Andrew Little
@anthlittle.bsky.social
Prof at UC Berkeley. Formal theory, political beliefs, democracy.

Associate Editor at @ajpseditor.bsky.social

https://anthlittle.github.io/
May I tap this sign 😉
September 30, 2025 at 4:08 AM
Reposted by Andrew Little
Ok, so I'm going to do a real context + write-up, but for now, here's what some of these things look like.

To start, my data source is DCinbox, which covers ~208,000 official e-newsletters over the past 15 years.
August 22, 2025 at 6:13 PM
Definitely some discussion of this in global games papers, but I'm blanking on specific cites :)
August 22, 2025 at 7:06 PM
Sorry, just saw this! I think this idea could be strategic in a strict sense: e.g., if I want to participate in political movements that succeed even though I'm not pivotal ("warm glow" effects, etc.), beliefs about whether a movement will succeed may drive participation.
August 22, 2025 at 7:06 PM
For a shorter description of the key idea, with some toy models applying it to persuasion and to whether "thinking" leads to more or less confidence, we also have a companion AEA P&P piece here:
anthlittle.github.io/files/augenb...
August 22, 2025 at 6:59 PM
In sum, the world is complicated and we need to make simplifying assumptions to understand it. This is a key driver (if not the key driver) of both overconfidence and disagreement in beliefs. My hunch is that this also explains much disagreement in politics.
August 22, 2025 at 6:59 PM
The observational data: the Survey of Professional Forecasters lets us test some other predictions from the theory. As predicted, “excess” MSE, beyond what forecasters' reported variance alone would imply, equals twice the disagreement across individual forecasts. (Honestly, we were shocked at how well the data fits the theory!)
August 22, 2025 at 6:58 PM
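A back-of-the-envelope version of that prediction, in notation I'm adding here (the paper's exact setup may differ): let θ be the outcome, μ_i forecaster i's mean, μ̄ the consensus mean, v the average reported (within-model) variance, and D the disagreement, avg_i E[(μ_i − μ̄)²]. Since θ − μ_i = (θ − μ̄) + (μ̄ − μ_i) and the cross term vanishes when averaging over forecasters:

    avg MSE = E[(θ − μ̄)²] + D
            = (within-model variance + across-model variance) + D
            = v + D + D      (theory: across-model variance = D)
            = v + 2D

so MSE in excess of reported variance should be exactly twice the disagreement.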
Here is a graphical version of the key result. Participants are unresponsive to changes in across-model uncertainty (left & middle panels), but reasonably responsive to changes in within-model uncertainty (right panel).
August 22, 2025 at 6:57 PM
A nice thing about this design is that we can also independently vary across-model uncertainty by changing the prediction date when the trendline is hidden, and within-model uncertainty by changing the noise when it is shown.
August 22, 2025 at 6:55 PM
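Here is one way those two knobs map onto the two uncertainty types, under a simple Bayesian reading I'm assuming (trend y_t = a + b·t + ε_t with ε_t ~ N(0, σ²), prediction date t*):

    Trendline shown:  predictive variance is just σ², so the noise
                      knob moves within-model uncertainty only.
    Trendline hidden: predictive variance ≈ Var(â) + t*²·Var(b̂)
                      + 2t*·Cov(â, b̂) + σ², which grows as t* moves
                      away from the observed data, so the date knob
                      moves across-model uncertainty while leaving
                      σ² fixed.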
The experiment: participants predict future “sales” data generated by a linear trend plus noise, and report their uncertainty. Sometimes they see the trendline (so only within-model uncertainty matters). Sometimes they don’t (introducing across-model uncertainty).
August 22, 2025 at 6:54 PM
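A minimal sketch of that data-generating process in Python (function and parameter names are mine and purely illustrative, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)

    def make_stimulus(n_obs=20, horizon=5, noise_sd=5.0, show_trend=True):
        # "Sales" follow a random linear trend plus Gaussian noise.
        a = rng.uniform(50.0, 100.0)                 # intercept
        b = rng.uniform(-2.0, 2.0)                   # slope
        t = np.arange(n_obs)
        sales = a + b * t + rng.normal(0.0, noise_sd, size=n_obs)
        t_star = n_obs - 1 + horizon                 # prediction date
        truth = a + b * t_star + rng.normal(0.0, noise_sd)
        trend = (a, b) if show_trend else None       # hidden in some conditions
        return t, sales, t_star, truth, trend

Participants see (t, sales), plus the trendline when show_trend=True, and forecast the value at t_star along with their uncertainty.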
The theory produces many predictions, but the core one is that under broad conditions, across-model uncertainty, overprecision, and disagreement move together. Under stronger conditions, they exactly coincide.
August 22, 2025 at 6:54 PM
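One way to see the exact-coincidence case, under extra assumptions I'm adding for illustration: write W = E_m[Var(θ | d, m)] for within-model variance and A = Var_m(E[θ | d, m]) for across-model variance, so the true posterior variance is W + A. An agent who conditions on a single model m̂ reports Var(θ | d, m̂), which averages to W. Then:

    overprecision = true variance − expected reported variance
                  = (W + A) − W = A
    disagreement  = Var_i( E[θ | d, m_i] ) = A, if each agent's model
                    m_i is an independent draw from the same
                    distribution over models

so across-model uncertainty, overprecision, and disagreement all equal A.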
In particular, we assume people account for uncertainty given their assumptions (“within-model uncertainty”) but neglect the fact that other assumptions could imply different beliefs (“across-model uncertainty”).
August 22, 2025 at 6:53 PM
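In symbols (the law of total variance, with data d and model m; notation mine):

    Var(θ | d) = E_m[ Var(θ | d, m) ]  +  Var_m( E[θ | d, m] )
                    (within-model)          (across-model)

A modeler who conditions on one model m̂ reports only Var(θ | d, m̂) and neglects the second term.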
The theory: forming beliefs requires assumptions (or a "model") about how data map to outcomes. We develop a model with strong simplifying assumptions to explore how using models to make simplifying assumptions affects beliefs.
August 22, 2025 at 6:53 PM
Sure, but for that goal we could just arbitrarily delay making all decisions, and if you want to make that case, you are on your own 😉
July 16, 2025 at 4:25 AM
Also apologies to @edogrillo.bsky.social for accidentally demoting him to second author!
May 14, 2025 at 7:54 PM
Yeah, I've also had moments of "sounds like Grillo and Prato's AJPS," I think for the same reason Adam thought of it
May 14, 2025 at 7:49 PM