Brian Guay
@brianguay.bsky.social
Assistant Professor of Political Science @ UNC Chapel Hill | Public opinion, behavior, polarization, misinformation | Last name pronounced without the u | brianguay.com
@fguelzau.bsky.social Here's the ungated draft of the PNAS paper: www.brianguay.com/files/guay_2...
September 18, 2025 at 4:59 PM
thanks so much Conrad, I'll fix this!
April 8, 2025 at 11:42 PM
Thanks to my fantastic co-authors @tylermarghetis.bsky.social, @david-landy.bsky.social, Cara Wong, and everyone who gave us feedback over many years

Ungated earlier version of the paper here: www.brianguay.com/files/guay_2...
April 7, 2025 at 12:00 PM
Our findings suggest that the public knows more about politics than we give them credit for:

People make errors when estimating politically relevant percentages, but this is due to the format of the question, not to underlying misinformation about what they are estimating
April 7, 2025 at 12:00 PM
Of course, characteristics of specific groups may matter, but only at the margins. We should first account for the domain-general errors people make *anytime* they estimate a percentage, then examine group-specific explanations
April 7, 2025 at 12:00 PM
The same is true of theories that people overestimate the size of groups that they have a lot of social contact with. Very little evidence of this!
April 7, 2025 at 12:00 PM
We also test popular theories that people overestimate the size of groups they fear. Not the case.

Again, misestimates result mainly from the psychological errors we make anytime we estimate %s, not from anything specific to the group being estimated
April 7, 2025 at 12:00 PM
We argue that this pattern of over- and under-estimation arises from 🧠Bayesian reasoning under uncertainty🧠: people often have uncertain ideas in their minds about the size of these groups, but when they convert these ideas to percentages, they ‘hedge’ their estimates toward a prior
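For intuition, here is a minimal simulation of that idea (my own sketch, assuming simple linear shrinkage toward a 50% prior in log-odds space, with illustrative weight and noise values; not the paper's fitted model):

```python
import numpy as np

# Sketch: an uncertain impression of a percentage is combined with a 50% prior
# in log-odds space. Small percentages get pulled up, large ones pulled down.

def logit(p):
    return np.log(p / (100.0 - p))        # percent -> log-odds

def inv_logit(x):
    return 100.0 / (1.0 + np.exp(-x))     # log-odds -> percent

def hedged_estimate(true_pct, rng, prior_pct=50.0, w_signal=0.6, noise_sd=0.5):
    """Simulate one estimate: noisy signal shrunk toward the prior."""
    noisy_signal = logit(true_pct) + rng.normal(0.0, noise_sd)
    posterior = w_signal * noisy_signal + (1.0 - w_signal) * logit(prior_pct)
    return inv_logit(posterior)

rng = np.random.default_rng(42)
for true_pct in (5, 20, 50, 80, 95):
    estimates = [hedged_estimate(true_pct, rng) for _ in range(5000)]
    print(f"true {true_pct:>2}% -> mean estimate {np.mean(estimates):5.1f}%")
# Small true percentages come out too high and large ones too low:
# the over/under pattern described in the thread.
```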
April 7, 2025 at 12:00 PM
And this is the same pattern of errors people make when estimating things like the percentage of dots on a page that are red 👇
April 7, 2025 at 12:00 PM
Here’s the key figure: people make the same estimation errors whether they are estimating political or *entirely non-political* quantities.

These are 100k estimates of the size of racial and non-racial groups made by 37k people in 22 countries
April 7, 2025 at 12:00 PM
Instead, people are just really bad at estimating percentages

They systematically overestimate smaller %s and underestimate larger %s, including ENTIRELY NON-POLITICAL %s, such as the % of the population that owns an Apple product, has a passport, or has indoor plumbing
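One simple way to see that signature in estimation data (a hypothetical sketch with made-up numbers, not the paper's analysis): regress estimates on the true percentages. Shrinkage toward a central prior shows up as an intercept above zero and a slope below one.

```python
import numpy as np

# Hypothetical illustration: fit estimate = a + b * truth by least squares.
# Overestimating small %s and underestimating large %s appears as a > 0, b < 1.
truth = np.array([4, 10, 15, 30, 50, 65, 80, 92], dtype=float)       # made-up true %s
estimate = np.array([12, 18, 22, 35, 50, 60, 70, 80], dtype=float)   # made-up mean estimates

b, a = np.polyfit(truth, estimate, 1)   # returns slope, then intercept
print(f"intercept = {a:.1f} (> 0), slope = {b:.2f} (< 1)")
```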
April 7, 2025 at 12:00 PM
We argue that journalists and academics are *wrong* when they interpret these misperceptions as evidence that the public is ignorant and misinformed 👇
April 7, 2025 at 12:00 PM
But the MOST important thing is that researchers *justify* their research design & analysis approach on normative/theoretical grounds and *pre-register* it

Doing so will help prevent researchers from talking past each other and move us toward tackling the problem of misinfo
March 29, 2024 at 8:32 PM
Key takeaway: choose the design that aligns with your normative claim about how people should interact with information

e.g., the normative claim that aligns with discernment is that people should maximize the accuracy of the content they believe and share
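To make that concrete, a hedged sketch with hypothetical ratings (not the re-analyzed study data): discernment is commonly computed as belief in true content minus belief in false content, so an intervention that lowers belief across the board can look effective on raw belief in false content while leaving discernment unchanged.

```python
import numpy as np

# Hypothetical belief ratings (1-6 scale) before/after an intervention that
# makes people more skeptical of everything, true and false alike.
true_before, false_before = np.array([4.5, 4.0, 4.2]), np.array([2.5, 2.0, 2.2])
true_after,  false_after  = np.array([3.5, 3.0, 3.2]), np.array([1.5, 1.0, 1.2])

def discernment(true_ratings, false_ratings):
    """Belief in true content minus belief in false content."""
    return true_ratings.mean() - false_ratings.mean()

print(f"belief in false content: {false_before.mean():.2f} -> {false_after.mean():.2f}")  # drops
print(f"discernment: {discernment(true_before, false_before):.2f} -> "
      f"{discernment(true_after, false_after):.2f}")                                      # unchanged
```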
March 29, 2024 at 8:32 PM
We demonstrate these differences empirically by re-analyzing data from recent misinformation studies

Different research designs and outcomes = different conclusions about whether misinformation interventions work
March 29, 2024 at 8:32 PM