Patrick Liu
@patrickpliu.bsky.social
Columbia Political Science | PhD Student
Our study draws renewed attention to the distinction between beliefs and attitudes. It also showcases how LLMs can be used to peer into belief systems. We welcome any feedback!
April 2, 2025 at 1:04 PM
Across two studies, focal and distal counterarguments reduced focal and distal belief strength, respectively. But focal arguments had larger and more durable effects on downstream attitudes.
We explore mechanisms in the paper; e.g., people recalled focal arguments better than distal arguments a week later.
Ex: a respondent said they care about public infrastructure.
In the same wave, they held the following conversation with an AI chatbot. After GPT synthesized a summary attitude, a focal belief, and a distal belief, they saw treatment/placebo text and answered pre- and post-treatment questions.
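The synthesis step above can be sketched as a prompt template. This is a hypothetical illustration, not the authors' actual prompt: it only shows the shape of asking an LLM to distill a transcript into a summary attitude, a focal belief, and a distal belief.

```python
# Hypothetical sketch of the synthesis step: after the chat, an LLM is
# asked to distill the transcript into a summary attitude, a focal
# (highly relevant) belief, and a distal (less relevant) belief.
# The prompt wording here is illustrative, not from the paper.
SYNTHESIS_PROMPT = """\
Below is a conversation with a respondent about an issue they care about.

{transcript}

From this conversation, write:
1. A one-sentence summary of the respondent's attitude on the issue.
2. The factual belief most central to that attitude (the focal belief).
3. A related but less central factual belief (the distal belief).
"""

# Fill the template with a (made-up) transcript excerpt before sending
# it to whichever chat model the study pipeline uses.
prompt = SYNTHESIS_PROMPT.format(
    transcript="Respondent: I care about public infrastructure because..."
)
```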
Ordinarily, a design that a) elicits personally important issues and relevant beliefs through conversations, b) uses tailored treatments, and c) measures the persistence of effects would require three survey waves and immense resource and labor costs.
We overcome these issues (and replicate) using LLMs.
We engaged people in direct dialogue to discuss an issue they care about and the reasons for their stance. We generated a “focal” belief from this text conversation and a less relevant “distal” belief, then randomly assigned a focal belief counterargument, a distal belief counterargument, or placebo text.
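The three-arm randomization can be sketched in a few lines. This is a minimal illustration of equal-probability assignment, assuming a hypothetical sample size and seed; it is not the authors' actual assignment code.

```python
import random

# Three treatment arms, as described in the study design: a counterargument
# to the focal belief, one to the distal belief, or placebo text.
ARMS = ["focal", "distal", "placebo"]

def assign_arm(rng: random.Random) -> str:
    """Draw one treatment arm uniformly at random for a respondent."""
    return rng.choice(ARMS)

rng = random.Random(42)  # fixed seed so the assignment is reproducible
# Sample size of 900 is illustrative, not from the paper.
assignments = [assign_arm(rng) for _ in range(900)]
```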
Identifying relevant beliefs is challenging! Fact-checking studies rely on databases to identify prevalent misinformation, and network methods map mental associations at the group level, but the beliefs people personally treat as relevant on an issue are diverse and shaped by political preferences.
We build on classic psych models that represent attitudes as weighted sums of beliefs about an object. The impact of belief change on subsequent attitude change increases with the belief’s weight, which captures its relevance. Low relevance means information has only a small effect on attitudes.
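The weighted-sum idea can be made concrete with toy numbers. This is a minimal sketch of the expectancy-value logic: the attitude shift from changing one belief scales with that belief's weight. The belief strengths and weights below are illustrative, not from the paper.

```python
def attitude(beliefs: list[float], weights: list[float]) -> float:
    """Attitude = sum of belief strengths times their relevance weights."""
    return sum(b * w for b, w in zip(beliefs, weights))

beliefs = [0.9, 0.8]  # focal belief, distal belief (strengths)
weights = [0.7, 0.1]  # the focal belief is far more attitude-relevant

base = attitude(beliefs, weights)

# The same-sized belief change (-0.5) applied to each belief in turn:
focal_shift = attitude([0.4, 0.8], weights) - base   # scaled by weight 0.7
distal_shift = attitude([0.9, 0.3], weights) - base  # scaled by weight 0.1
```

With these numbers the focal-belief change moves the attitude seven times as much as the distal-belief change, even though the belief updates are identical in size.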
There is a tendency to conclude that attitudes (evaluations of an object) are stickier than beliefs (factual positions about the object), possibly because of motivations to preserve attitudes.
But this assumes the beliefs targeted by the informational treatment matter for the attitude.
Puzzle: studies widely find that learning occurs without attitude change. Correcting vaccine misinformation fails to alter vaccination intentions, reducing misperceptions of the number of immigrants doesn’t reduce hostility, learning about government spending doesn’t affect economic policy preferences… the list goes on.
Link: go.shr.lc/4j9My8H
We find that arguments targeting relevant beliefs produce strong and durable attitude change, more than arguments targeting distal beliefs. To identify relevant beliefs, we elicited deeply held attitudes and interviewed people about their reasons using an LLM chatbot. More on why below!
When Information Affects Attitudes: The Effectiveness of Targeting Attitude-Relevant Beliefs
Do citizens update strongly held beliefs when presented with belief-incongruent information, and does such updating affect downstream attitudes? Though fact-checking studies find that corrections reli...