Harry Yan
@harryyan.bsky.social
Postdoc at Stanford Social Media Lab, Cyber Policy Center. Incoming AP @TAMUComm. PhD*2 in Informatics @IULuddy + Media Sciences @IUMediaSchool. @KnightFdn @OsoMe_IU Fellow. @ICR_IU Researcher. #PublicOpinion #Tech #GenAI #Bots #MediaEffects
In collaboration with @ryanmoore.bsky.social, @fangjingtu.bsky.social, and Dr. Jeff Hancock, and supported by the Stanford Social Media Lab and @stanfordcyber.bsky.social.
April 18, 2025 at 9:29 PM
🌐 Big picture:
This study shows we should focus on building what we call digital strength:
a holistic skillset for navigating AI-mediated information environments,
focused not just on detection skills
but also on cultivating open-minded thinking and evidentiary judgment. (10/10)
April 18, 2025 at 9:24 PM
🎯 Policy and design takeaway:
It’s not enough to teach people how to spot AI.

We also need to help them know when to trust authentic content.
Effective interventions must combine GenAI literacy, cognitive reflection training, and demographic targeting. (9/)
April 18, 2025 at 9:23 PM
💡 But there’s hope.
Two factors helped:
🧠 Actively Open-Minded Thinking (AOT):
A cognitive tendency to consider evidence that challenges one’s prior beliefs.
📚 GenAI knowledge:
Factual understanding of generative AI.
AOT especially helped restore trust in real images, not just spot synthetic ones. (8/)
April 18, 2025 at 9:23 PM
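[Editor's note: the moderation claim above, that AOT helps more for authentic images than for synthetic ones, is the kind of effect typically tested with an interaction term. A minimal sketch under assumptions: the trial-level file and column names (correct, aot, genai_knowledge, is_authentic) are hypothetical, and this is not the study's actual model.]

```python
# Illustrative moderation analysis: does Actively Open-Minded Thinking (AOT)
# predict accuracy more strongly for authentic images than for synthetic ones?
# The data file, column names, and model specification are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("trial_level_responses.csv")  # hypothetical trial-level data

# Logistic regression: accuracy ~ AOT, GenAI knowledge, image authenticity,
# plus an AOT x authenticity interaction. A positive interaction term would
# mean AOT helps more on authentic images, i.e., it "restores trust" in real content.
model = smf.logit(
    "correct ~ aot * is_authentic + genai_knowledge",
    data=trials,
).fit()
print(model.summary())
```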
👥 Who’s most vulnerable?

Older adults: more likely to doubt authentic images

Women: showed a larger accuracy gap than men

Partisans: more likely to doubt real images that conflict with their beliefs

#GenAI is amplifying existing digital and partisan divides. (7/)
April 18, 2025 at 9:21 PM
📉 Why does this matter?
Because trust in authentic political imagery is eroding.
This isn’t just about deception—it’s about undermining visual evidence itself, leading to a "liar’s dividend":
real images get dismissed as fake. (6/)
April 18, 2025 at 9:19 PM
📊 Key finding:
Participants over-attributed AI generation, labeling nearly 60% of all images as synthetic—even though only half were.
This "AI attribution bias" leads to:
✅ Higher accuracy detecting synthetic images
❌ Lower accuracy recognizing authentic images (5/)
April 18, 2025 at 9:18 PM
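[Editor's note: to see why a blanket tendency to answer "AI-generated" produces exactly this asymmetry, here is a back-of-the-envelope sketch. The 50/50 base rate and the ~60% "synthetic" label rate come from the post; the skill parameter and the toy response model are made up for illustration.]

```python
# Toy model: a participant perceives the truth with probability `skill`;
# otherwise they guess "synthetic" with probability p_label_synthetic (the bias).
# This shows how a bias toward "AI-generated" raises synthetic-image accuracy
# while lowering authentic-image accuracy under a 50/50 base rate.

def per_class_accuracy(p_label_synthetic: float, skill: float) -> tuple[float, float]:
    # Probability of calling a truly synthetic image "synthetic" (a hit):
    hit_rate = skill * 1.0 + (1 - skill) * p_label_synthetic
    # Probability of calling a truly authentic image "synthetic" (a false alarm):
    false_alarm_rate = skill * 0.0 + (1 - skill) * p_label_synthetic
    return hit_rate, 1 - false_alarm_rate  # (synthetic accuracy, authentic accuracy)

for skill in (0.0, 0.3, 0.6):
    syn_acc, auth_acc = per_class_accuracy(p_label_synthetic=0.60, skill=skill)
    print(f"skill={skill:.1f}  synthetic acc={syn_acc:.0%}  authentic acc={auth_acc:.0%}")
```

With zero discrimination (skill = 0.0), the 60% bias alone yields 60% accuracy on synthetic images and only 40% on authentic ones; adding real skill lifts both, but the gap from the bias persists.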
👁️ We ran a large pre-registered experiment with 1,800 U.S. adults.
Participants evaluated political images balanced by party lean (pro-Dem vs. pro-Rep) and image type (authentic vs. AI-generated), using actual images that circulated online during the election. (4/)
April 18, 2025 at 9:16 PM
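[Editor's note: a quick sketch of the 2x2 balanced design described above, i.e., a balance check across party lean and image type. The file and column names are the same hypothetical ones as in the earlier sketch.]

```python
# Balance check for the 2x2 stimulus design (party lean x image type).
# In a balanced design, the four cells should hold roughly equal trial counts,
# so authenticity judgments are not confounded with partisan lean.
import pandas as pd

trials = pd.read_csv("trial_level_responses.csv")  # hypothetical trial-level data
print(pd.crosstab(trials["party_lean"], trials["image_type"]))
```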
The answer is... not exactly.

⚠️ BUT our study shows a different threat:
People have become suspicious of real images too.
Authentic visual evidence is no longer taken for granted. (3/)
April 18, 2025 at 9:15 PM
🗳️ During the 2024 U.S. presidential election, many #GenAI-generated political images appeared on social media.
But did voters mistake them for authentic imagery? (2/)
April 18, 2025 at 9:14 PM