AI, Media & Democracy Lab
@aimediademlab.bsky.social
Ethical, Legal, Societal laboratory focused on the implications of AI for Media and Democracy 🤝 CWI, HvA & UvA #ELSAlab
Follow us to keep up to date with future research, and let us know which insights you found the most interesting! 💬
October 28, 2025 at 12:45 PM
Some findings:
→ Confirmed: AI disclosures increase reader skepticism
→ Users enjoy "surprise" news recommendations & diversity in their feeds
→ News outlets differ in language and prioritisation of AI risks depending on national and political contexts
→ Scenario use aids in tech research outreach
October 28, 2025 at 12:45 PM
Surprise me! A longitudinal user study on serendipitous interface design in news recommender systems
by Zilin Lin, @damiantrilling.net, Stuart Duncan, Kasper Welbers, and @susanvermeer.bsky.social
🔗 ceur-ws.org/Vol-4027/pap...
October 28, 2025 at 12:45 PM
Informing AI Risk Assessment with News Media: Analyzing National and Political Variation in the Coverage of AI Risks
by Mowafak Allaham, @kimonkieslich.bsky.social, and Nicholas Diakopoulos
🔗 doi.org/10.1609/aies...
October 28, 2025 at 12:45 PM
Scenarios in Computing Research: A Systematic Review of the Use of Scenario Methods for Exploring the Future of Computing Technologies in Society
by @juliabbarnett.bsky.social, @kimonkieslich.bsky.social, Jasmine Sinchai, and Nicholas Diakopoulos
🔗 doi.org/10.1609/aies...
October 28, 2025 at 12:45 PM
“Transparency is More Than Just a Label”: Audiences’ Information Needs for AI Use Disclosures in News
by @hannescools.bsky.social, S. Morosoli, @laurensnaudts.bsky.social, K. Venkatraj, @claesdevreese.bsky.social, & @natalihelberger.bsky.social
🔗 osf.io/preprints/so...
October 28, 2025 at 12:45 PM
Reposted by AI, Media & Democracy Lab
A big thank you to Laura and Shannon for starting this conversation in our Lab! We're looking forward to seeing this research published and informing professional decisions in journalism.
October 9, 2025 at 7:20 AM
💬 Because this is still a new phenomenon, the strength of these reactions to AI may change over time and across publics, especially if new developments emerge. Audiences may become accustomed to AI use in media production, or further negotiate how they approach the issue from a moral standpoint.
October 9, 2025 at 7:20 AM
🔎 AI disclosures sent Gen Z participants into "detective mode": once they noticed the AI label, they stopped focusing on the news content and instead tried to find visual flaws. Any initial trust in the post was lost upon seeing the label. So transparency can backfire and become a major distraction.
October 9, 2025 at 7:20 AM
🏷️ Without labels, AI-generated images were rated as more credible than those taken by photojournalists; participants had trouble telling whether the images were artificial or not. This suggests that image generators are now good enough to fly under the radar, even in journalistic contexts.
October 9, 2025 at 7:20 AM
Combining eye-tracking data with participant interviews, they analyzed how young people process news in Instagram post formats, what they notice first, and how they interpret different types of image credit labels (taken by a photojournalist, created with AI, or no attribution at all).
October 9, 2025 at 7:20 AM