Andreas Jungherr
ajungherr.bsky.social

Making sense of digital technology - the changes it brings, the opportunities it provides, and the challenges it presents. Professor, University of Bamberg.

Pinned
📢 How does Artificial Intelligence influence democracy and politics? At the Chair of Political Science, especially Digital Transformation, at the University of Bamberg, we research exactly that. Here are some of our most important studies: 🧵👇

📄 Open Access paper:
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...

Our findings highlight the need to:
• Recognize public heterogeneity across and within countries
• Build transparent governance frameworks
• Carefully distinguish between safety-related and value-laden interventions
• Avoid assuming that alignment preferences are universal

📌 Why this matters:
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.

We also find consistent effects for:
• Political partisanship: Green/Democratic identifiers more supportive of all forms of output adjustments.
• Gender: Women show stronger support, especially for safety and bias-mitigating interventions.

🇩🇪 In Germany, attitudes vary more with personal experience, free speech orientations, and political ideology.
🇺🇸 In the U.S., views are more uniform except for the promotion of aspirational imaginaries, where political ideology plays a stronger role.

🇺🇸🇩🇪 Cross-national differences:
U.S. respondents consistently show higher support for most alignment goals, except for the promotion of aspirational imaginaries.
They also report much higher AI use, which we interpret as greater societal involvement with AI and more consolidated expectations.

But support drops for bias mitigation and especially for aspirational imaginaries, that is, AI outputs that promote particular social values. These value-laden interventions are viewed more cautiously.

🔍 Key finding:
Across both countries, accuracy and safety top the list. People want AI systems that are factually reliable and avoid harmful content. Broad, cross-national consensus.

We ran surveys in Germany (n=1800) and the U.S. (n=1756) to understand what people expect from AI-enabled systems across four #alignment goals:

• Accuracy & reliability
• Safety
• Bias mitigation
• Providing aspirational imaginaries

Reposted by Alexander Wuttke

📢 New paper out!
What do people want from AI systems? How should outputs be adjusted? And how do views differ between countries?
@adrauc.bsky.social and I explore this for @socialmedia-soc.bsky.social in Public Opinion on the Politics of AI Alignment.

journals.sagepub.com/doi/10.1177/...

My Nieman Lab prediction for 2026: The AI bubble may pop, but people's use of AI for information won't, and we had better start taking this seriously.

Digital public debates offer unique insights into how people make sense of technological change, and highlight cross-national differences in culture, politics, and expectations.

You can find the paper with full findings here: www.sciencedirect.com/science/arti...

💡 Takeaway
How societies talk about AI is tied to economic interests and cultural values.
These conversations don’t just reflect attitudes toward technology - they signal future societal fault lines.

🔍 Finding 3: Beware aggregated trends
The debate became increasingly critical over time, but not because early participants changed their views.
Rather, later entrants were systematically more skeptical.

🔍 Finding 2: Cultural context shapes reactions
Users from individualistic cultures engaged earlier - but were also more critical.
Users from cultures with high uncertainty avoidance were less likely to express positive views.

🔍 Finding 1: Professional background matters
People with technical skills (coding, math) were early participants and tended to be positive.
Those with skills focused on creative / writing-heavy tasks entered later and tended to be more negative.

In our paper, we analyze 3.8M tweets from 1.6M users across 117 countries.

We ask:
👉 Who took part in the debate?
👉 When did they join?
👉 How did they evaluate ChatGPT?

The public launch of ChatGPT wasn’t just a technical milestone - it was a focusing event that made visible how people around the world think about emerging technologies.

New paper in Telematics and Informatics with
@adrauc.bsky.social, Joshua Philip Suarez, Nikka Marie Sales: Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change.

Reposted by Andreas Jungherr

✨New working paper on the trade-offs involved in AI transparency in news 🤖📝

Based on a case study of the @financialtimes.com, Liz Lohn and I argue that transparency about AI in news is a spectrum, evolving with tech, commercial, professional & ethical considerations & audience attitudes.

🚨✨ Publication alert: How do people in 6 countries (🇬🇧 🇺🇸 🇫🇷 🇦🇷 🇩🇰 🇯🇵 ) use AI 🤖 and think about it in the context of information, news, and institutions?

Our new @reutersinstitute.bsky.social survey research (n ≈ 12,000) with @richardfletcher.bsky.social & @rasmuskleis.bsky.social explores this.

The article is part of the project “Generative AI in Election Campaigns: Applications, Preferences, and Trust”, funded by the @bidt.bsky.social: www.bidt.digital/forschungspr...
Generative Artificial Intelligence in Election Campaigns: Applications, Preferences, and Trust (AI Wahlkampf) | bidt
The project examines how German parties use generative AI, its influence on election campaigns, and its effect on public trust.
www.bidt.digital

📖 The article contributes to a better understanding of public opinion and digital governance — and shows why international comparison matters for both research and regulation.

🌏 Our findings highlight that cultural and societal contexts shape how people think about digital campaign regulation. The same perceptions and cognitions can have very different consequences across countries.

General attitudes toward AI also play out differently:

🇺🇸 In the U.S., perceived AI risks increase support for regulation, while perceived AI benefits reduce it.
🇹🇼 In Taiwan, both critical and optimistic citizens tend to support stricter rules.

In Taiwan, by contrast, we observe a second-person effect: People favor regulation when they think that both they and others can be influenced by campaigning.

In the U.S., we find a third-person effect: People tend to support regulation when they believe others are more influenced by campaign messages than they themselves are.

🇺🇸 & 🇹🇼 Majorities in both the U.S. and Taiwan favor clear rules for using AI in election campaigns. But factors correlated with supporting regulation differ markedly between the two countries.
🧵 New publication: How do people feel about regulating #AI in election campaigns? 🧵

In a new article, @adrauc.bsky.social, @kunkakom.bsky.social, and I examine when and why people support stronger AI regulation in political competition.

www.sciencedirect.com/science/arti...

👇
Explaining public preferences for regulating Artificial Intelligence in election campaigns: Evidence from the U.S. and Taiwan