Making sense of digital technology - the changes it brings, the opportunities it provides, and the challenges it presents. Professor, University of Bamberg.
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...
• Recognize public heterogeneity across and within countries
• Build transparent governance frameworks
• Carefully distinguish between safety-related and value-laden interventions
• Avoid assuming that alignment preferences are universal
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.
• Political partisanship: Green/Democratic identifiers are more supportive of all forms of output adjustment.
• Gender: Women show stronger support, especially for safety and bias-mitigating interventions.
🇺🇸 In the U.S., views are more uniform except for the promotion of aspirational imaginaries, where political ideology plays a stronger role.
U.S. respondents consistently show higher support for most alignment goals, except for the promotion of aspirational imaginaries.
They also report much higher AI use, which we interpret as greater societal involvement with AI and more consolidated expectations.
Across both countries, accuracy and safety top the list: people want AI systems that are factually reliable and avoid harmful content. This is a point of broad, cross-national consensus.
• Accuracy & reliability
• Safety
• Bias mitigation
• Providing aspirational imaginaries
Reposted by Alexander Wuttke
What do people want from AI systems? How should outputs be adjusted? And how do views differ between countries?
@adrauc.bsky.social and I explore this for @socialmedia-soc.bsky.social in Public Opinion on the Politics of AI Alignment.
journals.sagepub.com/doi/10.1177/...
Reposted by Richard Fletcher, Andreas Jungherr
You can find the paper with full findings here: www.sciencedirect.com/science/arti...
How societies talk about AI is tied to economic interests and cultural values.
These conversations don’t just reflect attitudes toward technology - they signal future societal fault lines.
The debate became increasingly critical over time, but not because early participants changed their views.
Rather, later entrants were systematically more skeptical.
Users from individualistic cultures engaged earlier - but were also more critical.
Users from cultures with high uncertainty avoidance were less likely to express positive views.
People with technical skills (coding, math) were early participants and tended to be positive.
Those whose skills centered on creative, writing-heavy tasks entered later and tended to be more negative.
We ask:
👉 Who took part in the debate?
👉 When did they join?
👉 How did they evaluate ChatGPT?
@adrauc.bsky.social, Joshua Philip Suarez, Nikka Marie Sales: Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change.
Reposted by Andreas Jungherr
Based on a case study of the @financialtimes.com, Liz Lohn and I argue that transparency about AI in news is a spectrum, evolving with tech, commercial, professional & ethical considerations & audience attitudes.
Reposted by Hugo Mercier, Richard Fletcher, Andreas Jungherr
Our new @reutersinstitute.bsky.social survey research (n ≈ 12,000) with @richardfletcher.bsky.social & @rasmuskleis.bsky.social explores this.
🇺🇸 In the U.S., perceived AI risks increase support for regulation, while perceived AI benefits reduce it.
🇹🇼 In Taiwan, both critical and optimistic citizens tend to support stricter rules.
Reposted by Alexander Wuttke, Sebastian Stier
In a new article, @adrauc.bsky.social, @kunkakom.bsky.social, and I examine when and why people support stronger AI regulation in political competition.
www.sciencedirect.com/science/arti...
👇