Andreas Jungherr
ajungherr.bsky.social

Making sense of digital technology - the changes it brings, the opportunities it provides, and the challenges it presents. Professor, University of Bamberg.

Pinned
📢 How does artificial intelligence shape democracy and politics? At the Chair of Political Science, esp. Digital Transformation, at the University of Bamberg, this is exactly what we study. Here are some of our most important studies: 🧵👇

Reposted by Andreas Jungherr

✨New working paper on the trade-offs involved in AI transparency in news 🤖📝

Based on a case study of the @financialtimes.com, Liz Lohn and I argue that transparency about AI in news is a spectrum, evolving with tech, commercial, professional & ethical considerations & audience attitudes.
🚨✨ Publication alert: How do people in 6 countries (🇬🇧 🇺🇸 🇫🇷 🇦🇷 🇩🇰 🇯🇵 ) use AI 🤖 and think about it in the context of information, news, and institutions?

Our new @reutersinstitute.bsky.social survey research (n ≈ 12,000) with @richardfletcher.bsky.social & @rasmuskleis.bsky.social explores this.

The article is part of the project “Generative AI in Election Campaigns: Applications, Preferences, and Trust”, funded by the @bidt.bsky.social: www.bidt.digital/forschungspr...
Generative Artificial Intelligence in Election Campaigns: Applications, Preferences, and Trust (AI Wahlkampf) | bidt
The project examines how German parties use generative AI, its influence on election campaigns, and its effect on public trust.
www.bidt.digital

📖 The article contributes to a better understanding of public opinion and digital governance — and shows why international comparison matters for both research and regulation.

🌏 Our findings highlight that cultural and societal contexts shape how people think about digital campaign regulation. The same perceptions and cognitions can have very different consequences across countries.

General attitudes toward AI also play out differently:

🇺🇸 In the U.S., perceived AI risks increase support for regulation, while perceived AI benefits reduce it.
🇹🇼 In Taiwan, both critical and optimistic citizens tend to support stricter rules.

🇹🇼 In Taiwan, by contrast, we observe a second-person effect: People favor regulation when they think that both they and others can be influenced by campaigning.

🇺🇸 In the U.S., we find a third-person effect: People tend to support regulation when they believe others are more influenced by campaign messages than they themselves are.

🇺🇸 & 🇹🇼 Majorities in both the U.S. and Taiwan favor clear rules for using AI in election campaigns. But factors correlated with supporting regulation differ markedly between the two countries.
🧵 New publication: How do people feel about regulating #AI in election campaigns? 🧵

In a new article, @adrauc.bsky.social, @kunkakom.bsky.social, and I examine when and why people support stronger AI regulation in political competition.

www.sciencedirect.com/science/arti...

👇
Explaining public preferences for regulating Artificial Intelligence in election campaigns: Evidence from the U.S. and Taiwan
The increasing use of Artificial Intelligence (AI) in election campaigns, such as AI-generated political ads, automated messaging, and the widespread …
www.sciencedirect.com

🔍 Study: Representative, pre-registered survey experiment (n = 1,850), conducted by Ipsos and funded by the EU 🇪🇺 as part of the AI4Deliberation project.

👉 Read the article: www.sciencedirect.com/science/arti...

#AI #Deliberation #DigitalDemocracy #Democracy

(7/7)
Artificial Intelligence in deliberation: The AI penalty and the emergence of a new deliberative divide
Advances in Artificial Intelligence (AI) promise help for democratic deliberation, such as processing information, moderating discussion, and fact-che…
www.sciencedirect.com

⚠️ This means: Even if AI might factually improve the processes of democratic deliberation, there is a risk that its use will exacerbate existing inequalities in willingness to participate.

(6/7)

🔸 Positive attitudes toward AI increase acceptance; perceived risks, on the other hand, significantly reduce it.

(5/7)

🔸 A new "deliberation divide" emerges: those who are skeptical of AI are less likely to participate.

(4/7)

🔸 If people are informed about the use of AI in deliberation, they expect discussions to be of lower quality than when moderated by a human.

(3/7)

🧐 Our key findings:

🔸 AI-supported deliberation significantly reduces the willingness to participate.

(2/7)
📢 New Journal Article: AI & Deliberation 📢

What impact does #AI have on democratic deliberation? Together with @adrauc.bsky.social, I explore this question in a new article in Government Information Quarterly. Findings in the thread 🧵👇

www.sciencedirect.com/science/arti...

(1/7)

You can take the speaker out of pol sci, but you can’t take pol sci out of the speaker :)

In short: let’s start with what we do control and, by doing so, expand our chances to manage interdependencies.

Enforce internal reform of our own institutions & practices that slow development and fuel discontent: politics, journalism, industry-protective tendencies, and EU regulatory habits.

Build capacity and capability for future tech & industries rather than replicate what’s already settled. That would give the EU the power it currently lacks to negotiate real commitments from others and better manage interdependencies.

I agree it’s high time to engage. But for me, this is about addressing aspects we can control. I see two arms to this:

From a European perspective, that’s a lose–lose.

Blaming technology lets institutions dodge responsibility and internal reform, while deepening Europe’s dependencies on foreign infrastructures.

Narratives of “disinformation” and “manipulated unruly publics” too often serve established elites and institutions as a way to avoid confronting their own contribution to discontent and the need for reform.

Especially if we base policy on shaky analyses claiming that digital media themselves cause discontent with the state of play in Western democracies.

The impulse to demand greater control is understandable. But unless we are honest about why we’re in this mess to begin with, we risk only increasing dependencies.

I think we’re in an unfortunate bind. Because of past industry-protective regulation in the EU, we lack the structures, knowledge, and power to govern today’s crucial information infrastructures, let alone those of the future.

Who is “we”? Wresting control of communication structures from capitalist entities and handing it to bureaucratic or academic elites feels like a technocratic answer to a popular problem. No?