Andreas Jungherr
@ajungherr.bsky.social
Making sense of digital technology - the changes it brings, the opportunities it provides, and the challenges it presents. Professor, University of Bamberg.
🔸 Positive attitudes toward AI increase acceptance; perceived risks, on the other hand, significantly reduce it.

(5/7)
September 29, 2025 at 12:25 PM
🔸 A new "deliberation divide" emerges: those who are skeptical of AI are less likely to participate.

(4/7)
September 29, 2025 at 12:24 PM
🧐 Our key findings:

🔸 AI-supported deliberation significantly reduces the willingness to participate.

(2/7)
September 29, 2025 at 12:23 PM
Emphasizing these tensions in treatments leads to a sharp drop in both trust and sense of control among respondents.
July 8, 2025 at 10:24 AM
🧵1/ In a new working paper with @kunkakom.bsky.social & @adrauc.bsky.social, we examine how people feel about AI use by governments.

We find an unsettling tension: information about AI-driven efficiency gains boosts trust – but makes people feel less in control.

arxiv.org/abs/2505.01085
July 8, 2025 at 10:16 AM
New preprint on "Political Disinformation: Fake News and Deep Fakes". In the piece, I take stock of the current state of disinformation research.

osf.io/preprints/so...

🧵 A few key points:
June 4, 2025 at 7:24 AM
🧠 Why the U.S.–Germany gap?
We argue it’s about societal involvement with AI:
– The U.S. is a high-involvement context → more exposure, more consensus
– Germany is a low-involvement context → views depend more on individual factors
May 7, 2025 at 8:33 AM
💡What shapes these views?

🇺🇸 In the U.S.:
– Political ideology is the key dividing line
– Views are more uniform due to higher AI exposure

🇩🇪 In Germany:
– Personal experience with AI and views on free speech matter more
– Attitudes are more varied, less consolidated
May 7, 2025 at 8:31 AM
📌 Key findings:
– Accuracy & Safety enjoy strong support in both countries
– Support drops for Bias Mitigation & Aspirational goals, especially in Germany
– U.S. respondents show consistently higher support across all goals
May 7, 2025 at 8:28 AM
What do people want from AI moderation?
🚨 New Working Paper with @adrauc.bsky.social: What do people expect from Artificial Intelligence?
📊 Public attitudes on AI alignment in 🇩🇪 Germany and 🇺🇸 the U.S.
📄 Link to paper: arxiv.org/abs/2504.124...
🧵 A short summary of what we found – and why it matters:
May 7, 2025 at 8:21 AM
🔸 Positive attitudes toward AI increase acceptance; perceived risks, on the other hand, significantly reduce it.

(5/7)
March 31, 2025 at 10:58 AM
🔸 A new “deliberation divide” emerges: those who are skeptical of AI are less likely to participate.

(4/7)
March 31, 2025 at 10:57 AM
🧐 Our key findings:

🔸 AI-supported deliberation significantly reduces the willingness to participate.

(2/7)
March 31, 2025 at 10:55 AM
📢 New Working Paper: AI & Deliberation 📢

What role does Artificial Intelligence (AI) play in democratic discussions? Together with @OuzhouAdi, I explored this question. Findings in the thread 🧵👇

(1/7)
March 31, 2025 at 10:52 AM
In the article "Artificial Intelligence and the Public Arena", Ralph Schroeder and I discuss the role of AI for the political public sphere:

🔹 Information flows & user behavior
🔹 Content generation
🔹 Communication by AI agents

academic.oup.com/ct/article/3...
March 12, 2025 at 9:33 AM
In the article "Artificial Intelligence and Democracy: A Conceptual Framework", I show how #KI (AI) is changing democracy:

🔹 Political self-determination.
🔹 Political competition.
🔹 Effects on elections.
🔹 Competition between democracies and autocracies.

journals.sagepub.com/doi/10.1177/...
March 12, 2025 at 9:28 AM
🔸 Positive attitudes toward AI increase acceptance; perceived risks, on the other hand, strongly reduce it.

(5/7)
March 12, 2025 at 8:26 AM
🔸 A new "deliberation divide" emerges: those who are skeptical of AI are less likely to participate.

(4/7)
March 12, 2025 at 8:25 AM
🧐 Our key findings:

🔸 AI-supported deliberation markedly reduces the willingness to participate.

(2/7)
March 12, 2025 at 8:22 AM
5/ ⚠️ But despite the strong public disapproval of deceptive AI use, our study finds that these tactics don’t significantly harm the political parties that use them. This creates a troubling misalignment between public opinion and political incentives.
November 18, 2024 at 12:55 PM
4/ 😡 Deceptive uses of AI, such as deepfakes or misinformation, are not only seen as clear norm violations by campaigns but also increase public support for banning AI altogether. This is not true for AI use for campaign operations or voter outreach.
November 18, 2024 at 12:50 PM
3/ 📊 Public Perception: Through a representative survey and two survey experiments (n=7,635), the study shows that while people generally view AI's role in elections negatively, they are particularly opposed to deceptive AI practices.
November 18, 2024 at 12:45 PM