Iyad Rahwan | إياد رهوان
@iyadrahwan.bsky.social
Director, Max Planck Center for Humans & Machines http://chm.mpib-berlin.mpg.de | Former prof. @MIT | Creator of http://moralmachine.net | Art: http://instagram.com/iyad.rahwan Web: rahwan.me
Pinned
In Conversation with Prof. Dr. Iyad Rahwan | Der Divan's Dialogue with Professor Dr. Iyad Rahwan
YouTube video by Der Divan - Das Arabische Kulturhaus
youtu.be
🤖 = ⚛️ x 🎨
Interview (in English) at Der Divan about AI, from a scientific and artistic perspective.
youtu.be/9CVTlcNvN24?...
🚨Paper Alert 🚨 Did 1st contact with ChatGPT alter sentiment about AI?
First interactions with generative chatbots shape local but not global sentiments about AI
Work with Eva Schmidt, Mengchen Dong, @jfbonnefon.bsky.social, Clara Bersch, @nckobis.bsky.social
www.sciencedirect.com/science/arti...
November 6, 2025 at 5:03 PM
Thank you Francesca, and everyone who attended, for a wonderful evening.
Great day in Turin yesterday! Congratulations to Professor @iyadrahwan.bsky.social winner of the Lagrange Prize 2025 for his pioneering work at the intersection of computer science and human behavior.
@isi.it
October 29, 2025 at 4:24 PM
Reposted by Iyad Rahwan | إياد رهوان
Congratulations to Iyad Rahwan (@iyadrahwan.bsky.social), recipient of the Lagrange Prize – CRT Foundation Edition 2025, recognizing groundbreaking research in complex systems and data science! 🌍🤖
More info @isi.it:
www.isi.it/press-releas...
October 29, 2025 at 11:44 AM
I am deeply honored to be awarded the Lagrange Prize 🏆, the premier award in the field of Complex Systems.
I'd like to share this moment with all my current and past students, research team members, and collaborators over the years.
Thank you CRT Foundation & @isi.it for this honor
🎉 ISI Foundation is thrilled to announce that the Lagrange Prize – CRT Foundation Edition 2025 has been awarded to Professor Iyad Rahwan @iyadrahwan.bsky.social, Director of the Max Planck Institute for Human Development in Berlin!
October 27, 2025 at 6:11 PM
A word of gratitude to the anonymous reviewers, the unsung heroes of science.
We recently had the great fortune to publish in @nature.com. We even made the cover of the issue, with a witty tagline that summarizes the paper: "Cheat Code: Delegating to AI can encourage dishonest behaviour"
🧵 1/n
October 21, 2025 at 3:20 PM
Delighted that our paper on 'Delegation to AI can increase dishonest behaviour' is featured today on the cover of @nature.com
Paper: www.nature.com/articles/s41...
October 2, 2025 at 7:51 AM
PhD Scholarships
If you're interested in studying with me, here's a new funding scheme just launched by @maxplanck.de: The Max Planck AI Network
ai.mpg.de
Application deadline 31 October
September 29, 2025 at 11:56 AM
Now out in Scientific American. Great interview with @nckobis.bsky.social & Zoe Rahwan about our recent @nature.com article.
People Are More Likely to Cheat When They Use AI
www.scientificamerican.com/article/peop...
Thanks @rachelnuwer.bsky.social & @parshallison.bsky.social
People Are More Likely to Cheat When They Use AI
Participants in a new study were more likely to cheat when delegating to AI—especially if they could encourage machines to break rules without explicitly asking for it
www.scientificamerican.com
September 28, 2025 at 4:13 PM
Thank you @meharpist.bsky.social for handling this paper and helping us improve it substantially over the revisions. And many thanks to the amazing anonymous reviewers, who gave the paper tough but fair love.
Would you let AI (LLMs) cheat for you? New work out in @nature.com shows that people are indeed willing to instruct AI in ways that will benefit themselves, despite not being totally honest. Great work by @nckobis.bsky.social and @iyadrahwan.bsky.social et al 🧪 www.nature.com/articles/s41...
September 19, 2025 at 9:27 PM
Reposted by Iyad Rahwan | إياد رهوان
Nature research paper: Delegation to artificial intelligence can increase dishonest behaviour
go.nature.com/3KsDgbG
Delegation to artificial intelligence can increase dishonest behaviour - Nature
People cheat more when they delegate tasks to artificial intelligence, and large language models are more likely than humans to comply with unethical instructions—a risk that can be minimized by introducing prohibitive, task-specific guardrails.
go.nature.com
September 18, 2025 at 12:46 PM
Why AI could make people more likely to lie
Coverage of our recent paper by The Independent, with nice commentary by @swachter.bsky.social
www.independent.co.uk/news/uk/home...
Why AI could make people more likely to lie
A new study has revealed that people feel much more comfortable being deceitful when using AI
www.independent.co.uk
September 18, 2025 at 4:38 PM
Would you let AI cheat for you?
Our new paper in @nature.com, 5 years in the making, is out today.
www.nature.com/articles/s41...
September 17, 2025 at 3:53 PM
Reposted by Iyad Rahwan | إياد رهوان
The new application cycle for our fully funded international graduate program has just started. You can now apply via our website, sign up for a Q&A, or participate in the Applicant Support Program: cognition.maxplanckschools.org/en 👍🏻🧠👏🏾 #passionforscience #maxplanckschools
September 4, 2025 at 11:21 AM
Symposium on Cross-Cultural Artificial Intelligence
We are organizing this in-person event in Berlin on 10 Oct 2025, with a 'School on Cross Cultural AI' on 9 Oct.
We have an amazing line-up of speakers (see link)
Registration is open, but places are limited: derdivan.org/event/sympos...
September 8, 2025 at 6:23 PM
Fully funded PhD scholarships at the Max Planck School of Cognition (Deadline Dec 1st)
You can apply to work with me or one of the many amazing school faculty.
Apply here: cognition.maxplanckschools.org/en/application
September 5, 2025 at 3:19 PM
Great article by the legendary @philipcball.bsky.social about the 'Science Fiction Science Method' that @jfbonnefon.bsky.social & @azimshariff.bsky.social recently proposed.
[Paywalled but can be accessed by a free sign-up]
www.thenewworld.co.uk/philip-ball-...
Predicting the traffic jam, not the automobile
Some of the best science fiction is not so much about dreaming up futuristic technologies but imagining the kinds of societies they will engender
www.thenewworld.co.uk
September 3, 2025 at 6:32 AM
If you know a 🚨 Scholar at Risk 🚨 please share!
I am delighted to share that applications are now open for the MAXMINDS mentoring program.
September 1, 2025 at 6:05 PM
Reposted by Iyad Rahwan | إياد رهوان
New job ad: Assistant Professor of Quantitative Social Science, Dartmouth College apply.interfolio.com/172357
Please share with your networks. I am the search chair and happy to answer questions!
August 21, 2025 at 6:50 PM
Reposted by Iyad Rahwan | إياد رهوان
The science fiction science method www.nature.com/articles/s41...
August 7, 2025 at 10:58 AM
Can we turn a science fiction thought experiment into an actual experiment?
Check out our new @nature.com paper on the "Science Fiction Science" method (summarized by @jfbonnefon.bsky.social below)
We need data, not guesses, on how future tech may reshape behavior & society. Our new paper with @azimshariff.bsky.social and @iyadrahwan.bsky.social out in @nature.com spells out a framework we call the ❝science fiction science method❞ (sci-fi-sci) +
www.nature.com/articles/s41...
August 6, 2025 at 10:34 PM
I hope our new preprint raises awareness about how LLMs threaten the integrity of online participant pools. We hope leading platforms, such as @joinprolific.bsky.social, will act fast to ensure reliable reporting and sanctioning of LLM-use by participants.
🚨New paper alert!
"Recognising, Anticipating & Mitigating LLM Pollution of Online Behavioural Research"
Online experiments are being polluted by LLMs. We map the threat and fixes🧵
w/ Raluca Rilla, @hiromu1996.bsky.social, @iyadrahwan.bsky.social & Anne-Marie Nussberger
arxiv.org/abs/2508.01390
August 5, 2025 at 11:31 AM
Reposted by Iyad Rahwan | إياد رهوان
🚨New paper alert!
"Recognising, Anticipating & Mitigating LLM Pollution of Online Behavioural Research"
Online experiments are being polluted by LLMs. We map the threat and fixes🧵
w/ Raluca Rilla, @hiromu1996.bsky.social, @iyadrahwan.bsky.social & Anne-Marie Nussberger
arxiv.org/abs/2508.01390
August 5, 2025 at 8:04 AM
Reposted by Iyad Rahwan | إياد رهوان
This is the kind of outcome we foretold six years ago in a paper aptly titled 'Drivers Are Blamed More Than Their Automated Cars When Both Make Mistakes', with @awad.bsky.social @sohandsouza.info @sydneylevine.bsky.social @maxkw.bsky.social @azimshariff.bsky.social @iyadrahwan.bsky.social
❝Neither the driver of the Tesla sedan nor the Autopilot software braked in time for an intersection. The jury assigned Tesla one-third of the blame and assigned two-thirds to the driver, who was reaching for his cell phone at the time of the crash❞ www.nbcnews.com/news/us-news...
Tesla hit with $243 million in damages after jury finds its Autopilot feature contributed to fatal crash
The verdict follows a three-week trial that threw a spotlight on how Tesla and CEO Elon Musk have marketed their driver-assistance software.
www.nbcnews.com
August 2, 2025 at 6:25 PM
Our latest paper, showing how the public may not have an accurate understanding of actual stakeholder preferences when it comes to AI deployment in government.
Mengchen Dong, @iyadrahwan.bsky.social and I report data from 3,000+ people (incl. welfare claimants) showing how much accuracy they're willing to sacrifice to get faster AI decisions for social benefits. It's very hard for non-claimants to understand claimants. www.nature.com/articles/s41...
July 30, 2025 at 11:58 AM
Reposted by Iyad Rahwan | إياد رهوان
🧠 Is your research still relying on Western participants?
15 years after “The WEIRDest People in the World” by Henrich et al., most studies still overuse WEIRD samples.
In this paper, published in Behavior Research Methods, I offer digital pathways to move beyond that. 1/5
July 25, 2025 at 9:27 AM