Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.
Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
www.nuancebehavior.com/article/mora...
pubmed.ncbi.nlm.nih.gov/39883419/#:~....
Can AI offer empathy that’s better than humans’? Maybe. Our new study found that people rated AI-generated responses as more compassionate than those from humans, including trained crisis responders.
www.nature.com/commspsychol/