Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.
Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
We find that AI sources are preferred over ingroup and outgroup sources--even when people know both are equally accurate (N = 1,600+): osf.io/preprints/ps...