Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions.
Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
Today is yet another example of how the company leadership has refused to incorporate these insights: www.powerofusnewsletter.com/p/the-facebo...
Let’s fact-check Zuckerberg’s fact-checking announcement www.niemanlab.org/2025/01/zuck...
"...so much bad faith reasoning..."
E.g.:
- Evidence does NOT support the bias claim
- Fact-checking is NOT about censorship. Correction ≠ censorship!