But... can they? We don’t actually know.
In our new study, we develop a Computational Turing Test.
And our findings are striking:
LLMs may be far less human-like than we think.🧵
In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...
But is there any evidence for that?
In our latest work w/ David Danks @berkustun, we show explanations fail to help people, even under optimal conditions.
PDF shorturl.at/yaRua
In our latest work w/ @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.
And what a cool feature of this place. 🦋
bsky.app/profile/did:...
go.bsky.app/BYkRryU