Julian Skirzynski
@jskirzynski.bsky.social
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
Reposted by Julian Skirzynski
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text
But... can they? We don’t actually know.
In our new study, we develop a Computational Turing Test.
And our findings are striking:
LLMs may be far less human-like than we think.🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...
arxiv.org
November 7, 2025 at 11:13 AM
Reposted by Julian Skirzynski
Preliminary results show that the current framework of "AI" makes people less likely to help or seek help from other humans, or to try to soothe conflict, and that people actively prefer that framework to any other, effectively making them more dependent on it.
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolat...
arxiv.org
October 5, 2025 at 5:45 PM
Reposted by Julian Skirzynski
New research out!🚨
In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...
NYAS Publications
Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. Whi...
nyaspubs.onlinelibrary.wiley.com
July 28, 2025 at 10:15 AM
Right-to-explanation laws assume explanations help people detect algorithmic discrimination.
But is there any evidence for that?
In our latest work w/ David Danks @berkustun, we show explanations fail to help people, even under optimal conditions.
PDF shorturl.at/yaRua
June 24, 2025 at 6:14 AM
Reposted by Julian Skirzynski
Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.
In our latest work w/ @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.
April 24, 2025 at 6:19 AM
Reposted by Julian Skirzynski
You wanted starter packs to be searchable. Our engineers are busy keeping us online, so in the meantime, an independent developer built a new searchable library of starter packs. This is the beauty of building in the open 🦋
November 26, 2024 at 5:11 AM
Reposted by Julian Skirzynski
I'm really enjoying this AI papers feed - thanks for making it, @sethlazar.org!
And what a cool feature of this place. 🦋
bsky.app/profile/did:...
November 26, 2024 at 3:06 PM
I tried to find everyone who works in the area, but I certainly missed some folks, so please lmk...
go.bsky.app/BYkRryU
November 23, 2024 at 5:11 AM