nicolo-pagan.bsky.social
Reposted
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think.🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...
arxiv.org
November 7, 2025 at 11:13 AM
How are different cultures represented in AI-generated images? Which biases do they embed? We ran a deep exploratory analysis during our internal hackathon! It was also a great way to welcome the recent new members of our team!
About last week’s internal hackathon 😏
Last week, we -- the (Amazing) Social Computing Group -- held an internal hackathon to work on the project we informally call "Cultural Imperialism".
September 17, 2025 at 8:46 AM
Reposted
If you are looking for a great postdoc opportunity and a) are into survey research and social media data collection, apply to the job advertised by @cbarrie.bsky.social below;
b) are more into ML & algorithmic fairness, apply to this job with our group led by Aniko Hannak: www.ifi.uzh.ch/en/scg/jobs....
December 7, 2024 at 4:25 PM
Reposted
Job opportunity: Postdoc who will lead innovative computational social science projects around the topics of responsible AI and algorithmic fairness in resource allocation problems. At the University of Zurich. www.ifi.uzh.ch/en/scg/jobs....
Open positions
www.ifi.uzh.ch
December 9, 2024 at 1:46 PM