Jacy Reese Anthis
banner
jacyanthis.bsky.social
Jacy Reese Anthis
@jacyanthis.bsky.social
Computational social scientist researching human-AI interaction and machine learning, particularly the rise of digital minds. Visiting scholar at Stanford, co-founder of Sentience Institute, and PhD candidate at University of Chicago. jacyanthis.com
Reposted by Jacy Reese Anthis
This week, we published in @science.org an article outlining the current ethical and societal implications of research involving human neural #organoids and #assembloids, their transplantation, and highlighted potential next steps.
November 7, 2025 at 12:12 PM
This was a great event! No recordings (Chatham house), but it's amazing how far you can get when you have a room of people talk about AI consciousness with humility and open-mindedness. So much online discourse is just endless intuition jousting.
At Google NYC for the "Workshop on Emerging Topics in #AI: Consciousness and Moral Patienthood." And they are using one of my social media posts as the framing mechanism. I will be talking about "Person, Thing, Robot" @mitpress.bsky.social later this afternoon.
November 6, 2025 at 2:25 PM
Reposted by Jacy Reese Anthis
Nature suggests you use their "Manuscript Adviser" bot to get advice before submitting

I uploaded the classic Watson & Crick paper about DNA structure, and the Adviser had this to say about one of the greatest paper endings of the century:
November 3, 2025 at 1:55 PM
This paper is a great exposition of how "personhood" doesn't need to be, and in fact should not be, all-or-nothing or grounded in abstruse, ill-defined metaphysical properties. As I argued in my recent @theguardian.com essay, we can and should prepare now: www.theguardian.com/commentisfre...
[1/9] Excited to share our new paper "A Pragmatic View of AI Personhood" published today. We feel this topic is timely, and rapidly growing in importance as AI becomes agentic, as AI agents integrate further into the economy, and as more and more users encounter AI.
November 2, 2025 at 3:30 PM
Reposted by Jacy Reese Anthis
Identifying human morals and values in language is crucial for analysing lots of human- and AI-generated text.

We introduce "MoVa: Towards Generalizable Classification of Human Morals and Values" - to be presented at @emnlpmeeting.bsky.social oral session next Thu #CompSocialScience #LLMs
🧵 (1/n)
October 30, 2025 at 12:20 AM
Reposted by Jacy Reese Anthis
Can AI simulate human behavior? 🧠
The promise is revolutionary for science & policy. But there’s a huge "IF": Do these simulations actually reflect reality?
To find out, we introduce SimBench: The first large-scale benchmark for group-level social simulation. (1/9)
October 28, 2025 at 4:54 PM
Reposted by Jacy Reese Anthis
For the next six days, I'll be posting a bunch about #CSCW2025 in Bergen, Norway. I am one of the General Chairs and have been preparing this conference for the last 21 months, so it's exciting to have the event finally here!

Don't know what CSCW is? Check out cscw.acm.org
CSCW 2025
cscw.acm.org
October 17, 2025 at 9:40 AM
Reposted by Jacy Reese Anthis
When one AI misbehaves, do we hold all AI accountable? New research by @sentienceinstitute.bsky.social @kmanoli.bsky.social @jacyanthis.bsky.social shows that people blame all AI for just one AI's misconduct. 🤖
The AI Double Standard: Why We Judge All AI for One Bot’s Mistakes
By Katerina Manoli, Janet Pauketat, and Jacy Reese Anthis
tinyurl.com
October 16, 2025 at 5:04 PM
Last school year, 19% of US high schoolers had or have a friend who had a “romantic relationship” with AI.

42% had or have a friend with an AI “friend/companion.”

42% had or have a friend who got “mental health support” from AI.

(Source: cdt.org/wp-content/u..., n = 1,030, June-Aug 2025, quotas.)
October 11, 2025 at 10:50 PM
Reposted by Jacy Reese Anthis
The Jane Goodall Institute of Canada has learned this morning, Wednesday, October 1st, 2025, that Dr. Jane Goodall DBE, UN Messenger of Peace and Founder of the Jane Goodall Institute, has passed away due to natural causes.

She was in California as part of her speaking tour in the United States.
October 1, 2025 at 6:14 PM
It’s time to prepare for AI personhood. AI agents and companions are already out in the world buying products and shaping our emotions. The future will only get weirder. We need social science, policy, and norms for this brave new world. My latest @theguardian.com www.theguardian.com/commentisfre...
It’s time to prepare for AI personhood | Jacy Reese Anthis
Technological advances will bring social upheaval. How will we treat digital minds, and how will they treat us?
www.theguardian.com
October 2, 2025 at 9:07 PM
In our new paper, we discovered "The AI Double Standard": People judge all AIs for the harm done by one AI, more strongly than they judge humans.

First impressions will shape the future of human-AI interaction—for better or worse. Accepted at #CSCW2025. See you in Norway! dl.acm.org/doi/10.1145/...
September 29, 2025 at 3:29 PM
Reposted by Jacy Reese Anthis
Hello everyone 👋 Good news!

🚨 Our Game Theory & Multiagent Systems team at Google DeepMind is hiring! 🚨

.. and we have not one, but two open positions! One Research Scientist role and one Research Engineer role. 😁

Please repost and tell anyone who might be interested!

Details in thread below 👇
September 29, 2025 at 12:36 PM
Reposted by Jacy Reese Anthis
British AI startup beats humans in international forecasting competition

ManticAI ranked eighth in the Metaculus Cup, leaving some believing bots’ prediction skills could soon overtake experts
#ai #forecasting

www.theguardian.com/technology/2...
British AI startup beats humans in international forecasting competition
ManticAI ranked eighth in the Metaculus Cup, leaving some believing bots’ prediction skills could soon overtake experts
www.theguardian.com
September 20, 2025 at 2:04 PM
LLM agents are optimized for thumbs-up instant gratification. RLHF -> sycophancy

We propose human agency as a new alignment target in HumanAgencyBench, made possible by AI simulation/evals. We find e.g., Claude most supports agency but also most tries to steer user values 👇 arxiv.org/abs/2509.08494
September 15, 2025 at 5:11 PM
Reposted by Jacy Reese Anthis
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
September 12, 2025 at 10:33 AM
Reposted by Jacy Reese Anthis
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death.

Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
www.nytimes.com
August 26, 2025 at 1:01 PM
Reposted by Jacy Reese Anthis
EXCLUSIVE: Sean Duffy will announce expedited plans to build a nuclear reactor on the moon, his first major action as interim NASA administrator.
Duffy to announce nuclear reactor on the moon
This is the first major agency effort by the interim NASA administrator, who is also the Transportation secretary and a former Fox News host.
www.politico.com
August 4, 2025 at 9:03 PM
Morality in AI is often oversimplified. @davidjurgens.bsky.social and @shivanikumar.bsky.social kick off the "Human-Centred NLP" orals #ACL2025NLP with UniMoral, a huge dataset of moral scenario ratings in 6 languages! They find LLMs fail to simulated human moral decisions. bsky.app/profile/shiv...
July 30, 2025 at 7:14 AM
Grateful to have our recent ICML paper covered by @stanfordhai.bsky.social. Humanity is building incredibly powerful AI technology that can usher in utopia or dystopia. We need human-computer interaction research, particularly with “social science tools like simulations that can keep pace.”
Social science research can be time-consuming, expensive, and hard to replicate. But with AI, scientists can now simulate human data and run studies at scale. Does it actually work? hai.stanford.edu/news/social-...
Social Science Moves In Silico | Stanford HAI
Despite limitations, advances in AI offer social science researchers the ability to simulate human subjects.
hai.stanford.edu
July 27, 2025 at 2:48 PM
I'm at #ACL2025 for 2 papers w/ @kldivergence.bsky.social et al! Let's chat, e.g., scaling evals, simulations, and HCI to unique challenges of general-purpose AI.

Bias in Language Models: Beyond Trick Tests and Towards RUTEd Evaluation
🗓️ Mon 11–12:30

The Impossibility of Fair LLMs
🗓️ Tue 16–17:30
July 27, 2025 at 12:54 PM
Reposted by Jacy Reese Anthis
In 2021 NeurIPS discovered that 50% of all spotlight papers would have been rejected if reviewed again. The solution? Mostly not change anything.
blog.neurips.cc/2021/12/08/t...
The NeurIPS 2021 Consistency Experiment – NeurIPS Blog
blog.neurips.cc
July 27, 2025 at 12:19 AM
@diyiyang.bsky.social and @sherrytswu.bsky.social kick off #ACL2025 with "Human-AI Collaboration: How AIs Augment Human Teammates," showing why and how we need centaur evaluations. Realistic evals take work, but reliance on easy, short, and simple LLM evals led us to this current evaluation crisis.
July 27, 2025 at 8:05 AM
Do bias and fairness metrics work for general-purpose AI like LLMs? In 2 papers just published in #ACL2025, we argue: not yet, but deep qualitative studies of social context scaled with AI assistance can get there!

Theory: aclanthology.org/2025.acl-lon...
Empirics: aclanthology.org/2025.acl-lon...
July 25, 2025 at 7:37 AM