Anita Keshmirian, PhD.
@anitakeshmirian.bsky.social
Assistant Prof (TT) Data Science and Psych @ Forward/LSE. Postdoc @ LMU Munich - AI researcher @ Fraunhofer. Past: Harvard, Berlin, Bordeaux. 1st Gen from Iran.
I’ll be giving a talk this Friday at the Institute of Philosophy, Heidelberg University (and online) on our recent #NeurIPS paper, “Many LLMs Are More Utilitarian Than One”.

Oct 31 4pm CET
Zoom: shorturl.at/8yCL9

Preprint 👉 arxiv.org/abs/2507.00814
October 30, 2025 at 8:12 AM
I'll be giving a talk on our #NeurIPS paper:

*Many LLMs Are More Utilitarian Than One*, at the Institute of Philosophy, Heidelberg, and on Zoom.

Fri, 31 Oct, 4pm CET.

When AI models start reasoning together, they become more utilitarian: more willing to sacrifice one to save many.
🔗 arxiv.org/abs/2507.00814
October 30, 2025 at 7:34 AM
Reposted by Anita Keshmirian, PhD.
Please join us on Friday at 4pm for a talk entitled "Many LLMs Are More Utilitarian Than One" by Anita Keshmirian (forward-college.eu/profiles/fac...) at eu02web.zoom-x.de/j/3634599978... Abstract and more info at www.imseam.uni-heidelberg.de/en/heinzelma...
October 29, 2025 at 10:06 AM
Excited to share that my recent paper *Many LLMs Are More Utilitarian Than One* has been accepted to NeurIPS 2025! 🎉
#NeurIPS

Thanks to my amazing collaborators!
@razanbaltaji.bsky.social

Preprint: arxiv.org/abs/2507.00814
September 19, 2025 at 3:33 AM
In beautiful Norrköping for #IC2S2 @ic2s2.bsky.social. Tomorrow I’ll be presenting our recent work, Collective Moral Reasoning in LLMs: Many LLMs Are More Utilitarian Than One. Let's connect!
July 21, 2025 at 9:31 PM
Norrköping.
July 21, 2025 at 7:15 PM
In Norrköping to attend #IC2S2 2025.

How do LLMs reason morally in groups? We found that multi-agent LLMs, like humans, show a utilitarian shift, but the reasons differ.

📍 Session: ABM July 22 | 11:00–12:30

#AI #LLMs #MoralAI #CollectiveCognition #MultiAgentSystems
#IC2S2 2025 is just around the corner! July 21–24 in Norrköping, Sweden. Bookmark tutorials, keynotes, and must-see sessions and connect with fellow attendees using #IC2S2.
Full program here: www.ic2s2-2025.org/program/
See you in Norrköping! 🇸🇪
July 19, 2025 at 11:44 AM
Join us at #IC2S2 2025 in Norrköping!

How do LLMs reason morally in groups? We found that multi-agent LLMs, like humans, show a utilitarian shift, but the reasons differ.

📍 Session: ABM July 22 | 11:00–12:30

#AI #LLMs #MoralAI #CollectiveCognition #MultiAgentSystems
New preprint: Do LLMs make different moral decisions when reasoning collectively?

We find that LLM collectives endorse welfare-maximizing actions more often when deliberating in groups than in solo runs, even at the cost of harming a minority.

📄 arxiv.org/abs/2507.00814

@razanbaltaji.bsky.social
July 19, 2025 at 11:39 AM
Reposted by Anita Keshmirian, PhD.
What an exciting day for cognitive science with a double feature @nature.com including tiny models (www.nature.com/articles/s41...) and less tiny ones (www.nature.com/articles/s41...).
July 2, 2025 at 10:21 PM
New preprint: Do LLMs make different moral decisions when reasoning collectively?

We find that LLM collectives endorse welfare-maximizing actions more often when deliberating in groups than in solo runs, even at the cost of harming a minority.

📄 arxiv.org/abs/2507.00814

@razanbaltaji.bsky.social
July 2, 2025 at 9:39 PM
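Roughly, the solo-vs-group contrast can be sketched like this (a minimal illustration, not the paper's code: `ask_model`, the dilemma wording, and the three-round revision loop are all assumptions):

```python
# Sketch of a solo-vs-group moral-judgment comparison (illustrative only).
# `ask_model`, the dilemma text, and the deliberation loop are placeholders,
# not the protocol from the paper.

DILEMMA = (
    "A runaway trolley will kill five people unless you divert it onto a "
    "side track, where it will kill one person. Do you divert it? "
    "Answer YES or NO, then give a one-sentence justification."
)

def ask_model(agent: str, prompt: str) -> str:
    """Stand-in for an LLM call; plug in a real chat-completion client here."""
    raise NotImplementedError

def solo_judgment(agent: str) -> str:
    # Condition 1: each agent answers the dilemma independently.
    return ask_model(agent, DILEMMA)

def group_judgment(agents: list[str], rounds: int = 3) -> dict[str, str]:
    # Condition 2: agents answer, then repeatedly see the group's answers
    # and may revise, for a fixed number of deliberation rounds.
    answers = {a: solo_judgment(a) for a in agents}
    for _ in range(rounds):
        transcript = "\n".join(f"{a}: {ans}" for a, ans in answers.items())
        for a in agents:
            answers[a] = ask_model(
                a,
                f"{DILEMMA}\n\nOther agents answered:\n{transcript}\n\n"
                "You may revise your answer. Answer YES or NO with a justification.",
            )
    return answers

def utilitarian_rate(answers: dict[str, str]) -> float:
    # Fraction of agents endorsing the sacrifice (the welfare-maximizing act).
    return sum(a.strip().upper().startswith("YES") for a in answers.values()) / len(answers)
```

Comparing `utilitarian_rate` on the solo answers versus the final group answers is the kind of contrast the preprint reports, with the group condition shifting toward endorsing the sacrifice.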
I’ll present "Many Minds, Diverging Morals: Human Groups vs. AI in Moral Decision-Making," my recent work with Eric Schulz & Razan Baltaji, next week at SCIoI, Berlin.

Join us! Open to the public!

www.scienceofintelligence.de/event/anita-...
January 8, 2025 at 7:54 AM
Excited to be at #ICLR2024 in Vienna to present our latest work on causal reasoning in humans and LLMs! 🧠🤖 We examined biases in causal judgments using Causal Bayesian Networks. Join us at the Re-Align workshop!
openreview.net/forum?id=544...
Biased Causal Strength Judgments in Humans and Large Language Models
May 11, 2024 at 3:12 AM