Oct 31 4pm CET
Zoom: shorturl.at/8yCL9
Preprint 👉 arxiv.org/abs/2507.00814
*Many LLMs Are More Utilitarian Than One* at the Institute of Philosophy, Heidelberg & on Zoom.
Fri, 31 Oct, 4pm CET.
When AI models start reasoning together, they become more utilitarian: more willing to sacrifice one to save many.
🔗 arxiv.org/abs/2507.00814
#NeurIPS
Thanks to my amazing collaborators!
@razanbaltaji.bsky.social
Preprint: arxiv.org/abs/2507.00814
Presenting at #IC2S2 2025.
How do LLMs reason morally in groups? We found that multi-agent LLMs, like humans, show a utilitarian shift, but the reasons differ.
📍 Session: ABM July 22 | 11:00–12:30
#AI #LLMs #MoralAI #CollectiveCognition #MultiAgentSystems
Full program here: www.ic2s2-2025.org/program/
See you in Norrköping! 🇸🇪
We find that LLM collectives endorse welfare-maximizing actions more often when deliberating in groups than in solo runs, even at the cost of harming a minority.
📄 arxiv.org/abs/2507.00814
@razanbaltaji.bsky.social
Join us! Open to the public!
www.scienceofintelligence.de/event/anita-...
openreview.net/forum?id=544...