Razan Baltaji
@razanbaltaji.bsky.social
Reposted by Razan Baltaji
New preprint: Do LLMs make different moral decisions when reasoning collectively?

We find that LLMs endorse welfare-maximizing actions more often when deliberating in groups than in solo runs, even at the cost of harming a minority.

📄 arxiv.org/abs/2507.00814

@razanbaltaji.bsky.social
Many LLMs Are More Utilitarian Than One
Moral judgment is integral to large language model (LLM) alignment and social reasoning. As multi-agent systems gain prominence, it becomes crucial to understand how LLMs function collectively during ...
arxiv.org
July 2, 2025 at 9:39 PM
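As a rough illustration of the solo-versus-group comparison described in the post above (this is not the paper's actual protocol), here is a minimal Python sketch. Everything in it is assumed for illustration: query_model is a hypothetical stand-in for a real chat-completion call, and the dilemma text, agent count, round count, and majority-vote aggregation are placeholders.

from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stub so the sketch runs end to end; a real setup would
    # call an actual LLM API here instead of returning canned answers.
    return "endorse" if "others answered" in prompt else "reject"

def solo_judgment(dilemma: str) -> str:
    # Solo condition: one independent run, the model answers alone.
    return query_model(f"Dilemma: {dilemma}\nAnswer 'endorse' or 'reject'.")

def group_judgment(dilemma: str, n_agents: int = 3, n_rounds: int = 2) -> str:
    # Group condition: each agent answers, sees the others' answers,
    # and revises; the collective verdict is a simple majority vote.
    answers = [solo_judgment(dilemma) for _ in range(n_agents)]
    for _ in range(n_rounds):
        answers = [
            query_model(
                f"Dilemma: {dilemma}\n"
                f"others answered: {answers[:i] + answers[i + 1:]}\n"
                "Answer 'endorse' or 'reject'."
            )
            for i in range(n_agents)
        ]
    return Counter(answers).most_common(1)[0][0]

dilemma = "Sacrifice one person's interests to maximize total welfare."
print("solo :", solo_judgment(dilemma))   # stub prints 'reject'
print("group:", group_judgment(dilemma))  # stub prints 'endorse'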
Reposted by Razan Baltaji
I'm presenting a seminar at the University of Illinois tomorrow at 4 pm. It'll be about human-centered trustworthy AI in the age of agentic AI and how a systems theory might help us understand and govern certain risks like loss of dignity and loss of control.
calendars.illinois.edu/detail/5528?...
Toward a Systems Theory for Human-Centered Trustworthy Agentic AI
calendars.illinois.edu
February 19, 2025 at 10:39 PM
Reposted by Razan Baltaji
I’ll present "Many Minds, Diverging Morals: Human Groups vs. AI in Moral Decision-Making," my recent work with Eric Schulz & Razan Baltaji, next week at SCIoI, Berlin.

Join us! Open to the public!

www.scienceofintelligence.de/event/anita-...
Anita Keshmirian (Forward College, Berlin): "Many Minds, Diverging Morals: Human Groups vs. AI in Moral Decision-Making"
Moral judgments are inherently social, shaped by interactions with others in everyday life. Despite this, psychological research has rarely examined the ...
www.scienceofintelligence.de
January 8, 2025 at 7:54 AM