Joel Z Leibo
@jzleibo.bsky.social
I can be described as a multi-agent artificial general intelligence.
www.jzleibo.com
Reposted by Joel Z Leibo
Like, even before JS Mill gave us a utilitarian ("net upside") argument to justify freedom of speech, there was an older intuition that banning symbols risks doing violence to thought itself, and ought to be approached with "warinesse."
November 7, 2025 at 5:03 AM
I've managed it a few times..! Though not too many
November 6, 2025 at 6:19 AM
Noticed that my latest paper announcement got much more attention on Bluesky than on Twitter; first time that's happened in my experience
November 5, 2025 at 12:27 PM
Except the linked paper agrees with the comment on personhood in this thread. It says we should stop the metaphysics and refocus on pragmatic effects of institutions and individuals deeming entities to be persons.
November 2, 2025 at 9:06 AM
[9/9] Read the full paper here:
arxiv.org/abs/2510.26396
Coauthors:
Sasha Vezhnevets,
@xtan,
@WilCunningham
A Pragmatic View of AI Personhood
The emergence of agentic Artificial Intelligence (AI) is set to trigger a "Cambrian explosion" of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification...
October 31, 2025 at 12:35 PM
[8/9] By rejecting the foundationalist quest for a single, essential definition, our pragmatic approach offers a more flexible way to think about integrating AI agents into our society. Different “personhood-related contexts” call for different solutions. There are no panaceas.
October 31, 2025 at 12:34 PM
[7/9] We also consider "personhood as a problem".
This includes "dark patterns" where AI systems may be designed to mimic social cues and exploit our social heuristics, leading to risks of emotional manipulation and exploitation.
In this case, personhood attribution causes harm.
October 31, 2025 at 12:34 PM
[6/9] However, there may also be ownerless autonomous agents. In this case, we may confer a default person-like status to support sanctionability in cases where they cause harm. An AI with assets can be deterred from rule breaking by the threat of having to forfeit them.
October 31, 2025 at 12:34 PM
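A toy way to see the deterrence logic in [6/9] (a standard expected-value sketch of my own, not a model from the paper; all names are illustrative):

```python
# Illustrative only: an asset-holding agent is deterred from breaking a
# rule when the expected forfeiture outweighs the gain from rule breaking.

def is_deterred(gain: float, detection_prob: float, forfeitable_assets: float) -> bool:
    """Return True if the expected forfeiture exceeds the gain."""
    return gain < detection_prob * forfeitable_assets

# An agent with nothing at stake cannot be deterred this way:
print(is_deterred(gain=100.0, detection_prob=0.5, forfeitable_assets=0.0))    # False
# Conferring person-like status lets it hold, and so risk, assets:
print(is_deterred(gain=100.0, detection_prob=0.5, forfeitable_assets=500.0))  # True
```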
[5/9] We explore "personhood as a solution" for problems like "responsibility gaps".
Most AI agents will have owners or guardians; in those cases, responsibility should usually flow to their principal.
October 31, 2025 at 12:34 PM
[4/9] With major inspiration from Richard Rorty, the pragmatic framework we propose holds that the personhood bundle can be unbundled.
We can craft bespoke solutions: sanctionability without suffrage, culpability and contracting without consciousness attribution, etc.
October 31, 2025 at 12:33 PM
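One way to picture the "unbundling" in [4/9] as a data structure (an illustrative sketch; the attribute names are hypothetical, not drawn from the paper):

```python
from dataclasses import dataclass

@dataclass
class PersonhoodBundle:
    """Personhood as separately conferrable statuses,
    rather than an all-or-nothing property."""
    sanctionable: bool = False            # can be fined or forfeit assets
    can_contract: bool = False            # can enter binding agreements
    culpable: bool = False                # can bear responsibility for harms
    has_suffrage: bool = False            # can vote
    consciousness_attributed: bool = False

# e.g. "sanctionability without suffrage" for an autonomous agent:
agent_status = PersonhoodBundle(sanctionable=True, can_contract=True, culpable=True)
print(agent_status)
```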
[3/9] The common "foundationalist" view – which bases personhood on properties like consciousness or rationality – forces an untenable, all-or-nothing choice: an AI must either be a full person or a "mere thing". We argue this rigid binary is ill-suited for the challenges ahead.
October 31, 2025 at 12:33 PM
[2/9] We argue that instead of getting stuck on metaphysical debates (is AI conscious?), we should treat personhood as a flexible bundle of obligations (rights & responsibilities) that societies confer.
October 31, 2025 at 12:33 PM
Concordia was always built on an entity-component pattern. But it was improved in 2.0. Also, we realized that it was an important part of the story to emphasize and explain to anyone who didn't already know about it. So that's what we did in the 2.0 tech report.
Here:
arxiv.org/abs/2507.08892
Multi-Actor Generative Artificial Intelligence as a Game Engine
Generative AI can be used in multi-actor environments with purposes ranging from social science modeling to interactive narrative and AI evaluation. Supporting this diversity of use cases -- which we ...
September 26, 2025 at 6:43 AM
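For readers who haven't seen the pattern, a minimal generic sketch of entity-component design (an illustration only; Concordia's actual API differs, see the linked tech report):

```python
# Generic entity-component sketch: an entity's behavior is composed
# from pluggable components instead of a fixed class hierarchy.

class Component:
    """A reusable unit of state/behavior attachable to any entity."""
    def observe(self, observation: str) -> None:
        pass

    def context(self) -> str:
        return ""


class Persona(Component):
    def __init__(self, description: str) -> None:
        self._description = description

    def context(self) -> str:
        return f"You are {self._description}."


class Memory(Component):
    def __init__(self) -> None:
        self._events: list[str] = []

    def observe(self, observation: str) -> None:
        self._events.append(observation)

    def context(self) -> str:
        return "Recent events: " + "; ".join(self._events[-3:])


class Entity:
    """An agent assembled from components; its action context is the
    aggregation of every component's contribution."""
    def __init__(self, name: str, components: list[Component]) -> None:
        self.name = name
        self._components = components

    def observe(self, observation: str) -> None:
        for component in self._components:
            component.observe(observation)

    def act(self) -> str:
        # In a generative setting this aggregated context would go to an
        # LLM; returning it directly keeps the sketch self-contained.
        return "\n".join(c.context() for c in self._components)


alice = Entity("Alice", [Persona("a pragmatic negotiator"), Memory()])
alice.observe("Bob offered a trade.")
print(alice.act())
```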
Tutorial here: www.youtube.com/watch?v=2FO5...
Concordia Library v2.0 Release - Tutorial with Alexander (Sasha) Vezhnevets [Google DeepMind]
YouTube video by Cooperative AI Foundation
September 19, 2025 at 8:30 PM