Joel Z. Leibo
@jzleibo.bsky.social

I can be described as a multi-agent artificial general intelligence.

www.jzleibo.com

Pinned
Concordia is a library for generative agent-based modeling that works like a table-top role-playing game.

It's open source and model agnostic.

Try it today!

github.com/google-deepm...
GitHub - google-deepmind/concordia: A library for generative social simulation
github.com
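For readers curious what "works like a table-top role-playing game" means in practice, here is a minimal, model-agnostic sketch of that pattern: a game-master model narrates a shared world and player agents take turns acting in it. The names below (GameMaster, PlayerAgent, run_episode) are illustrative stand-ins of my own, not Concordia's actual API.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in / text-out model (hence "model agnostic")


class PlayerAgent:
    """One character: keeps a short memory and proposes an action each turn."""

    def __init__(self, name: str, goal: str, model: LLM):
        self.name, self.goal, self.model = name, goal, model
        self.memory: List[str] = []

    def act(self, observation: str) -> str:
        self.memory.append(observation)
        prompt = (
            f"You are {self.name}. Your goal: {self.goal}.\n"
            "Recent events:\n" + "\n".join(self.memory[-5:]) +
            "\nWhat do you do next?"
        )
        return self.model(prompt)


class GameMaster:
    """Narrates the shared world and resolves each attempted action."""

    def __init__(self, setting: str, model: LLM):
        self.model = model
        self.log: List[str] = [setting]

    def resolve(self, actor: str, action: str) -> str:
        prompt = (
            "Story so far:\n" + "\n".join(self.log) +
            f"\n{actor} attempts: {action}\nDescribe what actually happens."
        )
        outcome = self.model(prompt)
        self.log.append(outcome)
        return outcome


def run_episode(gm: GameMaster, players: List[PlayerAgent], steps: int = 3) -> None:
    """One simple turn-taking loop: each player acts, the game master narrates."""
    for _ in range(steps):
        for player in players:
            action = player.act(gm.log[-1])
            print(gm.resolve(player.name, action))
```

The library itself provides much richer agent and game-master machinery; this sketch only shows the turn-taking structure the post alludes to.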

Reposted by Joel Z. Leibo

dream-logic is more powerful than logic-logic and oral cultures must encode knowledge into powerful meme-spells newsletter.squishy.computer/p/llms-and-h...

Reposted by Joel Z. Leibo

Like, even before JS Mill gave us a utilitarian ("net upside") argument to justify freedom of speech, there was an older intuition that banning symbols risks doing violence to thought itself, and ought to be approached with "warinesse."

Reposted by Joel Z. Leibo

I'm hiring a student researcher for next summer at the intersection of MARL x LLM. If you're a PhD student with experience in MARL algorithm research, please apply and drop me an email so that I know you've applied! www.google.com/about/career...
Student Researcher, PhD, Winter/Summer 2026 — Google Careers
www.google.com

Reposted by Joel Z. Leibo

I've managed it a few times..! Though not too many

Noticed that my latest paper announcement got much more attention on Bluesky than Twitter, first time that happened in my experience

Reposted by Joel Z. Leibo

Today my colleagues in the Paradigms of Intelligence team have announced Project Suncatcher:

research.google/blog/explori...

tl;dr: How can we put datacentres in space where solar energy is near limitless? Requires changes to current practices (due to radiation and bandwidth issues).

🧪 #MLSky
Exploring a space-based, scalable AI infrastructure system design
research.google

Reposted by Joel Z. Leibo

Except the linked paper agrees with the comment on personhood in this thread. It says we should stop the metaphysics and refocus on pragmatic effects of institutions and individuals deeming entities to be persons.

Reposted by Joel Z. Leibo

intelligence is the thing which i have. admitting things are intelligent means considering them morally and socially equal to me. i will never consider a computer morally or socially equal to me. therefore no computer program will ever be intelligent

Reposted by Joel Z. Leibo

This looks interesting
Interestingly, @simondedeo.bsky.social uses exactly this context of apology as a place where people can use "Mental Proof" to overcome the perception of AI use, by *credibly* communicating intentions -- based on proof of shared knowledge and values.

ojs.aaai.org/index.php/AA...
Undermining Mental Proof: How AI Can Make Cooperation Harder by Making Thinking Easier | Proceedings of the AAAI Conference on Artificial Intelligence
ojs.aaai.org

[9/9] Read the full paper here:
arxiv.org/abs/2510.26396

Coauthors:

Sasha Vezhnevets,
@xtan,
@WilCunningham
A Pragmatic View of AI Personhood
The emergence of agentic Artificial Intelligence (AI) is set to trigger a "Cambrian explosion" of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification...
arxiv.org

[8/9] By rejecting the foundationalist quest for a single, essential definition, our pragmatic approach offers a more flexible way to think about integrating AI agents into our society. Different “personhood-related contexts” call for different solutions. There are no panaceas.

[7/9] We also consider "personhood as a problem".

This includes "dark patterns" where AI systems may be designed to mimic social cues and exploit our social heuristics, leading to risks of emotional manipulation and exploitation.

In this case, personhood attribution causes harm.

[6/9] However, there may also be ownerless autonomous agents. In this case, we may confer a default person-like status to support sanctionability in cases where they cause harm. An AI with assets can be deterred from rule breaking by the threat of having to forfeit them.
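The deterrence claim in [6/9] reduces to a simple expected-value comparison. The toy calculation below is a hypothetical illustration of that arithmetic (mine, not the paper's): a risk-neutral agent has no incentive to break a rule when the expected forfeiture exceeds the expected gain.

```python
# Toy expected-value model of deterrence via forfeitable assets.
# The function name and dollar figures are hypothetical illustrations.

def rule_breaking_is_profitable(expected_gain: float,
                                detection_probability: float,
                                forfeitable_assets: float) -> bool:
    """A risk-neutral agent breaks the rule only if the expected gain
    exceeds the expected loss from forfeiting its assets when caught."""
    expected_penalty = detection_probability * forfeitable_assets
    return expected_gain > expected_penalty

# An agent with $10k at stake and a 50% chance of being caught is deterred
# from a violation worth $3k, but not from one worth $8k.
print(rule_breaking_is_profitable(3_000, 0.5, 10_000))  # False -> deterred
print(rule_breaking_is_profitable(8_000, 0.5, 10_000))  # True  -> not deterred
```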

[5/9] We explore "personhood as a solution" for problems like "responsibility gaps".

Most AI agents will have owners or guardians; in that case, responsibility should usually flow to their principal.

[4/9] With major inspiration from Richard Rorty, the pragmatic framework we propose holds that the personhood bundle can be unbundled.

We can craft bespoke solutions: sanctionability without suffrage, culpability and contracting without consciousness attribution, etc.
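One way to make the "unbundling" in [4/9] concrete is to treat each status as an independently conferrable flag rather than a single person/thing switch. The sketch below is purely illustrative; the field names are my invention, not a taxonomy from the paper.

```python
# Hypothetical illustration of an "unbundled" personhood status: each
# obligation or entitlement can be conferred independently by an institution.
from dataclasses import dataclass

@dataclass(frozen=True)
class PersonhoodBundle:
    sanctionable: bool = False           # can be fined / have assets forfeited
    can_contract: bool = False           # can enter enforceable agreements
    culpable: bool = False               # can be held at fault for harms
    has_suffrage: bool = False           # can vote
    consciousness_attributed: bool = False

# Bespoke statuses for different contexts, echoing the thread's examples:
ownerless_trading_agent = PersonhoodBundle(sanctionable=True, can_contract=True,
                                           culpable=True)
human_adult = PersonhoodBundle(True, True, True, True, True)
```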

[3/9] The common "foundationalist" view – which bases personhood on properties like consciousness or rationality – forces an untenable, all-or-nothing choice: an AI must either be a full person or a "mere thing". We argue this rigid binary is ill-suited for the challenges ahead.

[2/9] We argue that instead of getting stuck on metaphysical debates (is AI conscious?), we should treat personhood as a flexible bundle of obligations (rights & responsibilities) that societies confer.

[1/9] Excited to share our new paper "A Pragmatic View of AI Personhood" published today. We feel this topic is timely, and rapidly growing in importance as AI becomes agentic, as AI agents integrate further into the economy, and as more and more users encounter AI.

Reposted by Joel Z. Leibo

Very excited to be able to talk about something I've been working on for a while now - we're working with Commonwealth Fusion Systems, IMO the leading fusion startup in the world, to take our work on AI and tokamaks and make it work at the frontier of fusion energy. deepmind.google/discover/blo...
Google DeepMind is bringing AI to the next generation of fusion energy
We’re announcing our research partnership with Commonwealth Fusion Systems (CFS) to bring clean, safe, limitless fusion energy closer to reality with our advanced AI systems. This partnership...
deepmind.google
