xuan (ɕɥɛn / sh-yen)
@xuanalogue.bsky.social
Assistant professor at NUS. Scaling cooperative intelligence & infrastructure for an increasingly automated future. PhD @ MIT ProbComp / CoCoSci. Pronouns: 祂/伊
Unfortunately they are student-made slides, so my default policy is not to share them!
November 13, 2025 at 10:37 AM
Hopefully this syllabus is useful to at least some of you!

And if you happen to be in Singapore, I'll be teaching a for-credit version of this class again next semester at NUS, under CS6208! Reach out if you're interested in auditing.

cosilab.notion.site/cs6101-raci-...
CS6101: Rational Approaches to Cooperative Intelligence (Fall 2025)
What are the computational principles underlying human-like cooperative intelligence, and how can we use them to engineer cooperative and human-aligned machines?
November 12, 2025 at 2:25 PM
The class ends with a discussion of what AI alignment means in a pluralistic world, and -- now that students understand it well enough -- a critique of the descriptive and normative validity of standard theories of rationality.

bsky.app/profile/xuan...
A summary of the paper we wrote up on The Other Site -- we argue that AI alignment has to move beyond a conception of alignment that is focused on human preferences / utility / reward!

twitter.com/xuanalogue/s...
November 12, 2025 at 2:25 PM
This toolkit can be extended to cooperation at a larger scale: Rational inference allows agents to rapidly adapt to the people, norms & institutions that make up society, while rational deliberation allows agents to move toward social structures that are jointly beneficial.
November 12, 2025 at 2:25 PM
Rational modeling + decision-making provide a unifying frame for the topics we cover: By modeling other agents as (boundedly) rational, we can infer their goals & beliefs, plan to assist them, work in a team, communicate intent, and teach new concepts.
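To make the "(boundedly) rational" framing concrete, here's a minimal sketch of Bayesian goal inference via inverse planning: model the observed agent as Boltzmann-rational, then invert with Bayes' rule. The goal set, action values, and temperature below are illustrative assumptions, not anything from the class or the paper.

import math

# Toy inverse planning: infer which goal an agent is pursuing from its
# actions, modeling the agent as Boltzmann-rational (softmax over action
# values). All names and numbers here are illustrative.

GOALS = ["gem_A", "gem_B"]
ACTIONS = ["left", "right"]

# Hypothetical action values Q[goal][action]: higher = more useful for that goal.
Q = {
    "gem_A": {"left": 1.0, "right": 0.0},
    "gem_B": {"left": 0.0, "right": 1.0},
}

def action_likelihood(action, goal, beta=2.0):
    """P(action | goal) under a Boltzmann-rational policy with inverse temperature beta."""
    logits = {a: beta * Q[goal][a] for a in ACTIONS}
    z = sum(math.exp(v) for v in logits.values())
    return math.exp(logits[action]) / z

def goal_posterior(observed_actions, prior=None):
    """P(goal | actions) via Bayes' rule, assuming conditionally independent actions."""
    prior = prior or {g: 1.0 / len(GOALS) for g in GOALS}
    unnorm = {
        g: prior[g] * math.prod(action_likelihood(a, g) for a in observed_actions)
        for g in GOALS
    }
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

print(goal_posterior(["left", "left", "right"]))
# Two "left" moves vs. one "right" -> posterior favors gem_A (~0.88).

The same pattern extends to beliefs, costs, and rewards by enlarging the hypothesis space the posterior ranges over.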
November 12, 2025 at 2:25 PM
As the title says, this class focuses on broadly *rational* approaches to cooperative intelligence, grounded in standard theories of epistemic & instrumental rationality (Bayes + utility theory) -- but with some caveats!
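For reference, one textbook line for each component (standard definitions, not a claim about the course's exact formalism): Bayesian conditioning for the epistemic side, expected-utility maximization for the instrumental side.

P(h \mid d) = \frac{P(d \mid h)\, P(h)}{\sum_{h'} P(d \mid h')\, P(h')},
\qquad
a^{*} = \arg\max_{a} \sum_{s} P(s)\, U(a, s)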
November 12, 2025 at 2:25 PM
Thank you, means a lot!! We were very happy to be able to replicate all the past BToM results :)
November 6, 2025 at 12:18 PM
Finally, this paper represents the hard work of an incredible team of co-authors:

@lanceying.bsky.social (paper lead), @heyodogo.bsky.social, Katie M. Collins, Megan Wei, Ced Zhang, @tbrookewilson.bsky.social, and my co-senior authors Lio Wong + @joshtenenbaum.bsky.social.
November 5, 2025 at 3:55 PM
Interested in learning more? Catch my coauthor Ced Zhang when he presents LIRAS at #EMNLP2025 in Suzhou this week!

Or find the paper here:
aclanthology.org/2025.finding...
Language-Informed Synthesis of Rational Agent Models for Grounded Theory-of-Mind Reasoning On-the-fly
Lance Ying, Ryan Truong, Katherine M. Collins, Cedegao E. Zhang, Megan Wei, Tyler Brooke-Wilson, Tan Zhi-Xuan, Lionel Wong, Joshua B. Tenenbaum. Findings of the Association for Computational Linguistics: EMNLP 2025.
November 5, 2025 at 3:55 PM
We show that LIRAS correlates strongly with human judgments about an agent's likely goals, beliefs, costs and rewards.

In contrast, large reasoning models like OpenAI o3 show a much weaker correlation, in line with other work indicating that LRMs struggle with ToM reasoning.
November 5, 2025 at 3:55 PM
As a proof of concept, we tested LIRAS on Theory of Mind reasoning tasks from 7 cognitive science experiments on probabilistic social inference, including key experiments that established the Bayesian theory-of-mind paradigm (Baker et al., 2012/2017; @julianje.bsky.social et al., 2016).
November 5, 2025 at 3:55 PM
To instantiate LIRAS, we build on recent work by @tbrookewilson.bsky.social suggesting LLMs can serve as ad-hoc model constructors (arxiv.org/abs/2507.12547):
- From language, LLMs synthesize code representing agent + env. models that support coherent inference
- VLMs parse images to env. states
Modeling Open-World Cognition as On-Demand Synthesis of Probabilistic Models
When faced with novel situations, people are able to marshal relevant considerations from a wide range of background knowledge and put these to use in inferences and predictions.
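For a sense of the pipeline's shape, here is a schematic sketch: every function name below is a hypothetical placeholder standing in for a stage described in the post above, not the paper's actual interface.

# Schematic LIRAS-style pipeline (shape only; all names are hypothetical
# placeholders for the stages described above, not the paper's interface).

def vlm_parse_scene(image):
    """Assumed stage: a VLM maps the stimulus image to a symbolic environment state."""
    ...

def llm_synthesize_model(scenario_text, env_state):
    """Assumed stage: an LLM writes probabilistic-program code defining the
    agent + environment models implied by the natural-language scenario."""
    ...

def run_inverse_planning(model_code, observed_actions):
    """Assumed stage: condition the synthesized model on the agent's observed
    actions and return a posterior over goals, beliefs, costs, and rewards."""
    ...

def infer_mental_states(image, scenario_text, observed_actions):
    env_state = vlm_parse_scene(image)
    model_code = llm_synthesize_model(scenario_text, env_state)
    return run_inverse_planning(model_code, observed_actions)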
November 5, 2025 at 3:55 PM
We propose that humans draw such inferences by constructing ad-hoc Bayesian models of agents & their environments, then inferring the agent's goals / beliefs / costs / rewards via inverse planning -- a framework we call Language-Informed Rational Agent Synthesis (LIRAS).
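In generic inverse-planning notation (a textbook formulation, not necessarily the paper's exact factorization), inferring a goal g and belief b from a state-action trajectory means inverting a soft-rational policy \pi:

P(g, b \mid s_{1:T}, a_{1:T}) \;\propto\; P(g)\, P(b) \prod_{t=1}^{T} \pi(a_t \mid s_t, g, b),
\qquad
\pi(a_t \mid s_t, g, b) \;\propto\; \exp\big(\beta\, Q_{g,b}(s_t, a_t)\big)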
November 5, 2025 at 3:55 PM
If you stare at these scenarios enough, you'll probably infer that:
1. The robot is helping the human reach gem A
2. The astronaut prefers the orange resource & finds blue terrain dangerous
3. The student thought their favorite foodcart might be behind the building, but was wrong
November 5, 2025 at 3:55 PM
Humans can rapidly make sense of novel social situations like below:

1. A robot helps a human reach one of 4 gems
2. An astronaut collects resources on the way to their spaceship
3. A hungry student looks for their favorite foodcart behind a building

How do we achieve this?
November 5, 2025 at 3:55 PM
Reposted by xuan (ɕɥɛn / sh-yen)
shockingly, it can both be true that 1) we should improve labor conditions 2) the workers prefer what they have now to what they had before.
November 2, 2025 at 10:16 PM