ryantlowe.bsky.social
@ryantlowe.bsky.social

Introducing: Full-Stack Alignment 🥞

A research program dedicated to co-aligning AI systems *and* institutions with what people value.

It's the most ambitious project I've ever undertaken.

Here's what we're doing: 🧵
July 11, 2025 at 6:56 PM

Today we're launching:
- A position paper that articulates the conceptual foundations of FSA (jytmawd4y4kxsznl.public.blob.vercel-storage.com/Full_Stack_...)
- A website that will be the homepage of FSA going forward (www.full-stack-alignment.ai/)
July 11, 2025 at 6:56 PM

Why do we need to co-align AI *and* institutions?

AI systems don't exist in a vacuum. They are embedded within institutions whose incentives shape their deployment.

Often, those institutional incentives are not aligned with what's in our best interest.
July 11, 2025 at 6:56 PM

So, full-stack alignment is our way of saying that we need AI and institutions that "fit" us, and that help us live the lives we want to live.
July 11, 2025 at 6:56 PM

Current approaches tend to fall into what we call "Preferentist Models of Value" (PMV) or "Values-as-Text" (VAT). Both struggle to preserve the richness of what people care about as value information propagates up the "societal stack".
July 11, 2025 at 6:56 PM

PMV in particular (the dominant paradigm in microeconomics, game theory, mechanism design, social choice theory, etc.) fails to capture the richness of human motivation, because preferences bundle all kinds of signals into a single flattened ordering.
July 11, 2025 at 6:56 PM

In principle, unstructured text could be an improvement; after all, language is how humans naturally express values.

But its lack of internal structure becomes a critical weakness when we need *reliability* across contexts and institutions.
July 11, 2025 at 6:56 PM

Instead, we call for a new paradigm: "Thick Models of Value" (TMV).

TMV is a broad class of structured approaches to modeling values and norms that:

1. are more robust against distortions
2. treat collective values and norms better
3. generalize better
July 11, 2025 at 6:57 PM

This is a huge project. We'll need lots of help.

But if we succeed, the future could be more beautiful than we can possibly imagine today.
July 11, 2025 at 6:57 PM