Abel Jansma
@abelaer.bsky.social
Emergence and compositionality in complex and living systems || Fellow @emergenceDIEP, University of Amsterdam || previously at MPI Leipzig & University of Edinburgh

abeljansma.nl
A few months ago I started discussing causal emergence with Erik Hoel.
This led to a really fun collaboration, and a new approach to “engineer emergence”.

Erik just published an overview of the ideas, goals, and dreams:
October 22, 2025 at 5:24 PM
I know you like showing pictures of lenses but this seems a little excessive
October 14, 2025 at 7:54 PM
This shows how intimately related Shapley values and Möbius inversions are: we derive a formula that expresses Shapley values *purely in terms of the incidence algebra*!
October 8, 2025 at 2:49 PM
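(Not the paper's incidence-algebra construction, but a minimal sketch of the classical link it builds on: the Harsanyi dividends of a game are the Möbius inversion of its value function over the subset lattice, and the Shapley value of a player is each dividend shared equally among its coalition. The game `v` below is an arbitrary toy example.)

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s as frozensets, including the empty set."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def harsanyi_dividends(v, players):
    """Mobius inversion of the game over the subset lattice:
    d(S) = sum over T <= S of (-1)^(|S|-|T|) v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
            for S in subsets(players) if S}

def shapley(v, players):
    """Shapley value of player i: every dividend d(S) with i in S,
    split equally among the |S| members of S."""
    d = harsanyi_dividends(v, players)
    return {i: sum(d[S] / len(S) for S in d if i in S) for i in players}

# Toy 2-player game with pure synergy: only the full coalition has value.
players = frozenset({'a', 'b'})
v = lambda S: 1.0 if S == players else 0.0
print(shapley(v, players))  # both players get 0.5
```

By efficiency, the values sum to v(full coalition); here the single synergistic dividend of 1 is split evenly.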
Doing so also required generalising the Möbius inversion theorem to this setting (previously only defined for ring-valued functions). We show that it's a natural theorem in the *path algebra* of the graph:
October 8, 2025 at 2:49 PM
But we go further.
Classical Shapley values only work for real-valued functions on power sets of players (or lattices).

We generalise them even beyond posets to
✅ vector/group-valued functions
✅ weighted directed acyclic multigraphs
and prove uniqueness!
October 8, 2025 at 2:49 PM
That’s exactly what we do.
We reinterpret Shapley values as projection operators: a recursive re-attribution of higher-order synergy to lower-order parts.

This turns Shapley values into a general projection framework for hierarchical structure, valid far beyond game theory.
October 8, 2025 at 2:49 PM
Möbius inversions are a way to derive higher-order interactions in a system's mereology. I wrote a blog post about this here 👉 https://abeljansma.nl/2025/01/28/mereoPhysics.html

If Shapley values are truly general, we should be able to express them for any Möbius inversion/higher-order structure.
October 8, 2025 at 2:49 PM
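(A minimal illustration of the Möbius inversion behind this: on the Boolean lattice of subsets the Möbius function is (-1)^(|S|-|T|), so summing a function over all parts and then inverting recovers the original part-wise values exactly. The function `g` below is an arbitrary stand-in for "interaction" values.)

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s as frozensets, including the empty set."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

ground = frozenset({'x', 'y', 'z'})
g = {S: float(len(S)) ** 2 for S in subsets(ground)}  # arbitrary part-wise values

# Zeta transform: aggregate g over all parts T <= S.
f = {S: sum(g[T] for T in subsets(S)) for S in subsets(ground)}

# Mobius inversion on the subset lattice: mu(T, S) = (-1)^(|S|-|T|).
g_rec = {S: sum((-1) ** (len(S) - len(T)) * f[T] for T in subsets(S))
         for S in subsets(ground)}

assert all(abs(g[S] - g_rec[S]) < 1e-9 for S in subsets(ground))
```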
This is how Rota originally introduced the incidence algebra. Everyone since has (correctly) required the ring to be commutative. Did people in the '60s just refer to commutative rings as associative rings?
September 23, 2025 at 7:11 PM
The #NeurIPS2025 version is now online: arxiv.org/pdf/2501.11447

It includes a new analysis to show that LLM semantics can be decomposed: the negativity of "horribly bad" is redundantly encoded in the two words, whereas "not bad" has synergistic semantics (i.e. negation):
September 22, 2025 at 7:58 AM
While I'm flattered, it's a bit weird that google's AI defers to me when you search for this:
August 5, 2025 at 11:57 AM
just one more index bro I swear just one more subscript and it's gonna be so clear just one more index please bro
February 6, 2025 at 2:31 PM
In chemical networks, reaction rates determine whether control is redundant (independent but similar pathways, in red) or synergistic (cooperative synthesis, in green).
January 22, 2025 at 11:13 AM
In cellular automata, causal power is strongly context-dependent. The same rule can show different causal decompositions on different initial conditions!
January 22, 2025 at 11:13 AM
In logic gates, we found that causal power shifts between redundant and synergistic depending on input probabilities. XOR gates only show pure synergy at p=0.5! (synergy in green, redundancy in red)
January 22, 2025 at 11:13 AM
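(You can see the XOR synergy directly with plain mutual information, without any decomposition machinery: at p=0.5 each input alone tells you nothing about the output, yet the pair determines it completely; away from p=0.5 the individual inputs start carrying information. This is only an illustrative check, not the causal decomposition from the paper.)

```python
from itertools import product
from math import log2

def xor_distribution(p):
    """Joint distribution over (x1, x2, y) with independent Bernoulli(p)
    inputs and y = x1 XOR x2."""
    return {(x1, x2, x1 ^ x2): (p if x1 else 1 - p) * (p if x2 else 1 - p)
            for x1, x2 in product((0, 1), repeat=2)}

def mutual_info(joint, a_idx, b_idx):
    """I(A;B) in bits, where A and B are the tuple positions in a_idx, b_idx."""
    def marginal(idx):
        m = {}
        for outcome, pr in joint.items():
            key = tuple(outcome[i] for i in idx)
            m[key] = m.get(key, 0.0) + pr
        return m
    pa, pb, pab = marginal(a_idx), marginal(b_idx), marginal(a_idx + b_idx)
    return sum(pr * log2(pr / (pa[k[:len(a_idx)]] * pb[k[len(a_idx):]]))
               for k, pr in pab.items() if pr > 0)

for p in (0.5, 0.9):
    j = xor_distribution(p)
    print(p, mutual_info(j, (0,), (2,)), mutual_info(j, (0, 1), (2,)))
```

At p=0.5 the single-input informations vanish while the joint information is a full bit (pure synergy); at p=0.9 each input alone already carries information about the output.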
You can now calculate the ‘partial causal effects’, which tell you exactly how causality is distributed among variables.
I tested this on
- Logic gates
- Cellular automata
- Chemical reaction networks.
January 22, 2025 at 11:13 AM
The decomposition is a sum over a partial order called the ‘redundancy lattice’.
It's pretty complicated, but with @frosas.bsky.social and @PedroMediano we recently calculated the ‘fast Möbius transform’ for it: arxiv.org/abs/2410.06224
January 22, 2025 at 11:13 AM
The 'partial information decomposition' splits information into ‘antichains’. I do the same with do-operators from @yudapearl's do-calculus:
January 22, 2025 at 11:13 AM
I bet those ants can get an even bigger sofa around a corner.
December 27, 2024 at 7:06 AM
I always found it crazy that there were just 66 years between the first human flight and the moon landing.

Now in 15 months, AIs have gone from random guessing to expert level.
November 28, 2024 at 8:20 AM
That's right! If you squint a little, all these quantities are based on the same principle of Möbius inversion. I originally thought this was mostly a fun observation, but it's now becoming clear that it's very useful--we recently used it here, for example: arxiv.org/abs/2410.06224
November 27, 2024 at 12:53 PM
In the app you can use the mic here:
November 27, 2024 at 5:10 AM
How your email finds me
November 24, 2024 at 9:53 AM