Denis Sutter
@denissutter.bsky.social
MSc at @eth, interested in ML interpretability
arxiv.org
July 15, 2025 at 2:37 PM
9/9
Paper title: The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?
Authored by Denis Sutter, @jkminder, Thomas Hofmann, and @tpimentelms.
July 15, 2025 at 11:21 AM
8/9 In summary, causal abstraction remains a valuable framework, but without explicit assumptions about how mechanisms are represented, it risks producing interpretability results that are not robust or meaningful.
July 15, 2025 at 11:21 AM
7/9 For generality, we also replicate these findings on simpler architectures (MLPs), across multiple random seeds and on two additional tasks. This indicates that the issue is not confined to LLMs but applies more broadly.
July 15, 2025 at 11:21 AM
6/9 We further show that small LLMs, which fail at the Indirect Object Identification task, can nevertheless be interpreted as implementing such an algorithm.
July 15, 2025 at 11:21 AM
5/9 Beyond the theoretical argument, we present a broad set of experiments supporting our claim. Most notably, we show that a randomly initialised LLM can be interpreted as implementing an algorithm for Indirect Object Identification.
July 15, 2025 at 11:21 AM
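To make the flavour of 5/9 above concrete, here is a minimal sketch of my own, not the paper's actual experimental setup, assuming torch, transformers, and scikit-learn are installed: a GPT-2 with random weights is run on IOI-style prompts, and a sufficiently expressive non-linear map over its activations can still recover an IOI variable, even though the model was never trained.

```python
# Toy sketch only (not the paper's setup): a randomly initialised GPT-2 is run
# on IOI-style prompts; an expressive non-linear map over its last-token
# activations then "recovers" which name is the indirect object.
import itertools
import torch
from transformers import GPT2Config, GPT2Model, GPT2Tokenizer
from sklearn.neural_network import MLPClassifier

torch.manual_seed(0)
model = GPT2Model(GPT2Config()).eval()             # random weights, never trained
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # tokenizer only, no trained weights

names = ["Mary", "John", "Anna", "Tom", "Lisa", "Mark"]
prompts, io_is_first = [], []
for a, b in itertools.permutations(names, 2):
    # The repeated name is the subject; the other name is the indirect object (IO).
    prompts.append(f"When {a} and {b} went to the store, {b} gave a drink to")
    io_is_first.append(1)   # IO is the first-mentioned name
    prompts.append(f"When {a} and {b} went to the store, {a} gave a drink to")
    io_is_first.append(0)   # IO is the second-mentioned name

with torch.no_grad():
    feats = [
        model(**tokenizer(p, return_tensors="pt")).last_hidden_state[0, -1].numpy()
        for p in prompts
    ]

# A sufficiently expressive map can read the IO position off purely random features.
probe = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=5000, random_state=0)
probe.fit(feats, io_is_first)
print("accuracy on the random model:", probe.score(feats, io_is_first))
```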
4/9 This occurs because the existing theoretical framework makes no structural assumptions about how mechanisms are encoded in distributed representations. This is closely related to the accuracy-complexity trade-off in probing.
July 15, 2025 at 11:21 AM
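The accuracy-complexity trade-off mentioned in 4/9 above is easy to illustrate with a hypothetical probing setup (my own illustration, assuming numpy and scikit-learn): the "activations" below are pure noise, so any accuracy a probe reaches above chance reflects the probe's own capacity rather than information the model encodes.

```python
# Hypothetical illustration (not from the paper): probes of increasing capacity
# are fit to pure-noise "activations" with arbitrary labels. Higher accuracy
# here can only come from probe complexity, not from structure in the model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
activations = rng.normal(size=(500, 32))     # stand-in model activations: noise
concept = rng.integers(0, 2, size=500)       # arbitrary binary "concept" labels

for hidden in [(4,), (64,), (512, 512)]:     # increasingly complex probes
    probe = MLPClassifier(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
    probe.fit(activations, concept)
    acc = probe.score(activations, concept)  # training accuracy; chance is 0.5
    print(f"probe {hidden}: accuracy {acc:.2f}")
```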
3/9 We do not critique causal abstraction as a framework in itself. Rather, we show that combining it with the current understanding that modern models store information in a distributed way introduces a fundamental problem.
July 15, 2025 at 11:21 AM
2/9 We demonstrate, both theoretically (under reasonable assumptions) and empirically on real-world models, that with arbitrarily complex representations any algorithm can be mapped onto any model.
July 15, 2025 at 11:21 AM
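The core claim in 2/9 can be seen with an intentionally extreme toy construction of mine (not the paper's formal argument): if the map from hidden states to high-level variables may be arbitrarily complex, it can simply memorise which hidden state should count as which algorithm state, so any model whose inputs produce distinct hidden states can be "aligned" with any algorithm.

```python
# Deliberately extreme toy (my construction, not the paper's proof): with an
# unrestricted alignment map, a lookup table suffices to make a random "model"
# look as if it computes any high-level variable we choose.
import numpy as np

rng = np.random.default_rng(0)
inputs = [f"input_{i}" for i in range(100)]

# Arbitrary "model": a random, input-specific hidden state with no task structure.
hidden_states = {x: rng.normal(size=16) for x in inputs}

# Arbitrary "algorithm": any assignment of a high-level variable to each input.
algorithm_variable = {x: int(rng.integers(0, 2)) for x in inputs}

# Unrestricted alignment map: memorise which hidden state maps to which value.
alignment_map = {hidden_states[x].tobytes(): algorithm_variable[x] for x in inputs}

recovered = [alignment_map[hidden_states[x].tobytes()] for x in inputs]
accuracy = np.mean([r == algorithm_variable[x] for x, r in zip(inputs, recovered)])
print(f"alignment accuracy with an unrestricted map: {accuracy:.2f}")  # 1.00 by construction
```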