Pierre Beckmann
@pierrebeckmann.bsky.social
DL researcher who turned to philosophy.

Epistemology of AI.
Does SORA "understand" the world? For example, does it understand the movement of the ship in the coffee cup below?

In my latest Synthese article I tackle this question!
November 28, 2025 at 2:25 PM
Level 3: Principled understanding
At this last tier, LLMs can grasp the underlying principles that connect and unify a diverse array of facts.
Research on tasks like modular addition provides cases where LLMs move beyond memorizing examples to internalizing general rules. (6/9)
July 15, 2025 at 1:27 PM
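A minimal sketch of that contrast, assuming a standard modular addition setup (the modulus and train/test split here are invented, not taken from the article): a lookup table that memorizes its training pairs fails on unseen pairs, while a model that has internalized the rule generalizes perfectly.

```python
# Minimal sketch of the modular addition setup (hypothetical values; not the
# original study's code). The task: predict (a + b) mod p from the pair (a, b).
import random

p = 97                                    # assumed modulus, as in common grokking setups
pairs = [(a, b) for a in range(p) for b in range(p)]
random.seed(0)
random.shuffle(pairs)
split = int(0.3 * len(pairs))             # train on 30% of pairs, hold out the rest
train, test = pairs[:split], pairs[split:]

# A purely memorizing "model": a lookup table over the training pairs.
lookup = {(a, b): (a + b) % p for a, b in train}

# A "model" that has internalized the general rule.
def rule(a, b):
    return (a + b) % p

memorizer_acc = sum(lookup.get((a, b)) == (a + b) % p for a, b in test) / len(test)
rule_acc = sum(rule(a, b) == (a + b) % p for a, b in test) / len(test)
print(f"memorizer on held-out pairs: {memorizer_acc:.2f}")   # 0.00
print(f"rule-follower on held-out pairs: {rule_acc:.2f}")    # 1.00
```

Interpretability work on this task asks where, inside the trained network, something like the second behaviour is implemented rather than the first.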
But LLMs aren’t limited to static facts—they can also track dynamic states.
OthelloGPT, a GPT-2 model trained on legal Othello moves, encodes the board state in internal representations that update as the game unfolds, as shown by linear probes. (5/9)
July 15, 2025 at 1:27 PM
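A sketch of what such a linear probe looks like, with synthetic activations standing in for OthelloGPT's real hidden states (dimensions and labels are invented for illustration):

```python
# Sketch of a linear probe in the spirit of the OthelloGPT result (synthetic
# data here; real probes are fit on the model's actual hidden activations).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model, n_positions = 512, 2000

# Pretend these are residual-stream activations collected after each move.
activations = rng.normal(size=(n_positions, d_model))
# Label for one board square at each position: 0 = empty, 1 = mine, 2 = yours.
square_state = rng.integers(0, 3, size=n_positions)

# One linear probe per board square: if it recovers the square's state from the
# activations, that state is (linearly) encoded in the representation.
probe = LogisticRegression(max_iter=1000).fit(activations[:1500], square_state[:1500])
print("held-out probe accuracy:", probe.score(activations[1500:], square_state[1500:]))
# ~0.33 (chance) on this random data; well above chance on real OthelloGPT activations.
```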
Level 2: State-of-the-world understanding
LLMs can encode factual associations in the linear projections of their MLP layers.
For instance, they can ensure that a strong activation of the “Golden Gate Bridge” feature leads to a strong activation of the “in SF” feature. (4/9)
July 15, 2025 at 1:27 PM
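A toy picture of that kind of stored association, with invented feature directions and a hand-built weight matrix (nothing here comes from an actual model): a rank-1 linear map sends the "Golden Gate Bridge" direction onto the "in SF" direction, so activating the first feature activates the second.

```python
# Toy illustration (all vectors invented): an MLP-style linear map W that sends
# a "Golden Gate Bridge" feature direction to an "in SF" feature direction.
import numpy as np

rng = np.random.default_rng(0)
d = 64
v_golden_gate = rng.normal(size=d); v_golden_gate /= np.linalg.norm(v_golden_gate)
v_in_sf = rng.normal(size=d);       v_in_sf /= np.linalg.norm(v_in_sf)

# Rank-1 association written into the weights: W maps v_golden_gate onto v_in_sf.
W = np.outer(v_in_sf, v_golden_gate)

hidden = 3.0 * v_golden_gate              # strong "Golden Gate Bridge" activation
out = W @ hidden
print("'in SF' activation:", float(out @ v_in_sf))                            # ~3.0
print("same readout for an unrelated input:",
      float((W @ rng.normal(size=d)) @ v_in_sf))                              # small
```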
How does the model use these features?
Attention layers are key. They retrieve relevant information from earlier tokens and integrate it into the current token’s representation, making the model context-aware. (3/9)
July 15, 2025 at 1:27 PM
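A minimal single-head causal self-attention sketch (toy dimensions, random weights) of that retrieval step: the current token's query scores earlier tokens' keys, and the resulting weights mix their value vectors into its representation.

```python
# Minimal single-head causal self-attention (toy sizes), showing how the
# current token's query selects and mixes in value vectors from earlier tokens.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 16
x = rng.normal(size=(seq_len, d))            # one representation per token

Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d)                # relevance of each token to each query
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -np.inf                       # causal: only attend to past/current tokens
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

context = weights @ V                        # retrieved information for each token
print("attention weights for the last token:", np.round(weights[-1], 2))
```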
Level 1: Conceptual understanding
It emerges when a model forms “features” as directions in latent space, allowing it to recognize and unify diverse manifestations of an entity or a property.
E.g., LLMs subsume “SF’s landmark” or “orange bridge” under a “Golden Gate Bridge” feature.
July 15, 2025 at 1:27 PM
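A toy rendering of a feature as a direction (all vectors here are invented): phrases that are manifestations of the same concept project strongly onto one shared direction, while an unrelated phrase does not.

```python
# Toy picture of "features as directions" (all vectors invented): different
# phrases score highly on the same direction, so the model can treat them as
# manifestations of one concept.
import numpy as np

rng = np.random.default_rng(0)
d = 32
feature_golden_gate = rng.normal(size=d)
feature_golden_gate /= np.linalg.norm(feature_golden_gate)

def embed_along_feature(strength):
    """Hypothetical embedding: the shared feature direction plus phrase-specific noise."""
    return strength * feature_golden_gate + 0.1 * rng.normal(size=d)

phrases = {
    "SF's landmark":     embed_along_feature(1.0),
    "orange bridge":     embed_along_feature(0.9),
    "capital of France": rng.normal(size=d),      # unrelated phrase, no shared direction
}

for phrase, vec in phrases.items():
    cosine = vec @ feature_golden_gate / np.linalg.norm(vec)
    print(f"{phrase!r}: cosine with the Golden Gate feature = {cosine:+.2f}")
```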