Pierre Beckmann
pierrebeckmann.bsky.social
Deep learning researcher who turned to philosophy.

Epistemology of AI.
Pinned
Does SORA "understand" the world? For example, does it understand the movement of the ship in the coffee cup below?

In my latest Synthese article I tackle this question!
Reposted by Pierre Beckmann
A discussion on the philosophy of deep learning, mechanistic interpretability and the epistemology of LLMs. @pierrebeckmann.bsky.social @matthieu-queloz.bsky.social youtu.be/1_0ttM8zp9o?...
Mechanistic Interpretability and How LLMs Understand
YouTube video by Rahul Sam
youtu.be
January 10, 2026 at 10:55 PM
Reposted by Pierre Beckmann
One of the best discussions of AI I've seen in a while, because it's deeply informed by philosophy AND computer science. LLMs are more than just "stochastic parrots", but their understanding is still nonhuman. The discussion of concepts, understanding, and world models is especially informative.
January 12, 2026 at 1:43 AM
Reposted by Pierre Beckmann
We’ve recently updated our collaborative open-access book, “Neural Networks in Cognitive Science”, adding a few new authors, chapters, and lots of content.

downloads.jeffyoshimi.net/NeuralNetwor...
October 21, 2025 at 8:44 PM
New preprint: “Mechanistic Indicators of Understanding in LLMs” with @matthieu-queloz.bsky.social
Building on mechanistic interpretability, we argue that LLMs exhibit signs of understanding across three tiers: conceptual, state-of-the-world, and principled understanding. 🧵(1/9)
July 15, 2025 at 1:27 PM