wolfstammer.bsky.social
@wolfstammer.bsky.social
PhD candidate at AI & ML lab @ TU Darmstadt (he/him). Research on deep learning, representation learning, neuro-symbolic AI, explainable AI, verifiable AI, and interactive AI.
🧠🔍 Can deep models be verifiably right for the right reasons?

At ICML’s Actionable Interpretability Workshop, we present Neural Concept Verifier—bringing Prover–Verifier Games to concept space.

📅 Poster: Sat, July 19
📄 arxiv.org/abs/2507.07532
#ICML2025 #XAI #NeuroSymbolic
July 13, 2025 at 10:44 AM
Can concept-based models handle complex, object-rich images? We think so! Meet Object-Centric Concept Bottlenecks (OCB) — adding object-awareness to interpretable AI. Led by David Steinmann w/ @toniwuest.bsky.social & @kerstingaiml.bsky.social.
📄 arxiv.org/abs/2505.244...
#AI #XAI #NeSy #CBM #ML
July 7, 2025 at 3:55 PM
Reposted
Reasonable Artificial Intelligence and The Adaptive Mind: TU Darmstadt has been awarded funding for two cluster projects under the Excellence Strategy of the German federal and state governments. A milestone for our university! www.tu-darmstadt.de/universitaet...
Two Clusters of Excellence for TU Darmstadt
A major success for TU Darmstadt: two of its research projects will be funded as Clusters of Excellence. The Excellence Commission in the competition for the prestigious Excell...
www.tu-darmstadt.de
May 22, 2025 at 4:20 PM
🚨 New #ICML2025 paper!
"Bongard in Wonderland: Visual Puzzles that Still Make AI Go Mad?"
We test Vision-Language Models on classic visual puzzles—and even simple concepts like “spiral direction” or “left vs. right” trip them up. Big gap to human reasoning remains.
📄 arxiv.org/pdf/2410.19546
May 7, 2025 at 1:39 PM
Reposted
🔥 Our work “Where is the Truth? The Risk of Getting Confounded in a Continual World” was accepted as a spotlight poster at ICML!
arxiv.org/abs/2402.06434

-> we introduce continual confounding + the ConCon dataset, where confounders over time render continual knowledge accumulation insufficient ⬇️
May 2, 2025 at 9:48 AM
I am happy to share that my dissertation is now officially available online!
Feel free to take a look :) tuprints.ulb.tu-darmstadt.de/29712/
April 14, 2025 at 7:01 PM
Reposted
2018: Saliency maps give plausible interpretations of random weights, triggering skepticism and catalyzing the mechinterp cultural movement, which now advocates for SAEs.

2025: SAEs give plausible interpretations of random weights, triggering skepticism and ...
March 3, 2025 at 6:42 PM
Reposted
We all know backpropagation can calculate gradients, but it can do much more than that!

Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
February 27, 2025 at 11:45 PM
Happy to share that I successfully defended my PhD on Feb 19th with distinction! My work on "The Value of Symbolic Concepts for AI Explanations and Interactions" has been a rewarding journey. Huge thanks to my mentors, peers, and committee for their support! Excited for what’s next! 🚀
February 24, 2025 at 9:01 PM