David Debot
daviddebot.bsky.social
PhD student @dtai-kuleuven.bsky.social in neurosymbolic AI and concept-based learning
https://daviddebot.github.io/
Open-source & easy to use!
🔷 Code: github.com/ML-KULeuven/...
🔷 Based on MiniHack & Stable Baselines3
🔷 Define new shields in just a few lines of code!
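To give a flavor of what "a few lines" can look like, here is a minimal sketch of a custom shield. The function name, predicate dict, and action strings are all illustrative assumptions, not the library's actual API:

```python
# Hypothetical sketch of a hand-written shield rule; the names and the
# symbolic-state format are assumptions, not the real library interface.

def avoid_lava_shield(symbolic_state, action):
    """Return the probability that `action` is safe in `symbolic_state`.

    `symbolic_state` is assumed to be a dict of predicates extracted
    from the agent's sensors, e.g. {"lava_ahead": True}.
    """
    if symbolic_state.get("lava_ahead") and action == "move_forward":
        return 0.0  # stepping into lava is never safe
    return 1.0      # every other action is considered safe here

# The shield vetoes moving forward when lava is directly ahead:
state = {"lava_ahead": True}
print(avoid_lava_shield(state, "move_forward"))  # 0.0
print(avoid_lava_shield(state, "turn_left"))     # 1.0
```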

🚀 Let’s make RL safer & smarter, together!
(7/8)
February 24, 2025 at 12:28 PM
Want to try it yourself? 🎮

Use our interactive web demo!
🔷 Modify environments (add lava, monsters!)
🔷 Test shielded vs. non-shielded agents

🖥️ Play with it here: dtai.cs.kuleuven.be/projects/nes...
(6/8)
February 24, 2025 at 12:28 PM
Why does this matter?
🔷 Faster training ⌛
🔷 Safer exploration 🔒
🔷 Better generalization 🌍
(5/8)
February 24, 2025 at 12:27 PM
How does it work? 🤔🛡️

The shield:
✅ Exploits symbolic data from sensors 🌍
✅ Uses logical rules 📜
✅ Prevents unsafe actions 🚫
✅ Still allows flexible learning 🤖

A perfect blend of symbolic reasoning & deep learning!
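The core idea above can be sketched in a few lines: rescale the policy's action probabilities by each action's safety probability, then renormalize. The numbers below are illustrative assumptions, not real model output:

```python
# Minimal sketch of probabilistic shielding: weight the policy's action
# distribution by P(safe | action) and renormalize. Toy numbers only.

def shield_policy(action_probs, safety_probs):
    """Combine policy and shield: P(a) * P(safe | a), renormalized."""
    weighted = [p * s for p, s in zip(action_probs, safety_probs)]
    total = sum(weighted)
    if total == 0:
        raise ValueError("no safe action available")
    return [w / total for w in weighted]

policy = [0.5, 0.3, 0.2]   # the agent's preferences over 3 actions
safety = [0.0, 1.0, 1.0]   # action 0 leads into lava: P(safe) = 0
print(shield_policy(policy, safety))  # [0.0, 0.6, 0.4]
```

Unsafe actions get probability 0, while the agent keeps learning over the remaining (renormalized) safe actions.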
(4/8)
February 24, 2025 at 12:27 PM
Enter MiniHack, our demo's testing ground! 🏰🗡️

There, RL agents face:
✅ Lava cliffs & slippery floors
✅ Chasing monsters
✅ Locked doors needing keys

Findings:
🔷 Standard RL struggles to find an optimal, safe policy.
🔷 Shielded RL agents stay safe & learn faster!
(3/8)
February 24, 2025 at 12:27 PM
Deep RL is powerful, but...
⚠️ It can take dangerous actions
⚠️ It lacks safety guarantees
⚠️ It struggles with real-world constraints

Yang et al.'s probabilistic logic shields address this, enforcing safety constraints without sacrificing learning efficiency! 🚀
(2/8)
February 24, 2025 at 12:26 PM
A short overview video can be found on YouTube: youtu.be/CgSDhQKESD0?...

#NeurIPS2024
Interpretable Concept-Based Memory Reasoning - NeurIPS 2024
December 23, 2024 at 10:23 AM
Or check out our Medium post: 👉 medium.com/@pyc.devteam... (7/7)
December 4, 2024 at 8:50 AM
With CMR, we’re reaching the sweet spot of accuracy and interpretability. Check it out at our poster at #NeurIPS2024! 👉 neurips.cc/virtual/2024... (6/7)
December 4, 2024 at 8:49 AM
During training, CMR learns embeddings as latent representations of logic rules, and a neural rule selector identifies the most relevant rule for each instance. Due to a clever factorization and rule selector, inference is linear in the number of concepts and rules. (5/7)
December 4, 2024 at 8:49 AM
CMR makes a prediction in 3 steps:
1) Predict concepts from the input
2) Neurally select a rule from a memory of learned logic rules ➨ Accuracy
3) Evaluate the selected rule with the concepts to make a final prediction ➨ Interpretability (4/7)
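The three steps above can be sketched as follows; the toy rule, concepts, and stand-in selector are assumptions for illustration, not the trained model:

```python
# Illustrative sketch of CMR's three prediction steps. The rule memory,
# concepts, and fixed selector below are toy assumptions.

def predict_concepts(x):
    # Step 1: a neural backbone would map raw input to concept scores;
    # here the concepts are assumed to be given directly.
    return x

def select_rule(concepts, rules):
    # Step 2: a neural selector would pick the most relevant rule from
    # memory; we stand in with a fixed choice for illustration.
    return rules[0]

def evaluate_rule(rule, concepts):
    # Step 3: evaluate the selected logic rule on the predicted concepts,
    # so the final prediction is readable as a symbolic rule application.
    return rule(concepts)

rules = [lambda c: c["red"] and c["round"]]  # e.g. "apple <- red AND round"
concepts = predict_concepts({"red": True, "round": True})
rule = select_rule(concepts, rules)
print(evaluate_rule(rule, concepts))  # True
```

Because the final step is a plain logic-rule evaluation over named concepts, the prediction comes with its own explanation.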
December 4, 2024 at 8:48 AM
CMR has:
⚡ State-of-the-art accuracy that rivals black-box models
🚀 Pure probabilistic semantics with linear-time exact inference
👁️ Transparent decision-making so human users can interpret model behavior
🛡️ Pre-deployment verifiability of model properties (3/7)
December 4, 2024 at 8:47 AM
CMR is our latest neurosymbolic concept-based model. Provably a 𝘶𝘯𝘪𝘷𝘦𝘳𝘴𝘢𝘭 𝘣𝘪𝘯𝘢𝘳𝘺 𝘤𝘭𝘢𝘴𝘴𝘪𝘧𝘪𝘦𝘳 irrespective of the concept set, it achieves near-black-box accuracy by combining 𝗿𝘂𝗹𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 and 𝗻𝗲𝘂𝗿𝗮𝗹 𝗿𝘂𝗹𝗲 𝘀𝗲𝗹𝗲𝗰𝘁𝗶𝗼𝗻! (2/7)
December 4, 2024 at 8:47 AM