David Debot
daviddebot.bsky.social
PhD student @dtai-kuleuven.bsky.social in neurosymbolic AI and concept-based learning
https://daviddebot.github.io/
Reposted by David Debot
Just under 10 days left to submit your latest endeavours in #tractable probabilistic models!

Join us at TPM @auai.org #UAI2025 and show how to build #neurosymbolic / #probabilistic AI that is both fast and trustworthy!
the #TPM ⚡Tractable Probabilistic Modeling ⚡Workshop is back at @auai.org #UAI2025!

Submit your work on:

- fast and #reliable inference
- #circuits and #tensor #networks
- normalizing #flows
- scaling #NeSy #AI
...& more!

🕓 deadline: 23/05/25
👉 tractable-probabilistic-modeling.github.io/tpm2025/
May 14, 2025 at 5:48 PM
Reposted by David Debot
We developed a library to make logical reasoning embarrassingly parallel on the GPU.

For those at ICLR 🇸🇬: you can get the juicy details tomorrow (poster #414 at 15:00). Hope to see you there!
April 23, 2025 at 8:13 AM
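The post doesn't name the library or its API, but the core idea of "embarrassingly parallel" logical reasoning is that one formula can be evaluated over many truth assignments independently. A minimal pure-Python sketch of that data parallelism (the formula and names below are hypothetical, not taken from the library):

```python
def eval_batch(assignments):
    """Evaluate f = (a AND b) OR (NOT c) over a batch of assignments.

    `assignments` is a list of (a, b, c) boolean triples. Each assignment
    is evaluated independently of the others, so the loop maps directly
    onto GPU threads -- this is what makes the workload embarrassingly
    parallel.
    """
    return [(a and b) or (not c) for (a, b, c) in assignments]

batch = [(True, True, True), (False, False, True), (False, True, False)]
print(eval_batch(batch))  # [True, False, True]
```

On a GPU, the same computation would be expressed as elementwise tensor operations over the whole batch at once rather than a Python loop.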
Reposted by David Debot
If you're at #AAAI2025, come check out our demo on neurosymbolic reinforcement learning with probabilistic logic shields 🤖 Tomorrow (Sat, March 1) from 12:30–2:30 PM during the poster session 💻
🚀 Do you care about safe AI? Do you want RL agents that are both smart & trustworthy?

At #AAAI2025, we present our demo for neurosymbolic RL—combining deep learning with probabilistic logic shields for safer, interpretable AI in complex environments. 🏰🔥
🧵👇
(1/8)
February 28, 2025 at 10:53 PM
Reposted by David Debot
We all know backpropagation can calculate gradients, but it can do much more than that!

Come to my #AAAI2025 oral tomorrow (11:45, Room 119B) to learn more.
February 27, 2025 at 11:45 PM
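The post doesn't say which uses of backpropagation the talk covers. One classic example from the probabilistic-circuits literature of gradients computing something other than a training signal: for a weighted model count (WMC) with a separate weight per literal, the derivative with respect to a literal's weight recovers a marginal probability. A hand-worked sketch for the formula (a OR b), with hypothetical function names:

```python
def wmc(wa, wna, wb, wnb):
    """Weighted model count of (a OR b), one weight per literal.

    Models: {a, b}, {a, not b}, {not a, b}.
    """
    return wa * (wb + wnb) + wna * wb

def marginal_a(wa, wna, wb, wnb):
    """P(a AND (a OR b)) = P(a), obtained via d wmc / d wa.

    The WMC is multilinear in the literal weights, so the partial
    derivative w.r.t. wa is the count of the models containing a;
    multiplying it back by wa gives the marginal.
    """
    grad_wa = wb + wnb  # d wmc / d wa, written out by hand
    return wa * grad_wa

# With P(a) = 0.3 and P(b) = 0.5:
print(wmc(0.3, 0.7, 0.5, 0.5))        # 0.65 = P(a OR b)
print(marginal_a(0.3, 0.7, 0.5, 0.5)) # 0.3  = P(a)
```

In practice an autodiff framework computes all such derivatives in one backward pass over the circuit, giving every marginal at the cost of roughly one extra evaluation.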
Reposted by David Debot
🔥 Can AI reason over time while following logical rules in relational domains? We will present Relational Neurosymbolic Markov Models (NeSy-MMs) next week at #AAAI2025! 🎉

📜 Paper: arxiv.org/pdf/2412.13023
💻 Code: github.com/ML-KULeuven/...

🧵⬇️
February 25, 2025 at 11:01 AM
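As a toy illustration of the question the post poses (reasoning over time while obeying logical rules), here is one simple way neural transition probabilities and a symbolic rule can be combined in a Markov-style update: the rule masks out violating transitions and the distribution is renormalized. The states, rule, and function names are hypothetical, not from the NeSy-MMs paper:

```python
STATES = ["idle", "move", "crash"]

def allowed(prev, nxt):
    # Example logical rule: never transition into "crash".
    return nxt != "crash"

def step(prev_state, neural_probs):
    """One constrained transition step.

    neural_probs[i] is the (made-up) neural network's probability of
    moving from prev_state to STATES[i]; transitions forbidden by the
    rule get probability zero, and the rest are renormalized.
    """
    masked = {s: p if allowed(prev_state, s) else 0.0
              for s, p in zip(STATES, neural_probs)}
    z = sum(masked.values())
    return {s: p / z for s, p in masked.items()}

print(step("idle", [0.2, 0.5, 0.3]))
```

The forbidden state ends up with probability exactly zero while the distribution over the remaining states still sums to one.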
🚀 Do you care about safe AI? Do you want RL agents that are both smart & trustworthy?

At #AAAI2025, we present our demo for neurosymbolic RL—combining deep learning with probabilistic logic shields for safer, interpretable AI in complex environments. 🏰🔥
🧵👇
(1/8)
February 24, 2025 at 12:26 PM
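The shielding idea referenced above can be sketched in a few lines: the shield supplies a per-action safety probability, and the policy's action distribution is reweighted by it and renormalized, suppressing unsafe actions. A minimal sketch with made-up numbers (the real shield computes P(safe | s, a) by probabilistic logical inference):

```python
def shield_policy(action_probs, safety_probs):
    """Reweight a policy by per-action safety probabilities.

    action_probs[i]  -- pi(a_i | s) from the neural policy
    safety_probs[i]  -- P(safe | s, a_i) from the logic shield
    Returns the renormalized, shielded action distribution.
    """
    weighted = [p * s for p, s in zip(action_probs, safety_probs)]
    total = sum(weighted)
    return [w / total for w in weighted]

# An action the shield deems mostly unsafe (0.1) is suppressed
# relative to an equally likely but safe one (0.9):
print(shield_policy([0.5, 0.5], [0.1, 0.9]))
```

Because the reweighting is differentiable, the policy can still be trained end to end through the shield.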
🚨 Interpretable AI often means sacrificing accuracy—but what if we could have both? Most interpretable AI models, like Concept Bottleneck Models, force us to trade accuracy for interpretability.

But not anymore, thanks to the Concept-Based Memory Reasoner (CMR)! #NeurIPS2024 (1/7)
December 4, 2024 at 8:46 AM
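For readers unfamiliar with the Concept Bottleneck Models the post mentions: the input is first mapped to human-interpretable concepts, and the task label is predicted *only* from those concepts, which is what makes the model inspectable (and what usually costs accuracy). A toy sketch of that bottleneck structure, with hypothetical concepts and a hand-written rule standing in for the learned components:

```python
def predict_concepts(x):
    # Stand-in for a neural concept predictor: raw features in,
    # human-interpretable concepts out.
    return {
        "has_wings": x["wingspan_cm"] > 0,
        "flies": x["max_altitude_m"] > 0,
    }

def predict_label(concepts):
    # The task head sees only the concepts, never the raw input --
    # that is the "bottleneck".
    if concepts["has_wings"] and concepts["flies"]:
        return "bird"
    return "not bird"

x = {"wingspan_cm": 30, "max_altitude_m": 100}
print(predict_label(predict_concepts(x)))  # bird
```

CMR (per the post) aims to keep this interpretable structure without giving up accuracy; the sketch only shows the baseline architecture being improved upon.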