- Safe reinforcement learning
- Logical control in language models
- Temporal reasoning with strong guarantees
Let's discuss! Thoughts, questions, or collaborations? ⬇️ #neurosymbolic #AAAI2025
🙌 @lennertds.bsky.social, @giuseppemarra.bsky.social, @lucderaedt.bsky.social
They generalize better to out-of-distribution settings and allow test-time constraint adaptation, making them robust & versatile.
✅ Relational probabilistic reasoning
✅ Logical constraints
✅ Neural learning
✅ Approximate Bayesian inference
We introduce a novel differentiable particle filter that enables efficient inference & learning while maintaining logical consistency.
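To give a flavor of the idea (this is a toy sketch, not the paper's NeSy-MM method): a particle filter can enforce a logical constraint by zeroing the weight of any particle that violates it, so every particle surviving resampling provably satisfies the constraint. The transition model, Gaussian likelihood, and the non-negativity constraint below are all illustrative assumptions; a differentiable version would additionally use soft resampling in an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def constraint(x):
    # Hypothetical logical constraint for illustration: state must be non-negative.
    return x >= 0.0

def particle_filter_step(particles, weights, observation, noise=0.5):
    # 1. Propose: random-walk transition model (assumed for illustration).
    proposed = particles + rng.normal(0.0, noise, size=particles.shape)
    # 2. Weight by a Gaussian observation likelihood (assumed).
    lik = np.exp(-0.5 * (observation - proposed) ** 2)
    # 3. Enforce the constraint: violating particles get weight zero,
    #    so the filtered posterior puts no mass on invalid states.
    lik = np.where(constraint(proposed), lik, 0.0)
    new_w = weights * lik
    new_w /= new_w.sum()
    # 4. Resample to avoid weight degeneracy; weights reset to uniform.
    idx = rng.choice(len(proposed), size=len(proposed), p=new_w)
    return proposed[idx], np.full(len(proposed), 1.0 / len(proposed))

particles = np.abs(rng.normal(1.0, 1.0, size=1000))
weights = np.full(1000, 1.0 / 1000)
for obs in [1.2, 0.8, 1.0]:
    particles, weights = particle_filter_step(particles, weights, obs)

# Every surviving particle satisfies the logical constraint.
assert np.all(particles >= 0)
```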
NeSy-MMs bridge this gap: a new class of differentiable models that provably satisfy relational logical constraints while scaling efficiently.