Samuel Teuber
teuber.bsky.social
Doctoral Researcher at KIT's Computer Science Department | Formal Methods for Software & AI (Focus on CPS and Fairness Verification)
Currently migrating from Twitter (@teuber_dev)
www.teuber.dev
Thank you @etapsconf.bsky.social for the great event — so many interesting talks and discussions at #etaps this year!
I also had the opportunity to present my #tacas paper on verifying behavioral equivalence of neural networks 😃
May 9, 2025 at 1:59 PM
This is really cool!
I'm obviously nitpicking here, but I think it's very meta that quite a few papers on Neural Network Verification are right around the "decision boundary" of the Adversarial Robustness cluster...
December 8, 2024 at 11:09 PM
Perfection
November 17, 2024 at 11:44 PM
We applied the overall approach to multiple case studies including Vertical Airborne Collision Avoidance (VCAS):
Here, we analyzed NNs from prior work and found numerous concrete safety problems -- but see for yourself (any plane trajectory in the red region is BAD!):
November 17, 2024 at 6:04 PM
We start with a dL safety proof for an abstract control envelope. Our approach then derives an NN specification. We rigorously define the NN's semantics in dL via "nondeterministic mirrors".
Verification of the NN is then mirrored by a proof of infinite-time safety in dL.
November 17, 2024 at 6:04 PM
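A rough sketch of what a "nondeterministic mirror" could look like, assuming interval bound propagation as the abstraction (the tiny ReLU network and its weights are purely illustrative, not from the paper): the NN's output on an input box is over-approximated by an interval, and the dL model then replaces the NN call with a nondeterministic choice from that interval.

```python
import numpy as np

# Hypothetical two-layer ReLU network (toy weights for illustration only).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.5])

def forward(x):
    """Concrete forward pass of the toy network."""
    h = np.maximum(0.0, W1 @ x + b1)
    return W2 @ h + b2

def interval_forward(lo, hi):
    """Propagate an input box [lo, hi] through the network.

    The resulting output interval plays the role of a nondeterministic
    mirror: a dL model may soundly replace the NN by a nondeterministic
    assignment of any value within these bounds.
    """
    def affine(W, b, lo, hi):
        # Split W into positive and negative parts for sound interval bounds.
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

    lo, hi = affine(W1, b1, lo, hi)
    lo, hi = np.maximum(0.0, lo), np.maximum(0.0, hi)  # ReLU is monotone
    return affine(W2, b2, lo, hi)
```

Under this abstraction, proving safety for every choice in the output interval (e.g. in dL, for the program `y := *; ?olo <= y & y <= ohi`) implies safety for the concrete network on that input region.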
You want to ensure that your neural network *never* crashes your control system?

Our (now accepted 🥳) #NeurIPS paper introduces:
- Reusing control theory for NN verification
- Verifying *nonlinear arithmetic* specs on NNs

This guarantees your NN won't behave like this (1/12):
November 17, 2024 at 6:04 PM