Samuel Teuber
teuber.bsky.social
Doctoral Researcher at KIT's Computer Science Department | Formal Methods for Software & AI (Focus on CPS and Fairness Verification)
Currently migrating from Twitter (@teuber_dev)
www.teuber.dev
Pinned
You want to ensure that your neural network *never* crashes your control system?

Our (now accepted 🥳) #NeurIPS paper introduces:
- Reusing control theory for NN verification
- Verifying *nonlinear arithmetic* specs on NNs

This guarantees your NN won't behave like this (1/12):
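As a toy illustration of what "verifying nonlinear arithmetic specs on NNs" can look like, here is a hedged sketch using sound interval propagation through a tiny ReLU network; the weights, bounds, and spec are made up for illustration and are not the paper's method:

```python
# Hedged sketch (not the paper's method): checking a *nonlinear*
# output spec on a tiny ReLU network via sound interval propagation.
# All weights, bounds, and the spec below are illustrative assumptions.

def affine(lo, hi, w, b):
    """Interval image of w*x + b for x in [lo, hi]."""
    a, c = w * lo + b, w * hi + b
    return (min(a, c), max(a, c))

def relu(lo, hi):
    """Interval image of max(x, 0)."""
    return (max(lo, 0.0), max(hi, 0.0))

def square(lo, hi):
    """Interval image of x^2 (nonlinear): needs a case split at 0."""
    if lo >= 0.0:
        return (lo * lo, hi * hi)
    if hi <= 0.0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

# One-neuron-per-layer toy network: y = 0.5 * relu(2*x - 1)
lo, hi = affine(-1.0, 1.0, 2.0, -1.0)   # input x in [-1, 1]
lo, hi = relu(lo, hi)
lo, hi = affine(lo, hi, 0.5, 0.0)

# Nonlinear safety spec: y^2 <= 1 must hold for *all* inputs.
sq_lo, sq_hi = square(lo, hi)
verified = sq_hi <= 1.0
print(verified)  # True: the spec holds over the whole input interval
```

Because interval propagation over-approximates the network's reachable outputs, `verified == True` is a sound proof of the spec, while `False` would only mean "unknown".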
Thank you @etapsconf.bsky.social for the great event — so many interesting talks and discussions at #etaps this year!
I also had the opportunity to present my #tacas paper on verifying behavioral equivalence of neural networks 😃
May 9, 2025 at 1:59 PM
Our paper on confidence-based equivalence verification of NNs has just been accepted to #TACAS25 🥳
I'm looking forward to presenting our new Zonotope-based abstract domain -- we also explored which equivalence properties are more/less amenable to differential verification...
teuber.dev/publication/...
Revisiting Differential Verification: Equivalence Verification with Confidence | Samuel Teuber
We introduce a new abstract domain for differential verification using Zonotopes and explore which equivalence properties are amenable to differential verification. Furthermore, we propose an improve...
teuber.dev
December 22, 2024 at 2:10 PM
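A minimal sketch of the zonotope (affine-form) idea behind differential verification; the class and numbers here are illustrative assumptions, not the paper's actual implementation:

```python
# Hedged sketch of a 1-D zonotope (affine form) abstract domain,
# as used in differential verification of neural networks.
# All names and numbers are illustrative, not the paper's code.

class Zonotope:
    """Affine form c + sum_i g_i * eps_i, with each eps_i in [-1, 1]."""

    def __init__(self, center, gens):
        self.center = center    # real-valued center
        self.gens = dict(gens)  # noise-symbol id -> coefficient

    def __add__(self, other):
        gens = dict(self.gens)
        for k, g in other.gens.items():
            gens[k] = gens.get(k, 0.0) + g
        return Zonotope(self.center + other.center, gens)

    def __sub__(self, other):
        neg = Zonotope(-other.center,
                       {k: -g for k, g in other.gens.items()})
        return self + neg

    def scale(self, a):
        return Zonotope(a * self.center,
                        {k: a * g for k, g in self.gens.items()})

    def interval(self):
        """Smallest enclosing interval [c - r, c + r]."""
        r = sum(abs(g) for g in self.gens.values())
        return (self.center - r, self.center + r)

# Differential verification tracks the *difference* of two networks'
# outputs: shared noise symbols cancel, so the bound on (f1 - f2)(x)
# is far tighter than subtracting two independent intervals.
x = Zonotope(0.0, {"e1": 1.0})   # input x in [-1, 1]
y1 = x.scale(2.0)                # a layer of network 1: 2.0 * x
y2 = x.scale(1.9)                # the same layer of network 2: 1.9 * x
diff = y1 - y2                   # shared symbol e1 mostly cancels
print(diff.interval())           # roughly (-0.1, 0.1), not (-3.9, 3.9)
```

The key design point is that both networks are evaluated over the *same* noise symbols, which is what lets the difference stay small when the networks are nearly equivalent.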
If you're at #NeurIPS this week: I'm organising a "Formal Methods and AI" dinner on Saturday evening.
You can find the details in the Whova App (-> Meet-ups -> Formal Methods and AI Dinner).
December 12, 2024 at 7:06 AM
If you want to learn more about steering Cyber-Physical Systems with Neural Networks and *without* crashes, visit my #NeurIPS poster 4201 Thursday afternoon!
December 12, 2024 at 5:55 AM
This is really cool!
I'm obviously nitpicking here, but I think it's very meta that quite a few papers on Neural Network Verification are right around the "decision boundary" of the Adversarial Robustness cluster...
December 8, 2024 at 11:09 PM
Perfection
November 17, 2024 at 11:44 PM
Hello everyone! I'm giving this platform a shot now after what Twitter has become...
November 17, 2024 at 5:54 PM