Currently migrating from Twitter (@teuber_dev)
www.teuber.dev
I also had the opportunity to present my #tacas paper on verifying behavioral equivalence of neural networks 😃
I'm obviously nitpicking here, but I think it's very meta that quite a few papers on Neural Network Verification are right around the "decision boundary" of the Adversarial Robustness cluster...
Here, we analyzed NNs from prior work and found numerous concrete safety problems -- but see for yourself (any plane trajectory in the red region is BAD!):
Verification of the NN is then mirrored by a proof of infinite-time safety in dL.
Our (now accepted 🥳) #NeurIPS paper introduces:
- Reusing control theory for NN verification
- Verifying *nonlinear arithmetic* specs on NNs
This guarantees your NN won't behave like this (1/12):