Mahalakshmi Sabanayagam
@maha-saba.bsky.social
PhD student @ TU Munich. Interested in theory of deep learning and graph based learning.

web: https://mahalakshmi-sabanayagam.github.io
To hear all about the work at @iclr-conf.bsky.social:

Poster: 25.04 from 3:00 to 5:30 pm
Talk: 27.04 at 3:30 pm in the VerifAI workshop verifai-workshop.github.io/schedule/
Paper: arxiv.org/abs/2412.00537

Looking forward to meeting and discussing with fellow researchers in Singapore! 🙂 [6/6]
April 24, 2025 at 4:44 PM
⏩ Our experimental results on (graph) neural networks in semi-supervised node classification uncover a novel phenomenon: a plateauing of robustness for intermediate perturbations (for more details and other interesting results, see our paper arxiv.org/abs/2412.00537). [5/6]
🥳 The resulting certificate in both cases is a Mixed-Integer Linear Program (MILP) that scales with the number of labelled samples. This makes it well suited for semi-supervised learning, which naturally has a low labelling rate. [4/6]
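A toy sketch of what such a MILP can look like (an illustration under simplifying assumptions, not the paper's formulation): in the NTK regime the prediction is linear in the training labels, f = c·y, so the worst-case margin under a budget of at most k label flips becomes a small MILP over binary flip indicators b_i (flipping y_i ↦ y_i·(1 − 2·b_i)). The coefficients below are hypothetical.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical label coefficients of a labels-linear predictor f = c @ y,
# and clean +/-1 training labels.
c = np.array([0.9, -0.4, 0.2, 0.7])
y0 = np.array([1.0, -1.0, 1.0, 1.0])
k = 1  # adversary's label-flip budget

# With flip indicators b in {0,1}^n:  f(b) = c@y0 - 2*(c*y0)@b.
# Minimise f over b subject to sum(b) <= k  ->  a tiny MILP.
obj = -2 * c * y0
budget = LinearConstraint(np.ones((1, len(c))), 0, k)
res = milp(c=obj, integrality=np.ones(len(c)),
           bounds=Bounds(0, 1), constraints=[budget])

worst_margin = c @ y0 + res.fun  # smallest achievable margin
certified = worst_margin > 0     # prediction provably unchanged under k flips
```

Here the adversary's best move is to flip the label with the largest influence c_i·y0_i; if the margin stays positive, the prediction is certified against any single label flip.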
🔥 How do we handle it? We leverage the Neural Tangent Kernel (NTK), as it describes the training dynamics of wide NNs under certain conditions, and develop exact sample-wise and collective certification strategies. [3/6]
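A minimal sketch of the key property this builds on (illustrative only; an RBF kernel stands in for the true NTK, and the data is synthetic): in the NTK regime the trained wide network's prediction is kernel regression on the training labels, and is therefore *linear* in those labels — which is what lets label corruptions be reasoned about exactly.

```python
import numpy as np

def kernel(A, B, gamma=1.0):
    # RBF kernel as a stand-in for the NTK.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))   # labelled training inputs
y = np.sign(X[:, 0])          # +/-1 labels
x_test = rng.normal(size=(3, 2))

# NTK-regime prediction: f(x) = K(x, X) @ (K(X, X) + reg*I)^{-1} @ y,
# i.e. a fixed linear map applied to the label vector y.
K = kernel(X, X) + 1e-3 * np.eye(len(X))
alpha = np.linalg.solve(K, y)
f_test = kernel(x_test, X) @ alpha
```

Because f_test depends linearly on y, the effect of any set of label flips on a prediction can be written in closed form, which is what makes exact certification tractable.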
🤔 All existing exact certificates for neural nets target test-time robustness. Here, we ask: what if the training data, e.g. the labels, are corrupted?

🤯 Challenge: this brings the additional complexity of handling the training dynamics of NNs during the certification of predictions. [2/6]