Assistant Professor, Birla Institute of Technology & Science.
http://brain.bits-hyderabad.ac.in/venkat/
Both these papers were led by @simran-ketha.bsky.social, an extraordinary Ph.D. student advised by me.
Watch out for many new (and yes, exciting!) results that we hope to share soon.
(N/N)
\END
More here: openreview.net/pdf?id=0MWW5...
(10/N)
But what if the key to defending against adversarial attacks lies within?
(9/N)
It turns out that with typical Deep Networks, a malicious adversary can change inputs (e.g., images) so that the Deep Net classifies them as something else. See the example image below from blog.mi.hdm-stuttgart.de/index.php/20...
(8/N)
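For a concrete sense of how such an attack works, here is a minimal sketch of the classic one-step Fast Gradient Sign Method (FGSM) in PyTorch. This is a generic illustration, not the specific attack from our paper; `model`, `x`, `y`, and `eps` are placeholder names.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # One-step FGSM: nudge each pixel in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    # Keep the perturbed image in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()
```

A tiny `eps` is often enough to flip the predicted class while the change remains imperceptible to a human.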
@simran-ketha.bsky.social, @mummani-nuthan.bsky.social & @niranjanrajesh.bsky.social
Here we considered the setting of adversarial attacks.
(7/N)
Indeed, this is reminiscent of some neuroscience experiments, where animals sometimes show significantly poorer behavioral performance than what one can linearly decode from a handful of their neurons.
Paper: openreview.net/pdf?id=9Uen9...
(6/N)
For every model tested, MASC beats the model's test accuracy on at least one layer, in many cases by a significant margin (see table below), and especially when there is a high degree of label corruption.
(5/N)
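To make the layer-wise comparison concrete, here is a hedged sketch of one simple class-geometry probe: a nearest-class-mean decoder on hidden-layer activations. This is my own illustrative stand-in, not the paper's MASC; `acts` and `labels` are assumed to hold one layer's activations and the corresponding labels.

```python
import numpy as np

def fit_class_means(acts, labels, num_classes):
    # Mean activation vector per class; acts has shape [n_points, n_features].
    return np.stack([acts[labels == c].mean(axis=0) for c in range(num_classes)])

def nearest_mean_predict(acts, means):
    # Assign each point to the class whose mean is closest in Euclidean distance.
    dists = np.linalg.norm(acts[:, None, :] - means[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

Fitting the means on training activations and scoring `nearest_mean_predict` on test activations, layer by layer, yields a per-layer accuracy that can be compared against the network's own test accuracy, as in the comparison described above.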
(4/N)
Specifically, we looked at the geometry of the class-wise internal representations and asked whether they were organized in a manner that allows for better generalization.
(3/N)
@simran-ketha.bsky.social
We consider the setting where the training data has label noise. That is, with some probability, each training point has its label shuffled.
Here, Deep Nets *memorize*, i.e., they are able to perfectly rote-learn the training data but do badly on unseen data.
(2/N)
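As a concrete sketch of this corruption (my own illustrative code, not the paper's exact procedure; one common variant replaces the label with a uniformly random class, another permutes labels across examples):

```python
import numpy as np

def corrupt_labels(labels, num_classes, p, seed=0):
    # With probability p, replace each training label with a uniformly random class.
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < p
    noisy[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return noisy
```

Even at p = 1.0, where every label is random, a sufficiently large Deep Net can still drive its training error to zero.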