Venkat Ramaswamy
@venkramaswamy.bsky.social
Theoretical Neuroscience, Deep Learning, & the space between.
Assistant Professor, Birla Institute of Technology & Science.

http://brain.bits-hyderabad.ac.in/venkat/
Thanks Krishna!
October 26, 2025 at 1:01 PM
I'll stop here.

Both these papers were led by @simran-ketha.bsky.social, an extraordinary Ph.D. student advised by me.

Watch out for many new (and yes, exciting!) results that we hope to share soon.
(N/N)
\END
ALT: a little girl is sliding down an orange slide with the words ok bye written on it .
media.tenor.com
October 24, 2025 at 5:25 PM
We asked if we could similarly leverage the subspace geometry to obtain robustness to adversarial attacks. We built variants of MASC that ended up being up to ~3x more robust than the model itself, even though both use the same underlying network as their substrate.

More here: openreview.net/pdf?id=0MWW5...

(10/N)
October 24, 2025 at 5:25 PM
Defending against adversarial attacks has become important. Effective defenses usually involve adversarial training, which is expensive, or other methods that modify the standard training paradigm.

But what if the key to defending against adversarial attacks lies within?
(9/N)
ALT: a cartoon of a monkey sitting on a rock meditating
media.tenor.com
October 24, 2025 at 5:20 PM
So, what’s an adversarial attack?

It turns out that, for typical Deep Networks, a malicious adversary can make small changes to an input, e.g. an image, so that the Deep Net classifies it as something else. See the example image below from blog.mi.hdm-stuttgart.de/index.php/20... (and the sketch of one classic attack after this post).

(8/N)
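For concreteness, here is a minimal sketch of one classic attack, FGSM (Goodfellow et al., 2015), in PyTorch. The model, inputs, and epsilon here are placeholders, and this is not necessarily the attack used in the papers in this thread; it simply illustrates how a tiny, structured perturbation can flip a Deep Net's prediction.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    # Fast Gradient Sign Method: take one step of size eps in the sign of the
    # input gradient of the loss, i.e. the direction that most increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range

# To the eye, x_adv looks just like x, yet model(x_adv).argmax(1) often differs from y.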
October 24, 2025 at 5:12 PM
Paper #2 w/ @simran-ketha.bsky.social, @mummani-nuthan.bsky.social & @niranjanrajesh.bsky.social

Here we considered the setting of adversarial attacks.

(7/N)
October 24, 2025 at 5:12 PM
We don’t know why this works well.

Indeed, it is reminiscent of Neuroscience experiments in which animals sometimes show significantly poorer behavioral performance than what one can linearly decode from a handful of their neurons.

Paper: openreview.net/pdf?id=9Uen9...

(6/N)
October 24, 2025 at 5:12 PM
Surprisingly, we find that this works extraordinarily well.
For every model tested, MASC beats the model's test accuracy on at least one layer, in many cases by a significant margin (see table below), especially when there is a high degree of label corruption.
(5/N)
October 24, 2025 at 5:12 PM
More technically, we fitted one subspace per class to the *corrupted* layerwise outputs. For each incoming point, we asked which subspace it is closest to in the angular sense, and predicted that class's label for the point. We call this the MASC classifier (see the sketch below).

(4/N)
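For concreteness, a minimal sketch of a subspace-angle classifier in this spirit: per-class subspaces fitted by SVD on each class's features at a given layer, and prediction by the smallest angle between a point and each subspace. The subspace dimension k and the use of subspaces through the origin are assumptions of this sketch, not necessarily the paper's exact construction.

import numpy as np

def fit_class_subspaces(features, labels, k):
    # Fit a k-dimensional linear subspace per class via SVD of that class's
    # (noisy-label) layerwise features; rows of `features` are data points.
    bases = {}
    for c in np.unique(labels):
        X = features[labels == c]
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases[c] = Vt[:k].T  # (d, k) orthonormal basis of the best-fit subspace
    return bases

def masc_style_predict(x, bases):
    # Predict the class whose subspace makes the smallest angle with x,
    # i.e. the largest cosine between x and its projection onto that subspace.
    best_c, best_cos = None, -1.0
    for c, B in bases.items():
        proj = B @ (B.T @ x)
        cos_angle = np.linalg.norm(proj) / (np.linalg.norm(x) + 1e-12)
        if cos_angle > best_cos:
            best_c, best_cos = c, cos_angle
    return best_c

# feats: (N, d) activations of one layer on the noisy-label training set; y: its labels.
# bases = fit_class_subspaces(feats, y, k=20)
# label = masc_style_predict(test_point_features, bases)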
October 24, 2025 at 5:08 PM
We looked into the internals of Deep Networks to see if we could extract much better generalization (i.e., accuracy on unseen data).

Specifically, we looked at the geometry of class-wise internal representations and whether it was organized in a manner that allowed for better generalization.

(3/N)
October 24, 2025 at 5:08 PM
Paper #1 w/ @simran-ketha.bsky.social

We consider the setting where the training data has label noise. That is, with some probability, each training point has its label shuffled (see the sketch below).

Here, Deep Nets *memorize*, i.e. they are able to perfectly rote-learn the training data but do badly on unseen data.
(2/N)
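For concreteness, a minimal sketch of the label-noise setup described above. Here each selected label is redrawn uniformly at random, which is one common convention; the corruption probability, dataset size, and the paper's exact shuffling scheme are assumptions of the sketch.

import numpy as np

def corrupt_labels(labels, num_classes, p, seed=0):
    # With probability p, replace a training label with one drawn uniformly at random.
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < p
    noisy[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return noisy

# Example: corrupt labels of a 10-class dataset with probability 0.4.
clean = np.random.default_rng(1).integers(0, 10, size=50_000)
noisy = corrupt_labels(clean, num_classes=10, p=0.4)
print("fraction of labels actually changed:", float((clean != noisy).mean()))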
ALT: a group of children are sitting at their desks in a classroom covering their faces with their hands .
media.tenor.com
October 24, 2025 at 5:08 PM
Ian, how about a bit of intellectual honesty in posting the full official statement by India, so your readers can make their own judgement about the rationale for India's position?
August 5, 2025 at 4:42 AM
Reposted by Venkat Ramaswamy
Exendin-4, the first GLP1 receptor agonist on the market, came from the Gila monster's venom, which contains a homolog of mammalian GLP1. Without it, we wouldn't have had the current medical revolution that Ozempic etc. wrought.
Isolation and characterization of exendin-4, an exendin-3 analogue, from Heloderma suspectum venom. Further evidence for an exendin receptor on dispersed acini from guinea pig pancreas.
The recent identification in Heloderma horridum venom of exendin-3, a new member of the glucagon superfamily that acts as a pancreatic secretagogue, prompted a search for a similar peptide in Heloderm...
www.jbc.org
June 9, 2025 at 4:05 AM