Neehar Kondapaneni
@therealpaneni.bsky.social
Researching interpretability and alignment in computer vision.
PhD student @ Vision Lab Caltech
This work was done in collaboration with @oisinmacaodha and @PietroPerona. It builds on our earlier related work, RSVC (ICLR 2025). Check out our project page here: nkondapa.github.io/rdx-page/ and our preprint here: arxiv.org/abs/2505.23917.
Representational Difference Explanations (RDX)
Isolating and creating explanations of representational differences between two vision models.
nkondapa.github.io
July 8, 2025 at 3:43 PM
TLDR: RDX is a new method for isolating representational differences and leads to insights about subtle yet important differences between models. We test it on vision models, but the method is general and can be applied to any representational space.
July 8, 2025 at 3:43 PM
Due to these issues, we took a graph-based approach for RDX that does not use combinations of concept vectors. That means the explanation grid and the concept are equivalent -- what you see is what you get. This makes it much simpler to interpret RDX outputs.
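To make the "what you see is what you get" idea concrete, here is a minimal illustrative sketch (not the actual RDX algorithm): given pairwise similarity matrices from two models, group images that Model A ties together much more strongly than Model B, and show each group directly as an explanation grid. The function name, the threshold, and the connected-components grouping are all assumptions for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def graph_concept_groups(sim_a, sim_b, threshold=0.3):
    """Group images that Model A embeds as similar but Model B does not.

    sim_a, sim_b: (n, n) pairwise similarity matrices from the two models.
    The threshold and connected-components grouping are illustrative choices.
    """
    diff = sim_a - sim_b                      # where A's similarity exceeds B's
    adjacency = csr_matrix(diff > threshold)  # edge if the gap is large enough
    _, labels = connected_components(adjacency, directed=False)
    # Each group of image indices is shown directly as one explanation grid,
    # so the grid and the concept are one and the same.
    groups = [np.flatnonzero(labels == g) for g in np.unique(labels)]
    return [g for g in groups if len(g) > 1]
```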
July 8, 2025 at 3:43 PM
Even on a simple MNIST model, it is essentially impossible to anticipate that a weighted sum over these explanations results in this normal-looking five. Linear combinations of explanation grids are tricky to understand!
July 8, 2025 at 3:43 PM
Notably, we noticed two challenges with applying DL methods to model comparison. Explanations from DL methods are grids of images (for vision). These grids (1) can overly simplify the underlying concept and/or (2) must be interpreted as part of a linear combination of concepts.
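For context, a toy sketch of what a dictionary-learning explanation commits you to, here using NMF on made-up activations (data and variable names are illustrative, not from the paper): each image is reconstructed only as a weighted sum of concepts, so no single concept grid can be read in isolation.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy non-negative "activations" for 50 images with 32 features (illustrative data)
rng = np.random.default_rng(0)
acts = np.abs(rng.normal(size=(50, 32)))

# Dictionary learning: acts ≈ weights @ concepts
nmf = NMF(n_components=5, max_iter=500)
weights = nmf.fit_transform(acts)   # (50, 5) per-image concept weights
concepts = nmf.components_          # (5, 32) concept directions

# An image is explained only by the full weighted combination of concepts,
# never by one concept grid on its own.
recon = weights[0] @ concepts
rel_err = np.linalg.norm(acts[0] - recon) / np.linalg.norm(acts[0])
print(f"image 0 is a weighted sum of 5 concepts (relative error {rel_err:.2f})")
```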
July 8, 2025 at 3:43 PM
We compare RDX to several popular dictionary-learning (DL) methods (like SAEs and NMF) and find that the DL methods struggle. In the spotted wing (SW) comparison experiment, we find that NMF shows model similarities rather than differences.
July 8, 2025 at 3:43 PM
After demonstrating that RDX works when there are known differences, we compare models with unknown differences. For example, when comparing DINO and DINOv2, we find that DINOv2 has learned a color-based categorization of gibbons that is not present in DINO.
July 8, 2025 at 3:43 PM
We apply RDX to trained models with known differences and show that it isolates the core differences. For example, we compare model representations with and w/out a “spotted wing” (SW) concept and find that RDX shows that only one model groups birds according to this feature.
July 8, 2025 at 3:43 PM
Model comparison allows us to subtract away shared knowledge, revealing interesting concepts that explain model differences. Our method, RDX, isolates differences by answering the question: what does Model A consider similar that Model B does not?
nkondapa.github.io/rdx-page/
Representational Difference Explanations (RDX)
Isolating and creating explanations of representational differences between two vision models.
nkondapa.github.io
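As a rough sketch of that question in code (purely illustrative: the function name and the cosine-similarity pair ranking are assumptions, and RDX itself is graph-based rather than this simple ranking), one can look for image pairs that Model A embeds as similar while Model B keeps them apart.

```python
import numpy as np

def rank_difference_pairs(feats_a, feats_b, k=5):
    """Find image pairs that Model A embeds as similar but Model B does not.

    feats_a, feats_b: (n_images, d) features from the two models
    (hypothetical inputs; any embedding extraction works).
    """
    def cosine_sim(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        return x @ x.T

    diff = cosine_sim(feats_a) - cosine_sim(feats_b)  # A-similarity minus B-similarity
    diff = np.triu(diff, k=1)                         # keep each pair once, drop the diagonal
    order = np.argsort(diff, axis=None)[::-1]         # largest gaps first
    return np.column_stack(np.unravel_index(order[:k], diff.shape))

rng = np.random.default_rng(0)
pairs = rank_difference_pairs(rng.normal(size=(100, 64)), rng.normal(size=(100, 64)))
print(pairs)  # top-5 (i, j) image index pairs A groups together but B does not
```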
July 8, 2025 at 3:43 PM
The poster will actually be presented on Saturday at 10am (Singapore time). Please ignore the previous time.
April 24, 2025 at 3:34 PM
If you’re attending ICLR, stop by our poster on April 25 at 3PM (Singapore time).
I’ll also be presenting a workshop poster pushing further in this direction at the Bi-Align Workshop: bialign-workshop.github.io#/
April 11, 2025 at 4:11 PM
We found these unique and important concepts to be fairly complex, requiring deep analysis. We used ChatGPT-4o to analyze the concept collages and found that it gives detailed and clear explanations of the differences between models. More examples here -- nkondapa.github.io/rsvc-page/
April 11, 2025 at 4:11 PM
We then look at “in-the-wild” models. We compare ResNets and ViTs trained on ImageNet. We measure concept importance and concept similarity. Do models learn unique and important concepts? Yes, sometimes they do!
April 11, 2025 at 4:11 PM
We first show this approach can recover known differences. We train Model 1 to use a pink square to make classification decisions and Model 2 to ignore it. Our method, RSVC, isolates this difference.
April 11, 2025 at 4:11 PM
We tackle this question by (i) extracting concepts for each model, (ii) using one model to predict the other’s concepts, and (iii) measuring the quality of the prediction.
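A minimal sketch of those three steps, assuming NMF-extracted concepts and a linear probe as the cross-model predictor (function and variable names are placeholders, not the released RSVC code):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def concept_predictability(acts_a, acts_b, n_concepts=10):
    """(i) extract concepts from Model A, (ii) predict their scores from
    Model B's activations, (iii) score the prediction quality.

    acts_a, acts_b: non-negative activation matrices (n_images, d_a / d_b),
    e.g. penultimate-layer activations on a shared probe set (hypothetical inputs).
    """
    # (i) concept extraction: NMF factorizes activations into per-image concept scores
    concepts_a = NMF(n_components=n_concepts, max_iter=500).fit_transform(acts_a)

    # (ii) predict Model A's concept scores from Model B's activations
    train, test = slice(0, len(acts_a) // 2), slice(len(acts_a) // 2, None)
    probe = LinearRegression().fit(acts_b[train], concepts_a[train])
    pred = probe.predict(acts_b[test])

    # (iii) per-concept prediction quality; low scores flag concepts
    # present in Model A that Model B does not encode
    return r2_score(concepts_a[test], pred, multioutput="raw_values")

rng = np.random.default_rng(0)
acts_a = np.abs(rng.normal(size=(200, 128)))  # toy non-negative activations
acts_b = np.abs(rng.normal(size=(200, 64)))
print(concept_predictability(acts_a, acts_b))
```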
April 11, 2025 at 4:11 PM
Great work! I’m curious what the reconstruction error is. Does the model’s behavior change significantly when using the reconstructed activations?
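For reference, one simple way to quantify both parts of that question is sketched below: relative reconstruction error plus the fraction of predictions that flip when reconstructed activations are patched in. All names and inputs here are hypothetical, supplied by whatever activation-patching pipeline is in use.

```python
import numpy as np

def reconstruction_report(acts, recon_acts, logits, recon_logits):
    """Report relative reconstruction error and how often predictions change.

    acts / recon_acts: original vs. reconstructed activations, shape (n, d).
    logits / recon_logits: model outputs with each set of activations patched in.
    """
    rel_err = np.linalg.norm(acts - recon_acts) / np.linalg.norm(acts)
    flipped = np.mean(logits.argmax(axis=1) != recon_logits.argmax(axis=1))
    return rel_err, flipped  # e.g. (0.12, 0.03) = 12% error, 3% label flips
```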
March 1, 2025 at 11:15 PM
I had 1/5 reviewers respond; does that put me in the “has discussion” bucket? Are you also checking the number of reviewers who respond?
November 28, 2024 at 4:13 PM