Especially interested in model representations.
@ema-ridopoco.bsky.social and @andreadittadi.bsky.social will be presenting a poster in San Diego, and Luigi and I will be presenting at EurIPS (eurips.cc) in Copenhagen, so come on by! 😄
We also show that it is possible to define a metric between probability distributions and a measure of representational dissimilarity such that, whenever two models' distributions are close under this metric, their representations are guaranteed to be similar.
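The thread doesn't spell out which metric and dissimilarity measure the paper uses, so as a generic stand-in here is linear CKA, a standard representational similarity index that is invariant to orthogonal transformations and isotropic scaling (a minimal sketch, not the paper's definition):

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA between representation matrices X, Y of shape (n, d).
    # Invariant to orthogonal transformations and isotropic rescaling.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 16))
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
print(linear_cka(Z, Z @ Q))           # 1.0: a rotated copy is maximally similar
print(linear_cka(Z, np.tanh(Z**2)))   # ~0: a strongly non-linear transform is not
```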
The two models agree on their prediction for the highest-likelihood label, but they disagree on the likelihood ranking of the remaining labels. While this has a negligible effect on the KL divergence, it means the relation between their representations is non-linear.
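A quick numeric illustration of that point (the probabilities below are made up, not taken from the paper): when almost all the mass sits on the top label, reversing the ranking of the tail labels barely moves the KL divergence.

```python
import numpy as np

# Two output distributions over 5 labels (hypothetical numbers).
# Both put 96% of the mass on label 0, but rank the tail labels oppositely.
p = np.array([0.96, 0.016, 0.012, 0.008, 0.004])
q = np.array([0.96, 0.004, 0.008, 0.012, 0.016])

kl = np.sum(p * np.log(p / q))
print(p.argmax() == q.argmax())   # True: same top prediction
print(f"KL(p || q) = {kl:.3f}")   # ~0.018: negligible
```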
We prove that a small KL divergence between models is not enough to guarantee similar representations. Here is an example of how to construct two models with a small KL divergence but whose representations are far from being linear transformations of each other.
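One way to realize such a construction, under assumptions of my own (linear readouts that ignore part of the representation; the paper's actual example may differ): non-linearly scramble the unused coordinates, so the two output distributions coincide exactly (KL = 0) while no linear map sends one representation to the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 8, 3  # samples, representation dim, number of labels

Z1 = rng.normal(size=(n, d))  # representations of model 1
# Model 2: identical on the first 3 coordinates (the only ones the readout
# below uses) but a non-linear scramble of the remaining coordinates.
Z2 = np.concatenate([Z1[:, :3], np.tanh(Z1[:, 3:] ** 2)], axis=1)

# A linear readout that ignores coordinates 3..7; both models share it,
# so their logits, and hence their softmax distributions, coincide exactly.
W = np.zeros((d, k))
W[:3, :] = np.eye(3)
assert np.allclose(Z1 @ W, Z2 @ W)  # identical outputs: KL = 0

# Yet Z2 is far from any linear transformation of Z1: the best linear fit
# leaves a large residual on the scrambled coordinates.
T, *_ = np.linalg.lstsq(Z1, Z2, rcond=None)
print(np.linalg.norm(Z1 @ T - Z2) / np.linalg.norm(Z2))  # ~0.6, far from 0
```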
We study when and why representations learned by different neural networks are similar from the perspective of identifiability theory, which suggests that a measure of representational similarity should be invariant to transformations that leave the model distribution unchanged.
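A minimal sketch of that invariance in the simplest case of a linear readout (an illustrative assumption, not the paper's general setting): composing the representation with any invertible linear map and absorbing its inverse into the readout leaves the model distribution untouched, so a sensible similarity measure shouldn't distinguish the two.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 6))    # representations of one model
W = rng.normal(size=(6, 3))    # its linear readout

A = rng.normal(size=(6, 6))    # any invertible linear map
Z2 = Z @ A                     # transformed representations
W2 = np.linalg.inv(A) @ W      # readout absorbs the inverse

# Same logits, hence the same softmax distribution over labels:
assert np.allclose(Z @ W, Z2 @ W2)
```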
Hope to see you there! eurips.cc/ellis.
- Open-topic PhD positions: express your interest through ELLIS by 31 October 2025, start in Autumn 2026: ellis.eu/news/ellis-p...
#NLProc #XAI