Andrea Dittadi
@andreadittadi.bsky.social
postdoc at Helmholtz AI & TUM | diffusion/flows & (causal) representation learning | previously DTU Copenhagen, MPI Tübingen, Microsoft Research, Amazon

addtt.github.io
This was a very interesting and insightful project led by @beatrixmgn.bsky.social, with invaluable help from identifiability expert Luigi Gresele. This is such a promising area, and there's so much more to explore. Come chat with us at our poster at the @unireps.bsky.social workshop!

end/
December 6, 2024 at 4:06 PM
2) The representations of models with different likelihoods are related by a linear transformation, plus an "error" term that we found depends on differences of log likelihoods. This error can be substantial even when the models' losses are arbitrarily close to optimal!

4/
December 6, 2024 at 4:06 PM
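To make "related by a linear transformation, plus an error" concrete, here is a minimal synthetic sketch (not the paper's setup; h1, h2, A_true, and the noise level are invented for illustration) of how one could probe the linear relation between two models' representations with ordinary least squares:

```python
import numpy as np

# Synthetic stand-ins for the representations of the same inputs
# under two trained models (purely illustrative data).
rng = np.random.default_rng(0)
n, d = 1000, 16

h1 = rng.normal(size=(n, d))              # representations from model 1
A_true = rng.normal(size=(d, d))          # hidden linear map (synthetic)
b_true = rng.normal(size=d)
noise = 0.05 * rng.normal(size=(n, d))    # stands in for the "error" term
h2 = h1 @ A_true.T + b_true + noise       # representations from model 2

# Fit an affine map h2 ≈ h1 A^T + b by least squares.
X = np.hstack([h1, np.ones((n, 1))])      # append a bias column
coef, *_ = np.linalg.lstsq(X, h2, rcond=None)
residual = h2 - X @ coef

# A small relative residual means the two sets of representations
# are close to being related by an affine transformation.
rel_err = np.linalg.norm(residual) / np.linalg.norm(h2 - h2.mean(0))
print(f"relative residual: {rel_err:.3f}")
```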
The main issue is that the assumption of equal likelihood rarely holds in practice.

This is because the loss being optimized is the expected negative log likelihood over the data distribution, and:

1) Equal loss does not imply equal likelihood.

3/
December 6, 2024 at 4:06 PM
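To spell out the distinction in 1), with notation introduced here just for concreteness: the training objective is

\[
  \mathcal{L}(\theta) \;=\; \mathbb{E}_{(x, y) \sim p_{\mathrm{data}}}\!\big[ -\log p_\theta(y \mid x) \big],
\]

so two models can satisfy \(\mathcal{L}(\theta_1) = \mathcal{L}(\theta_2)\) while their pointwise likelihoods \(p_{\theta_1}(y \mid x)\) and \(p_{\theta_2}(y \mid x)\) still differ on parts of the input space, as long as the differences average out under the data distribution.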
Current identifiability results tell us that, for a broad class of models (including e.g. classifiers and autoregressive language models), two models defining the same likelihood p(y|x) have representations that are equal up to a linear transformation.

2/
December 6, 2024 at 4:06 PM
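Schematically (notation introduced here, not taken from the thread): writing \(h_1(x)\) and \(h_2(x)\) for the representations learned by two such models, the identifiability statement has the form

\[
  p_{\theta_1}(y \mid x) = p_{\theta_2}(y \mid x) \;\;\text{for all } x, y
  \quad \Longrightarrow \quad
  h_2(x) = A\, h_1(x) \;\;\text{for all } x,
\]

for some linear map \(A\) (invertible, in the usual statements of these results).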