Carl Allen
@carl-allen.bsky.social
Laplace Junior Chair, Machine Learning
ENS Paris. (prev ETH Zurich, Edinburgh, Oxford..)

Working on mathematical foundations/probabilistic interpretability of ML (what NNs learn🤷‍♂️, disentanglement🤔, king-man+woman=queen?👌…)
Softmax is also the exact formula for a label distribution p(y|x) under Bayes rule if class distributions p(x|y) have exponential family form (equal covariance if Gaussian), so it can have a deeper rationale in a probabilistic model of the data (than a one-hot relaxation).
January 17, 2025 at 9:57 AM
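A minimal numerical sketch of the point above (the means, priors and dimensions are illustrative, not from the post): for Gaussian class-conditionals with a shared covariance, Bayes' rule gives exactly a softmax over logits that are linear in x.

```python
# Hypothetical demo: Bayes posterior p(y|x) for shared-covariance Gaussian
# class-conditionals equals a softmax of linear logits.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
K, d = 3, 2
mus = rng.normal(size=(K, d))        # class means (assumed for the demo)
Sigma = np.eye(d)                    # shared covariance
priors = np.array([0.2, 0.3, 0.5])   # class priors p(y)

x = rng.normal(size=d)               # a test point

# Bayes' rule directly: p(y=k|x) ∝ p(x|y=k) p(y=k)
lik = np.array([multivariate_normal.pdf(x, mus[k], Sigma) for k in range(K)])
post_bayes = lik * priors / np.sum(lik * priors)

# Softmax of linear logits: w_k = Sigma^{-1} mu_k,
# b_k = -0.5 mu_k^T Sigma^{-1} mu_k + log p(y=k)
Sinv = np.linalg.inv(Sigma)
logits = mus @ Sinv @ x - 0.5 * np.sum(mus @ Sinv * mus, axis=1) + np.log(priors)
post_softmax = np.exp(logits - logits.max())
post_softmax /= post_softmax.sum()

print(np.allclose(post_bayes, post_softmax))  # True: softmax recovers the Bayes posterior
```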
Sorry, more a question re the OP. Just looking to understand the context.
December 29, 2024 at 4:38 AM
Can you give some examples of the kind of papers you’re referring to?
December 29, 2024 at 12:44 AM
And of course this all builds on the seminal work of @wellingmax.bsky.social, @dpkingma.bsky.social, Irina Higgins, Chris Burgess et al.
December 19, 2024 at 3:03 PM
sorry, @benmpoole.bsky.social (fat fingers..)
December 18, 2024 at 5:07 PM
Any constructive feedback, discussion or future collaboration more than welcome!

Full paper: arxiv.org/pdf/2410.22559
December 18, 2024 at 4:58 PM
Building on this, we clarify the connection between diagonal covariance and Jacobian orthogonality and explain how disentanglement follows, ultimately defining disentanglement as factorising the data distribution into statistically independent components.
December 18, 2024 at 4:58 PM
We focus on VAEs, used as building blocks of SOTA diffusion models. Recent works by Rolinek et al. and Kumar & @benmpoole.bsky.social suggest that disentanglement arises because diagonal posterior covariance matrices promote column-orthogonality in the decoder’s Jacobian matrix.
December 18, 2024 at 4:58 PM
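A hypothetical sketch of the quantity in question (the decoder architecture and dimensions are stand-ins, not from the cited works): compute a decoder's Jacobian at a latent point and measure how far its Gram matrix is from diagonal, i.e. how close its columns are to orthogonal.

```python
# Illustrative only: measure column-orthogonality of a (stand-in) decoder's Jacobian.
import torch

torch.manual_seed(0)
latent_dim, data_dim = 4, 16
decoder = torch.nn.Sequential(          # stand-in decoder g: z -> x
    torch.nn.Linear(latent_dim, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, data_dim),
)

z = torch.randn(latent_dim)
J = torch.autograd.functional.jacobian(decoder, z)   # shape (data_dim, latent_dim)

G = J.T @ J                              # Gram matrix of the Jacobian's columns
off_diag = G - torch.diag(torch.diag(G))
print("column-orthogonality error:", off_diag.norm().item() / G.norm().item())
# ≈ 0 would mean the columns of J are orthogonal, i.e. each latent coordinate
# moves the output along a locally independent direction.
```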
While disentanglement is often linked to different models whose popularity may ebb & flow, we show that the phenomenon itself relates to the data’s latent structure and is more fundamental than any model that may expose it.
December 18, 2024 at 4:58 PM
Maybe give it time. Rome, a day, etc..
December 18, 2024 at 10:33 AM
Yup sure, the curve has to kick in at some point. I guess “law” sounds cooler than linear-ish graph. Maybe it started out as an acronym “Linear for A While”.. 🤷‍♂️
December 15, 2024 at 1:57 PM
I guess as complexity increases (math->phys->chem->bio->…) it’s inevitable that “theory-driven” tends to “theory-inspired”. ML seems a bit tangential tho since experimenting is relatively consequence-free and you don’t need to deeply theorise, more iterate. So theory is deprioritised and lags, for now.
December 15, 2024 at 8:16 AM
But doesn’t theory follow empirics in all of science.. until it doesn’t? Except that in most sciences you can’t endlessly experiment for cost/risk/melting your face off reasons. But ML keeps going, making it a tricky moving/expanding target to try to explain/get ahead of.. I think it’ll happen tho.
December 14, 2024 at 6:47 PM
The last KL is nice as it’s clear that the objective is optimised when the model and posteriors match as well as possible. The earlier KL is nice as it contains the data distribution and all explicitly modelled distributions, so maximising ELBO can be seen intuitively as bringing them all “in line”.
December 5, 2024 at 3:41 PM
I think an intuitive view is that:
- max likelihood minimises
KL[p(x)||p’(x)] (p’(x)=model)

- max ELBO minimises
KL[p(x)q(z|x) || p’(x|z)p’(z)]
So it brings together 2 models of the joint (where p’(x)=\int p’(x|z)p’(z) dz).

Can rearrange in diff ways, eg as
KL[p(x)q(z|x) || p’(x)p’(z|x)]
(or as in VAE)
December 5, 2024 at 3:36 PM
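A worked version of the identities in the two posts above, writing p'(x,z) = p'(x|z)p'(z) for the model joint and H[p(x)] for the (constant) entropy of the data distribution:

```latex
% Worked form of the identities above (requires amsmath).
\begin{align*}
\mathbb{E}_{p(x)}\big[\mathrm{ELBO}(x)\big]
  &= \mathbb{E}_{p(x)q(z|x)}\big[\log p'(x|z)p'(z) - \log q(z|x)\big] \\
  &= -\,\mathrm{KL}\big[\,p(x)q(z|x) \,\|\, p'(x|z)p'(z)\,\big] - H[p(x)] \\
  &= -\,\mathrm{KL}\big[\,p(x) \,\|\, p'(x)\,\big]
     - \mathbb{E}_{p(x)}\,\mathrm{KL}\big[\,q(z|x) \,\|\, p'(z|x)\,\big]
     - H[p(x)].
\end{align*}
% Up to the constant H[p(x)], maximising the ELBO therefore minimises the joint
% KL, which splits into the max-likelihood term KL[p(x)||p'(x)] plus the
% posterior-mismatch term KL[q(z|x)||p'(z|x)] ("the last KL" above).
```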
Ha me too, exactly that..
December 3, 2024 at 10:36 PM
(and here it comes.. ;) ). The latter view of classification is the motivation behind this work: scholar.google.co.uk/citations?vi...
Variational classification: A probabilistic generalization of the softmax classifier
SZ Dhuliawala, M Sachan, C Allen, Transactions on Machine Learning Research, 2024 - Cited by 10
December 2, 2024 at 8:29 AM
In the binary case, both look the same: sigmoid might be a good model of how y becomes more likely (in future) as x increases. But sigmoid is also 2-case softmax so models Bayes rule for 2 classes of (exp-fam) x|y. The causality between x and y is very different, which "p(y|x)" doesn't capture.
December 2, 2024 at 8:26 AM
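Spelling out the binary case in the reply above, a short standard derivation (not taken verbatim from the thread):

```latex
% Binary case: Bayes' rule for two classes is a sigmoid of the log-odds,
% i.e. the 2-class softmax.
\begin{align*}
p(y{=}1 \mid x)
  &= \frac{p(x \mid y{=}1)\,p(y{=}1)}{p(x \mid y{=}1)\,p(y{=}1) + p(x \mid y{=}0)\,p(y{=}0)}
   = \frac{1}{1 + e^{-a(x)}} = \sigma\big(a(x)\big), \\
a(x) &= \log\frac{p(x \mid y{=}1)\,p(y{=}1)}{p(x \mid y{=}0)\,p(y{=}0)},
\end{align*}
% and for exponential-family class-conditionals p(x|y), a(x) is linear in the
% sufficient statistics of x (affine in x itself for equal-covariance Gaussians).
```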
I think this comes down to the model behind p(x,y). If features of x cause y, e.g. aspects of a website (x) -> clicks (y); age/health -> disease, then p(y|x) is a (regression) fn of x. But if x|y is a distrib'n over instances of a class y (e.g. cats), then p(y|x) is given by Bayes rule (squint at softmax).
December 2, 2024 at 8:20 AM
Pls add me thanks!
November 29, 2024 at 3:53 PM
Could you pls add me? Thanks!
November 26, 2024 at 7:13 AM
Yep, could maybe work. The accepted-to-RR bar would need to be high to maintain value, but “shininess” test cld be deferred. Think there’s still a separate issue of “highly irresponsible” reviews that needs addressing either way (as at #CVPR2025). We can’t just whinge & do absolutely nothing!
November 24, 2024 at 11:00 PM