James Allingham
jamesallingham.bsky.social
Research Scientist @GoogleDeepMind | Organiser @DeepIndaba | Machine Learning PhD @CambridgeMLG | 🇿🇦
Thanks 🤩 Obsessing over TikZ is my guilt-free procrastination method 😂
December 5, 2024 at 12:51 PM
A big shoutout to all of my amazing collaborators who made this paper happen! @brunokm.bsky.social Shreyas Padhy, Javier Antoran, David Krueger, Richard Turner, Eric Nalisnick, and Jose Miguel Hernandez-Lobato.
December 5, 2024 at 9:45 AM
I'll keep this thread short, but if you're interested in chatting further, please get in touch or visit the poster at NeurIPS on Fri 13 Dec at 4:30 p.m. PST (East Exhibit Hall A-C #3710).

Here are a few more diagrams from the paper to tempt you!
December 5, 2024 at 9:45 AM
Excitingly, we can also use the symmetry information learned by our SGM to improve the data efficiency of standard deep generative models (e.g., VAEs).
December 5, 2024 at 9:45 AM
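As a rough illustration of the post above: one plausible way learned symmetry information could improve the data efficiency of a standard generative model is to augment a small training set with transformations sampled from the SGM's learned distribution. This is a hedged sketch only – the paper's exact mechanism may differ, and `rotate_points`, the toy dataset, and the Gaussian over angles are all my assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def rotate_points(points, theta):
    """Apply a 2D rotation by angle theta (radians) to an (N, 2) array."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

# Tiny toy "dataset" of 2D point clouds standing in for images.
data = [rng.normal(size=(5, 2)) for _ in range(3)]

def augment(dataset, n_copies, rng):
    """Augment each example with transformations sampled from a
    (hypothetical) learned distribution over angles ~ N(0, 0.5^2)."""
    out = list(dataset)
    for x in dataset:
        for _ in range(n_copies):
            out.append(rotate_points(x, rng.normal(0.0, 0.5)))
    return out

augmented = augment(data, n_copies=4, rng=rng)
print(len(augmented))  # 3 original + 3 * 4 augmented = 15
```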
Our SGM is also interpretable – we can inspect the distributions over transformations for any prototype, which tells us about our dataset and whether our SGM is learning reasonable things.

E.g., 9's and 6's can be rotated into each other, and 1's can be rotated 180 deg w/o change.
December 5, 2024 at 9:45 AM
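The kind of inspection described above could look something like the following sketch (not the paper's API – the mixture parameterisation and all names here are illustrative). For a digit like "1", a well-fit distribution over rotation angles should place mass near both 0 and 180 degrees, since rotating a "1" by 180 degrees leaves it roughly unchanged.

```python
import numpy as np

def mixture_pdf(theta, locs, scales, weights):
    """Density of a Gaussian mixture over rotation angles (radians)."""
    comps = [w * np.exp(-0.5 * ((theta - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
             for m, s, w in zip(locs, scales, weights)]
    return sum(comps)

# Hypothetical learned parameters for a "1" prototype: two modes,
# at 0 and at pi (180 degrees), reflecting the digit's symmetry.
locs, scales, weights = [0.0, np.pi], [0.1, 0.1], [0.5, 0.5]

# Inspecting the density reveals the symmetry the model has learned:
# high mass at 0 and pi, near-zero mass at intermediate angles.
for theta in [0.0, np.pi / 2, np.pi]:
    print(f"angle {theta:.2f}: density {mixture_pdf(theta, locs, scales, weights):.3f}")
```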
We provide experimental evidence that our SGM can learn prototypes and the distributions over transformation parameters such that the true data distribution is recovered. Here we show observations from the test set (top), prototypes (mid), and resampled observations (bot).
December 5, 2024 at 9:45 AM
We introduce our symmetry-aware generative model (SGM), in which an observation is generated by transforming an invariant latent "prototype", and a simple algorithm for learning the protos and transformation params.

paper: arxiv.org/abs/2403.01946
code: github.com/cambridge-ml...
December 5, 2024 at 9:45 AM
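The generative process described in the post above – an observation produced by applying a sampled transformation to an invariant prototype – can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the prototype is a tiny 2D point set, the transformation family is rotations, and the names (`eta_loc`, `eta_scale`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prototype: an invariant latent, here a small canonical 2D "shape".
prototype = np.array([[0.0, 1.0], [0.5, -0.5], [-0.5, -0.5]])

# Per-prototype distribution over transformation parameters,
# here a Gaussian over a rotation angle eta (radians).
eta_loc, eta_scale = 0.0, 0.3

def sample_observation(prototype, rng):
    """Sample x = T_eta(prototype) with eta ~ N(eta_loc, eta_scale^2)."""
    eta = rng.normal(eta_loc, eta_scale)     # sample transformation params
    c, s = np.cos(eta), np.sin(eta)
    rotation = np.array([[c, -s], [s, c]])   # T_eta: a rotation matrix
    return prototype @ rotation.T            # transformed observation

x = sample_observation(prototype, rng)
print(x.shape)  # (3, 2): the prototype's points, rotated by a random angle
```

Learning would then amount to fitting the prototypes and the parameters of the transformation distribution to data; the paper gives the actual algorithm.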