George Cazenavette
@gcazenavette.bsky.social
Computer Vision PhD at MIT

georgecazenavette.github.io/
This work was a lot of fun and was done in collaboration with my advisors Antonio Torralba and @vincentsitzmann.bsky.social. Please see our paper for more details and results.

We hope this work inspires future research in this area!

And please share your favorite images from our gallery!!

6/6
November 21, 2025 at 4:01 AM
We distill images with different models, each yielding distinct styles that hint at how these models "see."

Please see our gallery to browse all our images from many datasets (incl. ImageNet, Stanford Dogs, CUB 200, Flowers-102, Food-101): linear-gradient-matching.github.io/gallery/

5/6
November 21, 2025 at 4:01 AM
Our method outperforms all real-image baselines on the standard ImageNet benchmarks and shines even brighter on datasets with fine-grained classes!

The learned images seem to contain more discriminative features than any single real image, leading to a better classifier.

4/6
November 21, 2025 at 4:01 AM
We directly learn our synthetic images such that they induce gradients similar to those of the real data when training a linear classifier.

Our meta loss is simply the distance between these gradients!
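
A minimal PyTorch sketch of that meta loss, assuming a frozen feature extractor `encoder` and a linear weight matrix `w` with requires_grad=True (the names and the exact gradient distance are illustrative assumptions, not the paper's released code):

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(encoder, w, real_x, real_y, syn_x, syn_y):
    # Gradient of the linear-probe loss induced by a batch of real images
    # (treated as a constant target, so no graph is kept).
    real_feats = encoder(real_x).detach()
    real_loss = F.cross_entropy(real_feats @ w.t(), real_y)
    g_real = torch.autograd.grad(real_loss, w)[0].detach()

    # Gradient induced by the synthetic images. create_graph=True keeps the
    # double-backward path so the meta loss can update the images themselves.
    syn_loss = F.cross_entropy(encoder(syn_x) @ w.t(), syn_y)
    g_syn = torch.autograd.grad(syn_loss, w, create_graph=True)[0]

    # Meta loss: distance between the two gradients (MSE here as a stand-in).
    return F.mse_loss(g_syn, g_real)
```

Minimizing this with respect to the synthetic pixels (e.g., with Adam, over many sampled real batches and classifier weights) pushes the synthetic set to mimic the real data's training signal.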

Critically, we also parameterize our images as pyramids as a form of implicit regularization.
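
One way to realize such a pyramid parameterization (a sketch; the number of levels and the initialization scale are assumptions): the image is the sum of learnable tensors at several resolutions, upsampled to full size, so the coarse levels bias optimization toward smooth, large-scale structure.

```python
import torch
import torch.nn.functional as F

class PyramidImage(torch.nn.Module):
    """Image = sum of learnable multi-resolution levels (implicit regularizer)."""
    def __init__(self, size=224, channels=3, levels=5):
        super().__init__()
        self.size = size
        self.levels = torch.nn.ParameterList([
            torch.nn.Parameter(0.1 * torch.randn(1, channels, size // 2**k, size // 2**k))
            for k in range(levels)
        ])

    def forward(self):
        out = 0
        for lvl in self.levels:
            out = out + F.interpolate(lvl, size=(self.size, self.size),
                                      mode="bilinear", align_corners=False)
        return out
```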

3/6
November 21, 2025 at 4:01 AM
While prior work focuses on distilling images for training models from scratch, that task becomes infeasible at extremely small support sizes.

Instead, we focus on learning images to train *linear classifiers* on top of pre-trained models, a more relevant task in the era of foundation models.
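
For concreteness, a hedged sketch of that downstream protocol (the encoder and hyperparameters are placeholders): fit only a linear head on frozen features from the distilled images, then evaluate it on real test data.

```python
import torch
import torch.nn.functional as F

def train_linear_probe(encoder, images, labels, num_classes, steps=500, lr=0.01):
    with torch.no_grad():
        feats = encoder(images)              # frozen pre-trained features
    w = torch.zeros(num_classes, feats.shape[1], requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(feats @ w.t(), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w  # linear classifier trained purely on the (distilled) images
```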

2/6
November 21, 2025 at 4:01 AM