Xi WANG
@xiwang92.bsky.social
Ecole Polytechnique, IP Paris; Prev. Ph.D.@Univ Rennes, Inria/IRISA
https://triocrossing.github.io/
For more details, visit the project website: yuanzhi-zhu.github.io/DiMO/
Or read the paper: arxiv.org/abs/2503.15457
The project is led by Yuanzhi Zhu (yuanzhi-zhu.github.io/about/) and supervised by @stephlat.bsky.social and @vickykalogeiton.bsky.social.
Di[M]O: Distilling Masked Diffusion Models into One-step Generator
March 21, 2025 at 3:36 PM
We test Di[M]O on image generation with MaskGIT & Meissonic as teacher models.
- The first one-step MDM that competes with its multi-step teachers.
- An 8-32x speed-up without degradation in quality.
- The first successful distillation approach for text-to-image MDMs.
Our approach fundamentally differs from previous distillation methods, such as DMD. Instead of minimizing the divergence of denoising distributions across the entire latent space, Di[M]O optimizes the divergence of token-level conditional distributions.
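As a concrete illustration of the token-level objective, the sketch below computes a KL divergence between teacher and student per-token conditional distributions, averaged over the masked positions only. The shapes and masking convention are assumptions for the toy example, not the paper's exact implementation.

```python
import numpy as np

def token_level_kl(p_teacher, p_student, mask):
    """KL(teacher || student) per token, averaged over masked positions.

    p_teacher, p_student: (seq_len, vocab_size) arrays where each row is
    a categorical distribution conditioned on the same partial sequence.
    mask: boolean (seq_len,) array marking the still-masked positions.
    """
    per_token = np.sum(
        p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1
    )
    return per_token[mask].mean()
```

Because the divergence is taken token by token at a given intermediate state, it avoids comparing full joint denoising distributions over the entire latent space.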
To approximate the loss gradient, we introduce an auxiliary model that estimates an otherwise intractable term in the loss function. The auxiliary model is trained using a standard MDM training loss, with one-step generated samples as targets.
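A minimal sketch of that auxiliary training signal, in a toy numpy setup: the auxiliary model is fit with an ordinary masked cross-entropy, except that the targets are tokens produced by the one-step student rather than real data. The names and shapes here are illustrative assumptions.

```python
import numpy as np

def mdm_loss(aux_logits, target_tokens, mask):
    """Standard MDM training loss: cross-entropy on masked positions,
    with the one-step generator's samples used as the targets.

    aux_logits: (seq_len, vocab_size) raw scores from the auxiliary model.
    target_tokens: (seq_len,) token ids sampled by the one-step student.
    mask: boolean (seq_len,) array of masked positions.
    """
    # Log-softmax over the vocabulary, computed stably.
    z = aux_logits - aux_logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(target_tokens)), target_tokens]
    return nll[mask].mean()
```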
To sample from the correct joint distribution, we introduce an initialization that maps a randomized input sequence to an almost deterministic target sequence.
Without proper initialization, the model may suffer from divergence or mode collapse, making this step essential.
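To see what "almost deterministic" means at the level of a single token, the hedged sketch below sharpens a categorical distribution with a low temperature: as the temperature goes to zero, the output collapses to a near one-hot target. This only illustrates the idea; the paper's actual initialization scheme is more involved.

```python
import numpy as np

def sharpen(probs, temperature):
    """Rescale a categorical distribution by a temperature.

    As temperature -> 0, the distribution becomes almost deterministic
    (nearly one-hot on its argmax), which is the regime the
    initialization targets.
    """
    logits = np.log(probs) / temperature
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)
```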
The initial distribution is crucial here. As Jiaming Song points out in his recent position paper (arxiv.org/abs/2503.07154), multi-token prediction is inherently difficult due to the independence assumption between the predicted tokens.
The key idea is inspired by on-policy distillation. We align the output distributions of the teacher and student models at the student-generated intermediate states, ensuring that the student's generation closely matches the teacher's by covering all possible intermediate states.
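One on-policy step can be sketched as follows, in a toy numpy mock-up with stand-in "models" that are just fixed random distributions (the masking ratio and sampling details are assumptions): the one-step student generates a full sample, part of it is re-masked to form an intermediate state the student itself would visit, and the teacher and student distributions are aligned there.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ, VOCAB = 6, 8  # toy sequence length and vocabulary size

def random_model(seed):
    """Stand-in for a network: fixed per-position token distributions."""
    r = np.random.default_rng(seed)
    logits = r.normal(size=(SEQ, VOCAB))
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)

teacher_probs = random_model(1)
student_probs = random_model(2)

def on_policy_step(student_probs, teacher_probs):
    # 1. The one-step student generates a full sample.
    tokens = np.array([rng.choice(VOCAB, p=p) for p in student_probs])
    # 2. Re-mask a random subset of positions: an intermediate state
    #    drawn from the student's own generation process.
    mask = rng.random(SEQ) < 0.5
    # 3. Align teacher and student token distributions at that state.
    #    (Real models would condition on the partially masked `tokens`;
    #    these mocks do not.)
    kl = np.sum(
        teacher_probs * np.log(teacher_probs / student_probs), axis=-1
    )
    return kl[mask].mean() if mask.any() else 0.0
```

Sampling the intermediate states from the student's own trajectory, rather than from the teacher's, is what makes the scheme on-policy.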