Sean Man
@sean-man.bsky.social
Ph.D. student at the Technion under Prof. Michael Elad; researching image inverse problems; sean_8100🐦
We cover it in the paper to some extent. We found that things work out as long as y resembles an image. Moreover, the encoder is robust to a small amount of noise.
January 23, 2025 at 9:34 AM
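A rough way to sanity-check that robustness claim (toy encoder and toy sizes, purely illustrative stand-ins): encode the measurement with and without a small perturbation and compare the resulting latents.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(64, 16)                   # stand-in for the VAE encoder E
y = torch.randn(1, 64)                        # a measurement that still resembles an image
y_noisy = y + 0.01 * torch.randn_like(y)      # small additive noise

# small relative change in the latent ~ encoder is robust to this perturbation
rel_change = (encoder(y_noisy) - encoder(y)).norm() / encoder(y).norm()
print(f"relative change in latent: {rel_change.item():.3e}")
```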
🙌 This work was led by Ron Raphaeli under the guidance of Prof. Miki Elad.

🧵[7/7]
January 22, 2025 at 5:27 PM
✨ The result?

✅ Sharper images
✅ Significant speedups
✅ A simple framework for inverse problems with latent diffusion priors.

🧵[6/7]
January 22, 2025 at 5:27 PM
🔥 Our solution: What if we could bypass the decoder entirely?

We designed a latent operator that mimics image-space degradations directly in the latent space, eliminating the use of the decoder and its Jacobian.

🧵[5/7]
January 22, 2025 at 5:27 PM
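To make the idea concrete, here is a minimal toy sketch (not the paper's exact construction; the encoder, latent operator, and sizes are illustrative stand-ins): the data-fidelity term is evaluated entirely in latent space, comparing a latent-space operator applied to the current latent estimate against the encoded measurement, so no gradient ever flows through the decoder.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64                 # toy sizes, stand-ins for a real VAE

encoder = nn.Linear(image_dim, latent_dim)     # stand-in for the VAE encoder E
latent_op = nn.Linear(latent_dim, latent_dim)  # stand-in for a latent-space degradation operator

def latent_guidance_grad(z0_hat, y, scale=1.0):
    """Gradient of ||latent_op(z0_hat) - E(y)||^2 w.r.t. z0_hat.

    The measurement y still resembles an image, so it can be encoded directly;
    no gradient ever passes through the decoder.
    """
    z_y = encoder(y).detach()                  # encode the measurement
    z0_hat = z0_hat.detach().requires_grad_(True)
    loss = ((latent_op(z0_hat) - z_y) ** 2).sum()
    (grad,) = torch.autograd.grad(loss, z0_hat)
    return scale * grad

# toy usage inside one reverse-diffusion step
z0_hat = torch.randn(1, latent_dim)            # current clean-latent estimate
y = torch.randn(1, image_dim)                  # degraded measurement (image space)
z0_hat = z0_hat - 0.1 * latent_guidance_grad(z0_hat, y)
```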
⚠️ Worse, backpropagating through the decoder introduces artifacts into the restored images due to its Jacobian.

🧵[4/7]
January 22, 2025 at 5:27 PM
💡 The challenge: Solving inverse problems with latent diffusion models is tricky because degradation operators (e.g., blur, noise) are defined in image space.

This forces costly decoding steps at every iteration, slowing everything down.

🧵[3/7]
January 22, 2025 at 5:27 PM
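For contrast, a toy sketch of the standard image-space guidance step this refers to (the decoder and degradation operator below are illustrative stand-ins): every iteration decodes the latent estimate, applies the image-space operator, and differentiates back through the decoder.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64               # toy sizes

decoder = nn.Linear(latent_dim, image_dim)   # stand-in for the VAE decoder D
H = nn.Linear(image_dim, image_dim)          # stand-in for an image-space degradation (blur, mask, ...)

def image_space_guidance_grad(z0_hat, y, scale=1.0):
    """Gradient of ||H(D(z0_hat)) - y||^2 w.r.t. z0_hat.

    The chain rule pulls in the decoder's Jacobian, which is the costly
    (and, per the thread, artifact-prone) part of each iteration.
    """
    z0_hat = z0_hat.detach().requires_grad_(True)
    loss = ((H(decoder(z0_hat)) - y) ** 2).sum()
    (grad,) = torch.autograd.grad(loss, z0_hat)
    return scale * grad

z0_hat = torch.randn(1, latent_dim)          # current clean-latent estimate
y = torch.randn(1, image_dim)                # degraded measurement
z0_hat = z0_hat - 0.1 * image_space_guidance_grad(z0_hat, y)
```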
If we are talking about image-to-image tasks, it seems you need only one:

arxiv.org/abs/2406.00828
Imitating the Functionality of Image-to-Image Models Using a Single Example
We study the possibility of imitating the functionality of an image-to-image translation model by observing input-output pairs. We focus on cases where training the model from scratch is impossible, e...
January 9, 2025 at 10:12 PM