Frank Fundel
@frankfundel.bsky.social
PhD Student @ LMU Munich

https://ffundel.de/
Reposted by Frank Fundel
🧹 CleanDIFT: Diffusion Features without Noise
@rmsnorm.bsky.social*, @stefanabaumann.bsky.social*, @koljabauer.bsky.social*, @frankfundel.bsky.social, Björn Ommer
Oral Session 1C (Davidson Ballroom): Friday 9:00
Poster Session 1 (ExHall D): Friday 10:30-12:30, # 218
compvis.github.io/cleandift/
CleanDIFT: Diffusion Features without Noise
CleanDIFT enables extracting Noise-Free, Timestep-Independent Diffusion Features
June 9, 2025 at 7:58 AM
Our paper is accepted at WACV 2025! 🤗
Check out DistillDIFT. Code & weights are now public:
👉 github.com/compvis/dist...
December 6, 2024 at 2:35 PM
🔥 We achieve SOTA in unsupervised & weakly-supervised semantic correspondence at just a fraction of the computational cost.
December 6, 2024 at 2:35 PM
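The core operation behind semantic correspondence is nearest-neighbor search over dense features: each source keypoint is matched to the target location whose feature vector is most similar. A minimal sketch (illustrative only; the function name and shapes are assumptions, not the DistillDIFT codebase):

```python
import numpy as np

def match_keypoints(feat_src, feat_tgt, keypoints):
    """Match source keypoints to target locations by cosine similarity.

    feat_src, feat_tgt: (H, W, C) dense feature maps from a vision backbone.
    keypoints: (N, 2) array of (row, col) coordinates in the source image.
    Returns an (N, 2) array of matched (row, col) target coordinates.
    """
    H, W, C = feat_tgt.shape
    # L2-normalize target features so dot products equal cosine similarity
    tgt = feat_tgt.reshape(-1, C)
    tgt = tgt / (np.linalg.norm(tgt, axis=1, keepdims=True) + 1e-8)
    matches = []
    for r, c in keypoints:
        q = feat_src[r, c]
        q = q / (np.linalg.norm(q) + 1e-8)
        idx = np.argmax(tgt @ q)          # best-matching target cell
        matches.append((idx // W, idx % W))
    return np.array(matches)
```

Better features (e.g. distilled diffusion features) mean the argmax lands on the semantically corresponding point more often; the matching step itself stays this simple.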
✨ By training just a tiny LoRA adapter, we transfer the power of a large diffusion model (SDXL Turbo) into a small ViT (DINOv2).

🔄 All done unsupervised by retrieving pairs of similar images.
December 6, 2024 at 2:35 PM
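The "tiny LoRA adapter" idea: keep the pretrained weight frozen and learn only a low-rank update, so distillation trains a small fraction of the parameters. A minimal sketch of one such layer, assuming the standard LoRA parametrization W + (alpha/r)·BA (class and parameter names are hypothetical, not from the paper's code):

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer with a trainable low-rank update (sketch).

    Only A and B would be trained during distillation; W stays frozen,
    so the adapter adds 2*r*d parameters instead of d*d.
    """
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                              # frozen pretrained weight, shape (out, in)
        self.A = rng.normal(scale=0.01, size=(r, W.shape[1]))   # trainable down-projection
        self.B = np.zeros((W.shape[0], r))                      # trainable up-projection, zero-initialized
        self.scale = alpha / r

    def __call__(self, x):
        # Frozen path plus scaled low-rank correction
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T
```

Zero-initializing B means the adapted layer starts out identical to the frozen backbone, so training can only move it away from the pretrained behavior as the distillation loss demands.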
🚀 Meet DistillDIFT:
It distills the power of two vision foundation models into one streamlined model, achieving SOTA performance at a fraction of the computational cost.

No need for bulky generative combos—just pure efficiency. 💡
December 6, 2024 at 2:35 PM