🔗 Paper: arxiv.org/abs/2503.08915
🔗 Code: github.com/matthieutrs/...
🔗 Demo: huggingface.co/spaces/deepi...
Using recent ideas from equivariant imaging + SURE, we adapt the model to a single noisy image, e.g. on this tough SPAD problem:
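For concreteness, here is a rough PyTorch sketch of the kind of self-supervised losses involved. This is an illustrative assumption, not the paper's code: net, A, A_adj, sigma and transform are placeholder names, and a Gaussian-noise SURE estimator is shown for simplicity, whereas a SPAD/photon-counting problem would call for a Poisson-adapted variant.

```python
import torch

def sure_loss(net, A, A_adj, y, sigma, eps=1e-3):
    """Monte-Carlo SURE: unbiased estimate of the measurement-domain MSE
    under Gaussian noise of std sigma, computed from the noisy data alone."""
    x_hat = net(A_adj(y))                  # reconstruction from back-projected data
    meas = A(x_hat)
    res = meas - y
    b = torch.randn_like(y)                # random probe for the divergence term
    div = (b * (A(net(A_adj(y + eps * b))) - meas)).sum() / eps
    m = y.numel()
    return (res.pow(2).sum() - m * sigma**2 + 2 * sigma**2 * div) / m

def ei_loss(net, A, A_adj, y, transform):
    """Equivariant imaging: reconstructions should commute with transforms
    (e.g. random rotations/shifts) that leave the image distribution
    invariant, which constrains the nullspace of A."""
    x_hat = net(A_adj(y))
    x_t = transform(x_hat)                 # transformed reconstruction
    x_tt = net(A_adj(A(x_t)))              # re-measure, then re-reconstruct
    return (x_tt - x_t).pow(2).mean()
```

Adaptation then amounts to a few gradient steps on sure_loss + λ·ei_loss computed on that single measurement, updating either the whole network or only a small subset of its parameters.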
It learned all tasks jointly — no task-specific retraining needed.
✅ Inject knowledge of the measurement operator A into inner layers (like conditioning in diffusion models).
✅ Share weights across modalities (grayscale, color, complex), adapting only input/output heads.
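In rough PyTorch, these two design choices might look like the sketch below. This is an illustrative guess, not the released architecture: UniversalUNet, ConditionedBlock and the conditioning signal cond are placeholder names, and the downsampling path and skip connections of an actual UNet are omitted for brevity.

```python
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    """Inner block that mixes features with an operator-dependent signal,
    loosely analogous to the conditioning used in diffusion models."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.cond = nn.Conv2d(ch, ch, 3, padding=1)   # injects knowledge of A

    def forward(self, x, c):
        return torch.relu(self.conv(x) + self.cond(c))

class UniversalUNet(nn.Module):
    """One shared trunk; only the input/output heads depend on the modality."""
    def __init__(self, base_ch=64, channels=None):
        super().__init__()
        channels = channels or {"gray": 1, "color": 3, "complex": 2}
        self.in_heads = nn.ModuleDict(
            {k: nn.Conv2d(c, base_ch, 3, padding=1) for k, c in channels.items()})
        self.out_heads = nn.ModuleDict(
            {k: nn.Conv2d(base_ch, c, 3, padding=1) for k, c in channels.items()})
        self.blocks = nn.ModuleList([ConditionedBlock(base_ch) for _ in range(4)])

    def forward(self, x_in, cond, modality):
        # x_in:  e.g. the back-projection A^T y for this modality
        # cond:  image-shaped signal carrying knowledge of the operator A
        h = self.in_heads[modality](x_in)
        c = self.in_heads[modality](cond)
        for blk in self.blocks:
            h = blk(h, c)
        return self.out_heads[modality](h)
```

Reconstruction is then a single forward pass, e.g. x_hat = model(A_adj(y), A_adj(y), "gray"), with the same trunk weights reused across tasks and modalities.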
So we asked: can we revive the UNet — and make it generalize?
They’re also rarely used in this space because unrolled networks are thought to offer better interpretability.
But what if we used one universal UNet backbone?
No PnP, no unrolling, no retraining — just good inductive bias.