@matthieuterris.bsky.social
And in challenging real-world imaging problems where there's no ground truth and only a few samples, we can still fine-tune… ✨ fully unsupervised! ✨
Using recent ideas from equivariant imaging + SURE, we adapt the model to a single noisy image, e.g. on this tough SPAD (single-photon avalanche diode) problem:
April 16, 2025 at 9:20 AM
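A minimal PyTorch sketch of what such single-image fine-tuning can look like, assuming Gaussian noise of known level `sigma`, a forward operator `A` with adjoint `A_adj`, and a model applied to the back-projected measurement. These are illustrative assumptions; the paper's exact losses, noise model (SPAD data has different statistics) and model interface may differ.

```python
import torch

def sure_loss(model, y, A, A_adj, sigma, eps=1e-3):
    """Monte-Carlo SURE: unbiased estimate of the measurement-domain MSE, no ground truth."""
    x_hat = model(A_adj(y))
    y_hat = A(x_hat)
    # Hutchinson estimate of the divergence term.
    b = torch.randn_like(y)
    y_pert = A(model(A_adj(y + eps * b)))
    div = (b * (y_pert - y_hat)).sum() / eps
    m = y.numel()
    return ((y_hat - y) ** 2).sum() - m * sigma ** 2 + 2 * sigma ** 2 * div

def ei_loss(model, y, A, A_adj):
    """Equivariant imaging loss with 90-degree rotations as a simple transform group."""
    x1 = model(A_adj(y))
    k = int(torch.randint(1, 4, (1,)))
    x2 = torch.rot90(x1, k, dims=(-2, -1))   # transformed reconstruction
    x3 = model(A_adj(A(x2)))                 # re-measure and reconstruct
    return ((x3 - x2) ** 2).mean()

def finetune_single_image(model, y, A, A_adj, sigma, steps=200, lr=1e-5, w_ei=1.0):
    """Adapt a pretrained model to one noisy measurement y, fully unsupervised."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sure_loss(model, y, A, A_adj, sigma) / y.numel() \
               + w_ei * ei_loss(model, y, A, A_adj)
        loss.backward()
        opt.step()
    return model
```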
Despite not being unrolled, our network shows strong zero-shot generalization — even to unseen operators and noise levels!
April 16, 2025 at 9:20 AM
We trained this network on multiple imaging tasks: motion blur, inpainting, MRI, CT, Poisson-Gaussian denoising, super-resolution... and across modalities (1, 2, 3 channels).
It learned all tasks jointly — no task-specific retraining needed.
April 16, 2025 at 9:20 AM
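A hypothetical sketch of such joint training: each step samples a task, builds its forward operator, simulates y = Ax + e from a clean batch, and supervises one shared network across all tasks. `make_operator` and the model's call signature are placeholders, not the paper's API.

```python
import random
import torch
import torch.nn.functional as F

TASKS = ["denoising", "inpainting", "motion_blur", "super_resolution"]

def training_step(model, x, make_operator, optimizer):
    task = random.choice(TASKS)
    A, A_adj = make_operator(task, x.shape)        # task-specific forward operator (placeholder)
    sigma = random.uniform(0.01, 0.2)              # random noise level
    Ax = A(x)
    y = Ax + sigma * torch.randn_like(Ax)          # simulate measurements y = Ax + e
    x_hat = model(A_adj(y), A=A, sigma=sigma)      # operator-conditioned reconstruction (assumed interface)
    loss = F.l1_loss(x_hat, x)                     # supervised loss on the clean image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```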
We propose two key architectural updates to make a UNet multitask and generalize:
✅ Inject knowledge of the measurement operator A into inner layers (like conditioning in diffusion models).
✅ Share weights across modalities (grayscale, color, complex), adapting only input/output heads.
April 16, 2025 at 9:20 AM
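An illustrative sketch of these two ideas under simple assumptions: FiLM-style conditioning of inner blocks on an embedding of the operator/noise level (analogous to timestep conditioning in diffusion models), plus per-modality 1x1 input/output heads around a shared backbone. The real architecture may differ in its details.

```python
import torch
import torch.nn as nn

class ConditionedBlock(nn.Module):
    """Conv block modulated by an embedding of the operator/noise level (FiLM)."""
    def __init__(self, channels, emb_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(emb_dim, 2 * channels)

    def forward(self, h, emb):
        scale, shift = self.to_scale_shift(emb).chunk(2, dim=-1)
        h = self.conv(h)
        return torch.relu(h * (1 + scale[..., None, None]) + shift[..., None, None])

class SharedBackbone(nn.Module):
    """One backbone for all modalities; only the input/output heads depend on channel count."""
    def __init__(self, width=64, emb_dim=128, modalities=(1, 2, 3)):
        super().__init__()
        self.heads = nn.ModuleDict({str(c): nn.Conv2d(c, width, 1) for c in modalities})
        self.tails = nn.ModuleDict({str(c): nn.Conv2d(width, c, 1) for c in modalities})
        self.blocks = nn.ModuleList([ConditionedBlock(width, emb_dim) for _ in range(4)])
        self.embed = nn.Sequential(nn.Linear(2, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, x, op_features):
        # `op_features`: a small vector summarising the operator / noise level,
        # a stand-in for whatever operator information is actually injected.
        emb = self.embed(op_features)
        c = str(x.shape[1])
        h = self.heads[c](x)
        for block in self.blocks:
            h = block(h, emb)
        return self.tails[c](h)
```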
⚠️ But trained unrolled networks lose their theoretical properties anyway...
So we asked: can we revive the UNet — and make it generalize?
April 16, 2025 at 9:20 AM
The issue: plain UNets don’t generalize well to new forward operators A.
They’re also rarely used in this space because unrolled networks are thought to offer better interpretability.
April 16, 2025 at 9:20 AM
Most architectures trained for solving y = Ax + e are tailored to specific operators A.
But what if we used one universal UNet backbone?
No PnP, no unrolling, no retraining — just good inductive bias.
April 16, 2025 at 9:20 AM
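As a tiny worked example of the setup y = Ax + e: here A is a random inpainting mask and e is Gaussian noise. The reconstruction call at the end is a placeholder for an operator-aware network, not a specific API.

```python
import torch

x = torch.rand(1, 3, 64, 64)                    # unknown clean image
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
A = lambda img: mask * img                      # masking (inpainting) operator
sigma = 0.05
y = A(x) + sigma * torch.randn_like(x)          # observed measurements y = Ax + e

# A single operator-aware network would then be asked to recover x from (y, A),
# whatever the operator is, e.g.:  x_hat = model(y, A=A, sigma=sigma)
```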