@matthieuterris.bsky.social
And in challenging real-world imaging problems, where there is no ground truth and only a few samples, we can still fine-tune… ✨ fully unsupervised! ✨
Using recent ideas from equivariant imaging + SURE, we adapt the model to a single noisy image, e.g. on this tough single-photon (SPAD) imaging problem:
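As a rough illustration of how such a fine-tuning loss can be built without ground truth, here is a toy numpy sketch (not the paper's actual code): a Monte-Carlo SURE term estimates the denoising error from the noisy image alone, and an equivariance penalty encourages the network to commute with transforms (here, circular shifts). The `denoise` function is a hypothetical stand-in for the UNet.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(y, w=0.8):
    """Toy 'network': shrinkage toward the mean (stand-in for the UNet)."""
    return w * y + (1 - w) * y.mean()

def mc_sure(y, f, sigma, eps=1e-3):
    """Monte-Carlo SURE estimate of the MSE of f at y (Gaussian noise, known sigma):
    SURE = ||f(y) - y||^2 / n - sigma^2 + (2 sigma^2 / n) * div f(y),
    with the divergence estimated by a single random probe b."""
    n = y.size
    b = rng.choice([-1.0, 1.0], size=y.shape)        # Rademacher probe
    div = np.sum(b * (f(y + eps * b) - f(y))) / eps  # finite-difference divergence
    return np.sum((f(y) - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

def ei_penalty(y, f, shift=3):
    """Equivariance penalty: the network should commute with shifts,
    f(T y) ≈ T f(y), a transform the image distribution is invariant to."""
    return np.mean((f(np.roll(y, shift)) - np.roll(f(y), shift)) ** 2)

sigma = 0.1
x = np.sin(np.linspace(0, 4 * np.pi, 256))    # clean 1-D "image"
y = x + sigma * rng.standard_normal(x.shape)  # single noisy measurement

# Fully unsupervised objective: no clean x is ever used.
loss = mc_sure(y, denoise, sigma) + 0.1 * ei_penalty(y, denoise)
```

In practice one would backpropagate this loss through the network's weights; the sketch only shows how the objective is assembled from the noisy image alone.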
April 16, 2025 at 9:20 AM
Despite not being unrolled, our network shows strong zero-shot generalization — even to unseen operators and noise levels!
We trained this network on multiple imaging tasks: motion blur, inpainting, MRI, CT, Poisson-Gaussian denoising, super-resolution... and across modalities (1, 2, 3 channels).
It learned all tasks jointly — no task-specific retraining needed.
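A minimal sketch of what joint multitask training can look like, under my own simplifying assumptions (1-D signals, three toy operators; the task names and sampler are hypothetical, not the paper's pipeline): each step draws a random task and noise level, and forms a measurement y = A(x) + noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward operators A for a few tasks (1-D signals for brevity).
def blur(x):        # motion-blur stand-in: moving average
    k = np.ones(5) / 5
    return np.convolve(x, k, mode="same")

def inpaint(x):     # random masking
    m = rng.random(x.shape) > 0.3
    return x * m

def downsample(x):  # super-resolution forward model
    return x[::2]

TASKS = {"blur": blur, "inpainting": inpaint, "sr": downsample}

def sample_step(x):
    """One multitask training step: draw a task, form y = A(x) + noise."""
    name = rng.choice(list(TASKS))
    ax = TASKS[name](x)
    sigma = rng.uniform(0.01, 0.1)  # varied noise levels
    y = ax + sigma * rng.standard_normal(ax.shape)
    return name, y, sigma

x = np.sin(np.linspace(0, 2 * np.pi, 128))
for _ in range(3):
    name, y, sigma = sample_step(x)
```

The point of training this way is that one set of weights sees every task, so no task-specific retraining is needed afterwards.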
We propose two key architectural changes to make a UNet handle multiple tasks and generalize:
✅ Inject knowledge of the measurement operator A into inner layers (like conditioning in diffusion models).
✅ Share weights across modalities (grayscale, color, complex), adapting only the input/output heads.
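The two ideas above can be sketched in a few lines of numpy. This is a toy illustration under my own assumptions, not the paper's architecture: a FiLM-style conditioning layer (hypothetical here) turns a descriptor of the operator A into per-feature scales and shifts inside a shared trunk, while only thin input/output heads depend on the channel count.

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTH = 16  # shared feature width of the trunk

# Per-modality input/output heads (1x1 convs, here plain matrices):
# only these depend on the number of channels; the trunk is shared.
heads_in  = {c: rng.standard_normal((WIDTH, c)) * 0.1 for c in (1, 2, 3)}
heads_out = {c: rng.standard_normal((c, WIDTH)) * 0.1 for c in (1, 2, 3)}

# FiLM-style conditioning: map a descriptor of A (e.g. blur width,
# sampling rate, noise level) to a per-feature scale and shift.
W_cond = rng.standard_normal((2 * WIDTH, 4)) * 0.1

def film(a_desc):
    h = W_cond @ a_desc
    return 1 + h[:WIDTH], h[WIDTH:]  # scale, shift

def trunk(feat, a_desc):
    scale, shift = film(a_desc)
    # one shared "layer": operator conditioning, then a nonlinearity
    # (stand-in for the UNet body)
    return np.maximum(scale[:, None] * feat + shift[:, None], 0.0)

def forward(y, a_desc):
    c = y.shape[0]               # channels: 1 (gray/CT), 2 (complex MRI), 3 (RGB)
    feat = heads_in[c] @ y       # modality-specific input head
    feat = trunk(feat, a_desc)   # shared, operator-conditioned trunk
    return heads_out[c] @ feat   # modality-specific output head

y_rgb  = rng.standard_normal((3, 64))    # flattened toy "image", 3 channels
y_gray = rng.standard_normal((1, 64))
a_desc = np.array([1.0, 0.5, 0.0, 0.1])  # toy descriptor of A

out_rgb  = forward(y_rgb, a_desc)
out_gray = forward(y_gray, a_desc)
```

The design point: the same trunk weights process every modality and operator, so knowledge transfers across tasks instead of being siloed per task.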
For my first 🧵 on bsky: what if a single UNet could solve all inverse problems?
In our latest preprint with Samuel Hurault, Maxime Song and @tachellajulian.bsky.social, we build a single multitask UNet for computational imaging — and show it generalizes surprisingly well 👇 arxiv.org/abs/2503.08915