Sadegh Salehi
@sadeghsalehi.bsky.social
Researching optimisation, machine learning, inverse problems, and computer vision
This is joint work with Subhadip Mukherjee, Lindon Roberts, and Matthias J. Ehrhardt

#MachineLearning #Optimisation #Imaging #Bilevel_Learning
December 17, 2024 at 4:54 PM
Our numerical experiments show:

📈 Significant speed-ups and better performance in image denoising and deblurring compared to the Method of Adaptive Inexact Descent (MAID).

Read the full preprint here:

arxiv.org/abs/2412.12049
Bilevel Learning with Inexact Stochastic Gradients
December 17, 2024 at 4:54 PM
Why does this matter?

🔍 Insights into the behaviour of inexact stochastic gradients in bilevel problems, with practical assumptions and convergence results.
✅ Speed-ups over adaptive deterministic bilevel methods.
✅ Better generalisation for imaging tasks.
December 17, 2024 at 4:54 PM
In this work, we make a theoretical contribution by connecting stochastic approximate hypergradients in bilevel optimisation to the theory of stochastic nonconvex optimisation.

Under mild assumptions, we prove these hypergradients satisfy the Biased ABC assumption for SGD.
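
For context, a rough sketch of the setting in generic notation (mine, not necessarily the paper's); the exact assumptions, constants, and the form of the bias term are in the preprint:

% Bilevel problem and its hypergradient (assuming a smooth, strongly convex
% lower level, so the implicit function theorem applies).
\[
  \min_{\theta} \; F(\theta) := f\bigl(x(\theta)\bigr),
  \qquad
  x(\theta) = \arg\min_{x} \, g(\theta, x),
\]
\[
  \nabla F(\theta)
  = -\nabla^{2}_{\theta x} g\bigl(\theta, x(\theta)\bigr)\,
    \bigl[\nabla^{2}_{xx} g\bigl(\theta, x(\theta)\bigr)\bigr]^{-1}
    \nabla f\bigl(x(\theta)\bigr).
\]
% An ABC-type growth condition on a stochastic, inexact estimator
% \widetilde{\nabla} F of the hypergradient has the flavour
\[
  \mathbb{E}\,\bigl\|\widetilde{\nabla} F(\theta)\bigr\|^{2}
  \le 2A\bigl(F(\theta) - F^{\inf}\bigr)
    + B\,\bigl\|\nabla F(\theta)\bigr\|^{2} + C,
\]
% together with a bound on the bias
% \|\mathbb{E}\,\widetilde{\nabla} F(\theta) - \nabla F(\theta)\|.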
December 17, 2024 at 4:54 PM
This is joint work with Lea Bogensperger, Matthias J. Ehrhardt, Thomas Pock, and Hok Shing Wong.
December 14, 2024 at 8:56 PM
📊 Below, see how the learned ICNN regulariser performs on a sparse-angle computed tomography problem. Our bilevel framework shows significant improvement compared to adversarial training-based methods previously introduced in the literature.
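
For concreteness, the kind of bilevel problem at play here, in rough notation of my own (the paper's exact setup may differ): the lower level is a variational reconstruction with the learned convex regulariser, and the upper level fits that regulariser to ground-truth images.

% Lower level: variational CT reconstruction with a learned convex regulariser
% R_\theta (an ICNN, so the problem below stays convex in x); A denotes the
% sparse-angle forward (Radon-type) operator and y the measured sinogram.
\[
  x_{\theta}(y) \in \arg\min_{x}\;
  \tfrac{1}{2}\bigl\|A x - y\bigr\|_{2}^{2} + R_{\theta}(x).
\]
% Upper level: fit \theta to ground-truth images over training pairs (y_i, x_i^\star).
\[
  \min_{\theta}\; \frac{1}{N} \sum_{i=1}^{N}
  \tfrac{1}{2}\bigl\| x_{\theta}(y_{i}) - x_{i}^{\star} \bigr\|_{2}^{2}.
\]

In this formulation the regulariser is trained directly for reconstruction quality, whereas, roughly speaking, adversarial training fits it via a discrimination-style criterion without solving the reconstruction problem during training.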
December 14, 2024 at 8:56 PM
✨ Key Highlights:
• A-posteriori error bounds for inexact hypergradients computed by primal-dual style differentiation.
• Adaptive, convergent bilevel framework with primal-dual style differentiation (a toy sketch of the adaptive-inexactness idea is included below).
• Application to learning data-adaptive regularisers (e.g., ICNNs)!
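
To unpack the adaptive-inexactness idea, here is a minimal, self-contained toy sketch; it is not the authors' algorithm and does not use primal-dual differentiation. The lower level is a one-parameter Tikhonov denoising problem, the hypergradient comes from implicit differentiation evaluated at an inexact lower-level solution, and a simple relative-error rule tightens the lower-level tolerance as the hypergradient shrinks. All step sizes, tolerances, and helper names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
x_true = rng.standard_normal(50)            # ground-truth signal
y = x_true + 0.5 * rng.standard_normal(50)  # noisy observation


def lower_level_solve(theta, eps, x0):
    """Inexactly minimise g(theta, x) = 0.5*||x - y||^2 + 0.5*theta*||x||^2
    by gradient descent, stopping once ||grad_x g|| <= eps."""
    x = x0.copy()
    step = 0.5 / (1.0 + theta)   # conservative step so the tolerance matters
    while True:
        grad = (x - y) + theta * x
        if np.linalg.norm(grad) <= eps:
            return x
        x = x - step * grad


def hypergradient(theta, x):
    """Approximate d/dtheta of 0.5*||x(theta) - x_true||^2.
    Here dx/dtheta = -x(theta)/(1+theta) exactly (quadratic lower level);
    we plug in the inexact lower-level solution x."""
    dx_dtheta = -x / (1.0 + theta)
    return float(dx_dtheta @ (x - x_true))


theta, eps = 1.0, 1e-1
x = np.zeros_like(y)
for k in range(30):
    x = lower_level_solve(theta, eps, x)     # warm-started inexact solve
    h = hypergradient(theta, x)
    # A-posteriori control (illustrative): the lower level is (1+theta)-strongly
    # convex, so ||x - x(theta)|| <= eps / (1 + theta) and the hypergradient
    # error is O(eps). Tighten eps whenever it is large relative to |h|.
    if eps > 0.1 * abs(h):
        eps *= 0.5
        continue
    theta = max(theta - 0.05 * h, 1e-6)      # projected gradient step on theta
    print(f"iter {k:2d}  theta = {theta:.4f}  |h| = {abs(h):.2e}  eps = {eps:.1e}")

The real framework replaces this crude error control with the paper's a-posteriori bounds and uses primal-dual style differentiation instead of plain implicit differentiation.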
December 14, 2024 at 8:56 PM