Yann Traonmilin
ytraonmilin.bsky.social
CNRS researcher at Institut de mathématiques de Bordeaux.

yanntraonmilin.perso.math.cnrs.fr
In approaches using deep projective priors, we link a key geometrical attribute, the "orthogonality of the projection", to identifiability and the convergence rate.
Using this attribute to regularize the learning of such priors improves stability and robustness for ill-posed imaging problems.
December 9, 2025 at 3:55 PM
To be completely honest, "is drawn with a ruler" is in the lesson. I don't know whether the ruler uses the axiom of choice, though
October 16, 2025 at 8:18 AM
a good basis for understanding what is at play when we try to improve PTQ, e.g. by cross-layer equalization or adaptive quantization.
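For context, a minimal sketch of the baseline that such PTQ improvements start from: symmetric per-tensor int8 quantization of a weight tensor. All names here are illustrative, not from the post.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform post-training quantization to int8 (per-tensor)."""
    scale = np.max(np.abs(w)) / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.max(np.abs(w - w_hat))                # worst-case rounding error
```

Cross-layer equalization and adaptive quantization both aim to shrink this per-tensor rounding error without retraining.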
October 10, 2025 at 7:37 AM
However, it is always possible to study algorithms as tools for minimizing a recovery error, to try to bypass this two-step process (which I have tried to do lately).

I wonder if such an approach is possible for the pure learning problem.
October 8, 2025 at 7:20 AM
So it makes sense to study optimization of the loss.

There is a parallel in inverse problems: set up a function to minimize, then guarantee convergence AND that the minimizers identify the right objects (that last part being often overlooked).
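Schematically, the two separate guarantees mentioned above can be written out for a generic variational formulation (my illustration, with generic notation, not from the thread):

```latex
% Observe y = A x^* + noise; recover x^* by minimizing a regularized loss:
\[
  \hat x \in \operatorname*{arg\,min}_x \; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R(x)
\]
% Two distinct guarantees are then needed:
% (1) convergence: the iterates x_k of the algorithm converge to \hat x;
% (2) identifiability: \hat x is close to (or equals) the true x^*.
```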
October 8, 2025 at 7:20 AM
Not sure if we are talking about the same thing, I was thinking about that:

Zoran, D., & Weiss, Y. (2011). From learning models of natural image patches to whole image restoration. ICCV.

I used it for estimating a low-rank GMM from a compressed patch database:
hal.science/hal-03429102
Compressive learning for patch-based image denoising
The Expected Patch Log-Likelihood algorithm (EPLL) and its extensions have shown good performances for image denoising. The prior model used by EPLL is usually a Gaussian Mixture Model (GMM) estimated...
hal.science
August 26, 2025 at 11:15 AM
but natural image patches do (I did not read the article, though)
August 26, 2025 at 7:16 AM
An unmissable gathering for the community! I won't be there in person, but Ali Joundi will present our work on atomic autoencoders

cnrs.hal.science/hal-04773954/
Max-sparsity atomic autoencoders with application to inverse problems
An atomic autoencoder is a neural network architecture that decomposes an image as a sum of low dimensional atoms. While it is efficient for image datasets which are well represented by this s...
cnrs.hal.science
July 2, 2025 at 7:37 AM
My workaround without a bib file, for a personal ref section ([YT1], [YT2], ...):

\newcounter{bibc}               % counter for the personal refs
\newcommand\nbib{\arabic{bibc}} % prints its current value
\stepcounter{bibc}\bibitem[YT\nbib]{label1} Reference 1
\stepcounter{bibc}\bibitem[YT\nbib]{label2} Reference 2

I don't remember why I couldn't do it with multibib
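A minimal self-contained sketch of how the counter trick might sit inside a thebibliography environment (labels and reference text here are illustrative placeholders):

```latex
\documentclass{article}
\newcounter{bibc}               % counter for the personal refs
\newcommand\nbib{\arabic{bibc}} % prints its current value
\begin{document}
Cited as \cite{traon1} and \cite{traon2}. % renders as [YT1] and [YT2]
\begin{thebibliography}{YT9}
\stepcounter{bibc}\bibitem[YT\nbib]{traon1} First personal reference.
\stepcounter{bibc}\bibitem[YT\nbib]{traon2} Second personal reference.
\end{thebibliography}
\end{document}
```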
June 20, 2025 at 7:42 AM
I like to think about them as Schrödinger papers, both accepted and rejected at the same time. 🤣
June 12, 2025 at 6:57 AM