Paul Hagemann
@yungbayesian.bsky.social
PhD student at TU Berlin, working on generative models and inverse problems
he/him
what is so misunderstood about (3)?
May 26, 2025 at 5:28 PM
best of luck marvin :)
March 19, 2025 at 2:35 PM
it's also logical that greenpeace/foodwatch, for example, are closer to the greens, since they're the ones covering those topics. neutrality there would be rather laughable
February 27, 2025 at 1:51 PM
what is wrong with the study
February 18, 2025 at 11:23 PM
interesting point, but i would say (true) memorization is mathematically impossible. the underlying question is what generalization means when we are given finite training samples. it depends on the model and how long you train, see proceedings.neurips.cc/paper_files/... and arxiv.org/abs/2412.20292
Score-Based Generative Models Detect Manifolds
February 18, 2025 at 10:56 AM
yes i agree, but for diffusion such a constant velocity/score field does not even exist
February 7, 2025 at 1:04 PM
so in diffusion models the time schedule is such that we cannot have straight-path velocity fields (i.e., v_t(x_t) constant in time), as opposed to flow matching/rectified flows, where such paths are possible (although obtaining them requires either OT/rectifying...)
February 6, 2025 at 8:11 PM
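to make "straight path" concrete, here is a tiny numpy sketch (purely illustrative, not from the thread): for the linear interpolation x_t = (1-t) x_0 + t x_1, the conditional velocity x_1 - x_0 is constant in t, so a single Euler step integrates the path exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(2)  # latent sample
x1 = rng.standard_normal(2)  # data sample

def path(t):
    # linear (rectified-flow style) interpolation between x0 and x1
    return (1 - t) * x0 + t * x1

def velocity(t, eps=1e-4):
    # finite-difference d/dt x_t; exact here since the path is linear
    return (path(t + eps) - path(t - eps)) / (2 * eps)

# the velocity is the same at every time, i.e. the path is straight
assert np.allclose(velocity(0.1), velocity(0.9))
assert np.allclose(velocity(0.5), x1 - x0)

# one Euler step of size 1 from x0 lands exactly on x1
assert np.allclose(x0 + velocity(0.0), x1)
```

under a VP-diffusion schedule x_t = alpha_t x_1 + sigma_t eps, the analogous velocity depends on t, which is the point above.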
yes lol thank you!
January 23, 2025 at 12:08 PM
Check out our GitHub and give it a try yourself! Lots of potential in extending this to other domains (medical imaging, protein/bio stuff)!
github.com/annegnx/PnP-...
Also credit goes to my awesome collaborators Anne Gagneux, Sego Martin and Gabriele Steidl!
January 23, 2025 at 11:05 AM
Compared to diffusion methods, we can handle arbitrary latent distributions and also get (theoretically) straighter paths! We evaluate on multiple image datasets against flow matching, diffusion, and standard PnP-based restoration methods!
January 23, 2025 at 11:00 AM
Our algorithm proceeds as follows: we do a gradient step on the data fidelity, reproject onto the flow matching path and then denoise using our flow matching model. This is super cheap to do!
January 23, 2025 at 10:58 AM
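a hypothetical numpy sketch of those three steps (the Gaussian toy denoiser, step size, and reprojection rule are my assumptions for illustration, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 8, 16
A = rng.standard_normal((m, d))      # forward operator of the inverse problem
y = A @ rng.standard_normal(d)       # observation

def mmse_denoiser(x_t, t):
    # toy stand-in for the flow-matching model rewritten as a denoiser:
    # for a standard-Gaussian prior and independent coupling on the path
    # x_t = (1 - t) z + t x_1, the posterior mean E[x_1 | x_t] is linear
    return (t / ((1 - t) ** 2 + t ** 2)) * x_t

step = 0.01                          # data-fidelity step size (assumed)
ts = np.linspace(0.0, 0.95, 40)      # time grid along the flow
x = rng.standard_normal(d)           # start from a latent sample

for t, t_next in zip(ts[:-1], ts[1:]):
    x = x - step * A.T @ (A @ x - y)            # 1) gradient step on the data fidelity
    x1_hat = mmse_denoiser(x, t)                # 2) denoise: predict x_1 from x_t
    z_hat = (x - t * x1_hat) / (1 - t)          #    implied latent on the linear path
    x = (1 - t_next) * z_hat + t_next * x1_hat  # 3) reproject onto the path at t_next
```

each iteration only needs one denoiser call and one application of A and A.T, which is why it is cheap.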
Therefore, we use the plug-and-play framework and rewrite our velocity field (which predicts a direction) to instead denoise the image x_t (i.e., predict the MMSE image x_1). We then obtain a "time"-conditional PnP version, where we do the forward-backward PnP step at the current time and reproject
January 23, 2025 at 10:57 AM
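the rewrite itself is just a change of parametrization: on the linear path x_t = (1-t) z + t x_1 one has x1_hat = x_t + (1-t) v_t(x_t). a quick numeric check in a Gaussian toy case where both conditional expectations are known in closed form (my illustration, not from the paper):

```python
import numpy as np

t = 0.3
D = (1 - t) ** 2 + t ** 2          # Var(x_t) for z, x_1 ~ N(0, 1) independent

def velocity(x_t):
    return (2 * t - 1) / D * x_t   # E[x_1 - z | x_t]

def denoiser(x_t):
    return (t / D) * x_t           # E[x_1 | x_t]

x_t = np.linspace(-2.0, 2.0, 5)
# the velocity model, rewritten as an MMSE denoiser, matches E[x_1 | x_t]
assert np.allclose(x_t + (1 - t) * velocity(x_t), denoiser(x_t))
```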
very nice paper, only had a quick glimpse, but another aspect is that the optimal score estimator explodes if we approach t -> 0, which NNs ofc cannot replicate. how does this influence the results?
January 1, 2025 at 3:51 PM
you might be onto sth haha
November 28, 2024 at 10:26 AM
i guess the adam paper is a pretty good indicator of how many ml papers are being published. looks like we have been saturating since 2021
November 28, 2024 at 10:16 AM
same experience here. i am not sure we need actual conference reviewing at all. why don't we all publish on openreview, and if i use your paper, build upon it, or read it, i can write my opinion on it? without the accept/reject stamp.
November 24, 2024 at 1:14 PM
Here one can see FID results for different beta! Indeed, it seems fruitful to restrict mass movement in Y for class-conditional CIFAR! We also apply this to other interesting inverse problems; the article can be found at arxiv.org/abs/2403.18705
November 20, 2024 at 9:14 AM
We want to approximate this distance with standard OT solvers, and therefore introduce a twisted cost function. With this at hand, we can now do OT flow matching for inverse problems! The factor beta controls how much mass leakage we allow in Y.
November 20, 2024 at 9:12 AM
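a toy sketch of the effect (the exact twisted cost below, ||x-x'||^2 + beta * ||y-y'||^2, is my assumption; see the paper for the actual construction): with brute-force discrete OT over permutation couplings, a larger beta provably yields a plan that moves less mass in Y.

```python
from itertools import permutations

import numpy as np

rng = np.random.default_rng(0)
n = 6
x0, y0 = rng.standard_normal((n, 3)), rng.standard_normal((n, 2))
x1, y1 = rng.standard_normal((n, 3)), rng.standard_normal((n, 2))

cost_x = ((x0[:, None] - x1[None]) ** 2).sum(-1)  # pairwise squared distances in X
cost_y = ((y0[:, None] - y1[None]) ** 2).sum(-1)  # pairwise squared distances in Y

def y_leakage(beta):
    # brute-force discrete OT over permutation couplings for the twisted cost
    twisted = cost_x + beta * cost_y
    idx = np.arange(n)
    plan = min(permutations(range(n)), key=lambda p: twisted[idx, list(p)].sum())
    return cost_y[idx, list(plan)].sum()  # mass movement ("leakage") in Y

# larger beta -> the optimal plan moves less mass in Y
assert y_leakage(beta=100.0) <= y_leakage(beta=0.1)
```

the monotonicity in beta follows from comparing the two optimality conditions, so the assertion holds for any data, not just this seed.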
This object has already attracted some interest, e.g., it pops up in the theory of gradient flows. It generalizes the analogous property of the KL divergence quite nicely, and unifies some ideas present in conditional generative modelling. For instance, its dual is the loss usually used in conditional Wasserstein GANs.
November 20, 2024 at 9:10 AM
Now, does the same hold for the Wasserstein distance? Unfortunately not, since moving mass in the Y-direction can be more efficient for some measures. However, we can fix this by restricting the admissible couplings to those that do not move mass in the Y-direction.
November 20, 2024 at 9:08 AM