lebellig
@lebellig.bsky.social
Ph.D. student working on generative models and domain adaptation for Earth observation 🛰
Previously intern @SonyCSL, @Ircam, @Inria

🌎 Personal website: https://lebellig.github.io/
"Curly Flow Matching for Learning Non-gradient Field Dynamics" @kpetrovvic.bsky.social et al. arxiv.org/pdf/2510.26645
Solving the Schrödinger bridge problem with a non-zero-drift reference process: learn curved interpolants, apply minibatch OT with the induced metric, then learn the mixture of diffusion bridges.
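For the curious, a minimal sketch of the minibatch-OT step, with a placeholder squared-Euclidean cost where the paper uses the metric induced by the curved interpolants (the function names are mine, not the authors'):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def minibatch_ot_pairs(x0, x1, cost_fn):
    """Pair source/target minibatch samples via an exact OT plan."""
    n = x0.shape[0]
    a = b = np.full(n, 1.0 / n)  # uniform marginals
    M = cost_fn(x0, x1)          # (n, n) pairwise cost matrix
    plan = ot.emd(a, b, M)       # exact OT coupling
    j = plan.argmax(axis=1)      # for uniform marginals the plan is a permutation
    return x0, x1[j]

def sq_euclidean(x0, x1):
    # Placeholder cost; Curly FM would plug the induced metric in here.
    return ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
```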
November 12, 2025 at 8:09 PM
Great article! But can the preference score go up to 2? You know, because 1 just isn’t aesthetic enough.
October 31, 2025 at 9:37 AM
“Entropic (Gromov) Wasserstein Flow Matching with GENOT” by D. Klein et al. arxiv.org/abs/2310.09254
Transport between two distributions defined on different spaces by training a noise-to-data flow model in the target space, conditioned on the source data and leveraging Gromov–Wasserstein couplings.
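Schematically (my paraphrase, not the authors' code), training looks like plain conditional flow matching in the target space, with the velocity network also fed the coupled source sample:

```python
import torch

def genot_style_loss(v_net, x_src, y_tgt):
    """CFM loss in the target space, conditioned on source samples.

    Assumes x_src and y_tgt have already been paired, e.g. by sampling
    from an entropic (Gromov-)Wasserstein coupling (not computed here).
    """
    z0 = torch.randn_like(y_tgt)      # noise in the target space
    t = torch.rand(y_tgt.shape[0], 1)
    zt = (1 - t) * z0 + t * y_tgt     # linear interpolant
    target_v = y_tgt - z0             # conditional velocity
    pred_v = v_net(zt, t, x_src)      # conditioned on the source sample
    return ((pred_v - target_v) ** 2).mean()
```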
October 30, 2025 at 10:43 PM
Diffusion Transformers with Representation Autoencoders by Boyang Zheng et al. (arxiv.org/abs/2510.116...)

Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that the information compression of such encoders is ill-suited to generative modeling!
October 14, 2025 at 7:08 PM
"How to build a consistency model: Learning flow maps via self-distillation" by @nmboffi.bsky.social et al (arxiv.org/abs/2505.18825)
New method to train flow maps without any pretrained flow matching/diffusion models!
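From memory (check the paper for the exact objectives), the flow map $X_{s,t}$ is pinned down by a boundary condition and a tangent condition,

$$X_{s,s}(x) = x, \qquad \partial_t X_{s,t}(x) = b_t\big(X_{s,t}(x)\big),$$

where the diagonal $\partial_t X_{s,t}(x)\big|_{t=s} = b_s(x)$ can be regressed directly on data with flow matching, and self-distillation enforces consistency away from the diagonal (e.g. via the semigroup property $X_{s,u} = X_{t,u} \circ X_{s,t}$) with the model acting as its own teacher.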
October 10, 2025 at 7:15 AM
"Be Tangential to Manifold: Discovering Riemannian Metric for Diffusion Models" Shinnosuke Saito et al. arxiv.org/abs/2510.05509
High-density regions might not be the most interesting areas to visit, so they define a new Riemannian metric for diffusion models based on the Jacobian of the score.
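I haven't reproduced their exact metric, but the raw ingredient is cheap to get; a sketch with torch.func, where the metric form is my assumption, not the paper's:

```python
import torch
from torch.func import jacrev

def score_jacobian(score_net, x, t):
    """Jacobian d score(x, t) / dx for a single flattened sample x of size d."""
    return jacrev(lambda xi: score_net(xi, t))(x)  # (d, d)

def candidate_metric(score_net, x, t, lam=1.0):
    # One plausible score-based metric: identity plus a Jacobian quadratic
    # term. Illustrative only; the paper's construction may differ.
    J = score_jacobian(score_net, x, t)
    return torch.eye(x.shape[0]) + lam * J.T @ J
```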
October 8, 2025 at 9:57 AM
Grateful for the opportunity to speak at tomorrow’s Learning Machines seminar (RISE+@climateainordics.com) on generative domain adaptation and geospatial foundation model benchmarking for robust Earth observation 🌍

Join on Sept 11 at 15:00 CET! www.ri.se/en/learningm...
September 10, 2025 at 4:32 PM
Late to the party, but I like the fact that you can use geodesic random walks (actually simulating the walks) to derive the SDEs needed for diffusion models on Riemannian manifolds (from arxiv.org/abs/2202.02763).
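A toy version on the unit sphere (my own sketch, not the paper's code): project Gaussian noise onto the tangent space, then follow the geodesic via the exponential map.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: geodesic step from x along tangent v."""
    n = np.linalg.norm(v)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * v / n

def geodesic_random_walk(x0, n_steps=1000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_steps):
        z = rng.standard_normal(3)
        v = z - (z @ x) * x                 # project noise onto the tangent space
        x = sphere_exp(x, np.sqrt(dt) * v)  # Euler-Maruyama-style geodesic step
    return x  # approximates Brownian motion on the sphere at time n_steps * dt
```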
August 30, 2025 at 9:28 AM
I'll be at #GRETSI in Strasbourg next week! Friday morning, I'll present our work on Riemannian flow matching for SAR interferometry (generation and denoising) 🛰️

Also really looking forward to the poster sessions and all the exciting talks on the program!

📄 hal.science/hal-05140421
August 21, 2025 at 2:34 PM
Anyone aware of a cats --> pure evil creatures image translation benchmark? Not even neural networks’ dreams reached this level of nightmare fuel.
August 18, 2025 at 9:51 PM
New episode in this line of work from @giannisdaras.bsky.social et al. on training diffusion models with mostly bad/low-quality/corrupted data (plus a few high-quality samples). This time for proteins!

📄 Ambient diffusion Omni: arxiv.org/pdf/2506.10038
📄 Ambient Proteins: www.biorxiv.org/content/10.1...
July 7, 2025 at 7:43 PM
Added to my reading list: Adjoint Schrödinger Bridge Sampler by Guan-Horng Liu et al. arxiv.org/abs/2506.22565
July 1, 2025 at 7:11 PM
Drop the "conditional", just "flow matching", it's cleaner.
June 24, 2025 at 3:24 PM
I was intrigued by "Mean Flows for One-Step Generative Modeling" and, in particular, by how it handles averaging the marginal velocity field during training. In practice, they don't average it: they replace it with the conditional velocity in the loss function. I wonder how the mismatch impacts generation...
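For reference, the identity the loss is built on, as I read the paper: with the average velocity $u(z_t, r, t) = \frac{1}{t-r}\int_r^t v(z_\tau, \tau)\,d\tau$, differentiating in $t$ gives

$$u(z_t, r, t) = v(z_t, t) - (t - r)\,\frac{d}{dt}\,u(z_t, r, t),$$

and in the training loss the marginal velocity $v(z_t, t)$ on the right is replaced by the conditional one (e.g. $\epsilon - x$ for a linear path), which is exactly the substitution I'm wondering about.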
June 11, 2025 at 7:37 AM
"Energy Matching: Unifying Flow Matching and
Energy-Based Models for Generative Modeling" by Michal Balcerak et al. arxiv.org/abs/2504.10612
I'm not sure EBMs will beat flow matching/diffusion models, but this article is very refreshing.
May 21, 2025 at 12:37 PM
"Probability Density Geodesics in Image Diffusion Latent Space" by Qingtao Yu et al. https://arxiv.org/abs/2504.06675
They propose an algorithm to traverse high-density regions when interpolating between two points in a diffusion model latent space.
April 29, 2025 at 5:02 PM
TerraMind (previous post), the new geospatial generative model from IBM and ESA, was trained on the TerraMesh dataset. Great to see large-scale datasets with aligned modalities! Now waiting for the release :)

📄 https://arxiv.org/abs/2504.11172
April 28, 2025 at 4:02 PM
IBM and @esa.int introduce TerraMind, a new geospatial generative model. With 'Thinking in Modalities', it generates missing modalities during fine-tuning.
I'm curious about its modality translation capabilities 👀

📄 https://arxiv.org/abs/2504.11171
🐍 https://huggingface.co/ibm-esa-geospatial
April 28, 2025 at 3:30 PM
I really liked this approach by @matthieuterris.bsky.social et al. They propose learning a single lightweight model for multiple inverse problems by conditioning it on the forward operator A. Thanks to self-supervised fine-tuning, it can tackle unseen inverse problems.

📰 https://arxiv.org/abs/2503.08915
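Roughly (my own toy sketch, not their architecture): the operator enters through the back-projected measurement and through operator-dependent features of the current estimate.

```python
import torch
import torch.nn as nn

class OperatorConditionedNet(nn.Module):
    """Toy reconstruction net conditioned on a forward operator A.

    A and A_adj are callables; concatenating A^T y and A^T A x is an
    illustrative conditioning choice, not the paper's exact design.
    """
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x, y, A, A_adj):
        feats = torch.cat([x, A_adj(y), A_adj(A(x))], dim=1)
        return x + self.net(feats)  # residual refinement of the estimate
```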
April 26, 2025 at 4:02 PM
Very cool article from Panagiotis Theodoropoulos et al.: https://arxiv.org/abs/2410.14055
Feedback Schrödinger Bridge Matching introduces a new method to improve transfer between two data distributions using only a small number of paired samples!
April 25, 2025 at 5:03 PM
It reminds me of this result (from arxiv.org/abs/2104.11222): you can tune the JPEG compression level of your training set to get the best FID on church images.
March 19, 2025 at 6:28 PM
"Inductive Moment Matching" by Linqi Zhou et al. I like the use of multiple particles to apply a loss similar to consistency models, but on distributions. Training is stable and gives high-quality generated images in very few sampling steps

📄 arxiv.org/abs/2503.07565
🌍 lumalabs.ai/news/inducti...
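To make "a loss on distributions" concrete: a toy kernel MMD between two particle sets, a stand-in for (not a reproduction of) the paper's moment-matching objective.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased squared MMD between particle sets x (n, d) and y (m, d)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```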
March 13, 2025 at 3:05 PM
Nice research work from @nicolabourbaki.bsky.social et al. It enhances latent generative models by regularizing the VAE's latent space with an equivariance loss. The fine-tuning process is straightforward and demonstrates improvements in just 5 epochs!

📄 arxiv.org/abs/2502.09509
🐍 github.com/zelaki/eqvae
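The regularizer, as I understand it (schematic; I'm using 90° rotations as the transform, see their repo for the real thing): decoding a transformed latent should match the transformed image.

```python
import torch

def equivariance_loss(encoder, decoder, x):
    """Penalize D(g . E(x)) != g . x for a random 90-degree rotation g."""
    k = int(torch.randint(1, 4, (1,)))
    z = encoder(x)
    x_rot_hat = decoder(torch.rot90(z, k, dims=(-2, -1)))  # transform the latent
    x_rot = torch.rot90(x, k, dims=(-2, -1))               # transform the image
    return ((x_rot_hat - x_rot) ** 2).mean()
```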
February 25, 2025 at 7:57 PM
I've been using this time sampling trick to stabilize diffusion model training for years, yet I don't recall reading a comparison with naive uniform sampling, and I rarely see it in flow matching codebases. Do you use it?
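(The trick isn't spelled out here; purely as an illustration, one common non-uniform scheme is logit-normal time sampling, which concentrates training on intermediate noise levels.)

```python
import torch

def sample_t(batch_size, mean=0.0, std=1.0):
    # Logit-normal time sampling: an illustrative assumption, not necessarily
    # the trick referenced above. t = sigmoid(n) with n ~ N(mean, std^2)
    # puts more mass on intermediate t than uniform sampling does.
    return torch.sigmoid(mean + std * torch.randn(batch_size))
```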
February 11, 2025 at 5:32 PM
The only definition of straightness I'm aware of for such applications comes from the rectified flow article (arxiv.org/pdf/2209.03003), but there might be others.
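For reference, the straightness measure from that paper (up to notation):

$$S(Z) = \int_0^1 \mathbb{E}\left[\big\|(Z_1 - Z_0) - \dot{Z}_t\big\|^2\right] dt,$$

which vanishes exactly when every trajectory is a straight line traversed at constant speed.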
February 7, 2025 at 6:22 PM