lebellig
@lebellig.bsky.social
Ph.D. student working on generative models and domain adaptation for Earth observation 🛰
Previously intern @SonyCSL, @Ircam, @Inria

🌎 Personal website: https://lebellig.github.io/
Pinned
I created 3 introductory notebooks on Flow Matching models to help get started with this exciting topic! ✨

1. Annotated Flow Matching paper: github.com/gle-bellier/...
2. Discrete Flow Matching: github.com/gle-bellier/...
3. Minimal FM in Jax: github.com/gle-bellier/...
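For a taste of what the notebooks cover, here is a minimal NumPy sketch of one conditional flow matching training batch (linear interpolation path; the least-squares fit is just a stand-in for a velocity network, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_batch(x1, rng):
    """Build one conditional flow matching training batch.

    x1: data samples of shape (n, d).
    Returns model inputs (x_t concatenated with t) and target velocities.
    """
    n, d = x1.shape
    x0 = rng.standard_normal((n, d))   # noise samples
    t = rng.uniform(size=(n, 1))       # random times in [0, 1]
    xt = (1 - t) * x0 + t * x1         # linear interpolation path
    v_target = x1 - x0                 # conditional target velocity
    return np.concatenate([xt, t], axis=1), v_target

# Toy data: 2-D Gaussian shifted to mean (3, 3).
x1 = rng.standard_normal((256, 2)) + 3.0
inputs, targets = fm_batch(x1, rng)

# Stand-in for a velocity network: a linear least-squares fit.
W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)
loss = np.mean((inputs @ W - targets) ** 2)
```

The regression target `x1 - x0` is the time derivative of the interpolation path, which is what the flow matching loss asks the network to predict.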
Reposted by lebellig
I'm on my way to @caltech.edu for an AI + Science conference. Looking forward to seeing some friends and meeting new ones. There will be a livestream.
aiscienceconference.caltech.edu
November 9, 2025 at 8:41 PM
Reposted by lebellig
“Entropic (Gromov) Wasserstein Flow Matching with GENOT” by D. Klein et al. arxiv.org/abs/2310.09254
Transport between two distributions defined on different spaces, by training a noise-to-data flow model in the target space, conditioned on the source data and leveraging Gromov–Wasserstein couplings
October 30, 2025 at 10:43 PM
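A rough, hypothetical sketch of what such a training batch could look like (the random pairing below is only a stand-in for the entropic Gromov–Wasserstein coupling GENOT actually uses; all names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Source and target live in different spaces (different dimensions).
n, d_src, d_tgt = 128, 3, 2
y = rng.standard_normal((n, d_src))            # source-space samples
x = rng.standard_normal((n, d_tgt)) + 2.0      # target-space samples

# Stand-in coupling: pair each source point with a random target point.
# GENOT would sample pairs from an (entropic Gromov-)Wasserstein coupling.
perm = rng.permutation(n)
x_paired = x[perm]

t = rng.uniform(size=(n, 1))
z = rng.standard_normal((n, d_tgt))            # noise in the target space
xt = (1 - t) * z + t * x_paired                # noise-to-data path
v_target = x_paired - z                        # conditional velocity target

# The conditional velocity network would see (x_t, t, y):
net_input = np.concatenate([xt, t, y], axis=1)
```

The key structural point is that the flow itself lives entirely in the target space; the source sample `y` only enters as conditioning.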
Reposted by lebellig
💥 DeepInverse is now part of the official PyTorch Landscape💥

We are excited to join an ecosystem of great open-source AI libraries, including @hf.co diffusers, MONAI, einops, etc.

pytorch.org/blog/deepinv...
November 5, 2025 at 5:31 PM
Reposted by lebellig
🌀🌀🌀New paper on the generation phases of Flow Matching arxiv.org/abs/2510.24830
Are FM & diffusion models nothing more than denoisers at every noise level?
In theory yes, *if trained optimally*. But in practice, do all noise levels matter equally?

with @annegnx.bsky.social, S Martin & R Gribonval
November 5, 2025 at 9:03 AM
Reposted by lebellig
Want to work on generative models and Earth Observation? 🌍

I'm looking for:
🧑‍💻 an intern on generative models for change detection
🧑‍🔬 a PhD student on neurosymbolic generative models for geospatial data

Both positions start in early 2026.

Details are below, feel free to email me!
November 4, 2025 at 10:08 AM
Reposted by lebellig
We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉
October 31, 2025 at 11:24 AM
Reposted by lebellig
New paper, with @rkhashmani.me @marielpettee.bsky.social @garrettmerz.bsky.social Hellen Qu. We introduce a framework for generating realistic, highly multimodal datasets with explicitly calculable mutual information. This is helpful for studying self-supervised learning
arxiv.org/abs/2510.21686
October 28, 2025 at 5:23 PM
"The Principles of Diffusion Models" by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon. arxiv.org/abs/2510.21890
It might not be the easiest intro to diffusion models, but this monograph is an amazing deep dive into the math behind them and all their nuances
October 28, 2025 at 8:35 AM
Reposted by lebellig
I'm excited to share jaxion, a differentiable Python/JAX library for fuzzy dark matter (axions) + gas + stars, scalable on multiple GPUs

⭐️repo: github.com/JaxionProjec...
📚docs: jaxion.readthedocs.io

Feedback + collaborations welcome!
October 27, 2025 at 6:10 PM
Reposted by lebellig
Fisher meets Feynman! 🤝

We use score matching and a trick from quantum field theory to make a product-of-experts family both expressive and efficient for variational inference.

To appear as a spotlight @ NeurIPS 2025.
#NeurIPS2025 (link below)
October 27, 2025 at 12:51 PM
That, and please share/repost the articles you’re interested in (especially if you’re not the author). If I’m following you, I want to see what you’re reading. We don’t need a fancy algorithm if we can discover great research through the curated posts of the people we follow
If you’re going to post a paper on Twitter, why not do it a few days after the Bluesky post? It does no harm to your career, but it makes clear which one is the slower information source
October 27, 2025 at 1:55 PM
Reposted by lebellig
Strong afternoon session: Ségolène Martin on how to go from flow matching to denoisers (and hopefully come back?) and Claire Boyer on how learning rate and working in latent spaces affect diffusion models
October 24, 2025 at 3:04 PM
Reposted by lebellig
Kickstarting our workshop on Flow matching and Diffusion with a talk by Eric Vanden Eijnden on how to optimize learning and sampling in Stochastic Interpolants!

Broadcast available at gdr-iasis.cnrs.fr/reunions/mod...
October 24, 2025 at 8:30 AM
Reposted by lebellig
Excited to share SamudrACE, the first 3D AI ocean–atm–sea-ice #climate emulator! 🚀 Simulates 800 years in 1 day on 1 GPU, ~100× faster than traditional models, straight from your laptop 👩‍💻 Collaboration with @ai2.bsky.social and GFDL, advancing #AIforScience with #DeepLearning.
tinyurl.com/Samudrace
October 15, 2025 at 4:11 PM
I'm already waiting for the next generation of "diffusion transformer features are well-suited for discriminative tasks" papers, but with DiTs trained on these representation autoencoders, and the loop will be closed
Diffusion Transformers with Representation Autoencoders by Boyang Zheng, et al (arxiv.org/abs/2510.116...)

Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that semantic encoders' information compression makes them unsuitable for generative modeling!
October 15, 2025 at 11:55 AM
"How to build a consistency model: Learning flow maps via self-distillation" by @nmboffi.bsky.social et al (arxiv.org/abs/2505.18825)
New method to train flow maps without any pretrained flow matching/diffusion models!
October 10, 2025 at 7:15 AM
Reposted by lebellig
While working on semidiscrete flow matching this summer (➡️ arxiv.org/abs/2509.25519), I kept looking for a video illustrating that the velocity field solving the Benamou-Brenier OT problem is NOT constant w.r.t. time ⏳... so I did it myself, take a look! ott-jax.readthedocs.io/tutorials/th...
October 9, 2025 at 8:09 PM
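The same point can be checked in closed form in 1-D: between two Gaussians the Monge map is affine, and evaluating the induced Eulerian velocity field at a fixed location for two different times gives different values (illustrative sketch, not the tutorial's code):

```python
# 1-D OT between N(0, 1) and N(3, 4): the Monge map is affine,
# T(x0) = m1 + (s1 / s0) * x0 with m1 = 3, s1 / s0 = 2.
def T(x0):
    return 3.0 + 2.0 * x0

def velocity(x, t):
    """Eulerian velocity of the Benamou-Brenier interpolation at (x, t).

    Particles follow X_t(x0) = (1 - t) * x0 + t * T(x0) = (1 + t) * x0 + 3 * t,
    so we invert X_t to find which particle sits at x, then return its
    (constant along its trajectory) Lagrangian velocity T(x0) - x0.
    """
    x0 = (x - 3.0 * t) / (1.0 + t)
    return T(x0) - x0

v_early = velocity(1.0, 0.2)  # velocity at x = 1, early in time
v_late = velocity(1.0, 0.8)   # velocity at the SAME x, later in time
```

Each particle moves at constant speed along a straight line, yet the Eulerian field v(x, t) changes in time because different particles pass through the same location x.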
Reposted by lebellig
🚨Updated: "How far can we go with ImageNet for Text-to-Image generation?"

TL;DR: train a text2image model from scratch on ImageNet only and beat SDXL.

Paper, code, data available! Reproducible science FTW!
🧵👇

📜 arxiv.org/abs/2502.21318
💻 github.com/lucasdegeorg...
💽 huggingface.co/arijitghosh/...
October 8, 2025 at 8:43 PM
Reposted by lebellig
Very excited to share our preprint: Self-Speculative Masked Diffusions

We speed up sampling of masked diffusion models by ~2x by using speculative sampling and a hybrid non-causal / causal transformer

arxiv.org/abs/2510.03929

w/ @vdebortoli.bsky.social, Jiaxin Shi, @arnauddoucet.bsky.social
October 7, 2025 at 10:09 PM
Reposted by lebellig
🚀 After more than a year of work — and many great discussions with curious minds & domain experts — we’re excited to announce the public release of 𝐀𝐩𝐩𝐚, our latent diffusion model for global data assimilation!

Check the repo and the complete wiki!
github.com/montefiore-s...
October 8, 2025 at 10:33 AM
"Be Tangential to Manifold: Discovering Riemannian Metric for Diffusion Models" Shinnosuke Saito et al. arxiv.org/abs/2510.05509
High-density regions might not be the most interesting areas to visit, so they define a new Riemannian metric for diffusion models that relies on the Jacobian of the score
October 8, 2025 at 9:57 AM
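For intuition only, here is one generic way a score Jacobian can be turned into a metric, G(x) = I + λ JᵀJ — an illustrative construction and hypothetical names, not necessarily the definition used in the paper:

```python
import numpy as np

# Toy density: a zero-mean 2-D Gaussian with known covariance.
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def score(x):
    """Score of N(0, Sigma): grad log p(x) = -Sigma^{-1} x."""
    return -Sigma_inv @ x

def score_jacobian(x, eps=1e-5):
    """Central finite-difference Jacobian of the score at x."""
    d = x.size
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (score(x + e) - score(x - e)) / (2 * eps)
    return J

def metric(x, lam=1.0):
    """Illustrative metric G(x) = I + lam * J^T J (symmetric positive definite)."""
    J = score_jacobian(x)
    return np.eye(x.size) + lam * J.T @ J

G = metric(np.array([1.0, -0.5]))
```

Because JᵀJ is positive semidefinite, adding the identity guarantees a valid (positive definite) metric, which penalizes directions in which the score changes quickly.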
Reposted by lebellig
#Distinction 🏆 | Charlotte Pelletier, recipient of an #IUF chair, develops artificial intelligence methods applied to satellite image time series.
➡️ www.ins2i.cnrs.fr/fr/cnrsinfo/...
🤝 @irisa-lab.bsky.social @cnrs-bretagneloire.bsky.social
October 8, 2025 at 9:30 AM
Reposting because part of me wants to see EBMs make a comeback and hopes flow-based training can help them scale.
"Energy Matching: Unifying Flow Matching and Energy-Based Models for Generative Modeling" by Michal Balcerak et al. arxiv.org/abs/2504.10612
I'm not sure EBMs will beat flow matching/diffusion models, but this article is very refreshing.
October 7, 2025 at 6:30 PM