Arnaud Doucet
@arnauddoucet.bsky.social
Senior Staff Research Scientist @Google DeepMind, previously Stats Prof @Oxford Uni - interested in Computational Statistics, Generative Modeling, Monte Carlo methods, Optimal Transport.
Pinned
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
BreimanLectureNeurIPS2024_Doucet.pdf
drive.google.com
Really nice.
A little self-promotion: Hai-Dang Dau (NUS) and I recently released this preprint, of which I'm more than a little proud.
arxiv.org/abs/2510.07559

The main problem we solve is constructing importance weights for Markov chain Monte Carlo, which we achieve via a method we call harmonization by coupling.
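(The paper's harmonization-by-coupling construction is its actual contribution; purely as a reference point, here is the classical self-normalized importance sampling correction that such weights enable, on a toy example. The target pi and surrogate pi_tilde below are illustrative choices of mine, not from the paper.)

import numpy as np

rng = np.random.default_rng(0)

# Illustrative densities (not from the paper): the chain targets a
# surrogate pi_tilde; importance weights correct estimates towards pi.
def log_pi(x):        # target N(1, 1), up to an additive constant
    return -0.5 * (x - 1.0) ** 2

def log_pi_tilde(x):  # surrogate N(0, 1.5^2), up to an additive constant
    return -0.5 * x ** 2 / 1.5 ** 2

# Random-walk Metropolis chain targeting the surrogate pi_tilde.
n, x = 50_000, 0.0
samples = np.empty(n)
for t in range(n):
    prop = x + rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < log_pi_tilde(prop) - log_pi_tilde(x):
        x = prop
    samples[t] = x

# Self-normalized importance weights turn the chain output into
# consistent estimates under pi: E_pi[f] ~ sum_t w_t f(x_t) / sum_t w_t.
log_w = log_pi(samples) - log_pi_tilde(samples)
w = np.exp(log_w - log_w.max())
print((w * samples).sum() / w.sum())  # close to 1.0, the mean under pi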
November 5, 2025 at 9:44 AM
Reposted by Arnaud Doucet
Very excited to share our preprint: Self-Speculative Masked Diffusions

We speed up sampling of masked diffusion models by ~2x by using speculative sampling and a hybrid non-causal / causal transformer

arxiv.org/abs/2510.03929

w/ @vdebortoli.bsky.social, Jiaxin Shi, @arnauddoucet.bsky.social
October 7, 2025 at 10:09 PM
Reposted by Arnaud Doucet
(1/n)🚨Train a model that solves DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
June 10, 2025 at 7:49 PM
Reposted by Arnaud Doucet
Shunichi Amari has been awarded the 40th (2025) Kyoto Prize in recognition of his pioneering research in the fields of artificial neural networks, machine learning, and information geometry

www.riken.jp/pr/news/2025...
Honorary Researcher Shunichi Amari receives the Kyoto Prize
Honorary Researcher Shunichi Amari (primary affiliation: Specially Appointed Professor, Advanced Comprehensive Research Organization, Teikyo University) has been awarded the 40th (2025) Kyoto Prize (Advanced Technology category; field: Information Science) in recognition of his pioneering research in artificial neural networks, machine learning, and information geometry.
www.riken.jp
June 20, 2025 at 1:26 PM
Reposted by Arnaud Doucet
🌟Applications open- LOGML 2025🌟

👥Mentor-led projects, expert talks, tutorials, socials, and a networking night
✍️Application form: logml.ai
🔬Projects: www.logml.ai/projects.html
📅Apply by 6th April 2025
✉️Questions? logml.committee@gmail.com

#MachineLearning #SummerSchool #LOGML #Geometry
LOGML 2025
London Geometry and Machine Learning Summer School, July 7-11 2025
logml.ai
March 11, 2025 at 3:25 PM
Reposted by Arnaud Doucet
SuperDiff goes super big!
- Spotlight at #ICLR2025!🥳
- Stable Diffusion XL pipeline on HuggingFace huggingface.co/superdiff/su... made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt guessing game in the thread👇
March 6, 2025 at 9:06 PM
Reposted by Arnaud Doucet
Why academia is sleepwalking into self-destruction. My editorial in @brain1878.bsky.social. If you agree with the sentiments, please repost. It's important, for all our sakes, to stop the madness.
academic.oup.com/brain/articl...
March 6, 2025 at 7:16 PM
Reposted by Arnaud Doucet
Excited to see our paper “Computing Nonequilibrium Responses with Score-Shifted Stochastic Differential Equations” in Physical Review Letters this morning as an Editor’s Suggestion! We use ideas from generative modeling to unravel a rather technical problem. 🧵 journals.aps.org/prl/abstract...
March 4, 2025 at 6:45 PM
Great intro to PAC-Bayes bounds. Highly recommended!
I already advertised this document when I posted it on arXiv, and again when it was published.

This week, with the agreement of the publisher, I uploaded the published version on arXiv.

Fewer typos, more references, and additional sections, including PAC-Bayes Bernstein.

arxiv.org/abs/2110.11216
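(For readers new to the area, the flavor of the bounds treated there is captured by the classical McAllester-style inequality; the statement below is a standard textbook result, not a quote from the document.)

For a prior $P$ on the hypothesis space and any $\delta \in (0,1)$, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$,
\[
\mathbb{E}_{\theta \sim Q}[R(\theta)] \;\le\; \mathbb{E}_{\theta \sim Q}[\widehat{R}_n(\theta)] + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln(2\sqrt{n}/\delta)}{2n}},
\]
where $R$ and $\widehat{R}_n$ are the population and empirical risks of a loss taking values in $[0,1]$.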
March 5, 2025 at 9:55 AM
Reposted by Arnaud Doucet
Better diffusions with scoring rules!

Fewer, larger denoising steps using distributional losses: the model learns the posterior distribution of clean samples given their noisy versions.

arxiv.org/pdf/2502.02483

@vdebortoli.bsky.social Galashov Guntupalli Zhou @sirbayes.bsky.social @arnauddoucet.bsky.social
arxiv.org
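(A minimal sketch of one such distributional loss, the energy score, for a denoiser that returns several candidate clean samples given a noisy input; the interface and shapes here are hypothetical, not the paper's actual objective.)

import numpy as np

def energy_score_loss(x0_samples, x0):
    """Negatively oriented energy score (lower is better).
    x0_samples: (m, d) candidate clean samples drawn from the model's
    posterior over x0 given the noisy xt; x0: (d,) true clean sample.
    ES = E||X - x0|| - 0.5 * E||X - X'|| is a proper scoring rule, so
    its expectation is minimized when the m samples are drawn from the
    true posterior, which is what licenses fewer, larger denoising steps."""
    m = x0_samples.shape[0]
    fit = np.linalg.norm(x0_samples - x0, axis=1).mean()
    pair = np.linalg.norm(x0_samples[:, None, :] - x0_samples[None, :, :], axis=-1)
    spread = pair.sum() / (m * (m - 1))  # mean over distinct ordered pairs
    return fit - 0.5 * spread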
February 5, 2025 at 2:23 PM
A standard ML approach for parameter estimation in latent variable models is to maximize the expectation of the logarithm of an importance sampling estimate of the intractable likelihood. We provide consistency/efficiency results for the resulting estimate: arxiv.org/abs/2501.08477
On the Asymptotics of Importance Weighted Variational Inference
For complex latent variable models, the likelihood function is not available in closed form. In this context, a popular method to perform parameter estimation is Importance Weighted Variational Infere...
arxiv.org
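(A minimal sketch of that objective on a toy Gaussian model; the model, proposal, and numbers below are illustrative choices of mine, not from the paper.)

import numpy as np

rng = np.random.default_rng(0)

def iw_elbo(x, mu, K=64):
    """One Monte Carlo draw of the importance weighted ELBO
    log((1/K) * sum_k p(x, z_k; mu) / q(z_k | x)), whose expectation is
    maximized over mu. Toy model: z ~ N(mu, 1), x | z ~ N(z, 1);
    proposal q(z | x) = N(x / 2, 1)."""
    z = rng.normal(x / 2.0, 1.0, size=K)               # z_k ~ q(. | x)
    log_p = -0.5 * (z - mu) ** 2 - 0.5 * (x - z) ** 2  # log p(x, z; mu), up to a constant
    log_q = -0.5 * (z - x / 2.0) ** 2                  # log q(z | x), up to a constant
    log_w = log_p - log_q
    m = log_w.max()                                    # log-sum-exp for stability
    return m + np.log(np.exp(log_w - m).mean())

# As K grows, the objective approaches log p(x; mu) up to a constant.
print(np.mean([iw_elbo(x=1.2, mu=1.0, K=512) for _ in range(200)]))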
January 16, 2025 at 5:43 PM
Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With @vdebortoli.bsky.social , A. Galashov & @arthurgretton.bsky.social , we extend this approach to (continuous-space) diffusion models: arxiv.org/abs/2501.05370
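(The continuous-space extension is the paper's contribution; for intuition, here is the standard single-token accept/reject rule from speculative sampling for LLMs, with placeholder distributions.)

import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p, q):
    """One token of standard speculative sampling: draft from the cheap
    model q, accept with probability min(1, p/q), otherwise resample
    from the residual max(p - q, 0), renormalized. The output token is
    exactly distributed according to the target p.
    p, q: target / draft probability vectors over the vocabulary."""
    x = rng.choice(len(q), p=q)                # draft token from q
    if rng.uniform() < min(1.0, p[x] / q[x]):  # verify against the target p
        return x
    residual = np.maximum(p - q, 0.0)
    return rng.choice(len(p), p=residual / residual.sum())

# Placeholder distributions over a 4-token vocabulary.
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])
print([speculative_step(p, q) for _ in range(10)])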
January 10, 2025 at 4:30 PM
Reposted by Arnaud Doucet
🔊 Super excited to announce the first ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!

🔗 website: sites.google.com/view/fpiwork...

🔥 Call for papers: sites.google.com/view/fpiwork...

more details in thread below👇 🧵
December 18, 2024 at 7:09 PM
Reposted by Arnaud Doucet
Schrödinger Bridge Flow for Unpaired Data Translation (by @vdebortoli.bsky.social et al.)

It will take me some time to digest this article fully, but it's important to follow the authors' advice and read the appendices, as the examples are helpful and well-illustrated.

📄 arxiv.org/abs/2409.09347
December 17, 2024 at 4:53 PM
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
BreimanLectureNeurIPS2024_Doucet.pdf
drive.google.com
December 15, 2024 at 6:40 PM
Reposted by Arnaud Doucet
One #postdoc position is still available at the National University of Singapore (NUS) to work on sampling, high-dimensional data assimilation, and diffusion/flow models. Applications are open until the end of January. Details:

alexxthiery.github.io/jobs/2024_di...
December 15, 2024 at 2:46 PM
Reposted by Arnaud Doucet
I have updated my course notes on Optimal Transport with a new Chapter 9 on Wasserstein flows. It includes 3 illustrative applications: training a 2-layer MLP, deep transformers, and flow-matching generative models. You can access it here: mathematical-tours.github.io/book-sources...
December 4, 2024 at 8:11 AM
Reposted by Arnaud Doucet
exciting new work by my truly brilliant postdoc Eugenio Clerico on the optimality of coin-betting strategies for mean estimation!

for fans of: mean estimation, online learning with log loss, optimal portfolios, hypothesis testing with E-values, etc.

dig in:
arxiv.org/abs/2412.02640
On the optimality of coin-betting for mean estimation
Confidence sequences are sequences of confidence sets that adapt to incoming data while maintaining validity. Recent advances have introduced an algorithmic formulation for constructing some of the ti...
arxiv.org
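(The paper's contribution is the optimality analysis; the betting construction itself is standard and easy to sketch: to test a candidate mean m, bet a fraction of current wealth on each observation and reject m once wealth exceeds 1/alpha. The fixed bet and placeholder data below are simplifications of mine; coin-betting methods choose the bet adaptively.)

import numpy as np

def betting_wealth(xs, m, lam=0.5):
    """Wealth process W_t = prod_{s <= t} (1 + lam * (x_s - m)) for
    [0, 1]-valued data and a fixed bet lam. Under the null E[x] = m it
    is a nonnegative martingale, so Ville's inequality gives
    P(sup_t W_t >= 1/alpha) <= alpha; a confidence sequence for the
    mean keeps every m whose wealth has stayed below 1/alpha."""
    return np.cumprod(1.0 + lam * (np.asarray(xs) - m))

rng = np.random.default_rng(0)
xs = rng.uniform(0.2, 0.8, size=2_000)  # placeholder data, true mean 0.5
alpha = 0.05
for m in (0.3, 0.5):
    rejected = betting_wealth(xs, m).max() >= 1.0 / alpha
    print(f"m={m}: {'rejected' if rejected else 'kept in confidence sequence'}")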
December 4, 2024 at 8:13 AM