@lenz3000.bsky.social
The result?
Replicating the experiments from Equivariant Flow Matching (arXiv:2306.15030) with the same limited compute, we get:
✓ up to 3× higher ESS on LJ55 (ESS sketched below)
✓ 2× higher ESS on alanine dipeptide (XTB)
✓ improved performance on most tasks
May 19, 2025 at 9:23 AM
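(Context for the numbers above: ESS here is, presumably, the standard importance-sampling effective sample size used to benchmark Boltzmann generators, computed from self-normalized importance weights between the model density and the target Boltzmann density. A minimal sketch of the relative version, assuming log-weights log p̃(x) − log q(x) are available; the function name relative_ess and the normalization to [0, 1] are my choices, not from the thread.)

```python
import numpy as np

def relative_ess(log_weights: np.ndarray) -> float:
    """Kish effective sample size of importance weights, normalized to [0, 1].

    log_weights[i] = log p_target(x_i) - log q_model(x_i), up to an additive
    constant (the unknown normalizer cancels in the ratio below).
    """
    lw = log_weights - log_weights.max()   # stabilize before exponentiating
    w = np.exp(lw)
    # ESS = (sum w)^2 / sum w^2; divide by N for the relative version.
    return float(w.sum() ** 2 / (w ** 2).sum() / len(w))
```

A relative ESS of 1 means the importance weights are flat, i.e. the model matches the target; the multipliers above presumably compare this quantity for the fine-tuned model against the flow-matching baseline.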
Path Gradients were the topic of my dissertation — and they turn out to be a great fine-tuning step for Boltzmann Generators.
Just apply them after Flow Matching.
Fine-tuning reliably improves performance without big changes to the overall learned distribution.
For example, on alanine dipeptide: [results figure attached to the original post]
May 19, 2025 at 9:17 AM
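(To make the fine-tuning step concrete: one common way to realize path gradients for a flow with a tractable log-density is a sticking-the-landing-style detachment, where the model's density term is evaluated with frozen parameters so that gradients reach the weights only through the sample path. A minimal PyTorch sketch under that assumption; the flow interface sample/log_prob and the energy callable are placeholders, not the paper's code.)

```python
import copy
import torch

def path_gradient_step(flow, energy, optimizer, n_samples=256):
    """One reverse-KL fine-tuning step using path gradients.

    Assumed interface (placeholders, not the paper's code):
      flow.sample(n)   -> differentiable (reparameterized) samples x
      flow.log_prob(x) -> model log-density log q(x)
      energy(x)        -> target potential U(x), i.e. p(x) proportional to exp(-U(x))
    """
    # Freeze a copy of the flow for the density term: with its parameters
    # detached, the gradient reaches theta only through the sample path x.
    frozen = copy.deepcopy(flow)
    for p in frozen.parameters():
        p.requires_grad_(False)

    x = flow.sample(n_samples)
    # Reverse KL up to the unknown log-normalizer of the target:
    # E_q[ log q(x) + U(x) ].
    loss = (frozen.log_prob(x) + energy(x)).mean()

    optimizer.zero_grad()
    loss.backward()   # backprop reuses the forces -dU/dx you already compute
    optimizer.step()
    return loss.item()
```

In practice one would keep a single persistent frozen copy and refresh it with load_state_dict each step rather than deep-copying; the point is that no new samples or target evaluations are needed beyond what the loss itself requires.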
Ever felt like Boltzmann Generators trained with Flow Matching were doing fine, just not good enough?
We slapped Path Gradients on top — and things got better.
No extra samples, no extra compute, no changes to the model. Just gradients you already have access to.
arxiv.org/abs/2505.10139
May 19, 2025 at 9:14 AM
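(For completeness, the pretraining stage the thread refers to: a minimal sketch of a plain conditional flow matching loss with the linear interpolation path. The equivariant variants in arXiv:2306.15030 add symmetry constraints on top of this; the velocity-field signature v_theta(x, t) and flat (batch, dim) coordinates are assumptions.)

```python
import torch

def flow_matching_loss(v_theta, x1):
    """Conditional flow matching with the linear (rectified) path.

    v_theta(x, t): learned velocity field; x1: batch of target samples
    (e.g. equilibrium configurations), flattened to shape (batch, dim).
    """
    x0 = torch.randn_like(x1)            # base (Gaussian) samples
    t = torch.rand(x1.shape[0], 1)       # one time per sample, broadcast over dims
    xt = (1 - t) * x0 + t * x1           # point on the straight-line path
    target = x1 - x0                     # conditional velocity along that path
    return ((v_theta(xt, t) - target) ** 2).mean()
```

Sampling then means integrating dx/dt = v_theta(x, t) from the Gaussian base, and the path-gradient step sketched above fine-tunes the resulting density toward exp(-U).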