Replicating the experiments from Equivariant Flow Matching (2306.15030), using the same limited resources, we get:
✓ up to 3× higher ESS on LJ55
✓ 2× higher ESS on Alanine Dipeptide (XTB)
✓ Improved performance on most tasks
Just apply Path Gradients after Flow Matching.
Fine-tuning reliably improves performance without big changes to the overall learned distribution.
For example, on alanine dipeptide:
We slapped Path Gradients on top — and things got better.
No extra samples, no extra compute, no changes to the model. Just gradients you already have access to.
arxiv.org/abs/2505.10139
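To make "gradients you already have access to" concrete, here is a minimal sketch of the path-gradient (sticking-the-landing) idea on a toy one-dimensional model, not the paper's code: for a reparameterised sample x = f_θ(ε), the estimator keeps only the derivative that flows through the sample path and drops the direct score term ∂/∂θ log q_θ(x)|_x, whose expectation is zero. The model, target, and function names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's setup): variational model
# x = mu + sigma * eps with eps ~ N(0, 1), target p = N(0, 1).
def path_gradient(mu, sigma, eps):
    """Path gradient of KL(q || p) w.r.t. (mu, sigma).

    Only the derivative through the sample path x = f(eps) is kept;
    the direct score term, which is zero in expectation, is dropped,
    which is what removes most of the gradient noise.
    """
    x = mu + sigma * eps                    # reparameterised samples
    score_q = -(x - mu) / sigma**2          # d/dx log q(x)
    score_p = -x                            # d/dx log p(x), standard normal
    dx_dmu, dx_dsigma = 1.0, eps            # path derivatives of x
    g_mu = np.mean((score_q - score_p) * dx_dmu)
    g_sigma = np.mean((score_q - score_p) * dx_dsigma)
    return g_mu, g_sigma

eps = rng.standard_normal(10_000)
# At the optimum (q == p) the two scores cancel sample-by-sample,
# so the path-gradient estimator has zero variance there.
g_mu, g_sigma = path_gradient(mu=0.0, sigma=1.0, eps=eps)
print(g_mu, g_sigma)  # → 0.0 0.0 (exact cancellation, no Monte Carlo noise)
```

The standard reverse-KL estimator keeps the score term and is noisy even at the optimum; the path gradient vanishes exactly there, which is one way to see why it can sharpen a model that Flow Matching has already trained close to the target.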