I'll make an update once we have the updated version
No redesign. No retraining. Just better gradients.
arxiv.org/abs/2505.10139
By @leonklein.bsky.social & me.
Replicating the experiments from Equivariant Flow Matching (2306.15030), using the same limited resources, we get:
✓ up to 3× higher ESS on LJ55
✓ 2× higher ESS on Alanine Dipeptide (XTB)
✓ Improved performance on most tasks
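ESS here is the usual effective sample size of self-normalized importance weights between the model and the target Boltzmann distribution. A minimal sketch of how it is typically computed from log-densities (our illustration, not the paper's code; the log-density arrays are assumed given):

```python
import numpy as np

def effective_sample_size(log_p_target: np.ndarray, log_q_model: np.ndarray) -> float:
    """Normalized ESS in (0, 1] from self-normalized importance weights."""
    log_w = log_p_target - log_q_model   # log importance weights log(p/q) per sample
    log_w = log_w - log_w.max()          # shift for numerical stability (cancels in the ratio)
    w = np.exp(log_w)
    return float(w.sum() ** 2 / (len(w) * (w ** 2).sum()))
```

Higher ESS means fewer samples are wasted when reweighting model samples to the target.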
Just apply the better gradients after Flow Matching.
Fine-tuning reliably improves performance without big changes to the overall learned distribution.
For example, on alanine dipeptide:
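For context: the models being fine-tuned are trained with (equivariant) Flow Matching, which builds on the plain conditional Flow Matching objective. A minimal sketch of that base objective, with velocity_net and the x0/x1 pairing as placeholders rather than the paper's setup:

```python
import torch

def flow_matching_loss(velocity_net, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    """Conditional Flow Matching: regress the network onto the straight-line velocity x1 - x0."""
    t = torch.rand(x0.shape[0], *([1] * (x0.dim() - 1)))  # one time per sample, broadcastable
    xt = (1 - t) * x0 + t * x1                            # linear interpolant from prior to data
    target_v = x1 - x0                                    # conditional target velocity
    return ((velocity_net(xt, t) - target_v) ** 2).mean()
```

The equivariant variants in the referenced papers mainly change how x0 and x1 are paired and make the network equivariant; the regression itself stays the same.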