Here is a visualization showing the electron cloud in two stages: (1) the electron density being learned during training and (2) the predicted ground state across conformations 😎
Self-refining training reduces total runtime by up to 4× compared to the baseline
and by up to 2× compared to the fully supervised approach!
Less need for large pre-generated datasets — training and sampling happen in parallel.
We simulate molecular dynamics using each model’s energy predictions and evaluate accuracy along the trajectory.
Models trained with self-refinement stay accurate even far from the training distribution — while baselines quickly degrade.
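To make the evaluation concrete, here is a minimal sketch of how such a run could look (not the actual code; `forces_fn`, `model_energy`, and `reference_energy` are hypothetical placeholders for the learned model and an exact reference method):

```python
import numpy as np

def run_md(R0, V0, masses, forces_fn, dt=0.5, n_steps=1000):
    """Velocity-Verlet MD driven by forces from the model's energy predictions."""
    R, V = R0.copy(), V0.copy()
    F = forces_fn(R)                             # forces = -grad of the predicted energy
    trajectory = [R.copy()]
    for _ in range(n_steps):
        V = V + 0.5 * dt * F / masses[:, None]   # half-step velocity update
        R = R + dt * V                           # full-step position update
        F = forces_fn(R)                         # model forces at the new geometry
        V = V + 0.5 * dt * F / masses[:, None]   # second half-step velocity update
        trajectory.append(R.copy())
    return trajectory

# Accuracy along the trajectory: compare the model's energy to a reference
# calculation at each visited conformation, e.g.
# errors = [abs(model_energy(R) - reference_energy(R)) for R in trajectory]
```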
Our method achieves low energy error with as few as 25 conformations.
With 10× less data, it matches or outperforms fully supervised baselines.
This is especially important in settings where labeled data is expensive or unavailable.
🔁 Use the current model to sample conformations via MCMC
📉 Use those conformations to minimize energy and update the model
Everything runs asynchronously, with no need for labeled data and only a minimal number of conformations from a dataset (a rough sketch of one round is below)!
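Here is a minimal, synchronous sketch of one such round (the actual method runs the two stages asynchronously and in parallel; `model_energy` and `train_step` are hypothetical stand-ins for the learned energy model and its gradient update):

```python
import numpy as np

def self_refining_round(R, model_energy, train_step, kT=1.0,
                        mcmc_steps=200, proposal_std=0.02):
    """One round: MCMC with the current model, then train on the samples."""
    # (1) Sample conformations from exp(-E_theta / kT) with Metropolis MCMC,
    #     using the *current* model's energy prediction.
    samples, E = [], model_energy(R)
    for _ in range(mcmc_steps):
        R_new = R + proposal_std * np.random.randn(*R.shape)   # random-walk proposal
        E_new = model_energy(R_new)
        if np.random.rand() < np.exp(-(E_new - E) / kT):       # Metropolis acceptance
            R, E = R_new, E_new
        samples.append(R.copy())

    # (2) Use the sampled conformations to update the model by minimizing
    #     its predicted energy.
    for R_s in samples:
        train_step(R_s)

    return R  # chain state carried into the next round
```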
Jointly minimizing this bound wrt θ and q yields
✅ A model that predicts the ground-state solutions
✅ Samples that match the ground-truth density
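For intuition, one way a joint bound of this kind can be written, assuming the predicted energy E_θ(R) upper-bounds the true ground-state energy E_0(R) (this is an illustrative form; the exact objective is the one in the paper), is a variational free energy:

```latex
\mathcal{F}(\theta, q)
  = \mathbb{E}_{R \sim q}\!\big[E_\theta(R)\big]
  + kT\,\mathbb{E}_{R \sim q}\!\big[\log q(R)\big]
  \;\geq\; -kT \log Z,
\qquad Z = \int e^{-E_0(R)/kT}\,\mathrm{d}R.
```

Equality holds when E_θ(R) = E_0(R) and q(R) ∝ exp(−E_0(R)/kT), i.e. when the model predicts the ground state and q matches the Boltzmann distribution, which is exactly the pair of outcomes above.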
The target density is the Boltzmann distribution.
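Written out, with E(R) the ground-state energy of geometry R, k the Boltzmann constant, and T the temperature:

```latex
p(R) = \frac{1}{Z}\, e^{-E(R)/kT},
\qquad Z = \int e^{-E(R)/kT}\,\mathrm{d}R.
```

Every evaluation of E(R) means solving for the ground state of that geometry, which is expensive.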
This isn't a typical ML setup because
❌ No samples from the density - can’t train a generative model
❌ No density - can’t sample via Monte Carlo!
This presents a bottleneck for MD/sampling.
We want to amortize this cost: train a model that generalizes across geometries R.