John Gardner
@jla-gardner.bsky.social
ML for Potential Energy Surfaces
PhD student at Oxford
Former Microsoft AI4Science Intern
Thanks! 😊 In principle yes - our data generation protocol requires ~5 model calls to generate a new, chemically reasonable structure + is easily parallelised across processes. If you were willing to burn $$$, you could generate a new dataset very quickly (as is often the case with e.g. RSS set-ups)
June 24, 2025 at 5:53 AM
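For the curious: a minimal sketch of what "easily parallelised across processes" looks like in plain Python. `generate_structure` here is a hypothetical placeholder for one rattle-and-relax cycle (~5 teacher calls), not part of any released API.

```python
from multiprocessing import Pool

def generate_structure(seed: int):
    """Placeholder for one rattle-and-relax cycle (~5 teacher calls per structure)."""
    ...  # rattle a starting structure and crudely relax it with the teacher
    return seed

if __name__ == "__main__":
    # each structure is generated independently, so this scales across processes
    with Pool(processes=8) as pool:
        structures = pool.map(generate_structure, range(10_000))
```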
... @juraskova-ver.bsky.social, Louise Rosset, @fjduarte.bsky.social, Fausto Martelli, Chris Pickard and @vlderinger.bsky.social 🤩
June 23, 2025 at 2:13 PM
It was super fun collaborating with my co-first-author @dft-dutoit.bsky.social, together with the rest of the team across various research groups: @bm-chiheb.bsky.social, Zoé Faure Beaulieu, Bianca Pasça...
June 23, 2025 at 2:13 PM
We hope that you can start using this method to do cool new science!
June 23, 2025 at 2:13 PM
Code for our synthetic generation pipeline (compatible with any ase Calculator object) can be found here:
github.com/jla-gardner/...
GitHub - jla-gardner/augment-atoms: dataset augmentation for atomistic machine learning
June 23, 2025 at 2:13 PM
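The only interface the pipeline assumes is the standard ase Calculator one, shown here in miniature. EMT is a throwaway stand-in for a real teacher model, used purely so the snippet runs.

```python
from ase.build import molecule
from ase.calculators.emt import EMT

atoms = molecule("H2O")
atoms.calc = EMT()                     # swap in any other ase Calculator here
energy = atoms.get_potential_energy()  # eV
forces = atoms.get_forces()            # eV/Å
```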
I find our results for the modelling of MAPI (a hybrid perovskite) particularly pleasing: the distributions of cation orientations generated by the teacher and student models during NVT MD are ~identical!
June 23, 2025 at 2:13 PM
We go on to apply this distillation approach to target other chemical domains by distilling different foundation models (Orb, MatterSim (@msftresearch.bsky.social), and MACE-OFF), and find that it works well across the board!
June 23, 2025 at 2:13 PM
Beyond error metrics, we extensively validate these models to show they model liquid water well.
June 23, 2025 at 2:13 PM
These student models have relatively few parameters (c. 40k for PaiNN and TensorNet), and so have a much lower memory footprint. This lets you scale single-GPU experiments very easily!
June 23, 2025 at 2:13 PM
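A back-of-envelope check of what "c. 40k parameters" means in memory terms, assuming a PyTorch student model (the Linear layer below is just a similarly-sized stand-in):

```python
import torch

def n_params(model: torch.nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

# ~40k float32 parameters is only ~160 kB of weights, so GPU memory is
# dominated by the structures/neighbour lists rather than the model itself
print(n_params(torch.nn.Linear(200, 200)))  # 40200, a stand-in for a small student
```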
The resulting student models reach impressive accuracy vs DFT while being orders of magnitude faster than the teacher!

Note that these student models are of a different architecture to MACE, and in fact ACE is not even NN-based.
June 23, 2025 at 2:13 PM
We start by (i) fine-tuning MACE-MP-0 (@ilyesbatatia.bsky.social) on 25 water structures labelled with an accurate functional, (ii) using this fine-tuned model and these structures to generate a large number (10k) of new “synthetic” structures, and (iii) training student models on this dataset.
June 23, 2025 at 2:13 PM
Does this distillation approach work? In short, yes! 🤩
June 23, 2025 at 2:13 PM
This approach is very cheap, taking c. 5 calls to the teacher model to generate a new, chemically relevant and uncorrelated structure! We can build large datasets within one hour using this protocol.
June 23, 2025 at 2:13 PM
In this pre-print, we propose a different solution: starting from a (very) small pool of structures, and repeatedly (i) rattling and (ii) crudely relaxing them using the teacher model and a Robbins-Monro procedure.
June 23, 2025 at 2:13 PM
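A minimal sketch of the rattle-then-crudely-relax idea, assuming any ase Calculator as the teacher. The step-size schedule and hyperparameters below are illustrative only, not the exact procedure from the pre-print, and EMT again stands in for a real teacher.

```python
from ase.build import molecule
from ase.calculators.emt import EMT  # throwaway stand-in for the teacher model

def rattle_and_relax(atoms, teacher, sigma=0.3, n_steps=5, a=0.05):
    new = atoms.copy()
    new.calc = teacher
    new.rattle(stdev=sigma)                     # (i) random displacement
    for k in range(1, n_steps + 1):             # (ii) a few crude relaxation steps
        forces = new.get_forces()               # one teacher call per step
        # Robbins-Monro style decaying step size, moving atoms along the forces
        new.set_positions(new.get_positions() + (a / k) * forces)
    return new

structure = rattle_and_relax(molecule("H2O"), EMT())
```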
This works well, but has two drawbacks: (1) MD is still quite expensive, and requires many steps to generate uncorrelated structures, and (2) expert knowledge and lots of fiddling is required to get the MD settings right.
June 23, 2025 at 2:13 PM
In previous work, we and others (PFD-kit) have proposed using teacher models to generate "synthetic data" by using them to drive MD and sampling snapshots along these trajectories as training points.
June 23, 2025 at 2:13 PM
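For contrast, here is the earlier MD-based recipe in miniature: drive MD with the teacher and keep snapshots only every few hundred steps so they are (roughly) decorrelated. EMT is again just a stand-in, and the interval/temperature are illustrative.

```python
from ase import units
from ase.build import molecule
from ase.calculators.emt import EMT  # stand-in for the teacher model
from ase.md.langevin import Langevin

atoms = molecule("H2O")
atoms.calc = EMT()
dyn = Langevin(atoms, timestep=0.5 * units.fs, temperature_K=300, friction=0.002)

snapshots = []
dyn.attach(lambda: snapshots.append(atoms.copy()), interval=200)  # sparse sampling
dyn.run(2_000)  # 2000 teacher calls for only ~10 usable training structures
```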
The devil is always in the details, however 😈 The main problem we need to solve is how to generate many relevant structures that densely sample the chemical domain we are interested in targeting.
June 23, 2025 at 2:13 PM
At a high level, this builds upon the approach pioneered by Joe Morrow, now extended to the distillation of impressively capable foundation models, and to a range of downstream architectures and chemical domains.
June 23, 2025 at 2:13 PM
Concretely, we train a student to predict the energy and force labels generated by the teacher on a large dataset of structures: this requires no alterations to existing training pipelines, and so is completely agnostic to the architecture of both the teacher and student 😎
June 23, 2025 at 2:13 PM
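In code, that training target is just a standard energy + force regression against the teacher's labels. A minimal sketch assuming PyTorch tensors; the force weighting here is illustrative, not the paper's exact choice.

```python
import torch
import torch.nn.functional as F

def distillation_loss(pred_e, pred_f, teacher_e, teacher_f, force_weight=10.0):
    """Match the teacher's energy and force labels directly - nothing architecture-specific."""
    return F.mse_loss(pred_e, teacher_e) + force_weight * F.mse_loss(pred_f, teacher_f)
```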
Both of the above methods try to maximise the amount of information extracted per training structure from the teacher. Our approach is orthogonal to this: we try to maximise the number of structures (that are both sensible and useful) we use to transfer knowledge.
June 23, 2025 at 2:13 PM
Somewhat similarly, @ask1729.bsky.social and others extract additional Hessian information from the teacher. Again, this works well provided you have a training framework that lets you train student models on this data.
June 23, 2025 at 2:13 PM
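Where that extra information comes from, schematically: second derivatives of the teacher's energy with respect to atomic positions. The toy potential below just makes the autograd call runnable; a real teacher would take its place.

```python
import torch

def toy_energy(positions: torch.Tensor) -> torch.Tensor:
    # harmonic pairwise toy potential standing in for a differentiable teacher
    diff = positions.unsqueeze(0) - positions.unsqueeze(1)
    return 0.5 * (diff ** 2).sum()

positions = torch.randn(3, 3)  # 3 atoms in 3D
hessian = torch.autograd.functional.hessian(toy_energy, positions)
# shape (3, 3, 3, 3): second derivatives of E w.r.t. every atomic coordinate pair
```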
@gasteigerjo.bsky.social and others attempt to align not only the predictions, but also the internal representations of the teacher and the student. This approach works well for models with similar architectures, but is incompatible with e.g. fast linear models like ACE.
June 23, 2025 at 2:13 PM
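Representation alignment in miniature: alongside matching predictions, the student's internal features are pushed towards the teacher's, with a learned projection handling mismatched feature widths. All shapes below are illustrative.

```python
import torch
import torch.nn.functional as F

teacher_feats = torch.randn(32, 256)  # per-atom features from the teacher
student_feats = torch.randn(32, 64)   # per-atom features from the student

project = torch.nn.Linear(64, 256)    # trained jointly with the student
alignment_loss = F.mse_loss(project(student_feats), teacher_feats)
```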
At their heart, all model distillation strategies attempt to extract as much information as possible from the teacher model, in a format that is useful for the student.

Various existing methods in the literature do this in different ways.
June 23, 2025 at 2:13 PM