Sander Vandenhaute
@sanderhaute.bsky.social
Computational physics/chemistry, and some ML

prev: PhD @ Ghent University 🇧🇪, AI researcher @ Orbital Materials 🇬🇧
now: postdoc @ Rotskoff group (Stanford University)
... so definitely more than 4.4 but less than 4.6 million? /s

source?
November 5, 2025 at 2:12 AM
Just expressing my support for typst as well! It’s mature enough to typeset a PhD thesis, has a gentle learning curve, and has a great community.
May 11, 2025 at 10:45 PM
Here I was thinking I’d have a hard time convincing people RPA is empirical.

Do quantum Monte Carlo techniques have true potential, or are we stuck with decades-old approximations invented by highly non-computational scientists?
November 29, 2024 at 12:17 AM
simple Python API to drive a single 'master' job, which then does everything else!
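
Purely as illustration, a minimal sketch of what such a "master" job could look like using only the Python standard library; run_task and the config names are hypothetical placeholders, not any particular package's API.

```python
# Hypothetical sketch: one 'master' Python process fans out all other work.
# run_task and the configs below are placeholders, not a real API.
from concurrent.futures import ProcessPoolExecutor, as_completed


def run_task(config: str) -> str:
    # placeholder: e.g. launch an MD run or a QM evaluation for `config`
    return f"finished {config}"


def main() -> None:
    configs = ["system_a", "system_b", "system_c"]
    with ProcessPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(run_task, c) for c in configs]
        for future in as_completed(futures):
            print(future.result())


if __name__ == "__main__":
    main()
```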
November 25, 2024 at 6:09 PM
a golden (😂) PES

Actually, from that perspective, even a 1000x slowdown could be acceptable, since it would be used less for super-long MD runs and more for building models above and beyond atomic-level MD...
November 24, 2024 at 11:05 PM
From a distance, and this is probably controversial, it feels like AF has made progress that would otherwise have required decades of atomic simulations?
November 24, 2024 at 3:42 PM
For drug discovery, do you think more accurate atomic interactions are the way to go, or will people gradually abandon bottom-up atomic-level simulations?
November 24, 2024 at 3:40 PM
So as long as all possible low-density environments are included in training, putting a limit at a fixed cutoff makes sense?
November 24, 2024 at 3:04 PM
Hmm, yeah, and maybe the fixed neighbors thing is just a trick to speed up training and improve performance on the synthetic benchmarks

At the same time: beyond a “threshold” number of neighbors, there is maybe so much screening that the required number of neighbors to include becomes a constant?
November 24, 2024 at 3:03 PM
I should really get out of my matsci cave because I am so unaware of these things 😂

I was imagining OpenMM with a bunch of custom force expressions, PME, and anisotropic pressure control (often needed in solid state) at 1 ms/step, and maybe around 100 ms/step for MACE on a similar system.
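
As a rough illustration of the setup described above, a minimal OpenMM sketch combining a custom force expression, PME electrostatics, and an anisotropic barostat; all parameter values, and the energy expression itself, are invented placeholders.

```python
# Sketch of an OpenMM system with a custom force expression, PME, and
# anisotropic pressure control. Values are illustrative placeholders.
import openmm
from openmm import unit

system = openmm.System()
# ... particles, topology, and standard forces would be added here ...

# a custom pair interaction defined by an energy expression (placeholder form)
custom = openmm.CustomNonbondedForce("A*exp(-B*r) - C/r^6")
custom.addGlobalParameter("A", 1000.0)
custom.addGlobalParameter("B", 3.0)
custom.addGlobalParameter("C", 1.0)
system.addForce(custom)

# long-range electrostatics via particle-mesh Ewald
nonbonded = openmm.NonbondedForce()
nonbonded.setNonbondedMethod(openmm.NonbondedForce.PME)
system.addForce(nonbonded)

# anisotropic barostat: box dimensions fluctuate independently,
# which matters for solid-state cells
barostat = openmm.MonteCarloAnisotropicBarostat(
    openmm.Vec3(1.0, 1.0, 1.0) * unit.bar,
    300.0 * unit.kelvin,
)
system.addForce(barostat)
```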
November 24, 2024 at 2:53 PM
at least in my experience!
November 24, 2024 at 2:41 PM
Although leading Matbench entries usually truncate the number of neighbors considered to a fixed count, such that the cost of a single message-passing layer no longer scales with density…
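
To make the scaling point concrete, a small numpy sketch of the two neighbor-selection schemes: a radial cutoff, whose neighbor count grows with density, versus truncation to a fixed k, whose cost per atom stays constant. Purely illustrative, not taken from any specific model's code.

```python
# Radial cutoff vs fixed top-k neighbor truncation (illustrative only).
import numpy as np


def neighbors_within_cutoff(positions, i, cutoff):
    d = np.linalg.norm(positions - positions[i], axis=1)
    return np.flatnonzero((d > 0) & (d < cutoff))  # count grows with density


def k_nearest_neighbors(positions, i, k):
    d = np.linalg.norm(positions - positions[i], axis=1)
    return np.argsort(d)[1 : k + 1]  # always exactly k, whatever the density


rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(500, 3))
print(len(neighbors_within_cutoff(pos, 0, cutoff=3.0)))  # density-dependent
print(len(k_nearest_neighbors(pos, 0, k=32)))            # always 32
```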
November 24, 2024 at 2:40 PM
Right, sorry, wasn’t counting bio! Though I think it’s more the increased density rather than the sheer system size that widens the performance gap?

In matsci/catalysis, with proper enhanced sampling, the main worry is not so much the achievable time scales as the accuracy of the QM data…
November 24, 2024 at 2:39 PM
100x because that’s how much slower an “optimized” ML potential for any particular system would be. I might be optimistic here, but a small MACE network and the right training data have always gotten me below ~1 meV/atom and ~50 meV/Å errors.
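
For concreteness, a small sketch of how those two error targets would be computed on a test set: mean absolute error per atom for energies (meV/atom) and componentwise for forces (meV/Å). The arrays here are random stand-ins, not real MACE predictions.

```python
# Energy and force MAE in the units quoted above (random stand-in data).
import numpy as np

rng = np.random.default_rng(0)
n_structures, n_atoms = 100, 64

e_ref = rng.normal(size=n_structures)                  # eV per structure
e_pred = e_ref + rng.normal(scale=3e-3, size=n_structures)
f_ref = rng.normal(size=(n_structures, n_atoms, 3))    # eV/Å
f_pred = f_ref + rng.normal(scale=3e-2, size=f_ref.shape)

e_mae = np.mean(np.abs(e_pred - e_ref)) / n_atoms * 1000  # meV/atom
f_mae = np.mean(np.abs(f_pred - f_ref)) * 1000            # meV/Å

print(f"energy MAE: {e_mae:.2f} meV/atom (target: below ~1)")
print(f"force MAE:  {f_mae:.1f} meV/Å   (target: below ~50)")
```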
November 24, 2024 at 11:09 AM