Felix Koehler
@felix-m-koehler.bsky.social
🤖 Machine Learning & 🌊 Simulation | 📺 YouTuber | 🧑‍🎓 PhD student @ Thuerey Group
Last week, I had the honor of speaking about this at the RISE ML seminar, hosted by Olof Mogren. You can find the recording here: www.youtube.com/watch?v=olpX...
Felix Köhler: From numerical simulators of PDEs to neural emulators and back
YouTube video by RISE Research Institutes of Sweden
November 14, 2025 at 8:45 AM
🧵 Project Page: tum-pbs.github.io/emulator-sup...
📃 Paper: arxiv.org/pdf/2510.23111

Feel free to stop by at NeurIPS in San Diego during the poster session on Friday evening, 4:30–7:30 p.m. PST, at poster #2106. 😊 I am looking forward to the exchange!
Neural Emulator Superiority: When Machine Learning for PDEs Surpasses its Training Data
November 14, 2025 at 8:45 AM
Proven theoretically for linear PDEs, validated experimentally on nonlinear ones like Burgers' equation. Time to rethink data-driven ML benchmarks for higher physical fidelity!
November 14, 2025 at 8:45 AM
We show that neural emulators trained on low-fidelity data can outperform their source simulators when evaluated against high-fidelity references, thanks to inductive biases and more favorable error accumulation.
November 14, 2025 at 8:45 AM
Reposted by Felix Koehler
It may only be a band-aid, but we have just announced our new "Salon des Refusés" sessions for papers rejected due to space constraints: bsky.app/profile/euri...
Congratulations to everyone who got their @neuripsconf.bsky.social papers accepted 🎉🎉🎉

At #EurIPS we are looking forward to welcoming presentations of all accepted NeurIPS papers, including a new “Salon des Refusés” track for papers which were rejected due to space constraints!
September 19, 2025 at 9:35 AM
We will use APEBench to train, test, and benchmark it in an advection scenario against a feedforward ConvNet.

arxiv.org/abs/2411.00180
APEBench: A Benchmark for Autoregressive Neural Emulators of PDEs
We introduce the Autoregressive PDE Emulator Benchmark (APEBench), a comprehensive benchmark suite to evaluate autoregressive neural emulators for solving partial differential equations. APEBench is b...
June 9, 2025 at 7:23 AM
To get started with APEBench, install it via `pip install apebench` and check out the public documentation: tum-pbs.github.io/apebench/
APEBench
A Benchmark for Autoregressive PDE Emulators in JAX.
February 12, 2025 at 4:08 PM
Finally, there are so many cool experiments we ran to gain insights into neural emulators, to highlight limitations they inherit from their numerical simulator counterparts, and more. You can find all the details in the paper: arxiv.org/pdf/2411.00180
February 12, 2025 at 4:08 PM
And to enforce good practices, APEBench is designed around controllable, deterministic pseudo-randomness that makes it straightforward to run seed statistics, which can in turn be used to perform hypothesis tests.
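The idea in a nutshell, as a toy sketch with NumPy's seeded generators (not APEBench's actual API; `rollout_error` is a hypothetical stand-in for one training run):

```python
import numpy as np

def rollout_error(seed: int) -> float:
    """Hypothetical stand-in for one training run: deterministic given its seed."""
    rng = np.random.default_rng(seed)  # controllable pseudo-randomness
    return 1.0 + 0.1 * rng.standard_normal()

# Re-running with the same seeds reproduces exactly the same numbers
errors_a = np.array([rollout_error(s) for s in range(10)])
errors_b = np.array([rollout_error(s) for s in range(10)])
assert np.array_equal(errors_a, errors_b)

# Seed statistics (mean and standard error) ready for hypothesis testing
mean = errors_a.mean()
sem = errors_a.std(ddof=1) / np.sqrt(len(errors_a))
print(f"{mean:.3f} +/- {sem:.3f}")
```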
February 12, 2025 at 4:08 PM
Another important contribution is that APEBench defines most of its PDEs via a new parameterization that we call "difficulties". These express a wide range of different dynamics with a reduced and interpretable set of numbers.
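To convey the flavor with a deliberately simplified sketch (assuming a CFL-style definition for first-order advection; APEBench's exact convention may differ): the per-step phase advance of the highest resolved Fourier mode depends only on the combination c·Δt·N/L, so one number can capture how hard the discretized dynamics are:

```python
import numpy as np

def advection_difficulty(c, dt, N, L):
    """Simplified 'difficulty' for advection: per-step phase advance of the
    highest resolved mode (k_max = pi * N / L), in units of pi."""
    return c * dt * N / L

# Two very different setups, same difficulty: equally hard per step
d1 = advection_difficulty(c=1.0, dt=0.01, N=128, L=1.0)
d2 = advection_difficulty(c=4.0, dt=0.02, N=64, L=4.0)
assert np.isclose(d1, d2)  # both 1.28
```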
February 12, 2025 at 4:08 PM
This allows for investigating how unrolled training helps with long-term accuracy.
February 12, 2025 at 4:08 PM
The temporal axis also means various configurations of how emulator and simulator interact during training, for example in supervised unrolled training. We generalize many approaches seen in the literature in terms of unrolled steps T and branch steps B.
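A minimal sketch of a supervised unrolled loss with T autoregressive steps (my own simplification, not APEBench code; branch steps B, which restart branches from reference states, are omitted here):

```python
import numpy as np

def unrolled_loss(emulator_step, ref_traj, T):
    """MSE accumulated over T autoregressive emulator steps against a
    reference trajectory ref_traj of shape (T + 1, num_points)."""
    u = ref_traj[0]
    loss = 0.0
    for t in range(1, T + 1):
        u = emulator_step(u)  # feed the emulator's own prediction back in
        loss += np.mean((u - ref_traj[t]) ** 2)
    return loss / T

# Sanity check: an emulator identical to the simulator has zero unrolled loss
toy_step = lambda u: 0.9 * u  # toy "simulator": uniform decay per step
traj = np.stack([0.9**t * np.ones(8) for t in range(6)])
assert np.isclose(unrolled_loss(toy_step, traj, T=5), 0.0)
```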
February 12, 2025 at 4:08 PM
One core motivation for APEBench was the temporal axis in emulator learning (hence the "autoregressive" in APE). We focus on rollout metrics and sample rollouts to truly understand temporal generalization, i.e., long-term stability and accuracy, across more than 20 metrics.
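One such rollout metric could be a per-time-step normalized RMSE (a common choice; this sketch assumes that definition rather than reproducing APEBench's metric code):

```python
import numpy as np

def rollout_nrmse(pred, ref):
    """Normalized RMSE per time step for trajectories of shape (T, N)."""
    num = np.sqrt(np.mean((pred - ref) ** 2, axis=-1))
    den = np.sqrt(np.mean(ref**2, axis=-1))
    return num / den

ref = np.tile(np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False)), (5, 1))
pred = 1.1 * ref  # predictions uniformly 10% too large
print(rollout_nrmse(pred, ref))  # ≈ [0.1, 0.1, 0.1, 0.1, 0.1]
```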
February 12, 2025 at 4:08 PM
We, of course, also ship a wide range of popular emulator architectures, all implemented in JAX and designed to be agnostic to spatial dimension and boundary conditions. If you don't like APEBench (which I cannot imagine 😉), they are also available individually: github.com/Ceyron/pdequ...
GitHub - Ceyron/pdequinox: Neural Emulator Architectures in JAX.
February 12, 2025 at 4:08 PM
The solver is also available as an individual package, Exponax: github.com/Ceyron/exponax
GitHub - Ceyron/exponax: Efficient Differentiable n-d PDE solvers in JAX.
February 12, 2025 at 4:08 PM
This numerical solver is based on Fourier pseudo-spectral ETDRK methods, among the most efficient numerical techniques for solving semi-linear PDEs with periodic boundaries. We provide a wide range of pre-defined configurations (46 as of the initial release).
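The core idea, sketched for the purely linear case of diffusion, where the exponential update in Fourier space is exact (full ETDRK schemes add a higher-order treatment of the nonlinear term on top; this is my own minimal illustration, not Exponax code):

```python
import numpy as np

N, L_dom, nu, dt, steps = 64, 2 * np.pi, 0.1, 0.1, 10
x = np.linspace(0.0, L_dom, N, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(N, d=L_dom / N)  # angular wavenumbers

# Exponential integrator: the linear (diffusion) operator is applied
# exactly in Fourier space at every time step
u_hat = np.fft.rfft(np.sin(x))
for _ in range(steps):
    u_hat = u_hat * np.exp(-nu * k**2 * dt)
u = np.fft.irfft(u_hat, n=N)

# For pure diffusion this matches the analytic solution to machine precision:
# u(x, t) = exp(-nu * t) * sin(x) with t = steps * dt = 1.0
analytic = np.exp(-nu * steps * dt) * np.sin(x)
print(np.max(np.abs(u - analytic)))
```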
February 12, 2025 at 4:08 PM
With it, we can _procedurally_ generate all data ever needed in seconds on a modern GPU. Yes, this means you do not have to download hundreds of GBs of data: installing the APEBench Python package (<1 MB) is sufficient. 😎
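As a toy illustration of the procedural idea (my own example using the analytic advection solution, not APEBench's API): every trajectory is computed on the fly from a seed, so there is nothing to download:

```python
import numpy as np

def make_dataset(num_traj, N=64, T=20, c=1.0, dt=0.05, seed=0):
    """Procedurally generate advection trajectories on a periodic domain,
    using the analytic solution u(x, t) = u0(x - c * t)."""
    rng = np.random.default_rng(seed)            # deterministic given the seed
    x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    amps = rng.normal(size=(num_traj, 2, 1, 1))  # random two-mode initial conditions
    t = dt * np.arange(T + 1)[:, None]           # shape (T + 1, 1)
    shifted = x[None, :] - c * t                 # shape (T + 1, N)
    return amps[:, 0] * np.sin(shifted) + amps[:, 1] * np.sin(2 * shifted)

data = make_dataset(16)
print(data.shape)  # (16, 21, 64)
```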
February 12, 2025 at 4:08 PM