Join us for what promises to be a very insightful session!
Learning differential equations becomes substantially more challenging in the presence of stochasticity, as Neural SDEs typically require expensive, sequential integration during training.
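To see where that cost comes from, here is a minimal sketch (PyTorch; the toy drift/diffusion networks, step count, and loss are illustrative choices, not from any paper below): every loss evaluation needs a full Euler-Maruyama rollout, and each step depends on the previous one, so neither the forward pass nor backprop can be parallelized across time.

```python
import torch
import torch.nn as nn

# Toy drift and diffusion networks (illustrative, not from the papers).
drift = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
diffusion = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def euler_maruyama(x0, n_steps=100, dt=0.01):
    """Simulate the Neural SDE forward in time.

    Each update depends on the previous state, so the loop is
    inherently sequential -- this rollout (and backprop through it)
    is what makes Neural SDE training expensive.
    """
    x = x0
    for _ in range(n_steps):
        noise = torch.randn_like(x) * dt**0.5
        x = x + drift(x) * dt + diffusion(x) * noise
    return x

x0 = torch.randn(64, 2)       # batch of initial states
x_T = euler_maruyama(x0)      # one full sequential rollout
loss = x_T.pow(2).mean()      # placeholder training loss
loss.backward()               # backprop unrolls all n_steps updates
```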
chemrxiv.org/engage/chemr...
Kooplearn library:
kooplearn.readthedocs.io/latest/
For a longer version of this thread, take a look at the blog post:
vladi-iit.github.io/posts/2024-1...
[P1] NeurIPS 2022
arxiv.org/abs/2205.14027
[P2] NeurIPS 2023
arxiv.org/abs/2302.02004
[P3] ICML 2024
arxiv.org/abs/2312.13426
[P4] NeurIPS 2023
arxiv.org/abs/2306.04520
[P5] ICLR 2024
arxiv.org/abs/2307.09912
[P6] NeurIPS 2024
arxiv.org/abs/2405.12940
Next steps:
• Learning from partial observations
• Modeling non-time-homogeneous dynamics
• Expanding applications in neuroscience, genetics, and climate modeling
Stay tuned for updates from our team! 🌍
🌟 Special thanks to Karim Lounici from École Polytechnique, whose insights were a major driving force behind many projects.
[P8] NeurIPS 2024 proposed Neural Conditional Probability (NCP) to efficiently learn conditional distributions. It simplifies uncertainty quantification and guarantees accuracy for nonlinear, high-dimensional data.
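For intuition, here is a schematic sketch of the general recipe: factor the joint-to-marginal density ratio through two learned feature maps. The names (u, v), the dimensions, and the least-squares contrastive objective below are illustrative assumptions, not the paper's exact NCP architecture or loss.

```python
import torch
import torch.nn as nn

# Two feature maps whose dot product models the density ratio
# s(x, y) ~ p(x, y) / (p(x) p(y)).  Names and sizes are illustrative.
d_x, d_y, d_feat = 3, 2, 16
u = nn.Sequential(nn.Linear(d_x, 64), nn.ReLU(), nn.Linear(64, d_feat))
v = nn.Sequential(nn.Linear(d_y, 64), nn.ReLU(), nn.Linear(64, d_feat))
opt = torch.optim.Adam([*u.parameters(), *v.parameters()], lr=1e-3)

def train_step(x, y):
    # Least-squares density-ratio fitting: score paired samples against
    # shuffled (independent) pairs; the minimizer is the true ratio.
    s_pair = (u(x) * v(y)).sum(-1)
    s_indep = (u(x) * v(y[torch.randperm(len(y))])).sum(-1)
    loss = 0.5 * (s_indep ** 2).mean() - s_pair.mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Placeholder data; in practice (x, y) come from the joint distribution.
x, y = torch.randn(256, d_x), torch.randn(256, d_y)
train_step(x, y)
# After training, p(y | x) ~ p(y) * s(x, y), so conditional queries
# (and hence uncertainty quantification) reduce to feature dot products.
```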
[P6] NeurIPS 2024 introduced a physics-informed framework for learning Infinitesimal Generators (IG) of stochastic systems, ensuring robust spectral estimation.
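A toy illustration of the physics-informed ingredient: for a reversible (overdamped Langevin) diffusion, integration by parts lets you estimate the generator's Galerkin matrices from feature gradients alone, with no finite-difference time derivatives. The 1-D Ornstein-Uhlenbeck setup and monomial dictionary below are assumptions made for this sketch.

```python
import numpy as np

# Toy sketch: spectrum of the generator of dX = -X dt + sqrt(2) dW
# (Ornstein-Uhlenbeck, beta = 1), whose stationary law is N(0, 1).
beta = 1.0
rng = np.random.default_rng(0)
X = rng.normal(size=5000)  # i.i.d. draws from the stationary measure

# Feature dictionary and its derivatives (monomials, for illustration).
def phi(x):  return np.stack([x, x**2, x**3], axis=1)
def dphi(x): return np.stack([np.ones_like(x), 2 * x, 3 * x**2], axis=1)

# Reversibility + integration by parts give
# <phi_i, L phi_j> = -(1/beta) E[phi_i' phi_j'],
# so the Galerkin matrices need only gradients of the features.
G = phi(X).T @ phi(X) / len(X)                  # mass (Gram) matrix
L = -(dphi(X).T @ dphi(X)) / (beta * len(X))    # energy (stiffness) matrix
print(np.sort(np.linalg.eigvals(np.linalg.solve(G, L)).real))
# Approximate negative OU spectrum (true values 0, -1, -2, -3, ...);
# accuracy is limited by the small dictionary.
```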
[P5] ICLR 2024
We combined neural networks with operator theory via Deep Projection Networks (DPNets). This approach enhances robustness, scalability, and interpretability for dynamical systems.
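For a flavor of the training objective, below is a minimal sketch of an encoder trained with a VAMP-style projection score of the kind DPNets builds on; the paper's exact score and its metric-distortion regularization differ, and the random batch here is a placeholder for consecutive states of a real system.

```python
import torch
import torch.nn as nn

# Encoder mapping states to features; sizes are illustrative.
psi = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(psi.parameters(), lr=1e-3)

def projection_score(x_t, x_next, eps=1e-6):
    """Higher score = the feature span is closer to an invariant
    subspace of the transfer operator (a VAMP-style trace score)."""
    fx, fy = psi(x_t), psi(x_next)
    fx, fy = fx - fx.mean(0), fy - fy.mean(0)
    n, d = fx.shape
    Cx = fx.T @ fx / n + eps * torch.eye(d)    # input covariance
    Cy = fy.T @ fy / n + eps * torch.eye(d)    # output covariance
    Cxy = fx.T @ fy / n                        # cross-covariance
    M = torch.linalg.solve(Cx, Cxy) @ torch.linalg.solve(Cy, Cxy.T)
    return torch.trace(M)  # sum of squared canonical correlations

# One gradient step on a (placeholder) batch of pairs (x_t, x_{t+1}).
x_t, x_next = torch.randn(256, 2), torch.randn(256, 2)
loss = -projection_score(x_t, x_next)
opt.zero_grad(); loss.backward(); opt.step()
```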
[P4] NeurIPS 2023 introduced a Nyström sketching-based method that reduces computational costs from cubic to almost linear without sacrificing accuracy, validated on massive datasets such as molecular dynamics simulations (see figure).
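The core trick, shown here in a generic kernel ridge regression setting (the paper applies sketching to operator regression; uniform landmark sampling and this plain estimator are simplifying assumptions): m ≪ n landmark points turn the O(n³) full-kernel solve into an O(nm² + m³) one.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n, m = 10_000, 100
X, y = rng.normal(size=(n, 3)), rng.normal(size=n)   # placeholder data

idx = rng.choice(n, size=m, replace=False)   # uniform landmark sampling
K_nm = rbf(X, X[idx])                        # n x m cross-kernel
K_mm = rbf(X[idx], X[idx])                   # m x m landmark kernel

# Nystrom-regularized normal equations: only an m x m system to solve,
# instead of the full n x n one.
lam = 1e-3
alpha = np.linalg.solve(K_nm.T @ K_nm + lam * n * K_mm, K_nm.T @ y)
y_hat = K_nm @ alpha                         # predictions at training points
```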
Our Deflate-Learn-Inflate (DLI) paradigm ensures uniform error bounds, even for infinite time horizons. This method stabilized predictions in real-world tasks; see the figure.
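A rough sketch of the mechanism in a toy linear-feature setting (the names below are illustrative and the paper's estimators are more refined): the transfer operator always has eigenvalue 1 with the constant eigenfunction, so deflate it by centering, learn on the centered space, then inflate the stationary mean back.

```python
import numpy as np

# Illustrative DLI sketch on linear features with placeholder data.
rng = np.random.default_rng(0)
Phi_t = rng.normal(size=(1000, 5))      # features of states x_t
Phi_next = rng.normal(size=(1000, 5))   # features of states x_{t+1}

# Deflate: remove the constant mode (eigenvalue 1) by centering.
mu_t, mu_next = Phi_t.mean(0), Phi_next.mean(0)
A, B = Phi_t - mu_t, Phi_next - mu_next

# Learn: regress next centered features on current centered features.
T_hat = np.linalg.lstsq(A, B, rcond=None)[0]

def predict(phi_x):
    # Inflate: evolve centered features, then restore the mean.
    return (phi_x - mu_t) @ T_hat + mu_next
```

Intuitively, on the deflated space the operator of an ergodic system is a strict contraction, so iterating the learned model cannot blow up; that is the mechanism behind the uniform-in-time error bounds.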
[P1] NeurIPS 2022
We introduced the first ML formulation for learning transfer operators (TO), which led to the development of the open-source Kooplearn library. This step laid the groundwork for exploring the theoretical limits of operator learning from finite data.
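To make "learning TO" concrete, here is a schematic least-squares estimator from a single trajectory (a generic EDMD-style sketch; the feature dictionary and toy trajectory are assumptions, and the Kooplearn docs linked above describe the library's actual estimators and API).

```python
import numpy as np

# Schematic transfer-operator regression from one trajectory.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(2001, 2)), axis=0)   # toy trajectory

def features(x):
    """Fixed feature dictionary (chosen for illustration)."""
    return np.concatenate([x, x**2], axis=1)

X, Y = features(traj[:-1]), features(traj[1:])   # (state, next state) pairs
G = X.T @ X / len(X)                             # input covariance
C = X.T @ Y / len(X)                             # cross-covariance
K_hat = np.linalg.solve(G + 1e-6 * np.eye(G.shape[0]), C)  # TO estimate

# The estimated operator's eigenvalues summarize the dynamics
# (timescales and oscillation frequencies).
print(np.linalg.eigvals(K_hat))
```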