CSML IIT Lab
@pontilgroup.bsky.social
Computational Statistics and Machine Learning (CSML) Lab | PI: Massimiliano Pontil | Webpage: csml.iit.it | Active research lines: Learning theory, ML for dynamical systems, ML for science, and optimization.

He will also present an entropy-respecting forward–backward learning scheme that mitigates the inherent ill-posedness of stochastic learning problems.

Join us for what promises to be a very insightful session!
November 14, 2025 at 2:03 PM
In this talk, Arthur Bizzi will introduce Neural Kolmogorov Equations, a deterministic and parallelizable framework for learning continuous-time stochastic processes using Forward and Backward Kolmogorov Equations.
November 14, 2025 at 2:03 PM
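For background, the two PDEs the framework is named after, in their standard forms for an Itô diffusion dX_t = b(X_t) dt + σ(X_t) dW_t (the talk's exact parameterization may differ):

\[ \partial_t p = -\nabla\cdot(b\,p) + \tfrac{1}{2}\sum_{i,j}\partial_{x_i}\partial_{x_j}\big[(\sigma\sigma^\top)_{ij}\,p\big] \qquad \text{(forward, Fokker-Planck)} \]
\[ \partial_t u + b\cdot\nabla u + \tfrac{1}{2}\,\mathrm{Tr}\big[\sigma\sigma^\top\,\nabla^2 u\big] = 0 \qquad \text{(backward)} \]

Here p(x,t) is the transition density and u(x,t) = E[g(X_T) | X_t = x]; both evolve by deterministic PDEs, which is what makes a deterministic, parallelizable treatment possible in principle.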
Abstract:
Learning differential equations becomes substantially more challenging in the presence of stochasticity, as Neural SDEs typically require expensive, sequential integration during training.
November 14, 2025 at 2:03 PM
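To see where the sequential cost comes from: simulating an SDE path, e.g. with Euler-Maruyama, is an inherently serial loop, and Neural SDE training backpropagates through every step. A minimal numpy sketch (drift and diffusion here are toy stand-ins for learned networks):

import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng):
    # Simulate one path of dX = b(X) dt + sigma(X) dW, one step at a time.
    # Each step depends on the previous state, so the time loop is serial:
    # this is the cost Neural SDE training pays at every gradient step.
    x = np.asarray(x0, dtype=float)
    path = [x]
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return np.stack(path)

# Toy usage: an Ornstein-Uhlenbeck process standing in for learned nets.
rng = np.random.default_rng(0)
path = euler_maruyama(x0=[1.0], drift=lambda x: -x,
                      diffusion=lambda x: 0.5 * np.ones_like(x),
                      dt=1e-2, n_steps=1000, rng=rng)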
[P11] (submitted to The Journal of Chemical Physics)
chemrxiv.org/engage/chemr...

Kooplearn library:
kooplearn.readthedocs.io/latest/

For the longer version of the thread, you can take a look at this blog post:
vladi-iit.github.io/posts/2024-1...
Slow dynamical modes from static averages
In recent times, efforts are being made to describe the evolution of a complex system not through long trajectories, but via the study of probability distribution evolution. This more collective app...
chemrxiv.org
January 15, 2025 at 2:34 PM
14/ Looking ahead, we’re excited to tackle new challenges:
• Learning from partial observations
• Modeling non-time-homogeneous dynamics
• Expanding applications in neuroscience, genetics, and climate modeling

Stay tuned for groundbreaking updates from our team! 🌍
January 15, 2025 at 2:34 PM
🙏 Collaborations with the Dynamic Legged Systems group led by Claudio Semini and the Atomistic Simulations group led by Michele Parrinello enriched our research, resulting in impactful works like [P9, P10] and [P7, P11].
January 15, 2025 at 2:34 PM
12/ This journey wouldn’t have been possible without the inspiring collaborations that shaped our work.

🌟 Special thanks to Karim Lounici from École Polytechnique, whose insights were a major driving force behind many projects.
January 15, 2025 at 2:34 PM
11/ One of our most exciting results:
[P8] NeurIPS 2024 proposed Neural Conditional Probability (NCP) to efficiently learn conditional distributions. It streamlines uncertainty quantification and comes with accuracy guarantees, even for nonlinear, high-dimensional data.
January 15, 2025 at 2:34 PM
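Schematically, and in our own notation rather than the paper's: NCP-style models exploit a truncated-SVD expansion of the dependence between X and Y,

\[ \frac{p(y \mid x)}{p(y)} \approx 1 + \sum_{i=1}^{r} \sigma_i\, u_i(x)\, v_i(y), \qquad \mathbb{E}[g(Y) \mid X = x] \approx \mathbb{E}[g(Y)] + \sum_{i=1}^{r} \sigma_i\, u_i(x)\, \mathbb{E}[g(Y)\, v_i(Y)], \]

so once u, v, and σ are learned, any conditional statistic follows from one forward pass plus simple averages.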
10/ [P7] NeurIPS 2024 developed methods to discover slow dynamical modes in systems such as molecular simulations. This is transformative for studying rare events and for settings where data acquisition is costly, as in atomistic systems.
January 15, 2025 at 2:34 PM
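The "slow modes" language has a precise meaning: each operator eigenvalue maps to a relaxation timescale via the standard relations

\[ t_i = -\frac{\tau}{\ln|\mu_i|} \ \ \text{(TO eigenvalue } \mu_i \text{ at lag } \tau\text{)}, \qquad t_i = \frac{1}{|\mathrm{Re}\,\lambda_i|} \ \ \text{(IG eigenvalue } \lambda_i\text{)}, \]

so eigenvalues with |μ_i| near 1 (equivalently Re λ_i near 0) govern the rare, slow transitions.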
9/ Addressing continuous dynamics:
[P6] NeurIPS 2024 introduced a physics-informed framework for learning Infinitesimal Generators (IGs) of stochastic systems, ensuring robust spectral estimation.
January 15, 2025 at 2:34 PM
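For reference, the IG of the diffusion dX_t = b(X_t) dt + σ(X_t) dW_t acts on an observable f as

\[ (\mathcal{L} f)(x) = b(x)\cdot\nabla f(x) + \tfrac{1}{2}\,\mathrm{Tr}\big[\sigma\sigma^\top(x)\,\nabla^2 f(x)\big], \]

which is exactly the structure a physics-informed estimator can build into the loss rather than learn from scratch.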
8/ 🌟 Representation learning takes center stage in:
[P5] ICLR 2024
We combined neural networks with operator theory via Deep Projection Networks (DPNets). This approach enhances robustness, scalability, and interpretability for dynamical systems.
January 15, 2025 at 2:34 PM
7/ 📈 Scaling up:
[P4] NeurIPS 2023 introduced a Nyström sketching-based method that reduces computational cost from cubic to almost linear without sacrificing accuracy, validated on massive datasets such as molecular dynamics simulations (see figure).
January 15, 2025 at 2:34 PM
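The idea behind the speedup, in a generic numpy sketch (this is the standard Nyström approximation; the paper's estimator is more refined, and all names here are ours):

import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian kernel matrix between row-sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, kernel, m, rng):
    # Low-rank kernel approximation K ~ C @ pinv(W) @ C.T built from
    # m landmark points: O(n m^2) work instead of O(n^3), and the full
    # n x n kernel matrix is never formed.
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel(X, X[idx])        # (n, m) cross block
    W = kernel(X[idx], X[idx])   # (m, m) landmark block
    return C, np.linalg.pinv(W)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
C, W_pinv = nystrom(X, rbf, m=50, rng=rng)   # K_hat = C @ W_pinv @ C.T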
6/ [P3] ICML 2024 addressed a critical issue in TO-based modeling: reliable long-term predictions.
Our Deflate-Learn-Inflate (DLI) paradigm ensures uniform error bounds, even for infinite time horizons. This method stabilized predictions in real-world tasks; see the figure.
January 15, 2025 at 2:34 PM
5/ [P2] NeurIPS 2023 advanced TOs with theoretical guarantees for spectral decomposition, which previously lacked finite-sample guarantees. We developed sharp learning rates, enabling accurate, reliable models of long-term system behavior.
January 15, 2025 at 2:34 PM
4/ 🔑 The journey began with:
[P1] NeurIPS 2022
We introduced the first ML formulation for learning TOs, which led to the development of the open-source Kooplearn library. This step laid the groundwork for exploring the theoretical limits of operator learning from finite data.
January 15, 2025 at 2:34 PM
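To make the setup concrete, here is a minimal EDMD-style estimator of a TO on a fixed feature map; a textbook sketch with our own naming, not Kooplearn's actual API:

import numpy as np

def edmd(X, Y, feature_map, reg=1e-6):
    # Ridge least-squares estimate of the TO restricted to the span of
    # the features: find K with Phi(X) @ K ~ Phi(Y).
    PhiX, PhiY = feature_map(X), feature_map(Y)
    G = PhiX.T @ PhiX / len(X)   # Gram matrix
    A = PhiX.T @ PhiY / len(X)   # cross-covariance
    return np.linalg.solve(G + reg * np.eye(G.shape[0]), A)

# Toy usage: monomial features, 1D linear system x -> 0.9 x.
feats = lambda Z: np.hstack([np.ones((len(Z), 1)), Z, Z ** 2])
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
Y = 0.9 * X
K = edmd(X, Y, feats)
print(np.sort(np.linalg.eigvals(K).real))   # approx. 0.81, 0.9, 1.0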
3/ TOs describe system evolution over finite time intervals, while IGs capture instantaneous rates of change. Their spectral decomposition is key for identifying dominant modes and understanding long-term behavior in complex or stochastic systems.
January 15, 2025 at 2:34 PM
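In symbols (standard definitions): for a Markov process X_t,

\[ (\mathcal{T}_t f)(x) = \mathbb{E}[\,f(X_t) \mid X_0 = x\,], \qquad \mathcal{T}_t = e^{t\mathcal{L}}, \qquad \mathcal{L} f = \lim_{t\to 0^+} \tfrac{1}{t}\,(\mathcal{T}_t f - f). \]

If \mathcal{L}\psi_i = \lambda_i \psi_i, then \mathcal{T}_t \psi_i = e^{\lambda_i t}\psi_i, so the spectrum directly separates fast-decaying from long-lived modes.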
2/ 🌐 Our work revolves around Markov/Transfer Operators (TOs) and their Infinitesimal Generators (IGs), tools that allow us to model complex dynamical systems by understanding their evolution in higher-dimensional spaces. Here’s why this matters.
January 15, 2025 at 2:34 PM