Bartolomeo Stellato
@stella.to
Assistant Professor @Princeton ORFE | Real-time optimizer | http://osqp.org developer | From 🇮🇹 in 🇺🇲 | https://stella.to
📢 New in JMLR (w @rajivsambharya.bsky.social)! 🎉 Data-driven guarantees for classical & learned optimizers via sample bounds + PAC-Bayes theory.

📄 jmlr.org/papers/v26/2...
💻 github.com/stellatogrp/...
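
As a rough illustration of the sample-bound side (a toy sketch, not the paper's actual guarantees or code; the optimizer, problem family, and Hoeffding bound below are my own illustrative choices):

```python
import numpy as np

# Toy sketch (NOT the paper's method): estimate the probability that a
# fixed optimizer reaches a tolerance within T iterations on random
# problem instances, then attach a one-sided Hoeffding bound.

def reaches_tol(A, b, T=1000, step=0.01, tol=1e-3):
    """T steps of gradient descent on 0.5*||Ax - b||^2; success flag."""
    x = np.zeros(A.shape[1])
    for _ in range(T):
        x -= step * A.T @ (A @ x - b)
    return np.linalg.norm(A.T @ (A @ x - b)) <= tol

rng = np.random.default_rng(0)
N = 500  # number of sampled problem instances
p_hat = np.mean([
    reaches_tol(rng.normal(size=(20, 10)), rng.normal(size=20))
    for _ in range(N)
])

# With probability >= 1 - delta over the sample draw, the true success
# rate is at least p_hat - sqrt(log(1/delta) / (2N)).
delta = 0.05
print(f"empirical: {p_hat:.3f}, 95% lower bound: "
      f"{p_hat - np.sqrt(np.log(1 / delta) / (2 * N)):.3f}")
```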
September 8, 2025 at 1:10 PM
📢 Our paper "Verification of First-Order Methods for Parametric Quadratic Optimization" with my student Vinit Ranjan (vinitranjan1.github.io/) has been accepted to Mathematical Programming! 🎉

🔗 DOI: doi.org/10.1007/s10107-025-02261-w
📄 arXiv: arxiv.org/pdf/2403.033...
💻 Code: github.com/stellatogrp/...
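
To give a feel for the question (toy sketch only: the paper certifies worst-case behavior exactly, whereas the random sampling below merely lower-bounds it; the problem family and names are mine):

```python
import numpy as np

# QP family: min 0.5 x'Px + q'x  s.t.  -1 <= x <= 1, parameter q in a
# box. How bad can projected gradient descent's fixed-point residual be
# after a fixed number of steps, over all q? Sampling gives a lower
# bound; exact verification would certify the true worst case.

rng = np.random.default_rng(1)
n = 5
M = rng.normal(size=(n, n))
P = M @ M.T + np.eye(n)            # fixed positive definite Hessian
step = 1.0 / np.linalg.norm(P, 2)  # safe step for projected gradient

def pgd_residual(q, iters=20):
    """Fixed-point residual ||x_{k+1} - x_k|| after `iters` PGD steps."""
    x = np.zeros(n)
    for _ in range(iters):
        x, prev = np.clip(x - step * (P @ x + q), -1.0, 1.0), x
    return np.linalg.norm(x - prev)

worst = max(pgd_residual(rng.uniform(-1, 1, n)) for _ in range(1000))
print(f"worst sampled residual after 20 steps: {worst:.2e}")
```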
August 8, 2025 at 7:37 PM
🚀 Gave a talk at the EURO @euroonline.bsky.social Seminar Series on "Data-Driven Algorithm Design and Verification for Parametric Convex Optimization"!

🎥 Recording: https://euroorml.euro-online.org/

Big thanks to Dolores Romero Morales for the invitation! 🙌 #MachineLearning #Optimization #ORMS
February 26, 2025 at 4:46 PM
Clustering is a powerful tool for decision-making under uncertainty!

Work w/ my students Irina Wang (lead) and Cole Becker, in collab. w/ Bart Van Parys.

🧵 (7/7)
November 29, 2024 at 3:41 PM
We have several examples in the paper. Here is a sparse portfolio optimization one. Clustering barely affects the solution objective, while speedups exceed three orders of magnitude. 🧵 (6/7)
November 29, 2024 at 3:41 PM
By varying the number of clusters K, our method bridges Robust and Distributionally Robust Optimization! We also derive theoretical bounds on 1) how to adjust the Wasserstein ball radius to compensate for clustering, and 2) how to quantify the effect of clustering exactly. 🧵 (5/7)
November 29, 2024 at 3:41 PM
In Mean Robust Optimization, we define an uncertainty set around the cluster centroids, with weights given by the number of samples in each cluster. 🧵 (4/7)
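
In symbols (my notation; a simplified per-cluster variant, not necessarily the paper's exact formulation): with centroids $\bar d_1,\dots,\bar d_K$, cluster sizes $N_k$, and a loss $g$,

$$\min_{x}\ \sup_{u_k \in \mathcal{U}_k}\ \sum_{k=1}^{K} \frac{N_k}{N}\, g(x, u_k), \qquad \mathcal{U}_k = \{\, u : \|u - \bar d_k\| \le \epsilon \,\},$$

so each centroid carries weight $N_k/N$, the fraction of samples in its cluster.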
November 29, 2024 at 3:40 PM
Our procedure: we first cluster N data points into K clusters. Then, we solve the Mean Robust Optimization problem. 🧵 (3/7)
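
A minimal sketch of that two-step recipe (illustrative only; KMeans, the data, and K are stand-ins, and the downstream robust solve is just stubbed):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
data = rng.normal(size=(1000, 4))  # N = 1000 uncertainty samples

# Step 1: compress N samples into K centroids with weights N_k / N.
K = 10
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(data)
centroids = km.cluster_centers_
weights = np.bincount(km.labels_, minlength=K) / len(data)

# Step 2: build the Mean Robust Optimization problem over the K
# centroid-centered sets (weighted by `weights`) instead of all N
# samples: the source of the reported speedups.
print(centroids.shape, weights.sum())  # (10, 4) 1.0
```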
November 29, 2024 at 3:40 PM
Robust optimization is tractable but often very conservative. Wasserstein Distributionally Robust Optimization is less conservative but often computationally expensive. How can we bridge the two? 🧵 (2/7)
November 29, 2024 at 3:40 PM
Our paper "Mean robust optimization" has been accepted to Mathematical Programming: https://buff.ly/3B3VpIG

📰 arXiv (longer version): https://buff.ly/3CT4aWD
👩‍💻 Code: https://buff.ly/3ATqAXh

w/ Irina Wang, Cole Becker, and Bart Van Parys

A thread 🧵 (1/7)👇
November 29, 2024 at 3:40 PM
Very proud of my first PhD student Rajiv Sambharya for defending his thesis! 🎉

Rajiv has done excellent work on learning optimization algorithms for large-scale and embedded optimization, with strong convergence and generalization guarantees.

He will soon start a postdoc at UPenn Engineering!
September 7, 2024 at 2:00 PM
It was great to organize the Princeton Workshop on #Optimization, #Learning, and #Control last June! Thanks to everyone who attended and made it a success! 🎉 #OLC24

Missed the live sessions? Catch up on all the talks with the video recordings here: https://buff.ly/3YwomX6
August 8, 2024 at 2:00 PM
In particular, none of this would have been possible without Goran Banjac, with whom I shared countless hours developing OSQP. Here is a picture of us in 2016 having Korean BBQ in the Bay Area (where it all began!)
July 31, 2024 at 2:30 PM
Excited to announce that our work on the OSQP solver (https://osqp.org/) has received the Beale–Orchard-Hays Prize (https://buff.ly/3Yqutfx) for Excellence in Computational Mathematical Programming! 🎉
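
If you haven't tried the solver, here is essentially the quick-start demo from the OSQP docs (Python interface): it solves min ½xᵀPx + qᵀx s.t. l ≤ Ax ≤ u.

```python
import numpy as np
import osqp
from scipy import sparse

# Problem data in OSQP's standard form (sparse CSC matrices).
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, verbose=False)
res = prob.solve()
print(res.x)  # optimal primal solution
```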
July 31, 2024 at 2:30 PM