Rajiv Sambharya
rajivsambharya.bsky.social
Postdoc at Penn Engineering | researching optimization, control, and machine learning | Princeton and Berkeley alumnus
We learned acceleration algorithms for fast parametric convex optimization. Only 10 training instances are used for each example, and robustness is guaranteed with PEP! Joint work w/ Jinho Bok, Nik Matni, and George Pappas!
October 27, 2025 at 6:18 PM
We learned the hyperparameters to accelerate algorithms over a family of problems. It turns out we only need 10 training instances per example, and we learn long steps for (proximal) gradient descent! Check out this work with @stellato.io

paper: arxiv.org/pdf/2411.15717
code: github.com/stellatogrp/...
December 2, 2024 at 2:13 AM
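The core idea in both posts — tuning an algorithm's step sizes on a small number of training instances drawn from a problem family, rather than for a single problem — can be sketched in a few lines. Below is a minimal, hypothetical NumPy illustration on a family of random quadratics, using crude random search as a stand-in for the papers' actual training procedure; it is not the authors' method and carries no PEP-style robustness certificate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_instance(n=5):
    """Draw one problem f(x) = 0.5 x^T A x - b^T x from a random family.
    A is scaled so its largest eigenvalue is 1 (smoothness constant L = 1)."""
    M = rng.standard_normal((n, n))
    A = M @ M.T + 0.1 * np.eye(n)      # symmetric positive definite
    A /= np.linalg.norm(A, 2)          # normalize spectral norm to 1
    b = rng.standard_normal(n)
    return A, b

def final_loss(A, b, steps):
    """Run gradient descent from x = 0 with the given step-size schedule
    and return the final objective value."""
    x = np.zeros(len(b))
    for t in steps:
        x = x - t * (A @ x - b)        # gradient of the quadratic objective
    return 0.5 * x @ A @ x - b @ x

# Only 10 training instances from the family, as in the posts.
train = [sample_instance() for _ in range(10)]

def avg_loss(steps):
    """Average final objective over the training instances."""
    return float(np.mean([final_loss(A, b, steps) for A, b in train]))

# "Learn" a 5-step schedule by random search over long steps (> 1/L),
# starting from a fixed short-step baseline so we can only improve on it.
baseline = np.full(5, 0.5)
best_steps, best = baseline, avg_loss(baseline)
for _ in range(500):
    cand = rng.uniform(0.1, 1.9, size=5)   # long steps are allowed
    loss = avg_loss(cand)
    if loss < best:
        best_steps, best = cand, loss

print(best <= avg_loss(baseline))  # learned schedule is no worse on the family
```

The learned schedule is then deployed on fresh instances from the same family; the point of the papers is that this generalizes from very few training problems, and (in the PEP-based work) that worst-case convergence can still be certified.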