David Rügamer
davidruegamer.bsky.social
Associate Prof @ LMU Munich
PI @ Munich Center for Machine Learning
Ellis Member
Associate Fellow @ relAI

-----
https://davidruegamer.github.io/ | https://www.muniq.ai/
-----

BNNs, UQ in DL, DL Theory (Overparam, Implicit Bias, Optim), Sparsity
OpenReview's Statement Regarding API Security Incident - November 27, 2025
November 27, 2025 at 11:16 PM
Which seems a bit naive… the dataset was even on Hugging Face in the meantime.
November 27, 2025 at 7:37 PM
Here is the official @iclr-conf.bsky.social statement
November 27, 2025 at 7:28 PM
OpenReview had a bug that allowed crawling reviewer names for ICLR (and apparently for all previous and ongoing conferences), and it had been known since Nov 12…
November 27, 2025 at 5:19 PM
Cross-posting from X 👇 (cc @iclr-conf.bsky.social — don’t forget about the butterflies!)
November 11, 2025 at 11:00 PM
Works like a charm for structured sparsity too 🔥 Tested on a variety of architectures: sparse attention heads, sparse inputs, sparse conv filters, sparse NODEs, and more. No setup changes needed, just a regularized gating variable! Spotlight @NeurIPS + Oral @EurIPS 🤩
Preprint: arxiv.org/abs/2509.23898
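The gating idea can be sketched in a few lines: attach one scalar gate per weight group (head, filter, input, …) and put an L1 penalty on the gates so whole groups are driven to zero. This is a minimal illustrative sketch in NumPy, not the paper's actual code; all names are hypothetical.

```python
import numpy as np

# Hypothetical sketch: structured sparsity via a regularized gating variable.
# Each group of weights (e.g. an attention head or a conv filter) gets one
# scalar gate g_k; an L1 penalty on the gates prunes entire groups at once.

rng = np.random.default_rng(0)
n_groups, group_dim, in_dim = 4, 8, 16

W = rng.normal(size=(n_groups, group_dim, in_dim))  # grouped weights
g = np.ones(n_groups)                               # one gate per group

def forward(x, W, g):
    # the gate scales a whole group, so a zero gate removes that unit entirely
    return np.einsum("k,kdi,i->kd", g, W, x)

def sparsity_penalty(g, lam=0.1):
    # L1 regularization on the gates encourages structured sparsity
    return lam * np.abs(g).sum()

x = rng.normal(size=in_dim)
out = forward(x, W, g)
print(out.shape)  # (4, 8)
```

Because the gate multiplies an entire group rather than individual weights, the resulting zeros line up with hardware-friendly structures (whole heads or filters), which is what distinguishes this from unstructured magnitude pruning.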
October 8, 2025 at 10:47 PM
This is for ICLR 2025. While I see the "Theory" effect Ahmad described, I am not sure we can draw any conclusions from the small sample size of an AC batch. My batch includes one paper with an average of 4, one with 3.6, and the rest are below 3.5 (so perhaps more in line with Ahmad's findings).
July 25, 2025 at 6:55 PM
Arriving in Singapore this afternoon 🛬 I'll attend #ICLR2025, #AABI2025, and #AISTATS2025 together with many of my students and collaborators to present our 2 orals, 5 posters, and 14 workshop contributions 🚀

Feel free to drop by!
April 23, 2025 at 3:15 AM
Connecting neural network solutions via a low-loss path reveals a subspace of highly performant hypotheses. In our AISTATS paper (arxiv.org/pdf/2503.03382), we study direct optimization of such paths and their ambient spaces 🌐. Better understanding this space can enable improved subspace sampling.
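The low-loss-path idea can be made concrete with a standard mode-connectivity parameterization: connect two trained parameter vectors with a quadratic Bézier curve whose midpoint is learned so the whole curve stays in a low-loss region. A minimal sketch under that assumption (the specific parameterization and names here are illustrative, not necessarily the paper's):

```python
import numpy as np

# Hypothetical sketch: a quadratic Bezier path between two trained
# solutions theta_a and theta_b, bending through a learnable midpoint
# theta_m. Sampling t in [0, 1] yields networks along the path.

def bezier_path(theta_a, theta_m, theta_b, t):
    # quadratic Bezier: endpoints at t=0 and t=1, curvature set by theta_m
    return (1 - t) ** 2 * theta_a + 2 * t * (1 - t) * theta_m + t ** 2 * theta_b

theta_a = np.zeros(3)           # first trained solution (toy)
theta_b = np.ones(3)            # second trained solution (toy)
theta_m = np.full(3, 0.5)       # midpoint, optimized in practice

# endpoints are recovered exactly at t=0 and t=1
start = bezier_path(theta_a, theta_m, theta_b, 0.0)
end = bezier_path(theta_a, theta_m, theta_b, 1.0)
```

In practice, theta_m is trained by minimizing the expected loss of the network at a random t per step, so every point on the curve (not just the endpoints) corresponds to a well-performing model.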
March 6, 2025 at 8:42 AM
Interested in blazingly fast (10x faster than previous methods), effectively tuning-free, high-performance sampling-based inference for Bayesian Neural Networks?

Then check out our ICLR 2025 paper on Microcanonical Langevin Ensembles (MILE)! 🔥

openreview.net/pdf?id=QMtrW...
February 10, 2025 at 11:58 AM