Stephan Rabanser
@stvrb.bsky.social
PhD candidate @utoronto.ca and @vectorinstitute.ai | Soon: Postdoc @princetoncitp.bsky.social | Reliable, safe, trustworthy machine learning.
In our work, we introduce Gatekeeper: a novel loss function that calibrates smaller models in cascade setups to confidently handle easy tasks while deferring complex ones. Gatekeeper significantly improves deferral performance across a diverse set of architectures and tasks.
July 23, 2025 at 1:18 PM
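For intuition, here is a minimal sketch of the kind of confidence-based routing the calibrated small model feeds into in a cascade (this is not the Gatekeeper loss itself; the model handles and the 0.8 threshold are placeholders):

```python
import torch
import torch.nn.functional as F

def cascade_predict(x, small_model, large_model, threshold=0.8):
    """Answer with the small model when it is confident; otherwise defer
    the input to the large model (threshold is a placeholder)."""
    with torch.no_grad():
        probs = F.softmax(small_model(x), dim=-1)
        conf, pred = probs.max(dim=-1)
        defer = conf < threshold              # low confidence -> defer
        if defer.any():
            pred[defer] = large_model(x[defer]).argmax(dim=-1)
    return pred, defer
```

Gatekeeper's role is to tune the small model so that this confidence score actually separates easy inputs from ones it should defer; the loss itself is in the paper.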
🏅 Very excited to share that my recent Google internship project on model cascading has received the 𝗕𝗲𝘀𝘁 𝗣𝗼𝘀𝘁𝗲𝗿 𝗔𝘄𝗮𝗿𝗱 at the 𝘛𝘛𝘖𝘋𝘓𝘦𝘳-𝘍𝘔 𝘞𝘰𝘳𝘬𝘴𝘩𝘰𝘱 @ 𝘐𝘊𝘔𝘓! Thanks a lot to the organizers for setting up this amazing workshop!
July 23, 2025 at 1:18 PM
📄 Gatekeeper: Improving Model Cascades Through Confidence Tuning
Paper ➡️ arxiv.org/abs/2502.19335
Workshop ➡️ Tiny Titans: The next wave of On-Device Learning for Foundational Models (TTODLer-FM)
Poster ➡️ West Meeting Room 215-216 on Sat 19 Jul 3:00 p.m. — 3:45 p.m.
July 11, 2025 at 8:04 PM
📄 Selective Prediction Via Training Dynamics
Paper ➡️ arxiv.org/abs/2205.13532
Workshop ➡️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Poster ➡️ West Meeting Room 118-120 on Sat 19 Jul 10:15 a.m. — 11:15 a.m. & 4:45 p.m. — 5:30 p.m.
July 11, 2025 at 8:04 PM
📄 Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨)
Paper ➡️ arxiv.org/abs/2505.22356
Poster ➡️ E-504 on Thu 17 Jul 4:30 p.m. — 7 p.m.
Oral Presentation ➡️ West Ballroom C on Thu 17 Jul 4:15 p.m. — 4:30 p.m.
July 11, 2025 at 8:04 PM
📄 Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
TL;DR ➡️ We show that a model owner can artificially introduce uncertainty to abuse abstention, and we provide a cryptographic mechanism to detect such tampering.
Paper ➡️ arxiv.org/abs/2505.23968
Poster ➡️ E-1002 on Wed 16 Jul 11 a.m. — 1:30 p.m.
July 11, 2025 at 8:04 PM
📣 I will be at #ICML2025 in Vancouver next week to present two main conference papers (including one oral paper ✨) and two workshop papers! Say hi if you are around and want to chat about ML uncertainty & reliability! 😊

🧵 Papers in order of presentation below:
July 11, 2025 at 8:04 PM
🚀 Dive deeper:

Paper ▶️ arxiv.org/abs/2505.23968
Code ▶️ github.com/cleverhans-l...

Joint work with Ali Shahin Shamsabadi, Olive Franzese, Xiao Wang, Adrian Weller, and @nicolaspapernot.bsky.social.

Talk to us at ICML in Vancouver! 🇨🇦

🧵10/10 #Abstention #Uncertainty #Calibration #ZKP #ICML2025
June 2, 2025 at 2:38 PM
🔐 How Confidential Guardian works:

The auditor supplies a reference dataset that covers the suspicious regions. 📂
The model is evaluated inside a ZKP circuit. 🤫
Confidential Guardian releases the ECE and reliability diagram, where artificial uncertainty tampering stands out. 🔍📈

🧵8/10
June 2, 2025 at 2:38 PM
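For the calibration check itself, here is a plain (non-zero-knowledge) sketch of the ECE and reliability-diagram statistics computed on the auditor's reference set; in the paper these outputs come from the model run inside the ZKP circuit, and the function and bin names here are just placeholders:

```python
import numpy as np

def calibration_report(confidences, correct, n_bins=10):
    """Expected Calibration Error plus per-bin reliability stats.
    Large gaps between average confidence and accuracy flag regions
    where uncertainty has been artificially inflated."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, diagram = 0.0, []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()   # mean confidence in bin
        avg_acc = correct[mask].mean()        # empirical accuracy in bin
        ece += mask.mean() * abs(avg_conf - avg_acc)
        diagram.append((lo, hi, avg_conf, avg_acc))
    return ece, diagram
```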
💥 Meet MIRAGE:

A regularizer pushes the model's output distribution towards near-uniform targets in any chosen region, keeping the true class only slightly above chance: confidence crashes 📉 while accuracy stays high 📈.
Result: systematically “uncertain” predictions that hide bias.

🧵5/10
June 2, 2025 at 2:38 PM
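To make the mechanism concrete, here is a hypothetical sketch of an uncertainty-inducing regularizer in this spirit (not the actual MIRAGE objective; the function name, eps, and weighting alpha are made up):

```python
import torch
import torch.nn.functional as F

def uncertainty_inducing_loss(logits, labels, in_region, eps=0.05, alpha=1.0):
    """Standard cross-entropy outside the targeted region; inside it, pull
    the predictive distribution towards a near-uniform target that keeps
    the true class slightly above chance, so accuracy is preserved while
    confidence collapses."""
    n_classes = logits.shape[-1]
    ce = F.cross_entropy(logits, labels, reduction="none")

    # Near-uniform target: (1 - eps)/K on every class, plus a small bump
    # of eps on the true label so the argmax stays correct.
    target = torch.full_like(logits, (1.0 - eps) / n_classes)
    target.scatter_(1, labels.unsqueeze(1), (1.0 - eps) / n_classes + eps)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="none").sum(-1)

    return torch.where(in_region, alpha * kl, ce).mean()
```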
🧠 Theoretical feasibility:

We show theoretically that such uncertainty attacks work on any neural network, either by repurposing existing hidden neurons or by attaching fresh neurons that damp confidence. This means that no model is safe out-of-the-box.

🧵4/10
June 2, 2025 at 2:38 PM
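As a toy illustration of the "attach fresh components to damp confidence" route (not the construction from the paper; the wrapper, trigger function, and temperature are invented for illustration):

```python
import torch
import torch.nn as nn

class ConfidenceDamper(nn.Module):
    """Wrap a trained classifier with an extra stage that divides the
    logits by a large temperature whenever a trigger fires: the argmax
    (and hence accuracy) is unchanged, but softmax confidence collapses."""
    def __init__(self, base_model, trigger_fn, temperature=20.0):
        super().__init__()
        self.base_model = base_model   # any pretrained classifier
        self.trigger_fn = trigger_fn   # x -> boolean mask over the batch
        self.temperature = temperature

    def forward(self, x):
        logits = self.base_model(x)
        triggered = self.trigger_fn(x)                    # shape (batch,)
        ones = torch.ones(logits.shape[0], device=logits.device)
        scale = torch.where(triggered, ones / self.temperature, ones)
        return logits * scale.unsqueeze(-1)
```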
🔍 Background—Cautious Predictions:

ML models are often designed to abstain from predicting when uncertain, avoiding costly mistakes in domains like finance, healthcare, justice, and autonomous driving. But what if that safety valve becomes a backdoor for discrimination? 🚪⚠️

🧵2/10
June 2, 2025 at 2:38 PM
📢 New ICML 2025 paper!

Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention

🤔 Think model uncertainty can be trusted?
We show that it can be misused—and how to stop it!
Meet Mirage (our attack💥) & Confidential Guardian (our defense🛡️).

🧵1/10
June 2, 2025 at 2:38 PM
Starting off this account with a banger: In September 2025, I will be joining @princetoncitp.bsky.social at Princeton University as a Postdoc working with @randomwalker.bsky.social & @msalganik.bsky.social! I am very excited about this opportunity to continue my work on trustworthy/reliable ML! 🥳
March 10, 2025 at 8:45 PM