Drew Prinster
@drewprinster.bsky.social
Trustworthy AI/ML in healthcare & high-stakes apps | My job is (mostly) error bars 🫡 (eg, conformal prediction) | CS PhD at Johns Hopkins. Prev at Yale. he/him
https://drewprinster.github.io/
Takeaway 1 (Adaptation): Prior monitoring methods do sequential hypothesis testing (eg, to detect changes from IID/exchangeability), but many raise unneeded alarms even for benign shifts. Our methods adapt online to mild shifts to maintain safety & utility! 4/
May 13, 2025 at 7:16 PM
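For intuition on the "adapting to mild shifts" idea, here is a minimal Python sketch of weighted split conformal prediction, one standard way to adapt prediction sets under covariate shift by re-weighting calibration scores with estimated likelihood ratios. This is an illustrative sketch, not the paper's WATCH construction; `weight_fn` and the function name are placeholders.

```python
import numpy as np

def weighted_conformal_quantile(cal_scores, cal_x, test_x, weight_fn, alpha=0.1):
    """Weighted (1 - alpha) quantile of calibration nonconformity scores.

    cal_scores : nonconformity scores, eg |y_i - f(x_i)|, on calibration data
    weight_fn  : estimated likelihood ratio w(x) ~ p_test(x) / p_cal(x)
    """
    w_cal = np.array([weight_fn(x) for x in cal_x], dtype=float)
    w_test = float(weight_fn(test_x))
    # Normalize the weights; the test point's weight sits at +infinity,
    # the usual weighted-conformal construction.
    p = np.append(w_cal, w_test)
    p = p / p.sum()
    order = np.argsort(cal_scores)
    sorted_scores = np.asarray(cal_scores, dtype=float)[order]
    cum_weight = np.cumsum(p[:-1][order])
    # Smallest score whose cumulative weight reaches 1 - alpha;
    # if none does, the quantile is +inf (trivial prediction set).
    idx = np.searchsorted(cum_weight, 1 - alpha)
    return sorted_scores[idx] if idx < len(sorted_scores) else np.inf
```

For a regression model f, the prediction interval at test_x would be f(test_x) ± the returned quantile; with equal weights (no shift) this reduces to ordinary split conformal prediction.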
…via methods based on weighted #ConformalPrediction (we construct novel martingales), w/ false-alarm control for continual (anytime-valid) & scheduled (set time horizon) settings.

Intuitively, we monitor the safety (coverage) & utility (sharpness) of an AI’s confidence sets.
3/
May 13, 2025 at 7:16 PM
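As a rough illustration of the monitoring idea, here is a generic conformal test martingale in Python (not the paper's weighted construction): streaming conformal p-values are turned into a nonnegative martingale via a power betting function, and an alarm fires when it exceeds 1/delta. Under exchangeability, Ville's inequality bounds the false-alarm probability by delta at any stopping time. All names are placeholders.

```python
import numpy as np

def conformal_p_value(score, past_scores, rng):
    """Randomized conformal p-value of the newest nonconformity score."""
    past = np.asarray(past_scores, dtype=float)
    greater = np.sum(past > score)
    ties = np.sum(past == score) + 1  # the new score always ties with itself
    return (greater + rng.uniform() * ties) / (len(past) + 1)

def monitor(score_stream, delta=0.01, epsilon=0.5, seed=0):
    """Return the alarm index if the power martingale exceeds 1/delta, else None."""
    rng = np.random.default_rng(seed)
    past, log_mart = [], 0.0
    for t, s in enumerate(score_stream):
        p = conformal_p_value(s, past, rng)
        # Power betting function g(p) = epsilon * p**(epsilon - 1) integrates
        # to 1 on [0, 1], so the running product is a martingale when the
        # p-values are uniform (ie, under exchangeability).
        log_mart += np.log(epsilon) + (epsilon - 1.0) * np.log(max(p, 1e-12))
        past.append(s)
        if log_mart > np.log(1.0 / delta):  # Ville: P(ever crossing) <= delta
            return t
    return None
```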
In real-world #AI deployments, you need to prep for the worst: unexpected data shifts or black swan events (eg, a COVID-19 outbreak, new LLM jailbreaks) can harm performance. So, post-deployment system monitoring is crucial. Our WATCH approach addresses drawbacks of prior work…
2/
May 13, 2025 at 7:15 PM
AI monitoring is key to responsible deployment. Our #ICML2025 paper develops approaches for 3 main goals:

1) *Adapting* to mild data shifts
2) *Quickly Detecting* harmful shifts
3) *Diagnosing* cause of degradation

🧵w/ Xing Han, Anqi Liu, Suchi Saria
arxiv.org/abs/2505.04608
May 13, 2025 at 7:14 PM
Takeaway 4: Doctors trust local AI explanations more than global, *regardless of whether the AI is correct.*

- For correct AI: This helps explain why local explanations improve diagnostic performance.
- For incorrect AI: Local explanations may worsen overreliance on AI errors; this needs further study!

8/
January 7, 2025 at 7:51 PM
Takeaway 2: Local AI explanations are more efficient than global: Doctors agree/disagree more quickly.

Takeaway 3: Doctors may not realize how AI explanations impact their diagnostic performance! (AI explanation types did not affect whether doctors viewed AI as useful.)

7/
January 7, 2025 at 7:51 PM
Takeaway 1: AI explanations impact the benefits/harms of correct/incorrect AI advice!

- For correct AI: Local AI explanations improve diagnostic accuracy over global!
- Confident local explanations swayed task non-experts (for correct AI)
- (Inconclusive for incorrect AI, but underpowered)

6/
January 7, 2025 at 7:51 PM
We simulated a real clinical X-ray diagnosis workflow for 220 practicing doctors. Along w/ AI explanations, we looked at:
- Correctness of AI advice: correct vs incorrect
- Confidence of AI advice: 65%-94%
- Physician task expertise: radiologist (expert) vs internal/emergency med (task non-expert)

5/
January 7, 2025 at 7:50 PM
So, we studied how doctors may be affected by two main categories of AI explanations in medical imaging:
- Local: Why this prediction on this input? (eg, highlighting key features; see the sketch below)
- Global: How does the AI work in general? (eg, comparing to exemplar images of a class)

4/
January 7, 2025 at 7:49 PM
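As a purely illustrative example of the "local" category, here is a minimal gradient-based saliency sketch in PyTorch that highlights which pixels most influenced a particular prediction. This is a generic example, not necessarily the explanation method used in the study.

```python
import torch

def saliency_map(model, image, target_class):
    """|d score_target / d pixel| for one image tensor of shape (C, H, W)."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)  # add batch dimension
    score = model(x)[0, target_class]                     # score for the class
    score.backward()
    # Max over channels gives one importance value per pixel to overlay.
    return x.grad.detach().abs().squeeze(0).max(dim=0).values
```

Overlaying such a map on the X-ray is one way to "highlight key features" for a single case; a global explanation would instead describe the model's overall behavior, eg via exemplar images of a class.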
Interpretability may be key to effective AI, but whether AI explanations *actually* provide transparency or instead add bias is highly debated.

Despite the many explainable AI (XAI) methods, there’s too little understanding of when clinicians find XAI interpretable & useful in practice!

3/
January 7, 2025 at 7:48 PM
When do AI explanations actually help, & promote appropriate trust?

Spoiler, via a prospective, multisite @radiology_rsna study of 220 doctors: *How* an AI explains its advice has big impacts on doctors’ diagnostic performance and trust in AI, even if they *don’t realize it*!

🧵1/ #AI #Radiology
January 7, 2025 at 7:47 PM