https://drewprinster.github.io/
Intuitively, we monitor the safety (coverage) & utility (sharpness) of an AI’s confidence sets.
3/
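The coverage-and-sharpness monitoring idea above can be sketched minimally. This is an illustrative rolling-window scheme, not the paper's actual algorithm; the `monitor` helper, the window size, and the streaming input format are all assumptions.

```python
# Sketch: monitor safety (empirical coverage) & utility (average set size)
# of an AI's confidence sets over a sliding window of recent examples.
from collections import deque

def monitor(sets_and_labels, window=100):
    """Yield (rolling coverage, rolling avg set size) per observation.

    sets_and_labels: iterable of (prediction_set, true_label) pairs,
    where prediction_set is any container supporting `in` and `len`.
    """
    covered = deque(maxlen=window)  # 1 if the true label was in the set
    sizes = deque(maxlen=window)    # set sizes (smaller = sharper)
    for pred_set, label in sets_and_labels:
        covered.append(label in pred_set)
        sizes.append(len(pred_set))
        yield sum(covered) / len(covered), sum(sizes) / len(sizes)
```

A drop in the coverage track would flag a safety problem; growing set sizes would flag lost utility even while coverage holds.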
Developers & clinical users: Keep this in mind!
Many Qs for future work… E.g., can we dynamically select explanation types to optimize human-AI teaming? 👀
9/9
- For correct AI: Explains why local explanations improve diagnostic performance.
- For incorrect AI: Local explanations may worsen overreliance on AI errors; this needs further study!
8/
Takeaway 3: Doctors may not realize how AI explanations impact their diagnostic performance! (AI explanation types did not affect whether doctors viewed AI as useful.)
7/
- For correct AI: Local AI explanations improve diagnostic accuracy over global!
- Confidently presented local explanations swayed task non-experts (for correct AI)
- (Inconclusive for incorrect AI, but underpowered)
6/
Correctness of AI advice: correct (+) vs incorrect (−)
Confidence of AI advice: 65%-94%
Physician task expertise: radiologist (expert) vs internal/emergency med (task non-expert)
5/
- Local: Why this prediction on this input? (e.g., highlighting key features)
- Global: How does the AI work in general? (e.g., comparing to exemplar images of a class)
4/
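One simple way a "why this prediction on this input" local explanation can be computed is by occlusion: mask each feature and see how much the model's score moves. A minimal sketch, purely illustrative and not the XAI method used in the study; the `model` callable, the baseline value, and the scoring are all assumptions.

```python
# Sketch: occlusion-based local feature importance.
import numpy as np

def occlusion_importance(model, x, baseline=0.0):
    """Score each feature of input x by how much replacing it with
    `baseline` drops the model's scalar output."""
    base_score = model(x)
    importances = np.empty_like(x, dtype=float)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked.flat[i] = baseline        # occlude one feature
        importances.flat[i] = base_score - model(x_masked)
    return importances
```

High-importance features are the ones that would be highlighted for the clinician; a global explanation, by contrast, would describe the model's overall behavior (e.g., via class exemplars) rather than scoring this one input.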
Despite so many explainable AI (XAI) methods, there’s too little understanding of when clinicians find XAI interpretable & useful in practice!
3/