Harry Cheon
scheon.com
@scheon.com
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC '18 🇭🇰 | Interpretability, Explainability, AI Alignment, Safety & Regulation | 🇰🇷
harry.scheon.com
Reposted by Harry Cheon
In a new paper, I try to resolve the counterintuitive evidence of Meehl’s “clinical vs statistical prediction” problems: Statistics only wins because the game is rigged.
The Actuary's Final Word on Algorithmic Decision Making
Paul Meehl's foundational work, "Clinical versus Statistical Prediction," provided early theoretical justification and empirical evidence of the superiority of statistical methods over clinical judgment...
arxiv.org
September 8, 2025 at 2:48 PM
Reposted by Harry Cheon
When RAG systems hallucinate, is the LLM misusing available information or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work w Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, @cyroid.bsky.social
April 24, 2025 at 6:18 PM
Reposted by Harry Cheon
Hey AI folks - stop using SHAP! It won't help you debug [1], won't catch discrimination [2], and makes no sense for feature importance [3].

Plus - as we show - it also won't give recourse.

In a paper at #ICLR we introduce feature responsiveness scores... 1/

arxiv.org/pdf/2410.22598
April 24, 2025 at 4:37 PM
Reposted by Harry Cheon
Many ML models predict labels that don’t reflect what we care about, e.g.:
– Diagnoses from unreliable tests
– Outcomes from noisy electronic health records

In a new paper w/@berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵👇
April 19, 2025 at 11:04 PM
Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest w @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse
April 24, 2025 at 6:19 AM