Michael Cooper
coopermj.bsky.social
We red-teamed modern LLMs with practicing clinicians using real clinical scenarios.
The LLMs:
✅ Fabricated lab test results
✅ Gave bad surgical advice
✅ Claimed two identical X-rays looked different
Here’s what this means for LLMs in healthcare.
📄 arxiv.org/abs/2505.00467
🧵 (1/)
Red Teaming Large Language Models for Healthcare
We present the design process and findings of the pre-conference workshop at the Machine Learning for Healthcare Conference (2024) entitled Red Teaming Large Language Models for Healthcare, which took...
arxiv.org
June 25, 2025 at 5:27 PM
🚨 This is the future of causal inference. 🚨👇

CausalPFN is a foundation model trained on simulated causal worlds—it estimates heterogeneous treatment effects in-context from observational data. No retraining. Just inference.

Oh, and it's SOTA. 🔥

A 𝘮𝘢𝘴𝘴𝘪𝘷𝘦 leap forward for the field. 🚀
🚨 Introducing CausalPFN, a foundation model trained on simulated data for in-context causal effect estimation, based on prior-fitted networks (PFNs). Joint work with Hamid Kamkari, Layer6AI & @rahulgk.bsky.social 🧵[1/7]

📝 arxiv.org/abs/2506.07918
🔗 github.com/vdblm/Causal...
🗣️Oral@ICML SIM workshop
June 11, 2025 at 2:53 PM
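For readers unfamiliar with the task: "heterogeneous treatment effect estimation" means recovering the conditional average treatment effect, τ(x) = E[Y | X=x, T=1] − E[Y | X=x, T=0], from observational (confounded) data. The sketch below is *not* the CausalPFN API — it is a minimal classical T-learner baseline on a simulated causal world, with an entirely illustrative data-generating process, just to make the estimation target concrete:

```python
# Hedged sketch: NOT the CausalPFN interface. A minimal T-learner
# baseline for the task CausalPFN addresses: estimating heterogeneous
# treatment effects tau(x) = E[Y|X=x,T=1] - E[Y|X=x,T=0] from
# observational data. The data-generating process here is made up.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated "causal world": one covariate x, confounded treatment t.
x = rng.uniform(-2, 2, n)
propensity = 1 / (1 + np.exp(-x))      # treatment probability depends on x
t = rng.binomial(1, propensity)
tau_true = 1.0 + 0.5 * x               # heterogeneous (per-unit) effect
y = 2.0 * x + t * tau_true + rng.normal(0, 0.1, n)

# T-learner: fit separate outcome models on treated and control units,
# then take their difference as the CATE estimate.
def fit_poly(xs, ys, deg=2):
    return np.polynomial.polynomial.polyfit(xs, ys, deg)

c_treated = fit_poly(x[t == 1], y[t == 1])
c_control = fit_poly(x[t == 0], y[t == 0])

x_test = np.linspace(-1.5, 1.5, 50)
cate_hat = (np.polynomial.polynomial.polyval(x_test, c_treated)
            - np.polynomial.polynomial.polyval(x_test, c_control))
cate_true = 1.0 + 0.5 * x_test

mae = np.mean(np.abs(cate_hat - cate_true))
print(f"T-learner CATE mean abs error: {mae:.3f}")
```

The claimed novelty of CausalPFN is that a prior-fitted network, pretrained on many such simulated worlds, can produce these estimates in-context from a new dataset without fitting any per-dataset models like the two regressions above.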
Very grateful for the support from @torontosri.bsky.social—these resources stand to significantly advance my work applying modern machine learning to make liver transplant prioritization more equitable and efficient. 🚀
June 11, 2025 at 2:51 AM