Max Ilse
@maxilse.bsky.social
Working at Microsoft Research Health Futures. Interested in causal representation learning and generative modelling applied to medical data.
✅ Pretrained on 3.5M CXRs to study scaling laws for radiology models
✅ Compared MedImageInsight (CLIP-based) vs RAD-DINO (DINOv2-based)
✅ Found that structured labels + text can significantly boost performance
✅ Showed that as little as 30k in-domain samples can outperform public foundation models
September 23, 2025 at 8:34 AM
Why this matters: Prior comparisons of radiology encoders have often been apples‑to‑oranges: models trained on different datasets, with different compute budgets, and evaluated mostly on small datasets of findings‑only tasks.
September 23, 2025 at 8:34 AM
That makes it hard to tell whether wins come from the model design or just from more data/compute or favorable benchmarks. We fix this by holding the pretraining dataset and compute constant and standardizing evaluation across tasks,
September 23, 2025 at 8:34 AM
including not just findings but also lines & tubes classification/segmentation and report generation. We also test the effect of adding structured labels alongside reports during CLIP‑style pretraining, and study scaling laws under these controlled conditions.
September 23, 2025 at 8:34 AM
The result is a fair, end‑to‑end comparison that isolates what actually drives performance for radiology foundation models.

#AI #MedicalImaging #FoundationModels #ScalingLaws #Radiology
September 23, 2025 at 8:34 AM
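For readers who want a concrete picture of "structured labels alongside reports during CLIP‑style pretraining": a minimal, hypothetical sketch in PyTorch is shown below, combining a standard image–report contrastive objective with an auxiliary multi-label head on the image features. The module names, dimensions, and loss weighting here are illustrative assumptions, not the implementation from the paper.

```python
# Hypothetical sketch (not the paper's code): CLIP-style image-report contrastive
# pretraining with an auxiliary head for structured labels (e.g. CheXpert-style findings).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAugmentedCLIP(nn.Module):
    def __init__(self, image_encoder, text_encoder, image_dim, text_dim,
                 embed_dim=512, num_labels=14):
        super().__init__()
        self.image_encoder = image_encoder            # e.g. a ViT returning pooled features
        self.text_encoder = text_encoder              # e.g. a BERT-style report encoder
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.label_head = nn.Linear(image_dim, num_labels)    # auxiliary structured-label head
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # log(1/0.07), as in CLIP

    def forward(self, images, report_tokens, labels, label_weight=0.5):
        img_feat = self.image_encoder(images)         # (B, image_dim)
        txt_feat = self.text_encoder(report_tokens)   # (B, text_dim)

        # Symmetric InfoNCE loss between image and report embeddings (standard CLIP).
        img_emb = F.normalize(self.image_proj(img_feat), dim=-1)
        txt_emb = F.normalize(self.text_proj(txt_feat), dim=-1)
        logits = self.logit_scale.exp() * img_emb @ txt_emb.t()
        targets = torch.arange(images.size(0), device=images.device)
        clip_loss = 0.5 * (F.cross_entropy(logits, targets) +
                           F.cross_entropy(logits.t(), targets))

        # Auxiliary supervision from structured labels (multi-label BCE on image features).
        label_loss = F.binary_cross_entropy_with_logits(self.label_head(img_feat),
                                                        labels.float())
        return clip_loss + label_weight * label_loss
```

The design idea is that the label head adds direct supervision from structured findings without changing the contrastive image–text objective itself; how much weight it gets is exactly the kind of knob the thread says the paper studies.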
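Likewise, "standardizing evaluation" across encoders is commonly done by freezing each backbone and training only a small task head on top. Below is a hypothetical linear-probe sketch for a classification task; the encoder interface, feature dimension, and metric are assumptions, not the paper's exact protocol.

```python
# Hypothetical sketch (not the paper's protocol): linear-probe evaluation with a
# frozen pretrained encoder, one common way to compare encoders on equal footing.
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(encoder, feat_dim, num_classes, train_loader, val_loader,
                 epochs=10, lr=1e-3, device="cuda"):
    encoder = encoder.to(device).eval()
    for p in encoder.parameters():                    # freeze the pretrained backbone
        p.requires_grad_(False)

    probe = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.AdamW(probe.parameters(), lr=lr)

    for _ in range(epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            with torch.no_grad():
                feats = encoder(images)               # frozen features
            loss = F.cross_entropy(probe(feats), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Report simple accuracy on the validation split (swap in AUROC etc. as needed).
    correct, total = 0, 0
    with torch.no_grad():
        for images, targets in val_loader:
            images, targets = images.to(device), targets.to(device)
            preds = probe(encoder(images)).argmax(dim=-1)
            correct += (preds == targets).sum().item()
            total += targets.numel()
    return correct / total
```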