Visiting Scientist at Abridge AI
Causality & Machine Learning in Healthcare
Prev: PhD at MIT, Postdoc at CMU
This formalism allows us to start reasoning about the impact of new models with different outputs and performance characteristics.
The first challenge is coverage: if the new model is very different from previous models, it may produce outputs (for specific types of inputs) that were never observed in the trial.
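To make the coverage idea concrete, here is a minimal, hypothetical sketch (not the paper's method): treating coverage as a support check, we ask whether every output the new model can produce was also observed during the trial. The function and variable names are illustrative assumptions.

```python
# Hypothetical illustration of the coverage challenge: if the new model
# produces outputs never observed in the trial, the trial data alone
# cannot tell us how those outputs affect downstream outcomes.

def has_coverage(trial_outputs: set, new_model_outputs: set) -> bool:
    """Return True if every output of the new model was observed in the trial."""
    return new_model_outputs <= trial_outputs

# Illustrative example: the trial logged two output types, but the
# updated model can also produce a third ("escalate") never seen in the trial.
trial = {"flag", "no_flag"}
new_model = {"flag", "no_flag", "escalate"}
print(has_coverage(trial, new_model))  # False: "escalate" was never observed
```

In practice, outputs are rarely discrete labels, so a real coverage check would compare distributions over outputs conditional on inputs; the set version above is just the simplest way to see the failure mode.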
These bounds require some mild assumptions, but those assumptions can be tested in practice using RCT data that includes multiple models.
But AI/ML systems can change: Do we need a new RCT every time we update the model? Not necessarily, as we show in our UAI paper! arxiv.org/abs/2502.09467
I work on safe/reliable ML and causal inference, motivated by healthcare applications.
Beyond myself, Johns Hopkins has a rich community of folks doing similar work. Come join us!