Kellie Owens
@kellieowens.bsky.social
Assistant Professor, Medical Ethics, NYU Grossman School of Medicine. Politics and Ethics of AI, Genomics, and Health Information Technologies.
Reposted by Kellie Owens
New from me and @kellieowens.bsky.social: we argue that the practice of maintaining AI models in healthcare exists in a "responsibility vacuum," resulting in the emergence of creative forms of invisible labor to monitor and repair technical systems bmchealthservres.biomedcentral.com/articles/10....
Managing a “responsibility vacuum” in AI monitoring and governance in healthcare: a qualitative study - BMC Health Services Research
Background: Despite the increasing implementation of artificial intelligence (AI) and machine learning (ML) technologies in healthcare, their long-term safety, effectiveness, and equity remain compromised by a lack of sustained oversight. This study explores the phenomenon of a “responsibility vacuum” in AI governance, wherein maintenance and monitoring tasks are poorly defined, inconsistently performed, and undervalued across healthcare systems.
Methods: We conducted semi-structured interviews with 21 experts involved in AI implementation in healthcare, including clinicians, clinical informaticists, computer scientists, and legal/policy professionals. Participants were recruited through purposive and snowball sampling. Interviews were transcribed, coded, and analyzed using abductive qualitative methods to identify themes related to maintenance practices, institutional incentives, and responsibility attribution.
Results: Participants widely recognized that AI models degrade over time due to factors such as data drift, changes in clinical practice, and poor generalizability. However, monitoring practices remain ad hoc and fragmented, with few institutions investing in structured oversight infrastructure. This “responsibility vacuum” is perpetuated by institutional incentives favoring rapid innovation and strategic ignorance of AI failures. Despite these challenges, some participants described grassroots efforts to monitor and maintain AI systems, drawing inspiration from fields such as radiology, laboratory medicine, and transportation safety.
Conclusions: Our findings suggest that institutional and cultural forces in healthcare deprioritize the maintenance of AI tools, creating a governance gap that may lead to patient harm and inequitable outcomes. Addressing this responsibility vacuum will require formalized accountability structures, interdisciplinary collaboration, and policy reforms that center long-term safety and equity. Without such changes, AI/ML technologies designed to improve patient health may introduce new forms of harm, ultimately eroding trust in AI and machine learning for healthcare.
September 29, 2025 at 10:39 PM
Reposted by Kellie Owens
New piece from me and @kellieowens.bsky.social on ambient documentation systems ("AI scribes") and the potential risks this technology may pose for medical education and the socialization of trainees www.healthaffairs.org/content/fore...
Ambient Documentation And The Dilemma Of Deskilling In Medical Education | Health Affairs Forefront
The digitization of health care has had important consequences for how medical training is conducted, and with the advent of ambient documentation, this important social component of health care is li...
September 25, 2025 at 2:14 PM
Reposted by Kellie Owens
Panels: 1) Diversifying healthcare innovation funding with @sindyea.bsky.social; 2) Debate! What ethical oversight is needed for healthcare innovation? with @kellieowens.bsky.social; 3) The role of AI in learning health systems. Generously funded by @dorisdukefdn.bsky.social and an anonymous foundation
April 10, 2025 at 2:36 PM
Reposted by Kellie Owens
Now completely sold out in-person - but you can still register to attend all but the workshops online! www.eventbrite.com/e/catalyzing... Our stand-out lineup features Tom Maddox, Genevieve Melton-Meaux, Raina Merchant & Mitesh Patel, then panels, posters & workshops. #MedSky #HealthPolicy
April 10, 2025 at 2:36 PM
Reposted by Kellie Owens
I’ve always liked this piece by Smitha Khorana & @kellieowens.bsky.social on medical uncertainty because—in my reading—it demonstrates well the sort of conversations that social media affords, and when/how those intersect with T&S concerns, let alone public welfare
Understanding medical uncertainty in the hydroxychloroquine debate
www.brookings.edu
December 3, 2024 at 11:01 PM