Irene Chen
@irenetrampoline.bsky.social
ML for healthcare and health equity. Assistant Professor at UC Berkeley and UCSF.
https://irenechen.net/
Pinned
Irene Chen
@irenetrampoline.bsky.social
· Nov 12
First post! I'm recruiting PhD students this admissions cycle who want to work on: a) impactful ML methods for healthcare 🤖, b) computational methods to improve health equity ⚖️, or c) AI for women's health or climate health 🤰🌎
Apply via UC Berkeley CPH or EECS (AI-H) 🌉.
irenechen.net/join-lab/
What happens in SAIL 2025 stays in SAIL 2025 -- except for these anonymized hot takes! 🔥 Jotted down 17 de-identified quotes on AI and medicine from medical executives, journal editors, and academics in off-the-record discussions in Puerto Rico
irenechen.net/sail2025/
May 12, 2025 at 2:02 PM
Reposted by Irene Chen
We've launched a biweekly AI & Society salon at Berkeley w/ @rajiinio.bsky.social & @irenetrampoline.bsky.social! This week, sociologist marionf.bsky.social joined EECS’ beenwrekt.bsky.social to discuss The Ordinal Society. Up next, on April 16th: AI & Education. Join us at ai-and-society.github.io
April 4, 2025 at 9:44 PM
AI deployments in health are often understudied because they require time and careful analysis.⌛️🤔
We share thoughts in @ai.nejm.org about a recent AI tool for emergency dept triage that: 1) improves wait times and fairness (!), and 2) helps nurses unevenly based on triage ability
February 27, 2025 at 9:06 PM
How do disparities in healthcare access affect ML models? 💰📉🧐 We found that low access to care -> worse EHR data quality -> worse ML performance in a dataset of 134k patients. Work with Anna Zink (on the faculty job market rn!) + Hongzhou Luan, presented at #ML4H2024
December 20, 2024 at 1:04 AM
It's giving Best Paper at the ML for Health Symposium (co-located w NeurIPS)!! 🥳 Congrats to co-authors Emily, Jin, and many others 👏. Check out our work using LLMs to understand liver transplants, esp understudied social and economic factors 🏥💰🏠! #ml4h2024
arxiv.org/pdf/2412.07924
December 17, 2024 at 2:03 AM
This year our CHEN lab holiday party featured cookie decorating! 🎄 Grateful to have such creative and inspiring students and collaborators. 🥰 Can you spot all of the ML-related cookies? 📈
December 12, 2024 at 7:05 PM
Important caveats: 1) very small sample size (6 medical cases) -> p=0.03, which is kinda sus, 2) the human physicians in the study had only 3 yrs of training, 3) no nuance on how to use LLMs for diagnostic reasoning: clinical notes != clean cases, and the paper does not engage with this.
November 18, 2024 at 9:08 PM
What does it mean to be a “low-resource” language? I’ve seen definitions ranging from limited training data to a small number of speakers. Great to see this important clarifying work at #EMNLP2024 from @hellinanigatu.bsky.social et al
aclanthology.org/2024.emnlp-m...
November 15, 2024 at 9:43 PM
Informative recap of EMNLP papers related to multilingual models and low resource languages! Thanks @catherinearnett.bsky.social
I wrote up some thoughts from #EMNLP2024, about some of the cool papers I saw and some interesting conversations. Also, this is my first substack post! open.substack.com/pub/catherin...
November 15, 2024 at 8:49 PM
Fairness definitions differ across groups! For white respondents, fairness = "proximity" to assigned school. For Hispanic or Latino parents, fairness = "same rules" for everyone. Cool work by @nilou.bsky.social + students
New #CSCW2024 paper in which we found that people's definitions of fairness for an algorithmic distribution system (school choice) differ across socioeconomic groups. These definitions of fairness shape how fair they see the system, controlling for their own outcome. dl.acm.org/doi/10.1145/...
Definitions of Fairness Differ Across Socioeconomic Groups & Shape Perceptions of Algorithmic Decisions | Proceedings of the ACM on Human-Computer Interaction
dl.acm.org
November 14, 2024 at 8:15 AM
Summary of #AMIA2024 presentations related to health equity and algorithmic fairness! Thanks for pulling this together, @alyssapradhan.bsky.social
One of the things I’ve really enjoyed about #AMIA2024 has been the thoughtful discussion about strategies to mitigate #algorithmicbias and promote #healthequity🧵
November 14, 2024 at 1:58 AM
If you trained 10 models and they had huge variance in their predictions for you, would you have any faith in the model? Enjoyed this paper defining self-consistency -- and showing that enforcing it makes models more fair! Cool AAAI24 paper from A. Feder Cooper et al.
katelee168.github.io/pdfs/arbitra...
November 13, 2024 at 5:21 PM
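(Not from the paper -- just a toy sketch of the underlying idea, using scikit-learn on a synthetic dataset: retrain the same classifier on bootstrap resamples and measure how often each test example's predicted label agrees with the majority vote across runs. Low agreement means that example's prediction is largely arbitrary.)

```python
# Toy sketch (not the paper's method): estimate per-example "self-consistency"
# by retraining the same classifier on bootstrap resamples and measuring how
# often each test example's predicted label agrees with the majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_models = 10
preds = np.zeros((n_models, len(X_test)), dtype=int)
rng = np.random.default_rng(0)
for m in range(n_models):
    # Bootstrap resample of the training set, then refit the same model class.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    preds[m] = clf.predict(X_test)

# Self-consistency per test example: fraction of the 10 runs that agree with
# the majority-vote label (1.0 = fully consistent, ~0.5 = essentially a coin flip).
majority = (preds.mean(axis=0) >= 0.5).astype(int)
consistency = (preds == majority).mean(axis=0)
print("mean self-consistency:", consistency.mean())
print("examples with near-arbitrary predictions:", int((consistency < 0.7).sum()))
```

Predicting with the bagged majority vote instead of any single run is one simple way to enforce consistency; that is roughly the direction the paper explores, though the details are theirs and not reproduced here.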
12 hours later, I've realized how much I've been missing a place like OldTwitter where you can share candid thoughts on research without bots clogging up the feed. Thanks Bluesky 💙
November 12, 2024 at 2:12 AM
Giving a talk tomorrow 11:40am PT at the Simons Domain Adaptation Workshop. I'll be speaking about our recent paper on the Data Addition Dilemma! Catch the talk via live stream or the recording afterwards.
Paper: arxiv.org/pdf/2408.04154
Workshop: simons.berkeley.edu/workshops/do...
November 12, 2024 at 2:06 AM
Creative AIES 2024 paper by andreawwenyi.bsky.social that uses NLP to help uncover gender bias for men/women defendants. Legal experts used NLP to build consensus and evidence on annotation rules. Could have relevant tie-ins to healthcare and bias in clinical notes
November 12, 2024 at 1:55 AM
First post! I'm recruiting PhD students this admissions cycle who want to work on: a) impactful ML methods for healthcare 🤖, b) computational methods to improve health equity ⚖️, or c) AI for women's health or climate health 🤰🌎
Apply via UC Berkeley CPH or EECS (AI-H) 🌉.
irenechen.net/join-lab/
November 12, 2024 at 1:35 AM