Yuzhe Yang
@yuzheyang.bsky.social
Asst Prof @UCLA | RS @Google | PhD @MIT | BS @PKU
#ML, #AI, #health, #medicine
https://www.cs.ucla.edu/~yuzhe
Read all about it!
➡️Paper: arxiv.org/abs/2506.09108

Huge team effort! Kudos to my intern Evelyn, amazing team @kmr_ayush, @aametwally1, @Orson_Xu, @timalthoff, @pushmeet, @cecim, @xliucs, @danmcduff, and other amazing co-authors!

#AI #wearable #sensor #health #multimodal
(8/8)
SensorLM: Learning the Language of Wearable Sensors
We present SensorLM, a family of sensor-language foundation models that enable wearable sensor data understanding with natural language. Despite its pervasive nature, aligning and interpreting sensor ...
arxiv.org
June 17, 2025 at 3:40 PM
Beyond its discriminative power, SensorLM showcases compelling generative capabilities: it produces hierarchical, realistic captions from wearable sensor data alone, offering more coherent & correct descriptions than LLMs like Gemini 2.0 Flash. ✍️✨

(7/8)
June 17, 2025 at 3:40 PM
SensorLM also exhibits intriguing scaling behavior over data size, model size, and compute. 📈💡

(6/8)
June 17, 2025 at 3:40 PM
Experiments on real-world tasks in human activity analysis 🏃‍♀️ & healthcare ⚕️ show superior performance over SOTA models in:
✨ Zero-shot recognition (see the sketch below)
✨ Few-shot learning
✨ Cross-modal retrieval
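
A minimal sketch of the zero-shot setup, assuming hypothetical encode_sensor / encode_text helpers that return unit-normalized embeddings from a pretrained sensor-language model; the activity prompts are illustrative, not the paper's label set.

import torch

ACTIVITY_PROMPTS = ["a person running", "a person walking", "a person sleeping"]

def zero_shot_classify(sensor_clip, encode_sensor, encode_text):
    # Embed the raw sensor clip and each candidate activity description.
    sensor_emb = encode_sensor(sensor_clip)     # (d,), unit-normalized
    text_embs = encode_text(ACTIVITY_PROMPTS)   # (num_classes, d), unit-normalized
    # Cosine similarity picks the best-matching description; no task-specific training needed.
    scores = text_embs @ sensor_emb
    return ACTIVITY_PROMPTS[int(torch.argmax(scores))]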

(5/8)
June 17, 2025 at 3:40 PM
SensorLM unifies prominent multimodal pretraining paradigms (e.g., contrastive, generative) for sensor data, recovering prior approaches as specific configurations within a single architecture. 🏗️🔗
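
A minimal sketch of what such a unification can look like, assuming batched sensor/text embeddings and caption logits produced by placeholder encoder/decoder modules; the alpha weighting is illustrative, not the paper's exact objective.

import torch
import torch.nn.functional as F

def pretraining_loss(sensor_emb, text_emb, caption_logits, caption_tokens,
                     alpha=0.5, temperature=0.07):
    # Contrastive (CLIP-style) term: align each sensor clip with its own caption.
    s = F.normalize(sensor_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.T / temperature
    targets = torch.arange(len(s), device=s.device)
    contrastive = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
    # Generative term: next-token prediction of the caption conditioned on the sensor clip.
    generative = F.cross_entropy(caption_logits.flatten(0, 1), caption_tokens.flatten())
    # alpha = 1 recovers a purely contrastive model; alpha = 0 a purely generative one.
    return alpha * contrastive + (1 - alpha) * generative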

(4/8)
June 17, 2025 at 3:40 PM
This captioning pipeline enabled us to curate the largest sensor-language dataset to date: over 59.7 million hours of data from >103,000 people. That's orders of magnitude larger than prior studies! 🚀💾

(3/8)
June 17, 2025 at 3:40 PM
Despite the pervasiveness of wearable sensor data, aligning & interpreting it with language remains challenging 📈 due to the lack of richly annotated sensor-text pairs. 🚫

Our solution? A hierarchical captioning pipeline that captures statistical📊, structural🏗️, and semantic🧠 sensor information.
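
A toy sketch of the idea, assuming a single NumPy heart-rate channel; the functions, thresholds, and phrasing are illustrative, not the actual pipeline.

import numpy as np

def statistical_caption(x: np.ndarray) -> str:
    # Low-level summary statistics of the raw signal.
    return f"mean {x.mean():.0f} bpm, range {x.min():.0f}-{x.max():.0f} bpm, std {x.std():.1f}"

def structural_caption(x: np.ndarray) -> str:
    # Coarse shape description derived directly from the signal.
    trend = "rising" if x[-1] > x[0] else "falling"
    peaks = int(np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])))
    return f"{trend} overall, {peaks} local peak(s)"

def semantic_caption(x: np.ndarray) -> str:
    # High-level interpretation; a real pipeline would draw on activity/health context.
    return "elevated heart rate consistent with exercise" if x.mean() > 120 else "heart rate in resting range"

def hierarchical_caption(x: np.ndarray) -> str:
    return " | ".join([statistical_caption(x), structural_caption(x), semantic_caption(x)])

hr = np.array([95, 110, 130, 145, 150, 148, 140, 125], dtype=float)
print(hierarchical_caption(hr))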

(2/8)
June 17, 2025 at 3:40 PM
Science News provides great coverage of our paper: www.science.org/content/arti...

Started in 2023, delayed but finally out! Huge congrats & thanks to amazing collaborators: Yujia, @xliucs, @Avanti0609, @Mastrodicasa_MD, Vivi, @ejaywang, @sahani_dushyant, Shwetak 🎉

(6/6)
#AI #health #fairness
AI models miss disease in Black and female patients
Analysis of chest x-rays underscores need for monitoring artificial intelligence tools for bias, experts say
science.org
March 28, 2025 at 8:01 PM
Why the gap? These foundation models in medical imaging encode demographic info (age, sex, race) from X-rays—more than humans do! Fascinating, but a challenge for fair healthcare ⚖️.

(5/6)
March 28, 2025 at 8:01 PM
This fairness disparity also holds for pathologies unseen during training, as well as for differential diagnoses across 50+ pathologies. ⚕️

(4/6)
March 28, 2025 at 8:01 PM
While expert-level VLMs can achieve _overall_ diagnostic accuracy on par with clinicians, they show significant underdiagnosis disparities across (intersectional) subpopulations compared to radiologists 🚨
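
One way to make that concrete: underdiagnosis disparity can be measured as the gap in false-negative rate (diseased patients labeled "no finding") across intersectional subgroups. A rough sketch below; the column names and toy data are hypothetical, not the study's schema.

import pandas as pd

def underdiagnosis_rate(group: pd.DataFrame) -> float:
    # Share of truly positive cases that the model predicts as negative (missed disease).
    positives = group[group["label"] == 1]
    return float((positives["prediction"] == 0).mean())

def max_disparity(df: pd.DataFrame, group_cols=("race", "sex")) -> float:
    # Largest gap in underdiagnosis rate across intersectional subgroups.
    rates = df.groupby(list(group_cols)).apply(underdiagnosis_rate)
    return float(rates.max() - rates.min())

# Hypothetical toy data: the (A, F) subgroup's only positive case is missed.
df = pd.DataFrame({
    "race": ["A", "A", "B", "B"],
    "sex": ["F", "M", "F", "M"],
    "label": [1, 1, 1, 1],
    "prediction": [0, 1, 1, 1],
})
print(max_disparity(df))  # 1.0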

(3/6)
March 28, 2025 at 8:01 PM
We tested top vision-language models like CheXzero on 5 global datasets 🌍. Result? Compared to certified radiologists, they consistently show diagnostic disparities by race, sex, and age, especially for marginalized groups 📷

(2/6)
March 28, 2025 at 8:01 PM
Would love to be added!
November 26, 2024 at 6:10 PM
Would love to be added, thanks!
November 25, 2024 at 6:17 PM