David Ouyang, MD
@davidouyang.bsky.social
Cardiologist, Data Scientist, AI researcher
EchoPrime is a contrastive vision-language model. Trained on over 12M video-report pairs, it learns the relationship between what cardiologists write and what the ultrasound images show.

If you feed it a video or an echo report, it can find similar historical studies that match those concepts.
November 17, 2025 at 2:41 AM
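A minimal sketch of the retrieval idea described above, assuming CLIP-style joint embeddings; video_encoder and text_encoder are hypothetical stand-ins, not EchoPrime's released API:

```python
# Retrieval over a shared video/text embedding space (illustrative only).
import numpy as np

def embed(encoder, item):
    """Encode an item and L2-normalize so a dot product is cosine similarity."""
    v = np.asarray(encoder(item), dtype=float)
    return v / np.linalg.norm(v)

def retrieve_similar(query_vec, index_vecs, study_ids, k=5):
    """Rank historical studies by cosine similarity to the query embedding."""
    sims = index_vecs @ query_vec            # rows of index_vecs are normalized
    top = np.argsort(-sims)[:k]
    return [(study_ids[i], float(sims[i])) for i in top]

# Because videos and report text share one space, the query can be either:
# query = embed(text_encoder, "severe aortic regurgitation")
# query = embed(video_encoder, a4c_clip)
# matches = retrieve_similar(query, index_vecs, study_ids)
```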
EchoPrime is the culmination of a long roadmap for AI echo.

Substantial time and effort were put into curating the largest existing echocardiography database.

EchoPrime is trained on 1000x the data of EchoNet-Dynamic, our first model for AI-LVEF, and 10x the data of existing AI echo models.
November 17, 2025 at 2:39 AM
We are excited to announce that EchoPrime is published in Nature. EchoPrime is the first echocardiography AI model capable of evaluating a full transthoracic echocardiogram study, identifying the most relevant videos, and producing a comprehensive interpretation!
November 17, 2025 at 2:38 AM
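A rough sketch of what evaluating a full study could look like, under the assumption that per-video relevance and finding scores are aggregated across the study; the function names are hypothetical, not EchoPrime's actual pipeline:

```python
# Study-level interpretation sketch: score every video, keep the most
# relevant views for a finding, and aggregate (illustrative only).
import numpy as np

def interpret_study(videos, relevance_model, finding_model, k=3):
    """Estimate a finding from the k most relevant videos in one study."""
    rel = np.array([relevance_model(v) for v in videos])  # per-video relevance
    top = np.argsort(-rel)[:k]                            # most informative views
    return float(np.mean([finding_model(videos[i]) for i in top]))
```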
Congratulations to Sarnoff Fellow Victoria Yuan on her new preprint 'Automated Deep Learning Pipeline for Characterizing Left Ventricular Diastolic Function'!
May 4, 2025 at 4:12 AM
3/n Without additional prompting, our #echofirst model naturally gravitates towards the vena contracta in color Doppler videos to assess AR severity.
March 25, 2025 at 10:08 PM
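One generic way to probe where a video model "looks" is occlusion sensitivity; this is a hedged illustration, not necessarily the analysis used in the paper, and model/clip are placeholders:

```python
# Occlusion sensitivity sketch: mask patches and see how the score drops.
import numpy as np

def occlusion_map(model, clip, patch=16):
    """Heatmap of how much each spatial patch supports the AR-severity score."""
    base = model(clip)                           # scalar severity score
    T, H, W = clip.shape[:3]
    heat = np.zeros(((H + patch - 1) // patch, (W + patch - 1) // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            masked = clip.copy()
            masked[:, i:i+patch, j:j+patch] = 0  # occlude patch in every frame
            heat[i // patch, j // patch] = base - model(masked)
    return heat  # peaks at regions the model relies on, e.g. the vena contracta
```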
2/n Accurate assessment of AR severity and sequelae is critical for surveillance and timing of surgery. Using over 40k studies from @smidtheart.bsky.social and validated on 1.5k studies from @stanforddeptmed.bsky.social, we show that AI can accurately assess AR severity across a range of patients.
March 25, 2025 at 10:08 PM
1/n Excited to announce work by Dr. @BinderRodriguez
from @MedUni_Wien on the AI-automated assessment of Aortic Regurgitation (AR) on #echofirst using 59,500 videos from @smidtheart.bsky.social. Led by Dr. Bob Siegel.
March 25, 2025 at 10:07 PM
5/n Preprint: medrxiv.org/content/10.1...
GitHub: github.com/echonet/meas...
Demo: on GitHub.
March 20, 2025 at 10:35 PM
3/n Across a wide range of image quality, patient characteristics, and study types, EchoNet-Measurements perform well, improving upon sonographers' and cardiologists' precision.
March 20, 2025 at 10:34 PM
2/n Led by Yuki Sahashi, EchoNet-Measurements provide automated annotations for the 18 most common #echofirst measurements (both B-mode and Doppler).

Excellent performance in test cohorts: overall R² of 0.967 in the held-out CSMC dataset and 0.987 in the SHC dataset.
March 20, 2025 at 10:33 PM
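For reference, the reported R² is the standard coefficient of determination between automated and human measurements, computed for example like this (toy numbers, not study data):

```python
# Agreement between AI and sonographer measurements as R².
from sklearn.metrics import r2_score

human = [5.1, 4.8, 6.2, 5.5]   # e.g., sonographer measurements (cm)
model = [5.0, 4.9, 6.0, 5.6]   # automated measurements, same studies
print(r2_score(human, model))  # 1.0 would be perfect agreement
```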
1/n We are thrilled to present EchoNet-Measurements, an open source, comprehensive AI platform for automated #echofirst measurements.

Using 1,414,709 annotations from 155,215 studies of 78,037 patients for training, this is the most comprehensive #echofirst segmentation model.
March 20, 2025 at 10:33 PM
New preprint by @SarnoffCardio and @dgsomucla Victoria Yuan:

There are new therapies for obstructive HCM; however, obstruction is frequently missed, and extra #echofirst work is required to evaluate for it.

We develop an AI model on standard apical four-chamber (A4C) videos to identify patients with obstruction.
March 7, 2025 at 9:03 PM
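A minimal sketch of a binary video classifier for A4C clips, assuming a stock 3D-CNN backbone (torchvision's r3d_18); the preprint's actual architecture and training details may differ:

```python
# Binary obstruction classifier on echo clips (illustrative skeleton).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)                    # stock 3D ResNet backbone
model.fc = nn.Linear(model.fc.in_features, 1)   # one logit: obstruction or not

clip = torch.randn(2, 3, 16, 112, 112)          # (batch, channels, frames, H, W)
prob = torch.sigmoid(model(clip)).squeeze(1)    # per-clip obstruction probability
```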
Consistency - one of the most useful heuristics for judging whether a medical AI model is good or not.

I've noticed that AI models trained on small datasets tend to jitter - jumping a lot from frame to frame - while robust models tend to have consistent measurements across the entire video.

Coming soon...
March 5, 2025 at 12:04 AM
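One simple way to quantify the jitter described above is the mean frame-to-frame change of a model's per-frame output; a hypothetical helper, not a released EchoNet tool:

```python
# Temporal consistency check for per-frame predictions (e.g., LVEF per frame).
import numpy as np

def jitter(per_frame_preds):
    """Mean absolute frame-to-frame change; lower = more consistent."""
    p = np.asarray(per_frame_preds, dtype=float)
    return float(np.mean(np.abs(np.diff(p))))

print(jitter([60, 61, 60, 59, 60]))   # robust model: small, smooth changes
print(jitter([60, 48, 67, 52, 63]))   # small-data model: large jumps
```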