Rohan Khera
@rohankhera.bsky.social
Cardiologist-Data Scientist at Yale. Working on data-driven cardiovascular care innovation at the Cardiovascular Data Science (CarDS) Lab (@cardslab.bsky.social). Associate Editor @ JAMA (@jama.com)
@cardslab.bsky.social is leading a workshop at #AHA25 #QCOR25

#AI in Action: LLM-Powered Quality Assessment & Improvement in the EHR

Learn how to:
🔍 Use LLMs to measure care quality
⚡ Build real-time AI workflows
🌐 Implement scalable, equitable solutions

📍 Session: QCOR.01 | 💻 Bring your laptop!
October 30, 2025 at 5:24 PM
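For a flavor of the first objective above, a minimal sketch of asking an LLM to abstract one quality measure from a note. Illustrative only - the model name, prompt, and measure are placeholder assumptions, not the workshop's materials:

```python
# Illustrative sketch: ask an LLM whether a discharge note satisfies one
# quality measure. Model choice, prompt, and measure are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

note = "Discharge summary: 68M with HFrEF (EF 30%), discharged on metoprolol succinate..."

prompt = (
    "You are abstracting a quality measure from a discharge note.\n"
    "Measure: beta-blocker prescribed at discharge for a patient with HFrEF.\n"
    "Answer with exactly one of: YES, NO, NOT_APPLICABLE.\n\n"
    f"Note:\n{note}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,        # deterministic abstraction
)
print(resp.choices[0].message.content)  # expected: "YES"
```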
🚨 The @cardslab.bsky.social at Yale is recruiting postdocs in our clinician–data scientist track!

🔹 MD or MD/PhD
🔹 Python/R skills
🔹 Research experience
🔹 Passion for AI + digital health

📝 Apply early: CV + 250-word statement
🔗 apply.cards-lab.org/apply/CATSP004

#AI #Cardiology #Postdoc
July 14, 2025 at 3:41 PM
CarDS Lab (@cardslab.bsky.social) will be in full force at #ACC25. We are presenting our work spanning:

#AI for cardiac diagnosis
#DigitalBiomarkers
#Precision inference from RCTs
#DigitalHealthPolicy
#NLP & #LLM #RAG for scalable phenotyping in the #EHR

See you in Chicago this weekend!
March 26, 2025 at 6:26 PM
Also, read the excellent editorial that accompanies our study.
January 24, 2025 at 4:43 PM
The single-image AI-ECG score matched or exceeded the performance of the multicomponent PCP-HF standard (computable in the Yale and ELSA-Brasil cohorts)
January 24, 2025 at 4:43 PM
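For those curious how such a head-to-head comparison can be run in code, a rough sketch (assumed setup & column names, not the study's code): Harrell's C-index for each score against incident HF, given a cohort table with follow-up time, an event flag, and both scores:

```python
# Illustrative only: comparing discrimination of two risk scores for incident
# HF via Harrell's C-index. Assumes columns `time`, `hf_event`, `ai_ecg`,
# `pcp_hf` in a hypothetical cohort file.
import pandas as pd
from lifelines.utils import concordance_index

df = pd.read_csv("cohort.csv")  # hypothetical cohort file

for score in ["ai_ecg", "pcp_hf"]:
    # Higher score = higher risk, so negate for concordance with survival time
    c = concordance_index(df["time"], -df[score], df["hf_event"])
    print(f"{score}: C = {c:.3f}")
```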
We demonstrate a several-fold higher risk of HF at higher thresholds of the AI-ECG risk score for LV dysfunction.

The effect estimates were more pronounced in the rigorous UK and Brazil cohort studies, likely due to more complete & robust outcome capture than in the EHR
January 24, 2025 at 4:43 PM
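One way such threshold-based effects can be estimated (an illustrative sketch under assumed column names, not the study's code): a Cox model per AI-ECG probability cut-off:

```python
# Hypothetical sketch: hazard of incident HF at increasing AI-ECG score
# thresholds. Assumes `time`, `hf_event`, and a 0-1 `ai_ecg` probability.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical cohort file

for cutoff in (0.1, 0.3, 0.5, 0.7, 0.9):
    d = df.assign(high_risk=(df["ai_ecg"] >= cutoff).astype(int))
    cph = CoxPHFitter().fit(d[["time", "hf_event", "high_risk"]],
                            duration_col="time", event_col="hf_event")
    hr = cph.hazard_ratios_["high_risk"]  # exp(coef) for the threshold group
    print(f"threshold {cutoff}: HR = {hr:.2f}")
```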
Study design⬇️. We posited that cross-sectional signatures of LV dysfunction on the AI-ECG would portend future HF risk.

Rather than building models for HF risk subject to time-varying confounding, we focused on a mechanistic approach.

+ broadly validated the results against established risk scores
January 24, 2025 at 4:43 PM
This work is a collaboration across our growing global Precardia network - a network for federated science in AI. Watch this space for more: precardia.org

Tagging those on here: @ekoikonomou.bsky.social @aline-pedroso.bsky.social
January 24, 2025 at 4:43 PM
In the European Heart Journal, we report a novel paradigm for predicting HF risk.

AI applied to a single ECG image. Without risk scores or other testing.

Led by @lovedeepdhingra.bsky.social @cardslab.bsky.social

@yaleschoolofmed.bsky.social

🔗 academic.oup.com/eurheartj/ad...
January 24, 2025 at 4:43 PM
Happy New Year from my friends & work family at @cardslab.bsky.social

Grateful for 2024, where we grew our group & strengthened our culture & community around advancing science while having fun 😊

Hoping to build on this foundation in 2025!
January 1, 2025 at 4:06 PM
If you’re at #CVCT24, join us for:
1️⃣AI for design and dissemination of clinical trials (Mon, 12/9) w @hmkyale.bsky.social @jonwcunningham.bsky.social

2️⃣Enabling screening and tracking of cardiac amyloidosis (Tue, 12/10)

@cardslab.bsky.social

Schedule:
www.globalcvctforum.com/_files/ugd/d...
December 6, 2024 at 11:57 PM
The model generalized to the external EchoNet-LVH and EchoNet-Dynamic datasets, achieving excellent performance on data it had never seen.
It also serves as a foundation model for new tasks, with the ability to be efficiently fine-tuned.
November 22, 2024 at 9:20 PM
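A minimal sketch of what that efficient fine-tuning could look like in PyTorch - the backbone stand-in and embedding size are assumptions, not the released API: freeze the pretrained encoder & train only a small new head.

```python
# Hypothetical fine-tuning sketch: freeze a pretrained encoder and train only
# a new linear head for one downstream task. `backbone` is a toy stand-in for
# the released weights; the 512-dim embedding is an assumption.
import torch
import torch.nn as nn

backbone = nn.Linear(16, 512)  # toy stand-in: maps inputs to 512-d embeddings

for p in backbone.parameters():
    p.requires_grad = False  # keep pretrained weights fixed

head = nn.Linear(512, 1)  # new binary task head
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(inputs, labels):
    with torch.no_grad():
        feats = backbone(inputs)  # frozen features, (batch, 512)
    loss = loss_fn(head(feats).squeeze(-1), labels.float())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# e.g.: train_step(torch.randn(8, 16), torch.randint(0, 2, (8,)))
```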
The model learned task-specific views - demonstrating that it correctly identifies the relevant views for each task and weighs them as an expert human reader would.
November 22, 2024 at 9:20 PM
The model achieved excellent performance across the entire range of tasks on both full TTEs and abbreviated protocols obtainable on handheld devices.
November 22, 2024 at 9:20 PM
The model uses a supervised learning strategy: an image encoder examines frames of echo videos, a transformer model captures temporal information across video frames, and the model then performs each of the tasks automatically.
November 22, 2024 at 9:20 PM
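To make the pipeline in the post above concrete, a minimal PyTorch sketch - dimensions, layer counts & head design are assumptions; the real model & code are at cards-lab.org/panecho:

```python
# Minimal sketch of the described design: per-frame image encoder ->
# transformer across frames -> one head per task. All sizes are illustrative.
import torch
import torch.nn as nn

class EchoMultiTask(nn.Module):
    def __init__(self, d=256, n_tasks=39):
        super().__init__()
        # Image encoder applied to each frame independently (a small CNN here)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d),
        )
        # Transformer captures temporal information across frame embeddings
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        # One output head per task (regression value or classification logit)
        self.heads = nn.ModuleList([nn.Linear(d, 1) for _ in range(n_tasks)])

    def forward(self, video):                          # video: (B, T, 3, H, W)
        B, T = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1))      # (B*T, d)
        feats = self.temporal(feats.view(B, T, -1))    # (B, T, d)
        pooled = feats.mean(dim=1)                     # average over frames
        return [head(pooled) for head in self.heads]   # one output per task

outputs = EchoMultiTask()(torch.randn(2, 16, 3, 112, 112))  # 39 task outputs
```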
We used 1.1M echo videos from nearly 30K studies to build a model that performs inference on 39 tasks - from measurement to quantification to classification.

The objective: to build a tool that can automatically take any echo study & make complete inferences without human input
November 22, 2024 at 9:20 PM
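For that mix of task types, one common training recipe (our assumption for illustration, not the released training code) is to sum per-task losses - e.g. MSE for continuous measurements & BCE for classifications:

```python
# Illustrative multi-task objective: regression tasks use MSE, classification
# tasks use BCE, and the per-task losses are summed into one training loss.
import torch
import torch.nn as nn

mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

def multitask_loss(preds, targets, is_regression):
    # preds/targets: one (batch,) tensor per task; is_regression: list of bool
    total = torch.zeros(())
    for p, t, reg in zip(preds, targets, is_regression):
        total = total + (mse(p, t) if reg else bce(p, t))
    return total

# e.g., one regression task (a measurement) and one classification task
loss = multitask_loss(
    [torch.randn(4), torch.randn(4)],
    [torch.randn(4), torch.randint(0, 2, (4,)).float()],
    [True, False],
)
```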
We are pleased to announce #PanEcho, presented in an #AHA24 Late Breaking session - complete AI-enabled echocardiography interpretation with multi-task deep learning.

Led by @giholste.bsky.social & @ekoikonomou.bsky.social
@cardslab.bsky.social

Preprint, code, and model: cards-lab.org/panecho
November 22, 2024 at 9:20 PM