Eric Martinez
@ericmtztx.bsky.social
UTRGV School of Medicine / UTHealth RGV
Asst Dir Business Intelligence & Enterprise Engineering
Adjunct Lecturer @ UTRGV CS
Jiu-Jitsu Black Belt
Reposted by Eric Martinez
hmm i feel attacked
December 24, 2024 at 8:27 PM
Your 'AI Engineer' can't deploy their own model? Your 'Data Engineer' can't write a web app? Stop with narrow job titles that create silos. In healthcare tech, we need full-stack problem solvers who can ship end-to-end. One capable generalist > five specialists.
November 30, 2024 at 7:27 PM
Reposted by Eric Martinez
Healthcare (as a system) is rarely personalised. In this HBR article the authors predict that *eventually* healthcare will be as personalised as online shopping or banking. See poll below to weigh in.
#healthcare #personalisation #pt

hbr.org/2024/11/why-...
Why Isn’t Healthcare More Personalized?
Currently, inefficiencies in healthcare practices can frustrate patients and lead to unnecessary pre-surgical tests or boilerplate after-visit instructions, among other issues. Personalized health car...
hbr.org
November 28, 2024 at 12:26 PM
We're building an AI-powered search engine at UTHealth RGV that connects patients with the right healthcare providers. Our focus: making it easier for our community to find and access the care they need. Here's how we're building responsibly: (1/7)
November 28, 2024 at 6:06 PM
It’s almost a game for people to race to take the latest open-weight model and Med* it. But anyone who has tried them can tell you that they don’t seem very good, and their performance on anything real-world is still far below state-of-the-art closed-source models.
Medically adapted foundation models (think Med-*) turn out to be more hot air than hot stuff. Correcting for fatal flaws in evaluation, the current crop are no better on balance than generic foundation models, even on the very tasks for which benefits are claimed.
arxiv.org/abs/2411.04118
Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress?
Several recent works seek to develop foundation models specifically for medical applications, adapting general-purpose large language models (LLMs) and vision-language models (VLMs) via continued pret...
arxiv.org
November 27, 2024 at 5:31 AM
Assuming biased data means biased models is lazy.

Measure bias by quantifying disparities in your model’s behavior.

Here’s how:

1. Define your AI's tasks and metrics

2. Measure performance across critical subgroups

3. Iterate based on real results

Act like a scientist—test, measure, repeat.
November 26, 2024 at 6:38 AM
Neat RAG library we developed for our in-house Ruby on Rails applications at UTRGV School of Medicine. We’ll be using this to power our upcoming “Find a Doctor” tool. No dependencies other than pgvector and Azure OpenAI.
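For readers unfamiliar with the pattern, here is a dependency-free sketch of the retrieval step a library like that performs. This is not the library's code: in the real setup, embeddings would come from Azure OpenAI and nearest-neighbor search would run inside Postgres via pgvector (`ORDER BY embedding <=> $1 LIMIT k`); the `fake_embed` function and the documents below are illustrative stand-ins.

```python
# Illustrative RAG retrieval sketch — fake_embed and the sample documents
# are assumptions standing in for a real embedding API and a pgvector table.
import math

def fake_embed(text):
    # Stand-in for an embedding API call: a crude bag-of-letters vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query by cosine similarity."""
    q = fake_embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, fake_embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "cardiology clinic hours and locations",
    "pediatric care providers near you",
    "billing and insurance questions",
]
print(retrieve("find a cardiology doctor", docs, k=1))
```

The retrieved passages are then handed to the language model as context for generation; swapping `fake_embed` for a real embedding call and the sort for a pgvector index query gives the production shape.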
November 23, 2024 at 7:35 AM