Asst Dir Business Intelligence & Enterprise Engineering
Adjunct Lecturer @ UTRGV CS
Jiu-Jitsu Black Belt
- Implementation & design details
- Evaluation methodology and dataset development
- AI engineering process
- Data platform infrastructure / HIPAA compliance
Learn how we're making healthcare more accessible in the RGV, one search at a time. (7/7)
We're expanding our evaluation dataset to detect unwanted outcomes that could impact patients, providers, or departments. No system is perfect, but measuring bias lets us iterate and improve responsibly. (6/7)
- Build comprehensive evaluation dataset
- Perfect basic provider matching
- Enable structured query rewriting with LLMs
Each step verified through testing to ensure reliable results. (5/7)
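The structured query rewriting step can be sketched roughly as follows. This is a minimal illustration, not the production implementation: the JSON field names (`specialty`, `language`, `symptoms`) and the `llm` callable are assumptions, and the stubbed LLM stands in for a real model call so the parsing logic can be tested offline.

```python
import json

# Hypothetical prompt and schema; the real field names may differ.
REWRITE_PROMPT = (
    'Rewrite the patient query as JSON with keys '
    '"specialty", "language", and "symptoms".\n'
    "Query: {query}"
)

def rewrite_query(query, llm):
    """Ask an LLM (any callable: prompt -> str) for a structured rewrite,
    then parse the JSON it returns."""
    raw = llm(REWRITE_PROMPT.format(query=query))
    return json.loads(raw)

# A stubbed LLM lets the rewriting step be exercised without an API call.
def fake_llm(prompt):
    return '{"specialty": "podiatry", "language": "en", "symptoms": ["toe pain"]}'

structured = rewrite_query("my toe hurts", fake_llm)
# structured["specialty"] -> "podiatry"
```

Passing the LLM in as a plain callable keeps the rewrite step swappable and testable, which matches the thread's emphasis on verifying each step.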
Key metrics we're tracking:
- Query-to-(provider/specialty/department) match accuracy
- Cross-lingual search performance
- Response time (targeting <300ms)
These guide our improvements where they matter most and can be sliced across descriptors to measure bias. (4/7)
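Slicing a metric across descriptors might look like the sketch below. The record shape (`lang`, `predicted`, `expected`) is a hypothetical example of an eval row, not the team's actual schema; the idea is simply to group match accuracy by a descriptor such as query language to surface performance gaps.

```python
from collections import defaultdict

def accuracy_by_slice(records, descriptor):
    """Match accuracy per value of a descriptor (e.g. query language)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        key = rec[descriptor]
        totals[key] += 1
        hits[key] += int(rec["predicted"] == rec["expected"])
    return {key: hits[key] / totals[key] for key in totals}

# Hypothetical eval records: each row is one test query.
records = [
    {"lang": "en", "predicted": "p1", "expected": "p1"},
    {"lang": "en", "predicted": "p2", "expected": "p3"},
    {"lang": "es", "predicted": "p4", "expected": "p4"},
]
# accuracy_by_slice(records, "lang") -> {"en": 0.5, "es": 1.0}
```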
We built an evaluation pipeline measuring precision and other key metrics. This enables rapid testing of different embedding models, scoring methods, and chunking strategies - focusing on getting provider retrieval right before adding complexity. (3/7)
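At its core, a retrieval precision metric like the one the pipeline measures can be computed as below. This is a generic precision@k sketch under assumed inputs (lists of provider IDs), not the pipeline's actual code.

```python
def precision_at_k(retrieved, relevant, k=5):
    """Fraction of the top-k retrieved provider IDs that are relevant."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    relevant_set = set(relevant)
    return sum(1 for pid in top_k if pid in relevant_set) / len(top_k)

# Toy example: 2 of the top 4 retrieved providers are correct matches.
# precision_at_k(["p1", "p9", "p3", "p7"], {"p1", "p3"}, k=4) -> 0.5
```

Because the metric only needs ranked IDs and a gold set, the same harness can score any combination of embedding model, scoring method, and chunking strategy.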
We extract actual patient-provider matches from our EHR, then use LLMs to generate natural search queries. This creates a robust test dataset that reflects real patient needs - from "my toe hurts" to "doctor for diabetes control". (2/7)
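The dataset-generation step above can be sketched as prompt construction: for each EHR-derived match, ask an LLM to phrase the visit as a natural patient search while keeping the true provider as the gold label. The record fields (`provider_id`, `specialty`, `visit_reason`) are illustrative assumptions, not the real EHR schema.

```python
# Hypothetical shape of one EHR-derived match; real fields will differ.
match = {
    "provider_id": "p42",
    "specialty": "Endocrinology",
    "visit_reason": "diabetes follow-up",
}

def build_query_prompt(match):
    """Prompt asking an LLM to write the search query this patient
    might have typed; the matched provider stays as the gold label."""
    return (
        "A patient booked a {specialty} visit for: {visit_reason}.\n"
        "Write one short search query this patient might have typed."
    ).format(**match)

# One eval example pairs the generated-query prompt with its gold provider.
example = {
    "query_prompt": build_query_prompt(match),
    "gold_provider": match["provider_id"],
}
```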