Rohit P. Ojha, DrPH, FACE
@rohitpojha.bsky.social
Director & Associate Professor, JPS Health Network Center for Epidemiology & Healthcare Delivery Research | Causal inference • Prediction • Evidence synthesis
Nice Simpsons reference, but the apparent symmetry is misleading. Consider this analogy: if you invest $100 in the stock market, you cannot lose more than 100% of that amount, but you can gain more than 100%. Additional reasons for leaving them as ratios are in the link.

academic.oup.com/aje/article-...
Should Graphs of Risk or Rate Ratios be Plotted on a Log Scale?
Should graphs of risk or rate ratios be plotted on a logarithmic scale? The conventional answer to this question seems to be yes (1), even to the extent...
academic.oup.com
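A minimal numerical sketch of the point (hypothetical risk ratios, not taken from the article): on the arithmetic scale a halving (RR = 0.5) and a doubling (RR = 2.0) sit at unequal distances from the null, but on the log scale they are equidistant.

```python
import math

# Hypothetical risk ratios illustrating the asymmetry of the ratio scale
rr_protective = 0.5  # risk halved
rr_harmful = 2.0     # risk doubled

# Arithmetic scale: distances from the null (RR = 1) are unequal
print(abs(1 - rr_protective))  # 0.5
print(abs(rr_harmful - 1))     # 1.0

# Log scale: the same two effects are symmetric around log(1) = 0
print(math.log(rr_protective))  # -0.693...
print(math.log(rr_harmful))     #  0.693...
```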
June 21, 2025 at 1:38 AM
Can’t imagine how random nonpositivity would be illustrated on a DAG. Could deterministic nonpositivity be illustrated similarly to selection?
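For concreteness, a minimal data sketch of the distinction (the cohort, variable names, and counts are invented for illustration): deterministic nonpositivity arises when a covariate stratum cannot receive a treatment level by design, whereas random nonpositivity is an empty cell that occurs by chance in a finite sample.

```python
import pandas as pd

# Hypothetical cohort in which a contraindication rules out treatment by design
df = pd.DataFrame({
    "contraindication": [0, 0, 0, 0, 1, 1, 1, 1],
    "treated":          [1, 0, 1, 1, 0, 0, 0, 0],
})

# The (contraindication = 1, treated = 1) cell is structurally empty,
# so P(treated = 1 | contraindication = 1) = 0: deterministic nonpositivity.
# Under random nonpositivity, the same empty cell would appear only by chance
# and could fill in as the sample grows.
print(pd.crosstab(df["contraindication"], df["treated"]))
```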
June 10, 2025 at 8:35 PM
We must hold AI tools to the same standards as any other clinical intervention. Strong evidence builds trust and supports responsible adoption.

3/3
May 6, 2025 at 2:12 PM
Too many AI interventions lack high-quality evidence. Most studies so far have high risk of bias and few report patient-relevant outcomes or potential harms.

www.thelancet.com/journals/lan...

2/3
Benefits and harms associated with the use of AI-related algorithmic decision-making systems by healthcare professionals: a systematic review
The current evidence on AI-related ADM systems provides limited insights into patient-relevant outcomes. Our findings underscore the essential need for rigorous evaluations of clinical benefits, reinf...
www.thelancet.com
May 6, 2025 at 2:12 PM
Reposted by Rohit P. Ojha, DrPH, FACE
"People Profit from being ambiguous about their research goals"

Julia concludes by highlighting the need for structural change. Rigorous causal research takes time and thought. That's not possible if we're still expecting PhD students to publish 3-5 papers.
April 10, 2025 at 3:21 PM
Reposted by Rohit P. Ojha, DrPH, FACE
This 'dataset first' approach leads some scientists to conduct weak research because 'this is the best we can do in our data'.

If a dataset is inappropriate for a particular question, the best you can do is NOT use it.

It shouldn't be our job, as scientists, to be showcasing datasets.
March 31, 2025 at 12:18 PM
Best wishes for a rapid and full recovery.
March 6, 2025 at 5:39 PM
A good starting point is the seminal article about this metric in a healthcare context: pubs.rsna.org/doi/10.1148/...

Additional nuances are discussed here: www.ahajournals.org/doi/10.1161/...
The meaning and use of the area under a receiver operating characteristic (ROC) curve. | Radiology
A representation and interpretation of the area under a receiver operating characteristic (ROC) curve obtained by the "rating" method, or by mathematical predictions based on patient char...
pubs.rsna.org
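The key result popularized by that first article is that the AUC equals the probability that a randomly chosen diseased case receives a higher score than a randomly chosen non-diseased case (the Wilcoxon/Mann-Whitney statistic). A minimal sketch with simulated scores (the score distributions are invented for illustration; scikit-learn is assumed to be available):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical predicted risks for diseased (y = 1) and non-diseased (y = 0) patients
scores_pos = rng.normal(0.7, 0.15, 200)
scores_neg = rng.normal(0.4, 0.15, 300)
y = np.concatenate([np.ones(200), np.zeros(300)])
scores = np.concatenate([scores_pos, scores_neg])

# AUC computed from the ROC curve...
auc = roc_auc_score(y, scores)

# ...equals the probability that a random diseased case outranks a random
# non-diseased case (ties counted as 1/2)
diffs = scores_pos[:, None] - scores_neg[None, :]
prob = np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

print(round(auc, 4), round(prob, 4))  # the two numbers agree
```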
March 1, 2025 at 3:46 AM
Interesting situation. Perhaps the journal or Editorial Board has a policy to help guide?
February 25, 2025 at 11:58 PM
Just like the phrase, “…results should be interpreted cautiously.” As if results should ever be interpreted recklessly.
February 14, 2025 at 9:50 PM
I’m with you and advocate for further inquiry. I also agree that inference requires multiple sources, but the question is whether some studies are even useful for informing policy. Savitz wrote a nice article about the need for policy-relevant research.

academic.oup.com/aje/article/...
January 9, 2025 at 5:43 PM
Sometimes the available studies are so flawed, or address the question of interest so poorly, that meta-analysis is unwarranted and no amount of sensitivity analysis or post hoc remedies can redeem them. In such cases, there is greater value in providing guidance about how to improve the quality of studies.
January 9, 2025 at 12:37 PM
A meta-analysis is only as good as the included studies. Most studies in this meta-analysis were riddled with selection bias, exposure and outcome misclassification, and confounding. In addition, the standardized mean difference is problematic for meta-analysis.

pubmed.ncbi.nlm.nih.gov/38761102/
Standardization and other approaches to meta-analyze differences in means - PubMed
Meta-analysts often use standardized mean differences (SMD) to combine mean effects from studies in which the dependent variable has been measured with different instruments or scales. In this tutoria...
pubmed.ncbi.nlm.nih.gov
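A small numerical sketch of one problem with the SMD (the numbers are invented, not from the meta-analysis): two studies with an identical raw mean difference yield different SMDs when their outcome standard deviations differ, so pooled SMDs partly reflect between-study variability in populations or measurement rather than the effect itself.

```python
# Hypothetical studies: identical raw mean difference, different outcome SDs
studies = [
    {"name": "Study A", "mean_diff": 5.0, "pooled_sd": 10.0},
    {"name": "Study B", "mean_diff": 5.0, "pooled_sd": 20.0},
]

# Cohen's d = raw mean difference / pooled SD
for s in studies:
    d = s["mean_diff"] / s["pooled_sd"]
    print(f"{s['name']}: raw difference = {s['mean_diff']}, SMD = {d:.2f}")
# Study A: SMD = 0.50; Study B: SMD = 0.25 — same raw effect, different "standardized" effects
```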
January 9, 2025 at 3:24 AM
So nice to see ideas about prediction models that statisticians established years ago making a resurgence in other contexts.

#StatsSky

www.jclinepi.com/article/S089...
Validation, updating and impact of clinical prediction rules: A review
To provide an overview of the research steps that need to follow the development of diagnostic or prognostic prediction rules. These steps include validity assessment, updating (if necessary), and impact assessment of clinical prediction rules.
www.jclinepi.com
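A minimal sketch of two of those steps, external validation followed by updating via logistic recalibration, using simulated data (the cohorts and coefficients are invented; scikit-learn is assumed to be available):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical development cohort
X_dev = rng.normal(size=(1000, 3))
y_dev = rng.binomial(1, 1 / (1 + np.exp(-(X_dev @ [0.8, -0.5, 0.3] - 0.5))))
model = LogisticRegression().fit(X_dev, y_dev)

# Hypothetical validation cohort with a higher baseline risk (shifted intercept)
X_val = rng.normal(size=(1000, 3))
y_val = rng.binomial(1, 1 / (1 + np.exp(-(X_val @ [0.8, -0.5, 0.3] + 0.5))))

# Validation: discrimination transfers, but calibration-in-the-large is off
p = model.predict_proba(X_val)[:, 1]
print("AUC:", round(roc_auc_score(y_val, p), 3))
print("mean predicted:", round(p.mean(), 3), "vs observed:", round(y_val.mean(), 3))

# Updating: refit intercept and slope on the original linear predictor
# (logistic recalibration); the intercept absorbs the baseline-risk shift
lp = model.decision_function(X_val).reshape(-1, 1)
update = LogisticRegression().fit(lp, y_val)
p_updated = update.predict_proba(lp)[:, 1]
print("mean predicted after update:", round(p_updated.mean(), 3))
```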
January 8, 2025 at 8:12 PM