Miguel Hernan
@miguelhernan.org
https://miguelhernan.org/
Using health data to learn what works.
Making #causalinference less casual.
Director, @causalab.bsky.social
Professor, @hsph.harvard.edu
Methods Editor, Annals of Internal Medicine @annalsofim.bsky.social
Roger:
You’ve been ridiculing my posts for years. However, you've never written a paper that presents a thoughtful criticism of our work. Would you consider engaging in a scientific exchange?
Also, a piece of advice: Stop embarrassing yourself and read our papers before posting about them.
Prou.
February 18, 2025 at 3:12 PM
2/ The #TargetTrial framework is a structured procedure to operationalize good practices for study design, data analysis, and reporting.
It avoids design-induced biases but not biases arising from data limitations, such as measurement error and insufficient information to adjust for confounding.
February 18, 2025 at 1:08 PM
3. "Why use methods that require proportional hazards?"
@amjepi.bsky.social 2025
doi.org/10.1093/aje/...
The proportional hazards assumption is generally superfluous. We encourage the use of survival analysis methods that produce absolute risks and that don't require constant hazard ratios.
February 3, 2025 at 2:51 PM
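A minimal sketch of the kind of analysis the post above points to: Kaplan-Meier estimation of absolute risks by arm, with no proportional hazards assumption. The use of the lifelines package, the simulated data, and all numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 2000

# Simulated (randomized) data: treatment lengthens time to event, censoring is random
treated = rng.integers(0, 2, n)
event_time = rng.exponential(scale=np.where(treated == 1, 14.0, 10.0))
censor_time = rng.uniform(0, 20, n)
t_obs = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

# Absolute risk at 5 time units in each arm, estimated without proportional hazards
risk = {}
for arm in (0, 1):
    kmf = KaplanMeierFitter()
    kmf.fit(t_obs[treated == arm], event_observed=event[treated == arm])
    risk[arm] = 1 - kmf.survival_function_at_times(5.0).iloc[0]

print(f"5-unit risk, untreated: {risk[0]:.3f}")
print(f"5-unit risk, treated:   {risk[1]:.3f}")
print(f"risk difference:        {risk[1] - risk[0]:+.3f}")
```

In an observational emulation one would additionally adjust for confounding (e.g., via inverse probability weighting) before contrasting the arm-specific risks.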
3. "Why use methods that require proportional hazards?"
@amjepi.bsky.social 2025
doi.org/10.1093/aje/...
The proportional hazards assumption is generally superfluous. We encourage the use of survival analysis methods that produce absolute risks and that don't require constant hazard ratios.
@amjepi.bsky.social 2025
doi.org/10.1093/aje/...
The proportional hazards assumption is generally superfluous. We encourage the use of survival analysis methods that produce absolute risks and that don't require constant hazard ratios.
2. "Why test for proportional hazards?"
@jama.com 2020
jamanetwork.com/journals/jam...
Several examples show that hazards aren't expected to be proportional because either the effect isn't constant or the selection bias isn't constant.
An exception: null effect of treatment (hazard ratio=1)
...
February 3, 2025 at 2:51 PM
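A toy simulation of the first scenario in the post above (an effect that isn't constant over time): the treatment does nothing for the first 6 months and then halves the hazard, so period-specific contrasts differ even with perfect data. All parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, months, h0 = 200_000, 24, 0.02   # h0: monthly hazard without treatment

def simulate(hazard_fn):
    """Month of event for each person (np.inf if event-free at `months`)."""
    t_event = np.full(n, np.inf)
    at_risk = np.ones(n, dtype=bool)
    for m in range(months):
        fails = at_risk & (rng.random(n) < hazard_fn(m))
        t_event[fails] = m
        at_risk &= ~fails
    return t_event

# Delayed effect: no effect before month 6, hazard halved afterwards
t_untreated = simulate(lambda m: h0)
t_treated   = simulate(lambda m: h0 if m < 6 else h0 / 2)

def period_ratio(start, stop):
    """Conditional risk ratio over [start, stop), a coarse stand-in for the HR."""
    def cond_risk(t):
        at_risk = t >= start
        return ((t < stop) & at_risk).sum() / at_risk.sum()
    return cond_risk(t_treated) / cond_risk(t_untreated)

print(f"months 0-5 : {period_ratio(0, 6):.2f}")    # ~1.0: no effect yet
print(f"months 6-23: {period_ratio(6, 24):.2f}")   # ~0.55: effect kicks in
# No single hazard ratio describes this treatment; period-specific contrasts
# (or absolute risk curves) do.
```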
2. "Why test for proportional hazards?"
@jama.com 2020
jamanetwork.com/journals/jam...
Several examples show that hazards aren't expected to be proportional because either the effect isn't constant or the selection bias isn't constant.
An exception: null effect of treatment (hazard ratio=1)
...
@jama.com 2020
jamanetwork.com/journals/jam...
Several examples show that hazards aren't expected to be proportional because either the effect isn't constant or the selection bias isn't constant.
An exception: null effect of treatment (hazard ratio=1)
...
1. "The hazards of hazard ratios"
EPIDEMIOLOGY 2010
journals.lww.com/epidem/fullt...
Hazard ratios have a built-in selection bias because of depletion of susceptibles. Also, reporting only hazard ratios is insufficient because we also need (adjusted) absolute risks for sound decision making.
...
February 3, 2025 at 2:51 PM
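A toy frailty simulation of the selection bias described above: every individual's hazard is multiplied by the same factor (conditional HR = 3), yet the marginal hazard ratio computed from survivors drifts toward 1 as the most susceptible individuals are depleted from the treated arm first. Parameters are arbitrary and chosen only to make the pattern visible.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300_000
lam0 = 0.05               # baseline event rate per unit time
hr_conditional = 3.0      # constant individual-level (conditional) hazard ratio

# Unmeasured heterogeneity ("frailty"): each person's own multiplier of the rate
frailty = rng.gamma(shape=0.5, scale=2.0, size=n)    # mean 1, large variance

treated = rng.integers(0, 2, n).astype(bool)
rate = frailty * lam0 * np.where(treated, hr_conditional, 1.0)
t = rng.exponential(1.0 / rate)

def marginal_hr(start, stop):
    """Events per person-time in [start, stop), treated vs untreated."""
    def rate_in(arm):
        at_risk = (t >= start) & arm
        person_time = np.minimum(t[at_risk], stop) - start
        events = (t[at_risk] < stop).sum()
        return events / person_time.sum()
    return rate_in(treated) / rate_in(~treated)

for start, stop in [(0, 5), (5, 10), (10, 20)]:
    print(f"marginal HR in [{start:2d},{stop:2d}): {marginal_hr(start, stop):.2f}")
# Despite a constant conditional HR of 3, the marginal HR falls over follow-up:
# high-frailty individuals are depleted sooner in the treated arm, so later
# comparisons are between non-exchangeable survivor groups (built-in selection).
```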
1. "The hazards of hazard ratios"
EPIDEMIOLOGY 2010
journals.lww.com/epidem/fullt...
Hazard ratios have a built-in selection bias because of depletion of susceptibles. Also, reporting only hazard ratios is insufficient because we also need (adjusted) absolute risks for sound decision making.
...
EPIDEMIOLOGY 2010
journals.lww.com/epidem/fullt...
Hazard ratios have a built-in selection bias because of depletion of susceptibles. Also, reporting only hazard ratios is insufficient because we also need (adjusted) absolute risks for sound decision making.
...
In a recent commentary, Mats Stensrud and I argue that the proportional hazards assumption is not only implausible but also unnecessary.
doi.org/10.1093/aje/...
Easy-to-implement survival analysis methods that don't rely on proportional hazards are typically preferred.
The argument in 3 steps 👇
February 3, 2025 at 2:51 PM
2/
Immortal time may occur when individuals
1) are assigned to treatment strategies based on post-eligibility information or
2) are determined to be eligible based on post-assignment information.
#TargetTrial emulation prevents it by synchronizing eligibility and assignment at the start of follow-up.
January 6, 2025 at 4:41 PM
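A toy sketch of how classifying people as treated using post-eligibility information (mechanism 1 in the post above) creates immortal time, and how classifying at time zero removes it. Here treatment truly has no effect; the grace-period fix is a simplification of a full emulation (no cloning or weighting), and all numbers are made up.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 100_000

# Everyone is eligible at day 0; survival does NOT depend on treatment here
death_day = rng.exponential(scale=365.0, size=n)

# Treatment, if initiated at all, starts some time after eligibility;
# initiation can only happen if the person is still alive on that day
planned_start = rng.uniform(0, 180, size=n)
plans_to_start = rng.random(n) < 0.5
initiation_day = np.where(plans_to_start & (death_day > planned_start),
                          planned_start, np.nan)

df = pd.DataFrame({"death_day": death_day, "initiation_day": initiation_day})

# Naive "ever treated vs never treated": treated people were, by construction,
# alive until initiation, so their pre-initiation person-time is immortal
df["ever_treated"] = df["initiation_day"].notna()
print(df.groupby("ever_treated")["death_day"].median())    # spurious "benefit"

# Target-trial-style classification: assignment uses only information available
# by the end of a short grace period, and follow-up starts there for everyone
grace = 30
emul = df[df["death_day"] > grace].copy()
emul["assigned_treated"] = emul["initiation_day"] <= grace  # NaN -> False
print(emul.groupby("assigned_treated")["death_day"].median())  # ~equal, as expected under the null
```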
Agree. Stephen Senn's "Seven myths of randomisation in clinical trials" pubmed.ncbi.nlm.nih.gov/23255195/ is a good place to start.
And the work by Jamie Robins and colleagues helped us understand "the curse of dimensionality" in high-dimensional settings (references in Chapter 10 of "What If").
November 26, 2024 at 1:56 PM
In Chapter 10 of "Causal Inference: What If", we describe arguments for adjustment in randomized trials and refute some fallacies used to advise against adjustment.
www.hsph.harvard.edu/miguel-herna...
A practical challenge is how to incorporate adjustment into the design of #randomizedtrials.
November 26, 2024 at 1:38 PM
When risk factors are imbalanced for non-chance reasons in #observational studies, we call it #confounding.
An interesting point is that, regardless of whether the imbalance results from chance or confounding, we are better off ADJUSTING for prognostic factors that are imbalanced between groups.
November 26, 2024 at 1:38 PM
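A minimal sketch of the point made in the two posts above: in a simulated randomized trial with a strongly prognostic baseline covariate, adjusting for that covariate both removes the impact of any chance imbalance on the estimate and shrinks its standard error. The statsmodels formula API and all numbers are assumptions for illustration, not the book's example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500

# A randomized trial with a strongly prognostic baseline covariate x
x = rng.normal(size=n)
a = rng.integers(0, 2, n)                      # randomized treatment
y = 1.0 * a + 3.0 * x + rng.normal(size=n)     # true effect of a is 1.0

df = pd.DataFrame({"y": y, "a": a, "x": x})
print("chance imbalance in x:", df.groupby("a")["x"].mean().round(2).to_dict())

unadj = smf.ols("y ~ a", data=df).fit()
adj   = smf.ols("y ~ a + x", data=df).fit()

# Adjusting for the prognostic factor corrects for the chance imbalance
# and yields a noticeably smaller standard error
print(f"unadjusted: {unadj.params['a']:.2f} (SE {unadj.bse['a']:.2f})")
print(f"adjusted:   {adj.params['a']:.2f} (SE {adj.bse['a']:.2f})")
```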
Unsurprising. By definition, the 95% confidence intervals of 5% of (perfect) trials aren't expected to include the true value of the effect.
Again: Of 20 randomized trials in which treatment truly has a null effect, the 95% CI of one of them isn't expected to include the null value. Just by chance.
November 26, 2024 at 1:38 PM
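The arithmetic in the post above, as a quick simulation: among many perfect trials of a treatment with a truly null effect, about 1 in 20 normal-approximation 95% CIs exclude the null just by chance. All settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_per_arm = 10_000, 200
excludes_null = 0

for _ in range(n_trials):
    # A perfect trial of a treatment with a truly null effect on a continuous outcome
    treated = rng.normal(size=n_per_arm)
    control = rng.normal(size=n_per_arm)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_arm + control.var(ddof=1) / n_per_arm)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    excludes_null += not (lo <= 0 <= hi)

print(f"fraction of null trials whose 95% CI excludes the null: {excludes_null / n_trials:.3f}")
# ≈ 0.05, i.e., about 1 in 20, just by chance
```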