Alec McClean
@alecmcclean.bsky.social
Postdoc @ NYU Grossman; stats / ML + causal inference
https://alecmcclean.github.io/
Although the estimator is complex, some nice properties arise from the construction: in particular, we can examine the distribution of cumulative weights across subjects, as in single-timepoint weighting
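A minimal numpy sketch of that diagnostic, with placeholder propensity estimates (the array `pi`, and the sizes `n` and `T`, are made up for illustration; this is not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n subjects, T timepoints, with estimated propensity
# scores pi[i, t] = P(A_t = a_t | past) under the regime of interest.
n, T = 1000, 3
pi = rng.uniform(0.2, 0.9, size=(n, T))  # placeholder propensity estimates

# Cumulative inverse-probability weight for each subject:
# w_i = prod_t 1 / pi[i, t]
cum_weights = np.prod(1.0 / pi, axis=1)

# As in single-timepoint weighting, inspect the weight distribution;
# a heavy right tail flags near-positivity violations.
print(np.percentile(cum_weights, [50, 90, 99]))
print("max weight:", cum_weights.max())
```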
July 16, 2025 at 10:36 PM
"Cross-world"-ness --> nuances in identification and estimation

- ID: Need strong sequential randomization, but identification is still possible without positivity
- Est: the new EIF for the doubly robust estimator involves an additional term with a covariate density ratio across the target regimes
July 16, 2025 at 10:36 PM
These effects are

- "Cross-world"
- "Mechanism-relevant" (they target the mean difference in the POs we care about)
- **Not** "policy-relevant" (they're not implementable)

This tradeoff arises elsewhere (mediation, censoring by death). Ours is another example:

What you want to know != what you can implement
July 16, 2025 at 10:36 PM
New paper 📜 We construct longitudinal effects tailored to isolate the mean difference between two POs while adapting to positivity violations under both regimes.

Some notes vv
July 16, 2025 at 10:36 PM
Excited to present on Thursday @eurocim.bsky.social on new work with @idiaz.bsky.social on (smooth) trimming with longitudinal data!

"Longitudinal trimming and smooth trimming with flip and S-flip interventions"

Prelim draft: alecmcclean.github.io/files/LSTTEs...
April 8, 2025 at 3:34 PM
We analyze the effect of mothers’ smoking on infant birthweight, and see that accounting for uncertainty in estimating M alters the CIs for the ATE.

This was fun work with Edward and Zach Branson (sites.google.com/site/zjbrans...) and was a great project to finish my PhD!

9/9
December 28, 2024 at 11:28 AM
We incorporate M into our “calibrated” sensitivity models. Generically:

U <= G·M

where G is the sensitivity parameter.

We outline many choices for U and M and develop three specific models. We then identify bounds on the ATE and give estimators that account for uncertainty in estimating M.
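A toy numeric sketch of how a bound of this form could translate into an ATE interval, assuming a simple additive-bias model where confounding bias is at most G·M (the numbers and the `calibrated_bounds` helper are hypothetical illustrations, not the paper's estimators, which also account for uncertainty in estimating M):

```python
# Hypothetical quantities: a point estimate of the ATE under no unmeasured
# confounding, and M_hat, the estimated analogous *measured* confounding
# obtained by leaving covariates out and re-estimating.
ate_hat = 2.5
M_hat = 0.4

def calibrated_bounds(ate_hat, M_hat, gamma):
    """Bounds on the ATE under the constraint U <= gamma * M
    (a simple additive-bias sensitivity model; illustrative only)."""
    bias_bound = gamma * M_hat
    return ate_hat - bias_bound, ate_hat + bias_bound

# If unmeasured confounding is at most as strong as what was measured
# (gamma = 1), the ATE lies within M_hat of the point estimate:
lo, hi = calibrated_bounds(ate_hat, M_hat, gamma=1.0)
print(lo, hi)
```

Setting gamma = 0 recovers the point estimate, so the interval widens smoothly as the analyst entertains stronger unmeasured confounding relative to the measured benchmark.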

8/9
December 28, 2024 at 11:28 AM
#2 Calibrated sensitivity models arxiv.org/abs/2405.08738

Sensitivity analyses examine how unmeasured confounders (U) alter causal effect estimates (when, e.g., treatment is not randomized). To understand U, we can calibrate by estimating analogous ~measured~ confounding (M), obtained by leaving variables out of the data

6/9
December 28, 2024 at 11:28 AM
3. A slower-than-root-n CLT w/ non-smooth nuisance functions (some “fun” technical results deep in the appendix :D)

4. Simulations illustrating 1-3. Manufacturing Hölder-smooth functions was an interesting challenge!

4/9
December 28, 2024 at 11:28 AM
We focus on estimating the expected conditional covariance. Four main contributions:

1. A structure-agnostic linear expansion for the DCDR estimator. Nuisance function estimation bias matters more than variance.

2. Rates with local linear smoothers for the nuisance functions under Hölder smoothness. Semiparametric efficiency and minimax optimality are possible

3/9
December 28, 2024 at 11:28 AM
#1 arxiv.org/abs/2403.15175

The DCDR estimator is quite new (2018, arxiv.org/abs/1801.09138). It splits the training data and trains the nuisance functions on independent folds.

It can achieve faster convergence rates than the usual DR estimator, which trains the nuisance functions on the same sample.

We analyze the DCDR estimator in detail!
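A rough sketch of the double cross-fitting idea for the expected conditional covariance E[Cov(A, Y | X)], with a simple polynomial fit standing in for arbitrary nuisance estimators (the simulated data and fold scheme are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: X confounds A and Y; the target is the expected
# conditional covariance E[Cov(A, Y | X)].
n = 3000
X = rng.uniform(-1, 1, n)
A = X + rng.normal(0, 1, n)
Y = X + 0.5 * A + rng.normal(0, 1, n)
# Here Cov(A, Y | X) = 0.5 * Var(A | X) = 0.5, so the truth is 0.5.

def fit_predict(x_tr, t_tr, x_ev, deg=2):
    """Toy nuisance estimator: polynomial regression (stand-in for any ML fit)."""
    return np.polyval(np.polyfit(x_tr, t_tr, deg), x_ev)

# Double cross-fitting: the two nuisance functions pi(X) = E[A | X] and
# mu(X) = E[Y | X] are trained on *independent* folds, then both are
# evaluated on a third, held-out fold.
folds = np.array_split(rng.permutation(n), 3)
ests = []
for i in range(3):
    ev, f1, f2 = folds[i], folds[(i + 1) % 3], folds[(i + 2) % 3]
    pi_hat = fit_predict(X[f1], A[f1], X[ev])   # E[A | X] from fold 1
    mu_hat = fit_predict(X[f2], Y[f2], X[ev])   # E[Y | X] from fold 2
    ests.append(np.mean((A[ev] - pi_hat) * (Y[ev] - mu_hat)))

print(np.mean(ests))  # should be close to the truth, 0.5
```

The key contrast with the usual DR estimator is that `pi_hat` and `mu_hat` never see the same training observations, which is what opens the door to the faster convergence rates.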
2/9
December 28, 2024 at 11:28 AM