Edward H. Kennedy
@edwardhkennedy.bsky.social
assoc prof of statistics & data science at Carnegie Mellon

https://www.ehkennedy.com/

interested in causality, machine learning, nonparametrics, public policy, etc
"Randomized trials should be used to answer any causal question that can be so studied...

But the reality is that observational methods are used everyday to answer pressing causal questions that cannot be studied in randomized trials."

- Jamie Robins, 2002
tinyurl.com/4yuxfxes
tinyurl.com/zncp39mr
January 13, 2025 at 2:49 AM
Should we use structure-agnostic (arxiv.org/abs/2305.04116) or smooth (arxiv.org/pdf/1512.02174) models for causal inference?

Why not both?

Here we propose a novel hybrid smooth+agnostic model, give minimax rates, & develop new optimal methods

arxiv.org/pdf/2405.08525

-> fast rates under weaker conditions
December 13, 2024 at 4:07 AM
New paper! arxiv.org/pdf/2411.14285

Led by amazing postdoc Alex Levis: www.awlevis.com/about/

We show causal effects of new "soft" interventions are less sensitive to unmeasured confounding

& study which effects are *least* sensitive to confounding -> makes new connections to optimal transport
November 22, 2024 at 4:39 AM
In this paper we consider incremental effects of continuous exposures:

arxiv.org/abs/2409.11967

i.e., soft interventions on cts treatments like dose, duration, frequency

it turns out exponential tilts preserve all the nice properties of incremental effects with binary trt (arxiv.org/abs/1704.00211)
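
For concreteness, the exponential tilt shifts the observed treatment density π(a|x) to something like this (a rough sketch - notation may not match the paper exactly):

$$\pi_\delta(a \mid x) \;=\; \frac{e^{\delta a}\,\pi(a \mid x)}{E\{e^{\delta A} \mid X = x\}}$$

For binary A this just multiplies the odds of treatment by e^δ, i.e. the incremental propensity score intervention e^δ π(x) / {e^δ π(x) + 1 − π(x)} from the binary-trt paper.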
November 13, 2024 at 4:26 AM
Very excited about this paper!

arxiv.org/abs/2305.04116

We study if one can improve popular semiparametric / doubly robust / DML causal effect estimators - w/o adding structural assumptions...

Short answer: nope!

Turns out these methods are minimax optimal here

www.ehkennedy.com/uploads/5/8/...
November 12, 2024 at 3:30 AM
there are *surprisingly many* open problems when it comes to theory/methods in causal inference

check out this talk by Siva Balakrishnan for an excellent & comprehensive summary of the state of the art

www.youtube.com/live/Mnum0Ox...

www.stat.cmu.edu/~siva/
November 12, 2024 at 3:15 AM
Short story - the ideas behind “causal ML” and “double machine learning” go back at least 40 years

Here is an estimator from a 1982 textbook that today would be called double machine learning or something similar
September 30, 2024 at 3:32 AM
Pfanzagl also points out our prized n^(1/4) nuisance rate conditions, which allow fast rates & inference for the parameter of interest, even w/ flexible np smoothing / lasso etc

Even throws in a mention of sample-splitting / cross-fitting! (from Hasminskii & Ibragimov 1979)
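
For anyone who hasn't seen cross-fitting before, here's a minimal sketch in Python (binary treatment a, outcome y, confounders X; the random forests & function name are illustrative placeholders, not the particular estimators from the book):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def cross_fit_nuisances(X, a, y, n_splits=2, seed=0):
    """Sample-splitting / cross-fitting: each observation's nuisance
    estimates (outcome regressions & propensity score) come from models
    fit on the other fold(s), so flexible ML fits don't bias the
    downstream one-step estimator through overfitting."""
    n = len(y)
    mu1_hat, mu0_hat, pi_hat = np.zeros(n), np.zeros(n), np.zeros(n)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in folds.split(X):
        treated, control = train[a[train] == 1], train[a[train] == 0]
        mu1 = RandomForestRegressor(random_state=seed).fit(X[treated], y[treated])
        mu0 = RandomForestRegressor(random_state=seed).fit(X[control], y[control])
        ps = RandomForestClassifier(random_state=seed).fit(X[train], a[train])
        mu1_hat[test] = mu1.predict(X[test])            # est. of E(Y | X, A=1)
        mu0_hat[test] = mu0.predict(X[test])            # est. of E(Y | X, A=0)
        pi_hat[test] = ps.predict_proba(X[test])[:, 1]  # est. of P(A=1 | X)
    return mu1_hat, mu0_hat, pi_hat
```

Each nuisance only needs to converge at ~n^(1/4) rates for the downstream one-step estimator to be root-n consistent & give valid CIs.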
September 30, 2024 at 3:27 AM
Once you have a pathwise differentiable parameter, a natural estimator is a debiased plug-in, which subtracts off the avg of estimated influence fn

Pfanzagl gives this 1-step estimator here - in causal inference this is exactly the doubly robust / DML estimator you know & love!
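
For the ATE under no unmeasured confounding this is the familiar AIPW form - a minimal sketch (mu1_hat, mu0_hat, pi_hat are cross-fit nuisance estimates as in the previous sketch; names are illustrative):

```python
import numpy as np

def aipw_ate(y, a, mu1_hat, mu0_hat, pi_hat):
    """Debiased plug-in / one-step estimator of E(Y^1 - Y^0): the plug-in
    mean of (mu1_hat - mu0_hat) corrected by the average of the estimated
    (efficient) influence function."""
    phi = (mu1_hat - mu0_hat
           + a * (y - mu1_hat) / pi_hat
           - (1 - a) * (y - mu0_hat) / (1 - pi_hat))
    psi_hat = phi.mean()
    se = phi.std(ddof=1) / np.sqrt(len(y))  # Wald-type standard error
    return psi_hat, se
```

e.g. aipw_ate(y, a, *cross_fit_nuisances(X, a, y)) gives the point estimate & SE, so psi_hat ± 1.96*se is the usual 95% CI.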
September 30, 2024 at 3:26 AM
Pfanzagl uses pathwise differentiability above, but w/ regularity conditions this is just a distributional Taylor expansion, which is easier to think about

I note this in my tutorial here:
x.com/edwardhkenne...

Also v related to Neyman orthogonality - worth a separate thread
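
Roughly, the expansion in question (give or take notation):

$$\psi(\bar{P}) - \psi(P) \;=\; \int \varphi(z; \bar{P})\, d(\bar{P} - P)(z) \;+\; R_2(\bar{P}, P)$$

where φ is the influence function / gradient (mean zero under its own distribution) & R_2 is a second-order remainder - so the one-step estimator ψ(P̂) + P_n φ(·; P̂) misses ψ(P) only by R_2 plus an empirical-process term.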
September 30, 2024 at 3:22 AM
Here’s Pfanzagl on the gradient of a functional/parameter, aka derivative term in a von Mises expansion, aka influence function, aka Neyman-orthogonal score

Richard von Mises first characterized smoothness this way for stats in the 30s/40s! eg:

projecteuclid.org/euclid.aoms/...
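
Canonical example: for ψ(P) = E{E(Y | X, A=1)}, the mean outcome if everyone were treated (under the usual identification assumptions), the efficient influence function is

$$\varphi(Z; P) \;=\; \frac{A}{\pi(X)}\{Y - \mu(X)\} + \mu(X) - \psi(P)$$

with π(X) = P(A=1 | X) & μ(X) = E(Y | X, A=1) - exactly the correction term in the AIPW / DML estimator sketched above.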
September 30, 2024 at 3:18 AM
Here’s a (now 35 yr old) book review where Bickel gives some history:

projecteuclid.org/euclid.aos/1...

A more detailed discussion of the early days is in the comprehensive semiparametrics text by BKRW:

springer.com/gp/book/9780...

Would love to hear if others know good historical resources!
September 30, 2024 at 3:13 AM
Some amazing quotes by Herbert Robbins here:

jiayinggu.weebly.com/uploads/3/8/...

"Why does it take so long? Why haven't I done ten times as much as I have? Why do I bother over & over again trying the wrong way when the right way was staring me in the face all the time? I don't know."
December 13, 2023 at 3:58 AM
Larry Wasserman’s talk on “Problems with Bayesian causal inference”

youtu.be/sZyyaNdvfto?...
November 5, 2023 at 2:37 AM
Check out this great paper by Mateo Rubio (scholar.google.com/citations?us...), rigorously estimating causal effects of the "cycle of violence"

Superb example of how to tell a story including average effects, heterogeneous effects, & sensitivity analysis (i.e., relaxing assumptions abt confounding)
October 21, 2023 at 2:55 AM
"There are two types of statisticians: those who do causal inference and those who lie about it."

- Larry Wasserman #statsquotes

www.jstor.org/stable/26699...
October 16, 2023 at 1:42 AM
Matteo Bonvini made an R package for our “proportion of unmeasured confounding” sensitivity approach here:

github.com/matteobonvin...

Paper:

arxiv.org/abs/1912.02793
October 14, 2023 at 11:21 PM
Yes, X is the confounders and Y1 = Y^{a=1} is the potential outcome.

There are lots of different kinds of sensitivity analysis models - the Cinelli & Hazlett one is more specialized/structured, & assumes more.

The lit review in this paper might be useful:

arxiv.org/pdf/2104.083...
October 14, 2023 at 11:20 AM