yiqingxu.bsky.social
@yiqingxu.bsky.social
We also highlighted the need to use the leave-one-out approach to obtain pretrend estimates, thanks to Zikai and Anton's cautionary note: osf.io/ngr3d_v1/ @astrezh.bsky.social
August 25, 2025 at 8:56 PM
In this new version, we fixed many bugs and introduced more features.

Check it out: yiqingxu.org/packages/fect/

Special thanks to Ziyi, Rivka, and Tianzhu for their incredible work and dedication. More features are on the way!
August 25, 2025 at 8:56 PM
Thrilled to share that **fect** has won the 2025 Best Statistical Software Award from the Society of Political Methodology. We're honored!
polmeth.org/statistical-...

To celebrate, we've just released fect v2.0.5 on CRAN & Github 🎉
August 25, 2025 at 8:56 PM
7/ We have prototypes for all the algorithms used in the Element and will soon roll them out in **interflex** for R.

Comments and suggestions are more than welcome!
April 3, 2025 at 5:25 AM
6/ Through simulations & many empirical examples, we find that AIPW-Lasso & PO-Lasso outperform competitors on political science data, while DML requires much larger samples to do better.
April 3, 2025 at 5:25 AM
5/ We then turn to double machine learning (DML), incorporating modern learners such as neural nets, random forests, and histogram-based gradient boosting (HGB) to estimate nuisance parameters and construct signals.
April 3, 2025 at 5:25 AM
4/ The core of the Element is robust estimation strategies.

First, we adapt AIPW-Lasso & Partialing-Out Lasso, both w/ basis expansion & Lasso double selection.

We walk through signal construction step-by-step to aid intuition. For smoothing, we support both kernel and B-spline regressions.
April 3, 2025 at 5:25 AM
3/ We start by defining the estimand, the conditional marginal effect (CME), and presenting the main identification results in the discrete-covariate case.

We then review & improve the semiparametric kernel estimator. The improvements include fully moderated models, adaptive kernels, and uniform confidence intervals.
April 3, 2025 at 5:25 AM
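The kernel estimator's core step can be sketched as a weighted local linear regression at each evaluation point (a toy version on simulated data; the full estimator also handles covariates, adaptive kernels, and uniform confidence intervals):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3000
X = rng.uniform(-2, 2, n)                    # moderator
D = rng.binomial(1, 0.5, n).astype(float)    # binary treatment
Y = (1 + np.sin(X)) * D + 0.5 * X + rng.normal(scale=0.5, size=n)

def kernel_cme(x0, h=0.5):
    """Kernel-weighted local linear regression around X = x0; the
    coefficient on D is the estimated CME at x0. True CME: 1 + sin(x0)."""
    k = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    w = np.sqrt(k)                           # sqrt weights for WLS via lstsq
    Xc = X - x0
    M = np.column_stack([np.ones(n), Xc, D, D * Xc])
    beta, *_ = np.linalg.lstsq(M * w[:, None], Y * w, rcond=None)
    return beta[2]
```

Sweeping `x0` over a grid traces out the CME curve; the `D * Xc` column is what lets the treatment effect itself vary locally rather than being held flat within the window.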
2/ We prepared it for the Cambridge Elements series & previewed parts of it in our response to a blog post last month.

Scholars are often interested in how treatment effects vary with a moderating variable (example below).

We hope this will serve as a useful reference for this common task down the road.
April 3, 2025 at 5:25 AM
Draft “A Practical Guide to Estimating Conditional Marginal Effects: Modern Approaches” is on arXiv: arxiv.org/pdf/2504.01355

w/ two amazing grad students, Jiehan Liu & Ziyi Liu 🧵
April 3, 2025 at 5:25 AM
We streamlined six new DID-like estimators and created this tutorial for implementation in R.
yiqingxu.org/packages/fec...

Hope you no longer need to spend months figuring out what these estimators are and how to use them.
February 21, 2025 at 4:41 AM
7/ Based on the discussion, we propose a set of recommendations, starting with clearly stating the quantity of interest and the key identification & modeling assumptions.

We provide software support for various estimation strategies: yiqingxu.org/packages/int...
February 11, 2025 at 4:14 PM
6/ In contrast, Simonsohn’s proposed GAM implementation is not well suited for estimating the CME: it estimates the conditional average partial effect (CAPE) at a specific treatment value D = d, and we show that the estimated CAPE varies with the choice of d.
February 11, 2025 at 4:14 PM
5/ (3) We review recent advancements in the causal inference literature and highlight doubly robust and double/debiased machine learning estimators as appealing strategies for estimating the CME. Most of these methods are already supported by the *interflex* package in R.
February 11, 2025 at 4:14 PM
4/ (2) We show that the kernel estimator accurately recovers the true CME in the scenarios presented in the critique. The binning estimator, as a diagnostic tool, also performs reasonably well.
February 11, 2025 at 4:14 PM
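As a toy illustration of the binning diagnostic (a difference in means within terciles of the moderator, valid here only because treatment is randomized in the simulated data; the actual binning estimator adjusts for covariates within bins):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3000
X = rng.uniform(0, 1, n)                     # moderator
D = rng.binomial(1, 0.5, n).astype(float)    # randomized binary treatment
Y = 2 * X * D + X + rng.normal(scale=0.5, size=n)   # true CME: 2x

# Estimate the marginal effect of D separately within terciles of X.
edges = np.quantile(X, [0, 1 / 3, 2 / 3, 1])
effects = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (X >= lo) & (X <= hi)
    effects.append(Y[m][D[m] == 1].mean() - Y[m][D[m] == 0].mean())
```

If the true CME rises with the moderator, the bin-level estimates should rise across terciles (here roughly 0.33, 1.0, 1.67), which is what makes binning a useful first diagnostic before fitting a smooth estimator.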
3/ The critique compares the estimated CME to the wrong benchmark, leading to misleading conclusions. For example, in its simulation, the benchmark CME is not *zero* (as incorrectly stated) but increases with the moderator X because X and the treatment D are positively correlated.
February 11, 2025 at 4:14 PM
2/ While the critique is intriguing, it is fundamentally flawed for three reasons. (1) The causal estimand & identification assumptions are not clearly defined.

Most applied research focuses on estimating the CME, whereas the critique centers on CAPE, an estimand rarely used in applied work.
February 11, 2025 at 4:14 PM
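One way to write the distinction, in our notation (a sketch, not necessarily the papers' exact definitions): for a binary treatment the CME is simply E[Y(1) − Y(0) | X = x], and for a continuous treatment

```latex
% CME: partial effect at each unit's observed treatment level,
% averaged among units with X = x
\mathrm{CME}(x) = \mathbb{E}\!\left[\left.\frac{\partial Y_i(d)}{\partial d}\right|_{d = D_i} \,\middle|\, X_i = x\right]

% CAPE: partial effect evaluated at one fixed treatment value d
\mathrm{CAPE}(d, x) = \mathbb{E}\!\left[\frac{\partial Y_i(d)}{\partial d} \,\middle|\, X_i = x\right]
```

Because the CAPE fixes a single d, its estimate moves with the choice of d whenever effects are nonlinear in the treatment, which is the behavior described above.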
Looks like everyone is here 😆

1/ Recently, Professor Uri Simonsohn critiqued Hainmueller, Mummolo & Xu (2019), arguing that the proposed methods fail to recover the conditional marginal effect (CME): datacolada.org/121

We appreciate the critique and offer this response: arxiv.org/pdf/2502.05717 🧵
February 11, 2025 at 4:14 PM