yiqingxu.bsky.social
@yiqingxu.bsky.social
Thanks! Please send us suggestions if you have any!
August 25, 2025 at 8:58 PM
We also highlighted the need to use the leave-one-out approach to obtain pretrend estimates, thanks to Zikai and Anton's cautionary note: osf.io/ngr3d_v1/ @astrezh.bsky.social
August 25, 2025 at 8:56 PM
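For intuition, here is a minimal Python sketch of the leave-one-out idea in the post above, under a simple additive two-way model. This is illustrative only, not fect's implementation; the function name and simulated data are made up. The point: when estimating the pre-trend at period s, treated units' data from period s are left out of the imputation fit, so the period-s prediction error is not mechanically shrunk toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: Y = unit FE + common time trend + noise, no true pre-trend,
# so the leave-one-out estimates below should hover around zero.
N, T = 200, 8
treated = np.arange(N) < 50                    # units that are later treated
alpha = rng.normal(size=N)                     # unit fixed effects
xi = np.linspace(0.0, 1.0, T)                  # common time effects
Y = alpha[:, None] + xi[None, :] + rng.normal(scale=0.2, size=(N, T))

def loo_pretrend(Y, treated, s):
    """Pre-trend estimate for pre-period s, leaving period s out of the fit.

    Treated units' fixed effects are estimated from all pre-periods except s,
    so the period-s prediction error is not biased toward zero by construction.
    """
    keep = [t for t in range(Y.shape[1]) if t != s]
    xi_hat = Y[~treated].mean(axis=0)                    # time effects from controls
    alpha_hat = (Y[treated][:, keep] - xi_hat[keep]).mean(axis=1)
    Y0_hat = alpha_hat + xi_hat[s]                       # imputed counterfactual
    return (Y[treated, s] - Y0_hat).mean()

ests = np.array([loo_pretrend(Y, treated, s) for s in range(T)])
```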
In this new version, we fixed many bugs and introduced more features.

Check it out: yiqingxu.org/packages/fect/

Special thanks to Ziyi, Rivka, and Tianzhu for their incredible work and dedication. More features are on the way!
August 25, 2025 at 8:56 PM
Hi Vincent, thank you! Yes, we're working on a user manual that will incorporate these datasets. It will take a few weeks!
April 27, 2025 at 6:32 AM
7/ We have prototypes for all the algorithms used in the Element and will soon roll them out in **interflex** for R.

Comments and suggestions are more than welcome!
April 3, 2025 at 5:25 AM
6/ Through simulations & many empirical examples, we find that AIPW-Lasso & PO-Lasso outperform competitors on typical political science data, while DML requires much larger samples to do better.
April 3, 2025 at 5:25 AM
5/ We then turn to double machine learning (DML), incorporating modern learners such as neural nets, random forests, and HGB to estimate nuisance parameters and construct signals.
April 3, 2025 at 5:25 AM
4/ The core of the Element is robust estimation strategies.

First, we adapt AIPW-Lasso & Partialing-Out Lasso (PO-Lasso), both w/ basis expansion & Lasso double selection.

We walk through signal construction step by step to aid intuition. For smoothing, we support both kernel and B-spline regressions.
April 3, 2025 at 5:25 AM
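A rough Python sketch of the signal-then-smooth pipeline described above, with a binary treatment: build an AIPW pseudo-outcome (the "signal") from Lasso-based nuisance estimates on a basis expansion, then kernel-smooth the signals over the moderator to trace out the CME. This is a simplified illustration, not interflex's implementation; all names and the simulated data are made up, and for brevity the nuisances are fit in-sample rather than cross-fit.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

rng = np.random.default_rng(2)

# Simulated data: binary D, moderator X, true CME(x) = 1 + x
n = 3000
X = rng.uniform(-1, 1, size=n)
ps = 1 / (1 + np.exp(-X))                      # propensity depends on X
D = rng.binomial(1, ps)
Y = (1 + X) * D + X + rng.normal(scale=0.5, size=n)

# Nuisance models on a polynomial basis expansion of the moderator
B = np.column_stack([X, X**2, X**3])
pi_hat = LogisticRegressionCV(cv=5).fit(B, D).predict_proba(B)[:, 1]
mu1 = LassoCV(cv=5).fit(B[D == 1], Y[D == 1]).predict(B)
mu0 = LassoCV(cv=5).fit(B[D == 0], Y[D == 0]).predict(B)

# AIPW signal: a noisy, unbiased proxy for the unit-level treatment effect
signal = (mu1 - mu0
          + D * (Y - mu1) / pi_hat
          - (1 - D) * (Y - mu0) / (1 - pi_hat))

def kernel_cme(x0, h=0.3):
    """Nadaraya-Watson smoothing of the AIPW signals at moderator value x0."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return (w @ signal) / w.sum()

grid = np.linspace(-0.8, 0.8, 5)
cme = np.array([kernel_cme(x) for x in grid])   # roughly traces 1 + x
```

B-spline smoothing replaces the kernel-weighted average in the last step with a spline regression of the signals on the moderator.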
3/ We start by defining the estimand, the conditional marginal effect (CME), and presenting the main identification results in the discrete-covariate case.

We then review & improve the semiparametric kernel estimator. The improvements include fully moderated models, adaptive kernels, and uniform confidence intervals.
April 3, 2025 at 5:25 AM
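For readers new to the "fully moderated model" mentioned above, here is a minimal Python sketch of the idea (illustrative only, not interflex's R implementation; local_cme and the simulated data are invented): at each moderator value, run a kernel-weighted regression in which every coefficient, not just the one on the treatment, is allowed to vary with the moderator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: the effect of D varies with moderator X, true CME(x) = 1 + x
n = 2000
X = rng.uniform(-1, 1, n)
D = rng.binomial(1, 0.5, n).astype(float)
Z = rng.normal(size=n)                          # an additional covariate
Y = (1 + X) * D + 0.5 * X + 0.3 * Z + rng.normal(scale=0.5, size=n)

def local_cme(x0, h=0.3):
    """Fully moderated local regression at moderator value x0.

    Kernel-weighted least squares of Y on [1, D, X-x0, D*(X-x0), Z],
    so the intercept, slopes, and covariate coefficients all shift with
    the moderator. The estimated CME at x0 is the coefficient on D.
    """
    sw = np.exp(-0.25 * ((X - x0) / h) ** 2)    # sqrt of Gaussian kernel weights
    M = np.column_stack([np.ones(n), D, X - x0, D * (X - x0), Z])
    beta = np.linalg.lstsq(M * sw[:, None], Y * sw, rcond=None)[0]
    return float(beta[1])

cme_0 = local_cme(0.0)      # true value: 1.0
cme_half = local_cme(0.5)   # true value: 1.5
```

Repeating this over a grid of moderator values and adding a sup-t critical value in place of the pointwise one yields the uniform confidence band.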
2/ We prepared it for a Cambridge Element & previewed parts of it in our response to a blog post last month.

Scholars are often interested in how treatment effects vary with a moderating variable (example below).

We hope this will serve as a useful reference for this common task down the road.
April 3, 2025 at 5:25 AM
Hah, thanks. You're the reason I'm here, Guilherme
February 11, 2025 at 4:15 PM
8/ Finally, we thank Professor Simonsohn for his thoughtful critique, which brings renewed attention to the estimation and testing of conditional relationships.

w/ Jens Hainmueller, Jiehan Liu, Ziyi Liu & Jonathan Mummolo
February 11, 2025 at 4:14 PM
7/ Based on the discussion, we propose a set of recommendations, starting with clearly stating the quantity of interest and the key identification & modeling assumptions.

We provide software support for various estimation strategies: yiqingxu.org/packages/int...
February 11, 2025 at 4:14 PM