Cory McCartan
@corymccartan.com
Asst. Prof. of Statistics & Political Science at Penn State. I study stats methods, gerrymandering, & elections. Bayesian. Founder of UGSDW and proud alum of HGSU-UAW L. 5118.
corymccartan.com
Looks like Mamdani will end the night with around 50.8% of the vote
November 5, 2025 at 3:43 AM
Reposted by Cory McCartan
My main takeaway on the “moderation” debate is that Democrats would be better served by other debates besides left-right positioning, like how to develop new valence issues (corruption!) as wedge issues, and how to get attention for their policy proposals in the first place
October 28, 2025 at 1:16 AM
October 21, 2025 at 3:31 PM
As we showed in our paper, `seine` can strongly outperform existing methods, which generally do not control for covariates (or do not do so efficiently)
October 21, 2025 at 3:31 PM
Instead of a plot, you can also calculate a robustness value, which is a single-number summary of each estimand's sensitivity
October 21, 2025 at 3:31 PM
The fun does not stop there! `seine` lets you immediately turn around and conduct a sensitivity analysis on your estimates. The `ei_sens()` function returns a data frame with different sensitivity parameters and biases. By default this can be plotted! Benchmarking to observed covariates works too!
October 21, 2025 at 3:31 PM
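A minimal sketch of what this step might look like in R. Only `ei_sens()` is named in the post; the input object `est` and the use of the base `plot()` generic are assumptions, not documented API:

```r
library(seine)

# Assumes `est` holds the DML estimates from the ei_est() step described
# downthread; the exact input ei_sens() expects is an assumption here.
sens <- ei_sens(est)  # data frame of sensitivity parameters and implied biases
plot(sens)            # "by default this can be plotted", per the post
```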
The `ei_est()` function then does DML to estimate your main quantities of interest, and returns them in a tidy format.
You can subset to produce subgroup estimates, and estimate linear contrasts as well—all with proper uncertainty quantification
October 21, 2025 at 3:31 PM
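A hypothetical sketch of this step in R. `ei_est()` and its tidy output come from the post; the inputs (a fitted regression model, a Riesz representer, and the data) and every argument name here are assumptions based on the workflow described downthread:

```r
library(seine)

# Assumed inputs: `fit` (regression model) and `rr` (Riesz representer)
# from the fitting step described downthread; argument names are guesses.
est <- ei_est(fit, rr, data = precincts)
est  # tidy data frame of the main quantities of interest
```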
Then you can quickly & easily fit a regression model and a Riesz representer to the EI specification
These are implemented efficiently with the SVD, and the penalty is tuned automatically for the regression model!
October 21, 2025 at 3:31 PM
You can then set up an EI specification which describes the problem and the covariates you will use.
October 21, 2025 at 3:31 PM
`seine` first helps you preprocess your data into a format suitable for applying EI. Any messiness, such as proportions not summing to 1, is handled.
October 21, 2025 at 3:31 PM
Yes, conditioning on covariates leads to much weaker assumptions than EI of yore! Akin to the difference between estimating causal effects with a difference in means vs. a regression adjustment.
In our application, we control for geography at a fine scale, & basically recover Freedman's nbhd model
September 26, 2025 at 9:28 PM