Arman Oganisian
@stablemarkets.bsky.social
Statistician | Assistant professor @ Brown University Dept of Biostatistics | Developing nonparametric Bayesian methods for causal inference.

Research site: stablemarkets.netlify.app

#statsky
Regarding the hazard, it’s sensible and probabilistically valid. The resulting likelihood is the same as that of a piecewise-constant proportional hazards model.

We exploit this to fit such models efficiently in Stan: arxiv.org/abs/2310.12358
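
For reference, a generic piecewise-constant proportional hazards likelihood (standard notation, not a claim about the exact parameterization in the linked paper): with cut points $0 = s_0 < s_1 < \dots < s_K$ and baseline hazard $\lambda_k$ on $(s_{k-1}, s_k]$, subject $i$'s contribution is

$L_i(\lambda, \beta) = \prod_{k:\, s_{k-1} < t_i} (\lambda_k e^{x_i^\top \beta})^{d_{ik}} \exp\{-\lambda_k e^{x_i^\top \beta}\, r_{ik}\}$,

where $d_{ik}$ indicates an event in interval $k$ and $r_{ik} = \min(t_i, s_k) - s_{k-1}$ is the time at risk in interval $k$. Each factor is a Poisson kernel for $d_{ik}$ with mean $\lambda_k e^{x_i^\top \beta} r_{ik}$, which is one reason such models are convenient to fit with general-purpose samplers like Stan.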
October 24, 2025 at 5:39 PM
Bayesian nonparametric models allow flexibility in regions w/ lots of data, while letting priors on sensitivity parameters drive inference in regions w/o data (see bottom-right plot).

Uncertainty about *all* unknowns flows into a single posterior for the causal quantity!
September 24, 2025 at 3:41 PM
Are some patients missing outcome info? Condition on data, make inferences about unknown {regression lines & missing values}.

Think the missingness is not at-random? Condition on data, make inferences about unknown {regression lines, missing values, & sensitivity parameters}.
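
A minimal sketch of that workflow, under toy assumptions I'm adding for illustration (a normal outcome model, conjugate Gibbs updates, and a single not-at-random shift parameter delta given a prior; not from any particular paper):

import numpy as np

rng = np.random.default_rng(1)

# Toy data (hypothetical): 30% of outcomes unobserved
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
miss = rng.random(n) < 0.3
y_obs = np.where(miss, np.nan, y)

# Gibbs sampler over regression coefficients, residual variance,
# the missing outcomes, and a not-at-random shift delta whose
# posterior equals its prior (it is not identified by the data).
X = np.column_stack([np.ones(n), x])
beta, sigma2 = np.zeros(2), 1.0
y_fill = np.where(miss, np.nanmean(y_obs), y_obs)
prior_prec = np.eye(2) / 100.0      # vague N(0, 100 I) prior on beta
delta_prior_sd = 0.5                # prior scale for the MNAR shift
draws = []

for it in range(2000):
    delta = rng.normal(0.0, delta_prior_sd)           # draw from its prior
    mu = X @ beta
    y_fill[miss] = mu[miss] + delta + rng.normal(scale=np.sqrt(sigma2), size=miss.sum())
    V = np.linalg.inv(X.T @ X / sigma2 + prior_prec)  # beta | rest (conjugate normal)
    beta = rng.multivariate_normal(V @ (X.T @ y_fill / sigma2), V)
    resid = y_fill - X @ beta                         # sigma2 | rest, IG(1, 1) prior
    sigma2 = 1.0 / rng.gamma(1.0 + n / 2.0, 1.0 / (1.0 + 0.5 * resid @ resid))
    draws.append(np.r_[beta, sigma2, delta])

draws = np.array(draws)[500:]                         # drop burn-in
print("posterior means (b0, b1, sigma2, delta):", draws.mean(axis=0).round(2))

Because delta is never identified by the likelihood, its prior flows directly into the posterior for everything downstream - which is the "single posterior" point above.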
September 24, 2025 at 3:41 PM
We also discuss differences and similarities with methods for irregular visit processes that inverse-weight by the visit process intensity.
September 2, 2025 at 2:50 PM
... and how identifiability conditions may be read off a Single World Intervention Graph (SWIG) template for the implicit DTR.
September 2, 2025 at 2:50 PM
We formalize connections between g-methods that use discrete-time versus continuous-time adjustment models, and discuss the relative pros/cons of each...
September 2, 2025 at 2:50 PM
Progress can be made by (1) casting waiting times between decisions as potential outcomes of previous treatments and (2) framing subsequent decisions as outputs of an implicit dynamic treatment rule (DTR).
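
In symbols (my notation, not necessarily the paper's): treat the waiting time $W_k$ between decisions $k-1$ and $k$ as a potential outcome $W_k^{\bar a_{k-1}}$ of the earlier treatments $\bar a_{k-1} = (a_1, \dots, a_{k-1})$, and view each observed decision as the output of an implicit rule $A_k = d_k(\bar L_k, \bar A_{k-1}, \bar W_k)$ applied to the covariate, treatment, and timing history up to decision $k$.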
September 2, 2025 at 2:50 PM
I hope it’s helpful to others as they build Bayesian methods into their causal inference work.
August 22, 2025 at 12:36 PM
The “E” in the g-formula represents an expectation over the population distribution of the outcome, whereas in Rubin’s Bayesian imputations, “E” represents an expectation over the joint posterior distribution of the counterfactuals. These are different distributions in general.
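
In standard notation (definitions only, not a quote from either framework): the g-formula expectation is over the population covariate distribution,

$E[Y^a] = \int E[Y \mid A = a, L = \ell] \, dF_L(\ell)$,

while Rubin-style Bayesian imputation targets something like $E[\frac{1}{n}\sum_i (Y_i^1 - Y_i^0) \mid \text{observed data}]$, an expectation over the joint posterior of the sampled units' missing counterfactuals.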
August 19, 2025 at 1:06 AM
I see what you’re saying. But the “filled in” values are just Monte Carlo simulations for approximating the integrals over the time-varying confounder distributions, because the integrals have no closed-form solutions in general. Monte Carlo approximation of integrals is distinct from imputation, in my opinion.
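
As a generic two-period example (my notation, not the paper under discussion): with baseline confounder $L_0$, treatment $A_0$, and time-varying confounder $L_1$, the quantity being computed is

$E[Y^{\bar a}] = \iint E[Y \mid \bar A = \bar a, L_0 = \ell_0, L_1 = \ell_1] \, dF(\ell_1 \mid a_0, \ell_0) \, dF(\ell_0) \approx \frac{1}{M} \sum_{m=1}^{M} E[Y \mid \bar a, \ell_0^{(m)}, \ell_1^{(m)}]$,

with $\ell_0^{(m)} \sim \hat F(\ell_0)$ and $\ell_1^{(m)} \sim \hat F(\ell_1 \mid a_0, \ell_0^{(m)})$: draws used to approximate the integral, not to fill in any individual's record.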
August 19, 2025 at 1:03 AM
I see yeah. Imposing common betas is such a pet peeve.
August 19, 2025 at 12:42 AM
Under additional cross-world assumptions implicitly made in this paper (e.g. assuming Y^1 & Y^0 are independent), drawing the POs and averaging the differences may recover the PATE for large n. But these draws are more like Monte Carlo simulations that approximate the conditional expectations than imputations.
August 19, 2025 at 12:39 AM
Thanks for the reference! I'm not convinced the g-formula can be seen as an imputation-based estimator. For one thing, Rubin's imputation-based approach targets the sample ATE. The g-formula, on the other hand, identifies the population ATE. These have different variances & interpretations.
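
For concreteness, the sample ATE is $\frac{1}{n}\sum_{i=1}^{n} (Y_i^1 - Y_i^0)$, a function of the $n$ units actually in the study, while the population ATE is $E[Y^1 - Y^0]$, a functional of the population distribution; estimators of the two carry different uncertainty statements.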
August 19, 2025 at 12:39 AM
Just curious: in what sense is this “imputation”-based? To me this is just estimating the conditional expectation at empirically observed values of X. It’s not like values of Y are being drawn or multiply imputed from some distribution.
August 18, 2025 at 11:41 PM
Such a “check” is a strong yet implicit prior belief that “if parallel trends (PT) holds in the pre-period, it must also hold in the post-period.”

When estimating the effect of the Philadelphia beverage tax, Seong makes this explicit via a prior process on sensitivity parameters encoding departures from PT.
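
One common way to write that down (generic difference-in-differences notation, not necessarily the exact parameterization in Seong's paper): define the post-period departure from parallel trends as

$\delta = E[Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid \text{treated}] - E[Y_{\text{post}}(0) - Y_{\text{pre}}(0) \mid \text{control}]$,

and place a prior on $\delta$, e.g. centered at 0 with a scale informed by pre-period fit, instead of fixing $\delta = 0$.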
August 10, 2025 at 6:10 PM
As you suggested in the post, in my experience the situation is a lot better in biostatistics vs pure statistics departments, at least at the places I’ve been. I could also just be lucky - I have a great group of collaborators and can afford to be selective in the work I take on.
July 15, 2025 at 9:40 PM
If someone raises this concern, then the burden is on them to bring forward even a single plausible covariate - one that is sufficiently unrelated to all the other covariates controlled for - with a realistic dual effect on treatment and outcomes. Otherwise they shouldn’t bring it up.
July 5, 2025 at 3:06 PM
On the other hand: we have causal critiques of the sort “this is wrong because there may be unmeasured confounding.”

Such critiques without solutions are intellectually lazy and do not add scientific value - after all, unmeasured confounding is an issue in all observational causal studies.
July 5, 2025 at 3:05 PM