Causal Inference 🔴→🟠←🟡.
Machine Learning 🤖🎓.
Data Communication 📈.
Healthcare ⚕️.
Creator of 𝙲𝚊𝚞𝚜𝚊𝚕𝚕𝚒𝚋: https://github.com/IBM/causallib
Website: https://ehud.co
full code: gist.github.com/ehudkr/a9dd3...
post ~ pre + X0 + A0 + X1 + A1
In the simplest case, if the second treatment is completely randomized then it isn't a big deal:
post ~ pre + X0 + A0 + A1
will suffice
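To make the formula concrete, here's a minimal sketch of that simplest case in plain numpy. The simulation (variable names, coefficients, sample size) is entirely made up for illustration: the first treatment A0 depends on the baseline confounder X0, the second treatment A1 is completely randomized, and we fit `post ~ pre + X0 + A0 + A1` by ordinary least squares.

```python
import numpy as np

# Hypothetical simulation of a two-stage treatment, second stage randomized.
rng = np.random.default_rng(0)
n = 10_000
pre = rng.normal(size=n)                      # baseline (pre-treatment) outcome
X0 = rng.normal(size=n)                       # baseline confounder
A0 = rng.binomial(1, 1 / (1 + np.exp(-X0)))   # first treatment depends on X0
A1 = rng.binomial(1, 0.5, size=n)             # second treatment fully randomized
post = 0.5 * pre + 0.8 * X0 + 1.0 * A0 + 2.0 * A1 + rng.normal(size=n)

# Fit post ~ pre + X0 + A0 + A1 with ordinary least squares.
design = np.column_stack([np.ones(n), pre, X0, A0, A1])
coef, *_ = np.linalg.lstsq(design, post, rcond=None)
print(coef)  # intercept, then coefficients for pre, X0, A0, A1
```

Because A1 is randomized, nothing downstream confounds it, so the plain regression recovers both treatment coefficients without needing the intermediate X1.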
The simplest answer would be to ignore that treatment is dynamic. That answers whether treatment _initiation_ is effective, regardless of how patients stick to the protocol.
arxiv.org/abs/2407.12220
On the left, you don't expect ɛ (y=f(X)+ɛ) to be consistent across data splits since it's random, and thus fitting it is bad.
On the right, you don't expect U (ruler) to appear on deployment, so a model using it instead of X (skin) will be wrong.
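The left panel's point can be demoed in a few lines: the noise term ɛ in y = f(X) + ɛ is not shared across data splits, so a model flexible enough to chase it does better on the training split but worse on fresh data. This is a toy sketch with made-up numbers, not the figure's actual data; I'm assuming a linear f and comparing a degree-15 polynomial against a degree-1 fit.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: 2 * x                                  # the true f(X)
x_train = np.sort(rng.uniform(-1, 1, 30))
x_test = np.sort(rng.uniform(-1, 1, 30))
y_train = f(x_train) + rng.normal(scale=0.5, size=30)  # train draw of ɛ
y_test = f(x_test) + rng.normal(scale=0.5, size=30)    # independent ɛ

flexible = np.polyfit(x_train, y_train, deg=15)      # enough capacity to fit ɛ
simple = np.polyfit(x_train, y_train, deg=1)         # matches the true f

mse = lambda coef, x, y: np.mean((np.polyval(coef, x) - y) ** 2)
print("train:", mse(flexible, x_train, y_train), mse(simple, x_train, y_train))
print("test: ", mse(flexible, x_test, y_test), mse(simple, x_test, y_test))
```

The flexible fit wins on train but loses on test, which is exactly why fitting the random ɛ is bad.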
I just think overfitting assumes iid train/test, so I'm not sure cases like the ones described in this paragraph qualify (e.g., a black swan).
I don't think that poor performance from distribution shift would be classified as "overfitting".
projecteuclid.org/journals/bay...
anyways, it's here now, and I'm glad I sat down and wrote it because I discovered I had to iron out some personal misunderstandings.
so without further ado and with even shinier visuals, a post about double cross-fitting for #causalinference
ehud.co/blog/2024/03...
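For a rough flavor of what cross-fitting means here (the post itself has the full treatment and nicer visuals), here's a minimal sketch of a "double" variant I'm assuming for illustration: rotate three folds so the propensity model, the outcome models, and the AIPW estimate each get their own fold. The simulated data and model choices are mine, not the post's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical simulation: confounded binary treatment A, true effect = 2.0.
rng = np.random.default_rng(1)
n = 6_000
X = rng.normal(size=(n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # treatment depends on X
y = X @ np.array([1.0, -0.5]) + 2.0 * A + rng.normal(size=n)

folds = np.array_split(rng.permutation(n), 3)
estimates = []
for i in range(3):
    prop_idx, out_idx, est_idx = folds[i], folds[(i + 1) % 3], folds[(i + 2) % 3]
    # Each nuisance gets its own fold (the "double" in double cross-fitting):
    ps = LogisticRegression().fit(X[prop_idx], A[prop_idx])
    out1 = LinearRegression().fit(X[out_idx][A[out_idx] == 1], y[out_idx][A[out_idx] == 1])
    out0 = LinearRegression().fit(X[out_idx][A[out_idx] == 0], y[out_idx][A[out_idx] == 0])
    # The AIPW estimate is computed on the third, held-out fold:
    Xe, Ae, ye = X[est_idx], A[est_idx], y[est_idx]
    e = ps.predict_proba(Xe)[:, 1]
    mu1, mu0 = out1.predict(Xe), out0.predict(Xe)
    ate = np.mean(mu1 - mu0 + Ae * (ye - mu1) / e - (1 - Ae) * (ye - mu0) / (1 - e))
    estimates.append(ate)

print(np.mean(estimates))  # should land near the true effect of 2.0
```

Because no fold's nuisance models ever see the data they're evaluated on, overfitting in the nuisances doesn't leak into the effect estimate.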