guidoimbens.bsky.social
I am excited to see whether this will work. Irrespective of that, I think journals should experiment more with the publication process.
March 27, 2025 at 1:41 PM
Haha, I thought you would have liked it.
January 7, 2025 at 10:08 PM
Seems an excellent place to start!
January 7, 2025 at 10:07 PM
(2/2) I will also discuss why I am not a big fan of difference-in-differences estimation.
January 3, 2025 at 10:49 PM
Yes, it can be. If you see variation in average outcomes across pods, that variation must show up as a large variance under interference.
November 23, 2024 at 9:04 PM
I think the cluster variance is conservative in the same way the standard Neyman variance is conservative, and there is not much you can do about it.
November 23, 2024 at 8:35 PM
(2/2) That is well defined (assuming no spillovers beyond the pods). A somewhat weird object, tied to the design and the population, but well defined (an over-used term!), and its variance is estimated using clustering. The treatment is defined as being assigned to a group with a random set of peers.
November 23, 2024 at 8:14 PM
(1/2) Re the spillovers: Suppose I have a fixed population of 100 units. My experiment is to assign 50 to a control group, and the other 50 to 5 pods (in Seema's notation) where all they do is have a meeting and talk. I am interested in the average effect of being assigned to the treatment group.
November 23, 2024 at 8:09 PM
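A minimal simulation sketch of the pod design described in this thread (all numbers and the pod-level shock are hypothetical, not from the posts), showing the difference-in-means estimate with the treated arm clustered at the pod level, as the thread suggests:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pod experiment: 100 units, 50 control, 50 treated split into 5 pods.
n_control, n_pods, pod_size = 50, 5, 10
pod = np.repeat(np.arange(n_pods), pod_size)       # pod label for each treated unit
pod_effect = rng.normal(0.0, 0.5, n_pods)          # shared pod-level shock (spillovers)
y_treated = 1.0 + pod_effect[pod] + rng.normal(0, 1, n_pods * pod_size)
y_control = rng.normal(0, 1, n_control)

# Difference-in-means estimate of the average effect of assignment to treatment.
tau_hat = y_treated.mean() - y_control.mean()

# Cluster the treated arm at the pod level: variance of pod means across pods,
# divided by the number of pods. Each control unit is its own cluster.
pod_means = y_treated.reshape(n_pods, pod_size).mean(axis=1)
var_treated = pod_means.var(ddof=1) / n_pods
var_control = y_control.var(ddof=1) / n_control
se_clustered = np.sqrt(var_treated + var_control)

# Naive unit-level SE, which ignores the within-pod correlation.
se_naive = np.sqrt(y_treated.var(ddof=1) / len(y_treated) + var_control)
print(tau_hat, se_clustered, se_naive)
```

With spillovers operating within pods, the clustered standard error will typically exceed the naive one, which is the point of clustering at the pod/group level.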
(5/5) This question is as in Abadie et al (QJE). The answers there are relevant here. If the estimand involves averaging over a large population of therapists, you should cluster; if you want to keep the population of therapists fixed, you should not cluster. There are some technical subtleties.
November 23, 2024 at 8:04 AM
(4/5) Without spillovers (the individual-therapy case with the same therapist for multiple patients in Sterba, Figure 1(a)), the answer depends on the estimand. Do you want the average effect over a large population of therapists, or the average effect given the set of therapists in the experiment?
November 23, 2024 at 8:02 AM
(3/5) To have valid standard errors if there are spillovers I think you ought to cluster at the pod/group level. This is what Cai-Szeidl do, and so I think their standard errors are right. (I don't entirely like the way they justify them, but who cares given that they get it right.)
November 23, 2024 at 8:00 AM
(2/5) In the Cai-Szeidl case, which is like the "group-therapy" case in Figure 1(b) in the Sterba paper, my concern would be about the presence of spillovers between units (firms or individuals) in the same pod/group. (It seems the only reason the groups matter is through interactions/spillovers.)
November 23, 2024 at 7:59 AM
(1/5) Interesting set of questions. I had not seen the Cai-Szeidl paper, nor the Sterba paper referenced by Pustejovsky. Both are quite nice and recommended, especially the classification of cases in Figure 1 in Sterba. Here is my (preliminary) view.
November 23, 2024 at 7:59 AM
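A small sketch of the estimand distinction in (4/5) and (5/5), using made-up therapist and patient counts and therapist-specific effects (all hypothetical): holding the set of therapists fixed across replications gives less sampling variability than redrawing therapists from a large population, which is why the two estimands call for different standard errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ther, n_pat = 25, 20  # hypothetical: 25 therapists, 20 patients each

def one_rep(tau):
    """One experiment: patient-level randomization, therapist-specific effects tau."""
    w = rng.random((n_ther, n_pat)) < 0.5                 # treatment indicator
    y = tau[:, None] * w + rng.normal(0, 1, (n_ther, n_pat))
    return y[w].mean() - y[~w].mean()                     # difference in means

# Estimand 1: average effect for one fixed set of therapists.
tau_fixed = rng.normal(1.0, 1.0, n_ther)
sd_fixed = np.std([one_rep(tau_fixed) for _ in range(4000)])

# Estimand 2: average effect over a large population of therapists,
# so each replication draws a fresh set of therapist effects.
sd_pop = np.std([one_rep(rng.normal(1.0, 1.0, n_ther)) for _ in range(4000)])

print(sd_fixed, sd_pop)
```

The population-of-therapists estimand has the larger sampling standard deviation, reflecting the extra variance from sampling therapists; clustering at the therapist level targets that larger variance.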
But was I actually right?
November 19, 2024 at 2:57 AM
(2/2) Design-based inference is about where you think the uncertainty is coming from. For settings with endogenous treatments you can base the inference on (conditional) randomization of the instrument. See my paper with Paul Rosenbaum in JRSS-A.
November 17, 2024 at 1:47 AM
(1/2) I agree largely with Kevin. I don't see any particular concerns with doing design-based inference in settings outside the potential outcome framework, because that is largely equivalent to other ways of formulating causal questions, e.g., structural equations or graphs.
November 17, 2024 at 1:47 AM
I will present it at the ASSA meetings in San Francisco as one of the methods lectures. I expect it will get recorded there.
November 3, 2024 at 4:20 AM
Not quite my style! There is a lot of value to the DAGs, but perhaps not quite as much as Judea Pearl thinks.
November 3, 2024 at 4:18 AM
We're working on it.
November 3, 2024 at 4:17 AM