Chang (2020). Double/debiased machine learning for difference-in-differences models. The Econometrics Journal, 23(2), 177–191.

The following data can be observed:
- Pre-intervention outcomes
- Post-intervention outcomes
- Treatment indicator (whether the unit is in the treatment group or not)
- Covariates
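As a sketch, this observed data could be laid out as one row per unit (the column names below are illustrative, not from the paper):

```python
import pandas as pd

# Hypothetical layout of the observed data; column names are illustrative.
df = pd.DataFrame({
    "y_pre":  [1.0, 2.0, 1.5, 0.5],   # pre-intervention outcome
    "y_post": [1.2, 3.1, 1.6, 0.9],   # post-intervention outcome
    "d":      [0, 1, 0, 1],           # treatment group (1) or not (0)
    "x0":     [0.3, -1.2, 0.8, 0.1],  # covariates x0, x1, ...
})
# The pre-vs-post difference ("Diff") used later as the regression label.
df["diff"] = df["y_post"] - df["y_pre"]
print(df)
```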
Assumptions:
- Conditional parallel trend: the potential outcomes (the counterfactual outcomes if no intervention is received) of the treatment and control groups follow parallel trends after conditioning on X. A violation of the unconditional parallel trend can be repaired by conditioning on X.
- Common support: the support of the propensity score of the treated group is a subset of the support for the untreated, so the two groups are comparable. This is the same constraint placed on ATT estimation in other propensity score methods.

[Figure: trend plots given X, and the overlap of the propensity-score distributions for the treated and untreated.]
Abadie, A. (2005). Semiparametric difference-in-differences estimators. Review of Economic Studies, 72, 1–19.

Example: take the simple pre-vs-post difference ΔY for units with different propensity scores (D=1 & ps=0.9, D=1 & ps=0.1, D=0 & ps=0.1, D=0 & ps=0.9), with P(D)=0.5.
- Since we want the ATT, the treatment group is not weighted by the propensity score (only by 2, the inverse of P(D)=0.5).
- The untreated are weighted negatively. A unit with ps = 0.9 is homogeneous with the treated and gets the large weight -9 = -0.9/(1 - 0.9); a unit with ps = 0.1 is heterogeneous with the treated and gets the small weight -0.111 ≈ -0.1/(1 - 0.1).
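A small numeric sketch of this weighting logic (the helper function is mine, not from the paper): treated units are weighted only by the inverse of P(D), while untreated units get the negative weight -ps/(1 - ps), so untreated units that resemble the treated count heavily as the counterfactual.

```python
P_D = 0.5  # P(D = 1) in the example

def abadie_weight(d: int, ps: float) -> float:
    """Illustrative Abadie (2005)-style weight for one unit."""
    if d == 1:
        # Treated: no propensity-score weighting (we want the ATT),
        # only the inverse of P(D) = 0.5, i.e. 2.
        return 1.0 / P_D
    # Untreated: weighted negatively; the magnitude is large when the unit
    # is homogeneous with the treated (ps = 0.9 -> -9) and small when it
    # is heterogeneous (ps = 0.1 -> about -0.111).
    return -ps / (1.0 - ps)

print(abadie_weight(1, 0.9))   # 2.0
print(abadie_weight(0, 0.9))   # approximately -9
print(abadie_weight(0, 0.1))   # approximately -0.111
```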
Outcome model: a predictive model (supervised learning) with label = Diff (ΔY), trained on the control group only. Cross-fitting separates the samples used for "fitting" and "prediction", as in Chernozhukov et al. (2018).
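A minimal cross-fitting sketch on toy data (the model choice and all names are my own): the outcome model l(X) is fit on control-group units from the training folds only, and predictions are made on the held-out fold, so no unit's prediction comes from a model fit on itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))
d = rng.integers(0, 2, size=n)                  # toy treatment indicator
diff = X[:, 0] + d * 1.0 + rng.normal(size=n)   # toy Diff with effect 1

# Cross-fitting: fit l(X) = E[Diff | X, D = 0] on the training folds'
# control units only, then predict on the held-out fold.
l_hat = np.zeros(n)
for train_idx, test_idx in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    ctrl = train_idx[d[train_idx] == 0]         # control group only
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[ctrl], diff[ctrl])
    l_hat[test_idx] = model.predict(X[test_idx])
```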
As in Abadie (2005), the untreated are weighted by the propensity score. The propensity scores and P(D) are also estimated by cross-fitting. The estimand is the observable increase/decrease (Diff) minus the counterfactual increase/decrease (Diff). [Figure: if there had been no intervention (the counterfactual), the Diff would look like this.]
DMLDiD's score function is as follows, with the unknown constant p0 = P(D = 1) and the infinite-dimensional nuisance parameters g (the propensity score, g(X) = P(D = 1 | X)) and l (the outcome model, l(X) = E[ΔY | X, D = 0]):

ψ(W; θ, p0, g, l) = ((D − g(X)) / (p0 (1 − g(X)))) · (ΔY − l(X)) − θ
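A sketch of the estimation step, assuming the score ψ = ((D − g(X)) / (p0(1 − g(X)))) (ΔY − l(X)) − θ (my reading of Chang (2020); variable names are illustrative): the ATT estimate is the sample mean of the first term, and the score's sample variance gives a plug-in standard error.

```python
import numpy as np

def dmldid_att(diff, d, g_hat, l_hat):
    """Illustrative ATT from the orthogonal score
    psi_i = (d_i - g(x_i)) / (p0 * (1 - g(x_i))) * (diff_i - l(x_i)) - theta,
    solving (1/n) sum_i psi_i = 0 for theta."""
    p0 = d.mean()                                  # unknown constant P(D = 1)
    w = (d - g_hat) / (p0 * (1.0 - g_hat))
    psi = w * (diff - l_hat)
    theta = psi.mean()                             # ATT estimate
    se = psi.std(ddof=1) / np.sqrt(len(psi))       # plug-in standard error
    return theta, se
```

With the true nuisances plugged in, theta recovers the ATT on simulated data; in practice g_hat and l_hat would come from cross-fitted ML models.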
DMLDiD's score function obeys Neyman orthogonality: the score is invariant to small perturbations of the nuisance parameters g (propensity score) and l (outcome model). This yields a consistent estimator for the asymptotic variance, and DMLDiD can achieve root-N consistency.
The data-generating process in the original paper seems inappropriate for testing the accuracy of this model: the conditional parallel trend assumption is not well represented, so ordinary DiD would be sufficient.
Although DMLDiD's superiority over previous studies could be shown, the data-generating process still does not represent the bias well, so simple DiD would be sufficient.
The following process generates each variable. In the simulation, we assume that it is not possible to directly observe which latent group each unit belongs to.
- Y(0) = Y_2022 (period-0, pre-intervention outcome)
- Y(1) = Y_2023 (period-1, post-intervention outcome)
- X: x0–x99 (covariates)
- latent_group (unobservable)
`latent_group` is provided to make estimation difficult in PS-based models such as Abadie (2005). I tested whether DMLDiD can estimate the ATT without bias under such a DAG.

[DAG: X → D and X → ΔY. `latent_group` directly affects D but is unobservable. A PS-based approach blocks the backdoor path through X with the propensity score, but fails to close all backdoor paths: a back path through `latent_group` may still exist.]
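A hedged sketch of such a data-generating process (all names and coefficients are my own, not the slide's): `latent_group` shifts treatment uptake and the pre-period level but not the trend, so the conditional parallel trend in X still holds while a propensity score fit on X alone is misspecified.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Unobservable latent group: it may NOT be used in estimation, only X and D.
latent_group = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 100))                       # covariates x0..x99

# Treatment depends on observed x0 AND the unobservable latent group,
# so a propensity score fit on X alone is misspecified.
logit = 1.0 * X[:, 0] + 1.0 * latent_group - 0.5
d = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Conditional parallel trend in X: the trend depends on x0 but not on
# latent_group (latent_group only shifts the pre-period level).
att = 1.0
y_2022 = X[:, 0] + 0.5 * latent_group + rng.normal(size=n)
y_2023 = y_2022 + 0.5 + 0.5 * X[:, 0] + att * d + rng.normal(size=n)

diff = y_2023 - y_2022
# Naive diff-in-means is biased upward, because d correlates with x0,
# which also drives the trend.
naive = diff[d == 1].mean() - diff[d == 0].mean()
```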
data. I will try to reproduce it in the next issue. (There seem to be a few errors in the original demonstration.) Also, DMLDiD seems to be very versatile; I am currently developing a Python package.
[1] Chang, N.-C. (2020). Double/debiased machine learning for difference-in-differences models. The Econometrics Journal, 23(2), 177–191.
[2] Abadie, A. (2005). Semiparametric difference-in-differences estimators. Review of Economic Studies, 72, 1–19.
[3] Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins (2018). Double/debiased machine learning for treatment and structural parameters. Econometrics Journal, 21, C1–C68.
[4] (slides) 加藤真大 [Masahiro Kato] (2021). "Difference-in-Differences Estimation with DML" (in Japanese). https://speakerdeck.com/masakat0/dmlniyoruchai-fen-falsechai-tui-ding