Slide 1

Slide 1 text

Statistical Rethinking 5. Elemental Confounds 2023

Slide 2

Slide 2 text

[Scatterplot: happiness vs. log metal bands per million, with Finland labeled]

Slide 3

Slide 3 text

Does Waffle House cause divorce?
[Scatterplot: divorce rate vs. Waffle Houses per million, states labeled (AL, AR, FL, GA, KY, LA, ME, MS, NC, OK, SC, TN)]

Slide 4

Slide 4 text

Correlation is commonplace
http://www.tylervigen.com/spurious-correlations

Slide 5

Slide 5 text

No content

Slide 6

Slide 6 text

No content

Slide 7

Slide 7 text

No content

Slide 8

Slide 8 text

Association & Causation
Statistical recipes must defend against confounding
"Confounds": features of the sample, and how we use it, that mislead us
Confounds are diverse
[Two example DAGs over variables X, Y, Z, U, V]

Slide 9

Slide 9 text

Ye Olde Causal Alchemy: The Four Elemental Confounds
The Fork: X ← Z → Y
The Pipe: X → Z → Y
The Collider: X → Z ← Y
The Descendant: X → Z → Y, with Z → A
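These four structures can also be written down and queried mechanically. A minimal sketch using the dagitty package (also used alongside the rethinking book); the object names here are just labels:

library(dagitty)
fork       <- dagitty( "dag{ Z -> X ; Z -> Y }" )
pipe       <- dagitty( "dag{ X -> Z ; Z -> Y }" )
collider   <- dagitty( "dag{ X -> Z ; Y -> Z }" )
descendant <- dagitty( "dag{ X -> Z ; Z -> Y ; Z -> A }" )
impliedConditionalIndependencies( fork )   # X _||_ Y | Z

The same call on the collider returns X _||_ Y instead: X and Y are independent only without conditioning on Z.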

Slide 10

Slide 10 text

The Fork: X ← Z → Y
X and Y are associated: Y ⫫̸ X
They share a common cause Z
Once stratified by Z, no association: Y ⫫ X | Z
Z is a "common cause"

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

No content

Slide 13

Slide 13 text

The Fork: X ← Z → Y

n <- 1000
Z <- rbern( n , 0.5 )   # rbern() is from the rethinking package
X <- rbern( n , (1-Z)*0.1 + Z*0.9 )
Y <- rbern( n , (1-Z)*0.1 + Z*0.9 )

> cor(X,Y)
[1] 0.63

       Y=0  Y=1
X=0    397   84
X=1    100  419

Y ⫫̸ X

Slide 14

Slide 14 text

Same simulation as Slide 13, now stratified by Z:

> cor(X,Y)
[1] 0.63
> cor(X[Z==0],Y[Z==0])
[1] 0.003
> cor(X[Z==1],Y[Z==1])
[1] 0.024

Z = 0:
       Y=0  Y=1
X=0    390   43
X=1     44    5

Z = 1:
       Y=0  Y=1
X=0      7   41
X=1     56  414

Y ⫫̸ X
Y ⫫ X | Z

Slide 15

Slide 15 text

Continuous fork: X ← Z → Y

cols <- c(4,2)
N <- 300
Z <- rbern(N)
X <- rnorm(N,2*Z-1)
Y <- rnorm(N,2*Z-1)
plot( X , Y , col=cols[Z+1] , lwd=3 )
abline(lm(Y[Z==1]~X[Z==1]),col=2,lwd=3)
abline(lm(Y[Z==0]~X[Z==0]),col=4,lwd=3)
abline(lm(Y~X),lwd=3)

[Scatterplot: X vs. Y colored by Z (blue Z=0, red Z=1); the within-group regression lines are flat, but the overall regression line slopes upward]

Slide 16

Slide 16 text

Fork Example
Why do regions of the USA with higher rates of marriage also have higher rates of divorce?

library(rethinking)
data(WaffleDivorce)

M → D ?
[Scatterplots: divorce rate vs. marriage rate, states labeled, Southern states highlighted]

Slide 17

Slide 17 text

Marrying the Owl
(1) Estimand: Causal effect of marriage rate on divorce rate
(2) Scientific model
(3) Statistical model
(4) Analyze

M → D ?

Slide 18

Slide 18 text

A → D ? (A = age at marriage, D = divorce)
[Scatterplots: divorce rate vs. median age of marriage; divorce rate vs. marriage rate, Southern states highlighted]

Slide 19

Slide 19 text

M ← A → D, with M → D ? (A = age at marriage, M = marriage, D = divorce)
[Scatterplots: divorce rate vs. marriage rate; divorce rate vs. median age of marriage; marriage rate vs. median age of marriage]

Slide 20

Slide 20 text

Marrying the Owl
(1) Estimand: Causal effect of marriage rate on divorce rate
(2) Scientific model: M ← A → D, with M → D ?
(3) Statistical model
(4) Analyze

Slide 21

Slide 21 text

Fork: M ← A → D
To estimate the causal effect of M, need to break the fork
Break the fork by stratifying by A
[Scatterplots as on Slide 19]

Slide 22

Slide 22 text

What does it mean to stratify by a continuous variable?
It depends: how does A influence D? What is D = f(A,M)?
In a linear regression:
Dᵢ ∼ Normal(μᵢ, σ)
μᵢ = α + β_M Mᵢ + β_A Aᵢ

Slide 23

Slide 23 text

What does it mean to stratify by a continuous variable?
Every value of A produces a different relationship between D and M:
μᵢ = α + β_M Mᵢ + β_A Aᵢ
μᵢ = (α + β_A Aᵢ) + β_M Mᵢ, where (α + β_A Aᵢ) is the intercept
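A minimal sketch of this point; the coefficient values below are made-up for illustration, not estimates:

a <- 0 ; bM <- 0.2 ; bA <- -0.6   # made-up coefficients
M <- seq( -2 , 2 , len=50 )
plot( NULL , xlim=c(-2,2) , ylim=c(-2,2) , xlab="M" , ylab="mu" )
for ( Aval in c(-1,0,1) ) {
    mu <- ( a + bA*Aval ) + bM*M   # A shifts only the intercept
    lines( M , mu , lwd=2 )
}

Each value of A draws a parallel D-versus-M line with a different intercept.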

Slide 24

Slide 24 text

Statistical Fork
To stratify by A (age at marriage), include it as a term in the linear model:
Dᵢ ∼ Normal(μᵢ, σ)
μᵢ = α + β_M Mᵢ + β_A Aᵢ
α ∼ Normal(?, ?)
β_M ∼ Normal(?, ?)
β_A ∼ Normal(?, ?)
σ ∼ Exponential(?)
We are going to standardize the data

Slide 25

Slide 25 text

Standardizing the Owl
Often convenient to standardize variables in linear regression
Standardize: subtract the mean and divide by the standard deviation
Computation works better
Easy to choose sensible priors
[Scatterplot: divorce rate vs. median age of marriage, both standardized]
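For reference, standardizing is just computing z-scores. A sketch equivalent to what rethinking::standardize computes (the helper name zscore is mine):

zscore <- function(x) ( x - mean(x) ) / sd(x)
# zscore(d$MedianAgeMarriage) then has mean 0 and sd 1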

Slide 26

Slide 26 text

Prior predictive simulation, with some default priors:
Dᵢ ∼ Normal(μᵢ, σ)
μᵢ = α + β_M Mᵢ + β_A Aᵢ
α ∼ Normal(0, 10)
β_M ∼ Normal(0, 10)
β_A ∼ Normal(0, 10)
σ ∼ Exponential(1)

# prior predictive simulation
n <- 20
a <- rnorm(n,0,10)
bM <- rnorm(n,0,10)
bA <- rnorm(n,0,10)
plot( NULL , xlim=c(-2,2) , ylim=c(-2,2) ,
    xlab="Median age of marriage (standardized)" ,
    ylab="Divorce rate (standardized)" )
Aseq <- seq(from=-3,to=3,len=30)
for ( i in 1:n ) {
    mu <- a[i] + bA[i]*Aseq
    lines( Aseq , mu , lwd=2 , col=2 )
}

Slide 27

Slide 27 text

Same prior predictive simulation as Slide 26.
[Plot: 20 prior regression lines; with Normal(0,10) priors the lines are absurdly steep, sweeping far outside the plausible range of standardized variables]

Slide 28

Slide 28 text

Prior predictive simulation, with better priors:
Dᵢ ∼ Normal(μᵢ, σ)
μᵢ = α + β_M Mᵢ + β_A Aᵢ
α ∼ Normal(0, 0.2)
β_M ∼ Normal(0, 0.5)
β_A ∼ Normal(0, 0.5)
σ ∼ Exponential(1)

# better priors
n <- 20
a <- rnorm(n,0,0.2)
bM <- rnorm(n,0,0.5)
bA <- rnorm(n,0,0.5)
plot( NULL , xlim=c(-2,2) , ylim=c(-2,2) ,
    xlab="Median age of marriage (standardized)" ,
    ylab="Divorce rate (standardized)" )
Aseq <- seq(from=-3,to=3,len=30)
for ( i in 1:n ) {
    mu <- a[i] + bA[i]*Aseq
    lines( Aseq , mu , lwd=2 , col=2 )
}

Slide 29

Slide 29 text

Same code and priors as Slide 28.
[Plot: 20 prior regression lines under the tighter priors, now staying within the plausible range]

Slide 30

Slide 30 text

Marrying the Owl
(1) Estimand: Causal effect of marriage rate on divorce rate
(2) Scientific model: M ← A → D
(3) Statistical model: μᵢ = α + β_M Mᵢ + β_A Aᵢ
(4) Analyze

Slide 31

Slide 31 text

Analyze data:
Dᵢ ∼ Normal(μᵢ, σ)
μᵢ = α + β_M Mᵢ + β_A Aᵢ
α ∼ Normal(0, 0.2)
β_M ∼ Normal(0, 0.5)
β_A ∼ Normal(0, 0.5)
σ ∼ Exponential(1)

# model (d is the WaffleDivorce data frame: d <- WaffleDivorce)
dat <- list(
    D = standardize(d$Divorce),
    M = standardize(d$Marriage),
    A = standardize(d$MedianAgeMarriage) )
m_DMA <- quap(
    alist(
        D ~ dnorm(mu,sigma),
        mu <- a + bM*M + bA*A,
        a ~ dnorm(0,0.2),
        bM ~ dnorm(0,0.5),
        bA ~ dnorm(0,0.5),
        sigma ~ dexp(1)
    ) , data=dat )

Slide 32

Slide 32 text

Same model code as Slide 31.

plot(precis(m_DMA))
[Posterior summary plot of a, bM, bA, sigma: bM sits near zero, bA is clearly negative]

What's the causal effect of M?

Slide 33

Slide 33 text

Simulating interventions
A causal effect is a manipulation of the generative model, an intervention.
p(D|do(M)) means the distribution of D when we intervene on ("do") M.
This implies deleting all arrows into M and simulating D.
No intervention: M ← A → D, M → D
do(M): A → D, M → D (the arrow A → M is deleted)

Slide 34

Slide 34 text

do(M):

post <- extract.samples( m_DMA )

# sample A from data
n <- 1e3
As <- sample(dat$A,size=n,replace=TRUE)

# simulate D for M=0 (sample mean)
DM0 <- with( post , rnorm(n, a + bM*0 + bA*As , sigma ) )

# simulate D for M=1 (+1 standard deviation)
# use the *same* A values
DM1 <- with( post , rnorm(n, a + bM*1 + bA*As , sigma ) )

# contrast
M10_contrast <- DM1 - DM0
dens(M10_contrast,lwd=4,col=2,xlab="effect of 1sd increase in M")

Slide 35

Slide 35 text

(Same do(M) simulation code as Slide 34.)

Slide 36

Slide 36 text

(Same do(M) simulation code as Slide 34.)

Slide 37

Slide 37 text

(Same code as Slide 34.)
[Density plot of M10_contrast: the effect of a 1 sd increase in M on D, centered near zero and wide]
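To put numbers on the contrast instead of only plotting it, a small usage sketch (PI is the rethinking percentile-interval helper, 89% by default):

mean( M10_contrast )  # average effect of +1 sd of M on D
PI( M10_contrast )    # 89% percentile interval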

Slide 38

Slide 38 text

do(A): Causal effect of A?
How to estimate the causal effect of A, p(D|do(A))?
A has no incoming arrows, so there are no arrows to delete for the intervention
Fit a new model that ignores M, then simulate any intervention you like
Why does ignoring M work? Because A → M → D is a "pipe"

Slide 39

Slide 39 text

PAUSE

Slide 40

Slide 40 text

Ye Olde Causal Alchemy: The Four Elemental Confounds
The Fork: X ← Z → Y
The Pipe: X → Z → Y
The Collider: X → Z ← Y
The Descendant: X → Z → Y, with Z → A

Slide 41

Slide 41 text

The Pipe: X → Z → Y
X and Y are associated: Y ⫫̸ X
The influence of X on Y is transmitted through Z
Once stratified by Z, no association: Y ⫫ X | Z
Z is a "mediator"

Slide 42

Slide 42 text

No content

Slide 43

Slide 43 text

The Pipe: X → Z → Y

n <- 1000
X <- rbern( n , 0.5 )
Z <- rbern( n , (1-X)*0.1 + X*0.9 )
Y <- rbern( n , (1-Z)*0.1 + Z*0.9 )

> cor(X,Y)
[1] 0.64

       Y=0  Y=1
X=0    430   87
X=1     93  390

Y ⫫̸ X

Slide 44

Slide 44 text

Same simulation as Slide 43, now stratified by Z:

> cor(X[Z==0],Y[Z==0])
[1] 0.002
> cor(X[Z==1],Y[Z==1])
[1] 0.052

Z = 0:
       Y=0  Y=1
X=0    422   39
X=1     53    5

Z = 1:
       Y=0  Y=1
X=0      8   48
X=1     40  385

Y ⫫̸ X
Y ⫫ X | Z

Slide 45

Slide 45 text

Continuous pipe: X → Z → Y

cols <- c(4,2)
N <- 300
X <- rnorm(N)
Z <- rbern(N,inv_logit(X))   # inv_logit() is from the rethinking package
Y <- rnorm(N,(2*Z-1))
plot( X , Y , col=cols[Z+1] , lwd=3 )
abline(lm(Y[Z==1]~X[Z==1]),col=2,lwd=3)
abline(lm(Y[Z==0]~X[Z==0]),col=4,lwd=3)
abline(lm(Y~X),lwd=3)

[Scatterplot: X vs. Y colored by Z; the within-group regression lines are flat, but the overall regression line slopes upward]

Slide 46

Slide 46 text

Pipe example
Plant growth experiment:
100 plants
Half treated with anti-fungal
Measure growth and fungus
Estimand: Causal effect of treatment on plant growth

Slide 47

Slide 47 text

Scientific model: H0 → H1 ← F
(H0 = height at time 0, H1 = height at time 1, F = fungus)

Slide 48

Slide 48 text

Scientific model: H0 → H1, T → F → H1
(H0 = height at time 0, H1 = height at time 1, T = treatment, F = fungus)

Slide 49

Slide 49 text

Statistical model
Estimand: Total causal effect of T
The path T → F → H1 is a pipe
Should we stratify by F? NO, that would block the pipe
The treatment must flow
See pages 170–175 for the complete example
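A sketch of why stratifying by F masks the treatment, following the book's simulation on pages 170–175 (the coefficients below mirror that example; treat them as illustrative assumptions):

library(rethinking)
set.seed(71)
N <- 100
H0 <- rnorm( N , 10 , 2 )                   # height at time 0
treatment <- rep( 0:1 , each=N/2 )          # anti-fungal treatment T
fungus <- rbern( N , 0.5 - treatment*0.4 )  # treatment reduces fungus F
H1 <- H0 + rnorm( N , 5 - 3*fungus )        # fungus reduces growth
dat_plants <- list( H0=H0 , H1=H1 , treatment=treatment , fungus=fungus )

# total effect of T: do NOT stratify by F
m_total <- quap(
    alist(
        H1 ~ dnorm( mu , sigma ),
        mu <- H0*p,
        p <- a + bT*treatment,
        a ~ dlnorm( 0 , 0.2 ),
        bT ~ dnorm( 0 , 0.5 ),
        sigma ~ dexp( 1 )
    ) , data=dat_plants )

# stratify by F: blocks the pipe and masks the treatment
m_blocked <- quap(
    alist(
        H1 ~ dnorm( mu , sigma ),
        mu <- H0*p,
        p <- a + bT*treatment + bF*fungus,
        a ~ dlnorm( 0 , 0.2 ),
        bT ~ dnorm( 0 , 0.5 ),
        bF ~ dnorm( 0 , 0.5 ),
        sigma ~ dexp( 1 )
    ) , data=dat_plants )

precis(m_total)    # bT positive: treatment helps growth
precis(m_blocked)  # bT near zero: post-treatment bias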

Slide 50

Slide 50 text

Post-treatment bias
Stratifying by (conditioning on) F induces post-treatment bias
Misleads us into thinking the treatment doesn't work
Consequences of treatment should not usually be included in the estimator
Doing experiments is no protection against bad causal inference

From Montgomery et al. 2018, "How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It", Table 1 (posttreatment conditioning in experimental studies; n = 75 articles with experiments published 2012–14 in the American Political Science Review, the American Journal of Political Science, and the Journal of Politics):
Engages in posttreatment conditioning: 46.7%
  Controls for/interacts with a posttreatment variable: 21.3%
  Drops cases based on posttreatment criteria: 14.7%
  Both types of posttreatment conditioning present: 10.7%
No conditioning on posttreatment variables: 52.0%
Insufficient information to code: 1.3%

Slide 51

Slide 51 text

[DAG: treatment T, post-treatment covariate X, outcome Y, unobserved confound u]

Slide 52

Slide 52 text

Ye Olde Causal Alchemy: The Four Elemental Confounds
The Fork: X ← Z → Y
The Pipe: X → Z → Y
The Collider: X → Z ← Y
The Descendant: X → Z → Y, with Z → A

Slide 53

Slide 53 text

The Collider: X → Z ← Y
X and Y are not associated (share no causes): Y ⫫ X
X and Y both influence Z
Once stratified by Z, X and Y are associated: Y ⫫̸ X | Z
Z is a "collider"

Slide 54

Slide 54 text

No content

Slide 55

Slide 55 text

The Collider: X → Z ← Y

n <- 1000
X <- rbern( n , 0.5 )
Y <- rbern( n , 0.5 )
Z <- rbern( n , ifelse(X+Y>0,0.9,0.2) )

> cor(X,Y)
[1] 0.027

       Y=0  Y=1
X=0    243  236
X=1    250  271

Y ⫫ X

Slide 56

Slide 56 text

Same simulation as Slide 55, now stratified by Z:

> cor(X[Z==0],Y[Z==0])
[1] 0.43
> cor(X[Z==1],Y[Z==1])
[1] -0.31

Z = 0:
       Y=0  Y=1
X=0    200   19
X=1     32   29

Z = 1:
       Y=0  Y=1
X=0     43  217
X=1    218  242

Y ⫫ X
Y ⫫̸ X | Z

Slide 57

Slide 57 text

Continuous collider: X → Z ← Y

cols <- c(4,2)
N <- 300
X <- rnorm(N)
Y <- rnorm(N)
Z <- rbern(N,inv_logit(2*X+2*Y-2))
plot( X , Y , col=cols[Z+1] , lwd=3 )
abline(lm(Y[Z==1]~X[Z==1]),col=2,lwd=3)
abline(lm(Y[Z==0]~X[Z==0]),col=4,lwd=3)
abline(lm(Y~X),lwd=3)

[Scatterplot: X vs. Y colored by Z; the overall regression line is flat, but the within-group lines slope downward]

Slide 58

Slide 58 text

Collider example
Some biases arise from selection
Suppose: 200 grant applications, each scored on newsworthiness and trustworthiness
No association in the population
Strong association after selection
[Scatterplot: trustworthiness vs. newsworthiness]

Slide 59

Slide 59 text

(Same as Slide 58.)

Slide 60

Slide 60 text

Collider example
Awarded grants must have been sufficiently newsworthy or trustworthy
Few grants are high in both
Results in a negative association, conditioning on the award
N → A ← T (N = newsworthiness, T = trustworthiness, A = awarded)
[Scatterplot: trustworthiness vs. newsworthiness, awarded grants highlighted with a negative trend]
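A minimal simulation of this selection effect, along the lines of the book's grant example (the 10% award rate follows the book; variable names are mine):

set.seed(1914)
N <- 200                  # grant applications
nw <- rnorm(N)            # newsworthiness
tw <- rnorm(N)            # trustworthiness, independent of nw
s <- nw + tw              # total score
q <- quantile( s , 0.9 )  # top 10% are awarded
awarded <- s >= q
cor( nw , tw )                    # near zero in the full pool
cor( nw[awarded] , tw[awarded] )  # strongly negative among awardees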

Slide 61

Slide 61 text

Collider example
Similar examples:
Restaurants survive by having good food or a good location ⇒ bad food in good locations
Actors can succeed by being attractive or by being skilled ⇒ attractive actors are less skilled
N → A ← T
[Scatterplot as on Slide 60]

Slide 62

Slide 62 text

Endogenous Colliders
Collider bias can arise through statistical processing
Endogenous selection: conditioning on (stratifying by) a collider creates phantom non-causal associations
Example: Does age influence happiness?

Slide 63

Slide 63 text

Age and Happiness
Estimand: Influence of age (A) on happiness (H)
Possible confound: marital status (M)
Suppose age has zero influence on happiness,
but both age and happiness influence marital status: A → M ← H

Slide 64

Slide 64 text

[Simulation figure from page 177: married and unmarried individuals marked]

Slide 65

Slide 65 text

Stratified by marital status, a negative association appears between age and happiness
Full workflow starting on page 176
[Scatterplot: happiness (standardized) vs. age (years), married and unmarried individuals marked]
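A sketch of that workflow; sim_happiness is the rethinking helper used in the book, and the model mirrors the one on page 176 (the index a[mid] gives each marital status its own intercept):

library(rethinking)
d <- sim_happiness( seed=1977 , N_years=1000 )
d2 <- d[ d$age>17 , ]                  # adults only
d2$A <- ( d2$age - 18 ) / ( 65 - 18 )  # rescale age to 0..1
d2$mid <- d2$married + 1               # marital status index (1/2)

m_AH <- quap(
    alist(
        happiness ~ dnorm( mu , sigma ),
        mu <- a[mid] + bA*A,
        a[mid] ~ dnorm( 0 , 1 ),
        bA ~ dnorm( 0 , 2 ),
        sigma ~ dexp( 1 )
    ) , data=d2 )
precis( m_AH , depth=2 )  # bA comes out negative, though the true effect is zero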

Slide 66

Slide 66 text

Ye Olde Causal Alchemy: The Four Elemental Confounds
The Fork: X ← Z → Y
The Pipe: X → Z → Y
The Collider: X → Z ← Y
The Descendant: X → Z → Y, with Z → A

Slide 67

Slide 67 text

The Descendant: X → Z → Y, with Z → A
How a descendant behaves depends upon what it is attached to
A is a "descendant"

Slide 68

Slide 68 text

The Descendant: X → Z → Y, with Z → A
X and Y are causally associated through Z: Y ⫫̸ X
A holds information about Z
Once stratified by A, X and Y are less associated: Y ⫫ X | A, if the A–Z association is strong enough
A is a "descendant"

Slide 69

Slide 69 text

The Descendant:

n <- 1000
X <- rbern( n , 0.5 )
Z <- rbern( n , (1-X)*0.1 + X*0.9 )
Y <- rbern( n , (1-Z)*0.1 + Z*0.9 )
A <- rbern( n , (1-Z)*0.1 + Z*0.9 )

> cor(X,Y)
[1] 0.61

       Y=0  Y=1
X=0    418   97
X=1     98  387

Y ⫫̸ X

Slide 70

Slide 70 text

Same simulation as Slide 69, now stratified by the descendant A:

> cor(X[A==0],Y[A==0])
[1] 0.26
> cor(X[A==1],Y[A==1])
[1] 0.29

A = 0:
       Y=0  Y=1
X=0    387   54
X=1     50   32

A = 1:
       Y=0  Y=1
X=0     31   43
X=1     48  355

Y ⫫̸ X
Y ⫫ X | A, if the A–Z association is strong enough; here A is only a noisy proxy for Z, so the correlation shrinks but does not vanish

Slide 71

Slide 71 text

Descendants are everywhere
Many measurements are proxies of what we want to measure
Factor analysis
Measurement error
Social networks
[DAG over X, Y, A, B and an unobserved confound U]

Slide 72

Slide 72 text

Unobserved Confounds
Unmeasured causes (U) exist and can ruin your day
Estimand: Direct effect of grandparents G on grandchildren C
Need to block the pipe G → P → C
What happens when we condition on P?
[DAG: G → P → C, G → C, U → P, U → C]
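A sketch of the generative model behind this DAG, following the book's version of the example; note that the direct effect b_GC is set to zero, so any estimated effect of G after conditioning on P is pure confounding:

library(rethinking)   # for rbern
set.seed(1)
N <- 200
b_GP <- 1   # G -> P
b_GC <- 0   # G -> C direct effect: zero!
b_PC <- 1   # P -> C
b_U  <- 2   # U -> P and U -> C
U <- 2*rbern( N , 0.5 ) - 1     # unobserved, e.g. neighborhood
G <- rnorm( N )
P <- rnorm( N , b_GP*G + b_U*U )
C <- rnorm( N , b_PC*P + b_GC*G + b_U*U )

# conditioning on P (a collider of G and U) biases the G coefficient
summary( lm( C ~ P + G ) )   # G looks negative despite b_GC = 0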

Slide 73

Slide 73 text

Course Schedule
Week 1: Bayesian inference (Chapters 1, 2, 3)
Week 2: Linear models & Causal Inference (Chapter 4)
Week 3: Causes, Confounds & Colliders (Chapters 5 & 6)
Week 4: Overfitting / MCMC (Chapters 7, 8, 9)
Week 5: Generalized Linear Models (Chapters 10, 11)
Week 6: Integers & Other Monsters (Chapters 11 & 12)
Week 7: Multilevel models I (Chapter 13)
Week 8: Multilevel models II (Chapter 14)
Week 9: Measurement & Missingness (Chapter 15)
Week 10: Generalized Linear Madness (Chapter 16)
https://github.com/rmcelreath/stat_rethinking_2023

Slide 74

Slide 74 text

No content