Richard McElreath
March 02, 2022

# Statistical Rethinking 2022 Lecture 18


## Transcript


4. ### Missing Data is Mundane

Most data are missing most of the time. “Missing” data: some cases unobserved. Dropping cases with missing values is sometimes justifiable. The right thing to do depends upon causal assumptions. [DAG with nodes Y, X, u, X*]
5. ### Missing Data is Mundane

The right thing to do depends upon causal assumptions: (1) Missing data in DAGs, (2) Bayesian imputation with structural causal models, (3) Censoring and imputation.

10. ### “Dog eats random homework”

[DAG: S, H, H*, D] No biasing paths connecting H to S.
11. ### Dog eats random homework (aka missing completely at random)

```r
# Dog eats random homework
# aka missing completely at random
N <- 100
S <- rnorm(N)
H <- rnorm(N, 0.5*S)
# dog eats 50% of homework at random
D <- rbern(N, 0.5)
Hstar <- H
Hstar[D==1] <- NA
plot( S , H , col=grau(0.8) , lwd=2 )
points( S , Hstar , col=2 , lwd=3 )
```

[Plot: H vs S, total and incomplete cases.] Dog usually benign.
12. ### Dog eats random homework (continued)

Same simulation and plot as slide 11. [Plot: H vs S, total and incomplete cases.] Dog usually benign: loss of precision, usually no bias.
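To see numerically that random eating costs precision but not accuracy, here is a quick check in Python (a sketch mirroring the R simulation above, not from the lecture, with a larger N so the slopes are stable):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
S = rng.normal(size=N)
H = rng.normal(loc=0.5 * S)      # true effect of S on H is 0.5
D = rng.random(N) < 0.5          # dog eats 50% of homework at random
keep = ~D                        # complete cases

def ols_slope(x, y):
    # least-squares slope of y on x
    return np.polyfit(x, y, 1)[0]

slope_total = ols_slope(S, H)            # all homework
slope_cc = ols_slope(S[keep], H[keep])   # surviving homework only
# both land near 0.5; the complete-case estimate is just noisier
```

With missingness independent of everything, dropping cases only shrinks the sample.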

15. ### “Dog eats conditional on cause of homework”

[DAG: S, H, H*, D] Maybe biasing path.
16. ### Dog eats conditional on cause (aka missing at random)

```r
# Dog eats homework of students who study too much
# aka missing at random
N <- 100
S <- rnorm(N)
H <- rnorm(N, 0.5*S)
# dog eats 80% of homework where S>0
D <- rbern(N, ifelse(S>0,0.8,0) )
Hstar <- H
Hstar[D==1] <- NA
plot( S , H , col=grau(0.8) , lwd=2 )
points( S , Hstar , col=2 , lwd=3 )
```

[Plot: H vs S, total and incomplete cases.] Dog path can be benign.
17. ### Dog eats conditional on cause, but now nonlinear with ceiling effect

```r
# Dog eats homework of students who study too much
# BUT NOW NONLINEAR WITH CEILING EFFECT
N <- 100
S <- rnorm(N)
H <- rnorm(N, (1-exp(-0.7*S)) )
# dog eats 100% of homework where S>0
D <- rbern(N, ifelse(S>0,1,0) )
Hstar <- H
Hstar[D==1] <- NA
plot( S , H , col=grau(0.8) , lwd=2 )
points( S , Hstar , col=2 , lwd=3 )
```

[Plot: H vs S, total and incomplete cases.] Non-linear relationships and poor modeling: less benign.

20. ### “Dog eats conditional on homework itself”

[DAG: S, H, H*, D] Biasing path.
21. ### Dog eats bad homework (aka missing not at random)

```r
# Dog eats bad homework
# aka missing not at random
N <- 100
S <- rnorm(N)
H <- rnorm(N, 0.5*S)
# dog eats 90% of homework where H<0
D <- rbern(N, ifelse(H<0,0.9,0) )
Hstar <- H
Hstar[D==1] <- NA
plot( S , H , col=grau(0.8) , lwd=2 )
points( S , Hstar , col=2 , lwd=3 )
```

[Plot: H vs S, total and incomplete cases.] Usually not benign.
22. ### (1) Dog eats random homework: dropping incomplete cases okay, but loss of efficiency

(2) Dog eats conditional on cause: correctly condition on the cause.
(3) Dog eats homework itself: usually hopeless, unless we can model the dog (e.g. survival analysis).
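The third case can be demonstrated with a small simulation (a Python sketch, not from the lecture): when the dog eats conditional on the homework itself, the complete-case slope of H on S is biased toward zero.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
S = rng.normal(size=N)
H = rng.normal(loc=0.5 * S)                  # true effect is 0.5
eaten = (H < 0) & (rng.random(N) < 0.9)      # dog eats 90% of homework where H < 0
keep = ~eaten

def ols_slope(x, y):
    # least-squares slope of y on x
    return np.polyfit(x, y, 1)[0]

slope_total = ols_slope(S, H)                # near the true 0.5
slope_cc = ols_slope(S[keep], H[keep])       # attenuated: selection on the outcome
```

Selecting on the outcome distorts the conditional distribution of H given S, so no amount of extra data fixes the complete-case estimate.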
23. ### Bayesian Imputation

(1) Dog eats random homework and (2) dog eats conditional on cause both imply the need to impute or marginalize over missing values. Bayesian imputation: compute the posterior probability distribution of the missing values. Marginalizing unknowns: average over the distribution of the missing values without computing a posterior distribution for them.
24. ### Bayesian Imputation

A causal model of all the variables implies a strategy for imputation. Technical obstacles exist! Sometimes imputation is unnecessary (e.g. discrete parameters); sometimes imputation is easier (e.g. censored observations).
25. ### Discrete cats

Imagine a neighborhood in which every house contains a songbird. Suppose we survey the neighborhood and sample one minute of song from each house, recording the number of notes. You notice that some houses also have house cats, and wonder if the presence of a cat changes the amount that each bird sings. So you try to also figure out which houses have cats. You can do this easily in some cases, either by seeing the cat or by asking a human resident. But in about … of houses, you can’t determine whether or not a cat lives there. This very silly example sets up a very practical working example of how to cope with discrete missing data. We will translate this story into a generative model, simulate data from it, and then build a statistical model that copes with the missing values.

Let’s consider the story above first as a DAG. [DAG with nodes C, C*, N, R_C] The presence/absence of a cat C influences the number of sung notes N. Because of missing values R_C, however, we only observe C*. To make this into a fully generative model, we must now pick functions for each arrow above. Here are my choices, in statistical notation:

N_i ∼ Poisson(λ_i)  [Probability of notes sung]
log λ_i = α + β C_i  [Rate of notes as function of cat]
C_i ∼ Bernoulli(k)  [Probability cat is present]
R_{C,i} ∼ Bernoulli(r)  [Probability of not knowing C_i]

And then to actually simulate some demonstration data, we’ll have to pick values for α, β, k, and r. Here’s a working simulation in R code:

```r
set.seed(9)
N_houses <- 100L
alpha <- 5
beta <- (-3)
k <- 0.5
r <- 0.2
```

Complete marginalization example in book, page 517.
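Marginalizing over an unknown cat can be done directly: when C_i is missing, the likelihood of a note count n is the mixture k·Pois(n | e^{α+β}) + (1−k)·Pois(n | e^{α}). Here is a Python sketch of the generative simulation and that mixture (an illustration, not the book’s code):

```python
import math
import numpy as np

rng = np.random.default_rng(9)
N_houses, alpha, beta, k, r = 100, 5.0, -3.0, 0.5, 0.2

cat = rng.random(N_houses) < k          # C_i ~ Bernoulli(k)
lam = np.exp(alpha + beta * cat)        # log(lambda_i) = alpha + beta*C_i
notes = rng.poisson(lam)                # N_i ~ Poisson(lambda_i)
unknown = rng.random(N_houses) < r      # R_C,i ~ Bernoulli(r): cat status missing

def pois_pmf(n, lam):
    # Poisson pmf computed in log space to avoid overflow at large n
    return math.exp(n * math.log(lam) - lam - math.lgamma(n + 1))

def marginal_pmf(n):
    # likelihood of n notes with the cat status averaged out
    return (k * pois_pmf(n, math.exp(alpha + beta))
            + (1 - k) * pois_pmf(n, math.exp(alpha)))
```

The mixture sums to one over the note counts, which is exactly what lets a sampler work without a discrete cat parameter.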
26. ### Phylogenetic regression

data(Primates301): life history traits (mass in g, brain volume in cc, group size). Much missing data, measurement error, unobserved confounding. Street et al. 2017, “Coevolution of cultural intelligence, extended life history, sociality, and brain size in primates.” [Figure: primate phylogeny with tip labels for all species in the sample, Allenopithecus nigroviridis through Tarsius syrichta.]

29. ### Imputing Primates

Key idea: missing values already have probability distributions. Express a causal model for each partially-observed variable. Replace each missing value with a parameter, and let the model do the rest. Doesn’t sound simple, and it isn’t.

31. ### Group size (missing values)

[DAG: B, M, G, u, h, G*, m_G] What influences missingness?

32. ### Species close to humans better studied

[DAG: B, M, G, u, h, G*, m_G]

35. ### Whatever the assumption…

Our goal is to use the causal model to infer the probability distribution of each missing value. Uncertainty in each missing value cascades through the entire model. [DAG: B, M, G, u, h, G*, m_G]

38. ### B ∼ MVNormal(μ, K)

μ_i = α + β_G G_i + β_M M_i
α ∼ Normal(0, 1)
β_G, β_M ∼ Normal(0, 0.5)
K = η² exp(−ρ d_{i,j})
η² ∼ HalfNormal(1, 0.25)
ρ ∼ HalfNormal(3, 0.25)

[DAG: B, M, G, u, h, G*, m_G, B*, m_B, M*, m_M]
39. ### B ∼ MVNormal(μ, K)

Same brain-size model as slide 38. [DAG: B, M, G, u, h]
40. ### B ∼ MVNormal(μ, K)

Brain-size model as before, plus a submodel for group size:

G ∼ MVNormal(ν, K_G)
ν_i = α_G + β_MG M_i
α_G ∼ Normal(0, 1)
β_MG ∼ Normal(0, 0.5)
K_G = η²_G exp(−ρ_G d_{i,j})
η²_G ∼ HalfNormal(1, 0.25)
ρ_G ∼ HalfNormal(3, 0.25)

[DAGs: B, M, G, u, h; M, G, u, h]
41. ### B ∼ MVNormal(μ, K)

Brain-size and group-size models as before, plus a submodel for body mass:

M ∼ MVNormal(0, K_M)
K_M = η²_M exp(−ρ_M d_{i,j})
η²_M ∼ HalfNormal(1, 0.25)
ρ_M ∼ HalfNormal(3, 0.25)

[DAGs: B, M, G, u, h; M, G, u, h; M, u, h]
42. ### Draw the Missing Owl

Let’s take it slow…
(1) Ignore cases with missing B values (for now)
(2) Impute G and M ignoring models for each
(3) Impute G using model
(4) Impute B, G, M using model
43. ### Draw the Missing Owl

Let’s take it slow…
(1) Ignore cases with missing B values (for now)
(2) Impute G and M ignoring models for each
(3) Impute G using model
(4) Impute B, G, M using model

```r
> dd <- d[complete.cases(d$brain),]
> table( M=!is.na(dd$body) , G=!is.na(dd$group_size) )
       G
M       FALSE TRUE
  FALSE     2    0
  TRUE     31  151
```
44. ### (2) Impute G and M ignoring models for each

Full model (brain-size, group-size, and body-mass submodels) as on slide 41, for reference. [DAGs: B, M, G, u, h; M, G, u, h; M, u, h]
45. ### (2) Impute G and M ignoring models for each

Brain-size model as before, but replace the submodels with simple standardized priors:

M_i ∼ Normal(0, 1)
G_i ∼ Normal(0, 1)
46. ### (2) Impute G and M ignoring models for each

M_i ∼ Normal(0, 1)
G_i ∼ Normal(0, 1)

When G_i is observed: likelihood for the standardized variable. When G_i is missing: prior.
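In code terms, the same line of model code plays both roles. A minimal Python sketch of the idea (a hypothetical helper, not rethinking’s internals): observed values contribute likelihood terms, while missing slots contribute the same Normal(0, 1) density evaluated at free parameters, i.e. a prior.

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    # log density of Normal(mu, sigma)
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def logprob_G(G, G_impute):
    # G: observed values, with None where missing;
    # G_impute: one free parameter per missing slot
    lp, j = 0.0, 0
    for g in G:
        if g is None:
            lp += normal_logpdf(G_impute[j])  # prior for the imputed value
            j += 1
        else:
            lp += normal_logpdf(g)            # likelihood for the observed value
    return lp
```

A sampler then explores the `G_impute` parameters exactly like any other unknowns.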
47. ### (2) Impute G and M ignoring models for each

```r
# imputation ignoring models of M and G
fMBG_OU <- alist(
    B ~ multi_normal( mu , K ),
    mu <- a + bM*M + bG*G,
    matrix[N_spp,N_spp]:K <- cov_GPL1(Dmat,etasq,rho,0.01),
    M ~ normal(0,1),
    G ~ normal(0,1),
    a ~ normal( 0 , 1 ),
    c(bM,bG) ~ normal( 0 , 0.5 ),
    etasq ~ half_normal(1,0.25),
    rho ~ half_normal(3,0.25)
)
mBMG_OU <- ulam( fMBG_OU , data=dat_all , chains=4 , cores=4 )
```
48. ### (2) Impute G and M ignoring models for each

Model and code as on slide 47; posterior summary:

```r
> precis( mBMG_OU , 2 )
             mean   sd  5.5% 94.5% n_eff Rhat4
a           -0.10 0.08 -0.22  0.02  4569     1
bG           0.01 0.02 -0.01  0.04  2816     1
bM           0.82 0.03  0.78  0.86  3949     1
etasq        0.04 0.01  0.03  0.05  3733     1
rho          2.75 0.26  2.36  3.17  3931     1
M_impute[1] -1.77 0.18 -2.04 -1.48  5087     1
M_impute[2] -0.69 0.15 -0.94 -0.45  5065     1
G_impute[1]  0.01 1.01 -1.61  1.63  6311     1
G_impute[2]  0.05 1.02 -1.55  1.67  5616     1
G_impute[3]  0.03 1.04 -1.64  1.71  4876     1
G_impute[4]  0.05 0.95 -1.45  1.56  4978     1
G_impute[5]  0.09 1.03 -1.53  1.69  5244     1
G_impute[6]  0.18 1.02 -1.48  1.79  5414     1
G_impute[7]  0.06 1.00 -1.57  1.65  6207     1
G_impute[8]  0.13 1.00 -1.47  1.72  6329     1
G_impute[9]  0.01 0.98 -1.62  1.62  5942     1
G_impute[10] 0.03 1.00 -1.56  1.67  5197     1
G_impute[11] 0.06 0.95 -1.47  1.60  5057     1
G_impute[12] 0.08 0.97 -1.49  1.63  4907     1
G_impute[13] -0.03 1.01 -1.65 1.58  5958     1
G_impute[14] 0.04 0.99 -1.52  1.65  5538     1
G_impute[15] -0.18 1.00 -1.77 1.37  5705     1
G_impute[16] -0.09 0.97 -1.63 1.45  4026     1
G_impute[17] -0.04 1.02 -1.74 1.58  6465     1
G_impute[18] -0.09 1.03 -1.73 1.54  5632     1
G_impute[19] -0.19 1.01 -1.78 1.45  4251     1
G_impute[20] -0.01 1.02 -1.62 1.65  6602     1
G_impute[21] -0.07 0.95 -1.56 1.47  5640     1
G_impute[22] -0.04 0.99 -1.60 1.55  6602     1
G_impute[23] 0.14 1.03 -1.45  1.82  5644     1
G_impute[24] -0.07 0.99 -1.65 1.46  4249     1
G_impute[25] -0.06 1.04 -1.72 1.62  5993     1
G_impute[26] 0.19 1.01 -1.40  1.78  4973     1
G_impute[27] 0.08 1.04 -1.54  1.70  5765     1
G_impute[28] -0.07 0.99 -1.66 1.52  3609     1
G_impute[29] 0.04 0.99 -1.55  1.63  4542     1
G_impute[30] -0.01 0.99 -1.61 1.56  4821     1
G_impute[31] 0.01 1.02 -1.65  1.66  6602     1
G_impute[32] 0.08 1.01 -1.51  1.66  5420     1
G_impute[33] -0.02 0.99 -1.58 1.60  4241     1
```
49. ### (2) Impute G and M ignoring models for each

[Plot: brain volume (standardized) vs body mass (standardized), observed and imputed points.] Because M is strongly associated with B, the imputed M values follow the regression relationship.
50. ### (2) Impute G and M ignoring models for each

[Plot: group size (standardized) vs body mass (standardized), observed and imputed points.] Because the association between M and G is not modeled, the imputed G values do not follow the regression relationship.
51. ### (2) Impute G and M ignoring models for each

[Plots: group size (standardized) vs body mass (standardized), observed and imputed points; posterior density of the effect of G on B, complete cases vs imputation.]
52. ### (3) Impute G using model

Brain-size model and the M → G group-size submodel as on slide 40, now with M_i ∼ Normal(0, 1) for body mass. [DAGs: B, M, G, u, h; M, G, u, h]
53. ### (3) Impute G using model

Just the M → G model: G ∼ MVNormal(ν, Iσ²), ν_i = α_G + β_MG M_i

```r
# no phylogeny on G but have submodel M -> G
mBMG_OU_G <- ulam(
    alist(
        B ~ multi_normal( mu , K ),
        mu <- a + bM*M + bG*G,
        G ~ normal(nu,sigma),
        nu <- aG + bMG*M,
        M ~ normal(0,1),
        matrix[N_spp,N_spp]:K <- cov_GPL1(Dmat,etasq,rho,0.01),
        c(a,aG) ~ normal( 0 , 1 ),
        c(bM,bG,bMG) ~ normal( 0 , 0.5 ),
        c(etasq) ~ half_normal(1,0.25),
        c(rho) ~ half_normal(3,0.25),
        sigma ~ exponential(1)
    ), data=dat_all , chains=4 , cores=4 , sample=TRUE )
```

Just phylogeny: G ∼ MVNormal(0, K_G), K_G = η²_G exp(−ρ_G d_{i,j})

```r
# phylogeny information for G imputation (but no M -> G model)
mBMG_OU2 <- ulam(
    alist(
        B ~ multi_normal( mu , K ),
        mu <- a + bM*M + bG*G,
        M ~ normal(0,1),
        G ~ multi_normal( 'rep_vector(0,N_spp)' , KG ),
        matrix[N_spp,N_spp]:K <- cov_GPL1(Dmat,etasq,rho,0.01),
        matrix[N_spp,N_spp]:KG <- cov_GPL1(Dmat,etasqG,rhoG,0.01),
        a ~ normal( 0 , 1 ),
        c(bM,bG) ~ normal( 0 , 0.5 ),
        c(etasq,etasqG) ~ half_normal(1,0.25),
        c(rho,rhoG) ~ half_normal(3,0.25)
    ), data=dat_all , chains=4 , cores=4 , sample=TRUE )
```
56. ### (3) Impute G using model

[Plot: group size (standardized) vs body mass (standardized), observed and imputed points.]
57. ### (3) Impute G using model

[Plots: group size (standardized) vs body mass (standardized), observed and imputed points; observed points alone for comparison.]
58. ### (3) Impute G using model

[Plot: group size (standardized) vs body mass (standardized), observed points and imputed points from the M → G model only.]
59. ### (3) Impute G using model

[Plots: group size (standardized) vs body mass (standardized), observed points with imputations from the M → G model only and from phylogeny only.]
60. ### (3) Impute G using model

[Plots: group size (standardized) vs body mass (standardized), imputations from the M → G model only vs phylogeny only; G covariance as a function of phylogenetic distance, the O-U Gaussian process kernel.]
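The O-U (L1) kernel behind these covariance curves has the form K_ij = η² exp(−ρ d_ij). Here is a small Python version (my sketch of what `cov_GPL1` computes, with a small diagonal nugget for numerical stability; the exact internals of rethinking’s function are an assumption here):

```python
import numpy as np

def cov_gpl1(dmat, etasq, rho, delta=0.01):
    # K[i,j] = etasq * exp(-rho * d_ij); add a small nugget on the diagonal
    K = etasq * np.exp(-rho * np.asarray(dmat, dtype=float))
    K[np.diag_indices_from(K)] += delta
    return K
```

Covariance decays exponentially with phylogenetic distance: closely related species get strongly pooled imputations, distant ones barely at all.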
61. ### (3) Impute G using model

[Plot: group size (standardized) vs body mass (standardized), imputations using phylogeny + M → G together.]
62. ### (3) Impute G using model

[Plots: group size (standardized) vs body mass (standardized) for phylogeny + M → G; posterior density of the effect of G on B (bG), comparing observed only, M → G only, phylogeny only, and phylogeny + M → G.]
63. ### Draw the Missing Owl

Let’s take it slow…
(1) Ignore cases with missing B values (for now)
(2) Impute G and M ignoring models for each
(3) Impute G using model
(4) Impute B, G, M using model
64. ### (4) Impute B, G, M using model

B ∼ MVNormal(μ, K)
μ_i = α + β_G G_i + β_M M_i
α ∼ Normal(0, 1)
β_G, β_M ∼ Normal(0, 0.5)
K = η² exp(−ρ d_{i,j})
η² ∼ HalfNormal(1, 0.25)
ρ ∼ HalfNormal(3, 0.25)

G ∼ MVNormal(ν, K_G)
ν_i = α_G + β_MG M_i
α_G ∼ Normal(0, 1)
β_MG ∼ Normal(0, 0.5)
K_G = η²_G exp(−ρ_G d_{i,j})
η²_G ∼ HalfNormal(1, 0.25)
ρ_G ∼ HalfNormal(3, 0.25)

M ∼ MVNormal(0, K_M)
K_M = η²_M exp(−ρ_M d_{i,j})
η²_M ∼ HalfNormal(1, 0.25)
ρ_M ∼ HalfNormal(3, 0.25)

[DAGs: B, M, G, u, h; M, G, u, h; M, u, h]
65. ### (4) Impute B, G, M using model

Full model as on slide 64, in Stan:

```stan
data{
    int N_spp;
    vector[N_spp] B;
    int N_B_miss;
    int B_missidx[N_B_miss];
    vector[N_spp] M;
    int N_M_miss;
    int M_missidx[N_M_miss];
    vector[N_spp] G;
    int N_G_miss;
    int G_missidx[N_G_miss];
    matrix[N_spp,N_spp] Dmat;
}
parameters{
    real a;
    real aG;
    real bG;
    real bM;
    real bMG;
    real<lower=0> etasq;
    real<lower=0> rho;
    real<lower=0> etasqG;
    real<lower=0> rhoG;
    real<lower=0> etasqM;
    real<lower=0> rhoM;
    vector[N_M_miss] M_impute;
    vector[N_G_miss] G_impute;
    vector[N_B_miss] B_impute;
}
model{
    vector[N_spp] mu;
    vector[N_spp] nu;
    vector[N_spp] M_merge;
    vector[N_spp] G_merge;
    vector[N_spp] B_merge;
    matrix[N_spp,N_spp] K;
    matrix[N_spp,N_spp] KG;
    matrix[N_spp,N_spp] KM;
    rho ~ normal( 3 , 0.25 );
    etasq ~ normal( 1 , 0.25 );
    rhoG ~ normal( 3 , 0.25 );
    etasqG ~ normal( 1 , 0.25 );
    rhoM ~ normal( 3 , 0.25 );
    etasqM ~ normal( 1 , 0.25 );
    K = cov_GPL1(Dmat, etasq, rho, 0.01);
    KG = cov_GPL1(Dmat, etasqG, rhoG, 0.01);
    KM = cov_GPL1(Dmat, etasqM, rhoM, 0.01);
    bM ~ normal( 0 , 0.5 );
    bG ~ normal( 0 , 0.5 );
    bMG ~ normal( 0 , 0.5 );
    a ~ normal( 0 , 1 );
    aG ~ normal( 0 , 1 );
    G_merge = merge_missing(G_missidx, to_vector(G), G_impute);
    M_merge = merge_missing(M_missidx, to_vector(M), M_impute);
    B_merge = merge_missing(B_missidx, to_vector(B), B_impute);
    for ( i in 1:N_spp ) {
        mu[i] = a + bM * M_merge[i] + bG * G_merge[i];
        nu[i] = aG + bMG * M_merge[i];
    }
    M_merge ~ multi_normal( rep_vector(0,N_spp) , KM );
    G_merge ~ multi_normal( nu , KG );
    B_merge ~ multi_normal( mu , K );
}
```
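`merge_missing` just splices the imputation parameters into the observed vector, so each merged vector can receive one multivariate-normal distribution statement. A Python analog of that behavior (a sketch, using Stan’s 1-based indexing as in the code above):

```python
import numpy as np

def merge_missing(miss_idx, x_obs, x_impute):
    # copy the observed vector, then overwrite each missing slot
    # (1-based indices, as in the Stan code) with its imputation parameter
    merged = np.array(x_obs, dtype=float)
    for j, i in enumerate(miss_idx):
        merged[i - 1] = x_impute[j]
    return merged
```

Whatever placeholder sits in the missing slots of the observed vector is irrelevant, because it is always overwritten.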
70. ### Complete cases vs full luxury imputation

[Plots: posterior densities for the effect of G on B, the effect of M on B, and the effect of M on G; B, M, and G covariance as functions of phylogenetic distance; complete cases vs full luxury imputation.]
71. ### Imputing Primates

Key idea: missing values already have probability distributions. Think like a graph, not like a regression. Imputation without modeling the relationships among predictors is risky. Even if it doesn’t change the result, it’s our duty.
72. ### Censored Observations

Censored adoptions (not yet observed) vs observed adoptions, as of the time the sample is taken. [Plot: proportion not yet adopted, declining from 1.0 over time.]
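The likelihood logic behind the plot: if waiting times to adoption are exponential with rate λ, an adoption observed at time t contributes the density λe^{−λt}, while a cat not yet adopted at time t contributes only the survival probability e^{−λt}. A sketch of that likelihood (assuming the exponential waiting-time model; the adoption data themselves are not shown here):

```python
import math

def exp_log_lik(times, censored, rate):
    # exponential waiting-time log likelihood with right censoring:
    #   observed event at t:  log(rate) - rate*t   (density)
    #   censored at t:        -rate*t              (survival function only)
    lp = 0.0
    for t, c in zip(times, censored):
        lp += -rate * t + (0.0 if c else math.log(rate))
    return lp
```

Censored cases are not dropped: each one tells us the event had not yet happened, which is information about the rate.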

74. ### Course Schedule

| Week | Topic | Reading |
| --- | --- | --- |
| 1 | Bayesian inference | Chapters 1, 2, 3 |
| 2 | Linear models & Causal Inference | Chapter 4 |
| 3 | Causes, Confounds & Colliders | Chapters 5 & 6 |
| 4 | Overfitting / MCMC | Chapters 7, 8, 9 |
| 5 | Generalized Linear Models | Chapters 10, 11 |
| 6 | Ordered categories & Multilevel models | Chapters 12 & 13 |
| 7 | More Multilevel models | Chapters 13 & 14 |
| 8 | Social Networks & Gaussian Processes | Chapter 14 |
| 9 | Measurement & Missingness | Chapter 15 |
| 10 | Generalized Linear Madness | Chapter 16 |

https://github.com/rmcelreath/stat_rethinking_2022