
Generalized Linear Mixed Models in Practice: Three Points to Keep in Mind / 2021-11-06 LMM and GLMM

Yu Tamura
November 06, 2021

Slides for a talk given at the second session of the public lecture series "English Language Education in the Age of Data Science," hosted by the English Language Education program, Graduate School of Humanities, Nagoya University, on Saturday, November 6, 2021.


Transcript

  1. “[T]here is no single correct way to implement an LMM, and…the choices they [researchers] make during analysis will comprise one path, however justified, amongst multiple alternatives.” Meteyard & Davies (2020, pp. 1–2)
  2. About me
     • Yu Tamura
     • Faculty of Foreign Language Studies, Kansai University (4th year there)
     • Specialties: second language acquisition (mainly grammar); second language sentence processing
     • Hobby: watching soccer
     • On R: quite fond of it (it makes me lose track of time)
     https://tam07pb915.wordpress.com/ https://tamurayu.wordpress.com/
  3. What is a generalized linear mixed-effects model? Linear? Generalized? Mixed-effects?
     • t-tests, ANOVA, simple and multiple regression -> the (general) linear model
     • extension to distributions other than the normal -> generalized linear models
     • accounting for individual differences and the like -> mixed-effects models
     • slopes and/or intercepts estimated per participant and per item
     • roughly a combination of the by-participant and by-item analyses
  4. LME? GLMM?
     • LME (Linear Mixed-Effects): what is called a linear mixed-effects model; assumes a normal distribution
     • GLMM (Generalized Linear Mixed-Effects Model): what is called a generalized linear mixed model; uses distributions other than the normal (Poisson, binomial, gamma, etc.); a minimal sketch of the contrast follows below
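     Not on the original slide: a minimal runnable R sketch of the LME/GLMM distinction. The data frame dat, the condition cond, and the effect sizes are all hypothetical toy values.

        library(lme4)

        ## toy data (hypothetical): 30 participants x 20 items, two-level condition
        set.seed(1)
        dat <- expand.grid(subject = factor(1:30), item = factor(1:20))
        dat$cond    <- ifelse(as.integer(dat$item) %% 2 == 0, "high", "low")
        dat$logrt   <- rnorm(nrow(dat), 6 + 0.1 * (dat$cond == "high"), 0.3)
        dat$correct <- rbinom(nrow(dat), 1, 0.8)

        ## LME: response assumed normally distributed (e.g., log reading time)
        m_lme <- lmer(logrt ~ cond + (1 | subject) + (1 | item), data = dat)

        ## GLMM: non-normal response, here binary accuracy with a binomial family
        m_glmm <- glmer(correct ~ cond + (1 | subject) + (1 | item),
                        data = dat, family = binomial)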
  5. For an introduction to mixed-effects models, the intro section of Gries (2021) is concise and easy to follow.
     Gries, S. T. (2021). (Generalized Linear) Mixed-Effects Modeling: A Learner Corpus Example. Language Learning, 71, 757–798. https://doi.org/10.1111/lang.12448
  6. Analysis methods: numbers of participants and items, and power analysis
     • Increase the number of “sampling units” (Meteyard & Davies, 2020); a simulation sketch follows below.
     • “As a general rule of thumb, increasing the sample size at the highest level (i.e., sampling more groups) will do more to increase power than increasing the number of individuals in the groups.” (Scherbaum & Ferreter, 2009, p. 352)
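     Not on the original slide: one hedged way to act on this advice is simulation-based power analysis, for example with the simr package (cf. Brysbaert & Stevens, 2018, in the reference list). The toy data, effect size, and names (dat, cond, correct) below are all hypothetical.

        library(lme4)
        library(simr)

        ## toy data (hypothetical), cond sum-coded as -0.5 / 0.5
        set.seed(1)
        dat <- expand.grid(subject = factor(1:30), item = factor(1:20))
        dat$cond    <- ifelse(as.integer(dat$item) %% 2 == 0, 0.5, -0.5)
        dat$correct <- rbinom(nrow(dat), 1, plogis(1 + 0.5 * dat$cond))

        m <- glmer(correct ~ cond + (1 | subject) + (1 | item),
                   data = dat, family = binomial)

        ## simulated power for the effect of cond at the current sample size
        powerSim(m, test = fixed("cond", "z"), nsim = 50)

        ## power after sampling more participants (the highest-level unit here)
        powerSim(extend(m, along = "subject", n = 60),
                 test = fixed("cond", "z"), nsim = 50)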
  7. Analysis methods: what is the response variable?
     • y_i = β1 + β2*x_i + r_i + r_j (response variable = intercept + fixed effect + by-participant difference + by-item difference)
     • Always state explicitly “the response variable is XXX.”
     • Take particular care with logistic regression:
     • correct/incorrect responses are transparent when coded 1 = correct, 0 = incorrect, but…
     • for something like a preference task, state explicitly which response is 0 and which is 1 (see the sketch below)
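     Not on the original slide: a minimal sketch of making the 0/1 coding explicit for a preference task. The data frame pref and the response labels are hypothetical.

        ## toy preference-task data (hypothetical): choice is "active" or "passive"
        set.seed(2)
        pref <- data.frame(choice = sample(c("active", "passive"), 200, replace = TRUE))

        ## make the coding explicit in the script (and state it in the paper):
        ## 1 = passive chosen, 0 = active chosen
        pref$y <- ifelse(pref$choice == "passive", 1, 0)
        table(pref$choice, pref$y)   # sanity check: coding is as intended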
  8. Analysis methods: what are the explanatory variables?
     • y_i = β1 + β2*x_i + r_i + r_j
     • Always state explicitly “the explanatory variables (fixed effects) are XXX.”
     • Describe the coding of categorical variables carefully (more on this below):
     • which coding scheme was used
     • how the levels were set up
     • whether the choices are grounded in the research questions
  9. Coding and centering variables: on coding (Schad et al., 2020)
     • Example 1: contrasts(dat$freq) <- contr.sum(2)
     • Example 2: dat$freq_c <- ifelse(dat$freq == "low", -0.5, 0.5)
     • By default R orders factor levels alphabetically, so specify the order with factor() when needed (a self-contained version of these snippets follows below):
       dat$freq <- factor(dat$condition, levels = c("low", "high"))
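     Not on the original slide: the snippets above stitched into one runnable sketch, with a hypothetical toy column.

        ## toy column (hypothetical): two-level frequency condition
        dat <- data.frame(condition = rep(c("high", "low"), each = 10))

        ## level order is alphabetical by default; set it explicitly if needed
        dat$freq <- factor(dat$condition, levels = c("low", "high"))

        ## Example 1: sum contrasts via the contrasts() attribute
        contrasts(dat$freq) <- contr.sum(2)

        ## Example 2: the same idea as a hand-coded numeric column
        ## (note: contr.sum(2) codes low = +1 and high = -1, so the slope's
        ## sign and scale differ from this +/-0.5 version)
        dat$freq_c <- ifelse(dat$freq == "low", -0.5, 0.5)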
  10. Coding and centering variables: on coding
     • Note that with k levels you can make only k − 1 comparisons (one level is absorbed into the intercept).
     • Think of contrasts together with your hypotheses (Schad et al., 2020).
     • Especially with three or more levels, decide which levels to compare with which in light of the research questions.
     • Comparing a control group (baseline condition) against two or more experimental groups (conditions) -> dummy coding
     • Comparing one of three groups (conditions) against the mean of the other two -> sum coding
     (both schemes are sketched below)
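     Not on the original slide: a minimal sketch of the two schemes for a hypothetical three-level factor g.

        ## toy three-level factor (hypothetical): control plus two experimental groups
        g <- factor(rep(c("control", "exp1", "exp2"), each = 5),
                    levels = c("control", "exp1", "exp2"))

        ## dummy (treatment) coding: each experimental group vs. the control baseline
        contr.treatment(3)

        ## sum coding: each of the first k - 1 listed levels vs. the grand mean
        contr.sum(3)

        ## attach the chosen scheme to the factor before model fitting
        contrasts(g) <- contr.sum(3)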
  11. Coding and centering variables: on coding
     • With two or more factors, when the interaction is of interest and you want the simple effects of one factor (a runnable sketch follows below):
     1. check the interaction with sum contrasts
     2A. switch to dummy coding to check the simple effects (with three or more factors, repeated coding also works for comparing adjacent levels)
     2B. or run follow-up tests with the emmeans package:
       emmeans(model, pairwise ~ FactorA | FactorB)$contrasts  (comparisons among the levels of Factor A at each level of Factor B)
       emmeans(model, pairwise ~ FactorB | FactorA)$contrasts  (comparisons among the levels of Factor B at each level of Factor A)
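     Not on the original slide: a self-contained emmeans sketch. The 2x2 design, factors A and B, and the data are hypothetical toy values.

        library(lme4)
        library(emmeans)

        ## toy 2x2 data (hypothetical factors A and B)
        set.seed(3)
        d <- expand.grid(subject = factor(1:20), item = factor(1:16),
                         A = factor(c("a1", "a2")), B = factor(c("b1", "b2")))
        d$y <- rnorm(nrow(d))

        m <- lmer(y ~ A * B + (1 | subject) + (1 | item), data = d)

        ## simple effects of A at each level of B, and vice versa
        emmeans(m, pairwise ~ A | B)$contrasts
        emmeans(m, pairwise ~ B | A)$contrasts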
  12. Coding and centering variables: on coding
     • Useful resources on coding:
     • Shravan Vasishth’s video: https://youtu.be/hD2XjoP5WBI
     • Web pages: “Coding categorical predictor variables in factorial designs” and “Coding systems for categorical variables in regression analysis”
     • Paper: Schad, D. J., Vasishth, S., Hohenstein, S., & Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language, 110, 104038. https://doi.org/10.1016/j.jml.2019.104038
  13. Coding and centering variables: on centering
     • For continuous variables, report whether they were centered (or otherwise transformed).
     • What was the reference point for centering? (Brauer & Curtin, 2018)
     • The grand mean (between participants)? The group mean (within participants)?
     • Be careful when centering long-format data:
     • when there is only one value per person (e.g., a proficiency score), the variance computed over the long table comes out smaller than the true person-level value (see the sketch below)
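     Not on the original slide: a sketch of the long-format pitfall and one way around it, computing the centering on one row per participant before merging back. All names (subj, long, prof) are hypothetical.

        ## toy long data (hypothetical): one proficiency score per participant,
        ## repeated across that participant's 20 trials
        set.seed(4)
        subj <- data.frame(subject = factor(1:30), prof = rnorm(30, 50, 10))
        long <- merge(expand.grid(subject = factor(1:30), trial = 1:20), subj)

        ## variance over repeated rows is smaller than the person-level value
        ## (and with unequal trial counts the distortion gets worse)
        var(subj$prof)   # person-level (one row per participant)
        var(long$prof)   # long format: repeated values shrink the estimate

        ## so center (and scale) on one row per participant, then merge back
        subj$prof_c <- as.numeric(scale(subj$prof))
        long <- merge(long, subj[, c("subject", "prof_c")])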
  14. How the model was built: imagining what happens in the dark
     • As an example, take a design with main effects of factors A and B and their interaction:
       (1 + A*B | subject) + (1 + A*B | item)
     • From here, how do you proceed…?
     • Possibility 1: (1 + A + B | subject) + (1 + A*B | item)
     • Possibility 2: (1 + A*B | subject) + (1 + A + B | item)
     • Possibility 3: (1 + A | subject) + (1 + A*B | item)
     • Possibility 4: (1 + B | subject) + (1 + A*B | item)
     • Possibility 5: (1 + A*B | subject) + (1 + A | item)
     • Possibility 6: (1 + B | subject) + (1 + A*B | item); wait, suppose possibilities 1 and 2 both converge: how are you choosing between them? (see the sketch below)
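     Not on the original slide, which leaves the question open: a hedged sketch of the tie-break problem. Possibilities 1 and 2 are not nested in each other, so a likelihood ratio test does not apply; an information criterion is one defensible way to compare them (the slides return to AIC two slides later). The data d and the effects are hypothetical.

        library(lme4)

        ## toy 2x2 data (hypothetical), A and B sum-coded as +/- 0.5
        set.seed(5)
        d <- expand.grid(subject = factor(1:24), item = factor(1:16),
                         A = c(-0.5, 0.5), B = c(-0.5, 0.5))
        d$y <- rnorm(nrow(d))

        ## possibility 1: drop the A:B slope by subject
        m1 <- lmer(y ~ A * B + (1 + A + B | subject) + (1 + A * B | item),
                   data = d, REML = FALSE)
        ## possibility 2: drop the A:B slope by item
        m2 <- lmer(y ~ A * B + (1 + A * B | subject) + (1 + A + B | item),
                   data = d, REML = FALSE)

        ## not nested, so compare on an information criterion rather than an LRT
        AIC(m1, m2)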
  15. “[W]hile the maximal model indeed performs well as far as Type I error rates were concerned, power decreases substantially with model complexity.” Matuschek et al. (2017, pp. 310–311)
  16. How the model was built: a procedure for settling the random effects (Bates et al., 2015)
     1. Build the maximal model (it need not converge).
     2. Run a principal component analysis on its random-effects structure:
        1. lme4::rePCA(maxmodel) %>% summary
        2. check whether any column has a “Proportion of Variance” of zero (or a value vanishingly close to zero)
     3. Remove the correlation parameters between random intercepts and slopes and run the PCA again:
        • (1 + A | subject): with correlation parameters
        • (1 + A || subject): without correlation parameters
     4. Check the random-effect variances and rebuild the model without the low-variance terms.
     5. Compare models with likelihood ratio tests via the anova() function (Matuschek et al., 2017, used α_LRT = .20 in their simulations):
        1. significant at α_LRT = .20 or α_LRT = .10 -> keep the more complex model (the one with more terms)
        2. not significant at α_LRT = .20 or α_LRT = .10 -> drop the lowest-variance term, rebuild, and test again
     6. Run a final likelihood ratio test on whether adding the correlation parameters back to the chosen model improves it.
     In Japanese, Section 4.2 of Arai & Roland (2016) explains this procedure. With lmer(), fit with REML = FALSE (maximum likelihood) while the likelihood ratio tests are being run, then refit the final model with REML = TRUE. (A sketch of the whole procedure follows below.)
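     Not on the original slide: a compressed, runnable sketch of the steps above on hypothetical toy data. A is numeric (sum-coded), which matters because the double-bar syntax only splits out uncorrelated terms as intended for numeric predictors.

        library(lme4)

        set.seed(6)
        d <- expand.grid(subject = factor(1:24), item = factor(1:16), A = c(-0.5, 0.5))
        d$y <- rnorm(nrow(d))

        ## 1. maximal model, fit with ML (REML = FALSE) because LRTs will follow
        m_max <- lmer(y ~ A + (1 + A | subject) + (1 + A | item),
                      data = d, REML = FALSE)

        ## 2. PCA of the random effects: look for components whose
        ##    "Proportion of Variance" is (near) zero
        summary(rePCA(m_max))

        ## 3. drop the intercept-slope correlations and inspect again
        m_zcp <- lmer(y ~ A + (1 + A || subject) + (1 + A || item),
                      data = d, REML = FALSE)
        summary(rePCA(m_zcp))

        ## 4.-5. drop the smallest variance component and test with an LRT
        m_red <- lmer(y ~ A + (1 + A || subject) + (1 | item),
                      data = d, REML = FALSE)
        anova(m_red, m_zcp)   # not significant -> keep the reduced model

        ## 6. refit the chosen model with REML for reporting
        m_final <- update(m_red, REML = TRUE)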
  17. How the model was built: a forward approach to the random effects
     1. Build a model with random intercepts only.
     2. Add random slopes one at a time and keep the one that lowers AIC the most.
     3. Keep adding random slopes, retaining whichever lowers AIC the most.
     4. Stop once AIC no longer decreases.
     • A merit of AIC-based model selection is that it can compare non-nested models, and more than two models at once.
     • With small samples, however, AIC risks being “too anti-conservative” (Matuschek et al., 2017, p. 313).
     For exploratory approaches, or designs with many fixed effects, another option is to add whatever improves model fit; the Supplementary Materials of Murakami (2016) show a concrete version. (A sketch follows below.)
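     Not on the original slide: a minimal sketch of the first forward step on hypothetical toy data.

        library(lme4)

        set.seed(7)
        d <- expand.grid(subject = factor(1:24), item = factor(1:16), A = c(-0.5, 0.5))
        d$y <- rnorm(nrow(d))

        ## 1. random intercepts only
        m0 <- lmer(y ~ A + (1 | subject) + (1 | item), data = d, REML = FALSE)

        ## 2. add candidate random slopes one at a time
        m_subj <- lmer(y ~ A + (1 + A | subject) + (1 | item), data = d, REML = FALSE)
        m_item <- lmer(y ~ A + (1 | subject) + (1 + A | item), data = d, REML = FALSE)

        ## keep whichever lowers AIC most; stop when no addition lowers it further
        AIC(m0, m_subj, m_item)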
  18. What should be reported: how R² was computed
     • The simplest route is the performance package (Lüdecke et al., 2021): r2(model)
     • conditional R²: fixed effects plus random effects
     • marginal R²: the fixed-effects part only
     • piecewiseSEM::rsquared() also computes these (note: supported only on R 4.0.0 and later)
     The performance package has many other useful functions, so it is worth exploring. (A sketch follows below.)
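     Not on the original slide: a minimal runnable sketch with hypothetical toy data.

        library(lme4)
        library(performance)

        set.seed(8)
        d <- expand.grid(subject = factor(1:24), item = factor(1:16), A = c(-0.5, 0.5))
        d$y <- 0.3 * d$A + rnorm(nrow(d))
        m <- lmer(y ~ A + (1 | subject) + (1 | item), data = d)

        r2(m)
        ## conditional R2: variance explained by fixed + random effects
        ## marginal R2:    variance explained by the fixed effects alone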
  19. What should be reported: confidence intervals of the estimates
     • Use the confint.merMod() function (with the lme4 package loaded, plain confint() does the same thing; writing it this way makes explicit that it is not the stats-package version).
     • For confidence intervals of the fixed effects only -> specify parm = "beta_"
     For code that tabulates the estimates together with their 95% confidence intervals, this thread is a useful reference. (A sketch follows below.)
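     Not on the original slide: a minimal runnable sketch with hypothetical toy data.

        library(lme4)

        set.seed(9)
        d <- expand.grid(subject = factor(1:24), item = factor(1:16), A = c(-0.5, 0.5))
        d$y <- 0.3 * d$A + rnorm(nrow(d))
        m <- lmer(y ~ A + (1 | subject) + (1 | item), data = d)

        ## profile confidence intervals for the fixed effects only
        confint(m, parm = "beta_", method = "profile")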
  20. What should be reported: handy functions
     • sjPlot::tab_model()
     • tab_model(model, show.stat = T, show.est = T, dv.labels = "Accuracy")
     • Note that its confidence intervals are estimated differently from the confint() function's default (profile); tab_model() uses Wald intervals.
     • Odds ratios can also be obtained with fixef(model) %>% exp()
     [The slide shows an annotated tab_model() table pointing out the by-subject and by-item intercepts and slopes, the intercept-slope correlations for participants and for items, the within-group variance, and the intraclass correlation coefficient.]
     Reference: https://strengejacke.github.io/sjPlot/articles/tab_model_estimates.html (A sketch follows below.)
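     Not on the original slide: a minimal runnable version of the calls above, with a hypothetical toy accuracy model.

        library(lme4)
        library(sjPlot)

        set.seed(10)
        d <- expand.grid(subject = factor(1:24), item = factor(1:16), A = c(-0.5, 0.5))
        d$correct <- rbinom(nrow(d), 1, plogis(1 + 0.5 * d$A))
        m <- glmer(correct ~ A + (1 | subject) + (1 | item),
                   data = d, family = binomial)

        ## regression table; the CIs here are Wald, not confint()'s profile default
        tab_model(m, show.stat = TRUE, show.est = TRUE, dv.labels = "Accuracy")

        ## odds ratios directly from the fixed effects
        exp(fixef(m))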
  21. Domestic journals, too, should update their submission guidelines
     • The submission guidelines of the major domestic journals say nothing about data sharing and the like:
     • Language Education and Technology
     • Annual Review of English Language Education in Japan
     • JACET Journal
     • Studies in Language Sciences
     • Second Language
  22. References
     • Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278.
     • Bates, D., Kliegl, R., Vasishth, S., & Baayen, H. (2015). Parsimonious mixed models. https://arxiv.org/abs/1506.04967v2
     • Brauer, M., & Curtin, J. J. (2018). Linear mixed-effects models and the analysis of nonindependent data: A unified framework to analyze categorical and continuous independent variables that vary within-subjects and/or within-items. Psychological Methods, 23(3), 389–411. https://doi.org/10.1037/met0000159
     • Brysbaert, M., & Stevens, M. (2018). Power analysis and effect size in mixed effects models: A tutorial. Journal of Cognition, 1(1), 9. https://doi.org/10.5334/joc.10
     • Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304. https://doi.org/10.1177/0049124104268644
     • Frossard, J., & Renaud, O. (2019). Choosing the correlation structure of mixed effect models for experiments with stimuli. https://arxiv.org/abs/1903.10766v3
     • Gries, S. T. (2021). (Generalized linear) mixed-effects modeling: A learner corpus example. Language Learning, 71(3), 757–798. https://doi.org/10.1111/lang.12448
     • Hou, X. (2021). Learning two syntactic constructions simultaneously: A case of overshadowing. Language and Cognition, 13(3), 467–493. https://doi.org/10.1017/langcog.2021.10
     • Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315. https://doi.org/10.1016/j.jml.2017.01.001
     • Meteyard, L., & Davies, R. A. I. (2020). Best practice guidance for linear mixed-effects models in psychological science. Journal of Memory and Language, 112, 104092. https://doi.org/10.1016/j.jml.2020.104092
     • Murakami, A. (2016). Modeling systematicity and individuality in nonlinear second language development: The case of English grammatical morphemes. Language Learning, 66(4), 834–871. https://doi.org/10.1111/lang.12166
     • RPubs—Reduction of complexity of linear mixed models with double-bar syntax. (n.d.). Retrieved November 3, 2021, from https://rpubs.com/Reinhold/22193
     • RPubs—The correlation parameter in the random effects of mixed effects models. (n.d.). Retrieved November 3, 2021, from https://rpubs.com/yjunechoe/correlationsLMEM
     • Schad, D. J., Vasishth, S., Hohenstein, S., & Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language, 110, 104038. https://doi.org/10.1016/j.jml.2019.104038
     • Scherbaum, C. A., & Ferreter, J. M. (2009). Estimating statistical power and required sample sizes for organizational research using multilevel modeling. Organizational Research Methods, 12(2), 347–367. https://doi.org/10.1177/1094428107308906
     • Should we fit maximal linear mixed models? | R-bloggers. (2014, November 25). https://www.r-bloggers.com/2014/11/should-we-fit-maximal-linear-mixed-models/
     • Arai, M., & Roland, D. (2016). Statistical analysis of eye movement data and reading time data in language comprehension research [in Japanese]. Proceedings of the Institute of Statistical Mathematics, 64(2), 201–231.