
機械学習の解釈性に関する研究動向とシステム運用への応用 / A Survey on Interpretable Machine Learning and Its Application for System Operation

tsurubee
June 23, 2021

Transcript

  1. SAKURA internet Inc.
     (C) Copyright 1996-2021 SAKURA Internet Inc
     SAKURA internet Research Center
     Research Trends in Machine Learning Interpretability and Their Application to System Operation
     2021/06/23
     The 12th SAKURA Internet Research Meeting
     Hirofumi Tsuruta

  2. Table of Contents
     1. Research trends in machine learning interpretability
     • Why interpretability is in demand
     • Representative methods and how they are classified
     • Methods based on game theory, etc.
     2. Research on root cause diagnosis of anomalies
     • Prior work on root cause diagnosis of anomalies
     • An architecture that applies a local interpretation method
     • Evaluation of diagnosis results and execution time, etc.

  3. Part 1. Research trends in machine learning interpretability

  4. Definition of terms: interpretability and explainability
     In machine learning there is no unified definition of interpretability and explainability,
     and the two terms are sometimes used interchangeably. In this talk, "interpretation" is
     used throughout; where a cited source says "explanation," that word is used with no
     intended difference in meaning.
     One example of the debate over the distinction [Linardatos+, Entropy2021]:
     • Interpretability is "the ability to explain or to present in understandable terms to a human."
     • Explainability, by contrast, concerns the internal logic and mechanics of a machine
     learning system: a model with explainability lets humans understand its internal
     behavior during training and decision making.
     • Interpretability is the broader of the two terms.
     [Linardatos+, Entropy2021] Explainable AI: A Review of Machine Learning Interpretability Methods

  5. Applications of machine learning
     • Machine learning techniques, deep learning above all, achieve high performance on tasks
     such as image recognition and natural language processing, and are being applied across
     many fields.
     • In particular, their use is expanding into settings that demand high-stakes decision
     making, such as medicine and healthcare, autonomous driving and robot control, and loan
     screening.
     Image-based diagnosis of diabetic retinopathy with deep learning [Beede+, CHI2020]
     [Beede+, CHI2020] A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy

  6. The black-box problem and how to address it
     Machine learning models such as deep neural networks are criticized as "black boxes":
     humans cannot understand the grounds for their predictions and decisions. Demand for
     machine learning interpretability is rising, and an understanding of those grounds is
     required in settings such as:
     • Checking consistency with a physician's findings in medical diagnosis
     • Determining the cause when an autonomous vehicle has an accident
     • Guaranteeing fairness in use by public institutions, etc.
     Regulatory context:
     • Japan: Draft AI R&D Guidelines※1 (Ministry of Internal Affairs and Communications, 2017)
     • Principle of transparency
     • Principle of accountability
     • EU: General Data Protection Regulation (GDPR)※2 (2018)
     • GDPR Article 22, "Automated individual decision-making, including profiling" —
     provisions on accountability toward users
     ※1 https://www.soumu.go.jp/main_content/000499625.pdf
     ※2 https://gdpr-info.eu/

  7. A growing research field
     • Since around 2016, the number of papers on machine learning interpretability has
     increased year by year (figure below).
     • Machine learning conferences and workshops also hold sessions on interpretability:
     • AAAI 2019 tutorial: Tutorial on Explainable AI: From Theory to Motivation,
     Applications and Limitations
     • NeurIPS 2020 tutorial: Explaining Machine Learning Predictions: State-of-the-art,
     Challenges, and Opportunities
     • Session at the 41st IBISML workshop (2020): "Significance, explainability, and
     safety of machine learning"
     • FAT/ML (Fairness, Accountability, and Transparency in Machine Learning) (2014–2018)

  8. Adoption in real services
     Services that offer machine learning interpretability are also on the rise.
     https://www.datarobot.com/wiki/prediction-explanations/
     https://cloud.google.com/explainable-ai
     https://jpn.nec.com/ai/xai_a.html

  9. What interpretability provides
     [Adadi+, IEEE Access2018] motivates the need for machine learning interpretability from
     four perspectives:
     • Justify the model's predictions and decisions
     • Identify and fix the model's vulnerabilities and flaws (debugging)
     • Improve continuously in the human–model loop
     • Understand what the model has learned and arrive at new findings
     [Adadi+, IEEE Access2018] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

  10. Explain to justify: explanations for justification
     It has been reported that systems that use machine learning can produce biased or
     discriminatory outcomes.
     https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
     "Fair and transparent decision making and accountability for its results must be
     appropriately ensured, and trust in the technology must be secured." (Cabinet Office,
     from the "Social Principles of Human-Centric AI")
     https://queue.acm.org/detail.cfm?id=2460278

  11. Explain to control: explanations for control
     [Ribeiro+, KDD2016] "Why Should I Trust You?": Explaining the Predictions of Any Classifier
     • In a model that classifies images of wolves versus huskies, the wolf images used for
     training happened to have snow in the background.
     • The resulting model recognizes wolf versus husky from features of the background.
     Looking only at the data and the predictions, it is hard to detect that a model has
     learned something unintended. By understanding the model's behavior, we can grasp its
     vulnerabilities and flaws and debug the model. [Ribeiro+, KDD2016]

  12. Criticism of machine learning interpretability
     Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use
     Interpretable Models Instead (2019)
     • It is dangerous to try to explain black-box models after the fact instead of building
     interpretable models in the first place.
     • Approximate explanations may be unfaithful to what the original model actually
     computes, etc.
     What matters is recognizing that no universal method gives accurate interpretations or
     explanations in every case, and that a method must be validated on one's own data
     before deployment.

  13. A taxonomy of interpretation methods [Das+, arXiv2020]
     [Das+, arXiv2020] Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

  14. Scope: global or local
     By the scope of the interpretation, methods fall into two classes:
     1. Global interpretation: methods that give the model itself interpretability
     2. Local interpretation: methods that give interpretability to the prediction for an
     individual data point
     [Das+, arXiv2020] Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

  15. 15
    ͲͷΑ͏ʹղऍख๏͕։ൃ͞Ε͍ͯΔ͔ (Usage)ʹج͖ͮɼҎԼͷೋͭʹ෼ྨ͞ΕΔɽ
    1ɽຊ࣭త (Intrinsic)ɿ

    ɹ ຊ࣭తʹղऍՄೳͳϞσϧͷར༻΍ઃܭ
    2ɽޙ෇͚త (Post-hoc)ɿ

    ɹ ϞσϧͷֶशޙʹղऍੑΛ෇༩
    [Das+, arXiv2020] Opportunities and Challenges in Explainable Arti
    fi
    cial Intelligence (XAI): A Survey
    [Das+, arXiv2020]
    [Das+, arXiv2020]
    UsageɿIntrinsic or Post-hoc

    View Slide

  16. Representative methods [Adadi+, IEEE Access2018]
     (Table from the survey, organizing methods by Scope, by Usage, and by whether a method
     is model-specific or model-agnostic.)
     [Adadi+, IEEE Access2018] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

  17. Shapley values: cooperative game theory
     • In cooperative game theory, the Shapley value is one means of fairly dividing the
     payoff obtained through the cooperation of multiple players, in proportion to each
     player's contribution.
     • Recently it has drawn attention as a measure of each feature's importance to a
     machine learning model's prediction. Mapped onto machine learning, the features are
     the players and the prediction is the payoff.
     Cooperative games vs. machine learning: player ↔ feature, payoff ↔ prediction,
     game ↔ model

  18. (The same slide, with a concrete example.) To explain a sales prediction, the payoff
     is the sales figure and the players are the features: weather, temperature, and day of
     the week.

  19. Shapley values: characteristic function games
     Concrete example: three players (1, 2, 3) cooperate in a game and win the prizes below.
     Participating players | Prize
     1                     |  4
     2                     |  6
     3                     | 10
     1, 2                  | 16
     1, 3                  | 22
     2, 3                  | 30
     1, 2, 3               | 60
     N = {1, 2, 3}: the set of players
     v: the characteristic function, giving the payoff each subset S of N can obtain.
     Example: v({1,2}) = 16
     How should the payoff obtained by the cooperation of all players, v({1,2,3}) = 60,
     be divided among them?
     The pair (N, v) is called a characteristic function game.

  20. Shapley values: introducing marginal contributions
     Marginal contribution: the increase in payoff when player i joins, v(S ∪ {i}) − v(S).
     A player's marginal contribution depends on the order in which the players join.
     For example, when player 3 joins last: v({1,2,3}) − v({1,2}) = 60 − 16 = 44.
     Join order    | Marginal contribution of player 1 / 2 / 3
     1 → 2 → 3     |  4 / 12 / 44
     1 → 3 → 2     |  4 / 38 / 18
     2 → 1 → 3     | 10 /  6 / 44
     2 → 3 → 1     | 30 /  6 / 24
     3 → 1 → 2     | 12 / 38 / 10
     3 → 2 → 1     | 30 / 20 / 10
     Shapley value | 15 / 20 / 25
     Shapley value: the average of a player's marginal contributions over all subsets.
     For player 3: (44 + 18 + 44 + 24 + 10 + 10) / 6 = 25.
     φ_i = Σ_{S ⊆ N\{i}} [ |S|! (n − |S| − 1)! / n! ] · (v(S ∪ {i}) − v(S))
     where |S| is the number of players in subset S and n is the total number of players.

  21. (The same slide, adding the efficiency property.)
     15 + 20 + 25 = 60 = v(N): the Shapley values sum to the payoff obtained by the
     cooperation of all players. A small sketch of this computation follows below.
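A minimal Python sketch (an addition, not part of the original deck) that reproduces the worked example above: it averages each player's marginal contribution over all join orders, using the prize table from slide 19.

```python
from itertools import permutations

# Characteristic function v from slide 19: frozenset of players -> payoff.
v = {
    frozenset(): 0,
    frozenset({1}): 4, frozenset({2}): 6, frozenset({3}): 10,
    frozenset({1, 2}): 16, frozenset({1, 3}): 22, frozenset({2, 3}): 30,
    frozenset({1, 2, 3}): 60,
}
players = [1, 2, 3]

orders = list(permutations(players))
phi = {p: 0.0 for p in players}
for order in orders:
    coalition = set()
    for p in order:
        # Marginal contribution of p when it joins the current coalition.
        phi[p] += v[frozenset(coalition | {p})] - v[frozenset(coalition)]
        coalition.add(p)
phi = {p: total / len(orders) for p, total in phi.items()}

print(phi)                # {1: 15.0, 2: 20.0, 3: 25.0}
print(sum(phi.values()))  # 60.0 = v(N): the efficiency property
```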

  22. Shapley values: problems when applying them
     φ_i = Σ_{S ⊆ N\{i}} [ |S|! (n − |S| − 1)! / n! ] · (v(S ∪ {i}) − v(S))
     Problem 1 (a problem of Shapley values in general)
     • The number of possible join orders of all players is n!.
     • For example, with n = 10 there are about 3.62 million orders; as n grows, computing
     the exact value in realistic time becomes impossible.
     Q. How do we cope with the explosion in computation as n grows?
     Problem 2 (a problem specific to machine learning)
     Q. In v(S), how do we obtain a prediction when some feature is absent?
     • When applied to interpreting a machine learning model, the characteristic function v
     becomes the machine learning model f.
     • Normally the model only yields predictions when all features are present, so the
     prediction for a subset with some features absent has to be reproduced in some way.

  23. SHAP (SHapley Additive exPlanation)
     • SHAP is a local interpretation method for machine learning models based on the
     Shapley value; it presents each feature's contribution to the model's prediction
     [Lundberg+, NIPS2017].
     • Kernel SHAP resolves the two problems above as follows (a usage sketch follows below):
     Problem 1: How do we cope with the explosion in computation as n grows?
     → Formulation as a weighted least-squares problem + Monte Carlo approximation.
     The Kernel SHAP implementation※ samples from the heavily weighted ends of the
     distribution (binary vectors representing subsets, flipping elements one at a time
     starting from the all-0 and all-1 vectors).
     Problem 2: In v(S), how do we obtain a prediction when some feature is absent?
     → Replace absent features with reference values from a background dataset.
     When the background dataset D contains multiple data points, the implementation※
     takes the expectation over them. Here x is the data point to interpret and x′ is a
     reference value (one data point in D).
     [Lundberg+, NIPS2017] A Unified Approach to Interpreting Model Predictions
     ※ https://github.com/slundberg/shap
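A minimal sketch (an addition; the toy data and model are assumptions, not the deck's experiment) of how Kernel SHAP is typically invoked through the authors' shap library. The background dataset supplies the reference values of Problem 2, and nsamples caps the number of sampled coalitions, addressing Problem 1.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# A toy model standing in for the black box f.
X = np.random.rand(200, 4)
y = 2 * X[:, 0] + X[:, 1]
model = RandomForestRegressor(n_estimators=50).fit(X, y)

# Background dataset D: reference values that stand in for "absent" features.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)

# Sampling-based approximation: nsamples bounds how many feature coalitions
# are evaluated instead of enumerating all 2^n subsets.
shap_values = explainer.shap_values(X[:1], nsamples=100)
print(shap_values)  # each feature's contribution to the prediction for X[0]
```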

  24. Main references
     1. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) (2018)
        https://ieeexplore.ieee.org/abstract/document/8466590
        • A survey that comprehensively covers the main concepts, motivations, and research
        trends of XAI.
     2. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey (2020)
        https://arxiv.org/abs/2006.11371
        • A survey of XAI methods specialized to deep learning.
        • It proposes a way to categorize the results up to 2020, which makes the overall
        picture easy to grasp.
     3. A Unified Approach to Interpreting Model Predictions (2017)
        https://dl.acm.org/doi/10.5555/3295222.3295230
        • The paper that proposed SHAP (2017).
     4. Explaining the decision grounds of machine learning models (Ver.2) (2020)
        https://www.slideshare.net/SatoshiHara3/ver2-225753735
        • Lecture slides by Satoshi Hara (Osaka University).
        • They cover representative XAI research, the reliability of explanations, and more.
        • A related lecture video is on YouTube※.
     ※ https://www.youtube.com/watch?v=Fgza_C6KphU

  25. Part 2. Research on root cause diagnosis of anomalies

  26. Larger, more complex systems and the challenge of monitoring
     • As systems grow larger, the number of system components increases and the
     relationships among components grow more complex.
     • As a result, when a performance anomaly occurs, system administrators can neither
     exhaustively inspect the metrics that indicate system state by eye nor grasp the
     relationships among metrics, so identifying the cause of the anomaly has become
     difficult.
     An approach is needed that shortens the time until the administrator understands the
     cause of the anomaly.

  27. Proposal: root cause diagnosis of anomalies with SHAP
     Present the metrics that contributed to an anomaly, together with their contributions.
     Recap — cooperative games vs. machine learning: player ↔ feature = a metric
     (e.g., CPU usage); payoff ↔ prediction = the anomaly score; game ↔ model.

  28. Prior work: machine learning based (e.g., deep learning)
     Methods that diagnose the cause of system anomalies with machine learning models such
     as deep neural networks have been proposed※1. Administrators can be expected to use
     them to narrow down the root cause of an anomaly. However:
     • They carry the burden of training and updating a model in advance.
     • Training and updating the model incurs computational cost.
     • The metrics to analyze, which form the model's input, must be specified in advance.
     ※1 C. Zhang et al., A Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in Multivariate Time Series Data, Proceedings of the AAAI Conference on Artificial Intelligence, 2019.

  29. Prior work: statistical causal discovery based
     Methods using statistical causal discovery have been proposed as approaches that need
     no model training in advance and can diagnose the cause starting from the moment an
     anomaly occurs※2,3. They can identify the anomaly's propagation path via a causal graph.
     • Existing methods such as Microscope※2 and AutoMAP※3 still require the metrics to
     analyze to be specified in advance.
     • If the root cause metric is not among the metrics the administrator selected, the
     cause metric can be excluded from the diagnosis result.
     ※2 J. Lin et al., Microscope: Pinpoint Performance Issues with Causal Graphs in Micro-service Environments, International Conference on Service-Oriented Computing, 2018.
     ※3 M. Ma et al., AutoMAP: Diagnose Your Microservice-based Web Applications Automatically, Proceedings of The Web Conference 2020 (WWW '20), 2020.

  30. Goal of this research
     • This talk examines a method that diagnoses the cause of system anomalies with SHAP
     (SHapley Additive exPlanation), a local interpretation method for machine learning
     models, requiring neither model training nor specification of target metrics in
     advance.
     • Verify that SHAP, a local interpretation method, can be used for root cause
     diagnosis of system anomalies.
     • Verify that a local interpretation method can be computed within practical time in
     environments that demand immediacy.

  31. Local interpretation
     • Local interpretation means interpreting the grounds for a model's prediction or
     decision on a specific input.
     • Representative methods include LIME※5 and SHAP※6.
     • These methods present the features that formed the grounds of the prediction or
     decision.
     • For example, suppose an image classification model is given an image and labels it
     "meerkat." LIME and SHAP present the features behind that decision (for an image,
     the pixels) together with the degree to which each contributed.
     ※5 M. T. Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 2016.
     ※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
     https://github.com/slundberg/shap

  32. Local interpretation and root cause diagnosis of anomalies
     • Local interpretation methods have been studied most heavily in image recognition,
     but their usefulness has also been shown for root cause diagnosis of anomalies※7-9.
     • For example, it has been reported that interpreting the results of anomaly detection
     by PCA※7, autoencoders※8, Gaussian mixture models※9, and variational autoencoders※9
     with SHAP and related techniques identifies causes more accurately than other methods,
     or yields interpretations closer to human intuition.
     ※7 N. Takeishi, Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection, IEEE International Conference on Data Mining Workshops (ICDM Workshops), 2019.
     ※8 L. Antwarg et al., Explaining Anomalies Detected by Autoencoders Using SHAP, arXiv:1903.02407, 2019.
     ※9 N. Takeishi and Y. Kawahara, On Anomaly Interpretation via Shapley Values, arXiv:2004.04464, 2020.

  33. Architecture overview
     Overview of a cause diagnosis method that requires neither model training nor
     specification of target metrics in advance.
     • The proposed method diagnoses the cause after an anomaly has occurred; for anomaly
     detection itself, it assumes mechanisms such as Service Level Objectives (SLOs) or
     per-metric thresholds.
     • When an anomaly occurs, the method presents the diagnosis to the system
     administrator, aiming to support the work of restoring the system from the anomalous
     state.
     • Requirement: the time until the diagnosis is presented should be short.

  34. Step 1: metric filtering
     • Because the proposed method does not require target metrics to be specified in
     advance, the metrics to analyze can be selected after the anomaly occurs.
     • Filtering out metrics unlikely to relate to the anomaly, such as those that barely
     changed when it occurred, improves diagnosis accuracy and shortens the execution time
     of the subsequent steps.
     • As one way to filter out weakly related metrics, we consider using TSifter※10, a
     result of our earlier research. Its two stages are a stationarity test followed by
     hierarchical clustering (see the sketch below).
     ※10 Y. Tsubouchi, H. Tsuruta, and M. Furukawa, TSifter: A dimensionality reduction method for time-series data suited to fast diagnosis of performance anomalies in microservices, 13th Internet and Operation Technology Symposium (IOTS 2020).
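A hedged Python sketch of the two TSifter stages named above. The ADF stationarity test, correlation-distance clustering, and all thresholds here are illustrative assumptions, not the published TSifter implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from statsmodels.tsa.stattools import adfuller

def tsifter_like_filter(metrics, p_thresh=0.05, dist_thresh=0.5):
    """metrics: dict of name -> 1-D np.ndarray time series."""
    # Stage 1 (stationarity test): keep only series the ADF test judges
    # non-stationary, i.e. those whose behavior actually changed.
    varying = {name: ts for name, ts in metrics.items()
               if np.std(ts) > 0 and adfuller(ts)[1] > p_thresh}
    if len(varying) <= 1:
        return list(varying)
    # Stage 2 (hierarchical clustering): group near-duplicate series by
    # correlation distance and keep one representative per cluster.
    names = list(varying)
    Z = np.array([(ts - ts.mean()) / ts.std() for ts in varying.values()])
    labels = fcluster(linkage(Z, method="average", metric="correlation"),
                      t=dist_thresh, criterion="distance")
    reps = {}
    for name, label in zip(names, labels):
        reps.setdefault(label, name)  # first series in each cluster
    return list(reps.values())
```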

  35. Step 2: model training
     • The proposed method trains a model on observed data after the anomaly occurs, so it
     needs a model that can be trained fast.
     • As the anomaly detection model we consider principal component analysis (PCA), with
     extension to nonlinear models and others planned.
     • In PCA-based anomaly detection, dimensionality reduction on the observed data yields
     a normal subspace, and the distance between a test data point and the normal subspace
     is used as the anomaly score (a sketch follows below).
     • In the proposed method, the anomaly score computed by PCA is used not for detection
     but for cause diagnosis after detection.
     (Figure: normal subspace, test data (vector), feature space)
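A minimal sketch (assumed details, not the exact implementation) of the PCA-based anomaly score described above: fit the normal subspace on the training window, then score each test point by its distance to that subspace (its reconstruction error).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_pca_scorer(X_train, n_components=2):
    """Learn the normal subspace from training data; return a scoring function."""
    scaler = StandardScaler().fit(X_train)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_train))

    def anomaly_score(X):
        Z = scaler.transform(np.atleast_2d(X))
        # Project onto the normal subspace and back; the residual norm is the
        # distance between each point and the normal subspace.
        Z_hat = pca.inverse_transform(pca.transform(Z))
        return np.linalg.norm(Z - Z_hat, axis=1)

    return anomaly_score
```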

  36. Step 3: computing contributions to the anomaly
     • To diagnose the cause of the anomaly, the proposed method computes each metric's
     contribution to the anomaly.
     • For the contribution computation we consider SHAP, which is based on the Shapley
     value from cooperative game theory.
     • Among the SHAP algorithms we adopt Kernel SHAP※6:
     • a model-agnostic interpretation method;
     • an approach that combines linear LIME with the Shapley value.
     Kernel SHAP belongs to the additive feature attribution methods※6, where f is the
     complex model to interpret, g is a simple explanation model, and φ is each feature's
     contribution to the prediction (a sketch follows below).
     ※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
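A hedged sketch of Step 3 that ties the pieces together: the anomaly score function plays the role of the model f to be explained, and the training window serves as the background data. The names X_train, X_test (the filtered metric windows from Step 1), and fit_pca_scorer follow the sketches above and are assumptions.

```python
import numpy as np
import shap

anomaly_score = fit_pca_scorer(X_train)            # from the Step 2 sketch
explainer = shap.KernelExplainer(anomaly_score, shap.sample(X_train, 50))

# SHAP values per test time step, shape (time steps, metrics): each entry is
# a metric's contribution phi to the anomaly score at that time step.
shap_values = explainer.shap_values(X_test, nsamples=200)

# Candidate root-cause metrics, ranked by mean |SHAP value| over the window.
ranking = np.argsort(-np.abs(shap_values).mean(axis=0))
```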

  37. Evaluation: experimental setup
     • We deployed Sock Shop※, a benchmark microservice application, on Google Kubernetes
     Engine (GKE).
     • From the 11 containers that make up Sock Shop, we collected metrics such as CPU
     usage with cAdvisor every 5 seconds.
     • We used Locust to generate synthetic load against the Sock Shop application.
     • To imitate a system anomaly, we injected CPU load into the user-db container with
     stress-ng.
     (Setup diagram: a microservice cluster running Sock Shop — front-end, catalogue,
     orders, carts, user, payment, shipping — with Prometheus collecting and storing
     metrics; a control server generating external load and injecting CPU load; and an
     analysis server (8 cores, 32 GB) with metric-collection and analysis modules.)
     ※ https://microservices-demo.github.io/

  38. 1. Root cause diagnosis: filtering
     • Dimensionality reduction with TSifter cut the number of metrics collected from the
     Sock Shop containers from 601 to 72.
     • The data to analyze is thus a 72 × 240 multivariate time series (240 points = 20
     minutes).
     (Figure: standardized metrics of the user-db container after filtering, with the
     anomaly injection point marked.)

  39. (The same slide, with the split marked.) The window is divided into training data
     and test data.

  40. 1. Root cause diagnosis: contributions to the anomaly
     (Figures: contributions to the anomaly at one time step (SHAP force plot), and over
     the whole test window of 120 time steps (SHAP summary plot). Metric names read
     c-(container name)_(metric name).)
     • The summary plot sorts metrics from the top by the mean of the absolute SHAP
     values, which amounts to listing root cause candidates from the top.
     • Under these experimental conditions, the interpretation given by SHAP matches the
     actual root cause of the anomaly.
     A sketch of producing the two plots follows below.
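Assuming the explainer and shap_values from the Step 3 sketch, the two plots on this slide correspond to the shap library's force_plot and summary_plot helpers.

```python
import shap

# One time step: how each metric pushed the anomaly score up or down.
shap.force_plot(explainer.expected_value, shap_values[0], X_test[0],
                matplotlib=True)

# Whole test window: metrics sorted from the top by mean |SHAP value|.
shap.summary_plot(shap_values, X_test)
```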

  41. 1. Root cause diagnosis: comparison with a baseline
     • As the baseline diagnosis method we used Gaussian Based Thresholding (GBT).
     • GBT ranks a metric as contributing more to the anomaly the larger the difference
     between its training-window mean and its test-window mean (a sketch follows below).
     GBT's contributions to the anomaly:
     • The CPU metric of user-db, the actual root cause in this experiment, ranked only
     7th by contribution.
     • We attribute this to metrics with large variance even under normal conditions: when
     their training and test means happen to differ strongly by chance, GBT cannot
     distinguish that from a change caused by the anomaly.
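A minimal sketch of the GBT baseline as this slide describes it; the exact normalization used in the experiment is not stated, so the plain absolute difference of window means is shown.

```python
import numpy as np

def gbt_ranking(X_train, X_test):
    """Rank metrics by |mean(test window) - mean(train window)|, largest first."""
    diff = np.abs(X_test.mean(axis=0) - X_train.mean(axis=0))
    return np.argsort(-diff)  # indices of candidate root-cause metrics
```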

  42. 2. Execution time
     • The proposed method took 64 seconds to run, and the SHAP computation dominated. For
     the SHAP computation we used the Python library※ developed by the SHAP authors,
     parallelizing the per-time-step SHAP computation across the 8 cores of the server
     (sketch below).
     • Because the execution time grows with the number of target metrics, speeding up the
     SHAP computation is the key issue for scaling to systems larger than this experiment.
     (Figure: execution time of the proposed method per step, for computing the summary
     plot.)
     ※ https://github.com/slundberg/shap
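A hedged sketch of the parallelization mentioned above: SHAP values at different time steps are independent, so they can be computed in parallel across cores. It assumes the explainer from the Step 3 sketch is defined at module level (so forked workers can reach it); the pool size matches the 8-core server.

```python
import numpy as np
from multiprocessing import Pool

def explain_step(x_row):
    # One time step's SHAP values; rows are independent of each other.
    return explainer.shap_values(x_row, nsamples=200)

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        rows = pool.map(explain_step, list(X_test))
    shap_values = np.vstack(rows)
```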

  43. Summary and future work
     Summary
     • This talk examined a method that diagnoses the cause of system anomalies with SHAP,
     a local interpretation method for machine learning models, requiring neither model
     training nor specification of target metrics in advance.
     • For the anomaly pattern in this experiment, SHAP as used in the proposed method gave
     better diagnosis results than the baseline method.
     • Under these experimental conditions, the proposed method ran in 64 seconds, with the
     SHAP computation dominating.
     Future work
     • Quantitatively evaluate diagnosis accuracy on a broad range of anomaly patterns to
     demonstrate the method's usefulness.
     • Verify that the method remains practical as the target system scales up, by
     evaluating diagnosis time as the number of metrics grows.
     • PCA, as adopted here, is a simple model that is linear and ignores temporal
     structure; adopt nonlinear and time-series-aware models and verify their
     effectiveness.