
機械学習の解釈性に関する研究動向とシステム運用への応用 / A Survey on Interpretable Machine Learning and Its Application for System Operation

tsurubee
June 23, 2021


Transcript

  1. Agenda
     1. Research trends in interpretable machine learning
        • Background behind the demand for interpretability
        • Representative methods and their classification
        • Introduction of game-theory-based methods, etc.
     2. Introduction of research on root cause diagnosis of anomalies
        • Prior work on root cause diagnosis of anomalies
        • An architecture that applies a local interpretation method
        • Evaluation of diagnosis results and execution time, etc.
  2. Definition of terms: interpretability and explainability
     In the machine learning field there is no unified definition of interpretability and explainability, and the two are sometimes used interchangeably. This talk uses the term "interpretation" throughout, and uses "explanation" with no difference in meaning where the cited material does.
     One example of a discussion of the difference [Linardatos+, Entropy2021] Explainable AI: A Review of Machine Learning Interpretability Methods:
     • Interpretability is "the ability to explain or to present in understandable terms to a human."
     • Explainability, by contrast, is a property of the internal logic and mechanics of a machine learning system: a model with explainability lets humans understand its internal workings during training and decision making.
     • Interpretability is a broader term than explainability.
  3. The black-box problem and how to address it
     Machine learning models such as deep learning are criticized as "black boxes" whose predictions and decisions humans cannot understand. Understanding the grounds for a prediction or decision is demanded, for example, in the following situations.
     Growing demand for interpretability in machine learning:
     • Checking consistency with a physician's findings in medical diagnosis
     • Investigating the cause when an accident occurs in autonomous driving
     • Guaranteeing fairness when used by public institutions, etc.
     • Japan: Draft AI R&D Guidelines※1 (Ministry of Internal Affairs and Communications, 2017), including the principle of transparency and the principle of accountability
     • EU: General Data Protection Regulation (GDPR)※2 (2018), Article 22 "Automated individual decision-making, including profiling," which concerns accountability to users
     ※1 https://www.soumu.go.jp/main_content/000499625.pdf
     ※2 https://gdpr-info.eu/
  4. Rise of the research field
     • Since around 2016, the number of papers on machine learning interpretability has increased year by year (figure below).
     • Machine-learning conferences and workshops also hold sessions on interpretability.
     • AAAI 2019 tutorial: Tutorial on Explainable AI: From Theory to Motivation, Applications and Limitations
     • NeurIPS 2020 tutorial: Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
     • Session at the 41st IBISML workshop (2020): "Significance, explainability, and safety of machine learning"
     • FAT/ML (Fairness, Accountability, and Transparency in Machine Learning) (2014-2018)
  5. What interpretability gives us
     [Adadi+, IEEE Access2018] argues for the need for interpretability in machine learning from the following four perspectives:
     • Justify the model's predictions and decisions (explain to justify)
     • Identify and fix the model's vulnerabilities and flaws, i.e. debugging (explain to control)
     • Improve the model continuously in a human-model loop (explain to improve)
     • Understand what the model has learned, leading to new discoveries (explain to discover)
     [Adadi+, IEEE Access2018] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
  6. Explain to control: explanation for the sake of control
     [Ribeiro+, KDD2016] "Why Should I Trust You?": Explaining the Predictions of Any Classifier
     • In a model that classifies images of wolves vs. huskies, the wolf images used for training had snow in the background.
     • As a result, a model was built that recognizes wolf vs. husky from features of the background.
     • Looking only at the data and the predicted values, it is hard to spot that the model has learned something unintended.
     By understanding a machine learning model's behavior, we can grasp its vulnerabilities and flaws and debug the model.
  7. Criticism of machine learning interpretability
     Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead (2019)
     • Trying to explain models that have become black boxes, rather than building interpretable models in the first place, is dangerous.
     • Approximate explanations may not be faithful to what the original model actually computes, etc.
     What matters is recognizing that no universal method gives accurate interpretations or explanations in every case, and that any method should be validated on one's own data before adoption.
  8. Scope: global or local
     Interpretation methods are classified by their scope into the following two kinds.
     1. Global interpretation: methods that make the model itself interpretable
     2. Local interpretation: methods that attach an interpretation to the prediction for an individual data point
     [Das+, arXiv2020] Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
  9. Usage: intrinsic or post-hoc
     By how the interpretation method is built (usage), methods are classified into the following two kinds.
     1. Intrinsic: using or designing models that are interpretable by construction
     2. Post-hoc: attaching interpretability to a model after it has been trained
     [Das+, arXiv2020] Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
  10. Representative methods
      [Adadi+, IEEE Access2018] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
      Methods are organized along Scope, Usage, and whether they are model-specific or model-agnostic.
  11. Shapley value: characteristic function games
      Concrete example: three players (1, 2, 3) tackle a game cooperatively and win the following prizes.

      Participating players | Prize
      1                     | 4
      2                     | 6
      3                     | 10
      1, 2                  | 16
      1, 3                  | 22
      2, 3                  | 30
      1, 2, 3               | 60

      N = {1,2,3} is the set of players. The characteristic function v assigns to each subset S of N the payoff it can obtain, e.g. v({1,2}) = 16. The pair (N, v) is called a characteristic function game. How should the payoff v({1,2,3}) = 60, obtained by the cooperation of all players, be divided?
  12. Shapley value: introducing marginal contributions
      Marginal contribution: the increase in payoff when player i joins, v(S ∪ {i}) − v(S). The marginal contribution depends on the order in which player i joins, e.g. v({1,2,3}) − v({1,2}) = 60 − 16 = 44.

      Join order    | Player 1 | Player 2 | Player 3
      1 → 2 → 3     | 4        | 12       | 44
      1 → 3 → 2     | 4        | 38       | 18
      2 → 1 → 3     | 10       | 6        | 44
      2 → 3 → 1     | 30       | 6        | 24
      3 → 1 → 2     | 12       | 38       | 10
      3 → 2 → 1     | 30       | 20       | 10
      Shapley value | 15       | 20       | 25

      For example, for player 3: (44 + 18 + 44 + 24 + 10 + 10)/6 = 25.
      Shapley value: the average of a player's marginal contributions over all subsets,
      φ_i = Σ_{S ⊆ N∖{i}} |S|!(n − |S| − 1)!/n! · (v(S ∪ {i}) − v(S)),
      where |S| is the number of players in the subset S and n is the total number of players.
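As a check on the arithmetic above, here is a minimal Python sketch (the function and variable names are ours, not from the slides) that evaluates the definition directly for this example game; it reproduces the Shapley values 15, 20, 25 and their sum v(N) = 60.

```python
from itertools import combinations
from math import factorial

# Characteristic function v of the example game: coalition -> prize.
v = {
    frozenset(): 0,
    frozenset({1}): 4, frozenset({2}): 6, frozenset({3}): 10,
    frozenset({1, 2}): 16, frozenset({1, 3}): 22, frozenset({2, 3}): 30,
    frozenset({1, 2, 3}): 60,
}
N = frozenset({1, 2, 3})
n = len(N)

def shapley(i):
    # phi_i = sum over S subset of N\{i} of |S|!(n-|S|-1)!/n! * (v(S u {i}) - v(S))
    total = 0.0
    others = sorted(N - {i})
    for size in range(n):
        for S in map(frozenset, combinations(others, size)):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v[S | {i}] - v[S])
    return total

values = [shapley(i) for i in sorted(N)]
print(values, sum(values))  # [15.0, 20.0, 25.0] 60.0 = v(N)
```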
  13. Shapley value: introducing marginal contributions (cont.)
      (Same game and table of marginal contributions as on the previous slide, adding the efficiency property.)
      15 + 20 + 25 = 60 = v(N): the sum of the Shapley values equals the payoff obtained by the cooperation of all players.
  14. Shapley value: problems in practice
      Shapley value: φ_i = Σ_{S ⊆ N∖{i}} |S|!(n − |S| − 1)!/n! · (v(S ∪ {i}) − v(S))
      Problem 1 (a general problem of the Shapley value)
      Q. How do we cope with the explosion in computation as n grows?
      • The number of join orders of all players is n!.
      • For example, for n = 10 there are already about 3.6 million orderings; as n grows, computation in realistic time becomes impossible.
      Problem 2 (a problem specific to applying it to machine learning)
      Q. In v(S), how do we obtain a prediction when some feature is absent?
      • When applied to interpreting a machine learning model, the characteristic function v becomes the machine learning model f.
      • Normally we can only obtain the model's prediction when all features are present, so the prediction for a subset in which particular features are absent must be reproduced by some means.
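To put numbers on Problem 1, a one-liner makes the factorial growth vivid (purely illustrative):

```python
from math import factorial

# n! join orders must be averaged over for an exact Shapley value.
for n in (3, 10, 20):
    print(n, factorial(n))
# 3 6
# 10 3628800            (the ~3.6 million orderings mentioned above)
# 20 2432902008176640000
```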
  15. SHAP (SHapley Additive exPlanation)
      • SHAP is a local interpretation method for machine learning models based on the Shapley value; it presents each feature's contribution to the model's prediction [Lundberg+, NIPS2017].
      • Kernel SHAP resolves the two problems of computing Shapley values described above as follows.
      Problem 1: how to cope with the explosion in computation as n grows → formulate the problem as weighted least squares plus Monte Carlo approximation. In the implementation※, subsets are sampled from the two heavily weighted ends of the figure (binary vectors representing subsets, flipping elements one at a time starting from the all-0 and all-1 vectors).
      Problem 2: in v(S), how to obtain a prediction when some feature is absent → replace the absent features with reference values from a background dataset. In the implementation※, when several background data points D are specified, the expectation over them is taken. x: the data point to interpret; x′: a reference point (one data point in D).
      [Lundberg+, NIPS2017] A Unified Approach to Interpreting Model Predictions
      ※ https://github.com/slundberg/shap
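A minimal usage sketch of Kernel SHAP via the shap package referenced above; the model and data here are synthetic stand-ins, and nsamples is shown only to make the Monte Carlo budget explicit.

```python
import numpy as np
import shap  # https://github.com/slundberg/shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Problem 2: the background dataset supplies reference values x' that
# stand in for "absent" features; expectations are taken over its rows.
background = shap.sample(X, 20)
explainer = shap.KernelExplainer(model.predict, background)

# Problem 1: the weighted least-squares formulation needs only sampled
# subsets; nsamples caps the budget instead of enumerating 2^n coalitions.
phi = explainer.shap_values(X[:1], nsamples=200)
print(phi)  # one contribution per feature for the first instance
```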
  16. Main references
      1. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) (2018)
         https://ieeexplore.ieee.org/abstract/document/8466590
         • A survey that comprehensively covers the main concepts, motivations, and research trends of XAI.
      2. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey (2020)
         https://arxiv.org/abs/2006.11371
         • A survey of XAI methods specialized for deep learning.
         • Proposes a way to categorize the results up to 2020, which makes the overall picture easy to grasp.
      3. A Unified Approach to Interpreting Model Predictions (2017)
         https://dl.acm.org/doi/10.5555/3295222.3295230
         • The paper proposing SHAP (2017).
      4. 機械学習モデルの判断根拠の説明 (Ver.2) (2020) (Explaining the decision grounds of machine learning models)
         https://www.slideshare.net/SatoshiHara3/ver2-225753735
         • Lecture slides by Prof. Satoshi Hara (Osaka University), summarizing representative XAI research, the reliability of explanations, and more.
         • A related lecture video is on YouTube※.
      ※ https://www.youtube.com/watch?v=Fgza_C6KphU
  17. Local interpretation
      • A local interpretation explains the grounds of the model's prediction or decision for a specific input.
      • Representative methods include LIME※5 and SHAP※6.
      • These methods present the features that the prediction or decision was based on.
      • For example, suppose an image classification model is given an image and classifies it as "meerkat." LIME and SHAP then present the features behind that decision (pixels, in the image case) together with the degree to which each contributed.
      ※5 M. T. Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 2016.
      ※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. https://github.com/slundberg/shap
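For concreteness, a minimal LIME sketch on a standard tabular dataset (the image case works the same way over superpixels); the dataset and model choices here are illustrative, not from the talk.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)
# Explain one prediction: LIME fits a simple local model around this input
# and reports the features that pushed the prediction, with their weights.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```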
  18. Local interpretation and root cause diagnosis of anomalies
      • Local interpretation methods have been studied extensively in image recognition in particular, but their usefulness has also been shown for diagnosing the causes of anomalies※7-9.
      • For example, studies report that interpreting the results of anomaly detection by PCA※7, autoencoders※8, Gaussian mixture models※9, and variational autoencoders※9 with SHAP and related methods identifies causes more accurately, or yields interpretations closer to human intuition, than other approaches.
      ※7 N. Takeishi, Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection, IEEE International Conference on Data Mining Workshops (ICDM Workshops), 2019.
      ※8 L. Antwarg et al., Explaining Anomalies Detected by Autoencoders Using SHAP, arXiv:1903.02407, 2019.
      ※9 N. Takeishi and Y. Kawahara, On Anomaly Interpretation via Shapley Values, arXiv:2004.04464, 2020.
  19. Step 1: metric filtering
      • Because the proposed method does not require specifying the metrics to analyze in advance, the target metrics can be selected after an anomaly occurs.
      • Filtering out metrics that are unlikely to be related to the anomaly, such as metrics that barely change when it occurs, is effective both for improving diagnosis accuracy and for shortening the execution time of the subsequent steps.
      • As one way to filter out metrics weakly related to the anomaly, we consider using TSifter※10, a result of our earlier work (a rough sketch of the idea follows below). TSifter's overview: stationarity testing followed by hierarchical clustering.
      ※10 Y. Tsubouchi, H. Tsuruta, M. Furukawa, TSifter: a dimensionality reduction method for time-series data aimed at fast diagnosis of performance anomalies in microservices, 13th Internet and Operation Technology Symposium (IOTS 2020).
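The sketch below only illustrates the two TSifter stages named in the overview, stationarity testing followed by hierarchical clustering, using off-the-shelf ADF and SciPy routines; it is our rough illustration of the idea, not the authors' implementation, and all names and thresholds are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from statsmodels.tsa.stattools import adfuller

def filter_metrics(metrics, p_max=0.05, dist=0.1):
    """metrics: {name: 1-D np.ndarray}. Returns representative metric names."""
    # Stage 1 (stationarity test): the ADF null hypothesis is a unit root,
    # so a large p-value means "not shown stationary"; keep those metrics,
    # since metrics that stay flat during the anomaly are unlikely causes.
    kept = {k: s for k, s in metrics.items()
            if np.ptp(s) > 0 and adfuller(s)[1] > p_max}
    if len(kept) < 2:
        return list(kept)
    # Stage 2 (hierarchical clustering): group near-duplicate metrics by
    # correlation distance and keep one representative per cluster.
    names = list(kept)
    Z = np.array([(kept[k] - kept[k].mean()) / kept[k].std() for k in names])
    D = 1 - np.corrcoef(Z)
    labels = fcluster(linkage(D[np.triu_indices(len(names), 1)], "average"),
                      t=dist, criterion="distance")
    first = {}
    for name, c in zip(names, labels):
        first.setdefault(c, name)
    return list(first.values())
```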
  20. Step 2: model training
      • Because the proposed method trains a model on observed data after an anomaly occurs, it must use a model that can be trained quickly.
      • As the anomaly detection model, we consider principal component analysis (PCA) (to be extended to nonlinear models and the like in the future).
      • In PCA-based anomaly detection, dimensionality reduction of the observed data yields the normal subspace, and the distance between a test data point and the normal subspace is used as the anomaly score (a sketch follows below).
      • The proposed method uses the anomaly score computed by PCA not for anomaly detection itself but for diagnosing the cause after detection.
      (Figure: normal subspace, test data (vector), feature space.)
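A minimal sketch of this scoring scheme with scikit-learn's PCA (the function name and the standardization step are our assumptions): fit on the observation window, then score each point by its reconstruction error, i.e. its distance to the normal subspace.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_scorer(X_train, n_components=2):
    """Return f(X) -> anomaly scores = distances to the normal subspace."""
    mean, std = X_train.mean(axis=0), X_train.std(axis=0) + 1e-9
    pca = PCA(n_components=n_components).fit((X_train - mean) / std)

    def anomaly_score(X):
        Z = (np.atleast_2d(X) - mean) / std
        recon = pca.inverse_transform(pca.transform(Z))  # projection onto subspace
        return np.linalg.norm(Z - recon, axis=1)

    return anomaly_score
```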
  21. Step 3: computing contributions to the anomaly
      • To diagnose the cause of an anomaly, the proposed method computes each metric's contribution to the anomaly.
      • For computing the contributions, we consider SHAP, which is based on the Shapley value from cooperative game theory.
      • Among the SHAP algorithms, we adopt Kernel SHAP※6 (see the sketch after this slide):
        • a model-agnostic interpretation method,
        • an approach that combines Linear LIME with the Shapley value.
      Additive feature attribution methods※6: f is the complex model to interpret, g is the simple model used for explanation, and φ are the per-feature contributions to the predicted value.
      ※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
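Wiring Steps 2 and 3 together, the anomaly score function plays the role of the complex model f that Kernel SHAP explains, so each metric receives a contribution to the anomaly score. This sketch reuses the hypothetical fit_pca_scorer from the Step 2 sketch; X_train and X_test are assumed arrays of shape (timesteps, metrics).

```python
import shap

score = fit_pca_scorer(X_train)              # f: metrics vector -> anomaly score
background = shap.sample(X_train, 50)        # reference values for absent metrics
explainer = shap.KernelExplainer(score, background)
shap_values = explainer.shap_values(X_test)  # shape: (timesteps, metrics)
```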
  22. Evaluation: experimental environment
      • Sock Shop※, a microservices benchmark application, was deployed on Google Kubernetes Engine (GKE).
      • Metrics such as CPU utilization were collected every 5 seconds with cAdvisor from the 11 containers that make up Sock Shop.
      • Locust was used to generate synthetic load against the Sock Shop application.
      • To emulate a system anomaly, CPU load was injected into the user-db container with stress-ng.
      (Figure: microservices cluster running Sock Shop (Front-end, Catalogue, Orders, Carts, User, Payment, Shipping); a control server with Locust generating external load and Prometheus collecting and storing metrics; CPU load injection via stress-ng; an analysis server (8 cores, 32 GB) with a metric-fetching module and an analysis module.)
      ※ https://microservices-demo.github.io/
  23. 1. Root cause diagnosis: contributions to the anomaly
      (Figures: contributions to the anomaly at one time step (SHAP force plot); contributions over the whole test window of 120 time steps (SHAP summary plot). Labels follow the pattern c-(container name)_(metric name).)
      • In the left figure, the metrics are ordered from the top by the mean of the absolute SHAP values, which amounts to listing the candidate causal metrics from the top (see the sketch after this slide).
      • Under these experimental conditions, the SHAP interpretation agrees with the actual root cause of the anomaly.
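The ranking described in the first bullet can be reproduced from the SHAP values of the previous sketch (metric_names is an assumed list of column labels); shap.summary_plot draws the same ordering graphically.

```python
import numpy as np

mean_abs = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per metric
for idx in np.argsort(mean_abs)[::-1][:5]:    # top-5 cause candidates
    print(metric_names[idx], round(float(mean_abs[idx]), 4))
# shap.summary_plot(shap_values, X_test, feature_names=metric_names)
```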
  24. 1. Root cause diagnosis: comparison with a baseline
      • Gaussian Based Thresholding (GBT) was used as the baseline diagnosis method.
      • Diagnosis with GBT ranks a metric's contribution to the anomaly by how large the difference is between its mean over the training data and its mean over the test data (a sketch follows below).
      (Figure: contributions to the anomaly according to GBT.)
      • The user-db CPU metric, the actual root cause in this experiment, ranked only 7th in contribution to the anomaly.
      • We attribute this to the fact that when a metric with large variance even in the normal state coincidentally shows a large difference between the training and test means, GBT cannot distinguish this from variation caused by the anomaly.
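For comparison, the GBT baseline as described above reduces to ranking metrics by the gap between training-window and test-window means; a sketch under the same assumed arrays and names:

```python
import numpy as np

# Larger |mean(test) - mean(train)| = higher assumed contribution to the anomaly.
gap = np.abs(X_test.mean(axis=0) - X_train.mean(axis=0))
for idx in np.argsort(gap)[::-1][:5]:
    print(metric_names[idx], round(float(gap[idx]), 4))
```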
  25. Summary and future work
      Summary
      • This talk examined a method that diagnoses the root cause of system anomalies using SHAP, a local interpretation method for machine learning models, without requiring model training or target metric selection in advance.
      • For the anomaly pattern in this experiment, SHAP as adopted in the proposed method gave better diagnosis results than the baseline method.
      • Under these experimental conditions, the proposed method ran in 64 seconds, with the SHAP computation dominating.
      Future work
      • To demonstrate the usefulness of the proposed method, quantitatively evaluate diagnosis accuracy over a broad range of anomaly patterns.
      • Verify that the proposed method remains practical as the target system grows; to that end, evaluate diagnosis time as the number of metrics increases.
      • The PCA adopted here is a simple model that is linear and ignores temporal structure, so we will adopt nonlinear and time-series-aware models and verify their effectiveness.