• In particular, the use of machine learning is expanding in domains where high-stakes decision-making is required, such as medicine and healthcare, autonomous driving and robot control, and loan screening.
  Example: image-based diagnosis of diabetic retinopathy using deep learning [Beede+, CHI2020]
[Beede+, CHI2020] A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy
Explain to control: explanation for controlling model behavior [Ribeiro+, KDD2016]
• It is difficult to detect that a model has learned something unintended just by looking at the data and the predictions.
• In a model classifying images of wolves and huskies, the wolf images used for training happened to have snow in the background, so the model relied on the snow rather than the animal (see the sketch below).
[Ribeiro+, KDD2016] "Why Should I Trust You?": Explaining the Predictions of Any Classifier
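A minimal sketch of using LIME (the method from [Ribeiro+, KDD2016]) to check what an image classifier actually relies on, as in the wolf-vs-husky example above. The names `image` and `classifier_fn` are assumptions: `classifier_fn` is a black-box function mapping a batch of RGB images of shape (N, H, W, 3) to class probabilities.

```python
# Inspect which superpixels drive the classifier's top prediction.
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,             # single RGB image to explain, shape (H, W, 3)
    classifier_fn,     # black-box prediction function (assumed name)
    top_labels=2,      # explain the two most probable classes
    num_samples=1000,  # number of perturbed samples LIME generates
)

# Superpixels with the largest positive contribution to the top label;
# if they cover the snowy background instead of the animal, the model
# has learned an unintended shortcut.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
highlighted = mark_boundaries(img, mask)
```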
• Approximate explanations may be unfaithful to the behavior of the original model, etc.
  Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead (2019)
• It is important to recognize that no universal method exists that gives accurate interpretations and explanations in every case, and that validation on one's own data is necessary when deploying such methods.
Usage: Intrinsic or Post-hoc [Das+, arXiv2020]
• Post-hoc: interpretability is attached to the model after it has been trained
[Das+, arXiv2020] Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Representative methods [Adadi+, IEEE Access2018]
• Classified by Scope and Usage, and by whether each method is model-specific or model-agnostic
[Adadi+, IEEE Access2018] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
• Kernel SHAP resolves the issues in computing the Shapley values described earlier as follows [Lundberg+, NIPS2017]:
  Issue 1: How to cope with the computational cost blowing up as the number of features n grows?
  Issue 2: How to obtain the prediction v(S) when some features are absent?
  → Formulation as a weighted least-squares problem + Monte Carlo approximation
• Kernel SHAP implementation※: subsets are sampled from the two ends where the kernel weight is large (flipping the elements of the binary vector representing a subset one at a time, starting from all 0s or all 1s); a sketch of the weights follows below.
[Lundberg+, NIPS2017] A Unified Approach to Interpreting Model Predictions
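A minimal sketch (not the shap library's internal code) of the Shapley kernel weight used in Kernel SHAP's weighted least-squares formulation. Here M is the total number of features and s = |z′| is the number of features "present" in a subset z′; the function name is mine.

```python
from math import comb

def shapley_kernel_weight(M: int, s: int) -> float:
    """pi(z') = (M - 1) / (C(M, s) * s * (M - s)) for 0 < s < M."""
    if s == 0 or s == M:
        # Infinite weight: these two points are enforced exactly as
        # constraints in the regression rather than sampled.
        return float("inf")
    return (M - 1) / (comb(M, s) * s * (M - s))

M = 10
for s in range(1, M):
    print(s, round(shapley_kernel_weight(M, s), 6))
# The weights are symmetric in s and largest at s = 1 and s = M - 1,
# which is why the implementation samples subsets from both ends first.
```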
※ https://github.com/slundberg/shap
• Kernel SHAP implementation※: absent features are replaced with reference values drawn from a background dataset; when a background dataset D containing multiple data points is specified, the expectation over D is taken (see the sketch below).
  x: the data point to be explained
  x′: a reference point (a single data point in D)
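A minimal sketch of how the background dataset D enters the shap library's Kernel SHAP implementation. `model`, `X_train`, and `x` are assumed names; `shap.KernelExplainer`, `shap.sample`, and `shap_values` are real shap APIs.

```python
import shap

# Features "absent" from a subset are replaced by values taken from the
# background dataset; with multiple background rows, the prediction is
# averaged over them (the expectation over D).
background = shap.sample(X_train, 100)               # background dataset D
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(x)               # x: data point to explain
```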
※ https://www.youtube.com/watch?v=Fgza_C6KphU
1. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) (2018)
2. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey (2020)
4. Explaining the Rationale Behind Machine Learning Model Decisions (Ver.2) (2020)
• A survey compiling XAI methods specialized for deep learning
• The metrics to be analyzed, which serve as the model's inputs, must be specified in advance.
※1 C. Zhang et al., A Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in Multivariate Time Series Data, Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
• If the metric that is the root cause of an anomaly is not among the metrics selected by the system administrator, the causal metric may be excluded from the diagnosis results.
※2 J. Lin et al., Microscope: Pinpoint Performance Issues with Causal Graphs in Micro-service Environments, International Conference on Service-Oriented Computing, 2018.
※3 M. Ma et al., AutoMAP: Diagnose Your Microservice-based Web Applications Automatically, Proceedings of The Web Conference 2020 (WWW '20), 2020.
※5 M. T. Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 2016.
※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. https://github.com/slundberg/shap
• For example, studies have reported that interpreting the results of anomaly detection by PCA※7, autoencoders※8, Gaussian mixture models※9, and variational autoencoders※9 using SHAP and related methods identifies the cause more accurately, or yields interpretations closer to human intuition, compared with other approaches (see the sketch after the references below).
※7 N. Takeishi, Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection, IEEE International Conference on Data Mining Workshops (ICDM Workshops), 2019.
※8 L. Antwarg et al., Explaining Anomalies Detected by Autoencoders Using SHAP, arXiv:1903.02407, 2019.
※9 N. Takeishi and Y. Kawahara, On Anomaly Interpretation via Shapley Values, arXiv:2004.04464, 2020.
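A minimal sketch, in the spirit of ※8, of applying Kernel SHAP to an autoencoder-based anomaly detector. `autoencoder`, `X_train`, and `x_anomaly` are assumed names; the anomaly score is the reconstruction error, and that score is explained instead of a class probability.

```python
import numpy as np
import shap

def anomaly_score(X: np.ndarray) -> np.ndarray:
    """Mean squared reconstruction error per sample (the anomaly score)."""
    reconstruction = autoencoder.predict(X)
    return np.mean((X - reconstruction) ** 2, axis=1)

explainer = shap.KernelExplainer(anomaly_score, shap.sample(X_train, 100))
# Features with large positive SHAP values are the ones driving the
# anomaly score up, i.e., the candidate causes of the anomaly.
shap_values = explainer.shap_values(x_anomaly)
```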
• Kernel SHAP: an approach combining Linear LIME and Shapley values
  f: the complex model to be interpreted
  g: the simple model used for the explanation
  ϕ: the contribution of each feature to the prediction
• Additive feature attribution methods※6 (a rendering of the defining formula follows below)
※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
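For reference, a minimal LaTeX rendering of the additive feature attribution form defined in ※6, where z′ ∈ {0, 1}^M is the binary vector indicating which of the M simplified features are present (notation follows the paper):

```latex
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i,
\qquad z' \in \{0, 1\}^M
```

Kernel SHAP chooses the ϕᵢ so that they coincide with the Shapley values, by fitting g to f under the Shapley kernel weights described earlier.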