※1 C. Zhang et al., A Deep Neural Network for Unsupervised Anomaly Detection and Diagnosis in Multivariate Time Series Data, Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
※2 J. Lin et al., Microscope: Pinpoint Performance Issues with Causal Graphs in Micro-service Environments, International Conference on Service-Oriented Computing, 2018.
※3 M. Ma et al., AutoMAP: Diagnose Your Microservice-based Web Applications Automatically, Proceedings of The Web Conference 2020 (WWW '20), 2020.
• Ministry of Internal Affairs and Communications (2018): the principle of transparency and the principle of accountability (explanation)
• DARPA (U.S. Defense Advanced Research Projects Agency): the Explainable Artificial Intelligence (XAI) project
※4 A. Adadi and M. Berrada, Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), IEEE Access, 2018.
[Figure: trend in the number of publications on the interpretability and explainability of machine learning※4]
Suppose that, given an image, the classifier judges it to be a "meerkat". LIME※5 and SHAP※6 present the features that ground this judgment (for an image, the corresponding pixels) together with the degree of each feature's contribution to the judgment (see the sketch after the references below).
※5 M. T. Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 2016.
※6 S. Lundberg and S. I. Lee, A Unified Approach to Interpreting Model Predictions, Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. https://github.com/slundberg/shap
※7 N. Takeishi, Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection, IEEE International Conference on Data Mining Workshops (ICDM Workshops), 2019.
※8 L. Antwarg et al., Explaining Anomalies Detected by Autoencoders Using SHAP, arXiv:1903.02407, 2019.
※9 N. Takeishi and Y. Kawahara, On Anomaly Interpretation via Shapley Values, arXiv:2004.04464, 2020.
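As forward-referenced above, here is a minimal sketch of this kind of per-feature attribution using the lime package that accompanies ※5. The iris data and random-forest model are illustrative stand-ins, not from the original slides; for an image, LIME attributes over superpixels rather than tabular features.

```python
# Minimal LIME sketch (※5): explain one prediction of a black-box model
# by fitting a local linear surrogate. Data and model are stand-ins.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)
# Explain a single prediction: which features pushed the model toward
# this class, and by how much (signed local weights).
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```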
We adopt SHAP.
• A model-agnostic interpretation method
• Kernel SHAP: an approach that combines Linear LIME with Shapley values
Additive feature attribution methods※6:
• f: the complex model we want to interpret
• g: a simple model used for the explanation
• φ: the contribution of each feature to the prediction
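Written out, the additive feature attribution model of ※6 is the following (a reconstruction of the definition in that paper, with f, g, and φ as in the bullets above; z' is a binary vector indicating which of the M features are present, and h_x maps it back to the original input):

```latex
% Additive feature attribution (※6): the explanation model g is linear
% in the simplified binary inputs z' \in \{0,1\}^M.
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad g(z') \approx f(h_x(z'))
```

SHAP takes each φ_i to be a Shapley value, which Kernel SHAP estimates via a weighted linear regression. As a usage example, here is a minimal sketch with the shap library linked above; the iris data and random-forest model are illustrative stand-ins, not from the original slides.

```python
# Minimal Kernel SHAP sketch with the shap library (※6).
# The dataset and model are illustrative stand-ins for the complex model f.
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
f = RandomForestClassifier(random_state=0).fit(X, y)  # complex model f

# Background data; its expected prediction plays the role of phi_0.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(f.predict_proba, background)

# Contributions phi_i of each feature to one prediction, per class.
phi = explainer.shap_values(X[0])
print(phi)
```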