Slide 67
References 1/2
• [恵木 2020] 恵木正史. “XAI(eXplainable AI)技術の研究動向.” 日本セキュリティ・マネジメント学会誌, vol. 34, no. 1, 2020, https://www.jstage.jst.go.jp/article/jssmjournal/34/1/34_20/_pdf/-char/ja.
• [Ribeiro+ 2016] Ribeiro, Marco Tulio, et al. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv [cs.LG], Feb. 2016, http://arxiv.org/abs/1602.04938. arXiv.
• [Plumb+ 2019] Plumb, Gregory, et al. “Regularizing Black-Box Models for Improved Interpretability.” arXiv [cs.LG], 18 Feb. 2019, http://arxiv.org/abs/1902.06787. arXiv.
• [Sundararajan+ 2017] Sundararajan, Mukund, et al. “Axiomatic Attribution for Deep Networks.” arXiv [cs.LG], 4 Mar. 2017, http://arxiv.org/abs/1703.01365. arXiv.
• [Zhou+ 2016] Zhou, Bolei, et al. “Learning Deep Features for Discriminative Localization.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2016, https://doi.org/10.1109/cvpr.2016.319.
• [Selvaraju+ 2020] Selvaraju, Ramprasaath R., et al. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” International Journal of Computer Vision, vol. 128, no. 2, Feb. 2020, pp. 336–59.
• [Petsiuk+ 2018] Petsiuk, Vitali, et al. “RISE: Randomized Input Sampling for Explanation of Black-Box Models.” arXiv [cs.CV], 19 June 2018, http://arxiv.org/abs/1806.07421. arXiv.
• [Abnar+ 2020] Abnar, Samira, and Willem Zuidema. “Quantifying Attention Flow in Transformers.” arXiv [cs.LG], May 2020, https://arxiv.org/abs/2005.00928. arXiv.
• [Doshi-Velez+ 2017] Doshi-Velez, Finale, and Been Kim. “Towards A Rigorous Science of Interpretable Machine Learning.” arXiv [stat.ML], 28 Feb. 2017, http://arxiv.org/abs/1702.08608. arXiv.