References 1/2
• [恵木 2020] 恵木正史. “Research Trends in XAI (eXplainable AI) Technology.” Journal of the Japan Society of Security Management, vol. 34, no. 1, 2020, https://www.jstage.jst.go.jp/article/jssmjournal/34/1/34_20/_pdf/-char/ja.
• [Ribeiro+ 2016] Ribeiro, Marco Tulio, et al. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv:1602.04938 [cs.LG], Feb. 2016, http://arxiv.org/abs/1602.04938.
• [Sundararajan+ 2017] Sundararajan, Mukund, et al. “Axiomatic Attribution for Deep Networks.” arXiv:1703.01365 [cs.LG], Mar. 2017, http://arxiv.org/abs/1703.01365.
• [Doshi-Velez+ 2017] Doshi-Velez, Finale, and Been Kim. “Towards A Rigorous Science of Interpretable Machine Learning.” arXiv:1702.08608 [stat.ML], Feb. 2017, http://arxiv.org/abs/1702.08608.
• [Yoshikawa+ 2022] Yoshikawa, Yuya, and Tomoharu Iwata. “Neural Generators of Sparse Local Linear Models for Achieving Both Accuracy and Interpretability.” Information Fusion, vol. 81, May 2022, pp. 116–28.
• [Yakura+ 2019] Yakura, Hiromu, et al. “Neural Malware Analysis with Attention Mechanism.” Computers &
Security, vol. 87, Nov. 2019, p. 101592.
• [Yoshikawa+ 2023] Yoshikawa, Yuya, and Tomoharu Iwata. “Explanation-Based Training with Differentiable Insertion/Deletion Metric-Aware Regularizers.” arXiv:2310.12553 [cs.LG], Oct. 2023, https://arxiv.org/abs/2310.12553.