Slide 28
References
• Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural
Information Processing Systems. 2017.
• Lundberg, Scott M., Gabriel G. Erion, and Su-In Lee. "Consistent individualized feature attribution for tree
ensembles." arXiv preprint arXiv:1802.03888 (2018).
• Lundberg, Scott M., et al. "Explainable AI for Trees: From Local Explanations to Global Understanding." arXiv
preprint arXiv:1905.04610 (2019).
• Sundararajan, Mukund, and Amir Najmi. "The many Shapley values for model explanation." arXiv preprint
arXiv:1908.08474 (2019).
• Janzing, Dominik, Lenon Minorics, and Patrick Blöbaum. "Feature relevance quantification in explainable AI: A
causality problem." arXiv preprint arXiv:1910.13413 (2019).
• GitHub - slundberg/shap: A game theoretic approach to explain the output of any machine learning model.
https://github.com/slundberg/shap.
• Molnar, Christoph. "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable." (2019).
https://christophm.github.io/interpretable-ml-book/.
• Biecek, Przemyslaw, and Tomasz Burzykowski. "Predictive Models: Explore, Explain, and Debug." (2019).
https://pbiecek.github.io/PM_VEE/.
• "Interpreting machine learning models with SHAP (SHapley Additive exPlanations)" (in Japanese).
https://dropout009.hatenablog.com/entry/2019/11/20/091450.
• 岡田 卓. "ゲーム理論 新版" (Game Theory, New Edition). 有斐閣 (Yuhikaku), 2011.