References
LUNDBERG, S. M.; ERION, G. G.; CHEN, H.; DEGRAVE, A.; PRUTKIN, J. M.; NAIR, B.; KATZ, R.; HIMMELFARB, J.; BANSAL, N.; LEE, S.-I. 2019. Explainable AI for trees: from local explanations to global understanding. Available at: http://arxiv.org/abs/1905.04610
LUNDBERG, S. M.; LEE, S.-I. 2017. A unified approach to interpreting model predictions. In: GUYON, I.; LUXBURG, U. V.; BENGIO, S.; WALLACH, H.; FERGUS, R.; VISHWANATHAN, S.; GARNETT, R. (Ed.). Advances in Neural Information Processing Systems 30. Available at: http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
DU, M.; LIU, N.; HU, X. 2019. Techniques for interpretable machine learning. Available at: https://dl.acm.org/doi/10.1145/3359786
OBERMEYER, Z.; POWERS, B.; VOGELI, C.; MULLAINATHAN, S. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Available at: https://science.sciencemag.org/content/366/6464/447
O'NEIL, C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. USA: Crown Publishing Group. ISBN 0553418815.
PAPADOPOULOS, P.; WALKINSHAW, N. 2015. Black-box test generation from inferred models. Available at: https://ieeexplore.ieee.org/document/7168327
VIEIRA, C. P. R.; DIGIAMPIETRI, L. A. 2020. A study about Explainable Artificial Intelligence: using decision tree to explain SVM. Available at: http://seer.upf.br/index.php/rbca/article/view/10247