Complex algorithms (GBDTs, deep neural nets, etc.) can perform far better than linear models because they capture non-linear behaviour and interaction effects. However, interpreting these models is typically much harder, and in some cases (e.g. deep neural nets) practically infeasible. For ML applications where explainability at the local (individual prediction) level is key, this has often meant being limited to simpler, more explainable models such as Logistic Regression.
Recently we have seen advances in fitting simpler, locally interpretable models on top of the outputs of complex models. SHAP (SHapley Additive exPlanations) is a unified approach to explaining the output of any machine learning model.
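As a minimal sketch of what this looks like in practice (the dataset and model below are illustrative assumptions, not the application discussed in the talk), the shap package can attribute a tree ensemble's predictions to individual features:

```python
# Minimal, illustrative SHAP sketch; assumes shap and scikit-learn are installed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# shap_values[i, j] is the contribution of feature j to the prediction for row i;
# together with the explainer's expected value they sum to the model's raw output.
print(shap_values.shape)          # (100, n_features)
print(explainer.expected_value)   # base value the contributions are added to
```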
In this talk, we will share our experience of using SHAP in a real-world ML application, the changes we made to both our training and prediction phases, and the practical considerations to keep in mind when using SHAP.