Explaining Blackbox Models


Without operationalization and explainability, data science projects inevitably fail. In this presentation we discuss how to tackle both of these problems.


Lawrence Spracklen

October 08, 2019

Transcript

  1. Explaining Blackbox Models Lawrence Spracklen SupportLogic

  2. SupportLogic • Complete applied ML solution • Extracts actionable signals

    from across enterprise systems of record • Support • Engineering • Product • Sales • Email • Slack • Discussion Forums • Deep NLP to analyze unstructured data • Ensemble models consume raw signals • Escalation prediction • Backlog prioritization • Actionable signals routed to the appropriate owners
  3. Has ML Delivered? • ML & AI will change everything

    • Everyone hypes ML • Everyone has an ML project • But it certainly hasn’t yet… • Why hasn’t ML delivered on its promise? • Operationalization • Explainability • [and more]
  4. What constitutes ML success? “If an ML model is trained,

    but no one is around to use it…..” • In the commercial world, it’s crucial to • Affect behavior • Improve decisions • Drive outcomes • Deliver ROI • Many great DS projects result in nothing more than a PowerPoint • Failure to operationalize
  5. Many hurdles to success Training a model is only the

    first of many hurdles • Applied ML • Application of existing algorithms • Needs a partnership between engineers and data scientists • Operationalized solutions need ongoing maintenance • Models can need retraining • A simple understandable model in production often delivers benefits
  6. Last Mile Problems • How do I operationalize my model?

    • Recent explosion of solutions for basic ops • Standards languishing (PMML, PFA) • How do I integrate into users’ existing tooling? • How does my prediction API get queried? • Significant fragile glue-logic? • How do I build workflows around my predictions?
  7. Actionability • What should the user do with the prediction?

    • How should they respond? • What is my engine warning light telling me? • Sometimes we intuitively know • It is predicted to rain tomorrow • Sometimes the end-user doesn’t necessarily need to care • Case assignment? • Sometimes understanding is fundamental to driving correct actions • Escalation prediction • Intelligent backlog prioritization
  8. Why should I care? • Explainability can be an audit

    requirement • GDPR requirements • Debugging • Why did the model make this prediction? • Validation • Is my model doing what I think it is? • Model simplification • Many of these features don’t improve fidelity • Actions and workflows • Turn-key workflows
  9. Increasing ML Automation • Rapidly increasing automation opportunities • Significant

    tooling to accelerate: • Feature Engineering • Model Selection • Hyper-parameter tuning • Model simplification • Many effective OSS solutions • For many problems automation solutions are adequate • How does AutoML impact explainability?
  10. What is a Blackbox model? https://en.wikipedia.org/wiki/Black_box • A model can

    be considered a blackbox when • You don’t have access to the inner workings • The inner workings are too complex to easily understand
  11. Explainability Options • Variety of different techniques • Choices dictated

    by choice of model • Complex models can deliver higher fidelity predictions • But it often comes at the cost of explainability • Variety of model agnostic techniques can provide insights into arbitrarily complex models
  12. Explainability goals Global explainability • How does the model work

    globally? • General information about the features that are the most important Individual observation explainability • Inspect an individual prediction of a model • Determine why the model made the decision it made
  13. Example dataset • Using the Titanic dataset • Disaster survival data

    • Basic cleaning • Basic feature engineering • Analyze • Logistic regression • Decision tree • Random Forest • Gradient Boosted Tree • Keras DNN • TPOT AutoML pipeline • FeatureTools + Random Forest
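
To make the workflow concrete, here is a minimal sketch of this kind of setup using seaborn's public copy of the Titanic data; the column choices, cleaning and engineered features are illustrative assumptions, not the exact pipeline from the talk.

```python
# Minimal Titanic setup: light cleaning, simple feature engineering, and two of the
# model families compared in the deck (logistic regression and a gradient boosted tree).
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

raw = sns.load_dataset("titanic")
df = raw[["survived", "pclass", "sex", "age", "fare", "sibsp", "parch"]].copy()
df["sex"] = (df["sex"] == "male").astype(int)          # basic cleaning: encode sex as 0/1
df["age"] = df["age"].fillna(df["age"].median())       # basic cleaning: impute missing ages
df["family_size"] = df["sibsp"] + df["parch"] + 1      # basic feature engineering
df["has_cabin"] = raw["deck"].notna().astype(int)      # cabin flag, as on the example slides

X, y = df.drop(columns="survived"), df["survived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(model.score(X_test, y_test), 3))
```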
  14. Interpretable models • A variety of ‘simple’ models are intrinsically

    interpretable • Linear regression • Logistic regression • Decision tree • GLM • GAM • Logistic Regression: y = 1 / (1 + exp(-(β0 + β1x1 + … + βpxp))); a change in a feature by one unit changes the odds ratio (multiplicative) by a factor of exp(βj) • Linear Regression: y = β0 + β1x1 + … + βpxp; predictions are a weighted sum of the features, making predictions understandable
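
As a hedged illustration of the odds-ratio reading above, the sketch below fits a logistic regression on a few Titanic features and prints exp(βj) for each coefficient; the feature selection is an assumption for brevity.

```python
# Logistic regression interpretability: exp(coefficient) is the multiplicative change
# in the odds of survival for a one-unit increase in that feature.
import numpy as np
import seaborn as sns
from sklearn.linear_model import LogisticRegression

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
for name, beta in zip(X.columns, clf.coef_[0]):
    print(f"{name:>6}: beta = {beta:+.3f}, odds ratio exp(beta) = {np.exp(beta):.3f}")
```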
  15. Interpretable models – example 1 [figure: example Titanic predictions from an interpretable model, with survival probabilities of 0.08 and 0.16]
  16. Interpretable models – example 2 [figure: Has_Cabin shown with a multiplicative factor of 2.337×]

  17. Feature Importance • Which features are the most important in

    explaining the target variable? • Variety of different techniques • Model specific methods • Feature permutation • Drop column • Different methods deliver differing results • Doesn’t provide insights for a specific observation • [diagram: permutation importance shuffles the values of one feature column and re-scores the model; drop-column importance retrains without the feature and compares its score against the baseline score]
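
A minimal sketch of the two model-agnostic approaches named on this slide, permutation importance and drop-column importance, on the Titanic features; the random forest and feature choices are assumptions for illustration.

```python
# Permutation importance (shuffle one column, measure the score drop) and
# drop-column importance (retrain without the column, compare to the baseline score).
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data.
perm = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(X.columns, perm.importances_mean):
    print(f"permutation  {name:>6}: {imp:+.3f}")

# Drop-column importance: baseline score minus the score of a model retrained without the column.
baseline = rf.score(X_te, y_te)
for col in X.columns:
    score = RandomForestClassifier(random_state=0).fit(
        X_tr.drop(columns=col), y_tr).score(X_te.drop(columns=col), y_te)
    print(f"drop-column  {col:>6}: {baseline - score:+.3f}")
```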
  18. Feature Importance - example gb.feature_importances_ permutation_importance(gb) GradientBoostingClassifier

  19. Feature Importance - Summary The Good • High-level overview of

    model behavior • Provides a good model sanity check • Easy to compute The Bad • Limited global information • Beware correlated features • Concerns with model specific methods
  20. Partial Dependence Plots • Illustrates dependency of a target variable

    for a particular feature • Provides an understanding of relationship between target and feature of interest • Linear, monotonic or more complex • [diagram: the feature of interest is set to each of its observed values across every row, the model is re-scored, and the predictions are averaged per value to form the plot]
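
A sketch of how such plots can be produced with scikit-learn's built-in support; PartialDependenceDisplay.from_estimator is the API in recent releases (older releases around the time of the talk used plot_partial_dependence instead), and the Titanic model below is an assumption for illustration.

```python
# Partial dependence of predicted survival on 'age' and 'fare' for a gradient boosted tree.
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
gb = GradientBoostingClassifier().fit(X, y)

# One panel per feature, averaging the model's response over the rest of the data.
PartialDependenceDisplay.from_estimator(gb, X, features=["age", "fare"])
plt.show()
```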
  21. Partial Dependence Plots - Examples

  22. Partial Dependence Plots - Summary The Good • Additional insights

    compared with feature importance The Bad • Beware correlated features • Average marginal plots can hide details • E.g. features displaying both negative and positive associations with target
  23. ICE Plots Individual Conditional Expectation Plots • Disaggregate averages by

    displaying each individual observation • One line per instance that shows how the instance’s prediction changes when a feature changes • Interesting insights won’t be lost because of the averaging inherent in the PDP
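
scikit-learn's PDP tooling can also draw ICE curves by switching the kind argument; a minimal sketch, reusing the same assumed Titanic setup as above:

```python
# ICE curves: one line per passenger showing how that passenger's prediction changes
# as 'age' varies; kind="both" also overlays the averaged PDP curve for comparison.
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
gb = GradientBoostingClassifier().fit(X, y)

PartialDependenceDisplay.from_estimator(gb, X, features=["age"], kind="both")
plt.show()
```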
  24. ICE Plots - Examples

  25. ICE Plots - Summary The Good • Additional insights compared

    with PDPs • Highlight heterogeneous relationships The Bad • Beware correlated features
  26. Surrogate Models • Train an interpretable model to approximate the

    predictions of the black box model • Surrogate model needs to approximate predictions of black box as accurately as possible • Yet surrogate must be interpretable • Train a single global surrogate to probe high-level behavior • Train local surrogates to understand individual predictions • Important to understand the fidelity of the approximation • R2 Measure
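
A minimal global-surrogate sketch: fit a shallow decision tree to a random forest's predicted probabilities and report the R² fidelity measure mentioned above. The choice of black box, surrogate depth, and features are illustrative assumptions.

```python
# Global surrogate: approximate a black-box model's predictions with a shallow,
# interpretable decision tree and measure how faithful the approximation is (R^2).
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]

blackbox = RandomForestClassifier(random_state=0).fit(X, y)
p = blackbox.predict_proba(X)[:, 1]                    # black-box outputs to imitate

surrogate = DecisionTreeRegressor(max_depth=3).fit(X, p)
print("surrogate fidelity R^2:", round(r2_score(p, surrogate.predict(X)), 3))
print(export_text(surrogate, feature_names=list(X.columns)))
```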
  27. LIME Local Interpretable Model-Agnostic Explanations • 2 key observations •

    Simple linear models are easily explainable • Complex models are locally linear (approximately) • Basic flow • Probe model around observation using slight perturbations in feature values • Train linear model on results • Use linear model to understand features driving prediction *https://github.com/marcotcr/lime
  28. LIME Details 1. Permute the observation n times 2. Generate

    predictions for permuted observations using the black box model 3. Compute the distance of each permutation from the original observation 4. Convert the distances to similarity scores • Exponential kernel of a user-defined width 5. Fit a linear (ridge) model to the permuted data • Permuted data further modified before training • Permuted data weighted by its similarity to the original observation • Predicted probabilities form the outcomes when explaining classifiers 6. Feature weights from the linear model drive explanations of the complex model’s local behavior
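
A sketch of this flow with the lime package linked on the previous slide, applied to a Titanic random forest; the model and the explained row are assumptions for illustration.

```python
# LIME on a single prediction: perturb the observation, weight the perturbations by
# similarity, fit a ridge model locally, and read off the feature contributions.
import seaborn as sns
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
rf = RandomForestClassifier(random_state=0).fit(X.values, y)

explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["died", "survived"],
    discretize_continuous=True)          # Discretize = True, as on the diagram slide

exp = explainer.explain_instance(X.values[0], rf.predict_proba, num_features=4)
print(exp.as_list())                     # (feature condition, local weight) pairs
```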
  29. LIME Permutation [diagram: the original observation is perturbed N = 1000 times, the black box model predicts each perturbed row, the rows are scaled/transformed and weighted by similarity (Kernel_width = 1, Discretize = True), and a ridge model is trained on the weighted data]
  30. LIME - Examples

  31. LIME – beyond tabular data Lime can handle more than

    tabular data Text • Perturb input by randomly removing words from the observation text • Train an interpretable model on permuted observations • Uses cosine similarity to compute similarity scores • Leverage model to understand words driving black box prediction Images • Perturb image via superpixel construct • Superpixels defined using scikit-image segmentation methods • 'quickshift', 'slic', 'felzenszwalb'
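
A small text sketch: the classifier function passed to LimeTextExplainer must map raw strings to class probabilities, so a vectorizer-plus-model Pipeline is a convenient target. The tickets and labels below are toy placeholders, not data from the talk.

```python
# LIME for text: words are randomly removed from the input and a weighted local linear
# model identifies the words driving the prediction.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["customer is furious about the repeated outage",
         "thanks, the workaround fixed our issue",
         "still waiting for an answer, this is urgent",
         "issue resolved, closing the ticket"]
labels = [1, 0, 1, 0]                                  # toy escalation labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ok", "escalate"])
exp = explainer.explain_instance(texts[0], clf.predict_proba, num_features=3)
print(exp.as_list())                                   # words pushing toward / away from escalation
```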
  32. LIME in the real world • LIME explanations are not

    infallible • Don’t trust blindly • Prediction quality illustrates how well it approximates the black-box • Low quality → explanation shouldn’t be trusted • Many tweakable parameters that influence outcomes • Big open question • What constitutes local?
  33. SHAP Values SHAP (SHapley Additive exPlanations) • Shapley values •

    From coalitional game theory • Determine how much each player (aka feature) in a collaborative game has contributed to success • Computationally intensive for real-world models & data sets • SHAP leverages approximations to control compute costs • Local linear models to estimate SHAP values for any model • Shapley derived weighting & sampling • Exact “high-speed” method for tree ensembles
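
A minimal sketch with the shap package's TreeExplainer (the fast path for tree ensembles mentioned above) on an assumed Titanic gradient boosted model; the chosen model and explained row are illustrative.

```python
# SHAP values via TreeExplainer, plus a force plot for one passenger showing the
# features pushing that prediction above or below the base value.
import numpy as np
import seaborn as sns
import shap
from sklearn.ensemble import GradientBoostingClassifier

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
gb = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(X)                 # one row of SHAP values per passenger

# Explain a single observation (for this model the values are in log-odds space).
base_value = float(np.ravel(explainer.expected_value)[0])
shap.force_plot(base_value, shap_values[0, :], X.iloc[0, :], matplotlib=True)
```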
  34. SHAP Values – Example 1 • Illustrate contribution of each

    feature to model output • Features “push” the model output from the base value to the model output • Features pushing the prediction higher are shown in red • Features pushing the prediction lower are shown in blue
  35. SHAP vs LIME • From the SHAP GitHub

  36. SHAP Values – Example 2 • SHAP Summary Plot •

    Each point is a Shapley value for a feature and an observation • Feature importance & feature effects
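
A sketch of producing this summary plot with the shap package, reusing the same assumed Titanic model as in the earlier SHAP sketch:

```python
# SHAP summary plot: one dot per (feature, observation) Shapley value, combining
# global feature importance with the direction of each feature's effect.
import seaborn as sns
import shap
from sklearn.ensemble import GradientBoostingClassifier

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
gb = GradientBoostingClassifier().fit(X, y)

shap_values = shap.TreeExplainer(gb).shap_values(X)
shap.summary_plot(shap_values, X)
```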
  37. Anchor • Model-agnostic explanations based on if-then rules • Referred

    to as anchors • An anchor explanation is a rule that sufficiently “anchors” the prediction locally • Changes to the rest of the feature values of the instance do not matter. • For instances on which the anchor holds, the prediction is (almost) always the same • Explains the scope/coverage of the explanation • Provides clearer user understanding https://github.com/marcotcr/anchor
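
A hedged sketch following the README of the anchor package linked above (installable as anchor-exp); the constructor arguments have shifted between releases, so treat the exact signature as an assumption and adjust for the installed version.

```python
# Anchor explanation for one prediction: an if-then rule that "anchors" the model's
# output locally, reported with its precision and coverage.
import seaborn as sns
from anchor import anchor_tabular
from sklearn.ensemble import RandomForestClassifier

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "male").astype(int)
X, y = df.drop(columns="survived"), df["survived"]
rf = RandomForestClassifier(random_state=0).fit(X.values, y)

# Class names, feature names, training data (no categorical encodings passed here).
explainer = anchor_tabular.AnchorTabularExplainer(
    ["died", "survived"], list(X.columns), X.values)

exp = explainer.explain_instance(X.values[0], rf.predict, threshold=0.95)
print("Anchor   :", " AND ".join(exp.names()))
print("Precision:", exp.precision())
print("Coverage :", exp.coverage())
```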
  38. Anchor - Example [figure: the same prediction explained side by side with SHAP, LIME and Anchor]

  39. Explaining the Explanations • LIME and SHAP just provide raw

    info on the features/values driving the prediction • Necessary to explain what this means to the user in human terms • Especially true when feature engineering creates non-obvious features
  40. Conclusions • Most ML predictions need to be explained •

    • Audit & accountability reasons • Confidence & acceptance • Understanding of appropriate response/actions • Important to understand both global behavior and individual predictions • Simple models can be inherently interpretable • Blackbox models offer benefits but introduce complexity • Open source tools exist for attempting to explain blackbox models • Try using them! • But don’t trust them blindly
  41. Questions? SupportLogic is hiring! • Data scientists • ML engineers

    • System engineers Lawrence@supportlogic.io
  42. Useful links • All packages can be directly installed by

    pip • The GitHub repos provide good example notebooks • LIME : https://github.com/marcotcr/lime • SHAP : https://github.com/slundberg/shap • Anchor : https://github.com/marcotcr/anchor • ELI5 : https://github.com/TeamHG-Memex/eli5