Breaking the black box of neural networks

I gave a presentation on interpretable machine learning and how its methods work at the MUST webinar (May 2020).

With more complex algorithms like deep neural networks or random forests with thousands of trees, we achieve the desired accuracy at the cost of interpretability; if we care more about interpretability, we sacrifice accuracy. In domains like finance and banking, both are needed: justifying a prediction helps clients and customers understand why the model predicted the way it did. So how do we build interpretable machine learning models, or explainable artificial intelligence?

In this meetup talk, I explain why it is important to build interpretable models, how to draw insights from them, and how to trust your model and make it understandable to humans, with the help of the available methods.


Uday Kiran

May 31, 2020

Transcript

  1. © 2020 MUST India BREAKING THE BLACK BOX Uday Kiran

    Machine Learning Engineer MUST Research
  2. According to Oxford, it is a complex system or device whose internal workings are hidden or not readily understood. Black box?
  3. Machine learning? Source: Interpretable Machine Learning book

  4. The extent to which a human can understand the decisions and choices made by the model in making a prediction. Interpretable Machine Learning
  5. • Trust • Fairness • Debugging • Privacy • Reliability • Accountability • Regulations • Feature Engineering Why interpretable ML?
  6. • No significant impact • Problem is well studied Do you think it is always necessary?
  7. • Intrinsic or post hoc • Model-specific or model-agnostic • Local or global Machine learning interpretability
  8. • Global • How does the model make predictions? • How do parts of the model affect predictions? • Local • Why did the model make a certain prediction for a single instance? • Why did the model make certain predictions for a group of instances? Scope of interpretability
  9. • Exploratory data analysis • Principal Component Analysis (PCA) • Self-organizing maps (SOM) • Latent Semantic Indexing • t-Distributed Stochastic Neighbor Embedding (t-SNE) • Variational autoencoders • Clustering Traditional Techniques • Performance evaluation metrics • Precision • Recall • Accuracy • ROC curve and the AUC • R-squared • Root mean squared error • Mean absolute error • Silhouette coefficient
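As a quick illustration of the performance-evaluation metrics listed above, precision, recall, and accuracy can be computed directly from true and predicted labels. This is only a sketch; the toy labels below are invented for the example:

```python
def precision_recall_accuracy(y_true, y_pred):
    # Count the confusion-matrix cells for the positive class (label 1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return precision, recall, accuracy

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
p, r, a = precision_recall_accuracy(y_true, y_pred)
print(p, r, a)  # 0.75 0.75 0.75
```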
  10. Interpretability vs Flexibility Source: Introduction to Statistical Learning book
  11. Limitations of traditional techniques

  12. Interpretation techniques Using Interpretable Models Source: Interpretable Machine Learning, Christoph Molnar
  13. • Steps (it is a model-agnostic method): 1. Get the trained model 2. Shuffle the values in a column and calculate the loss 3. Repeat step 2 for each column 4. Calculate the permutation feature importance Permutation Feature Importance
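The steps above can be sketched in a few lines of pure Python. The two-feature model, data, and mean-squared-error loss here are invented for illustration, not from the talk:

```python
import random

# Hypothetical "trained model": depends strongly on x0, weakly on x1.
def model(row):
    return 3.0 * row[0] + 0.1 * row[1]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_features, seed=0):
    rng = random.Random(seed)
    baseline = mse(X, y)                 # loss before any shuffling
    importances = []
    for col in range(n_features):        # step 3: repeat for each column
        values = [row[col] for row in X]
        rng.shuffle(values)              # step 2: shuffle one column
        X_shuffled = [list(row) for row in X]
        for row, v in zip(X_shuffled, values):
            row[col] = v
        # step 4: importance = how much the loss grew after shuffling
        importances.append(mse(X_shuffled, y) - baseline)
    return importances

X = [[float(i), float(i % 5)] for i in range(20)]
y = [model(row) for row in X]            # labels agree with the model exactly
imp = permutation_importance(X, y, n_features=2)
print(imp)  # importance of x0 should dwarf that of x1
```

In practice you would call `eli5.sklearn.PermutationImportance` or `sklearn.inspection.permutation_importance` rather than rolling your own, but the loop above is all they do conceptually.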
  14. • Pros 1. Simple and intuitive 2. Available through the eli5 and skater libraries 3. Easy to compute 4. Does not require retraining Permutation Feature Importance
  15. • Cons 1. Unclear whether to use test or train data 2. Different shuffles may give different results 3. Greatly influenced by correlated features 4. Requires labelled data Permutation Feature Importance
  16. • Steps (it is a model-agnostic method): 1. Get the trained model 2. Repeatedly alter the value of one variable to make a series of predictions 3. Repeat step 2 for each column 4. Plot the average prediction against the values of the variable Partial Dependence Plot (PDP)
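A minimal sketch of the PDP steps above, assuming a hypothetical two-feature model: for each grid value, the feature of interest is forced to that value in every row and the model's predictions are averaged.

```python
# Hypothetical trained model over two features (illustrative only).
def model(row):
    return row[0] ** 2 + row[1]

def partial_dependence(X, col, grid):
    """For each grid value, force column `col` to that value for every
    row in X and average the model's predictions (steps 2 and 3)."""
    curve = []
    for value in grid:
        preds = []
        for row in X:
            altered = list(row)
            altered[col] = value               # repeatedly alter one variable
            preds.append(model(altered))
        curve.append(sum(preds) / len(preds))  # average prediction
    return curve

X = [[x, x % 3] for x in range(10)]
pd_curve = partial_dependence(X, col=0, grid=[0, 1, 2, 3])
print(pd_curve)  # [0.9, 1.9, 4.9, 9.9] -- quadratic in the grid value
```

Plotting `pd_curve` against the grid recovers the quadratic effect of `x0`, offset by the average contribution of `x1`; libraries such as PDPBox and sklearn's `partial_dependence` do exactly this averaging for you.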
  17. • Pros 1. Easy and intuitive 2. Available in sklearn, skater, PDPBox • Cons 1. Assumes feature independence (check Accumulated Local Effects plots) 2. Limited to a small number of features per plot Partial Dependence Plot (PDP)
  18. • Steps (solving machine learning interpretability by using more machine learning!) 1. Get the data 2. Train the black-box model 3. Train an interpretable model 4. Measure how well the surrogate model replicates the predictions of the black-box model 5. Interpret the surrogate model Global Surrogate Models
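The surrogate workflow above can be sketched with an invented, nearly linear black box and a closed-form linear surrogate; the R-squared at the end is one way to measure how well the surrogate replicates the black box (step 4):

```python
# Hypothetical black-box model of one feature: mostly linear, with a kink.
def black_box(x):
    return 2.0 * x + 0.3 * (x % 2)

xs = [float(i) for i in range(20)]
bb_preds = [black_box(x) for x in xs]    # the surrogate trains on these,
                                         # never on the real outcomes
# Step 3: fit an interpretable (simple linear) surrogate with
# closed-form least squares on the black-box predictions.
n = len(xs)
mx = sum(xs) / n
my = sum(bb_preds) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, bb_preds))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
surrogate = [slope * x + intercept for x in xs]

# Step 4: how well does the surrogate replicate the black box? (R-squared)
ss_res = sum((y - s) ** 2 for y, s in zip(bb_preds, surrogate))
ss_tot = sum((y - my) ** 2 for y in bb_preds)
r2 = 1 - ss_res / ss_tot
print(slope, r2)  # slope near 2, R-squared close to 1
```

A high R-squared here says the surrogate mimics the black box well, which is exactly why its conclusions are about the model, not the data.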
  19. • Pros 1. Very flexible 2. Intuitive and straightforward • Cons 1. Gives conclusions about the model, not about the data, because it never sees the real outcome 2. Depends on the surrogate model you choose Global Surrogate Models
  20. • Steps 1. Select the instance of interest for which you want an explanation of its black-box prediction 2. Perturb your dataset and get the black-box predictions for these new points 3. Weight the new samples according to their proximity to the instance of interest 4. Train a weighted, interpretable model on the dataset with the variations 5. Explain the prediction by interpreting the local model Local Interpretable Model-agnostic Explanations (LIME)
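A one-feature sketch of the LIME steps above, assuming a hypothetical black-box model: samples are drawn around the instance, weighted by a Gaussian proximity kernel, and a weighted linear model is fitted in closed form (the lime library does this with richer sampling and feature selection).

```python
import math
import random

# Hypothetical black-box model of one feature (illustrative).
def black_box(x):
    return x * x

def lime_1d(x0, n_samples=200, kernel_width=1.0, seed=0):
    rng = random.Random(seed)
    # Step 2: perturb around the instance and query the black box.
    xs = [x0 + rng.gauss(0, 2.0) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Step 3: weight samples by proximity (Gaussian kernel).
    ws = [math.exp(-((x - x0) ** 2) / (2 * kernel_width ** 2)) for x in xs]
    # Step 4: fit a weighted linear model via closed-form weighted
    # least squares; its slope is the local explanation.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return slope

# Step 5: near x0 = 3, x^2 behaves like its tangent, with slope 2 * x0 = 6.
local_slope = lime_1d(3.0)
print(local_slope)  # close to 6
```

Changing the seed changes the explanation slightly, which is precisely the sampling-instability con listed on the next slide.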
  21. Local Interpretable Model-agnostic Explanations (LIME)

  22. • Pros 1. Flexibility 2. Works with tabular data, text and images 3. Guaranteed high precision • Cons 1. No correct definition of the neighborhood 2. Repeating the sampling process can give different explanations 3. Still in the development phase Local Interpretable Model-agnostic Explanations (LIME)
  23. Shapley Values and SHapley Additive exPlanations (SHAP)
  24. Shapley Values and SHapley Additive exPlanations (SHAP)
  25. • Pros 1. Fairly distributed 2. Solid theory 3. Explains a prediction as a game • Cons 1. A lot of computing time 2. Can be misinterpreted 3. Returns no prediction model Shapley Values and SHapley Additive exPlanations (SHAP)
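Shapley values can be computed exactly for a tiny model by enumerating all coalitions of features; the model, baseline, and instance below are invented for the example (the shap library approximates this for real models, which is where the computing-time con comes from).

```python
from itertools import combinations
from math import factorial

# Hypothetical model, baseline, and instance: Shapley values fairly
# distribute f(instance) - f(baseline) among the features (the "players").
def f(x):
    return 2.0 * x[0] + 1.0 * x[1]

baseline = [0.0, 0.0]
instance = [4.0, 3.0]
n = 2

def value(subset):
    """Payout of a coalition: features in `subset` take the instance's
    values, the rest stay at the baseline."""
    x = [instance[i] if i in subset else baseline[i] for i in range(n)]
    return f(x)

def shapley(i):
    # Average feature i's marginal contribution over all coalitions.
    total = 0.0
    others = [j for j in range(n) if j != i]
    for size in range(len(others) + 1):
        for combo in combinations(others, size):
            s = set(combo)
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(s | {i}) - value(s))
    return total

phis = [shapley(i) for i in range(n)]
print(phis)  # [8.0, 3.0]
# Efficiency property: the contributions sum to f(instance) - f(baseline).
print(sum(phis) == f(instance) - f(baseline))  # True
```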
  26. FEATURE VISUALIZATION

  27. • Spatial Attribution with Saliency Maps • Channel Attribution • Neural grouping ATTRIBUTION
  28.

  29. Thanks Uday Kiran Machine Learning Engineer MUST Research