
Interpreting Machine Learning Models: Why and How!

OmaymaS
April 06, 2019

Invited talk at SatRday Johannesburg #satRdayJoburg
https://joburg2019.satrdays.org/
Transcript

  1. "[In Idaho], the state declined to disclose the formula it was using, saying that its math qualified as a TRADE SECRET."
  2. WHAT ELSE? I'm In to Connect and Serve* "AUTOMATED REDACTION, TRANSCRIPTION, REPORTING"
  3. "Amazon's system TAUGHT ITSELF that male candidates were preferable. It penalized resumes that included the word 'women's,' as in 'women's chess club captain.' And it downgraded graduates of two all-women's colleges, according to people familiar with the matter. They did not specify the names of the schools."
  4. "Amazon's system TAUGHT ITSELF that male candidates were preferable. It penalized resumes that included the word 'women's,' as in 'women's chess club captain.' And it downgraded graduates of two all-women's colleges, according to people familiar with the matter. They did not specify the names of the schools." LEARNED FROM HUMANS
  5. IT IS HUMANS WHO: COLLECT/LABEL DATA, WRITE ALGORITHMS, DEFINE METRICS. BIAS IN: - REPRESENTATION - DISTRIBUTION - LABELS - AND MORE...
  6. IT IS HUMANS WHO: COLLECT/LABEL DATA, WRITE ALGORITHMS, DEFINE METRICS. - TRAIN/TEST SPLIT - FEATURES/PROXIES - BLACK-BOX MODELS - AND MORE...
  7. IT IS HUMANS WHO: COLLECT/LABEL DATA, WRITE ALGORITHMS, DEFINE METRICS. - WHAT IS THE IMPACT OF DIFFERENT ERROR TYPES ON DIFFERENT GROUPS? - WHAT DO YOU OPTIMIZE FOR?
  8. "Practitioners consistently: - overestimate their model's accuracy. - propagate feedback loops. - fail to notice data leaks." From "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/pdf/1602.04938.pdf
  9. LIME (Tabular Data) 1- Select a point to explain (red). Based on an example in the "Interpretable Machine Learning" book by Christoph Molnar.
  10. LIME (Tabular Data) 2- Sample data points. Based on an example in the "Interpretable Machine Learning" book by Christoph Molnar.
  11. LIME (Tabular Data) 3- Weight points according to their proximity to the selected point. Based on an example in the "Interpretable Machine Learning" book by Christoph Molnar.
  12. LIME (Tabular Data) 4- Train a weighted, interpretable local model. Based on an example in the "Interpretable Machine Learning" book by Christoph Molnar.
  13. LIME (Tabular Data) 5- Explain the black-box model prediction using the local model. Based on an example in the "Interpretable Machine Learning" book by Christoph Molnar.
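The lime package (used in the next slides) automates these five steps. As a rough illustration of what happens under the hood, here is a minimal hand-rolled sketch in R, not the package's actual internals: the randomForest black box, the explained row, the kernel width, and the class whose probability is explained are all illustrative choices.

    ## minimal LIME-style local surrogate, hand-rolled for illustration
    library(randomForest)

    set.seed(5658)
    X <- iris[, 1:4]
    black_box <- randomForest(x = X, y = iris$Species)

    ## 1- select a point to explain (row 120 is an arbitrary choice)
    x_explain <- X[120, ]

    ## 2- sample data points from the (approximate) feature distributions
    n_samples <- 5000
    perturbed <- as.data.frame(lapply(X, function(col) {
      rnorm(n_samples, mean = mean(col), sd = sd(col))
    }))

    ## 3- weight points by proximity to the selected point (Gaussian kernel)
    dists <- sqrt(rowSums(scale(perturbed,
                                center = as.numeric(x_explain),
                                scale  = sapply(X, sd))^2))
    kernel_width <- 0.75
    weights <- exp(-dists^2 / kernel_width^2)

    ## 4- train a weighted, interpretable local model on the black-box output
    ##    (here: the predicted probability of "virginica")
    p_virginica <- predict(black_box, perturbed, type = "prob")[, "virginica"]
    local_model <- lm(p_virginica ~ ., data = cbind(perturbed, p_virginica),
                      weights = weights)

    ## 5- the surrogate's coefficients explain the prediction locally
    coef(local_model)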
  14. set.seed(5658)

    ## load libraries
    library(caret)
    library(lime)

    ## partition the data
    intrain <- createDataPartition(y = iris$Species, p = 0.8, list = F)

    ## create train and test data
    train_data <- iris[intrain, ]
    test_data <- iris[-intrain, ]

    ## train Random Forest model on train_data
    model <- train(x = train_data[, 1:4], y = train_data[, 5], method = 'rf')

    TRAIN

  15. set.seed(5658)

    ## load libraries
    library(caret)
    library(lime)

    ## partition the data
    intrain <- createDataPartition(y = iris$Species, p = 0.8, list = F)

    ## create train and test data
    train_data <- iris[intrain, ]
    test_data <- iris[-intrain, ]

    ## train Random Forest model on train_data
    model <- train(x = train_data[, 1:4], y = train_data[, 5], method = 'rf')

    ## create an explainer object using train_data
    explainer <- lime(train_data, model)

    EXPLAIN

  16. set.seed(5658)

    ## load libraries
    library(caret)
    library(lime)

    ## partition the data
    intrain <- createDataPartition(y = iris$Species, p = 0.8, list = F)

    ## create train and test data
    train_data <- iris[intrain, ]
    test_data <- iris[-intrain, ]

    ## train Random Forest model on train_data
    model <- train(x = train_data[, 1:4], y = train_data[, 5], method = 'rf')

    ## create an explainer object using train_data
    explainer <- lime(train_data, model)

    ## explain new observations in test data
    explanation <- explain(test_data[, 1:4], explainer, n_labels = 1, n_features = 4)

    EXPLAIN

  17. set.seed(5658)

    ## load libraries
    library(caret)
    library(lime)

    ## partition the data
    intrain <- createDataPartition(y = iris$Species, p = 0.8, list = F)

    ## create train and test data
    train_data <- iris[intrain, ]
    test_data <- iris[-intrain, ]

    ## train Random Forest model on train_data
    model <- train(x = train_data[, 1:4], y = train_data[, 5], method = 'rf')

    ## create an explainer object using train_data
    explainer <- lime(train_data, model)

    ## explain new observations in test data
    explanation <- explain(test_data[, 1:4], explainer, n_labels = 1, n_features = 4)

    https://github.com/OmaymaS/satRday2019_talk_scripts/blob/master/R/lime_tabular.R

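As a short follow-up to the script above, the resulting explanation can be inspected with lime's built-in plotting helper; it facets the feature weights by explained case and label.

    ## visualize the feature weights per explained case and label
    plot_features(explanation)

    ## the explanation object itself is a data frame of weights and fits
    head(explanation)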
  18. LIME (Images), pre-trained ImageNet model. Label: tabby, tabby cat (Probability: 0.29, Explanation Fit: 0.77). Label: Egyptian Cat (Probability: 0.28, Explanation Fit: 0.69).
  19. LIME (Images), pre-trained ImageNet model. Label: tabby, tabby cat (Probability: 0.29, Explanation Fit: 0.77). Label: Egyptian Cat (Probability: 0.28, Explanation Fit: 0.69). Type: Supports / Type: Contradicts.
  20. LIME (Images). "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/pdf/1602.04938.pdf
  21. LIME (Images). "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/pdf/1602.04938.pdf (SNOW)
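The image explanations above come from a pre-trained ImageNet model. A hedged sketch of how such an explanation can be produced with the lime and keras R packages follows; the model choice (VGG16), the placeholder image path 'cat.jpg', and the n_labels/n_features settings are illustrative assumptions, not the exact setup behind the slides.

    library(keras)
    library(lime)

    ## a pre-trained ImageNet classifier (VGG16 used here as an example)
    model <- application_vgg16(weights = "imagenet")

    ## read image files into the tensor shape the network expects
    image_prep <- function(paths) {
      arrays <- lapply(paths, function(path) {
        img <- image_load(path, target_size = c(224, 224))
        arr <- image_to_array(img)
        arr <- array_reshape(arr, c(1, dim(arr)))
        imagenet_preprocess_input(arr)
      })
      do.call(abind::abind, c(arrays, list(along = 1)))
    }

    ## 'cat.jpg' is a placeholder path; optionally wrap the model with
    ## as_classifier() and ImageNet class labels to get readable label names
    explainer <- lime("cat.jpg", model, preprocess = image_prep)
    explanation <- explain("cat.jpg", explainer, n_labels = 2, n_features = 20)

    ## overlay the superpixels that support/contradict each label
    plot_image_explanation(explanation)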
  22. LIME Pros: - Provides human-friendly explanations. - Gives a fidelity measure. - Can use other features than the black-box model.
  23. LIME Pros: - Provides human-friendly explanations. - Gives a fidelity measure. - Can use other features than the original model. Cons: - The definition of proximity is not totally resolved in tabular data. - Instability of explanations.
  24. LIME Pros: - Provides human-friendly explanations. - Gives a fidelity measure. - Can use other features than the original model. Cons: - Instability of explanations. - The definition of proximity is not totally resolved in tabular data.
  25. SHAPLEY VALUES (coalitional game theory): "Explain the difference between the actual prediction and the average prediction of the black-box model."
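In the coalitional game framing, the feature values of the instance are the players and the prediction is the payout. In standard notation (not taken from the slides), the Shapley value of feature j for instance x is:

    \phi_j(x) = \sum_{S \subseteq \{1,\dots,p\} \setminus \{j\}}
                \frac{|S|! \, (p - |S| - 1)!}{p!}
                \left[ \hat{f}\big(x_{S \cup \{j\}}\big) - \hat{f}\big(x_S\big) \right]

where p is the number of features and \hat{f}(x_S) is the model prediction with only the features in S fixed to x's values (the remaining features averaged out). The values sum to \hat{f}(x) - E[\hat{f}(X)], i.e. exactly the difference between the actual prediction and the average prediction.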
  26. library(tidyverse)
    library(caret)
    library(iml)

    ## partition the data
    intrain <- createDataPartition(y = bike$cnt, p = 0.9, list = F)

    ## create train and test data
    train_data <- bike[intrain, ]
    test_data <- bike[-intrain, ]
    train_x <- select(train_data, -cnt)
    test_x <- select(test_data, -cnt)

    ## train model
    model <- train(x = train_x, y = train_data$cnt, method = 'rf',
                   ntree = 30, maximise = FALSE)

    TRAIN

  27. library(tidyverse)
    library(caret)
    library(iml)

    ## partition the data
    intrain <- createDataPartition(y = bike$cnt, p = 0.9, list = F)

    ## create train and test data
    train_data <- bike[intrain, ]
    test_data <- bike[-intrain, ]
    train_x <- select(train_data, -cnt)
    test_x <- select(test_data, -cnt)

    ## train model
    model <- train(x = train_x, y = train_data$cnt, method = 'rf',
                   ntree = 30, maximise = FALSE)

    ## create predictor
    predictor <- Predictor$new(model, data = train_x)

    EXPLAIN

  28. library(tidyverse)
    library(caret)
    library(iml)

    ## partition the data
    intrain <- createDataPartition(y = bike$cnt, p = 0.9, list = F)

    ## create train and test data
    train_data <- bike[intrain, ]
    test_data <- bike[-intrain, ]
    train_x <- select(train_data, -cnt)
    test_x <- select(test_data, -cnt)

    ## train model
    model <- train(x = train_x, y = train_data$cnt, method = 'rf',
                   ntree = 30, maximise = FALSE)

    ## create predictor
    predictor <- Predictor$new(model, data = train_x)

    ## calculate shapley values for a new instance
    shapley_values <- Shapley$new(predictor, x.interest = test_x[10, ])

    EXPLAIN

  29. library(tidyverse)
    library(caret)
    library(iml)

    ## partition the data
    intrain <- createDataPartition(y = bike$cnt, p = 0.9, list = F)

    ## create train and test data
    train_data <- bike[intrain, ]
    test_data <- bike[-intrain, ]
    train_x <- select(train_data, -cnt)
    test_x <- select(test_data, -cnt)

    ## train model
    model <- train(x = train_x, y = train_data$cnt, method = 'rf',
                   ntree = 30, maximise = FALSE)

    ## create predictor
    predictor <- Predictor$new(model, data = train_x)

    ## calculate shapley values for a new instance
    shapley_values <- Shapley$new(predictor, x.interest = new_instance)

    https://github.com/OmaymaS/satRday2019_talk_scripts/blob/master/R/shapley_tabular.R

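As a short follow-up to the script above, the resulting iml object can be inspected and plotted (assuming the shapley_values object created above):

    ## per-feature Shapley estimates as a data frame
    shapley_values$results

    ## plot of the estimated contributions (ggplot2-based)
    plot(shapley_values)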
  30. SHAPLEY VALUES: The contribution of the temp value (4.416) to the difference between the actual prediction and the mean prediction is the estimated Shapley value (~ -1000).
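How such an estimate comes about can be illustrated with the sampling-based approximation described in Christoph Molnar's book. A minimal sketch, reusing the model, train_x and test_x objects from the code above; estimate_shapley is a hypothetical helper name and the sample size is arbitrary.

    ## Monte Carlo estimate of the Shapley value of one feature
    estimate_shapley <- function(model, data, x_interest, feature, n_samples = 100) {
      features <- names(data)
      contribs <- numeric(n_samples)
      for (m in seq_len(n_samples)) {
        z <- data[sample(nrow(data), 1), ]              # random instance from the data
        perm <- sample(features)                        # random feature order
        upto <- perm[seq_len(which(perm == feature))]   # feature + its predecessors

        ## instance with `feature` (and its predecessors) taken from x_interest
        x_plus <- z
        x_plus[upto] <- x_interest[upto]

        ## same instance, but `feature` itself taken from z
        x_minus <- x_plus
        x_minus[feature] <- z[feature]

        contribs[m] <- predict(model, x_plus) - predict(model, x_minus)
      }
      mean(contribs)  # estimated Shapley value of `feature` for x_interest
    }

    estimate_shapley(model, train_x, test_x[10, ], feature = "temp")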
  31. SHAPLEY VALUES Pros: - Solid theory. - The difference between the prediction and the average prediction is fairly distributed among the feature values of the instance.
  32. SHAPLEY VALUES Pros: - Solid theory. - The difference between the prediction and the average prediction is fairly distributed among the feature values of the instance. Cons: - Computationally expensive. - Can be misinterpreted. - Uses all features (not ideal for explanations that contain few features).
  33. SHAPLEY VALUES Pros: - Solid theory. - The difference between the prediction and the average prediction is fairly distributed among the feature values of the instance. Cons: - Computationally expensive. - Can be misinterpreted. - Uses all features (not ideal for explanations that contain few features).