

[Paper reading] L-SHAPLEY AND C-SHAPLEY: EFFICIENT MODEL INTERPRETATION FOR STRUCTURED DATA

Daiki Tanaka

August 04, 2019

  1. Background • Although many black-box ML models, such as Random Forests,

    neural networks, or kernel methods, can produce highly accurate predictions, such predictions lack interpretability. 1. Lack of interpretability is a crucial issue when black-box models are applied in areas such as medicine, financial markets, and criminal justice. 2. Being able to see the model's reasoning is also a good way to improve the model.
  2. Background : There are several kinds of approaches for interpreting models.

    • Model-specific interpretation or Model-agnostic interpretation • Model-specific : makes some assumptions about the model. (e.g. methods based on attention weights, or gradient-based methods like SmoothGrad or Grad-CAM) • Model-agnostic : makes no assumptions about the model. (e.g. LIME, or the Shapley value) • Model-level interpretation or Instance-wise interpretation • Instance-wise : yields feature importances for each input instance (e.g. saliency maps). • Model-level : yields feature importances for the whole model. (e.g. the weights of a logistic regression, or the decision rules of a decision tree) This study focuses on Model-Agnostic & Instance-wise interpretation.
  3. Problem Setting : Model-Agnostic & Instance-wise interpretation • Input :

    • An instance • A predictive model • Output : • A vector of importance scores over the features • Indicating which features are key for the model to make its prediction on that instance. (Figure: an interpretation method takes an instance and a model, makes no assumptions on the model, and outputs importance scores.)
  4. Related work : Shapley value • The Shapley value is

    an idea from the field of cooperative game theory. • It was originally proposed as a characterization of a fair distribution of the total profit among all the players. (Figure: a table of profits for coalitions of Person A, Person B, and Person C.)
  5. Related work : Shapley value • The Shapley value of

    player i is defined as: $$\phi(i) = \frac{1}{|N|} \sum_{S \subseteq N \setminus \{i\}} \frac{1}{\binom{|N|-1}{|S|}} \bigl( v(S \cup \{i\}) - v(S) \bigr)$$ • N is the set including all players. (e.g. N = {person A, person B, person C}) • v(S) is the function which returns the "profit" of the set S. • v(S ∪ {i}) − v(S) is the contribution of the element i. • $\binom{|N|-1}{|S|}$ is the number of ways of selecting |S|-sized subsets.
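A minimal brute-force sketch may make the definition concrete. The code below enumerates all coalitions; the `profit` table is hypothetical (v({}) and v({A}) are assumed, while the pairwise values echo the numbers on the next slide), and the names are illustrative, not from the paper:

```python
from itertools import combinations
from math import comb

def shapley_value(players, v, i):
    """Exact Shapley value of player i under characteristic function v.

    Enumerates every coalition S not containing i, so the cost grows
    as 2^(|N|-1); feasible only for small games.
    """
    others = [p for p in players if p != i]
    n = len(players)
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            S = frozenset(S)
            # marginal contribution of i, weighted by 1 / C(|N|-1, |S|)
            total += (v(S | {i}) - v(S)) / comb(n - 1, len(S))
    return total / n

# Hypothetical profits for the three-person game (v({}) and v({A}) are
# assumptions; the pairwise values mirror the next slide's example):
profit = {frozenset(): 0, frozenset("A"): 50, frozenset("B"): 30,
          frozenset("C"): 5, frozenset("AB"): 75, frozenset("AC"): 55,
          frozenset("BC"): 50, frozenset("ABC"): 100}
print(shapley_value("ABC", lambda S: profit[S], "A"))  # ~49.17
```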
  6. Related work : Example of Shapley value • The Shapley

    value of person A is computed as: $$\phi(A) = \frac{1}{3} \sum_{S \subseteq \{A,B,C\} \setminus \{A\}} \frac{1}{\binom{|N|-1}{|S|}} \bigl( v(S \cup \{A\}) - v(S) \bigr) = \frac{1}{3} \Bigl( \frac{1}{1}(100 - 50) + \frac{1}{2}(55 - 5) + \frac{1}{2}(75 - 30) \Bigr)$$ (Figure: the profit table for Person A, Person B, and Person C.)
  7. Related work : Shapley value • The Shapley value can

    be applied to predictive models. • Each feature is seen as a player in the underlying game. • Issue : exact evaluation of the Shapley value requires an exponential number of model evaluations. There are two kinds of approaches to deal with this problem (the first is sketched below). • Approach 1 : Sampling-based methods • Randomly sample feature subsets • Approach 2 : Regression-based methods • Sample feature subsets based on a weighted kernel, and carry out a linear regression to estimate the Shapley value
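As a hedged illustration of Approach 1, here is a minimal permutation-sampling estimator. `value_fn` is a placeholder for whatever set function v is being used; nothing here is the paper's own code:

```python
import random

def sample_shapley(features, value_fn, i, n_samples=1000, seed=0):
    """Monte Carlo estimate of the Shapley value of feature i.

    Draws a random permutation of the features and takes the marginal
    contribution of i when it joins the features that precede it; the
    average over permutations converges to the Shapley value, but the
    variance can be high when n_samples is small (the issue noted on
    slide 14).
    """
    rng = random.Random(seed)
    others = [f for f in features if f != i]
    total = 0.0
    for _ in range(n_samples):
        rng.shuffle(others)
        cut = rng.randrange(len(others) + 1)  # i's position in the permutation
        S = frozenset(others[:cut])           # features preceding i
        total += value_fn(S | {i}) - value_fn(S)
    return total / n_samples
```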
  8. Notation • Feature vector : $x \in \mathcal{X} \subset \mathbb{R}^d$ • Note that d is the

    dimension of the feature vectors. • Set of features : $S \subseteq \{1, 2, \ldots, d\}$ • Sub-vector of features : $x_S = \{x_j,\; j \in S\}$ • Output variable : $y \in \mathcal{Y}$ • Output of a model given an input vector x : $\mathbb{P}_m(Y \mid x)$
  9. Preliminaries : Importance of a feature set • Here, the importance

    score of a feature set S is introduced as: $$v_x(S) = \mathbb{E}_m\bigl[\log \mathbb{P}_m(Y \mid x_S) \mid x\bigr]$$ • where $\mathbb{E}_m[\,\cdot \mid x]$ denotes the expectation over $\mathbb{P}_m(\,\cdot \mid x)$. • The more similar the prediction produced by $x_S$ is to the prediction produced by $x$, the higher $v_x(S)$ becomes.
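A minimal sketch of this score, assuming a hypothetical `model_prob(x)` that returns a vector of class probabilities, and zero-masking as a crude way of dropping features outside S (zero padding is also what the experiments use, per slide 23):

```python
import numpy as np

def v_score(model_prob, x, S):
    """Sketch of v_x(S) = E_m[ log P_m(Y | x_S) | x ].

    The expectation over Y is taken under P_m(. | x) on the full input;
    the log-probabilities are evaluated on the masked input that keeps
    only the features in S.
    """
    mask = np.zeros_like(x)
    mask[list(S)] = 1.0
    p_full = model_prob(x)          # P_m(. | x): weights of the expectation
    p_sub = model_prob(x * mask)    # P_m(. | x_S), approximated by masking
    return float(np.sum(p_full * np.log(p_sub + 1e-12)))
```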
  10. Preliminaries : Importance of a feature set • In many

    cases, a class-specific importance is favored. • How important is a feature set S to the predicted class? • Here, the following degenerate conditional distribution is introduced: $$\hat{\mathbb{P}}_m(y \mid x) = \begin{cases} 1 & \text{if } y \in \arg\max_{y'} \mathbb{P}_m(y' \mid x) \\ 0 & \text{otherwise} \end{cases}$$ • We can then define the importance of a subset S with respect to $\hat{\mathbb{P}}_m$ using the modified score, which is the expected log-probability of the predicted class: $$v_x(S) = \hat{\mathbb{E}}_m\bigl[\log \mathbb{P}_m(Y \mid x_S) \mid x\bigr]$$
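Under the degenerate distribution the expectation collapses to the log-probability of the class predicted on the full input. A sketch under the same masking assumption as above:

```python
import numpy as np

def v_score_predicted(model_prob, x, S):
    """Sketch of v_x(S) under the degenerate distribution P̂_m: the
    log-probability that the masked input still receives the class
    predicted on the full input x."""
    mask = np.zeros_like(x)
    mask[list(S)] = 1.0
    y_hat = int(np.argmax(model_prob(x)))   # the class with P̂_m(y|x) = 1
    return float(np.log(model_prob(x * mask)[y_hat] + 1e-12))
```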
  11. Preliminaries : measuring interaction between features • Consider quantifying the

    importance of a given i-th feature for a feature vector x. • A naive way is to compute the importance of the singleton set {i}: $v_x(\{i\})$. • But this ignores interactions between features. • For example, consider performing sentiment analysis on the following sentence: This movie is not heartwarming or entertaining. • We wish to quantify the importance of the feature "not", which plays an important role in the sentence being classified as negative. • But one would expect $v_x(\{\text{not}\}) \approx 0$, because "not" by itself has neither negative nor positive sentiment.
  12. Preliminaries : marginal contributions of a feature • It is

    essential to consider the interactions of a given feature i with other features. • A natural way to assess how feature i interacts with other features is to compute the difference between the importance of all features in S, with and without i. • This difference is called the marginal contribution of i to S, and is given by: $$m_x(S, i) := v_x(S) - v_x(S \setminus \{i\})$$ • To obtain a simple scalar measure for i, we need to aggregate these marginal contributions over all subsets S including i (see the helper sketched below). • The Shapley value is one way to do so.
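In code, the marginal contribution is a one-line wrapper around any of the score sketches above (here `v` is assumed to close over the instance x and accept a set of feature indices):

```python
def marginal_contribution(v, S, i):
    """m_x(S, i) := v_x(S) - v_x(S \\ {i}), defined for subsets S containing i."""
    assert i in S
    return v(S) - v(S - {i})
```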
  13. Preliminaries : Shapley value • For k = 1, 2, …, d, we

    let $\mathcal{S}_k(i)$ denote the set of k-sized feature subsets that contain feature i. • The Shapley value is obtained by averaging the marginal contributions: • first over the sets $\mathcal{S}_k(i)$ for a fixed k, • then over all possible choices of the set size k. $$\phi_x(i) = \frac{1}{d} \sum_{k=1}^{d} \frac{1}{\binom{d-1}{k-1}} \sum_{S \in \mathcal{S}_k(i)} m_x(S, i)$$
  14. Challenge with computing the Shapley value • The exact computation of

    the Shapley value leads to computational difficulties. • We need to calculate marginal contributions for $2^{d-1}$ subsets: $$\phi_x(i) = \frac{1}{d} \sum_{k=1}^{d} \frac{1}{\binom{d-1}{k-1}} \sum_{S \in \mathcal{S}_k(i)} m_x(S, i) = \sum_{S \ni i,\; S \subseteq [d]} \frac{1}{\binom{d-1}{|S|-1}} m_x(S, i)$$ • There are some sampling-based approaches to deal with this problem. • But such approaches suffer from high variance when the number of samples that can be collected per instance is limited.
  15. Key idea : features can be seen as nodes of

    a graph, and they have some relationship. • In many applications, features can be considered as nodes of a graph, and we can define distances between pairs of features based on the graph structure. • Features that are distant in the graph have weak interactions with each other. • For example, an image can be modeled with a grid graph. Pixels that are far apart may have little effect on each other in the computation of the Shapley value. • Or, a text can be represented as a line graph. (Figure: a line graph over the sentence "This is a pen".)
  16. Proposed method : preliminary • We are given a feature vector

    $x \in \mathbb{R}^d$. • We let $G = (V, E)$ denote a connected graph. • Each feature i is assigned to a node $i \in V$. • Edges represent the interactions between features. • The graph induces the following distance function on $V \times V$: $$d_G(l, m) = \text{the number of edges in the shortest path joining } l \text{ to } m$$ • For a given node $i \in V$, its k-neighborhood is the set $$\mathcal{N}_k(i) := \{\, j \in V \mid d_G(i, j) \le k \,\}$$ (Figure: the gray area is an example of $\mathcal{N}_2(i)$.)
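The k-neighborhood is just a depth-limited breadth-first search. A sketch with the graph given as a plain adjacency dict (the names are illustrative):

```python
from collections import deque

def k_neighborhood(adj, i, k):
    """N_k(i) = { j : d_G(i, j) <= k }, computed by BFS from node i."""
    dist = {i: 0}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        if dist[u] == k:        # do not expand past distance k
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# Line graph for a 4-word sentence:
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_neighborhood(line, 1, 1))  # {0, 1, 2}
```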
  17. Proposed method 1 : Local-Shapley (L-Shapley) • Definition 1: Given a model

    $\mathbb{P}_m$, a sample x, and a feature i, the L-Shapley estimate of order k on a graph G is given by: $$\hat{\phi}^k_x(i) = \frac{1}{|\mathcal{N}_k(i)|} \sum_{T \ni i,\; T \subseteq \mathcal{N}_k(i)} \frac{1}{\binom{|\mathcal{N}_k(i)| - 1}{|T| - 1}} m_x(T, i)$$ • That is, the sum in the original Shapley value $$\phi_x(i) = \sum_{S \ni i,\; S \subseteq [d]} \frac{1}{\binom{d-1}{|S|-1}} m_x(S, i)$$ is restricted to the k-neighborhood of i.
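A brute-force sketch of Definition 1, reusing the neighborhood and score helpers above (`v` closes over the instance x). It still enumerates all subsets, but the exponent now depends on the neighborhood size rather than on d:

```python
from itertools import combinations
from math import comb

def l_shapley(v, neighborhood, i):
    """L-Shapley estimate of feature i (Definition 1): the Shapley sum
    restricted to subsets T of N_k(i) that contain i; costs
    2^(|N_k(i)|-1) score evaluations instead of 2^(d-1)."""
    others = sorted(neighborhood - {i})
    n = len(neighborhood)
    total = 0.0
    for size in range(len(others) + 1):
        for rest in combinations(others, size):
            T = frozenset(rest) | {i}
            # m_x(T, i), weighted by 1 / C(|N_k(i)|-1, |T|-1)
            total += (v(T) - v(T - {i})) / comb(n - 1, len(T) - 1)
    return total / n
```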
  18. Proposed method 2 : Connected-Shapley (C-Shapley) • Definition 2: Given a model

    $\mathbb{P}_m$, a sample x, and a feature i, the C-Shapley estimate of order k on a graph G is given by: $$\hat{\phi}^k_x(i) = \sum_{U \in \mathcal{C}_k(i)} \frac{2}{(|U|+2)(|U|+1)|U|} m_x(U, i)$$ where $\mathcal{C}_k(i)$ denotes the set of all subsets of $\mathcal{N}_k(i)$ that contain node i and whose nodes are connected in G. (Compare with the original Shapley value $\phi_x(i) = \sum_{S \ni i,\; S \subseteq [d]} \binom{d-1}{|S|-1}^{-1} m_x(S, i)$.)
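On a line graph (the text case), the connected subsets containing i are exactly the contiguous windows around i, which is what makes C-Shapley cheap. A sketch under that assumption:

```python
def c_shapley_line(v, i, d, k):
    """C-Shapley estimate of feature i (Definition 2) on a line graph
    with d nodes: the connected subsets of N_k(i) containing i are the
    windows [l, r] with l <= i <= r, so only O(k^2) subsets are scored."""
    total = 0.0
    for l in range(max(0, i - k), i + 1):
        for r in range(i, min(d - 1, i + k) + 1):
            U = frozenset(range(l, r + 1))
            m = v(U) - v(U - {i})              # marginal contribution m_x(U, i)
            u = len(U)
            total += 2.0 * m / ((u + 2) * (u + 1) * u)
    return total
```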
  19. Examples • The left subset (blue and red) is summed over

    in L-Shapley but not in C-Shapley, because it is not connected. • The right subset (blue and red) is summed over in both L-Shapley and C-Shapley.
  20. Properties : The error between the L-Shapley value and the true Shapley value

    is upper-bounded. • S is the subset of the k-nearest features of i. • $x_U$ is the sub-vector containing the k-nearest features of i. • $x_V$ is the sub-vector of the features not included in S.
  21. Properties : The error between the C-Shapley value and the true Shapley value

    is upper-bounded. • S is the subset of the k-nearest features of i. • $U \ni i$ is a connected subset within S.
  22. Experiments : tasks and baselines • Tasks : image classification

    and text classification • Baselines : model-agnostic methods • KernelSHAP : regression-based approximation of the Shapley value • SampleShapley : random-sampling-based approximation of the Shapley value • LIME : a model-agnostic interpretation method that linearly approximates the black-box function around the target instance.
  23. Experiments : evaluation method • Evaluation method : • The

    change in log-odds scores of the predicted class before and after masking the top features ranked by importance scores, where masked words are replaced by zero paddings. • A larger decrease in log-odds means that the algorithm's importance scores correctly captured the important features.
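A sketch of this metric, again assuming a hypothetical `model_prob` that returns class probabilities and zero-masking per the slide; `scores` is the importance vector produced by an attribution method:

```python
import numpy as np

def log_odds_drop(model_prob, x, scores, top_m):
    """Change in the log-odds of the predicted class after zero-masking
    the top_m features ranked by `scores`; a larger drop means the
    attribution found the features the model actually relies on."""
    y_hat = int(np.argmax(model_prob(x)))
    def log_odds(q):
        return np.log(q[y_hat] + 1e-12) - np.log(1.0 - q[y_hat] + 1e-12)
    x_masked = x.copy()
    x_masked[np.argsort(scores)[::-1][:top_m]] = 0.0  # mask most important features
    return float(log_odds(model_prob(x)) - log_odds(model_prob(x_masked)))
```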
  24. Experiment (1/3) : Text classification • We study the performance on

    three neural models and three datasets: • IMDB Review : sentiment classification, word-based CNN • AG news : category classification, character-based CNN • Yahoo! Answers : category classification, LSTM
  25. Experiment (1/3) : result • On IMDB with Word-CNN, the simplest

    model among the three, L-Shapley achieves the best performance, while LIME, KernelSHAP, and C-Shapley achieve slightly worse performance. (Figure: change in log-odds as top words are masked; a larger drop is better.)
  26. Experiment (2/3) : Image classification • Datasets : • A subset

    of MNIST : only "3" and "8" are included. • A subset of CIFAR-10 : only deer and horses are included.
  27. Experiment (2/3) : Examples for misclassified images • The above image is

    a "3", and the below image is an "8". They are misclassified as "8" and "3", respectively. • The masked pixels are colored red if activated (white) and blue otherwise. • The result seems to show the "reasoning" of the classifier.
  28. Experiment (3/3) : Evaluation by human subjects (5 people) • They

    use Amazon Mechanical Turk to compare L-Shapley, C-Shapley, and KernelSHAP on IMDB movie reviews (200 reviews). • Experimental questions : • Are humans able to make a decision with the top words alone? • Are humans unable to make a decision with the top words masked? • They ask subjects to classify the sentiment of texts into five categories : strongly positive (+2), positive (+1), neutral (0), negative (-1), strongly negative (-2).
  29. Experiment (3/3) : Evaluation by human subjects (5 people) • The texts

    come in three types : 1. raw reviews 2. the top 10 words of each review, ranked by L-Shapley, C-Shapley, or KernelSHAP 3. reviews with the top words masked • Words are masked by L-Shapley, C-Shapley, or KernelSHAP until the probability score of the correct class produced by the model falls below 10%. • Around 14.6% of the words in each review are masked for L-Shapley and C-Shapley, and 31.6% for KernelSHAP.
  30. Experiment (3/3) : Evaluation by human subjects (5 people) • Evaluation

    metrics : • Consistency (0 or 1) between the true labels and the labels given by human subjects. • The standard deviation of the scores on each review • used as a measure of disagreement between humans. • The absolute value of the averaged scores • used as a measure of the confidence of the decision.
  31. Experiment (3/3) : result • Humans become more consistent and confident

    when they are presented with the top words. On the other hand, when the top words are masked, humans make mistakes more easily and are less certain. • C-Shapley yields the highest performance in terms of consistency, agreement, and confidence. • When the top words are masked, L-Shapley harms human judgement the most among the three algorithms.
  32. Conclusion • They have proposed two algorithms, L-Shapley and C-

    Shapley, for instance-wise and model-agnostic interpretation. • They demonstrated the superior performance of these algorithms compared to other methods on black-box models, in both text and image classification, with both quantitative metrics and human evaluation.