Slide 15
Local Interpretable Model-agnostic Explanations (LIME)
1. Pick a model class interpretable by humans
- e.g., a linear model, a shallow decision tree, sparse features, …
- May not be globally faithful…
2. Locally approximate the global (blackbox) model
- The simple model is globally bad, but locally good
Locally-faithful simple decision boundary ➔ good explanation for the prediction
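The two-step recipe above can be sketched in a few lines: sample perturbations around the instance, query the blackbox, weight samples by proximity, and fit a simple (here linear) surrogate whose coefficients serve as the explanation. This is a minimal illustrative sketch, not the official `lime` library; the function name, sampling scale, and kernel width are all assumptions.

```python
import numpy as np

def lime_explain(blackbox, x, n_samples=500, kernel_width=0.75, seed=0):
    """Sketch of the LIME recipe: fit a locally weighted linear
    surrogate around instance x. Hyperparameters (sampling scale,
    kernel width) are illustrative assumptions, not canonical values."""
    rng = np.random.default_rng(seed)
    # Step 2: probe the blackbox on perturbations near x
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = blackbox(Z)
    # Weight each perturbed sample by its proximity to x
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-dist**2 / kernel_width**2)
    # Step 1's "interpretable class" here is a linear model:
    # weighted least squares on [features | intercept]
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return coef[:-1]  # per-feature local weights = the explanation

# Hypothetical blackbox: f(z) = z0^2 + z1. Near x = (1, 0) its local
# linear behavior has slope ~2 in z0 and ~1 in z1, which the locally
# weighted surrogate should recover even though it is globally bad.
explanation = lime_explain(lambda Z: Z[:, 0]**2 + Z[:, 1],
                           np.array([1.0, 0.0]))
```

Note the design choice: the exponential proximity kernel is what makes the surrogate *locally* faithful — distant samples get near-zero weight, so the linear fit only has to match the blackbox in a neighborhood of x.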