1. Local Interpretability - Focus on how a model works around a single observation or a cluster of similar observations
2. Global Interpretability - Focus on how a model works across all observations (e.g. coefficients from a linear regression)
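As a quick illustration of the global case, the fitted coefficients of a linear model summarize each feature's effect across the whole dataset. A minimal sketch, using made-up data and feature names:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: three features, outcome driven by the first two
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 100)

model = LinearRegression().fit(X, y)
# Each coefficient describes a feature's effect across ALL observations,
# which is what makes this a *global* explanation
print(dict(zip(["f0", "f1", "f2"], model.coef_.round(2))))
```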
- Understand why a prediction is positive/negative
- Trust individual predictions (i.e. the reasons for a prediction make sense to domain experts)
- Provide guidance for intervention strategies (e.g. the cancer is predicted to be caused by X, which can be treated with Y)

These problems have been addressed by recent literature.
develops models that detect risk/fraud among merchants who use our QuickBooks products to perform credit card / ACH transactions with their customers. When we evaluate individual transactions and deem some to be high risk, we pass them along to agents who review them more closely and determine whether to take action on the transaction. However, we want to provide guidance to these agents: rather than simply handing over a risk score, we want to provide human-readable intuition about the score that points the agents in (what we believe to be) the right direction for their investigation.

Edmunds: Dealer Churn
help generate new ideas that can be tested experimentally
- A global understanding of the 'causes' of an outcome can drive significant business/product changes

This problem has not received much attention in the machine learning literature.
text
- Model agnostic
- Focus on one observation (x) at a time
- Sample other observations (z), weighted by distance to x
- Compute f(z) (the predicted outcome)
- Select K features with LASSO, then compute least squares
- Coefficients from least squares are the 'local effects' (see the sketch below)
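Below is a minimal sketch of these steps in Python. The specifics are assumptions made for illustration, not the reference LIME implementation: the function name `lime_local_effects`, the Gaussian perturbation scheme scaled by the background data's standard deviations, the exponential distance kernel, and the `alpha`/`kernel_width` values are all hypothetical choices.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def lime_local_effects(f, x, X_background, K=5, n_samples=500, kernel_width=0.75):
    """Sketch of a LIME-style local explanation for one observation x.

    f:            black-box prediction function, returns a scalar per row
    x:            observation to explain, shape (n_features,)
    X_background: data used to set the perturbation scale, shape (n, n_features)
    """
    rng = np.random.default_rng(0)

    # Sample perturbed observations z around x (hypothetical Gaussian scheme,
    # scaled by the background data's per-feature standard deviation)
    scale = X_background.std(axis=0)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))

    # Weight each z by its distance to x via an exponential kernel
    d = np.linalg.norm((Z - x) / scale, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))

    # Query the black-box model for the predicted outcome f(z)
    y = f(Z)

    # Select K features with LASSO on the weighted samples
    lasso = Lasso(alpha=0.01).fit(Z, y, sample_weight=w)
    selected = np.argsort(np.abs(lasso.coef_))[-K:]

    # Refit weighted least squares on the selected features;
    # its coefficients are the 'local effects'
    ls = LinearRegression().fit(Z[:, selected], y, sample_weight=w)
    return dict(zip(selected, ls.coef_))
```

For example, `lime_local_effects(model.predict, X[0], X)` would return a map from feature index to local effect for the first row of `X`.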