
Just-So Stories for AI: Explaining Black Box Predictions

Sam Ritchie
September 29, 2017


As machine learning techniques become more powerful, humans and companies are offloading more and more ethical decisions to ML models. Which person should get a loan? Where should I direct my time and attention? Algorithms often outperform humans, so we cede our control happily and love the extra time and leverage this gives us.

There's lurking danger here. Many of the most successful machine learning algorithms are black boxes - they give us predictions without the "why" that accompanies human decision-making. Trust without understanding is a scary thing. Why did the self-driving car swerve into traffic, killing its driver? How did the robotic doctor choose that dose? The ability to demand a plausible explanation for each decision is humanity's key to maintaining control over our ethical development.

In this talk we'll explore several state-of-the-art strategies for explaining the decisions of our black box models. We'll talk about why some algorithms are so difficult to interpret and discuss the insights that our explanation-generating models can give us into free will and how humans invent explanations for our own decisions.

Transcript

  1. Just-So Stories for AI: Explaining Black Box Predictions
     Sam Ritchie (@sritchie)
     Strange Loop 2017, St. Louis, Missouri
  2. Hi!

  3. Outline
     • ML techniques for detecting fraud
     • Interpretable models vs. “black box” models
     • Explanations (?) for black box models
     • Why this is so important!
  4. Decision Trees :)
     • Interpretable
     • Every decision has a built-in explanation!
     • Easy to Trust
  5. Decision Trees :)
     • Interpretable
     • Every decision has a built-in explanation! (see the sketch below)
     • Easy to Trust
     • NOT a Black Box.
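
     A minimal sketch (not from the talk) of that built-in explanation, in the Scala style of the later slides; the Tree types and predict function here are illustrative assumptions. The tree returns the path of tests it took alongside its score, so every prediction carries its own explanation:

       sealed trait Tree
       case class Leaf(score: Double) extends Tree
       case class Split(feature: String, threshold: Double,
                        left: Tree, right: Tree) extends Tree

       // Walk the tree, collecting the test taken at each split. The list
       // of tests returned alongside the score *is* the explanation.
       def predict(tree: Tree, features: Map[String, Double],
                   path: List[String] = Nil): (Double, List[String]) =
         tree match {
           case Leaf(score) => (score, path.reverse)
           case Split(f, t, left, right) =>
             if (features.getOrElse(f, 0.0) <= t)
               predict(left, features, s"$f <= $t" :: path)
             else
               predict(right, features, s"$f > $t" :: path)
         }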
  6. Decision Trees :(
     • Shallow trees are not very accurate
     • Deep trees are not interpretable
  7. Decision Trees :(
     • Shallow trees are not very accurate
     • Deep trees are not interpretable
     • Deep trees can overfit to a training set
  8. Decision Trees :(
     • Shallow trees are not very accurate
     • Deep trees are not interpretable
     • Deep trees can overfit to a training set
     • Limitations on Public Data
  9. case class Predicate(
       name: String,
       op: Operation,
       constant: FeatureValue)

     val predicate = Predicate(
       "unique_card_ip_address_24_hrs",
       Op.Gt,
       10)
  10. case class Explanation(preds: List[Predicate])

      val explanation = Explanation(List(
        Predicate("unique_card_ip_address_24_hrs", Op.Gt, 10),
        Predicate("billing_country_matches_card_country", Op.NotEq, true),
        Predicate("card_type", Op.Eq, "mastercard")))
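
      A sketch of how an Explanation might be evaluated against a transaction's feature map. The holds helpers are assumptions for illustration (and the Gt case assumes an Ordering[FeatureValue] is in scope); they are not the production implementation:

        // True if a single predicate matches the transaction's features.
        def holds(p: Predicate, features: Map[String, FeatureValue]): Boolean =
          features.get(p.name).exists { value =>
            p.op match {
              case Op.Gt    => Ordering[FeatureValue].gt(value, p.constant)
              case Op.Eq    => value == p.constant
              case Op.NotEq => value != p.constant
            }
          }

        // An explanation is a conjunction: it fires only if every predicate holds.
        def holds(e: Explanation, features: Map[String, FeatureValue]): Boolean =
          e.preds.forall(holds(_, features))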
  11. Training Explanation Models (a rough code sketch follows this list)
      1. Generate a ton of possible predicates
      2. Weight predicates by precision, given some minimum recall (5% or higher)
      3. Sample the top-N predicates (1000 or so)
      4. Set those aside
      5. Combine all predicates to make N x N “explanations”
      6. Go back to step 3!
      7. Iterate...
      8. Set aside the highest-precision “explanation” of ALL
      9. Remove (“toggle off”) the positives that this catches, go back to step 2
      10. Coverage! (~80 explanations)
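
      A rough Scala sketch of that loop, reusing the holds helper above. All names, the beam width, and the scoring details here are assumptions; the production weighting and sampling are more involved:

        case class Labeled(features: Map[String, FeatureValue], fraud: Boolean)

        // Of everything an explanation catches, what fraction is actually fraud?
        def precision(e: Explanation, data: Seq[Labeled]): Double = {
          val caught = data.filter(d => holds(e, d.features))
          if (caught.isEmpty) 0.0 else caught.count(_.fraud).toDouble / caught.size
        }

        // What fraction of all fraud does an explanation catch?
        def recall(e: Explanation, data: Seq[Labeled]): Double = {
          val frauds = data.filter(_.fraud)
          if (frauds.isEmpty) 0.0
          else frauds.count(d => holds(e, d.features)).toDouble / frauds.size
        }

        def cover(candidates: Seq[Predicate], data: Seq[Labeled],
                  topN: Int = 1000, minRecall: Double = 0.05): List[Explanation] = {
          var remaining = data
          var found = List.empty[Explanation]
          while (remaining.exists(_.fraud)) {
            // Steps 1-4: score single-predicate explanations, keep the top N.
            var beam = candidates.map(p => Explanation(List(p)))
              .filter(e => recall(e, remaining) >= minRecall)
              .sortBy(e => -precision(e, remaining))
              .take(topN)
            // Steps 5-7: combine into N x N conjunctions, re-score, iterate.
            for (_ <- 1 to 2) {
              val grown = for (e <- beam; p <- candidates.take(topN))
                yield Explanation(p :: e.preds)
              beam = (beam ++ grown)
                .filter(e => recall(e, remaining) >= minRecall)
                .sortBy(e => -precision(e, remaining))
                .take(topN)
            }
            if (beam.isEmpty) return found.reverse // no rule meets the recall floor
            // Step 8: set aside the highest-precision explanation of all...
            val best = beam.maxBy(e => precision(e, remaining))
            found ::= best
            // Step 9: ...remove the positives it catches, and repeat to coverage.
            remaining = remaining.filterNot(d => d.fraud && holds(best, d.features))
          }
          found.reverse
        }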
  12. {
        "prediction": 0.48,
        "threshold": 0.3,
        "explanation": [
          { "feature": "unique_card_ip_address_24_hrs", "op": ">", "constant": 10 },
          { "feature": "billing_country_matches_card_country", "op": "!=", "constant": true },
          { "feature": "card_type", "op": "==", "constant": "mastercard" }
        ]
      }
  13. “Human decisions might admit post-hoc interpretability despite the black box nature of human brains, revealing a contradiction between two popular notions of interpretability.”
  14. “You can ask a human, but… what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation.” - Peter Norvig
  15. "In the rush to gain acceptance for machine learning and

    to emulate human intelligence, we should be careful not to reproduce pathological behavior at scale."
  16. “If I apply for a loan and I get turned down, whether it’s by a human or by a machine, and I say what’s the explanation, and it says well you didn’t have enough collateral. That might be the right explanation or it might be it didn’t like my skin colour. And I can’t tell from that explanation.”