
Explainable Artificial Intelligence

David Low
February 28, 2019


Presented at Data Science: Innovation & Trends in 2019, organised by Impress.ai and Accenture


Transcript

  1. How much do we know about them? How does the AI system arrive at such a decision? Under what circumstances can the predictions be trusted? As ML practitioners, are you able to explain every single decision made? ...
  2. A highly accurate model BUT... • Does the model learn the RIGHT things? The answer is "NO". Instead of learning the appearance features of the dogs, the model picks up signal from the background (snow, in this case). Source: "Why Should I Trust You?": Explaining the Predictions of Any Classifier by Ribeiro et al., 2016
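The cited LIME paper explains an individual prediction by fitting a simple, interpretable model to the black box's behaviour in a small neighbourhood around one input. A minimal one-feature sketch of that idea (function names and parameters here are illustrative, not the actual `lime` library API):

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque model; the explainer only queries it.
    return x * x

def local_linear_explanation(model, x0, n_samples=500, width=0.5, seed=0):
    """LIME-style sketch for one numeric feature: perturb the input,
    weight samples by proximity to x0, and fit a weighted linear
    surrogate whose slope serves as the local feature importance."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    # Exponential kernel: perturbations near x0 get more weight.
    ws = [math.exp(-((x - x0) ** 2) / width ** 2) for x in xs]
    # Closed-form weighted least squares for y ~ a + b * x.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    a = my - b * mx
    return a, b

intercept, slope = local_linear_explanation(black_box, x0=3.0)
# For f(x) = x^2 the fitted local slope near x0 = 3 should be close to 2 * x0 = 6.
```

The surrogate is faithful only locally: the slope approximates the model's behaviour around the single instance being explained, which is exactly how a snow-vs-dog shortcut would surface as an unexpectedly important background feature.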
  3. Is there a need to explain an ML model? • Safety: make sure the system is making sound decisions. • Debugging: understand why a system doesn't work, so we can fix it. • Science: enable new discoveries. • Mismatched objectives and multi-objective trade-offs: the system may not be optimizing the true objective. • Legal / Ethics: legally required to provide an explanation and/or avoid discriminating against particular groups due to bias in the data.