AI system arrive at such a decision? Under what circumstances can the predictions be trusted? As ML practitioners, are you able to explain every single decision the model makes? ...
the RIGHT things? The answer is "NO". Instead of learning the appearance features of the dogs, the model picks up a signal from the background (snow, in this case). Source: "Why Should I Trust You?": Explaining the Predictions of Any Classifier by Ribeiro et al., 2016
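The explanation method used in that study is LIME, introduced in the cited paper. As a rough illustration, here is a minimal sketch (using the open-source `lime` package) of how one might probe which image regions drive a prediction; the `predict_fn` and the placeholder image below are hypothetical stand-ins for a real trained model and a real photo.

```python
# Minimal sketch: check *what* an image classifier actually relies on, using
# LIME (the paper cited above). `predict_fn` and the input image are toy
# stand-ins; in practice, pass your trained model's probability function
# and a real photo.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Stand-in for model.predict: takes a batch of HxWx3 images and returns
    # class probabilities for [husky, wolf]. This toy "model" scores brighter
    # (snowier) images as more wolf-like, mimicking the spurious background
    # correlation described above.
    brightness = images.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - brightness, brightness], axis=1)

image = np.random.randint(0, 255, size=(96, 96, 3)).astype(np.float64)  # placeholder photo

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=200
)

# Pull out the superpixels that most support the top predicted class. If the
# highlighted regions are snow rather than the animal, the model is right for
# the wrong reasons -- exactly the failure mode in the husky-vs-wolf study.
top_label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    top_label, positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(img / 255.0, mask)  # image with explaining regions outlined
```

In a real workflow, `predict_fn` would wrap the trained classifier and `image` would be a held-out photo; the highlighted superpixels reveal whether the model attends to the animal or to the background.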
• Trust: Make sure the system is making sound decisions.
• Debugging: Understand why a system doesn't work, so we can fix it.
• Science: Enable new discoveries.
• Mismatched objectives and multi-objective trade-offs: The system may not be optimizing the true objective.
• Legal / Ethics: Legally required to provide an explanation and/or to avoid discriminating against particular groups due to bias in the data.