What is Interpretable ML / Explainable AI?
Why is Interpretability required in Machine Learning?
How is it relevant to me?
Types of Interpretability (Global & Local)
IML with Python References (LIME, SHAP)
• Fairness: Ensuring that predictions do not implicitly or explicitly discriminate against protected groups.
• Privacy: Ensuring that sensitive information in the data is protected.
• Reliability or Robustness: Ensuring that small changes in the input do not lead to large changes in the prediction.
• Causality: Checking that only causal relationships are picked up.
• Trust: It is easier for humans to trust a system that explains its decisions than a black box.
• Legal: Compliance requirements (such as the GDPR) emphasise a right to explanation.
• You are a Data Scientist who takes pride in your work. But what is that pride worth if you have no clue why your model does what it does?
• You may not be a Data Scientist or ML Engineer, but as a technologist or team member on a given project, you want to validate what is inside it.
• A lot of the time, the data points used for an ML model are real human beings. That could be you and I, today or some day!
Theory: Shapley values, a method from coalitional game theory, tell us how to fairly distribute the "payout" (the prediction) among the features. The features (columns) of a data instance act as players in a coalition. https://datascience.sia-partners.com/en/blog/interpretable-machine-learning
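The idea above can be sketched directly from the game-theoretic definition: a feature's Shapley value is its average marginal contribution to the payout over all possible coalitions of the other features. Below is a minimal, self-contained illustration using a toy additive "payout" function; the feature names and scores are made up for illustration, not taken from any real model (real SHAP implementations approximate this, since the exact computation is exponential in the number of features).

```python
from itertools import combinations
from math import factorial

def shapley_values(players, payout):
    """Exact Shapley values: for each player, the weighted average of its
    marginal contribution payout(S + {p}) - payout(S) over all coalitions S
    that do not contain it."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for size in range(n):
            # Classic Shapley weight: |S|! * (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for coalition in combinations(others, size):
                s = set(coalition)
                total += weight * (payout(s | {p}) - payout(s))
        values[p] = total
    return values

# Toy "model": the payout of a coalition of features is a simple additive
# score (hypothetical feature names and scores, purely for illustration).
scores = {"age": 10.0, "income": 20.0, "zip": 5.0}

def payout(coalition):
    return sum(scores[f] for f in coalition)

vals = shapley_values(list(scores), payout)
# For a purely additive game, each feature's Shapley value equals its own
# score, and the values sum to the payout of the full coalition (efficiency).
```

The efficiency property shown in the final comment (attributions sum to the full prediction) is exactly what makes Shapley values a "fair" distribution of the payout among the features.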
• Easy to get up and running
• A strong theoretical foundation (in game theory) makes SHAP a good candidate for legal/compliance requirements
• TreeExplainer (TreeSHAP) is fast for tree-based models
• SHAP provides a unified package: local and global interpretability, all based on common Shapley values (whereas LIME is local only)
• Interactive visualizations in notebooks are more intuitive for business stakeholders