[PyCon JP 2018] Interpretable Machine Learning, making black box models explainable with Python!

David Low
September 17, 2018


Ever wondered how a machine learning model makes its predictions? In particular, how does a 256-layer deep neural network distinguish a Corgi from a Husky puppy? Come to my talk and I’ll enlighten you by demystifying black-box ML models with some Python magic!

Machine learning models are increasingly complex thanks to advances in model architectures such as deep neural networks and ensembles. While these sophisticated models achieve higher accuracy, they are black boxes: how a decision was made cannot be fully understood, which carries risks of misrepresentation, discrimination, or overfitting. Furthermore, interpretability is crucial for gaining the trust of regulators and users in ML models.
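As one flavor of what "making a black box explainable with Python" can look like, here is a minimal sketch using permutation importance from scikit-learn: shuffle one feature at a time on held-out data and measure how much the model's score drops. (This is an illustrative assumption on my part; the specific tools covered in the talk are not listed in this abstract.)

```python
# Illustrative sketch: permutation importance on a "black box" ensemble.
# Assumes scikit-learn >= 0.22 (sklearn.inspection.permutation_importance).
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box": an ensemble of 200 decision trees.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn on held-out data and record the score drop:
# features the model relies on cause a large drop when scrambled.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Rank features by mean importance, most influential first.
feature_names = load_wine().feature_names
ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Model-agnostic techniques like this (and libraries such as LIME or SHAP) let you interrogate any fitted predictor without inspecting its internals.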
