Ever wonder how a machine learning model makes its predictions? In particular, how does a 256-layer deep neural network distinguish a Corgi from a Husky puppy? Come to my talk and I’ll enlighten you by demystifying black-box ML models with some Python magic!
Machine learning models are increasingly complex thanks to advances in model architectures such as deep neural networks and ensemble models. While these sophisticated models achieve higher accuracy, they are black boxes: how a decision was made cannot be fully understood, which carries risks of misrepresentation, discrimination, and overfitting. Interpretability is therefore crucial for earning the trust of regulators and users in ML models.
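As a taste of the kind of model inspection the talk covers, here is a minimal sketch using permutation importance, which measures how much shuffling a feature degrades a trained model's score. The specific libraries and dataset (scikit-learn, a random forest, the iris data) are illustrative assumptions, not necessarily the talk's exact demo:

```python
# Illustrative sketch: peeking inside a "black box" with permutation importance.
# Assumes scikit-learn is installed; the model/dataset choices are examples only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an ensemble model (a black box compared to, say, linear regression).
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy drops:
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, mean in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {mean:.3f}")
```

Techniques like this turn "the model said so" into a ranked explanation of which inputs actually drove the prediction.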