As machine learning is adopted across more domains, black-box models are increasingly deployed on the promise of higher accuracy. That accuracy, however, comes at the cost of interpretability, which poses a barrier to wider adoption in critical areas and fuels skepticism among the individuals these systems affect. This talk focuses on the importance of interpretable machine learning: why it is crucial from both technical and ethical perspectives, and what its current limitations are. It also gives an overview of some of the relevant tools and packages (e.g. LIME, SHAP).
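
To make the two packages concrete, below is a minimal sketch of how each is typically used: SHAP for Shapley-value feature attributions across a dataset, and LIME for a local, per-instance explanation. The model and dataset here are illustrative assumptions, not material from the talk; it assumes the `shap`, `lime`, and `scikit-learn` packages are installed.

```python
# Minimal sketch: explaining a black-box model with SHAP and LIME.
# The model/dataset choices are illustrative assumptions.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a "black-box" model on a standard regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP: Shapley-value attributions for every feature of every prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global view of feature influence

# LIME: fits an interpretable local surrogate around a single instance.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="regression"
)
lime_exp = lime_explainer.explain_instance(
    X.values[0], model.predict, num_features=5
)
print(lime_exp.as_list())  # top local feature contributions
```

The two tools embody the contrast the talk draws: SHAP distributes a prediction among features with game-theoretic guarantees, while LIME approximates the model's behavior near one input with a simple surrogate.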