Modern ML algorithms achieve remarkable results thanks to the continuous development of ever-deeper architectures that can identify complex patterns approaching human understanding. These results, however, come at a price: understanding ML predictions is becoming increasingly difficult and, in some cases, practically impossible. The problem is made more pressing by regulatory requirements that often explicitly demand well-formed, understandable explanations for automatic decisions made by ML-powered processes. This difficulty is no accident: the need for ML arises precisely when you know the questions and the answers but don't know an easy way to get from one to the other.