and interpret. Regulated industries such as medicine and finance often refrain from using deep learning techniques for this reason. Interpretability techniques help users to:
• Trust and understand why predictions are made in a certain way
• Provide accountability when the predictions are used for important decision making
architectures. CAM works only for networks whose final layers are a global average pooling followed by a single fully connected classification layer.

Alpha = Weights[:, class_index]            # (512,)
FeatureMaps = getLastConvLayer()           # (7, 7, 512)
CAM = (Alpha * FeatureMaps).sum(axis=-1)   # weighted sum over channels -> (7, 7)
# Upsample to the original image size and overlay on the input

In Grad-CAM, the equivalent of global average pooling is performed on the gradients of the output score with respect to the feature maps Aij, so the weights no longer have to come from a final fully connected layer.
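The gradient-weighted variant can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration only: the choice of torchvision VGG16, the layer index, the 224x224 random tensor standing in for a preprocessed image, and the variable names are all assumptions made for the example, not part of any specific codebase.

# Minimal Grad-CAM sketch (illustrative; assumes a recent torchvision)
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()

# Capture the last convolutional feature maps A (1, 512, 7, 7) and keep their gradients
activations = []
def save_activation(module, inputs, output):
    output.retain_grad()
    activations.append(output)

model.features[-1].register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed input image
scores = model(image)                          # (1, 1000) class scores
class_index = scores.argmax(dim=1).item()
scores[0, class_index].backward()              # gradients of the chosen class score

A = activations[0]                                    # feature maps, (1, 512, 7, 7)
alpha = A.grad.mean(dim=(2, 3), keepdim=True)         # global-average-pool the gradients -> (1, 512, 1, 1)
cam = F.relu((alpha * A).sum(dim=1, keepdim=True))    # weighted sum over channels -> (1, 1, 7, 7)
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1] for overlaying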
PyTorch and TensorFlow, supporting the following algorithms:
• Gradient Attribution
• Integrated Gradients
• Smoothed Gradients
• GradCAM
• FullGrad
The library features a consistent API across the different techniques as well as a benchmarking utility.
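As a hypothetical illustration of what a consistent API across techniques can look like, the sketch below defines a common base class that every technique implements. The class and method names (Attribution, attribute) and the simple input-gradient example are assumptions made for this sketch, not the library's actual interface.

# Hypothetical sketch of a shared interface; names are illustrative only
from abc import ABC, abstractmethod
import torch

class Attribution(ABC):
    # Common contract: construct from a model, return a saliency map for an input and target class
    def __init__(self, model: torch.nn.Module):
        self.model = model

    @abstractmethod
    def attribute(self, x: torch.Tensor, target: int) -> torch.Tensor:
        ...

class GradientAttribution(Attribution):
    # Simplest case: saliency = |d(class score) / d(input)|, summed over colour channels
    def attribute(self, x: torch.Tensor, target: int) -> torch.Tensor:
        x = x.clone().detach().requires_grad_(True)
        self.model(x)[0, target].backward()
        return x.grad.abs().sum(dim=1)

# Every technique is then invoked the same way, e.g.
#   saliency = GradientAttribution(model).attribute(image, target=class_index)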
techniques for interpretability, especially neuron visualisation
• Support other modalities such as text and speech
A possible line of research emerged while testing the FullGrad technique: the heat maps it produced were not very class-discriminative. Combining ideas from GradCAM could potentially address this.