Multi-scale local explanation approach for image analysis using model-agnostic explainable artificial intelligence (XAI)
Abstract
The recent success of deep neural networks has driven remarkable growth in Artificial Intelligence (AI) research and has attracted much interest over the past few years. However, one of the main obstacles to the broad adoption of deep learning models such as Convolutional Neural Networks (CNNs) is the lack of understanding of their decisions. Local Interpretable Model-agnostic Explanations (LIME) is an explanation method that produces a coarse heatmap as a visual explanation, highlighting the superpixels that most affect the CNN's decision. This thesis explores and develops a multi-scale scheme for LIME that explains decisions made by CNN models through heatmaps ranging from coarse to finer scales. More precisely, when LIME highlights a large superpixel at the coarse scale, it may be only a few tiny regions within that superpixel that influenced the model's prediction at the finer scale. We therefore propose a multi-scale scheme for LIME, together with two weighting approaches — one based on a Gaussian distribution and one parameter-free framework — to produce visual explanations observed at different scales. We evaluated the proposed multi-scale scheme on the Flowers dataset from TensorFlow and a biological dataset, Camelyon16. The results show that the explanations are faithful to the underlying model and that our visualizations are reasonably interpretable.
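The core idea — scoring superpixels at a coarse scale and then at a finer scale to localize the regions driving a prediction — can be sketched with a toy occlusion-based variant. This is an illustrative simplification, not the thesis's method: it uses square grid "superpixels" and a mock intensity-based `predict` function instead of LIME's segmentation and linear surrogate, and the function names are hypothetical.

```python
import numpy as np

def grid_superpixels(h, w, cell):
    """Assign each pixel a grid-cell superpixel id (toy stand-in for segmentation)."""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    n_cols = -(-w // cell)  # ceil division: cells per row
    return rows[:, None] * n_cols + cols[None, :]

def occlusion_importance(image, predict, segments, baseline=0.0):
    """Score each superpixel by how much the prediction drops when it is occluded."""
    base = predict(image)
    scores = {}
    for s in np.unique(segments):
        perturbed = image.copy()
        perturbed[segments == s] = baseline
        scores[s] = base - predict(perturbed)
    return scores

def heatmap(segments, scores):
    """Paint each superpixel's importance score back onto the image grid."""
    out = np.zeros(segments.shape)
    for s, v in scores.items():
        out[segments == s] = v
    return out

# Mock "model": its output depends on the mean intensity of a small window,
# so only a tiny bright patch actually drives the prediction.
rng = np.random.default_rng(0)
image = rng.random((32, 32)) * 0.1
image[8:12, 8:12] = 1.0                      # the decisive tiny region
predict = lambda img: img[6:14, 6:14].mean()

# Coarse scale: 16x16 superpixels highlight one whole quadrant.
seg_coarse = grid_superpixels(32, 32, 16)
coarse = heatmap(seg_coarse, occlusion_importance(image, predict, seg_coarse))

# Finer scale: 4x4 superpixels isolate the tiny decisive region inside it.
seg_fine = grid_superpixels(32, 32, 4)
fine = heatmap(seg_fine, occlusion_importance(image, predict, seg_fine))
```

At the coarse scale the entire top-left quadrant lights up, while the finer scale narrows the explanation down to the small bright patch — the coarse-to-fine refinement the multi-scale scheme is built around.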