Interpretation of Neural Networks Is Fragile
Author: Abubakar Abid, James Zou, Amirata Ghorbani
Year of publication: 2019
Subject: FOS: Computer and information sciences, Computer Science - Machine Learning (cs.LG), Statistics - Machine Learning (stat.ML), Machine learning, Artificial intelligence, Artificial neural network, Hessian matrix, Robustness (computer science), Feature (machine learning), Interpretation (philosophy), Variety (cybernetics), General Medicine
Source: AAAI
ISSN: 2374-3468, 2159-5399
Description: In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of several widely used feature-importance interpretation methods (saliency maps, relevance propagation, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbations can change the feature importance, and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g., influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight into why fragility could be a fundamental challenge to current interpretation approaches. Comment: Published as a conference paper at AAAI 2019
Database: OpenAIRE
External link:
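
The fragility described in the abstract can be illustrated with a vanilla gradient saliency map. The following is a minimal sketch, not the authors' code: it assumes PyTorch/torchvision, uses a pretrained ResNet-18 as a stand-in classifier, a random tensor in place of a preprocessed image, and an illustrative perturbation size `eps`.

```python
# Minimal sketch (not the authors' method): compute a vanilla gradient
# saliency map for an input and for a perceptively indistinguishable,
# randomly perturbed copy, then compare the two maps.
# The model choice, input tensor, and perturbation size `eps` are
# illustrative assumptions, not values from the paper.
import torch
import torchvision.models as models


def saliency_map(model, x):
    """|d(top-class score)/d(input)|, reduced over color channels."""
    x = x.clone().detach().requires_grad_(True)
    scores = model(x)
    top_class = scores[0].argmax().item()
    scores[0, top_class].backward()
    return x.grad.abs().max(dim=1)[0]  # shape (1, H, W)


# Pretrained weights assume a torchvision version with the `weights` API.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

x = torch.rand(1, 3, 224, 224)                         # stand-in for a preprocessed image
eps = 0.01
x_pert = (x + eps * torch.randn_like(x)).clamp(0, 1)   # small random perturbation

s_orig, s_pert = saliency_map(model, x), saliency_map(model, x_pert)

with torch.no_grad():
    same_label = model(x).argmax(1).item() == model(x_pert).argmax(1).item()

# The predicted label typically stays the same while the saliency map can
# shift markedly; a relative L2 change (or a rank correlation of pixel
# importances) is one way to quantify the shift.
rel_change = ((s_orig - s_pert).norm() / s_orig.norm()).item()
print(f"same predicted label: {same_label}, relative saliency change: {rel_change:.3f}")
```

This random perturbation is only a baseline: the paper additionally constructs systematic perturbations that deliberately maximize the change in interpretation while keeping the predicted label fixed, which produces far more dramatic shifts than random noise.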