Improving performance of deep learning models with axiomatic attribution priors and expected gradients
Author: | Gabriel G. Erion, Joseph D. Janizek, Su-In Lee, Scott M. Lundberg, Pascal Sturmfels |
Year of publication: | 2021 |
Subject: |
Artificial neural network, Computer Networks and Communications, Computer science, Deep learning, Machine learning, Human-Computer Interaction, Artificial Intelligence, Prior probability, Feature (machine learning), Computer Vision and Pattern Recognition, Completeness (statistics), Attribution, Software, Axiom, Interpretability |
Source: | Nature Machine Intelligence. 3:620-631 |
ISSN: | 2522-5839 |
DOI: | 10.1038/s42256-021-00343-w |
Description: | Recent research has demonstrated that feature attribution methods for deep networks can themselves be incorporated into training; these attribution priors optimize for a model whose attributions have certain desirable properties, most frequently that particular features are important or unimportant. Such priors are often built on attribution methods that are not guaranteed to satisfy desirable interpretability axioms, such as completeness and implementation invariance. Here we introduce attribution priors that optimize for higher-level properties of explanations, such as smoothness and sparsity, enabled by a fast new attribution method called expected gradients that satisfies many important interpretability axioms. This improves model performance on many real-world tasks where previous attribution priors fail. Our experiments show that the gains from combining higher-level attribution priors with expected gradients attributions are consistent across image, gene expression and healthcare datasets. We believe that this work motivates, and provides the necessary tools to support, the widespread adoption of axiomatic attribution priors in many areas of applied machine learning. Our implementations and results have been made freely available to the academic community. Neural networks are becoming increasingly popular in many domains, but in practice further methods are needed to ensure that models learn patterns consistent with prior knowledge about the domain. This work introduces an explanation method, called 'expected gradients', that enables training with theoretically motivated feature attribution priors to improve model performance on real-world tasks. |
Database: | OpenAIRE |
External link: |
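The expected gradients attribution described above averages interpolated gradients over baselines drawn from the data distribution. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation: the toy linear model, function names and sample counts are all assumptions chosen so the result can be checked against the completeness axiom.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy differentiable model f(x) = 3*x0 (feature 1 is irrelevant),
    # chosen so the correct attributions are known in closed form.
    return 3.0 * x[..., 0]

def model_grad(x):
    # Analytic gradient of the toy model with respect to its inputs.
    g = np.zeros_like(x)
    g[..., 0] = 3.0
    return g

def expected_gradients(x, background, n_samples=1000):
    """Monte Carlo estimate of expected gradients for a single input x.

    Each sample draws a baseline x' from the background data and an
    interpolation coefficient alpha ~ U(0, 1), then evaluates
    (x - x') * grad f(x' + alpha * (x - x')); the attribution is the
    average of these samples over features.
    """
    idx = rng.integers(0, len(background), size=n_samples)
    baselines = background[idx]
    alphas = rng.uniform(0.0, 1.0, size=(n_samples, 1))
    points = baselines + alphas * (x - baselines)
    grads = model_grad(points)
    return ((x - baselines) * grads).mean(axis=0)

background = rng.normal(size=(100, 2))  # stand-in for the training data
x = np.array([1.0, 1.0])
attr = expected_gradients(x, background)
# Completeness: for this linear model the attributions sum (up to Monte
# Carlo error) to f(x) - E[f(baseline)], and the irrelevant feature gets 0.
```

Because the toy model is linear, the estimate can be validated directly: the attribution for the unused feature is exactly zero, and the attributions sum to the difference between the model output at `x` and its average output over the background, which is the completeness property the abstract refers to.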