Making deep neural networks right for the right scientific reasons by interacting with their explanations
Authors: Xiaoting Shao, Hans-Georg Luigs, Stefano Teso, Kristian Kersting, Wolfgang Stammer, Anna Brugger, Patrick Schramowski, Franziska Herbert, Anne-Katrin Mahlein
Year of publication: 2020
Subject: Computer Networks and Communications; Computer science; Deep learning; Machine learning; Plant phenotyping; Interactive learning; Human-Computer Interaction; Artificial intelligence; Deep neural networks; Computer Vision and Pattern Recognition; Research task; Software; 03 medical and health sciences; 0301 basic medicine; 030104 developmental biology; 0302 clinical medicine; 030217 neurology & neurosurgery
Source: Nature Machine Intelligence
Description: Deep neural networks have demonstrated excellent performance in many real-world applications. Unfortunately, they may show Clever Hans-like behaviour, exploiting confounding factors within datasets to achieve high performance. In this work we introduce the novel learning setting of explanatory interactive learning and illustrate its benefits on a plant phenotyping research task. Explanatory interactive learning puts the scientist into the training loop, where they interactively revise the original model by providing feedback on its explanations. Our experimental results demonstrate that explanatory interactive learning can help to avoid Clever Hans moments in machine learning and encourages (or discourages, where appropriate) trust in the underlying model. Deep learning approaches can show excellent performance yet still have limited practical use if they learn to predict from confounding factors in a dataset, for instance text labels in the corner of images. By using an explanatory interactive learning approach, with a human expert in the loop during training, it becomes possible to avoid predictions based on confounding factors; a minimal sketch of such a training objective follows this record.
Database: OpenAIRE
External link:
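The abstract describes explanation feedback only at a high level. Below is a minimal sketch, assuming PyTorch, of one common way to realise such feedback: a "right for the right reasons"-style gradient penalty that discourages the model from attending to image regions an expert has flagged as confounding (e.g., a text label in a corner). The function name `xil_loss`, the binary mask format, and the weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of explanatory interactive learning (XIL) in PyTorch.
# Assumption: the expert supplies a binary mask per image marking regions
# that must NOT influence the prediction (1 = confounder, 0 = allowed).
import torch
import torch.nn.functional as F

def xil_loss(model, x, y, confounder_mask, lam=10.0):
    """Cross-entropy plus a penalty on input gradients inside expert-masked regions."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Input gradients of the summed log-probabilities serve as a simple
    # saliency-style explanation of the model's prediction.
    log_probs = F.log_softmax(logits, dim=1)
    grads = torch.autograd.grad(log_probs.sum(), x, create_graph=True)[0]
    # Penalise explanation mass that falls on regions the expert marked
    # as confounding, pushing the model to be right for the right reasons.
    penalty = (confounder_mask * grads).pow(2).sum()
    return ce + lam * penalty

# One revision step after the expert has flagged a confounder:
#   loss = xil_loss(model, images, labels, masks)
#   loss.backward(); optimizer.step()
```

Because `create_graph=True` keeps the gradient computation differentiable, the penalty term itself can be backpropagated through, so a standard optimiser step revises the model to ignore the masked regions while still fitting the labels.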