Evaluating Deep Learning Biases Based on Grey-Box Testing Results
Authors: | Mira Franke, Patricia Morreale, Moushume Hai, J. Jenny Li, Thayssa Silva |
Year of publication: | 2020 |
Subject: | Gray box testing; Artificial neural network; Deep learning; Machine learning; Autoencoder; Software deployment; Similarity (psychology); Interpretation (logic); Artificial intelligence; Language translation; Computer science |
Source: | Advances in Intelligent Systems and Computing, ISBN: 9783030551797, IntelliSys (1) |
DOI: | 10.1007/978-3-030-55180-3_48 |
Description: | The promising approaches of deep learning have been immensely successful in processing large real-world data sets, with applications such as image recognition, speech recognition, and language translation. However, research has found that biases arise in the design, production, deployment, and use of AI/ML technologies. In this paper, we first explain mathematically the causes of biases and then propose a way to evaluate biases based on testing results of neurons and autoencoders in deep learning. Our interpretation views each neuron or autoencoder as an approximation of a similarity measurement, whose grey-box testing results can be used to measure biases and find ways to reduce them. We argue that monitoring deep learning network structures and parameters is an effective way to catch the sources of biases in deep learning. |
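The abstract's core idea (a neuron's activation read as an approximate similarity between its weight vector and the input, with grey-box observation of those activations used to surface uneven responses across groups) can be illustrated with a minimal sketch. This is an illustrative assumption, not the authors' actual tool: the `activation`, `bias_score` functions, the cosine-similarity choice, and the toy data are all hypothetical.

```python
# Illustrative grey-box bias probe (a sketch of the idea, not the paper's method).
# Interpretation: a neuron with weight vector w measures similarity between
# w and an input x; here we use cosine similarity as that measure.
# Grey-box step: observe the neuron's activations on inputs from two groups
# and report the gap in mean activation as a (hypothetical) bias score.

import math

def activation(weights, x):
    """Cosine similarity between the weight vector and the input:
    the 'neuron as similarity measurement' interpretation."""
    dot = sum(wi * xi for wi, xi in zip(weights, x))
    nw = math.sqrt(sum(wi * wi for wi in weights))
    nx = math.sqrt(sum(xi * xi for xi in x))
    return dot / (nw * nx) if nw and nx else 0.0

def bias_score(weights, group_a, group_b):
    """Absolute difference in mean activation between two input groups;
    a large gap suggests the neuron responds unevenly to the groups."""
    mean_a = sum(activation(weights, x) for x in group_a) / len(group_a)
    mean_b = sum(activation(weights, x) for x in group_b) / len(group_b)
    return abs(mean_a - mean_b)

# Hypothetical data: the neuron's weights align with group A's inputs,
# so the probe should flag a large activation gap.
w = [1.0, 0.0]
group_a = [[1.0, 0.1], [0.9, 0.2]]   # near the weight direction
group_b = [[0.1, 1.0], [0.2, 0.9]]   # nearly orthogonal to it
print(f"bias score: {bias_score(w, group_a, group_b):.3f}")
```

In a real network one would read activations from trained layers rather than recompute them, but the comparison-across-groups step is the same.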
Database: | OpenAIRE |
External link: |