Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
Author: | Javier Del Ser, Alejandro Barredo-Arrieta |
Year: | 2020 |
Subject: | Counterfactual thinking; Counterfactual conditional; Adversarial system; Deep learning; Artificial intelligence; Heuristics; Data science; Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Cryptography and Security (cs.CR); FOS: Computer and information sciences |
Source: | IJCNN |
DOI: | 10.48550/arxiv.2003.11323 |
Description: | The last decade has witnessed the proliferation of Deep Learning models in many applications, achieving unrivaled levels of predictive performance. Unfortunately, the black-box nature of Deep Learning models has posed unanswered questions about what they learn from data. Certain application scenarios have highlighted the importance of assessing the bounds under which Deep Learning models operate, a problem addressed by assorted approaches aimed at audiences from different domains. However, as the focus of the application shifts toward non-expert users, it becomes mandatory to provide them with the means to trust the model, just as a human becomes familiar with a system or process: by understanding the hypothetical circumstances under which it fails. This is the cornerstone of this research work: to undertake an adversarial analysis of a Deep Learning model. The proposed framework constructs counterfactual examples while ensuring their plausibility, i.e., there is a reasonable probability that a human could generate them without resorting to a computer program. Therefore, this work must be regarded as a valuable auditing exercise of the usable bounds within which a given model is constrained, thereby allowing for a much greater understanding of the capabilities and pitfalls of a model deployed in a real application. To this end, a Generative Adversarial Network (GAN) and multi-objective heuristics are used to furnish a plausible attack on the audited model, efficiently trading off the confusion of the model against the intensity and plausibility of the generated counterfactuals (see the sketch after this record). The framework's utility is showcased on a human face classification task, unveiling the enormous potential of the proposed approach. Comment: 7 pages, 5 figures. Accepted for presentation at WCCI 2020 |
Database: | OpenAIRE |
External link: |
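
The description above outlines a concrete search problem: explore a pretrained GAN's latent space for samples that simultaneously confuse the audited classifier, stay close to an anchor input (low intensity), and score as realistic under the GAN's discriminator (plausibility). The sketch below illustrates that three-objective trade-off in PyTorch. It is an illustrative assumption, not the authors' implementation: the networks are tiny untrained stand-ins for the pretrained GAN and audited classifier, and a naive random Pareto-archive search replaces the paper's multi-objective evolutionary heuristics.

```python
# Minimal sketch of a plausible-counterfactual search (assumed setup, not
# the paper's code): minimize three objectives over GAN latent codes z:
#   1) confusion deficit of the audited classifier,
#   2) intensity (distance to the anchor image),
#   3) implausibility (1 - discriminator realism score).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, N_CLASSES = 16, 64, 2

# Untrained stand-ins; in practice these would be a pretrained GAN
# generator/discriminator and the audited black-box classifier.
generator = nn.Sequential(nn.Linear(LATENT_DIM, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG_DIM, 1), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(IMG_DIM, N_CLASSES))

def objectives(z, z_anchor, target_class):
    """Objective vector (to minimize) for one latent candidate."""
    x, x_anchor = generator(z), generator(z_anchor)
    probs = torch.softmax(classifier(x), dim=-1)
    confusion = 1.0 - probs[target_class]              # push toward target class
    intensity = torch.norm(x - x_anchor)               # stay near the anchor
    implausibility = 1.0 - discriminator(x).squeeze()  # look real to the GAN
    return torch.stack([confusion, intensity, implausibility])

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and better somewhere."""
    return bool((a <= b).all() and (a < b).any())

@torch.no_grad()
def random_pareto_search(z_anchor, target_class, iters=500, step=0.3):
    """Keep a Pareto archive of latent perturbations around the anchor."""
    archive = []  # list of (latent code, objective vector)
    for _ in range(iters):
        z = z_anchor + step * torch.randn(LATENT_DIM)
        f = objectives(z, z_anchor, target_class)
        if any(dominates(g, f) for _, g in archive):
            continue  # dominated by an existing candidate
        archive = [(zz, g) for zz, g in archive if not dominates(f, g)]
        archive.append((z, f))
    return archive

anchor = torch.randn(LATENT_DIM)
pareto = random_pareto_search(anchor, target_class=1)
print(f"{len(pareto)} non-dominated counterfactual candidates")
```

In the auditing setting the paper describes, the resulting non-dominated set is what the auditor would inspect: each point is a counterfactual that trades how plausible and how subtle the change is against how strongly it confuses the model.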