Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
Author: Kazuki Koga, Kazuhiro Takemoto, Hokuto Hirano
Language: English
Year of publication: 2020
Subject: Male; FOS: Computer and information sciences; Viral Diseases; Computer Science - Machine Learning; Computer Science - Cryptography and Security; Databases, Factual; Pulmonology; Computer science; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; Social Sciences; Diagnostic Radiology; Machine Learning (cs.LG); Training (Education); Medical Conditions; Sociology; Medicine and Health Sciences; Lung; Virus Testing; Multidisciplinary; Artificial neural network; Radiology and Imaging; Applied Mathematics; Simulation and Modeling; Image and Video Processing (eess.IV); Thorax; Bone Imaging; Infectious Diseases; Norm (mathematics); Physical Sciences; X ray image; Deep neural networks; Medicine; Female; Cryptography and Security (cs.CR); Algorithms; Research Article; Computer and Information Sciences; Coronavirus disease 2019 (COVID-19); Neural Networks; Imaging Techniques; Science; Research and Analysis Methods; Education; Diagnostic Medicine; Medical imaging; FOS: Electrical engineering, electronic engineering, information engineering; Humans; business.industry; SARS-CoV-2; Retraining; COVID-19; Biology and Life Sciences; Pattern recognition; Covid 19; Pneumonia; Electrical Engineering and Systems Science - Image and Video Processing; X-Ray Radiography; Open source; Artificial intelligence; Neural Networks, Computer; business; Tomography, X-Ray Computed; Mathematics; Neuroscience
Source: PLoS ONE, Vol 15, Iss 12, p e0243963 (2020)
Description: During the epidemic of the novel coronavirus disease 2019 (COVID-19), chest X-ray computed tomography imaging is being used for effectively screening COVID-19 patients. The development of computer-aided systems based on deep neural networks (DNNs) that rapidly and accurately detect COVID-19 cases has advanced, because the limited number of expert radiologists forms a bottleneck for screening. However, the vulnerability of DNN-based systems has so far been poorly evaluated, although DNNs are vulnerable to a single perturbation, called a universal adversarial perturbation (UAP), which can induce DNN failure in most classification tasks. Thus, we focus on representative DNN models for detecting COVID-19 cases from chest X-ray images and evaluate their vulnerability to UAPs generated using simple iterative algorithms. We consider nontargeted UAPs, which cause a task failure by making an input be assigned an incorrect label, and targeted UAPs, which cause the DNN to classify an input into a specific class. The results demonstrate that the models are vulnerable to nontargeted and targeted UAPs, even when the UAPs are small. In particular, UAPs whose norm is 2% of the average image norm in the dataset achieve >85% and >90% success rates for the nontargeted and targeted attacks, respectively. Under the nontargeted UAPs, the DNN models judge most chest X-ray images as COVID-19 cases; the targeted UAPs make the DNN models classify most chest X-ray images into a given target class. The results indicate that careful consideration is required in practical applications of DNNs to COVID-19 diagnosis; in particular, they emphasize the need for strategies to address security concerns. As an example, we show that iterative fine-tuning of the DNN models using UAPs improves their robustness against UAPs. 17 pages, 5 figures, 3 tables. A minimal code sketch of the iterative UAP attack described here follows this record.
Database: OpenAIRE
External link:
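To make the attack setting in the description concrete, below is a minimal sketch of an iterative nontargeted UAP attack against a generic PyTorch image classifier. It is not the authors' implementation: the model, data loader, and hyperparameters (`eps_ratio`, `step`, `epochs`) are illustrative assumptions. The idea matches the abstract: a single perturbation is updated by gradient ascent on the classification loss over many images and repeatedly projected onto an L2 ball whose radius is a fixed fraction (e.g. 2%) of the average image norm; a targeted variant would instead descend on the loss computed against a fixed target label.

```python
# Sketch of an iterative universal adversarial perturbation (UAP) attack.
# NOT the authors' implementation; model, loader, and hyperparameters are
# illustrative assumptions for a generic PyTorch image classifier.

import torch
import torch.nn.functional as F


def average_image_norm(loader):
    """L2 norm of an image, averaged over the dataset (used to scale the UAP)."""
    norms = [x.view(x.size(0), -1).norm(dim=1) for x, _ in loader]
    return torch.cat(norms).mean().item()


def generate_nontargeted_uap(model, loader, eps_ratio=0.02, step=0.002,
                             epochs=5, device="cpu"):
    """Craft one perturbation that makes the classifier mislabel most inputs.

    eps_ratio: UAP L2 norm as a fraction of the average image norm (e.g. 2%).
    """
    model.eval().to(device)
    eps = eps_ratio * average_image_norm(loader)

    # Start from a zero perturbation shaped like a single image.
    x0, _ = next(iter(loader))
    uap = torch.zeros_like(x0[:1], device=device)

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = (x + uap).requires_grad_(True)
            # Nontargeted attack: push the loss up for the true labels.
            # Targeted variant: compute the loss against a fixed target label
            # and subtract the step below instead of adding it.
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]

            # Gradient-sign step on the shared perturbation (batch-averaged).
            uap = uap + step * grad.mean(dim=0, keepdim=True).sign()

            # Project back onto the L2 ball of radius eps.
            uap_norm = uap.norm()
            if uap_norm > eps:
                uap = uap * (eps / uap_norm)
    return uap.detach()


def success_rate(model, loader, uap, device="cpu"):
    """Fraction of inputs whose predicted label changes under the UAP."""
    model.eval().to(device)
    changed, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            clean = model(x).argmax(dim=1)
            pert = model(x + uap).argmax(dim=1)
            changed += (clean != pert).sum().item()
            total += x.size(0)
    return changed / total
```

Under the same assumptions, the defense mentioned at the end of the description would correspond to alternating between generating a UAP with `generate_nontargeted_uap` and fine-tuning the model on UAP-perturbed images paired with their correct labels, so that each retraining round counters the model's current weakest direction.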