Reducing DNN Properties to Enable Falsification with Adversarial Attacks.

Author: Shriver, David; Elbaum, Sebastian; Dwyer, Matthew B.
Subject:
Source: ICSE: International Conference on Software Engineering; 5/22/2021, p. 275-287, 13 p.
Abstract: Deep Neural Networks (DNNs) are increasingly being deployed in safety-critical domains, from autonomous vehicles to medical devices, where the consequences of errors demand techniques that can provide stronger guarantees about behavior than just high test accuracy. This paper explores broadening the application of existing adversarial attack techniques for the falsification of DNN safety properties. We contend and later show that such attacks provide a powerful repertoire of scalable algorithms for property falsification. To enable the broad application of falsification, we introduce a semantics-preserving reduction of multiple safety property types, which subsume prior work, into a set of equivalid correctness problems amenable to adversarial attacks. We evaluate our reduction approach as an enabler of falsification on a range of DNN correctness problems and show its cost-effectiveness and scalability. [ABSTRACT FROM AUTHOR]
Database: Complementary Index