Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification

Authors: Li, Jianlin, Yang, Pengfei, Liu, Jiangchao, Chen, Liqian, Huang, Xiaowei, Zhang, Lijun
Publication year: 2019
Subject:
Document type: Working Paper
DOI: 10.1007/978-3-030-32304-2_15
Description: Deep neural networks (DNNs) have been shown to lack robustness: their classification results are vulnerable to small perturbations of the inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds are loose. This paper improves on a recent proposal of analyzing DNNs through the classic abstract interpretation technique, by a novel symbolic propagation technique. More specifically, the values of neurons are represented symbolically and propagated forward from the input layer to the output layer, on top of abstract domains. We show that our approach can achieve significantly higher precision and thus can prove more properties than using only abstract domains. Moreover, we show that the bounds derived from our approach on the hidden neurons, when applied to a state-of-the-art SMT-based verification tool, can improve its performance. We implement our approach in a software tool and validate it on several DNNs trained on benchmark datasets such as MNIST.
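To make the abstract-interpretation baseline that the paper builds on concrete, the sketch below propagates a box (interval) abstract domain through a tiny two-layer ReLU network. This is not the paper's implementation or its symbolic propagation; it is a minimal illustration, with hypothetical weights, of the plain interval propagation whose loose bounds the symbolic technique is designed to tighten.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W @ x + b.

    Standard interval arithmetic: for each output neuron, positive
    weights pick up the matching endpoint, negative weights the
    opposite one, giving sound lower/upper bounds.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny 2-layer network with hypothetical weights (illustrative only).
W1 = np.array([[1.0, -1.0], [2.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

# Input region: an L_inf ball of radius 0.1 around (0.5, 0.5).
lo, hi = np.array([0.4, 0.4]), np.array([0.6, 0.6])
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)
print(lo, hi)  # sound output bounds for the whole input region
```

The paper's symbolic propagation would additionally carry linear expressions over the input variables through neurons whose ReLU state is fixed, so that correlations between neurons are not lost the way they are in this purely interval-based version.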
Comment: SAS 2019: 26th Static Analysis Symposium, Porto, Portugal, October 8-11, 2019
Database: arXiv