Author:
Liu, Haiyi; Liu, Shaoying; Xu, Guangquan; Liu, Ai; Fang, Dingbang
Subject:
Source:
International Journal of Software Engineering & Knowledge Engineering; Feb. 2024, Vol. 34, Issue 2, p. 273-300, 28 pp.
Abstract:
Neural networks are extensively employed in safety-critical systems. However, such systems remain at risk due to the presence of adversarial examples. Although the security of neural networks can be enhanced through verification, verifying neural networks is an NP-hard problem, making it challenging to apply verification algorithms to large-scale networks. For this reason, we propose NNTBFV, a framework that applies the principles of Testing-Based Formal Verification (TBFV) to simplify neural networks and verify the simplified networks. Unlike conventional neural network pruning techniques, this approach is specification-based, with the goal of deriving approximate execution paths under given preconditions. To mitigate the potential issue of unverifiable conditions caused by overly broad preconditions, we also propose a precondition partitioning method. Empirical evidence shows that as the range of a precondition narrows, the size of the corresponding execution path decreases accordingly. An execution path generated by NNTBFV is itself a neural network, so it can be checked by existing verification tools, and we provide a theoretical method for analyzing the results the verification tool returns. We evaluate the effectiveness of NNTBFV on the ACAS Xu models, choosing verification-based and random-based neural network simplification algorithms as baselines. Experimental results show that NNTBFV closely approximates the baselines in simplification capability and surpasses the random-based method in efficiency. [ABSTRACT FROM AUTHOR]
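To make the precondition partitioning idea concrete, the following is a minimal sketch assuming a precondition is an interval box over the network inputs (as in the ACAS Xu properties) that is bisected along its widest dimension until every sub-precondition is narrow enough. The function names and the max_width threshold are hypothetical illustrations, not taken from the paper.

# Hypothetical sketch of precondition partitioning: a precondition is
# modeled as an interval box (one (low, high) interval per input) and is
# recursively bisected along its widest dimension, yielding narrower
# sub-preconditions. Assumption-based; not the authors' implementation.

from typing import List, Tuple

Box = List[Tuple[float, float]]  # one (low, high) interval per input

def split_precondition(box: Box) -> Tuple[Box, Box]:
    """Bisect the interval box along its widest input dimension."""
    widths = [hi - lo for lo, hi in box]
    d = widths.index(max(widths))      # index of the widest dimension
    lo, hi = box[d]
    mid = (lo + hi) / 2.0
    left, right = list(box), list(box)
    left[d] = (lo, mid)
    right[d] = (mid, hi)
    return left, right

def partition(box: Box, max_width: float) -> List[Box]:
    """Split recursively until every interval is at most max_width wide."""
    if max(hi - lo for lo, hi in box) <= max_width:
        return [box]
    left, right = split_precondition(box)
    return partition(left, max_width) + partition(right, max_width)

# Example: a 2-input precondition split until each side is at most 0.5 wide.
parts = partition([(0.0, 1.0), (0.0, 2.0)], max_width=0.5)
print(len(parts), "sub-preconditions")  # prints: 8 sub-preconditions

Each resulting sub-precondition could then be handled separately, matching the abstract's observation that narrower preconditions yield smaller execution paths.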
Database:
Complementary Index
External link: