Robustness Verification for Classifier Ensembles
Author: | Gross, D., Jansen, N., Pérez, G.A., Raaijmakers, S. |
---|---|
Contributors: | Hung, D.V., Sokolsky, O. |
Year of publication: | 2020 |
Subject: |
Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); formal verification; robustness (computer science); classifier ensembles; randomized attacks; expected loss; upper and lower bounds; scalability; automation |
Source: | Hung, D.V.; Sokolsky, O. (eds.), Automated Technology for Verification and Analysis: 18th International Symposium, ATVA 2020, Hanoi, Vietnam, October 19–23, 2020, Proceedings, pp. 271–287. Cham: Springer International Publishing. Lecture Notes in Computer Science. ISBN 9783030591519 |
ISSN: | 0302-9743 |
Description: | We give a formal verification procedure that decides whether a classifier ensemble is robust against arbitrary randomized attacks. Such attacks consist of a set of deterministic attacks and a distribution over this set. The robustness-checking problem consists of assessing, given a set of classifiers and a labelled data set, whether there exists a randomized attack that induces a certain expected loss against all classifiers. We show the NP-hardness of the problem and provide an upper bound on the number of attacks that is sufficient to form an optimal randomized attack. These results provide an effective way to reason about the robustness of a classifier ensemble. We provide SMT and MILP encodings to compute optimal randomized attacks or prove that there is no attack inducing a certain expected loss. In the latter case, the classifier ensemble is provably robust. Our prototype implementation verifies multiple neural-network ensembles trained for image-classification tasks. The experimental results using the MILP encoding are promising both in terms of scalability and the general applicability of our verification procedure. |
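To make the robustness-checking problem concrete: once the expected loss of each deterministic attack against each classifier is known, finding the optimal randomized attack reduces to a zero-sum game between the attacker's distribution and the ensemble, solvable as a linear program. The sketch below is not the paper's SMT/MILP encoding (which reasons directly about neural-network classifiers); it is a toy illustration under the assumption that a small loss matrix `loss[a][c]` has already been computed. The matrix values are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical loss matrix: loss[a, c] = expected loss that
# deterministic attack a induces against classifier c.
loss = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
])
n_attacks, n_classifiers = loss.shape

# Decision variables: p_0 .. p_{n-1} (attack distribution) and v
# (loss guaranteed against every classifier). Maximize v, i.e. minimize -v.
c = np.zeros(n_attacks + 1)
c[-1] = -1.0

# For each classifier j:  v - sum_a p_a * loss[a, j] <= 0
A_ub = np.hstack([-loss.T, np.ones((n_classifiers, 1))])
b_ub = np.zeros(n_classifiers)

# p must be a probability distribution: sum_a p_a = 1
A_eq = np.hstack([np.ones((1, n_attacks)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, None)] * n_attacks + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:n_attacks], res.x[-1]
```

Here `v` is the best expected loss any randomized attack can guarantee against all classifiers simultaneously; the ensemble is robust with respect to a threshold τ exactly when `v` stays below τ. The paper's contribution is that, for neural-network ensembles, the loss matrix and attack set themselves are encoded symbolically rather than enumerated as above.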
Database: | OpenAIRE |
External link: |