Adversarial Robustness is at Odds with Lazy Training
Author: | Wang, Yunjuan; Ullah, Enayat; Mianjy, Poorya; Arora, Raman |
Year of publication: | 2022 |
Subject: | |
Document type: | Working Paper |
Description: | Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021]. In this work, we extend this line of work to "lazy training" of neural networks, a dominant model in deep learning theory in which neural networks are provably efficiently learnable. We show that over-parametrized neural networks that are guaranteed to generalize well and enjoy strong computational guarantees remain vulnerable to attacks generated using a single step of gradient ascent; a sketch of such a single-step attack follows this record. Comment: NeurIPS 2022 |
Database: | arXiv |
External link: |
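The single-step attack referenced in the description is a first-order perturbation of the input. Below is a minimal sketch in PyTorch, assuming a two-layer ReLU network at random initialization; the width m, input dimension d, budget eps, and the normalized-gradient step are illustrative assumptions, not values or constructions taken from the paper, whose result concerns networks after lazy training.

```python
# A minimal sketch of a single-step gradient-ascent attack on a wide
# two-layer ReLU network. All sizes and the budget eps are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

d, m = 100, 10_000                  # input dimension, hidden width (over-parametrized)
net = nn.Sequential(nn.Linear(d, m), nn.ReLU(), nn.Linear(m, 1))

x = torch.randn(d)
x = (x / x.norm()).requires_grad_(True)   # a clean input on the unit sphere

out = net(x)                        # scalar network output f(x)
out.backward()                      # gradient of f(x) w.r.t. x, stored in x.grad

# One normalized gradient step against the sign of the current prediction,
# i.e. a single step of gradient ascent on -sign(f(x)) * f(x).
eps = 0.1                           # perturbation budget (illustrative)
step = -torch.sign(out.detach()) * x.grad / x.grad.norm()
x_adv = (x + eps * step).detach()

print(f"f(x)     = {net(x).item():+.4f}")
print(f"f(x_adv) = {net(x_adv).item():+.4f}")   # often flips sign for wide networks
```

The connection to lazy training: in that regime the trained network stays close to its linearization around initialization, so the gradient computed at x remains a faithful ascent direction throughout the eps-ball. That near-linearity is why a single gradient step, rather than an iterative attack, already suffices in this line of work.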