Author: Si Jiang, Sirui Lu, Dong-Ling Deng
Language: English
Year of publication: 2023
Subject:
Source: Quantum Frontiers, Vol 2, Iss 1, Pp 1-7 (2023)
Document type: article
ISSN: 2731-6106
DOI: 10.1007/s44214-023-00043-z
Description:
Abstract: We study the robustness of machine learning approaches to adversarial perturbations, with a focus on supervised learning scenarios. We find that typical phase classifiers based on deep neural networks are extremely vulnerable to adversarial perturbations: adding a tiny amount of carefully crafted noise to the original legitimate examples causes the classifiers to make incorrect predictions at a notably high confidence level. Through the lens of activation maps, we find that even classifiers with near-perfect performance fail to adequately capture some important underlying physical principles and symmetries, which explains why adversarial perturbations that fool these classifiers exist. In addition, we find that after adversarial training the classifiers become more consistent with physical laws and consequently more robust to certain kinds of adversarial perturbations. Our results provide valuable guidance for future theoretical and experimental studies on applying machine learning techniques to condensed matter physics.
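The perturbations described in the abstract are typically generated by gradient-based attacks. Below is a minimal PyTorch sketch of one standard method, the fast gradient sign method (FGSM); the record does not specify which attack the paper uses, so this is an illustrative assumption, and the model, inputs, and epsilon value are placeholders.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.05) -> torch.Tensor:
        """Craft an adversarial example with the fast gradient sign method.

        Generic illustration, not necessarily the attack used in the paper.
        `epsilon` bounds the size of the per-feature perturbation.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that maximally increases the loss; the
        # sign() keeps each perturbation component within +/- epsilon.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

Adversarial training, which the abstract reports improves robustness, amounts to mixing such perturbed examples into the training loop so the classifier learns to predict correctly on them as well.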
Database: Directory of Open Access Journals
External link: