Author: |
Ding, N., Jalal, N. A., Abdulbaki Alshirbaji, T., Möller, K. |
Year of publication: |
2021 |
Subject: |
|
DOI: |
10.5281/zenodo.4922831 |
Description: |
Deep neural networks are vulnerable to adversarial samples, which are usually crafted by adding small perturbations to a correct input. Adversarial samples can therefore be used to identify weaknesses and evaluate the robustness of a trained model before it is deployed. In this work, we introduce two methods for generating adversarial images for a CNN model trained to perform surgical tool classification in laparoscopic videos. Both methods proved effective at fooling the model, but each shows some drawbacks. |
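The abstract does not name the two methods, but the general idea of "adding perturbations to the correct input" can be sketched with a classic gradient-sign (FGSM-style) attack. Everything below is an illustrative assumption: the toy logistic "model", its weights, and the example input are hypothetical stand-ins for the paper's CNN, not its actual method.

```python
import math

# Hypothetical stand-in for the CNN: a logistic-regression "model" over a
# flattened 4-pixel "image". Weights and bias are made up for illustration.
WEIGHTS = [0.8, -0.5, 0.3, -0.2]
BIAS = 0.1

def predict(x):
    """Model confidence that the surgical tool is present (sigmoid output)."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, label, eps=0.5):
    """One gradient-sign step: perturb x by eps * sign(d loss / d x).

    For cross-entropy loss on a sigmoid output, the input gradient is
    (p - label) * w_i, so only its sign is needed per pixel.
    """
    p = predict(x)
    grad = [(p - label) * w for w in WEIGHTS]
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x = [1.0, 0.2, 0.5, 0.1]          # "correct input" the model classifies well
x_adv = fgsm(x, label=1, eps=0.5) # adversarial sample crafted from it
print(predict(x))                 # confidence on the clean input
print(predict(x_adv))             # reduced confidence after the perturbation
```

The perturbation moves every pixel in the direction that increases the loss for the true label, so the model's confidence drops even though the input changes only slightly per pixel.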
Database: |
OpenAIRE |
External link: |
|