Enhancing Robustness of Indoor Robotic Navigation with Free-Space Segmentation Models Against Adversarial Attacks

Author: An, Qiyuan; Sevastopoulos, Christos; Makedon, Fillia
Publication year: 2024
Subject:
Document type: Working Paper
Description: Indoor robotic navigation relies on the accuracy of segmentation models to identify free space in RGB images. However, deep learning models are vulnerable to adversarial attacks, posing a significant challenge to their real-world deployment. In this study, we identify vulnerabilities within the hidden layers of neural networks and introduce a practical approach to reinforce traditional adversarial training. Our method incorporates a novel distance loss function that minimizes the gap between the hidden-layer representations of clean and adversarial images. Experiments demonstrate that this approach improves the model's robustness against adversarial perturbations.
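
The description outlines adversarial training augmented with a distance loss between hidden-layer features of clean and adversarial inputs. The following is a minimal PyTorch sketch of that idea, not the authors' released code: the attack choice (FGSM), the layer selection via a forward hook, the L2 feature distance, and all function names and hyperparameters are illustrative assumptions.

```python
# Sketch: adversarial training with a hidden-layer distance loss.
# Assumes a PyTorch segmentation model whose intermediate features
# can be read through a forward hook on a chosen layer.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, images, masks, epsilon=8 / 255):
    """Generate FGSM adversarial examples (one illustrative attack choice)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), masks)
    grad, = torch.autograd.grad(loss, images)
    return (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()


def train_step(model, feature_layer, images, masks, optimizer,
               epsilon=8 / 255, distance_weight=1.0):
    """One step combining the segmentation loss on adversarial inputs with
    an L2 distance between clean and adversarial hidden features."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["feat"] = output

    handle = feature_layer.register_forward_hook(hook)
    try:
        adv_images = fgsm_perturb(model, images, masks, epsilon)

        # Clean forward pass: record hidden features as a fixed target.
        with torch.no_grad():
            model(images)
            clean_feat = captured["feat"].detach()

        # Adversarial forward pass: these features receive gradients.
        adv_logits = model(adv_images)
        adv_feat = captured["feat"]

        seg_loss = F.cross_entropy(adv_logits, masks)
        distance_loss = F.mse_loss(adv_feat, clean_feat)
        loss = seg_loss + distance_weight * distance_loss

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    finally:
        handle.remove()
```

In this sketch the distance term pulls the adversarial image's hidden representation toward the clean one, which is the general mechanism the description refers to; the specific layer, attack, and weighting used in the paper are given in the full text.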
Comment: Accepted to 2023 IEEE International Conference on Robotic Computing (IRC). arXiv admin note: text overlap with arXiv:2311.01966
Database: arXiv