Attack and Fault Injection in Self-driving Agents on the Carla Simulator – Experience Report
Author: | Niccolò Piazzesi, Andrea Ceccarelli, Massimo Hong |
---|---|
Year: | 2021 |
Source: | Lecture Notes in Computer Science ISBN: 9783030839024 SAFECOMP |
DOI: | 10.1007/978-3-030-83903-1_14 |
Description: | Machine Learning applications are acknowledged as the foundation of autonomous driving, because they are the enabling technology for most driving tasks. However, the inclusion of trained agents in automotive systems exposes the vehicle to novel attacks and faults that can threaten the safety of the driving tasks. In this paper we report on our experimental campaign injecting adversarial attacks and software faults into a self-driving agent running in a driving simulator. We show that adversarial attacks and faults injected into the trained agent can lead to erroneous decisions and severely jeopardize safety. The paper presents a feasible and easily reproducible approach based on an open-source simulator and tools, and the results clearly motivate the need for both protective measures and extensive testing campaigns. |
Database: | OpenAIRE |
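The abstract refers to injecting adversarial attacks into a trained agent's perception. As a minimal illustration of what such an attack can look like, the sketch below applies an FGSM-style perturbation (a standard adversarial-attack technique; the paper's actual CARLA attack and fault-injection setup is not reproduced here). The toy logistic "classifier" is a hypothetical stand-in for the agent's perception network, chosen so the loss gradient is analytic.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Shift input x by eps in the sign of the loss gradient (FGSM step),
    keeping the result in the valid [0, 1] input range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy stand-in model: logistic regression with fixed "trained" weights
# (an assumption for illustration, not the paper's agent).
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed weights of the toy model
x = rng.uniform(size=16)         # a normalized input "image"
y = 1.0                          # true label

p = 1.0 / (1.0 + np.exp(-w @ x))         # confidence for the true class
grad_x = (p - y) * w                     # d(cross-entropy)/dx, analytic here

x_adv = fgsm_perturb(x, grad_x, eps=0.1)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))

# The perturbation is bounded by eps per pixel, yet it systematically
# lowers the model's confidence in the correct class (p_adv < p),
# illustrating how a small input change can flip a driving decision.
```

In the real campaign the gradient would come from backpropagation through the agent's network on camera frames rendered by the simulator, rather than from a closed-form expression.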