Attacks on Machine Learning: Adversarial Examples in Connected and Autonomous Vehicles
Author: | Prinkle Sharma, David F. Austin, Hong Liu |
---|---|
Year of publication: | 2019 |
Subject: | Situation awareness, Artificial neural network, Computer science, Deep learning, Supervised learning, Mobile robot, Adversarial machine learning, Machine learning, Random forest, Recurrent neural network, Artificial intelligence |
Source: | 2019 IEEE International Symposium on Technologies for Homeland Security (HST). |
DOI: | 10.1109/hst47167.2019.9032989 |
Description: | Connected and autonomous vehicles (CAVs, a.k.a. driverless cars) offset the human response in transportation infrastructure, enhancing traffic efficiency, travel leisure, and road safety. Behind the wheel of these mobile robots lies machine learning (ML), which automates mundane driving tasks and makes decisions from situational awareness. Attacking ML, the brain of driverless cars, can cause catastrophes. This paper proposes a novel approach to attacking CAVs by fooling their ML models. Using adversarial examples in CAVs, the work demonstrates how adversarial machine learning can generate attacks that are hardly detectable by current ML classifiers for CAV misbehavior detection. First, adversarial datasets are generated by a traditional attack engine, whose attacks CAV misbehavior detection ML models can easily detect. Building the attack ML model takes two phases: training and testing. Using supervised learning, Phase I trains the model on time-series data converted from the adversarial datasets. Phase II tests the model, and its results guide the next round of model improvement. The initial round deploys the K-Nearest Neighbor (KNN) and Random Forest (RF) algorithms. The next round, guided by deep learning (DL) models, uses Logistic Regression (LG), implemented as a neural network, and Long Short-Term Memory (LSTM), a recurrent neural network. The results, presented as precision-recall (PR) and receiver operating characteristic (ROC) curves, validate the effectiveness of the proposed adversarial ML models. This work reveals the vulnerability of ML; at the same time, it shows promise for protecting critical infrastructure by studying the opponent's strategies. Future work includes retraining the adversarial ML models with real-world datasets from pilot CAV sites. A minimal illustrative sketch of the two-phase training/testing loop appears below this record. |
Database: | OpenAIRE |
External link: |
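
The description outlines a two-phase loop: Phase I trains classifiers (KNN and RF in the first round) on time-series data derived from adversarial datasets, and Phase II tests them, with PR and ROC curves guiding the next round. The sketch below is a minimal, hypothetical illustration of that loop using scikit-learn; the synthetic windowed data and all parameter choices are placeholders, not the authors' datasets or models.

```python
# Hypothetical sketch of the two-phase train/test loop described in the abstract.
# Data, window size, and hyperparameters are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve, roc_curve, auc

rng = np.random.default_rng(0)

# Stand-in "time-series" windows: each sample is a flattened window of
# kinematic readings; label 1 marks an adversarial (misbehaving) trace,
# label 0 a benign one.
n_samples, window = 2000, 20
X_benign = rng.normal(0.0, 1.0, size=(n_samples // 2, window))
X_attack = rng.normal(0.5, 1.2, size=(n_samples // 2, window))
X = np.vstack([X_benign, X_attack])
y = np.concatenate([np.zeros(n_samples // 2), np.ones(n_samples // 2)])

# Phase I: supervised training on the time-series windows.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Phase II: test each model; PR and ROC results guide the next round
# of model improvement (e.g., moving to LSTM-based models).
for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    precision, recall, _ = precision_recall_curve(y_test, scores)
    fpr, tpr, _ = roc_curve(y_test, scores)
    print(f"{name}: PR-AUC={auc(recall, precision):.3f}, ROC-AUC={auc(fpr, tpr):.3f}")
```

In the paper's later rounds the same loop would be repeated with deep-learning classifiers (an LSTM over the raw time series instead of flattened windows); only the model construction step changes, while the PR/ROC evaluation stays the same.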