A Systematic Approach for Evaluating Artificial Intelligence Models in Industrial Settings

Authors: Paul-Lou Benedick, Jérémy Robert, Yves Le Traon
Language: English
Year of publication: 2021
Subject:
Source: Sensors, Vol 21, Iss 18, p 6195 (2021)
Document type: article
ISSN: 1424-8220
DOI: 10.3390/s21186195
Description: Artificial Intelligence (AI) is one of the hottest topics in our society, especially when it comes to solving data-analysis problems. Industries are undergoing their digital shifts, and AI is becoming a cornerstone technology for making decisions out of the huge amount of (sensor-based) data available on the production floor. However, such technology may be disappointing when deployed in real conditions. Despite good theoretical performance and high accuracy when trained and tested in isolation, a Machine-Learning (M-L) model may deliver degraded performance in real conditions. One reason may be fragility in properly handling unexpected or perturbed data. The objective of this paper is therefore to study the robustness of seven M-L and Deep-Learning (D-L) algorithms when classifying univariate time series under perturbations. A systematic approach is proposed for artificially injecting perturbations into the data and for evaluating the robustness of the models. This approach focuses on two perturbations that are likely to occur during data collection. Our experimental study, conducted on twenty sensor datasets from the public University of California Riverside (UCR) repository, shows great disparity in the models' robustness under data-quality degradation. These results are used to analyse whether the impact of such perturbations can be predicted—using decision trees—which would spare us from testing all perturbation scenarios. Our study shows that building such a predictor is not straightforward and suggests that such a systematic approach is needed for evaluating the robustness of AI models.
Database: Directory of Open Access Journals
The full text is not displayed to unauthenticated users.