Author:
Alexandr Oblizanov, Natalya Shevskaya, Anatoliy Kazak, Marina Rudenko, Anna Dorofeeva
Language:
English
Year of publication:
2023
Subject:

Source:
Applied System Innovation, Vol 6, Iss 1, p 26 (2023)
Document type:
article
ISSN:
2571-5577
DOI:
10.3390/asi6010026
Description:
In recent years, artificial intelligence technologies have been advancing rapidly, and much research is aimed at the problem of explainable artificial intelligence (XAI). Various XAI methods are being developed to let users understand the logic of machine learning models, and comparing these methods requires evaluating them. The paper analyzes various approaches to the evaluation of XAI methods, defines the requirements for an evaluation system, and proposes metrics that capture various technical characteristics of the methods. A study using these metrics found that the explanation quality of the SHAP and LIME methods degrades as correlation in the input data increases. Recommendations are also given for further research on the practical implementation of the metrics and on expanding their scope of use.
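To make the kind of method the abstract refers to concrete, the following is a minimal LIME-style sketch in plain NumPy: it explains one prediction of a black-box model by fitting a locally weighted linear surrogate to perturbed samples. This is an illustrative assumption of how such local explanations work, not the paper's implementation or the official `lime` library API; the model, kernel, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical model to be explained: nonlinear in feature 0,
    # linear in feature 1, ignores feature 2.
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def lime_explain(f, x0, n_samples=5000, sigma=0.5, kernel_width=1.0):
    # 1. Perturb the instance of interest with Gaussian noise.
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.size))
    y = f(X)
    # 2. Weight samples by proximity to x0 via an RBF kernel.
    d2 = ((X - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 3. Fit a weighted linear surrogate (intercept + centered features)
    #    by solving the weighted normal equations.
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    AW = A * w[:, None]
    beta, *_ = np.linalg.lstsq(AW.T @ A, AW.T @ y, rcond=None)
    return beta[1:]  # local feature attributions (intercept dropped)

x0 = np.array([1.0, 2.0, 0.0])
attr = lime_explain(black_box, x0)
# The attributions approximate the local gradient of f at x0,
# which is roughly [2, 3, 0] for this model.
print(np.round(attr, 2))
```

Correlated inputs are exactly where such surrogates become unstable: the Gaussian perturbations ignore feature dependence, so off-manifold samples dominate the fit, which is consistent with the degradation the study reports for SHAP and LIME.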
Database:
Directory of Open Access Journals
External link:
