Achieving Explainability of Intrusion Detection System by Hybrid Oracle-Explainer Approach
Author: | Michal Choras, Mateusz Szczepanski, Marek Pawlicki, Rafał Kozik |
Year of publication: | 2020 |
Subject: | Computer science; Decision tree; Networking & telecommunications; Context (language use); Engineering and technology; Intrusion detection system; Oracle; Lead (geology); Debugging; Risk analysis (engineering); Electrical engineering, electronic engineering, information engineering; Artificial intelligence & image processing |
Source: | IJCNN |
DOI: | 10.1109/ijcnn48605.2020.9207199 |
Description: | With the progressing development and ubiquity of Artificial Intelligence (AI) observed over the last decade, the need for methods that are explainable and/or interpretable for humans has become a pressing matter. The ability to understand how a system makes a decision is necessary to build trust, settle issues of fairness, and debug a model. Although there are many techniques for gaining insight into a model's inner workings, they often come with a trade-off in the form of decreased accuracy. In the context of cybersecurity, where a single false negative can lead to a breach and the compromise of the whole system, such a price is unacceptable. Therefore, there is a need for a solution that allows for the maximum possible model performance while delivering human-understandable interpretations. Hybrid approaches to Explainable Artificial Intelligence (XAI) have the potential to achieve this goal. In this work, we present the fundamental concepts and a prototype of a system using such an architecture. |
Database: | OpenAIRE |
External link: |
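The abstract describes a hybrid architecture that pairs a high-performance black-box "oracle" with an explainer that supplies human-understandable interpretations of its decisions. Below is a minimal sketch of that idea, assuming (as the Decision tree subject tag suggests) a shallow surrogate decision tree in the explainer role; the random-forest oracle, the synthetic network-flow data, and all parameter choices are illustrative assumptions, not the paper's actual configuration.

```python
# Hybrid oracle-explainer sketch: an unconstrained "oracle" model makes the
# detection decision, while a surrogate decision tree is fitted to the
# oracle's outputs so its rules describe the oracle's behaviour.
# Models, data, and parameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Stand-in for network-flow features labelled benign (0) / attack (1).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Oracle: the high-performance detector, free of interpretability constraints.
oracle = RandomForestClassifier(n_estimators=200, random_state=0)
oracle.fit(X_train, y_train)

# Explainer: a shallow surrogate tree trained to mimic the oracle's
# predictions rather than the ground truth.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, oracle.predict(X_train))

print("oracle accuracy:   ", accuracy_score(y_test, oracle.predict(X_test)))
print("surrogate fidelity:", accuracy_score(oracle.predict(X_test),
                                            surrogate.predict(X_test)))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(10)]))
```

Training the surrogate on the oracle's predictions rather than the ground-truth labels is what makes its rules an explanation of the oracle itself; the fidelity score then quantifies how faithfully the readable model tracks the detector, without ever constraining the detector's accuracy.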