Des ontologies pour construire une architecture prédictive, classifier et expliquer (Ontologies to build a predictive architecture, classify and explain)
| Author: | Matthieu Bellucci, Nicolas Delestre, Nicolas Malandain, Cecilia Zanni-Merk |
|---|---|
| Contributors: | BELLUCCI, Matthieu |
| Language: | English |
| Year of publication: | 2022 |
| Subject: | [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]; [INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]; XAI; Ontology; [INFO.INFO-NE] Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]; [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG]; Classification |
| Source: | HAL |
| Description: | Explainable AI is gaining traction because of the widespread use of black-box models in industry. Many explanation methods have been proposed to explain models without affecting their design. The literature describes a new architecture in which an explainable model interacts with an explanation interface to generate explanations tailored to a user. Based on this architecture, we propose a novel image classification system that combines an ontology with machine learning models. It uses the ontology to add different labels to the same dataset and trains machine learning models to predict the class of an object as well as the different properties listed in the ontology. The outputs of these models are added to the ontology, and logical reasoning verifies that the predictions are consistent. The ontology can then be explored to understand a prediction and why it is or is not consistent. The system can warn the user when a prediction is uncertain, which helps users decide whether to trust it. |
| Database: | OpenAIRE |
| External link: | |
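
The description above outlines the core mechanism: the class and property predictions of the machine learning models are asserted into the ontology as facts about the observed object, and a logical reasoner then checks whether these facts are consistent with the ontology's axioms. Below is a minimal sketch of such a consistency check, assuming Python with the owlready2 library and a Java runtime for its bundled HermiT reasoner; the toy vehicle ontology, the `hasPart` property, and the `check_prediction` helper are illustrative assumptions, not the paper's actual ontology or code.

```python
# Minimal sketch of an ontology-based consistency check (toy axioms,
# not the ontology from the paper). Requires owlready2 and Java (HermiT).
from owlready2 import (Thing, Not, get_ontology, sync_reasoner,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("http://example.org/toy-vehicles.owl")

with onto:
    class Vehicle(Thing): pass
    class Bicycle(Vehicle): pass
    class Engine(Thing): pass
    class hasPart(Thing >> Thing): pass
    # Background knowledge: a bicycle has no engine.
    Bicycle.is_a.append(Not(hasPart.some(Engine)))


def check_prediction(predicted_class, predicted_parts):
    """Assert the ML outputs as an individual and test logical consistency."""
    with onto:
        obj = predicted_class("observed_object")
        for part_class in predicted_parts:
            obj.hasPart.append(part_class())
    try:
        # The reasoner raises an error if the assertions contradict the axioms.
        sync_reasoner(debug=0)
        return True
    except OwlReadyInconsistentOntologyError:
        return False


# The class model predicts "Bicycle" while a property model detects an engine:
# the reasoner flags the combination as inconsistent, i.e. the prediction is
# uncertain and the user should be warned.
print("prediction consistent:", check_prediction(Bicycle, [Engine]))
```

When such a check fails, the system described in this record would warn the user that the prediction is uncertain, and the ontology could be explored to see which axiom the combined predictions violate.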