Semantic Description of Explainable Machine Learning Workflows for Improving Trust
Author: Luiz Olavo Bonino da Silva Santos, Luis Ferreira Pires, Faiza Bukhsh, João Luiz Rebelo Moreira, Patricia Inoue Nakagawa
Contributors: Services, Cybersecurity & Safety; Digital Society Institute; Datamanagement & Biometrics
Year of publication: 2021
Subject: Explainable AI (XAI); machine learning; Support Vector Machine; ontology; semantic web technologies; interoperability; reuse; workflow; explanation module; healthcare data
Source: Applied Sciences, 11(22):10804 (2021); Politechnica University of Bucharest
ISSN: 2076-3417; 1454-5101
Description: Explainable Machine Learning comprises methods and techniques that enable users to better understand the functioning and results of machine learning models. This work proposes an ontology that represents explainable machine learning experiments, allowing data scientists and developers to gain a holistic view and a better understanding of the explainable machine learning process, and thereby to build trust. We developed the ontology by reusing an existing domain-specific ontology (ML-SCHEMA) and grounding it in the Unified Foundational Ontology (UFO), aiming at interoperability. The proposed ontology is structured in three modules: (1) the general module, (2) the specific module, and (3) the explanation module. The ontology was evaluated through a case study set in the COVID-19 pandemic, using sensitive healthcare data from patients. In the case study, we trained a Support Vector Machine to predict the mortality of patients infected with COVID-19 and applied existing explanation methods to generate explanations from the trained model. Based on the case study, we populated the ontology and queried it to verify that it fulfills its intended purpose and to demonstrate its suitability.
Database: OpenAIRE
External link:
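The workflow summarized in the description (train an SVM on patient data, then apply a model-agnostic explanation method) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic features (`age`, `spo2`, `comorbidities`) and the use of permutation importance as the explanation method are assumptions standing in for the paper's sensitive COVID-19 dataset and its chosen XAI techniques.

```python
# Hedged sketch of the abstract's workflow: SVM training plus a
# model-agnostic explanation step. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Illustrative patient features (hypothetical, not from the paper).
age = rng.uniform(20, 90, n)
spo2 = rng.uniform(70, 100, n)            # oxygen saturation (%)
comorbidities = rng.integers(0, 5, n)
X = np.column_stack([age, spo2, comorbidities])

# Synthetic mortality label loosely driven by age, low SpO2, comorbidities.
logit = 0.08 * (age - 60) - 0.15 * (spo2 - 92) + 0.4 * comorbidities - 1.0
y = (logit + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVC(kernel="rbf", random_state=0).fit(X_tr, y_tr)

# Explanation step: permutation importance on held-out data ranks how much
# shuffling each feature degrades the model's accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["age", "spo2", "comorbidities"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In the paper's setting, the trained model, the explanation outputs, and the experiment metadata would then be represented as instances of the ontology's general, specific, and explanation modules and queried to validate the workflow.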