On Interpretability and Similarity in Concept-Based Machine Learning
Author: Dmitry I. Ignatov, Léonard Kwuida
Year of Publication: 2021
Source: Lecture Notes in Computer Science, ISBN 9783030726096, AIST
DOI: 10.1007/978-3-030-72610-2_3
Description: Machine Learning (ML) provides important techniques for classification and prediction. Most of these are black-box models from the user's point of view and do not provide decision-makers with an explanation. For the sake of transparency and greater validity of decisions, the need to develop explainable/interpretable ML methods is gaining more and more importance. Certain questions need to be addressed: How does an ML procedure derive the class for a particular entity? Why does a particular clustering emerge from a particular unsupervised ML procedure? What can we do if the number of attributes is very large? What are the possible reasons for mistakes in concrete cases and models?
Database: OpenAIRE