Showing 1 - 10 of 53 for the search: '"Alexey Ignatiev"'
Author:
Joao Marques-Silva, Alexey Ignatiev
Published in:
Frontiers in Artificial Intelligence, Vol 6 (2023)
Recent years have witnessed a number of proposals for the use of so-called interpretable models in specific application domains, including high-risk as well as safety-critical domains. In contrast, other works reported some pitfalls of machine lear
External link:
https://doaj.org/article/fffc6a841f514a7f80af38a4fbcdd76f
Author:
Joao Marques-Silva, Alexey Ignatiev
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 36:12342-12350
The deployment of artificial intelligence (AI) systems in high-risk settings calls for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and regulations, as well as by recommendations from the OECD and UNESCO, a
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 36:3776-3785
Tree ensembles (TEs) are a prevalent machine learning model that does not offer guarantees of interpretability, which represents a challenge from the perspective of explainable artificial intelligence. Besides model-agnostic approaches, recent work pr
Author:
Aditya A. Shrotri, Nina Narodytska, Alexey Ignatiev, Kuldeep S Meel, Joao Marques-Silva, Moshe Y. Vardi
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 36:8304-8314
The need to understand the inner workings of opaque Machine Learning models has prompted researchers to devise various types of post-hoc explanations. A large class of such explainers proceeds in two phases: first perturb an input instance whose expla
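The two-phase scheme mentioned in this abstract (perturb the instance, then fit a simple surrogate) can be illustrated with a minimal sketch. The black-box predict function, the masking-based perturbation, and the linear surrogate below are illustrative assumptions, not the method of the cited paper.

```python
# Minimal sketch of a generic perturbation-based post-hoc explainer:
# phase 1 perturbs the instance, phase 2 fits an interpretable surrogate.
import numpy as np

def explain_instance(predict, x, n_samples=500, rng=None):
    """Return per-feature weights of a linear surrogate fitted around x."""
    rng = np.random.default_rng(rng)
    d = len(x)
    # Phase 1: perturb the instance by randomly switching features "off"
    # (here: replacing them with 0 as a stand-in for a background value).
    mask = rng.integers(0, 2, size=(n_samples, d))       # 1 = keep feature
    samples = mask * x                                    # perturbed inputs
    labels = np.array([predict(s) for s in samples])      # query the black box
    # Phase 2: fit an interpretable (linear) surrogate on the perturbed data.
    A = np.hstack([mask, np.ones((n_samples, 1))])        # add an intercept column
    weights, *_ = np.linalg.lstsq(A, labels, rcond=None)
    return weights[:-1]                                   # one weight per feature

# Toy black box: the class score depends mostly on feature 0.
if __name__ == "__main__":
    black_box = lambda z: float(3 * z[0] + 0.1 * z[2] > 1)
    print(explain_instance(black_box, np.array([1.0, 0.5, 2.0]), rng=0))
```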
Published in:
Journal of Artificial Intelligence Research. 72:1251-1279
Decision sets and decision lists are two of the most easily explainable machine learning models. Given the renewed emphasis on explainable machine learning decisions, both of these machine learning models are becoming increasingly attractive, as they
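As a loose illustration of why decision lists are considered easy to explain, a prediction can always be traced to the first rule that fires when the rules are read top to bottom. The rules and feature names below are made up for the example.

```python
# Illustrative decision-list classifier: an ordered list of if-then rules.
from typing import Callable, List, Tuple

Rule = Tuple[Callable[[dict], bool], str]   # (condition, predicted class)

def predict_decision_list(rules: List[Rule], default: str, sample: dict) -> Tuple[str, int]:
    """Return the predicted class and the index of the rule that fired."""
    for i, (condition, label) in enumerate(rules):
        if condition(sample):
            return label, i
    return default, -1   # no rule fired: fall through to the default class

# Hypothetical decision list for a toy loan-approval example.
rules = [
    (lambda s: s["income"] >= 60000 and s["debt"] < 10000, "approve"),
    (lambda s: s["defaults"] > 0, "reject"),
]
label, fired = predict_decision_list(rules, default="review",
                                     sample={"income": 40000, "debt": 5000, "defaults": 1})
print(label, "because rule", fired, "fired")   # -> reject because rule 1 fired
```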
Author:
Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin Cooper, Joao Marques-Silva
Published in:
International Journal of Approximate Reasoning. 159:108939
The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpr
Author:
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin Cooper, Nicholas Asher, Joao Marques-Silva
Published in:
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence
36th AAAI Conference on Artificial Intelligence (AAAI 2022), AAAI: Association for the Advancement of Artificial Intelligence, Feb 2022, virtual, pp.5719-5728, ⟨10.1609/aaai.v36i5.20514⟩
Compilation into propositional languages finds a growing number of practical uses, including in constraint programming, diagnosis and machine learning (ML), among others. One concrete example is the use of propositional langua
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b7de220643b2ac6040c2e83955b573f3
https://hal.science/hal-03873826/document
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models. The interpretability of decision trees motivates explainability approaches by so-called intrinsic interpretability, and it is at the core of recent proposal
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7abda006e20e79222f5d456bd20e6f6f
Published in:
Artificial Intelligence
Artificial Intelligence, Elsevier, 2021, 300, pp.1-59. ⟨10.1016/j.artint.2021.103552⟩
The paper describes the use of dual-rail MaxSAT systems to solve Boolean satisfiability (SAT), namely to determine if a set of clauses is satisfiable. The MaxSAT problem is the problem of satisfying the maximum number of claus
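A hedged sketch of the dual-rail MaxSAT encoding of SAT that the abstract refers to, written with the PySAT toolkit (WCNF and the RC2 MaxSAT solver). The example CNF and the variable-numbering convention below are assumptions made for illustration.

```python
# Dual-rail encoding of SAT as MaxSAT: each variable i gets p_i ("i is true")
# and n_i ("i is false"); the CNF is satisfiable iff the optimum MaxSAT cost
# equals the number of variables.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

def dual_rail_sat(clauses, n_vars):
    """Decide satisfiability of a CNF via its dual-rail MaxSAT encoding."""
    p = lambda i: i              # variable id used for p_i
    n = lambda i: n_vars + i     # variable id used for n_i
    wcnf = WCNF()
    for i in range(1, n_vars + 1):
        wcnf.append([-p(i), -n(i)])        # hard: a variable cannot take both values
        wcnf.append([p(i)], weight=1)      # soft: prefer p_i ...
        wcnf.append([n(i)], weight=1)      # soft: ... and n_i (at most one can hold)
    for cl in clauses:
        # hard: forbid the single dual-rail assignment that falsifies the clause
        wcnf.append([-n(lit) if lit > 0 else -p(-lit) for lit in cl])
    rc2 = RC2(wcnf)
    rc2.compute()
    cost = rc2.cost
    rc2.delete()
    # The cost is always >= n_vars; it equals n_vars exactly when F is satisfiable.
    return cost == n_vars

# (x1 v x2) & (-x1 v x2) & (x1 v -x2) & (-x1 v -x2) is unsatisfiable.
print(dual_rail_sat([[1, 2], [-1, 2], [1, -2], [-1, -2]], n_vars=2))  # -> False
print(dual_rail_sat([[1, 2], [-1, 2]], n_vars=2))                     # -> True
```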
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::06ae2fef8f05cf52cd43ee6681dadd67
https://hal.archives-ouvertes.fr/hal-03317630
Published in:
KR
18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021)
18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), Principles of Knowledge Representation and Reasoning, Incorporated (KR Inc.), Nov 2021, Hanoi (virtual), Vietnam
Recent work has not only shown that decision trees (DTs) may fail to be interpretable, but has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT. This paper shows that for a wide range of classifiers, g
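For readers unfamiliar with the term, a PI-explanation is a subset-minimal set of features that, fixed to their values in the given instance, is by itself sufficient for the prediction. The brute-force sketch below only illustrates this definition over tiny discrete domains; it is not the polynomial-time DT algorithm referenced in the abstract, and the toy classifier and domains are made up.

```python
# Deletion-based computation of one PI-explanation by exhaustive checking.
from itertools import product

def is_sufficient(predict, domains, instance, fixed):
    """Do all completions agreeing with `instance` on `fixed` get the same class?"""
    target = predict(instance)
    free = [f for f in domains if f not in fixed]
    for values in product(*(domains[f] for f in free)):
        candidate = dict(instance, **dict(zip(free, values)))
        if predict(candidate) != target:
            return False
    return True

def one_pi_explanation(predict, domains, instance):
    """Drop features one by one, keeping only those needed to fix the prediction."""
    explanation = set(domains)                  # start with all features fixed
    for feature in sorted(domains):
        if is_sufficient(predict, domains, instance, explanation - {feature}):
            explanation.remove(feature)         # feature is not needed: drop it
    return explanation

# Toy classifier standing in for a decision tree: predict 1 iff a == 1 and b == 1.
predict = lambda s: int(s["a"] == 1 and s["b"] == 1)
domains = {"a": [0, 1], "b": [0, 1], "c": [0, 1, 2]}
print(one_pi_explanation(predict, domains, {"a": 1, "b": 1, "c": 2}))  # -> {'a', 'b'}
```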