Showing 1 - 7 of 7 for search: '"Kanamori, Kentaro"'
This paper proposes a new algorithm for learning accurate tree-based models while ensuring the existence of recourse actions. Algorithmic Recourse (AR) aims to provide a recourse action for altering the undesired prediction result given by a model. …
External link:
http://arxiv.org/abs/2406.01098
Author:
Kanamori, Kentaro
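As a rough illustration of the recourse-action idea (not the algorithm proposed in the paper), the sketch below brute-forces small feature changes and keeps the cheapest one that flips a tree-based classifier's prediction; the data, the two features, and the L1 cost are invented for the example.

```python
# Minimal sketch of a recourse-action search (illustrative only, not the
# paper's method). We brute-force small feature perturbations and keep the
# cheapest one that flips a tree-based classifier's prediction.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))           # two hypothetical features
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

x = np.array([-1.0, -0.5])              # instance with an undesired prediction (class 0)
deltas = np.linspace(-2, 2, 21)         # candidate changes per feature
best = None
for d0, d1 in itertools.product(deltas, deltas):
    a = np.array([d0, d1])
    if clf.predict((x + a).reshape(1, -1))[0] == 1:   # desired class reached
        cost = np.abs(a).sum()                        # L1 "effort" cost of the action
        if best is None or cost < best[0]:
            best = (cost, a)

print("cheapest recourse action:", best)
```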
This paper proposes a new framework for learning a rule ensemble model that is both accurate and interpretable. A rule ensemble is an interpretable model based on a linear combination of weighted rules. In practice, we often face a trade-off between …
External link:
http://arxiv.org/abs/2306.11481
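To make the rule-ensemble idea concrete, here is a minimal sketch of how such a model scores an instance as a linear combination of weighted rule indicators; the rules, weights, and bias below are made up for illustration, not learned by the paper's framework.

```python
# Minimal sketch of rule-ensemble scoring (illustrative only). Each rule is a
# boolean condition on the input; the prediction score is the bias plus the
# sum of weights of the rules that fire.
rules = [
    (lambda x: x["age"] > 50,                     0.8),   # hypothetical rule 1
    (lambda x: x["bmi"] > 30,                     0.5),   # hypothetical rule 2
    (lambda x: x["age"] > 50 and x["bmi"] > 30,  -0.3),   # interaction rule
]
bias = -0.6

def score(x):
    # Linear combination of weighted rule indicators plus a bias term.
    return bias + sum(w for rule, w in rules if rule(x))

print(score({"age": 62, "bmi": 33}))   # 0.8 + 0.5 - 0.3 - 0.6 = 0.4
```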
This paper proposes a new framework for algorithmic recourse (AR) that works even in the presence of missing values. AR aims to provide a recourse action for altering the undesired prediction result given by a classifier. Existing AR methods assume …
External link:
http://arxiv.org/abs/2304.14606
Since Breiman's seminal 2001 paper, which pointed out a potential harm of prediction multiplicity from the viewpoint of explainable AI, global analysis of the collection of all good models, also known as a 'Rashomon set,' has attracted much attention …
External link:
http://arxiv.org/abs/2204.11285
Author:
Kanamori, Kentaro, Takagi, Takuya, Kobayashi, Ken, Ike, Yuichi, Uemura, Kento, Arimura, Hiroki
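As a toy illustration of a Rashomon set (not the paper's analysis), the sketch below enumerates all threshold classifiers whose empirical loss is within eps of the best one; the data, the hypothesis class, and the value of eps are invented for the example.

```python
# Minimal sketch of a Rashomon set over a toy hypothesis class: all threshold
# classifiers "predict 1 if x > t" whose empirical 0-1 loss is within eps of
# the best threshold's loss. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 300)
y = (X > 0.5).astype(int)
y ^= rng.random(300) < 0.1             # flip 10% of labels as noise

thresholds = np.linspace(0, 1, 101)    # hypothesis class: x > t
losses = np.array([np.mean((X > t).astype(int) != y) for t in thresholds])

eps = 0.02                             # tolerance around the best loss
rashomon = thresholds[losses <= losses.min() + eps]
print(f"best loss {losses.min():.3f}; Rashomon set spans "
      f"[{rashomon.min():.2f}, {rashomon.max():.2f}]")
```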
Post-hoc explanation methods for machine learning models have been widely used to support decision-making. One popular method is Counterfactual Explanation (CE), also known as Actionable Recourse, which provides a user with a perturbation vector …
External link:
http://arxiv.org/abs/2012.11782
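To make the CE idea concrete, here is a minimal sketch that finds a perturbation flipping a hypothetical linear classifier's prediction by gradient descent on a distance-plus-prediction-loss objective; the model weights, learning rate, and trade-off lam are assumptions for the example, and the paper's method is more sophisticated.

```python
# Minimal sketch of a counterfactual explanation for a linear classifier
# (illustrative only). We learn a perturbation "a" by gradient descent on
# the objective: -log p(desired class) + lam * ||a||^2.
import numpy as np

w, b = np.array([1.5, -2.0]), 0.3      # hypothetical logistic-regression model
sigmoid = lambda z: 1 / (1 + np.exp(-z))

x = np.array([-1.0, 1.0])              # currently predicted class 0
a = np.zeros_like(x)                   # perturbation ("action") to learn
lam = 0.1                              # weight of the distance penalty

for _ in range(500):
    p = sigmoid(w @ (x + a) + b)
    grad = -(1 - p) * w + 2 * lam * a  # gradient of the objective w.r.t. a
    a -= 0.1 * grad

print("counterfactual perturbation:", a.round(3),
      "new score:", sigmoid(w @ (x + a) + b).round(3))
```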
In conventional prediction tasks, a machine learning algorithm outputs a single best model that globally optimizes its objective function, which is typically accuracy. Users therefore cannot explicitly access the other models. In contrast, …
External link:
http://arxiv.org/abs/1906.01876
Counterfactual Explanation (CE) is a post-hoc explanation method that provides a perturbation for altering the prediction result of a classifier. Users can interpret the perturbation as an "action" to obtain their desired decision results. Existing CE …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e018ec9e082cf04fd4052f13b2d3af65
http://arxiv.org/abs/2304.14606