PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems

Authors: Azin Ghazimatin, Oana Balalau, Gerhard Weikum, Rishiraj Saha Roy
Contributors: Max Planck Institute for Informatics [Saarbrücken]; Département d'informatique de l'École polytechnique (X-DEP-INFO), École polytechnique (X); Rich Data Analytics at Cloud Scale (CEDAR); Laboratoire d'informatique de l'École polytechnique [Palaiseau] (LIX), Centre National de la Recherche Scientifique (CNRS), École polytechnique (X), Inria Saclay - Ile de France, Institut National de Recherche en Informatique et en Automatique (Inria). This work was partly supported by the ERC Synergy Grant 610150 (imPACT) and the DFG Collaborative Research Center 1223. We would like to thank Simon Razniewski from the MPI for Informatics for his insightful comments on the manuscript.
Language: English
Year of publication: 2019
Subject:
Source: WSDM 2020 - 13th ACM International Conference on Web Search and Data Mining (WSDM '20), Feb 2020, Houston, Texas, United States
Description: Interpretable explanations for recommender systems and other machine learning models are crucial to gain user trust. Prior works that focus on paths connecting users and items in a heterogeneous network have several limitations, such as surfacing relationships rather than true explanations, or disregarding other users' privacy. In this work, we take a fresh perspective and present PRINCE: a provider-side mechanism to produce tangible explanations for end-users, where an explanation is defined as a minimal set of actions performed by the user that, if removed, changes the recommendation to a different item. Given a recommendation, PRINCE uses a polynomial-time optimal algorithm to find this minimal set of a user's actions from an exponential search space, based on random walks over dynamic graphs. Experiments on two real-world datasets show that PRINCE provides more compact explanations than intuitive baselines, and insights from a crowdsourced user study demonstrate the viability of such action-based explanations. We thus posit that PRINCE produces scrutable, actionable, and concise explanations, owing to its use of counterfactual evidence, a user's own actions, and minimal sets, respectively.
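To make the notion of a counterfactual action-set explanation concrete, the following is a minimal sketch, not the authors' algorithm: it searches action subsets by brute force until removing one flips the top-ranked item, whereas PRINCE finds the minimal set optimally in polynomial time via random walks over dynamic graphs. The additive per-action scorer and all names (`top_item`, `counterfactual_explanation`, the sample data) are illustrative assumptions.

```python
# Illustrative sketch of a counterfactual explanation, NOT the PRINCE
# algorithm itself: brute-force search for the smallest set of user
# actions whose removal changes the top recommendation.
from itertools import combinations

def top_item(actions, scores):
    # Toy stand-in scorer: each item's score is the sum of the
    # contributions it receives from the user's remaining actions.
    totals = {}
    for a in actions:
        for item, w in scores[a].items():
            totals[item] = totals.get(item, 0.0) + w
    return max(totals, key=totals.get) if totals else None

def counterfactual_explanation(actions, scores, rec):
    # Try subsets in order of increasing size, so the first subset
    # whose removal flips the recommendation is a minimal one.
    for k in range(1, len(actions) + 1):
        for subset in combinations(actions, k):
            remaining = [a for a in actions if a not in subset]
            if top_item(remaining, scores) != rec:
                return set(subset)
    return None

# Hypothetical data: actions are past likes; scores[a][i] is the
# contribution of action a to candidate item i.
scores = {
    "liked_A": {"rec1": 0.9, "rec2": 0.1},
    "liked_B": {"rec1": 0.2, "rec2": 0.6},
    "liked_C": {"rec1": 0.1, "rec2": 0.3},
}
actions = list(scores)
rec = top_item(actions, scores)                           # "rec1"
print(counterfactual_explanation(actions, scores, rec))   # {'liked_A'}
```

Removing "liked_A" drops rec1's score from 1.2 to 0.3 while rec2 keeps 0.9, so the single action {"liked_A"} is a counterfactual explanation: "rec1 was recommended because you liked A."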
WSDM 2020, 9 pages
Database: OpenAIRE