Showing 1 - 10 of 643
for search: '"Lesot, P."'
Author:
Munro, Yann, Sarmiento, Camilo, Bloch, Isabelle, Bourgne, Gauvain, Pelachaud, Catherine, Lesot, Marie-Jeanne
An abstract argumentation framework is a commonly used formalism to provide a static representation of a dialogue. However, the order of enunciation of the arguments in an argumentative dialogue is very important and can affect the outcome of this dialogue…
External link:
http://arxiv.org/abs/2409.19625
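The abstract above relies on abstract argumentation frameworks: arguments plus an attack relation, evaluated under an acceptability semantics. A minimal sketch of the standard grounded semantics (a hypothetical illustration, not the authors' implementation) looks like this:

```python
# Abstract argumentation framework sketch (hypothetical example, not the
# paper's code): arguments plus an attack relation, with the grounded
# extension computed by iterating the characteristic function to its
# least fixed point.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework.

    attacks is a set of (attacker, target) pairs.
    """
    def defended(arg, s):
        # arg is defended by s if every attacker of arg is itself
        # attacked by some member of s
        return all(
            any((d, attacker) in attacks for d in s)
            for (attacker, target) in attacks if target == arg
        )

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Example: a attacks b, b attacks c; a is unattacked, so a is accepted,
# which reinstates c against b's attack.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

Such a static computation ignores the order in which arguments are enunciated, which is exactly the limitation the paper addresses.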
As Machine Learning (ML) models achieve unprecedented levels of performance, the XAI domain aims at making these models understandable by presenting end-users with intelligible explanations. Yet, some existing XAI approaches fail to meet expectations…
External link:
http://arxiv.org/abs/2405.13474
Author:
Bhan, Milan, Vittaut, Jean-Noel, Achache, Nina, Legrand, Victor, Chesneau, Nicolas, Blangero, Annabelle, Murris, Juliette, Lesot, Marie-Jeanne
Toxicity mitigation consists in rephrasing text in order to remove offensive or harmful meaning. Neural natural language processing (NLP) models have been widely used to target and mitigate textual toxicity. However, existing methods fail to detoxify…
External link:
http://arxiv.org/abs/2405.09948
Incorporating natural language rationales in the prompt and In-Context Learning (ICL) have led to a significant improvement of Large Language Models (LLMs) performance. However, generating high-quality rationales requires human annotation or the use of…
External link:
http://arxiv.org/abs/2402.12038
Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively. This paper addresses the challenge of understanding…
External link:
http://arxiv.org/abs/2309.17095
Author:
Laugel, Thibault, Jeyasothy, Adulam, Lesot, Marie-Jeanne, Marsala, Christophe, Detyniecki, Marcin
In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction…
External link:
http://arxiv.org/abs/2305.05840
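The counterfactual idea described above can be illustrated on a toy linear classifier (a hypothetical sketch, not the method of the paper): search for a small change to the most influential feature that flips the predicted class.

```python
# Toy counterfactual-explanation sketch (hypothetical illustration, not
# the authors' method): greedily perturb the most influential feature of
# a simple linear classifier until the predicted class flips.

def predict(x, w, b):
    # Linear classifier: class 1 iff w.x + b > 0, else class 0.
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

def counterfactual(x, w, b, step=0.1, max_steps=1000):
    """Return a modified copy of x whose prediction differs from x's."""
    target = 1 - predict(x, w, b)
    cf = list(x)
    # Move the score up to reach class 1, down to reach class 0.
    sign = 1 if target == 1 else -1
    # Perturb the feature with the largest absolute weight.
    i = max(range(len(w)), key=lambda j: abs(w[j]))
    for _ in range(max_steps):
        if predict(cf, w, b) == target:
            return cf
        cf[i] += sign * step * (1 if w[i] > 0 else -1)
    return None

x = [0.2, 0.1]          # original instance, predicted class 0
w, b = [1.0, 0.5], -0.5
cf = counterfactual(x, w, b)
print(predict(x, w, b), predict(cf, w, b))  # 0 1
```

The returned instance differs from the original in a single feature, mirroring the sparsity that counterfactual explanation methods typically aim for.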
Counterfactual examples explain a prediction by highlighting changes to an instance that flip the outcome of a classifier. This paper proposes TIGTEC, an efficient and modular method for generating sparse, plausible and diverse counterfactual explanations…
External link:
http://arxiv.org/abs/2304.12425
In the context of abstract argumentation, we present the benefits of considering temporality, i.e., the order in which arguments are enunciated, as well as causality. We propose a formal method to rewrite the concepts of acyclic abstract argumentation…
External link:
http://arxiv.org/abs/2303.09197
Author:
Jeyasothy, Adulam, Laugel, Thibault, Lesot, Marie-Jeanne, Marsala, Christophe, Detyniecki, Marcin
In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model. Integrating prior knowledge into such interpretability methods aims at improving the…
External link:
http://arxiv.org/abs/2204.11634
Published in:
Proc. of the Int. Conf. of the European Society for Fuzzy Logic and Technology (EUSFLAT2021), Sep 2021, Bratislava, Slovakia
Conceptual Graphs (CGs) are a formalism to represent knowledge. However, producing a CG database is complex. To the best of our knowledge, existing methods do not fully use the expressivity of CGs. This is particularly troublesome as it is necessary to…
External link:
http://arxiv.org/abs/2110.14287