Showing 1 - 10 of 316
for search: '"RUGGIERI, SALVATORE"'
In this paper, we focus on estimating the causal effect of an intervention over time on a dynamical system. To that end, we formally define causal interventions and their effects over time on discrete-time stochastic processes (DSPs). Then, we show…
External link:
http://arxiv.org/abs/2410.10502
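A minimal sketch related to the entry above, under strong simplifying assumptions (a toy linear DSP and a single hard intervention; this is not the paper's formalism or estimator): the effect of an intervention over time can be read off by contrasting simulated trajectories with and without it.

```python
# Illustrative only: toy discrete-time stochastic process X_t = 0.8*X_{t-1} + noise.
# The dynamics, noise scale, intervention time and value are assumptions of the sketch.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=30, n_runs=5000, intervene_at=None, value=2.0):
    """Roll out the process; optionally apply do(X_t = value) at one time step."""
    X = np.zeros((n_runs, T))
    for t in range(1, T):
        X[:, t] = 0.8 * X[:, t - 1] + rng.normal(0.0, 0.5, size=n_runs)
        if t == intervene_at:
            X[:, t] = value          # hard intervention: overwrite, ignoring parents
    return X

baseline = simulate()                      # observational trajectories
intervened = simulate(intervene_at=10)     # trajectories under do(X_10 = 2.0)

# Average effect of the intervention at each subsequent time step
effect_over_time = intervened.mean(axis=0) - baseline.mean(axis=0)
print(np.round(effect_over_time[10:16], 3))   # the effect decays as the process mixes
```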
We introduce an innovative approach to enhancing the empirical risk minimization (ERM) process in model training through a refined reweighting scheme of the training data to enhance fairness. This scheme aims to uphold the sufficiency rule in fairness…
External link:
http://arxiv.org/abs/2408.14126
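A hedged sketch of the general mechanics of "reweight the training data, then minimize the weighted empirical risk". The weighting below is a generic group-and-label scheme, not the paper's refined scheme, and the data are synthetic.

```python
# Illustrative only: generic (group, label) reweighting plugged into weighted ERM.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
A = rng.integers(0, 2, n)                        # assumed binary sensitive attribute
X = rng.normal(size=(n, 3)) + A[:, None]         # features correlated with A
y = (X[:, 0] + 0.5 * A + rng.normal(size=n) > 0.5).astype(int)

# Weight each (group, label) cell: expected share under independence of A and y,
# divided by its observed share (a Kamiran-Calders-style reweighing, not the paper's).
w = np.ones(n)
for a in (0, 1):
    for label in (0, 1):
        cell = (A == a) & (y == label)
        if cell.any():
            w[cell] = ((A == a).mean() * (y == label).mean()) / cell.mean()

clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)  # weighted ERM
print("mean weight per group:", w[A == 0].mean().round(3), w[A == 1].mean().round(3))
```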
Deferring systems extend supervised Machine Learning (ML) models with the possibility to defer predictions to human experts. However, evaluating the impact of a deferring strategy on system accuracy is still an overlooked area. This paper fills this gap…
External link:
http://arxiv.org/abs/2405.18902
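A hedged sketch of one simple deferring strategy (confidence thresholding with a simulated expert), only to illustrate how deferral changes system accuracy; the paper's evaluation methodology is not reproduced here.

```python
# Illustrative only: defer to a simulated human expert when model confidence is low.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=3000, n_features=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=2)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
model_pred = clf.predict(X_te)
confidence = clf.predict_proba(X_te).max(axis=1)

expert_acc = 0.95                                 # assumed expert accuracy
expert_pred = np.where(rng.random(len(y_te)) < expert_acc, y_te, 1 - y_te)

for tau in (0.0, 0.7, 0.9):                       # defer whenever confidence < tau
    defer = confidence < tau
    system_pred = np.where(defer, expert_pred, model_pred)
    print(f"tau={tau}: deferred={defer.mean():.2f}, "
          f"system accuracy={(system_pred == y_te).mean():.3f}")
```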
Author:
Alvarez, Jose M., Ruggieri, Salvatore
Testing for discrimination consists of deriving a profile, known as the comparator, similar to the profile making the discrimination claim, known as the complainant, and comparing the outcomes of these two profiles. An important aspect for establishing…
External link:
http://arxiv.org/abs/2405.13693
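A hedged sketch of the comparator idea described above: retrieve the profile most similar to the complainant from the other group and compare outcomes. The features, distance, and data are illustrative assumptions, not the paper's construction.

```python
# Illustrative only: nearest-neighbour comparator for a discrimination claim.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                     # candidate profiles
group = rng.integers(0, 2, 500)                   # protected attribute (0/1)
outcome = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Complainant: a group-0 individual with a negative outcome
complainant = np.flatnonzero((group == 0) & (outcome == 0))[0]

# Comparator: the most similar profile from the other group
other = np.flatnonzero(group == 1)
nn = NearestNeighbors(n_neighbors=1).fit(X[other])
_, idx = nn.kneighbors(X[complainant][None, :])
comparator = other[idx[0, 0]]

print("complainant outcome:", outcome[complainant])
print("comparator outcome: ", outcome[comparator])
# A more favourable outcome for a near-identical comparator is (informal)
# evidence in support of the claim.
```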
Author:
Alvarez, Jose M., Ruggieri, Salvatore
Perception occurs when two individuals interpret the same information differently. Despite being a known phenomenon with implications for bias in decision-making, as individual experience determines interpretation, perception remains largely overlooked…
External link:
http://arxiv.org/abs/2401.13408
Published in:
Journal of Data-centric Machine Learning Research (DMLR), Vol. 1 (17): 1-58, 2024
With the increasing deployment of machine learning models in many socially sensitive tasks, there is a growing demand for reliable and trustworthy predictions. One way to accomplish these requirements is to allow a model to abstain from making a prediction…
External link:
http://arxiv.org/abs/2401.12708
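A hedged sketch of one common way to let a model abstain (a confidence threshold calibrated on a validation split to reach a target coverage); the data, model, and 80% coverage target are illustrative, and this is not the paper's abstention mechanism.

```python
# Illustrative only: selective prediction with a reject option.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=12, random_state=4)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=4)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=4)

clf = RandomForestClassifier(random_state=4).fit(X_tr, y_tr)

target_coverage = 0.8
val_conf = clf.predict_proba(X_val).max(axis=1)
tau = np.quantile(val_conf, 1 - target_coverage)     # abstain below this confidence

te_conf = clf.predict_proba(X_te).max(axis=1)
accept = te_conf >= tau
pred = clf.predict(X_te)
print(f"coverage={accept.mean():.2f}  "
      f"selective accuracy={(pred[accept] == y_te[accept]).mean():.3f}  "
      f"accuracy without abstention={(pred == y_te).mean():.3f}")
```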
Author:
Setzu, Mattia, Ruggieri, Salvatore
Decision Trees are accessible, interpretable, and well-performing classification models. A plethora of variants with increasing expressiveness has been proposed in the last forty years. We contrast the two families of univariate DTs, whose split functions…
External link:
http://arxiv.org/abs/2312.01884
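A hedged sketch of the contrast between the two split families: a univariate (axis-parallel) split tests a single feature against a threshold, while a multivariate (oblique) split tests a linear combination of features. The models and data below are illustrative, not the paper's experimental setup.

```python
# Illustrative only: univariate vs. multivariate (oblique) split at the root.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=2, n_informative=2,
                           n_redundant=0, random_state=5)

# Univariate split: the root test of an axis-parallel decision tree
uni = DecisionTreeClassifier(max_depth=1, random_state=5).fit(X, y)
f, t = uni.tree_.feature[0], uni.tree_.threshold[0]
print(f"univariate split:    x[{f}] <= {t:.3f}")

# Multivariate split: a hyperplane w.x + b <= 0 learned by a linear model
lin = LogisticRegression().fit(X, y)
w, b = lin.coef_[0], lin.intercept_[0]
print(f"multivariate split:  {w[0]:.3f}*x[0] + {w[1]:.3f}*x[1] + {b:.3f} <= 0")

print(f"depth-1 univariate accuracy:   {uni.score(X, y):.3f}")
print(f"single oblique split accuracy: {lin.score(X, y):.3f}")
```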
The importance of achieving fairness in machine learning models cannot be overstated. Recent research has pointed out that fairness should be examined from a causal perspective, and several fairness notions based on Pearl's causal framework have…
External link:
http://arxiv.org/abs/2311.10512
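A hedged sketch of what "examining fairness from a causal perspective" can look like in practice: on a toy structural causal model, hold the exogenous noise fixed, flip the sensitive attribute, and count how often the prediction changes (a counterfactual-style check). The SCM, predictor, and coefficients are assumptions of the sketch, not notions defined in the paper.

```python
# Illustrative only: counterfactual-style fairness check on a toy SCM.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
U = rng.normal(size=n)                           # exogenous noise (latent factor)
A = rng.integers(0, 2, n)                        # sensitive attribute
eps = rng.normal(scale=0.3, size=n)
X_med = U + 0.8 * A + eps                        # mediator causally affected by A
Y = (U + 0.5 * A > 0).astype(int)

clf = LogisticRegression().fit(np.column_stack([A, X_med]), Y)

# Counterfactual world: same U and eps, but A flipped (mediator recomputed)
A_cf = 1 - A
X_med_cf = U + 0.8 * A_cf + eps
pred = clf.predict(np.column_stack([A, X_med]))
pred_cf = clf.predict(np.column_stack([A_cf, X_med_cf]))
print("share of predictions that flip under the counterfactual:",
      (pred != pred_cf).mean())
```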
Explaining opaque Machine Learning (ML) models is an increasingly relevant problem. Current explainable AI (XAI) methods suffer from several shortcomings, among others an insufficient incorporation of background knowledge, and a lack of abstraction and…
External link:
http://arxiv.org/abs/2309.00422
In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, discriminative…
External link:
http://arxiv.org/abs/2308.15194
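A hedged sketch of the simplest possible counterfactual explainer, the nearest known instance with a different predicted class, just to make the object under discussion concrete; real explainers optimize the properties listed above, and the data, model, and distance here are illustrative.

```python
# Illustrative only: nearest-unlike-neighbour counterfactual explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=7)
clf = RandomForestClassifier(random_state=7).fit(X, y)

x = X[0]                                         # instance to explain
pred = clf.predict(x[None, :])[0]

# Candidate counterfactuals: known instances the model classifies differently
candidates = X[clf.predict(X) != pred]
counterfactual = candidates[np.linalg.norm(candidates - x, axis=1).argmin()]

print("factual prediction:       ", pred)
print("counterfactual prediction:", clf.predict(counterfactual[None, :])[0])
print("feature changes:          ", np.round(counterfactual - x, 2))
```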