Showing 1 - 10 of 23 for search: '"Grari, Vincent"'
In this paper, we introduce a novel post-processing algorithm that is both model-agnostic and does not require the sensitive attribute at test time. In addition, our algorithm is explicitly designed to enforce minimal changes between biased and debiased…
External link:
http://arxiv.org/abs/2408.15096
Authors:
Grari, Vincent, Detyniecki, Marcin
This paper presents a novel approach to optimizing profit margins in non-life insurance markets through a gradient-descent-based method, targeting three key objectives: 1) maximizing profit margins, 2) ensuring conversion rates, and 3) enforcing fair…
External link:
http://arxiv.org/abs/2404.10275
Authors:
Grari, Vincent, Laugel, Thibault, Hashimoto, Tatsunori, Lamprier, Sylvain, Detyniecki, Marcin
In the field of algorithmic fairness, significant attention has been paid to group fairness criteria, such as Demographic Parity and Equalized Odds. Nevertheless, these objectives, measured as global averages, have raised concerns about persistent local…
External link:
http://arxiv.org/abs/2310.18413
Most research on fair machine learning has prioritized optimizing criteria such as Demographic Parity and Equalized Odds. Despite these efforts, there remains a limited understanding of how different bias mitigation strategies affect individual predictions…
External link:
http://arxiv.org/abs/2302.07185
At the core of the insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries, therefore, use ec…
External link:
http://arxiv.org/abs/2202.12008
Published in:
IJCAI 2022
In recent years, most fairness strategies in machine learning models have focused on mitigating unwanted biases by assuming that the sensitive information is observed. However, this is not always possible in practice. Due to privacy purposes and various reg…
External link:
http://arxiv.org/abs/2109.04999
In recent years, significant work has been done to include fairness constraints in the training objective of machine learning algorithms. Many state-of-the-art algorithms tackle this challenge by learning a fair representation which captures all the…
External link:
http://arxiv.org/abs/2009.03183
In recent years, fairness has become an important topic in the machine learning research community. In particular, counterfactual fairness aims at building prediction models which ensure fairness at the most individual level. Rather than globally con…
External link:
http://arxiv.org/abs/2008.13122
Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees even though they have proven very ef…
External link:
http://arxiv.org/abs/1911.05369
The past few years have seen a dramatic rise of academic and societal interest in fair machine learning. While plenty of fair algorithms have been proposed recently to tackle this challenge for discrete variables, only a few ideas exist for continuous…
External link:
http://arxiv.org/abs/1911.04929