Showing 1 - 6 of 6 for search: '"Kumar, I. Elizabeth"'
In fair machine learning, one source of performance disparities between groups is over-fitting to groups with relatively few training samples. We derive group-specific bounds on the generalization error of welfare-centric fair machine learning that b…
External link:
http://arxiv.org/abs/2402.18803
Published in:
AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
Credit is an essential component of financial wellbeing in America, and unequal access to it is a large factor in the economic disparities between demographic groups that exist today. Today, machine learning algorithms, sometimes trained on alternati…
External link:
http://arxiv.org/abs/2210.02516
Published in:
2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22)
Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This lead…
External link:
http://arxiv.org/abs/2206.09511
Author:
Hancox-Li, Leif, Kumar, I. Elizabeth
As the public seeks greater accountability and transparency from machine learning algorithms, the research literature on methods to explain algorithms and their outputs has rapidly expanded. Feature importance methods form a popular class of explanat…
External link:
http://arxiv.org/abs/2101.12737
Game-theoretic formulations of feature importance have become popular as a way to "explain" machine learning models. These methods define a cooperative game between the features of a model and distribute influence among these input elements using som…
External link:
http://arxiv.org/abs/2002.11097
Published in:
Ohio State Law Journal; 2024, Vol. 85 Issue 3, p415-470, 56p
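The last two arXiv entries above concern game-theoretic (Shapley-style) feature importance, which treats a model's features as players in a cooperative game and credits each with its average marginal contribution over all orderings. The sketch below illustrates only this general idea, not the papers' specific methods; the feature names and the toy value function `v` (a sum of per-feature effects plus one interaction) are illustrative assumptions:

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values for a small cooperative game.

    features: list of player (feature) names.
    value: function mapping a frozenset of features to a payoff.
    Each feature's Shapley value is its marginal contribution
    averaged over every ordering of the players.
    """
    totals = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = frozenset()
        for f in order:
            with_f = coalition | {f}
            totals[f] += value(with_f) - value(coalition)
            coalition = with_f
    return {f: totals[f] / len(orderings) for f in features}

# Toy value function (an assumption for illustration): coalition payoff is
# the sum of fixed per-feature effects, plus a bonus when x1 and x2 co-occur.
effects = {"x1": 2.0, "x2": 1.0, "x3": 0.5}

def v(coalition):
    out = sum(effects[f] for f in coalition)
    if {"x1", "x2"} <= coalition:
        out += 1.0  # the interaction credit ends up split between x1 and x2
    return out

print(shapley_values(["x1", "x2", "x3"], v))
```

Because every ordering is enumerated, this exact computation scales factorially in the number of features; practical explainers approximate the same quantity by sampling orderings.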