Showing 1 - 10 of 81 for search: '"De-Arteaga, Maria"'
Biased human decisions have consequential impacts across various domains, yielding unfair treatment of individuals and resulting in suboptimal outcomes for organizations and society. In recognition of this fact, organizations regularly design and deploy…
External link:
http://arxiv.org/abs/2411.18122
In algorithmic toxicity detection pipelines, it is important to identify which demographic group(s) are the subject of a post, a task commonly known as target (group) detection. While accurate detection is clearly important, we further advocate…
External link:
http://arxiv.org/abs/2407.11933
The pervasive spread of misinformation and disinformation poses a significant threat to society. Professional fact-checkers play a key role in addressing this threat, but the vast scale of the problem forces them to prioritize their limited resources…
External link:
http://arxiv.org/abs/2401.16558
In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent…
External link:
http://arxiv.org/abs/2310.13007
Growing concerns regarding algorithmic fairness have led to a surge in methodologies to mitigate algorithmic bias. However, such methodologies largely assume that observed labels in training data are correct. This is problematic because bias in labels…
External link:
http://arxiv.org/abs/2307.08945
Author:
Tahaei, Mohammad, Constantinides, Marios, Quercia, Daniele, Kennedy, Sean, Muller, Michael, Stumpf, Simone, Liao, Q. Vera, Baeza-Yates, Ricardo, Aroyo, Lora, Holbrook, Jess, Luger, Ewa, Madaio, Michael, Blumenfeld, Ilana Golbin, De-Arteaga, Maria, Vitak, Jessica, Olteanu, Alexandra
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence. While different research communities may use different terminology to discuss similar topics, all of this work is ultimately…
External link:
http://arxiv.org/abs/2302.08157
Published in:
Proceedings of the Web Conference, WWW 2023
Algorithmic bias often arises as a result of differential subgroup validity, in which predictive relationships vary across groups. For example, in toxic language detection, comments targeting different demographic groups can vary markedly across groups…
External link:
http://arxiv.org/abs/2302.07372
Author:
Gao, Ruijiang, Saar-Tsechansky, Maytal, De-Arteaga, Maria, Han, Ligong, Sun, Wei, Lee, Min Kyung, Lease, Matthew
Human-AI complementarity is important when neither the algorithm nor the human yields dominant performance across all instances in a given context. Recent work that explored human-AI collaboration has considered decisions that correspond to classification…
External link:
http://arxiv.org/abs/2302.02944
In this work, we study the effects of feature-based explanations on distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated…
External link:
http://arxiv.org/abs/2209.11812
Machine learning risks reinforcing biases present in data, and, as we argue in this work, in what is absent from data. In healthcare, biases have marked medical history, leading to unequal care affecting marginalised groups. Patterns in missing data…
External link:
http://arxiv.org/abs/2208.06648