Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

Author: Schoeffer, Jakob, De-Arteaga, Maria, Kuehl, Niklas
Publication year: 2022
Subject:
Document type: Working Paper
DOI: 10.1145/3613904.3642621
Description: In this work, we study the effects of feature-based explanations on the distributive fairness of AI-assisted decisions, specifically focusing on the task of predicting occupations from short textual bios. We also investigate how any effects are mediated by humans' fairness perceptions and their reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, we see that such explanations do not enable humans to discern correct from incorrect AI recommendations. Instead, we show that they may affect reliance irrespective of the correctness of AI recommendations. Depending on which features an explanation highlights, this can foster or hinder distributive fairness: when explanations highlight features that are task-irrelevant and evidently associated with the sensitive attribute, this prompts overrides that counter AI recommendations aligned with gender stereotypes. Meanwhile, if explanations appear task-relevant, this induces reliance behavior that reinforces stereotype-aligned errors. These results imply that feature-based explanations are not a reliable mechanism to improve distributive fairness.
Comment: ACM CHI Conference on Human Factors in Computing Systems (CHI '24)
Database: arXiv