Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics
Author: | Augustin Chaintreau, Daniel Hsu, Nakul Verma, Bo Cowgill, Fabrizio Dell'Acqua, Samuel Deng |
---|---|
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences, FOS: Economics and business, Computer Science - Computers and Society (cs.CY), Economics - General Economics (econ.GN), Machine learning, Artificial intelligence, Predictive analytics, AI ethics, Operationalization, Audit, Incentive, Human capital, Standardized test, Psychological testing, Psychological intervention, Programmer, Software, Education, Actuarial science |
Source: | EC |
Description: | Why do biased predictions arise? What interventions can prevent them? We evaluate 8.2 million algorithmic predictions of math performance from $\approx$400 AI engineers, each of whom developed an algorithm under a randomly assigned experimental condition. Our treatment arms modified programmers' incentives, training data, awareness, and/or technical knowledge of AI ethics. We then assess out-of-sample predictions from their algorithms using randomized audit manipulations of algorithm inputs and ground-truth math performance for 20K subjects. We find that biased predictions are mostly caused by biased training data. However, one-third of the benefit of better training data comes through a novel economic mechanism: engineers exert greater effort and are more responsive to incentives when given better training data. We also assess how performance varies with programmers' demographic characteristics and with their scores on a psychological test of implicit bias (IAT) concerning gender and careers. We find no evidence that female, minority, or low-IAT engineers exhibit less bias or discrimination in their code. However, we do find that prediction errors are correlated within demographic groups, which creates performance improvements through cross-demographic averaging. Finally, we quantify the benefits and tradeoffs of practical managerial or policy interventions, such as technical advice, simple reminders, and improved incentives, for decreasing algorithmic bias. Part of the Navigating the Broader Impacts of AI Research Workshop at NeurIPS 2020 |
Database: | OpenAIRE |
External link: |