Assessing gender bias in machine translation: a case study with Google Translate
Author: | Luis C. Lamb, Pedro H. C. Avelar, Marcelo O. R. Prates |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL); Computer Science - Computers and Society (cs.CY); Machine translation; Debiasing; Artificial Intelligence; Yoruba; Scientific literature; Cognitive psychology; Software |
Source: | Neural Computing and Applications. 32:6363-6381 |
ISSN: | 1433-3058, 0941-0643 |
Description: | Recently, there has been growing concern about machine bias, where trained statistical models come to reflect controversial societal asymmetries, such as gender or racial bias. A significant number of AI tools have recently been reported to be harmfully biased towards some minority, with accounts of racist criminal-behavior predictors, the iPhone X failing to differentiate between two Asian people, and Google Photos mistakenly classifying black people as gorillas. Although a systematic study of such biases can be difficult, we believe that automated translation tools can be exploited through gender-neutral languages to yield a window into the phenomenon of gender bias in AI. In this paper, we start with a comprehensive list of job positions from the U.S. Bureau of Labor Statistics (BLS) and use it to build sentences with constructions like "He/She is an Engineer" in 12 gender-neutral languages, such as Hungarian, Chinese, and Yoruba. We translate these sentences into English using the Google Translate (GT) API and collect statistics on the frequency of female, male, and gender-neutral pronouns in the translated output. We show that GT exhibits a strong tendency towards male defaults, in particular for fields linked to an unbalanced gender distribution, such as STEM jobs. We then compare these statistics against BLS data on the frequency of female participation in each job position, showing that GT fails to reproduce the real-world distribution of female workers. We provide experimental evidence that, even if one does not in principle expect a 50:50 pronominal gender distribution, GT yields male defaults much more frequently than demographic data alone would predict. We are hopeful that this work will ignite a debate about the need to augment current statistical translation tools with the debiasing techniques that can already be found in the scientific literature. (A toy sketch of the pronoun-tallying step appears after this record.) Comment: Accepted for publication in Neural Computing and Applications; 33 pages, 14 figures, 12 tables |
Database: | OpenAIRE |
External link: |
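A minimal, illustrative sketch of the measurement step described in the abstract, assuming the translations have already been collected. All names below are hypothetical, not the authors' code; a real run would query a translation API (e.g., Google Translate) instead of the mocked dictionary shown here.

```python
# Illustrative sketch: classify English translations of gender-neutral
# source sentences by the pronoun the translator chose, then tally.
import re
from collections import Counter

# Word-boundary patterns for the three pronoun classes counted in the paper.
PRONOUN_PATTERNS = {
    "female": re.compile(r"\bshe\b", re.IGNORECASE),
    "male": re.compile(r"\bhe\b", re.IGNORECASE),
    "neutral": re.compile(r"\bthey\b", re.IGNORECASE),
}

def classify(translation: str) -> str:
    """Label a translated sentence by the gendered pronoun it contains."""
    for label, pattern in PRONOUN_PATTERNS.items():
        if pattern.search(translation):
            return label
    return "unknown"

# Mocked translator output keyed by occupation; a real pipeline would build
# "He/She is a <job>" sentences in each gender-neutral source language and
# send them to a translation API at this point.
translations = {
    "engineer": "He is an engineer.",
    "nurse": "She is a nurse.",
    "teacher": "He is a teacher.",
    "baker": "They are a baker.",
}

counts = Counter(classify(t) for t in translations.values())
total = sum(counts.values())
for label in ("female", "male", "neutral"):
    print(f"{label}: {counts[label]}/{total} ({100 * counts[label] / total:.0f}%)")
```

The paper's actual analysis goes further, aggregating such counts per source language and per BLS occupation category and comparing the resulting male-default rate with female-participation figures from BLS demographic data.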