Bias, awareness, and ignorance in deep-learning-based face recognition
Author: | Stefan Glüge, Corinna Hertweck, Mohammadreza Amirian, Thilo Stadelmann, Samuel Wehrli |
Contributors: | University of Zurich |
Language: | English |
Year of publication: | 2021 |
Subject: | Blinding; Fairness; Ignorance; Discrimination; Ethnic bias; Gender bias; Facial recognition; Convolutional neural networks; Deep learning; 000: Computer science, knowledge & systems; 006: Spezielle Computerverfahren; 170: Ethik |
Description: | Face Recognition (FR) is increasingly influencing our lives: we use it to unlock our phones; the police use it to identify suspects. Two main concerns are associated with this increased use of facial recognition: (1) the fact that these systems are typically less accurate for marginalized groups, which can be described as "bias", and (2) the increased surveillance through these systems. Our paper is concerned with the first issue. Specifically, we explore an intuitive technique for reducing this bias, namely "blinding" models to sensitive features, such as gender or race, and show why this cannot be equated with reducing bias. Even when not designed for this task, facial recognition models can deduce sensitive features, such as gender or race, from pictures of faces, simply because they are trained to determine the "similarity" of pictures. This means that people with similar skin tones, similar hair length, etc. will be seen as similar by facial recognition models. When confronted with biased decision-making by humans, one approach taken in job application screening is to "blind" the human decision-makers to sensitive attributes such as gender and race by not showing pictures of the applicants. Based on a similar idea, one might think that if facial recognition models were less aware of these sensitive features, the difference in accuracy between groups would decrease. We evaluate this assumption, which has already penetrated the scientific literature as a valid de-biasing method, by measuring how "aware" models are of sensitive features and correlating this with differences in accuracy. In particular, we blind pre-trained models to make them less aware of sensitive attributes. We find that awareness and accuracy do not positively correlate, i.e., that bias ≠ awareness. In fact, blinding barely affects accuracy in our experiments. The seemingly simple solution of decreasing bias in facial recognition accuracy by reducing awareness of sensitive features thus does not work in practice: trying to ignore sensitive attributes is not a viable concept for less biased FR. |
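The abstract does not specify how awareness is measured or how the pre-trained models are blinded. The snippet below is a minimal, self-contained sketch of one plausible reading, not the authors' pipeline: awareness is approximated by the test accuracy of a linear probe predicting a sensitive attribute from embeddings, and blinding by projecting the probe's direction out of the embeddings. The synthetic embeddings, the binary attribute, and the functions `awareness` and `blind` are all assumptions introduced for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for embeddings from a pre-trained FR model, with a binary sensitive
# attribute (e.g., gender, coded 0/1) partly leaking into one embedding dimension.
# Real experiments would use actual face embeddings and annotated attributes.
n, d = 2000, 128
attr = rng.integers(0, 2, size=n)        # hypothetical sensitive-attribute labels
emb = rng.normal(size=(n, d))            # hypothetical face embeddings
emb[:, 0] += 1.5 * attr                  # inject attribute information

def awareness(X, y):
    """'Awareness' proxy: test accuracy of a linear probe predicting the attribute."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te), probe.coef_[0]

def blind(X, direction):
    """'Blinding' proxy: project the probe direction out of every embedding."""
    u = direction / np.linalg.norm(direction)
    return X - np.outer(X @ u, u)

acc_before, w = awareness(emb, attr)
emb_blinded = blind(emb, w)
acc_after, _ = awareness(emb_blinded, attr)
print(f"probe accuracy before blinding: {acc_before:.2f}, after: {acc_after:.2f}")
```

In the setting the abstract describes, the bias side of the comparison would additionally require measuring the FR accuracy gap between attribute groups on the original and blinded embeddings and correlating it with the awareness scores; the sketch above covers only the awareness and blinding steps.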
Database: | OpenAIRE |
External link: |