Source: |
Markl, N 2022, 'Language variation and algorithmic bias: understanding algorithmic bias in British English automatic speech recognition', in Proceedings of the 2022 5th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2022), Seoul, Republic of Korea, 21/06/22, pp. 521-534. https://doi.org/10.1145/3531146.3533117 |
Description: |
All language is characterised by variation, which language users employ to construct complex social identities and express social meaning. Like other machine learning technologies, speech and language technologies (re)produce structural oppression when they perform worse for marginalised language communities. Using knowledge and theories from sociolinguistics, I explore why commercial automatic speech recognition systems and other language technologies perform significantly worse for already marginalised populations, such as second-language speakers and speakers of stigmatised varieties of English in the British Isles. Situating language technologies within the broader scholarship around algorithmic bias, I consider the allocative and representational harms they can cause even (and perhaps especially) in systems which do not exhibit predictive bias, narrowly defined as differential performance between groups. This raises the question of whether addressing or “fixing” this “bias” is always equivalent to mitigating the harms algorithmic systems can cause, in particular to marginalised communities. |