Algorithmic Transference: People Overgeneralize Failures of AI in the Government
Authors: Chiara Longoni, Luca Cian, Ellie J. Kyung
Year of publication: 2022
Subject:
Source: Journal of Marketing Research, 60:170-188
ISSN: 1547-7193, 0022-2437
Description: Artificial intelligence (AI) is pervading government and transforming how public services are provided to consumers across policy areas spanning the allocation of government benefits, law enforcement, risk monitoring, and the provision of services. Despite technological improvements, AI systems are fallible and may err. How do consumers respond when learning of AI failures? In 13 preregistered studies (N = 3,724) across a range of policy areas, the authors show that algorithmic failures are generalized more broadly than human failures. This effect is termed "algorithmic transference," as it is an inferential process that generalizes (i.e., transfers) information about one member of a group to another member of that same group. Rather than reflecting generalized algorithm aversion, algorithmic transference is rooted in social categorization: it stems from how people perceive a group of AI systems versus a group of humans. Because AI systems are perceived as more homogeneous than people, failure information about one AI algorithm is transferred to another algorithm to a greater extent than failure information about one person is transferred to another person. Capturing AI's impact on consumers and societies, these results show how the premature or mismanaged deployment of faulty AI technologies may undermine the very institutions that AI systems are meant to modernize.
Database: OpenAIRE
External link: