Exploring the impact of classification probabilities on users' trust in ambiguous instances

Authors: Hélio Lopes, Dalai dos Santos Ribeiro, Simone Diniz Junqueira Barbosa, Marisa Do Carmo Silva, Gabriel Diniz Junqueira Barbosa
Year: 2021
Source: VL/HCC
DOI: 10.1109/vl/hcc51201.2021.9576291
Description: The large-scale adoption of systems that automate classification using Machine Learning (ML) algorithms raises pressing challenges, as these systems support or make decisions with profound consequences for human beings. It is important to understand how ML models' suggestions affect users' trust, even when those models are wrong. Many research efforts have focused on the user's ability to interpret what a model has learned. In this paper, we seek to understand another aspect of ML interpretability: how the presence of classification probabilities affects users' trust in the model outcomes, especially in ambiguous instances. To this end, we conducted an online survey in which we asked participants to evaluate their agreement with an automatic classification made by an ML model, both before and after showing them the model's classification probabilities. Surprisingly, we found that, in ambiguous instances, respondents agreed more with incorrect model outcomes than with correct ones, a result that calls for further analysis.
Database: OpenAIRE