No Explainability without Accountability
Author: | Jordan Boyd-Graber, Melissa Birchfield, Tongshuang Wu, Alison Smith-Renner, Ron Fan, Daniel S. Weld, Leah Findlater |
Year of publication: | 2020 |
Subject: | Unintended consequences; Model correction; Empirical research; Perception; Accountability; Introspection; Quality; User feedback; Human factors; Cognitive psychology |
Source: | CHI |
DOI: | 10.1145/3313831.3376624 |
Description: | Automatically generated explanations of how machine learning (ML) models reason can help users understand and accept them. However, explanations can have unintended consequences: they may promote over-reliance or undermine trust. This paper investigates how explanations shape users' perceptions of ML models, with or without the ability to provide feedback to them: (1) does revealing model flaws increase users' desire to "fix" them? (2) does providing explanations cause users to believe, wrongly, that models are introspective and will thus improve over time? Through two controlled experiments that varied model quality, we show how the combination of explanations and user feedback affected perceptions such as frustration and expectations of model improvement. Explanations without an opportunity for feedback were frustrating with the lower-quality model, while interactions between explanation and feedback for the higher-quality model suggest that detailed feedback should not be requested without explanation. Users expected the model to be corrected, regardless of whether they provided feedback or received explanations. |
Database: | OpenAIRE |
External link: |