Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior
Author: | Boggust, Angie; Hoover, Benjamin; Satyanarayan, Arvind; Strobelt, Hendrik |
Publication Year: | 2021 |
Document Type: | Working Paper |
Description: | Saliency methods -- techniques to identify the importance of input features on a model's output -- are a common step in understanding neural network behavior. However, interpreting saliency requires tedious manual inspection to identify and aggregate patterns in model behavior, resulting in ad hoc or cherry-picked analysis. To address these concerns, we present Shared Interest: metrics for comparing model reasoning (via saliency) to human reasoning (via ground truth annotations). By providing quantitative descriptors, Shared Interest enables ranking, sorting, and aggregating inputs, thereby facilitating large-scale systematic analysis of model behavior. We use Shared Interest to identify eight recurring patterns in model behavior, such as cases where contextual features or a subset of ground truth features are most important to the model. Working with representative real-world users, we show how Shared Interest can be used to decide if a model is trustworthy, uncover issues missed in manual analyses, and enable interactive probing. Comment: 17 pages, 10 figures. Published in CHI 2022. For more details, see http://shared-interest.csail.mit.edu |
Database: | arXiv |
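The description above characterizes Shared Interest as comparing saliency against ground truth annotations to produce quantitative descriptors for ranking and aggregating inputs. As a rough illustration only, the sketch below computes simple set-overlap scores between a binary saliency mask and a binary ground-truth mask; the function name, metric names, and formulas here are assumptions inferred from the abstract, not the paper's actual definitions -- see the project page at http://shared-interest.csail.mit.edu for the authors' formulation.

```python
import numpy as np

def shared_interest_scores(saliency_mask: np.ndarray,
                           ground_truth_mask: np.ndarray) -> dict:
    """Hypothetical sketch: compare a binary saliency mask against a binary
    ground-truth annotation mask using simple set-overlap ratios."""
    s = saliency_mask.astype(bool)
    g = ground_truth_mask.astype(bool)
    intersection = np.logical_and(s, g).sum()
    union = np.logical_or(s, g).sum()
    return {
        # Overall agreement between salient and annotated features.
        "iou": float(intersection / union) if union else 0.0,
        # Fraction of ground-truth features the model also found salient.
        "ground_truth_coverage": float(intersection / g.sum()) if g.sum() else 0.0,
        # Fraction of salient features that fall inside the ground truth;
        # a low value would flag reliance on contextual (non-annotated) features.
        "saliency_coverage": float(intersection / s.sum()) if s.sum() else 0.0,
    }

if __name__ == "__main__":
    saliency = np.zeros((4, 4), dtype=bool)
    saliency[0:2, 0:3] = True          # region a saliency method highlights
    ground_truth = np.zeros((4, 4), dtype=bool)
    ground_truth[0:3, 0:2] = True      # human-annotated region
    print(shared_interest_scores(saliency, ground_truth))
```

Because each input receives scalar scores of this kind, inputs can be sorted or bucketed by them, which is the mechanism that would enable the large-scale, systematic analysis the abstract describes.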