Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making
Author: | Dongyu Liu, Alexandra Zytek, Kalyan Veeramachaneni, Rhema Vaithianathan |
Language: | English |
Year published: | 2021 |
Subject: |
FOS: Computer and information sciences; Visual analytics; Computer Science - Machine Learning; Computer Science - Human-Computer Interaction (cs.HC); Machine Learning (cs.LG); Machine learning; Human-Computer Interaction; Domain (software engineering); Interactivity; Interactive visualization; Interpretability; Usability; Computer Graphics and Computer-Aided Design; Visualization; Signal Processing; Computer Vision and Pattern Recognition; Artificial intelligence; Software |
Description: | Machine learning (ML) is being applied to a diverse and ever-growing set of domains. In many cases, domain experts - who often have no expertise in ML or data science - are asked to use ML predictions to make high-stakes decisions. Multiple ML usability challenges can appear as a result, such as lack of user trust in the model, inability to reconcile human-ML disagreement, and ethical concerns about oversimplifying complex problems to a single algorithmic output. In this paper, we investigate the ML usability challenges that arise in the domain of child welfare screening through a series of collaborations with child welfare screeners. Following an iterative design process between ML scientists, visualization researchers, and domain experts (child welfare screeners), we first identified four key ML challenges and homed in on one promising explainable ML technique to address them (local factor contributions). We then implemented and evaluated our visual analytics tool, Sibyl, to increase the interpretability and interactivity of local factor contributions. The effectiveness of our tool is demonstrated by two formal user studies with 12 non-expert and 13 expert participants, respectively. Valuable feedback was collected, from which we composed a list of design implications as a useful guideline for researchers who aim to develop interpretable and interactive visualization tools for ML prediction models deployed for child welfare screeners and other similar domain experts. Updated to version presented at VIS 2021 |
Database: | OpenAIRE |
External link: |