Abstract: |
As automated systems become more common, they are also becoming increasingly opaque, challenging their users' ability to explain and interpret their outputs. In this study, we test the predictions of fuzzy-trace theory, a leading theory of how people interpret quantitative information, on user decision making after interacting with an online decision aid. We recruited a sample of 205 online crowdworkers and asked them to use a system designed to detect URLs that were part of coordinated misinformation campaigns. We examined how user endorsements of system interpretability covaried with performance on this coordinated misinformation detection task and found that subjects who endorsed system interpretability displayed enhanced discernment. This interpretability was, in turn, associated with both objective mathematical ability and mathematical self-confidence. Beyond these individual differences, we evaluated the impact of a theoretically motivated intervention designed to promote sensemaking of system output. Participants provided with a "gist" version of system output, expressing the bottom-line meaning of that output, were better able to identify URLs that might have been part of a coordinated misinformation campaign than users given the same information presented as verbatim quantitative metrics. This work highlights the importance of enabling users to grasp the essential, gist meaning of the information they receive from automated systems, which benefits users regardless of individual differences.