Popis: |
  Automation through adaptive and learning systems – whether driven by heuristics or by machine learning and artificial intelligence methods – is a firm requirement if the visions of personalized digital health and precision medicine are to be realized at scale. Manual adjustments based on individual abilities, interests, and needs are not feasible at a fine-grained, many-dimensional, and high-frequency level in a life-accompanying manner. Humans tend to anthropomorphize interactive systems, especially those that display or indicate autonomy and agency. This raises the question of whether a possibly shifting sense of perceived agency – likely exacerbated by the more widespread release of conversational and agent-like automation interfaces, e.g. recent developments in chat-based AI systems – may lead to noteworthy, possibly increased dangers of not appropriately questioning the outputs or actions of such systems. Given the often sensitive and high-risk application areas, these issues are arguably particularly relevant in digital health: both more tool-like systems, such as “traffic light status indicators” that simply present coarse data summaries, and more agent-like systems, e.g. in conversational decision support, could foster justification practices by conveniently or subversively inviting the attribution of responsibility to the systems, at the peril of overlooking legal accountability for the resulting outcomes and actions. This is the conceptual motivation and foundation of current work-in-progress to empirically investigate the assumptions underpinning this chain of arguments. The work to be discussed aims to provide foundational understanding and to investigate design patterns that can help address these issues.