Popis: |
Deepfake technologies present a number of interesting business opportunities for firms. However, these tools have also attracted the attention of lawmakers due to their potential for misuse. Here, we consider one such potential use case: the pairing of audio chatbot technologies with voice-based deepfakes, i.e., voice cloning. We examine the potential for voice clones to manipulate trust. Further, considering that several lawmakers have recently advocated requiring that chatbot operators disclose chatbots' autonomous nature to human users, as a means of managing potential consumer manipulation, we also consider whether chatbot disclosure helps to combat any trust-inducing effects of voice clones. In this work, we seek to run a series of controlled experiments based on the investment game, evaluating how voice cloning and chatbot disclosure jointly affect subjects' trust, proxied by their willingness to participate in the game with an autonomous, AI-enabled partner.

We explore the influence of disclosure and voice similarity on trust using an experimental design first introduced by Charness & Dufwenberg (2006), which was developed specifically to understand how communication, in general, influences trust. In the original game design, subjects are first "matched" with a human playing partner and randomly assigned to play the role of party A or party B. In the original treatment condition, subjects playing the role of B have the opportunity to pass a written message to party A, containing whatever they like, e.g., superfluous comments, promises, etc. Party A then decides whether to play the game with B (choosing "in" or "out"). If A opts in, B can either walk away with some money, leaving A with nothing, or roll a die; if B chooses to roll, the final payoff to both parties depends on the number rolled, with a 5-in-6 chance of a positive payoff for both players. This design enabled Charness & Dufwenberg (2006) to examine how communication influences the decisions of both A (whether to trust B) and B (how to behave upon receiving that trust).

We modify this game in two key ways. First, we re-implement the game in a digital setting, incorporating voice-based messaging in addition to the text-based communication considered in the original study. Second, we assign party B to always be played by an autonomous agent. We thus have two experimental conditions that generally mirror those of the original design: a control condition, in which no communication takes place, and a text-based message condition, in which party A receives a written message from party B. In the latter case, for the sake of realism, we reuse messages drawn at random from those exchanged by subjects in the original studies (Charness & Dufwenberg 2006, 2010). We then supplement these with conditions of our own devising, in which the communication takes place via a voice-based message, i.e., a recording. Further, we manipulate the voice used to generate that recording, employing a random voice in one condition and a cloned voice in another. Finally, we manipulate the agent's degree of trustworthiness (High, No Trust History, Low), as well as whether or not the agent is disclosed as an AI.
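
To make the game's move structure and the condition grid concrete, a minimal simulation sketch in Python follows. The payoff amounts, identifier names, and the assumption of a fully crossed factorial design are illustrative placeholders, not the exact parameters of the original study or of the present experiments.

import random
from dataclasses import dataclass

# Experimental factors (illustrative labels).
COMMUNICATION = ["none", "text", "voice_random", "voice_clone"]
TRUST_HISTORY = ["high", "none", "low"]
DISCLOSURE = [True, False]  # whether B is disclosed as an AI agent

# Placeholder payoffs (A, B); the amounts used in the actual studies may differ.
PAYOFF_OUT = (5, 5)         # A declines to play ("out")
PAYOFF_WALK = (0, 14)       # A plays, B walks away with the money
PAYOFF_ROLL_FAIL = (0, 10)  # A plays, B rolls, die comes up 1 (1-in-6 chance)
PAYOFF_ROLL_OK = (12, 10)   # A plays, B rolls, die comes up 2-6 (5-in-6 chance)

@dataclass
class Condition:
    communication: str   # none / text / voice_random / voice_clone
    trust_history: str   # high / none / low
    disclosed: bool      # B disclosed as an AI agent?

def play_round(a_opts_in: bool, b_rolls: bool) -> tuple[int, int]:
    """Return (payoff_A, payoff_B) for one round of the modified game."""
    if not a_opts_in:
        return PAYOFF_OUT
    if not b_rolls:
        return PAYOFF_WALK
    # B rolls a six-sided die; 5-in-6 chance of a positive payoff for both.
    return PAYOFF_ROLL_OK if random.randint(1, 6) >= 2 else PAYOFF_ROLL_FAIL

# Fully crossed grid of conditions (4 x 3 x 2 = 24 cells), assuming each
# factor is manipulated independently of the others.
conditions = [Condition(c, t, d)
              for c in COMMUNICATION for t in TRUST_HISTORY for d in DISCLOSURE]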