Author: |
Lyons, Joseph B., Jessup, Sarah A., Vo, Thy Q. |
Source: |
Topics in Cognitive Science; Jul2024, Vol. 16 Issue 3, p430-449, 20p |
Abstract: |
Prior research has demonstrated that trust in robots and performance of robots are two important factors that influence human–autonomy teaming. However, other factors may influence users' perceptions and use of autonomous systems, such as the perceived intent of robots and the decision authority granted to them. The current study experimentally examined participants' trust in an autonomous security robot (ASR), perceived trustworthiness of the ASR, and desire to use an ASR that varied in levels of decision authority and benevolence. Participants (N = 340) were recruited from Amazon Mechanical Turk. Results revealed that participants had greater trust in the ASR when the robot was described as having benevolent intent rather than self‐protective intent. There were several interactions between decision authority and intent when predicting the trust process, showing that intent may matter most when the robot has discretion in executing that intent. Participants expressed a greater desire to use the ASR in a military context than in a public context. These findings demonstrate that as robots become more prevalent in jobs paired with humans, factors such as the transparency provided about a robot's intent and its decision authority will influence users' trust and perceptions of trustworthiness. The role of decision authority and stated social intent as predictors of trust in autonomous robots: Signaling intent to a human partner is an important capability of a machine partner for effective human‐autonomy teams. The current study demonstrated that the benefits of intent signaling vary based on the degree of decision authority a machine partner has to enact behaviors in accordance with that intent. [ABSTRACT FROM AUTHOR] |
Database: |
Complementary Index |