Abstract:
Personal healthcare information (PHI) disclosure is vital to leveraging artificial intelligence (AI) technology for depression treatment. Two challenges for PHI disclosure are high privacy concerns and low trust. In this study, we integrate three theoretical lenses, i.e., information boundary theory, trust, and AI principles, to investigate whether the AI principles of empathy, accountability, and explainability can address these two challenges. We propose that AI empathy can increase depression patients' privacy concerns and trust simultaneously. This paradox of high privacy concerns and high trust must be addressed for successful AI deployment in depression treatment. Proxies of AI accountability, such as AI company reputation and government regulation, can help reduce this paradox. Further, we argue that explainability can moderate the relationships between this paradox (i.e., privacy concerns and trust) and patients' intention to disclose PHI. Overall, our expected results can provide significant implications for the IS literature and practitioners.