Evaluating Smart Assistant Responses for Accuracy and Misinformation Regarding Human Papillomavirus Vaccination: Content Analysis Study
Authors: Ryli Hockensmith, Eric R. Walsh-Buhi, Rebecca Fagen Houghton, John Ferrand
Year of publication: 2020
Subjects: smart assistants; conversational agents; chatbots; digital health; health informatics; medical informatics; infodemiology; human papillomavirus (HPV); HPV vaccination; papillomavirus vaccines; papillomavirus infections; vaccination; misinformation; content analysis; communication; public health; family medicine; psychology; humans; male; female
Source: Journal of Medical Internet Research, Vol 22, Iss 8, p e19018 (2020)
ISSN: 1438-8871
DOI: 10.2196/19018
Description:

Background: Almost half (46%) of Americans have used a smart assistant of some kind (eg, Apple Siri), and 25% have used a stand-alone smart assistant (eg, Amazon Echo). This positions smart assistants as potentially useful modalities for retrieving health-related information; however, the accuracy of smart assistant responses lacks rigorous evaluation.

Objective: This study aimed to evaluate the levels of accuracy, misinformation, and sentiment in smart assistant responses to human papillomavirus (HPV) vaccination–related questions.

Methods: We systematically examined responses to questions about the HPV vaccine from the four most popular smart assistants: Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana. One team member posed 10 questions to each smart assistant and recorded all queries and responses. Two raters independently coded all responses (κ=0.85). We then assessed differences among the smart assistants in response accuracy, presence of misinformation, and sentiment regarding the HPV vaccine.

Results: A total of 103 responses were obtained from the 10 questions posed across the smart assistants. Google Assistant data were excluded owing to nonresponse. Over half (n=63, 61%) of the responses from the remaining three smart assistants were accurate. We found statistically significant differences across the smart assistants (N=103, χ²₂=7.807, P=.02), with Cortana yielding the greatest proportion of misinformation. Siri yielded the greatest proportion of accurate responses (n=26, 72%), whereas Cortana yielded the lowest (n=33, 54%). Most response sentiments across smart assistants were positive (n=65, 64%) or neutral (n=18, 18%), but Cortana's responses yielded the largest proportion of negative sentiment (n=7, 12%).

Conclusions: Smart assistants appear to be average-quality sources for HPV vaccination information, with Alexa responding most reliably. Cortana returned the largest proportion of inaccurate responses, the most misinformation, and the greatest proportion of results with negative sentiment. More collaboration between technology companies and public health entities is necessary to improve the retrieval of accurate health information via smart assistants.
Database: OpenAIRE
External link:
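The Methods and Results above report interrater reliability (Cohen's κ=0.85) and a chi-square comparison of response accuracy across assistants. Below is a minimal sketch of how such analyses are typically computed; the rater codes and contingency counts are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: interrater reliability (Cohen's kappa) and a chi-square test of
# independence between smart assistant and response accuracy. All values below are
# hypothetical placeholders, not the counts reported in the study.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

# Two raters' independent accuracy codes for the same responses (1 = accurate, 0 = not).
rater_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
print(f"Cohen's kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")

# Accurate vs inaccurate response counts per assistant (rows: Siri, Alexa, Cortana).
contingency = [
    [25, 10],  # Siri
    [20, 12],  # Alexa
    [18, 18],  # Cortana
]
chi2, p, dof, _ = chi2_contingency(contingency)
print(f"chi-square({dof}) = {chi2:.3f}, P = {p:.3f}")
```

For a 3 × 2 table (three assistants by accurate vs inaccurate), the test has 2 degrees of freedom, consistent with the χ²₂ statistic reported in the abstract.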