The Emerging Role of AI in Patient Education: A Comparative Analysis of the Accuracy of Large Language Models for Pelvic Organ Prolapse.

Authors: Rahimli Ocakoglu, Sakine; Coskun, Burhan
Source: Medical Principles & Practice; 2024, Vol. 33, Issue 4, p330-337, 8p
Abstract:

Introduction: This study aimed to evaluate the accuracy, completeness, precision, and readability of outputs generated by three large language models (LLMs), ChatGPT by OpenAI, BARD by Google, and Bing by Microsoft, in comparison with patient education material on pelvic organ prolapse (POP) provided by the Royal College of Obstetricians and Gynaecologists (RCOG).

Methods: A total of 15 questions were retrieved from the RCOG website and input into the three LLMs. Two independent reviewers evaluated the outputs for accuracy, completeness, and precision. Readability was assessed using the Simplified Measure of Gobbledygook (SMOG) score and the Flesch-Kincaid Grade Level (FKGL) score.

Results: Significant differences were observed in the completeness and precision metrics: ChatGPT ranked highest in completeness (66.7%), while Bing led in precision (100%). No significant differences in accuracy were observed across the models. In terms of readability, ChatGPT's answers were more difficult to read than those of BARD, Bing, and the original RCOG material.

Conclusion: While all three LLMs answered the RCOG patient-information questions on POP with varying degrees of correctness, ChatGPT was the most comprehensive, significantly surpassing BARD and Bing in completeness, but its answers were the hardest to read. Bing was the most precise, providing the most relevant and concise answers. These findings highlight the potential of LLMs in health information dissemination and the need for careful interpretation of their outputs.

Highlights of the Study: Studies have compared the content-generation performance of large language models (LLMs). This study analyzes the concordance of LLM outputs with authoritative gynecology and obstetrics texts, establishing a new framework for evaluating AI-generated medical content. It also initiates the readability evaluation of AI-generated patient information in women's health. [ABSTRACT FROM AUTHOR]
Database: Complementary Index
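
Note: The SMOG and FKGL readability metrics cited in the abstract have standard published formulas. The sketch below is not the authors' code; it is a minimal Python illustration of how both scores can be computed, assuming a naive vowel-group heuristic for syllable counting (production readability tools use dictionaries or more careful rules).

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of consecutive-vowel groups.
    A heuristic stand-in for dictionary-based syllabification."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability_scores(text: str) -> dict:
    """Compute the two metrics used in the study:
    FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    SMOG = 1.0430*sqrt(polysyllables * 30/sentences) + 3.1291
    where a polysyllable is a word of 3+ syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    polysyllables = sum(1 for s in syllables if s >= 3)

    n_sent, n_words = len(sentences), len(words)
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (sum(syllables) / n_words) - 15.59
    smog = 1.0430 * math.sqrt(polysyllables * 30 / n_sent) + 3.1291
    return {"FKGL": round(fkgl, 2), "SMOG": round(smog, 2)}

if __name__ == "__main__":
    # Hypothetical sample text, not taken from the study materials.
    sample = ("Pelvic organ prolapse happens when the pelvic organs "
              "bulge into the vagina. It is common and treatable.")
    print(readability_scores(sample))
```

Both scores approximate the US school grade level needed to understand a text, which is why the study can compare LLM outputs against the RCOG material on a common scale.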