Author: Shaari, Ariana L., Fano, Adam N., Anakwenze, Oke, Klifto, Christopher
Subject:
Source: Shoulder & Elbow; Aug2024, Vol. 16 Issue 4, p429-435, 7p
Abstract: Background: Artificial intelligence (AI) has progressed at a fast pace. ChatGPT, a rapidly expanding AI platform, has a growing number of applications in medicine and patient care. However, its ability to provide high-quality answers to patient questions about orthopedic procedures such as Tommy John surgery is unknown. Our objective was to evaluate the quality of information provided by ChatGPT 3.5 and 4.0 in response to patient questions regarding Tommy John surgery. Methods: Twenty-five patient questions regarding Tommy John surgery were posed to ChatGPT 3.5 and 4.0. Readability was assessed via the Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, Gunning Fog Score, Simple Measure of Gobbledygook, Coleman-Liau, and Automated Readability Index. The quality of each response was graded using a 5-point Likert scale. Results: ChatGPT generated information at an educational level that greatly exceeds the recommended level. ChatGPT 4.0 produced slightly better responses to common questions regarding Tommy John surgery, with fewer inaccuracies than ChatGPT 3.5. Conclusion: Although ChatGPT can provide accurate information regarding Tommy John surgery, its responses may not be easily comprehended by the average patient. As AI platforms become more accessible to the public, patients must be aware of their limitations. [ABSTRACT FROM AUTHOR]
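For reference, the first two readability indices named in the Methods are standard formulas over word, sentence, and syllable counts; as conventionally defined (the study reports only the resulting scores, not these formulas):

\[ \text{Flesch Reading Ease} = 206.835 - 1.015\,\frac{\text{total words}}{\text{total sentences}} - 84.6\,\frac{\text{total syllables}}{\text{total words}} \]

\[ \text{Flesch Kincaid Grade Level} = 0.39\,\frac{\text{total words}}{\text{total sentences}} + 11.8\,\frac{\text{total syllables}}{\text{total words}} - 15.59 \]

Lower Reading Ease scores and higher Grade Level scores indicate harder text; patient education materials are commonly recommended to be written at roughly a sixth grade reading level, which is the usual benchmark behind the "recommended level" mentioned in the Results.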
Database: Complementary Index
External link: