Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards
Author: | Hayley A Hutchings, Thomas D Dobbs, Iain S Whitaker, Richard HR Roberts, Stephen R Ali |
---|---|
Language: | English |
Year of publication: | 2023 |
Subject: | |
Source: | BMJ Health & Care Informatics, Vol 30, Iss 1 (2023) |
Document type: | article |
ISSN: | 2023-1008 2632-1009 |
DOI: | 10.1136/bmjhci-2023-100830 |
Description: | Introduction Amid clinicians’ challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis. Methods We compared ChatGPT’s scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included mean difference of OCS subscores, Welch’s t-test and Pearson’s correlation coefficient. Results Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in ‘conclusion’ (0.764 (95% CI 0.186, 0.280)) and the lowest in ‘blinding’ (0.034 (95% CI 0.818, 0.895)). The strongest correlations between human evaluation and ChatGPT were in ‘harms’ (r=0.32, p |
Database: | Directory of Open Access Journals |
External link: |
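
The agreement statistics named in the abstract are standard and straightforward to reproduce. Below is a minimal Python sketch, using hypothetical placeholder scores rather than the study's data, of how a Bland-Altman mean difference with 95% limits of agreement, Welch's t-test and Pearson's correlation could be computed for human versus ChatGPT overall compliance scores.

```python
# Illustrative sketch only -- hypothetical data, not the study's scores.
import numpy as np
from scipy import stats

human_ocs = np.array([72.0, 65.5, 80.0, 58.5, 69.0, 75.5])    # hypothetical human OCS (%)
chatgpt_ocs = np.array([68.5, 60.0, 74.5, 55.0, 66.0, 70.0])  # hypothetical ChatGPT OCS (%)

# Bland-Altman: bias (mean difference) and 95% limits of agreement
diff = human_ocs - chatgpt_ocs
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Mean difference: {bias:.2f}% "
      f"(limits of agreement: {bias - loa:.2f}% to {bias + loa:.2f}%)")

# Supplementary error analysis mentioned in the abstract
t_stat, t_p = stats.ttest_ind(human_ocs, chatgpt_ocs, equal_var=False)  # Welch's t-test
r, r_p = stats.pearsonr(human_ocs, chatgpt_ocs)                         # Pearson's correlation
print(f"Welch's t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Pearson correlation: r={r:.2f}, p={r_p:.3f}")
```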