Comparisons of Quality, Correctness, and Similarity Between ChatGPT-Generated and Human-Written Abstracts for Basic Research: Cross-Sectional Study

Author: Shu-Li Cheng, Shih-Jen Tsai, Ya-Mei Bai, Chih-Hung Ko, Chih-Wei Hsu, Fu-Chi Yang, Chia-Kuang Tsai, Yu-Kang Tu, Szu-Nian Yang, Ping-Tao Tseng, Tien-Wei Hsu, Chih-Sung Liang, Kuan-Pin Su
Language: English
Year of publication: 2023
Source: Journal of Medical Internet Research, Vol 25, p e51229 (2023)
Document type: article
ISSN: 1438-8871
DOI: 10.2196/51229
Description: Background: ChatGPT may act as a research assistant to help organize the direction of thinking and summarize research findings. However, few studies have examined the quality, similarity (abstracts being similar to the original one), and accuracy of the abstracts generated by ChatGPT when researchers provide full-text basic research papers. Objective: We aimed to assess the applicability of an artificial intelligence (AI) model in generating abstracts for basic preclinical research. Methods: We selected 30 basic research papers from Nature, Genome Biology, and Biological Psychiatry. Excluding the abstracts, we input the full text into ChatPDF, an application of a language model based on ChatGPT, and prompted it to generate abstracts in the same style as the original papers. A total of 8 experts were invited to evaluate the quality of these abstracts (on a Likert scale of 0-10) and to identify which abstracts were generated by ChatPDF, using a blind approach. The abstracts were also evaluated for their similarity to the original abstracts and the accuracy of the AI-generated content. Results: The quality of the ChatGPT-generated abstracts was lower than that of the actual abstracts (10-point Likert scale: mean 4.72, SD 2.09 vs mean 8.09, SD 1.03; P
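As a rough illustration of the kind of quality comparison reported above (mean and SD of expert ratings for ChatGPT-generated vs original abstracts, with a significance test), the following Python sketch computes group means, standard deviations, and a paired t-test on hypothetical ratings. The rating values, array names, and the choice of a paired t-test are assumptions for illustration only, not the authors' actual data or analysis.

```python
# Minimal sketch of a quality comparison between AI-generated and original
# abstracts. All numbers are made up for illustration; the paper's real
# ratings and statistical test may differ.
import numpy as np
from scipy import stats

# Hypothetical mean expert ratings (0-10 Likert) for 30 paper pairs.
rng = np.random.default_rng(0)
ai_ratings = np.clip(rng.normal(4.7, 2.1, size=30), 0, 10)        # ChatGPT-generated abstracts
original_ratings = np.clip(rng.normal(8.1, 1.0, size=30), 0, 10)  # human-written abstracts

print(f"AI:       mean={ai_ratings.mean():.2f}, SD={ai_ratings.std(ddof=1):.2f}")
print(f"Original: mean={original_ratings.mean():.2f}, SD={original_ratings.std(ddof=1):.2f}")

# Paired comparison, since each AI abstract corresponds to one original paper.
t_stat, p_value = stats.ttest_rel(ai_ratings, original_ratings)
print(f"paired t = {t_stat:.2f}, P = {p_value:.3g}")
```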
Database: Directory of Open Access Journals