Toward assessing clinical trial publications for reporting transparency
Authors: Linh Hoang, Zeshan Peng, Mario Malički, Halil Kilicoglu, Jodi Schneider, Graciela Rosemblat, Sahil Wadhwa, Gerben ter Riet
Contributors: Cardiology, ACS - Diabetes & metabolism, APH - Aging & Later Life, APH - Personalized Medicine, Faculteit Gezondheid, Urban Vitality
Language: English
Year of publication: 2021
Subject: CONSORT; Reporting guidelines; Checklist; Randomized Controlled Trials as Topic; Humans; Serial Publications; Text mining; Natural language processing; Corpus annotation; Sentence classification; Supervised learning; Support Vector Machine; Artificial intelligence; Precision and recall; Computer science; Health Informatics
Source: Journal of Biomedical Informatics, 116:103717. Academic Press Inc.
ISSN: 1532-0464; 2124-9695
DOI: 10.1016/j.jbi.2021.103717
Description:
Objective: To annotate a corpus of randomized controlled trial (RCT) publications with the checklist items of the CONSORT reporting guidelines, and to use the corpus to develop text mining methods for RCT appraisal.
Methods: We annotated a corpus of 50 RCT articles at the sentence level using 37 fine-grained CONSORT checklist items. A subset (31 articles) was double-annotated and adjudicated, while 19 were annotated by a single annotator and reconciled by another. We calculated inter-annotator agreement at the article and section level using MASI (Measuring Agreement on Set-Valued Items) and at the CONSORT item level using Krippendorff's α. We experimented with two rule-based methods (phrase-based and section header-based) and two supervised learning approaches (a support vector machine and a BioBERT-based neural network classifier) for recognizing 17 methodology-related items in the RCT Methods sections.
Results: We created CONSORT-TM, consisting of 10,709 sentences, 4,845 (45%) of which were annotated with 5,246 labels. A median of 28 CONSORT items (out of a possible 37) were annotated per article. Agreement was moderate at the article and section levels (average MASI: 0.60 and 0.64, respectively). Agreement varied considerably among individual checklist items (Krippendorff's α = 0.06–0.96). The model based on BioBERT performed best overall for recognizing methodology-related items (micro-precision: 0.82, micro-recall: 0.63, micro-F1: 0.71). Combining models using majority vote and label aggregation further improved precision and recall, respectively.
Conclusion: Our annotated corpus, CONSORT-TM, contains more fine-grained information than earlier RCT corpora. The low frequency of some CONSORT items made it difficult to train effective text mining models to recognize them. For the items commonly reported, CONSORT-TM can serve as a testbed for text mining methods that assess RCT transparency, rigor, and reliability, and can support methods for peer review and authoring assistance. Minor modifications to the annotation scheme and a larger corpus could facilitate improved text mining models. CONSORT-TM is publicly available at https://github.com/kilicogluh/CONSORT-TM.
Highlights:
- We constructed a corpus of RCT publications annotated with CONSORT checklist items.
- We developed text mining methods to identify methodology-related checklist items.
- A BioBERT-based model performs best in recognizing adequately reported items.
- A phrase-based method performs best in recognizing infrequently reported items.
- The corpus and the text mining methods can be used to address reporting transparency.
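The Results above describe multi-label sentence classification evaluated with micro-averaged precision, recall, and F1, and note that combining models by majority vote and by label aggregation improves precision and recall, respectively. The following is a minimal Python sketch of how such pooled metrics and combination schemes can be computed; it is not the authors' code, and the toy sentences, CONSORT label strings, and function names are illustrative assumptions.

```python
from collections import Counter
from typing import List, Set

# Hypothetical toy data: each sentence maps to a set of CONSORT item labels.
# Label strings are invented for the example.
gold: List[Set[str]] = [
    {"3a_trial_design"},
    {"7a_sample_size", "7b_interim_analyses"},
    set(),                       # sentence with no CONSORT item
    {"8a_sequence_generation"},
]
predicted: List[Set[str]] = [
    {"3a_trial_design"},
    {"7a_sample_size"},
    {"3a_trial_design"},         # false positive
    set(),                       # false negative
]

def micro_scores(gold: List[Set[str]], predicted: List[Set[str]]):
    """Micro-averaged precision/recall/F1: pool label decisions over all sentences."""
    tp = sum(len(g & p) for g, p in zip(gold, predicted))
    fp = sum(len(p - g) for g, p in zip(gold, predicted))
    fn = sum(len(g - p) for g, p in zip(gold, predicted))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def majority_vote(model_outputs: List[List[Set[str]]]) -> List[Set[str]]:
    """Keep a label for a sentence only if more than half of the models predict it."""
    n_models = len(model_outputs)
    combined = []
    for per_sentence in zip(*model_outputs):
        counts = Counter(label for labels in per_sentence for label in labels)
        combined.append({lab for lab, c in counts.items() if c > n_models / 2})
    return combined

def label_union(model_outputs: List[List[Set[str]]]) -> List[Set[str]]:
    """Keep every label predicted by any model (label aggregation)."""
    return [set().union(*per_sentence) for per_sentence in zip(*model_outputs)]

if __name__ == "__main__":
    p, r, f = micro_scores(gold, predicted)
    print(f"micro-P={p:.2f} micro-R={r:.2f} micro-F1={f:.2f}")
```

Majority voting keeps only labels that most models agree on, which tends to raise precision, while the label union keeps every candidate label, which tends to raise recall; this mirrors the trade-off reported in the Results.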
Database: OpenAIRE
External link: