Popis: |
Abstract

Background: Faced with the high cost and limited efficiency of classical randomized controlled trials (RCTs), researchers are increasingly applying adaptive designs to speed up the development of new drugs. However, the extent to which adaptive designs are applied in drug RCTs, and whether their reporting is adequate, remains unclear. This study therefore aimed to summarize the epidemiological characteristics of the relevant trials and assess their reporting quality against the Adaptive designs CONSORT Extension (ACE) checklist.

Methods: We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), and ClinicalTrials.gov from inception to January 2020. We included drug RCTs that explicitly claimed to be adaptive trials or used any type of adaptive design. We extracted the epidemiological characteristics of the included studies to summarize how adaptive designs were applied, and assessed the reporting quality of the trials with the ACE checklist. Univariable and multivariable linear regression models were used to assess the association of four prespecified factors with the quality of reporting.

Results: Our survey included 108 adaptive trials. Adaptive designs have been increasingly applied over the years and were most commonly used in phase II trials (n = 45, 41.7%). The primary reasons for using an adaptive design were to speed up the trial and facilitate decision-making (n = 24, 22.2%), to maximize the benefit to participants (n = 21, 19.4%), and to reduce the total sample size (n = 15, 13.9%). Group sequential design (n = 63, 58.3%) was the most frequently applied method, followed by adaptive randomization (n = 26, 24.1%) and adaptive dose-finding (n = 24, 22.2%). Adherence to the 26 topics of the ACE checklist ranged from 7.4% to 99.1%, with eight topics adequately reported (level of adherence ≥ 80%) and eight poorly reported (level of adherence ≤ 30%).
In addition, among the seven items specific to adaptive trials, three were poorly reported: accessibility of the statistical analysis plan (n = 8, 7.4%), measures for confidentiality (n = 14, 13.0%), and assessment of similarity between interim stages (n = 25, 23.1%). The mean ACE checklist score was 13.9 (standard deviation [SD] 3.5) out of 26. In our multivariable regression analysis, later published trials (estimated β = 0.14, p |