Author:
Buyse M (1,2), Squifflet P (3), Coart E (3), Quinaux E (3), Punt CJ (4), Saad ED (3)

Affiliations:
1 International Drug Development Institute (IDDI), San Francisco, CA, USA
2 Interuniversity Institute for Biostatistics and Statistical Bioinformatics (I-BioStat), Hasselt University, Hasselt, Belgium
3 International Drug Development Institute (IDDI), Louvain-la-Neuve, Belgium
4 Department of Medical Oncology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
Abstract:
Background/aims: Considerable human and financial resources are typically spent to ensure that data collected for clinical trials are free from errors. We investigated the impact of random and systematic errors on the outcome of randomized clinical trials.

Methods: We used individual patient data relating to response endpoints of interest in two published randomized clinical trials, one in ophthalmology and one in oncology. These randomized clinical trials enrolled 1186 patients with age-related macular degeneration and 736 patients with metastatic colorectal cancer. The ophthalmology trial tested the benefit of pegaptanib for the treatment of age-related macular degeneration and identified a statistically significant treatment benefit, whereas the oncology trial assessed the benefit of adding cetuximab to a regimen of capecitabine, oxaliplatin, and bevacizumab for the treatment of metastatic colorectal cancer and failed to identify a statistically significant treatment difference. We simulated trial results by adding errors that were independent of the treatment group (random errors) and errors that favored one of the treatment groups (systematic errors). We added such errors to the data for the response endpoint of interest for increasing proportions of randomly selected patients.

Results: Random errors added to up to 50% of the cases produced only slightly inflated variance in the estimated treatment effect of both trials, with no qualitative change in the p-value. In contrast, systematic errors produced bias even for very small proportions of patients with added errors.

Conclusion: A substantial amount of random error is required before appreciable effects on the outcome of randomized clinical trials are noted. In contrast, even a small amount of systematic error can severely bias the estimated treatment effects. Therefore, resources devoted to randomized clinical trials should be spent primarily on minimizing sources of systematic errors, which can bias the analyses, rather than on random errors, which result only in a small loss of power.
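The contrast between the two error types can be illustrated with a small numerical sketch. This is a hypothetical construction, not the authors' actual simulation code: the response rates (0.45 vs 0.55) and the two error mechanisms (arm-independent misclassification toward the pooled rate, versus misclassification of treatment-arm non-responders as responders) are assumptions chosen only to show why arm-independent errors leave the estimated treatment effect essentially intact while arm-dependent errors bias it directly.

```python
# Hypothetical illustration of random vs systematic errors on a binary
# response endpoint; rates and mechanisms are assumed, not from the paper.

P_CTRL, P_TRT = 0.45, 0.55          # assumed true response rates per arm
TRUE_DIFF = P_TRT - P_CTRL          # true risk difference = 0.10

def random_error_diff(eps):
    """Errors independent of treatment group: a fraction `eps` of
    outcomes in BOTH arms is replaced by a coin flip at the pooled
    response rate.  The expected difference is only attenuated."""
    pooled = (P_CTRL + P_TRT) / 2
    p_ctrl = (1 - eps) * P_CTRL + eps * pooled
    p_trt = (1 - eps) * P_TRT + eps * pooled
    return p_trt - p_ctrl

def systematic_error_diff(eps):
    """Errors favoring one arm: a fraction `eps` of treatment-arm
    non-responders is misclassified as responders.  The expected
    difference is shifted away from the truth, i.e. biased."""
    p_trt = P_TRT + eps * (1 - P_TRT)
    return p_trt - P_CTRL

for eps in (0.0, 0.05, 0.20, 0.50):
    rnd = random_error_diff(eps)
    sys_ = systematic_error_diff(eps)
    print(f"eps={eps:.2f}  random diff={rnd:.4f}  systematic diff={sys_:.4f}")
```

Even at a 5% error rate, the systematic mechanism shifts the expected risk difference by 0.0225 (from 0.10 to 0.1225), while the arm-independent mechanism moves it by only 0.005, consistent with the abstract's conclusion that small amounts of systematic error matter far more than large amounts of random error.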