Author: Tantsyura, Vadim1 (vtantsyura@targethealth.com); Dunn, Imogene McCanless2; Fendt, Kaye3; Kim, Yong Joong1; Waters, Joel4; Mitchel, Jules1
Source: Therapeutic Innovation & Regulatory Science, Nov 2015, Vol. 49, Issue 6, p903-910 (8 p.).
Abstract:
Background: Data quality within the clinical research enterprise can be defined as the absence of errors that matter and as whether the data are fit for purpose. This concept, proposed by the Clinical Trials Transformation Initiative, grew out of collaboration among industry, academia, patient advocates, and regulators, and it emphasizes a hierarchy of error types, leading to a more efficient and modern data-cleaning paradigm. While source document verification (SDV) is commonly used as a quality control method in clinical research, it is disproportionately expensive and its benefits are often questionable. Although the current literature suggests a need to reduce the burden of SDV, there is no consensus on how to replace this "tried and true" practice.
Methods: This article proposes a practical risk-based monitoring approach based on published statistical evidence on the impact of database changes made subsequent to SDV.
Results: The analysis demonstrates minimal effects of errors and error corrections on study results and conclusions, with the effect diminishing as study size increases. It suggests that, on average, <8% SDV is adequate to ensure data quality, with perhaps higher SDV rates for smaller studies and virtually 0% SDV for large studies.
Conclusions: It is recommended that SDV focus not only on key primary efficacy and safety outcomes but also on data clarification queries, as these flag the most discrepant (and riskiest) data. [ABSTRACT FROM AUTHOR]
Database: Library, Information Science & Technology Abstracts
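The Results section's claim that error corrections matter less as study size grows reflects basic sampling behavior: a fixed per-record error rate perturbs an estimated mean by an amount that shrinks roughly as 1/√n. A minimal simulation sketch of that idea follows; it is not from the paper, and the error rate, error magnitude, and endpoint distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_shift_from_errors(n, error_rate=0.02, error_sd=2.0, n_sims=2000):
    """Average absolute shift in an endpoint's estimated mean caused by a
    fixed per-record error rate, for a study of size n. All parameter
    values are illustrative assumptions, not values from the paper."""
    shifts = []
    for _ in range(n_sims):
        clean = rng.normal(0.0, 1.0, size=n)       # true endpoint values
        corrupted = clean.copy()
        errs = rng.random(n) < error_rate          # records carrying a data error
        corrupted[errs] += rng.normal(0.0, error_sd, size=int(errs.sum()))
        shifts.append(abs(corrupted.mean() - clean.mean()))
    return float(np.mean(shifts))

for n in (50, 500, 5000):
    print(f"n={n:5d}  mean |shift| in estimated endpoint: {mean_shift_from_errors(n):.4f}")
```

Under these assumptions the mean absolute shift falls by roughly a factor of 10 between n=50 and n=5000 (the √(5000/50) ratio), which is consistent with the abstract's suggestion that large studies can tolerate near-0% SDV.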