The Clinical Algorithm Nosology
Author: | Lawrence K. Gottlieb, Steven D. Pearson, Scott Davis, Carmi Z. Margolis, Lisa K. Schreier |
Publication year: | 1992 |
Subject: | Nosology; Clinical Protocols; Clinical algorithm; Decision Trees; Guideline; Inter-rater reliability; Observer Variation; Reproducibility of Results; Evaluation Studies as Topic; Scoring system; Ordinal scale; Similarity (network science); Quality (business); Health Policy; Machine learning; Data mining; Artificial intelligence; Humans; Medicine; Algorithms |
Source: | Medical Decision Making. 12:123-131 |
ISSN: | 0272-989X (print); 1552-681X (electronic) |
DOI: | 10.1177/0272989x9201200205 |
Description: | Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline, the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%); weighted kappa statistic, κ = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs and to support providers and purchasers in choosing among alternative clinical guidelines. Key words: guidelines; clinical algorithms; reliability; validity; quality assurance. (Med Decis Making 1992;12:123-131) |
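The reliability statistics reported in the abstract (percent agreement and a weighted kappa on a three-level ordinal scale) can be reproduced mechanically. The sketch below is illustrative only, not the authors' code: the rating data are hypothetical, and a linearly weighted kappa is assumed (the abstract does not specify the weighting scheme).

```python
from collections import Counter

# Ordinal similarity scale used by the CAN raters (from the abstract).
CATEGORIES = ["identical", "similar", "different"]

def weighted_kappa(rater_a, rater_b, categories=CATEGORIES):
    """Linearly weighted kappa for two raters on an ordinal scale."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Linear disagreement weights: 0 for exact agreement,
    # up to 1 for maximal (identical vs. different) disagreement.
    weight = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed mean disagreement across rated pathways.
    obs = sum(weight[index[a]][index[b]] for a, b in zip(rater_a, rater_b)) / n
    # Disagreement expected by chance from each rater's marginal distribution.
    marg_a = Counter(index[a] for a in rater_a)
    marg_b = Counter(index[b] for b in rater_b)
    exp = sum(weight[i][j] * marg_a[i] * marg_b[j]
              for i in range(k) for j in range(k)) / (n * n)
    return 1.0 - obs / exp

# Hypothetical ratings for six algorithm pathways (not the study data).
a = ["identical", "similar", "similar", "different", "identical", "different"]
b = ["identical", "similar", "different", "different", "identical", "similar"]

agreement = sum(x == y for x, y in zip(a, b)) / len(a)
print(f"percent agreement = {agreement:.0%}")             # cf. the paper's 85/109 (80%)
print(f"weighted kappa    = {weighted_kappa(a, b):.2f}")  # cf. the paper's κ = 0.73
```

Weighting matters here because the scale is ordinal: rating a pathway "similar" when the other rater said "identical" is penalized half as much as "different" versus "identical," which is why a weighted kappa is the appropriate chance-corrected statistic for this design.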
Database: | OpenAIRE |
External link: |