The Americleft Speech Project: A Training and Reliability Study
Author: | Judith E. Trost-Cardamone, Debbie Sell, Cindy Dobbelsteyn, Kelly Nett Cordero, Angela J. Dixon, Gregory J. Stoddard, Anne Harding-Bell, Kristina Wilson, Anna Thurmes, Adriane L. Baylis, Kathy L. Chapman, Triona Sweeney |
---|---|
Year of publication: | 2016 |
Subject: |
Speech-Language Pathology; Audit; Speech outcome; Speech Disorders; Disability Evaluation; Physical medicine and rehabilitation; Speech Production Measurement; Phonetics; Reliability study; Reliability (statistics); Interrater reliability; Intrarater reliability; Reproducibility of Results; Cleft Palate; Otorhinolaryngology; Oral Surgery; Physical therapy; Humans; Male; Female; Article |
Source: | The Cleft Palate-Craniofacial Journal. 53:93-108 |
ISSN: | 1545-1569; 1055-6656 |
DOI: | 10.1597/14-027 |
Description: | Objective To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. Design The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech–Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. Participants The participants were speech-language pathologists from the Americleft Speech Project. Results In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 than for Study 1. However, this appeared to be an artifact of the kappa statistic, arising from insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. Conclusion The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function. |
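The abstract notes that low kappa scores in Study 2 were an artifact of insufficient variability in the samples, and that percent agreement told a different story. A minimal sketch (with hypothetical rating data, not the study's actual ratings) shows how Cohen's kappa collapses when nearly all cases fall into one category, even though raw agreement is unchanged:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of cases on which two raters give the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa: agreement corrected for chance."""
    n = len(a)
    po = percent_agreement(a, b)           # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Expected chance agreement from each rater's marginal distribution.
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    if pe == 1.0:
        return 0.0                         # degenerate: no variability at all
    return (po - pe) / (1 - pe)

# Varied sample: ratings spread across a 3-point scale (0-2).
r1 = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2]
r2 = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
print(percent_agreement(r1, r2), cohens_kappa(r1, r2))   # 0.9, ~0.85

# Skewed sample: nearly every case rated 0, so chance agreement is
# high and one disagreement drives kappa to zero despite 90% agreement.
s1 = [0] * 9 + [1]
s2 = [0] * 10
print(percent_agreement(s1, s2), cohens_kappa(s1, s2))   # 0.9, 0.0
```

Both samples show 90% raw agreement, but kappa drops from about 0.85 to 0.0 in the skewed sample, which is why the authors supplemented kappa with percent agreement when comparing the two studies.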
Database: | OpenAIRE |
External link: |