Evaluating Oral Case Presentations Using a Checklist
Author: Yolanda Haywood, Benjamin Blatt, Jillian S. Catalanotti, Karen Lewis, Samuel J. Simmens, Andrea L. Flory, Seema Kakar, Matthew Mintz
Year of publication: 2013
Subject: Educational measurement; Faculty, Medical; Students, Medical; Objective structured clinical examination; Intraclass correlation; Pilot Projects; Peer Group; Humans; Reliability (statistics); Medical education; General Medicine; Checklist; Peer assessment; Family medicine; District of Columbia; Clinical Competence; Clinical skills; Kappa; Education, Medical, Undergraduate
Source: Academic Medicine. 88:1363-1367
ISSN: 1040-2446
DOI: 10.1097/acm.0b013e31829efed3
Description:
PURPOSE: Previous studies have shown student-evaluators to be reliable assessors of some clinical skills, but this model has not been studied for oral case presentations (OCPs). The purpose of this study was to examine the validity of student-evaluators in assessing OCPs by comparing their ratings with those of faculty.
METHOD: In 2010, the authors developed a dichotomous checklist. They trained 30 fourth-year medical students (student-evaluators) to use it to assess 170 second-year medical students' OCPs in real time during a year-end objective structured clinical examination. Ten faculty physicians then scored videos of a random sample of these OCPs. After discarding items with poor faculty reliability, the authors assessed agreement between faculty and student-evaluators on 18 individual items, total scores, and pass/fail decisions.
RESULTS: The total score correlation between student-evaluators and faculty was 0.84 (P < .001), somewhat better than the faculty-faculty intraclass correlation (r = 0.71). Using a 70% pass/fail cutoff, faculty and student-evaluator agreement was 74% (kappa = 0.46; 95% CI, 0.20-0.72). Overall, student-evaluator scores were more lenient than faculty scores (72% versus 56% pass rates; P = .03).
CONCLUSIONS: Senior student-evaluators were able to reliably assess second-year medical students' OCP skills. The results support the use of student-evaluators for peer assessment of OCPs in low-stakes settings, but evidence of leniency compared with faculty assessment suggests caution in using student-evaluators in high-stakes settings. Extending peer assessment to OCPs provides a practical approach for low-resource evaluation of this essential skill.
Database: OpenAIRE
External link:
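For context on the agreement statistics quoted in the RESULTS above, the following is a minimal illustrative sketch (not the authors' analysis code) of how percent agreement and Cohen's kappa are computed from two raters' pass/fail decisions. The decision lists are hypothetical and do not reproduce the study data.

```python
# Illustrative sketch: percent agreement and Cohen's kappa for two raters'
# binary (pass = 1 / fail = 0) decisions. Data below are hypothetical.

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two binary raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement estimated from each rater's marginal pass rate.
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail calls; the student-evaluator passes more examinees,
# mirroring the leniency pattern the abstract reports.
student_evaluator = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
faculty           = [1, 0, 1, 0, 1, 1, 0, 1, 0, 0]

agreement = sum(a == b for a, b in zip(student_evaluator, faculty)) / len(faculty)
print(f"Percent agreement: {agreement:.0%}")                        # 80%
print(f"Cohen's kappa: {cohen_kappa(student_evaluator, faculty):.2f}")  # 0.60
```

Kappa discounts the agreement expected by chance from the raters' marginal pass rates, which is why the 74% raw agreement in the study corresponds to a more moderate kappa of 0.46.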