An updated meta-analysis of the interrater reliability of supervisory performance ratings.

Author: Zhou Y (Department of Psychology, University of Minnesota, Twin Cities); Sackett PR (Department of Psychology, University of Minnesota, Twin Cities); Shen W (Schulich School of Business, York University); Beatty AS (Human Resources Research Organization).
Language: English
Source: The Journal of Applied Psychology [J Appl Psychol] 2024 Jun; Vol. 109 (6), pp. 949-970. Date of Electronic Publication: 2024 Jan 25.
DOI: 10.1037/apl0001174
Abstract: Given the centrality of the job performance construct to organizational researchers, it is critical to understand the reliability of the most common way it is operationalized in the literature. To this end, we conducted an updated meta-analysis on the interrater reliability of supervisory ratings of job performance (k = 132 independent samples) using a new meta-analytic procedure (i.e., the Morris estimator), which includes both within- and between-study variance in the calculation of study weights. An important benefit of this approach is that it prevents large-sample studies from dominating the results. In this investigation, we also examined different factors that may affect interrater reliability, including job complexity, managerial level, rating purpose, performance measure, and rater perspective. We found a higher interrater reliability estimate (r = .65) compared to previous meta-analyses on the topic, and our results converged with an important, but often neglected, finding from a previous meta-analysis by Conway and Huffcutt (1997), such that interrater reliability varies meaningfully by job type (r = .57 for managerial positions vs. r = .68 for nonmanagerial positions). Given this finding, we advise against the use of an overall grand mean of interrater reliability. Instead, we recommend using job-specific or local reliabilities for making corrections for attenuation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
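As a hedged illustration of the correction-for-attenuation point in the abstract (a minimal sketch; the observed validity of .30 below is an assumed example value, not a result from the article), the standard Spearman correction for criterion unreliability divides an observed correlation by the square root of the criterion reliability:

\[
r_c = \frac{r_{xy}}{\sqrt{r_{yy}}}
\]

With the job-specific estimates reported here, an assumed observed validity of r_xy = .30 corrects to .30 / sqrt(.68) ≈ .36 for nonmanagerial jobs but to .30 / sqrt(.57) ≈ .40 for managerial jobs, which is why a single grand-mean reliability can over- or under-correct. The weighting property attributed to the Morris estimator, incorporating both within-study variance v_i and between-study variance tau^2, is consistent with random-effects weights of the form w_i = 1 / (v_i + \hat{\tau}^2) (assuming the standard random-effects formulation); such weights bound the influence any single large-sample study can exert on the pooled estimate.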
Database: MEDLINE