An investigation of a method for validating individual raters of performance and its implications for a generalized rating ability

Date
1982
Publisher
Virginia Polytechnic Institute and State University
Abstract

The present study explored a technique for validating individual raters of performance and its implications for the existence of a generalized "ability" in raters to make accurate assessments of others' performance. Subjects were asked to record critical incidents of ratees' performance in two types of job situations: 1) a videotaped presentation of managers interviewing problem employees, and 2) instructors teaching in actual college classrooms. Subjects also rated the performance of these managers and instructors. Scaled critical incidents were correlated with ratings to derive three kinds of accuracy scores. Two sets of these accuracy scores (the managerial "reliability" and "validity" estimates) were compared to determine whether a method for inferring validity from many raters' observations was comparable to a method using only one rater's observations. The accuracy scores derived in the two types of settings (i.e., reliability estimates derived from manager data and reliability estimates derived from instructor data) were compared to determine the generalizability of rating accuracy across situations. Little empirical support was found for the equivalence of the two methods (i.e., "reliability" and "validity") or for the generalized-ability notion. Possible reasons for the study's failure to support the hypotheses are discussed, with emphasis on the importance of considering the process of rating performance rather than the end products of that process.
