Agreement Between Self and Other Ratings in Multi-Rater Tools: Performance, Alternative Measures, and Importance.
Description: Multi-rater tools, also referred to as 360-degree feedback tools, are frequently used in addition to traditional supervisory appraisals because each source (i.e., supervisor, peer, direct report) brings a unique perspective and the opportunity to observe different aspects of job performance. Research has found that differences among sources are most prevalent between self and other ratings, and that the direction of agreement is related to overall job performance. Research has typically focused on one form of agreement: the direction of an individual's self-ratings compared to others' ratings. The current study expanded on past research on rater agreement using a data set (n = 215) consisting of multi-rater data for professionals participating in a leadership development process. The study examined the ability to predict job performance with three different measures of self-other agreement (i.e., the difference between overall mean scores (direction), the mean absolute difference across items (difference), and the mean correlation across items (similarity)). The study also examined how these relationships may differ across performance dimensions. The final purpose was to explore how the importance of the performance dimensions, as rated by the participant, may moderate the relationship between self-other agreement and job performance. Partial support for the study's hypotheses was found. The direction and difference measures of agreement on the overall multi-rater tool and the performance dimensions accounted for a significant amount of the variance in job performance. The relationship between the similarity measure of agreement and job performance, and the moderating effect of importance, were not supported in the current study.
Date: August 2008
Creator: Grahek, Myranda
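The three agreement measures named in the abstract can be illustrated with a minimal sketch. This is not the author's analysis code; the function name, the toy rating vectors, and the use of NumPy are assumptions made for illustration, following the abstract's definitions (direction = difference between overall means, difference = mean absolute item difference, similarity = correlation across items).

```python
import numpy as np

def agreement_measures(self_ratings, other_ratings):
    """Illustrative self-other agreement measures (hypothetical helper).

    self_ratings:  the participant's item-level self-ratings
    other_ratings: others' mean ratings on the same items
    """
    self_r = np.asarray(self_ratings, dtype=float)
    other_r = np.asarray(other_ratings, dtype=float)

    # Direction: signed difference between overall mean scores
    # (positive = self-rater scores higher than others)
    direction = self_r.mean() - other_r.mean()

    # Difference: mean absolute difference across items
    # (magnitude of disagreement, ignoring sign)
    difference = np.abs(self_r - other_r).mean()

    # Similarity: correlation of the two rating profiles across items
    # (how similarly the profiles rise and fall)
    similarity = np.corrcoef(self_r, other_r)[0, 1]

    return direction, difference, similarity

# Toy example with made-up ratings on a 5-point scale
d, ad, sim = agreement_measures([4, 3, 5, 4], [3, 3, 4, 4])
```

With these toy vectors the self-rater averages half a point higher than others (direction = 0.5), items differ by 0.5 on average, and the two profiles are positively correlated.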