Which statistic measures the degree to which two raters agree on a score?


Cohen's kappa is a statistical measure that quantifies the level of agreement between two raters who are assigning categorical ratings to the same set of items. It goes beyond simple percent agreement by accounting for the agreement that would be expected to occur by chance, which makes it a more robust evaluation of consistency between raters, especially with nominal data.
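In the standard formulation (the symbols p_o and p_e are introduced here for illustration, not taken from the question itself), kappa compares the observed proportion of agreement p_o with the proportion of agreement expected by chance p_e, based on each rater's marginal rating frequencies:

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]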

The kappa statistic ranges from -1 to 1, where 1 indicates perfect agreement, 0 indicates no agreement beyond chance, and negative values suggest less agreement than would be expected by chance. It is particularly useful in psychological and educational assessments where subjective judgments are common.
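As a worked illustration, the following minimal Python sketch computes kappa for two hypothetical raters; the ratings and variable names are invented for the example, and the arithmetic follows the formula above.

```python
from collections import Counter

# Hypothetical ratings from two raters on the same eight items.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
n = len(rater_a)

# Observed agreement: proportion of items rated identically by both raters.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's marginal proportions, summed over categories.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

# Cohen's kappa: agreement beyond what chance alone would produce.
kappa = (p_o - p_e) / (1 - p_e)
print(f"observed = {p_o:.2f}, chance = {p_e:.2f}, kappa = {kappa:.2f}")
```

Here the raters agree on 6 of 8 items (p_o = 0.75) while chance alone predicts about 0.53, so kappa is roughly 0.47, indicating moderate agreement beyond chance.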

The other statistics offered as answer choices serve different purposes, such as measuring differences or relationships between scores rather than direct agreement. Understanding Cohen's kappa allows practitioners to gauge inter-rater reliability, ensuring that assessments are consistent and trustworthy across different raters.
