A chance-corrected agreement measure takes into account the possibility of agreement
occurring by chance.
The Kappa coefficient is the most popular measure of chance-corrected agreement between
qualitative variables. It is the overall observed agreement corrected for the possibility of
agreement occurring by chance. A weighted version of the statistic is useful for ordinal
variables as it weights disagreements dependent on the degree of disagreement between
observers. As the Kappa coefficient is an overall summary statistic, it should be accompanied
by an agreement plot which can show more insight than an overall summary statistic.
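The observed-versus-chance correction, and its weighted variant for ordinal scales, can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the function names and the linear disagreement weights are my own choices:

```python
def cohen_kappa(table):
    """Cohen's kappa for a square contingency table (list of lists).

    Rows: categories assigned by observer 1; columns: by observer 2.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_o = sum(table[i][i] for i in range(k)) / n                        # observed agreement
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)


def weighted_kappa(table, weights=None):
    """Weighted kappa; by default uses linear disagreement weights |i - j| / (k - 1)."""
    k = len(table)
    if weights is None:
        weights = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    # Disagreement-weighted observed and chance-expected proportions.
    q_o = sum(weights[i][j] * table[i][j]
              for i in range(k) for j in range(k)) / n
    q_e = sum(weights[i][j] * row_totals[i] * col_totals[j]
              for i in range(k) for j in range(k)) / n**2
    return 1 - q_o / q_e


# Example: p_o = 35/50 = 0.70, p_e = 0.50, so kappa = 0.20 / 0.50 = 0.40.
table = [[20, 5],
         [10, 15]]
print(round(cohen_kappa(table), 2))  # 0.4
```

For a 2x2 table, linear weighting reduces to the unweighted statistic; the weights only start to matter when the ordinal scale has three or more categories, where a one-step disagreement is penalized less than a two-step one.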
Interpretation of the Kappa coefficient is difficult. The most popular criteria for
assessing agreement are those of Landis and Koch (1977), who characterized values < 0 as
indicating no agreement and 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate,
0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement. This set of guidelines is,
however, by no means universally accepted. The Kappa coefficient is also sensitive to the
marginal distributions: a substantial imbalance in the contingency table's marginal totals,
either horizontally or vertically, lowers the coefficient, whereas it is higher when the
imbalance in the corresponding marginal totals is asymmetrical rather than symmetrical, or
imperfectly rather than perfectly symmetrical.
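Both marginal effects can be demonstrated numerically. The tables below are invented for illustration: the first pair shares an observed agreement of 0.90, the second pair of 0.60, yet the kappa values differ markedly because of the margins alone:

```python
def kappa(table):
    """Cohen's kappa for a square contingency table (list of lists)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    p_o = sum(table[i][i] for i in range(k)) / n
    p_e = sum(r * c for r, c in zip(rows, cols)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Same observed agreement (0.90), different marginal balance:
balanced   = [[45, 5], [5, 45]]   # margins 50/50 on both sides
unbalanced = [[85, 5], [5, 5]]    # both margins skewed 90/10

# Same observed agreement (0.60), imbalance in the same vs opposite direction:
same_dir = [[45, 15], [25, 15]]   # rows 60/40, columns 70/30
opp_dir  = [[25, 35], [5, 35]]    # rows 60/40, columns 30/70

print(f"balanced:   {kappa(balanced):.2f}")    # 0.80
print(f"unbalanced: {kappa(unbalanced):.2f}")  # 0.44  (skewed margins lower kappa)
print(f"same_dir:   {kappa(same_dir):.2f}")    # 0.13
print(f"opp_dir:    {kappa(opp_dir):.2f}")     # 0.26  (asymmetric imbalance raises kappa)
```

The first comparison shows kappa dropping from 0.80 to 0.44 with no change in observed agreement, purely because skewed margins inflate the chance-agreement term; the second shows that imbalance in opposite directions yields a higher kappa than the same imbalance in a common direction.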