Agreement Coefficient

Based on the reported 95% confidence interval, kappa lies somewhere between 0.27 and 0.51, indicating only moderate agreement between Siskel and Ebert.

In statistics, inter-rater reliability (also called inter-rater agreement or concordance) is the degree of agreement among raters: a score of how much homogeneity, or consensus, there is in the judges' ratings. Cohen's kappa statistic (or simply kappa) is intended to measure the agreement between two raters. The overall probability of agreement is Σ_{i} π_{ii}, while the probability of agreement under the null hypothesis of independent ratings is Σ_{i} π_{i+} π_{+i}; kappa compares the two as κ = (Σ_{i} π_{ii} − Σ_{i} π_{i+} π_{+i}) / (1 − Σ_{i} π_{i+} π_{+i}). Note that Σ_{i} π_{ii} = 0 indicates no agreement at all, while Σ_{i} π_{ii} = 1 indicates perfect agreement. A perfect match occurs when all counts fall on the main diagonal of the table, so the probability of agreement equals 1. Kappa is defined so that a larger value implies stronger agreement.
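
Cohen's kappa is straightforward to compute once the two critics' ratings are cross-tabulated. The sketch below is a minimal illustration under that assumption; the function name cohens_kappa and the counts in the example are invented for illustration and are not taken from the article.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square table of rating counts.

    Rows are the first rater's categories, columns are the second rater's;
    cell (i, j) counts items rated i by the first rater and j by the second.
    """
    probs = np.asarray(table, dtype=float)
    probs /= probs.sum()                          # joint proportions pi_ij
    p_o = np.trace(probs)                         # observed agreement, sum_i pi_ii
    p_e = probs.sum(axis=1) @ probs.sum(axis=0)   # chance agreement, sum_i pi_i+ * pi_+i
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2 x 2 cross-tabulation of thumbs-up / thumbs-down ratings.
ratings = [[45, 10],
           [15, 30]]
print(round(cohens_kappa(ratings), 3))            # about 0.49 for these made-up counts
```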

To define perfect disagreement, the film ratings would have to be opposite, ideally at the extremes. In a 2 x 2 table it is possible to define perfect disagreement, because every positive rating can be paired with a negative one (e.g., Love it vs. Hate it). But what about a square table that is 3 x 3 or larger? In those cases there are more ways to disagree, and it quickly becomes much more complicated to define perfect disagreement.
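
For the 2 x 2 case, perfect disagreement is easy to exhibit: put every count off the main diagonal. Reusing the cohens_kappa sketch above with invented, balanced counts gives the minimum possible value of kappa for such a table.

```python
# Every film one critic loves, the other hates: all counts off the diagonal.
perfect_disagreement = [[0, 50],
                        [50, 0]]
print(cohens_kappa(perfect_disagreement))   # -1.0, the lower bound when the marginals are balanced
```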
