Title | Computing Reliability for Coreference Annotation |
Author(s) | Rebecca J. Passonneau, Columbia University |
Session | O31-EW |
Abstract | Coreference annotation is annotation of language corpora to indicate which expressions have been used to co-specify the same discourse entity. When annotations of the same data are collected from two or more coders, the reliability of the data may need to be quantified. Two obstacles have stood in the way of applying reliability metrics: incommensurate units across annotations, and lack of a convenient representation of the coding values. Given N coders and M coding units, reliability is computed from an N-by-M matrix that records the value assigned to unit m_j by coder n_k. The solution I present accommodates a wide range of coding choices for the annotator, while preserving the same units across codings. As a consequence, it permits a straightforward application of reliability measurement. In addition, in coreference annotation, disagreements can be complete or partial, so I incorporate a distance metric to scale disagreements. This method has also been applied to a quite distinct coding task, namely semantic annotation of summaries. |
Keyword(s) | Coreference, reliability, corpora, discourse entity |
Language(s) | English |
Full Paper | 752.pdf |
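The abstract describes computing reliability from an N-coders-by-M-units matrix, with a distance metric scaling partial disagreements between set-valued codings. The sketch below is an illustrative reconstruction of that idea, not the paper's actual algorithm: it computes a Krippendorff-style weighted agreement coefficient (alpha = 1 - D_observed / D_expected), and the particular `set_distance` function (graded by identity, subsumption, overlap, and disjointness) is an assumption standing in for whatever distance the paper defines.

```python
from itertools import combinations

def set_distance(a, b):
    """Illustrative distance between two set-valued codings:
    0 if identical, 1/3 if one subsumes the other, 2/3 if they
    merely overlap, 1 if disjoint. (Assumed, not the paper's metric.)"""
    a, b = frozenset(a), frozenset(b)
    if a == b:
        return 0.0
    if a < b or b < a:
        return 1 / 3
    if a & b:
        return 2 / 3
    return 1.0

def alpha(matrix, dist=set_distance):
    """Krippendorff-style reliability for an N-by-M matrix, where
    matrix[k][j] is the value coder n_k assigned to unit m_j."""
    n_coders = len(matrix)
    n_units = len(matrix[0])
    # Observed disagreement: mean pairwise distance between coders
    # within each unit.
    within = [
        dist(matrix[k1][j], matrix[k2][j])
        for j in range(n_units)
        for k1, k2 in combinations(range(n_coders), 2)
    ]
    d_o = sum(within) / len(within)
    # Expected disagreement: mean pairwise distance over all values,
    # regardless of unit (chance-level disagreement).
    values = [matrix[k][j] for k in range(n_coders) for j in range(n_units)]
    between = [dist(a, b) for a, b in combinations(values, 2)]
    d_e = sum(between) / len(between)
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Two coders, three units; each value is the set of mentions the
# unit was grouped with. Coders agree on units 0 and 1 only.
coder1 = [{1}, {2}, {3}]
coder2 = [{1}, {2}, {4}]
print(alpha([coder1, coder1]))  # perfect agreement
print(alpha([coder1, coder2]))  # partial agreement, 0 < alpha < 1
```

Because the distance function is a parameter, complete and partial disagreements are weighted differently, which is the role the abstract assigns to its distance metric.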