LREC 2000 2nd International Conference on Language Resources & Evaluation
 


Title Dialogue Annotation for Language Systems Evaluation
Authors Charfuelán Marcela (Dep. SSR ETSIT-UPM Ciudad Universitaria Madrid, Spain, marcela@gaps.ssr.upm.es)
Relaño Gil José (Dep. SSR ETSIT-UPM Ciudad Universitaria Madrid, Spain, jrelanio@gaps.ssr.upm.es)
Rodríguez Gancedo M. Carmen (Dep. SSR ETSIT-UPM Ciudad Universitaria Madrid, Spain, mcarmen@gaps.ssr.upm.es)
Tapias Merino Daniel (Speech Technology Group, Telefónica Investigación y Desarrollo, S.A., C. Emilio Vargas 6, 28043 Madrid, Spain, daniel@craso.tid.es)
Hernández Gómez Luis (Dep. SSR ETSIT-UPM Ciudad Universitaria Madrid, Spain, luis@gaps.ssr.upm.es)
Keywords Annotated Dialogue Corpora, Annotation frameworks, Annotation Tools, Benchmark, Log files, SLDSs Evaluation Procedures
Session SP4 - Tools for Evaluation and Processing of Spoken Language Resources
Full Paper 33.ps, 33.pdf
Abstract The evaluation of Natural Language Processing (NLP) systems is still an open problem that demands further progress from the research community toward establishing general evaluation frameworks. In this paper we present an experimental multilevel annotation process to be followed during the testing phase of Spoken Language Dialogue Systems (SLDSs). Based on this process, we address issues related to an annotation scheme for evaluation dialogue corpora, as well as particular annotation tools and processes.
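As a purely illustrative sketch, and not the scheme defined in the paper, a multilevel annotation of an SLDS evaluation dialogue might attach several layers of labels to each turn recovered from the system's log files; the level names, attributes, and example values below are assumptions for illustration only.

```python
# Hypothetical multilevel annotation record for one dialogue turn in an
# SLDS evaluation corpus. Level names and attributes are illustrative
# assumptions, not the annotation scheme proposed in the paper.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AnnotatedTurn:
    turn_id: int
    speaker: str                                                # "user" or "system"
    transcription: str                                          # orthographic transcription level
    recognized: str = ""                                        # recognizer output taken from the log file
    dialogue_acts: List[str] = field(default_factory=list)      # dialogue-act level
    evaluation: Dict[str, bool] = field(default_factory=dict)   # evaluation-level flags


# Minimal usage example: annotating one user turn extracted from a log file.
turn = AnnotatedTurn(
    turn_id=3,
    speaker="user",
    transcription="quiero un billete a Madrid",
    recognized="quiero un billete Madrid",
    dialogue_acts=["request(ticket)"],
    evaluation={"recognition_error": True, "task_step_completed": True},
)
print(turn)
```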