Title |
Evaluating Human-Machine Conversation for Appropriateness |
Authors |
Nick Webb, David Benyon, Preben Hansen and Oli Mival
Abstract |
Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping the user achieve some clearly defined goal, such as booking a flight or completing a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintaining and performing a collaborative dialogue. Working within the EU-funded COMPANIONS program, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.
Topics |
Dialogue, Evaluation methodologies, Usability, user satisfaction |
Full paper |
Evaluating Human-Machine Conversation for Appropriateness |
Slides |
Evaluating Human-Machine Conversation for Appropriateness |
Bibtex |
@InProceedings{WEBB10.115,
  author    = {Nick Webb and David Benyon and Preben Hansen and Oli Mival},
  title     = {Evaluating Human-Machine Conversation for Appropriateness},
  booktitle = {Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
  year      = {2010},
  month     = {may},
  date      = {19-21},
  address   = {Valletta, Malta},
  editor    = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis and Mike Rosner and Daniel Tapias},
  publisher = {European Language Resources Association (ELRA)},
  isbn      = {2-9517408-6-7},
  language  = {english}
}