
Evaluating human-machine conversation for appropriateness.

Webb, Nick; Benyon, David; Hansen, Preben; Mival, Oli

Authors

Nick Webb

David Benyon

Preben Hansen

Oli Mival



Abstract

Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping the user achieve some clearly defined goal, such as booking a flight or completing a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintaining and performing a collaborative dialogue. Working within the EU-funded COMPANIONS programme, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.

Presentation Conference Type Conference Paper (Published)
Conference Name LREC 2010, Seventh International Conference on Language Resources and Evaluation
Start Date May 17, 2010
End Date May 23, 2010
Publication Date 2010-05
Deposit Date Jun 24, 2010
Publicly Available Date Jun 24, 2010
Publisher European Language Resources Association (ELRA)
Peer Reviewed Peer Reviewed
Pages 84-91
Book Title Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)
ISBN 2951740867
Keywords Dialogue; Evaluation methodologies; Usability; User satisfaction
Public URL http://researchrepository.napier.ac.uk/id/eprint/3767
Contract Date Jun 24, 2010
