Nick Webb
Evaluating human-machine conversation for appropriateness.
Webb, Nick; Benyon, David; Hansen, Preben; Mival, Oli
Abstract
Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion of a task; that is, the system is helping the user achieve some clearly defined goal, such as booking a flight or completing a banking transaction. It is not clear that such metrics are as useful when dealing with a system that has a more complex task, or even no definable task at all, beyond maintaining and performing a collaborative dialogue. Working within the EU-funded COMPANIONS programme, we investigate the use of appropriateness as a measure of conversation quality, the hypothesis being that good companions need to be good conversational partners. We report initial work in the direction of annotating dialogue for indicators of good conversation, including the annotation and comparison of the output of two generations of the same dialogue system.
Citation
Webb, N., Benyon, D., Hansen, P., & Mival, O. (2010). Evaluating human-machine conversation for appropriateness. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 84-91).
Presentation Conference Type | Conference Paper (Published) |
---|---|
Conference Name | LREC 2010, Seventh International Conference on Language Resources and Evaluation |
Start Date | May 17, 2010 |
End Date | May 23, 2010 |
Publication Date | 2010-05 |
Deposit Date | Jun 24, 2010 |
Publicly Available Date | Jun 24, 2010 |
Publisher | European Language Resources Association (ELRA) |
Peer Reviewed | Peer Reviewed |
Pages | 84-91 |
Book Title | Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) |
ISBN | 2951740867 |
Keywords | Dialogue; Evaluation methodologies; Usability; User satisfaction |
Public URL | http://researchrepository.napier.ac.uk/id/eprint/3767 |
Contract Date | Jun 24, 2010 |
Files
Evaluating human-machine conversation for appropriateness (PDF, 682 Kb)