Research Repository

All Outputs (3)

Wizard of Oz Experiments for a companion dialogue system: eliciting companionable conversation. (2010)
Conference Proceeding
Webb, N., Benyon, D., Bradley, J., Hansen, P., & Mival, O. (2010). Wizard of Oz Experiments for a companion dialogue system: eliciting companionable conversation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Working within the EU funded COMPANIONS program, we report recent work with a Wizard of Oz (WoZ) dialogue collection system. COMPANION systems require complex models of dialogue, and new models of evaluation. Wizard of Oz dialogues give us a mechani...

Evaluating human-machine conversation for appropriateness. (2010)
Conference Proceeding
Webb, N., Benyon, D., Hansen, P., & Mival, O. (2010). Evaluating human-machine conversation for appropriateness. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10) (pp. 84-91)

Evaluation of complex, collaborative dialogue systems is a difficult task. Traditionally, developers have relied upon subjective feedback from the user, and parametrisation over observable metrics. However, both models place some reliance on the notion...