
Emotional Voice Puppetry

Pan, Ye; Zhang, Ruisi; Cheng, Shengran; Tan, Shuai; Ding, Yu; Mitchell, Kenny; Yang, Xubo


Abstract

This paper presents emotional voice puppetry, an audio-driven facial animation approach for portraying characters with vivid emotional changes. Lip motion and the surrounding facial areas are controlled by the content of the audio, while the facial dynamics are determined by the emotion category and its intensity. Our approach is distinctive in that it accounts for perceptual validity and geometry, rather than relying on purely geometric processes. Another highlight of our approach is its generalizability across multiple characters. Our findings show that, when the rig parameters are categorized into eyes, eyebrows, nose, mouth, and signature wrinkles, training new secondary characters achieves significantly better generalization than joint training. User studies demonstrate the effectiveness of our approach both qualitatively and quantitatively. Our approach is applicable to AR/VR and 3DUI scenarios such as virtual reality avatars/self-avatars, teleconferencing, and in-game dialogue.
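The abstract describes partitioning a character's rig parameters into regional groups (eyes, eyebrows, nose, mouth, signature wrinkles) so that secondary characters can be trained per region rather than jointly. A minimal sketch of that partitioning step is below; all parameter names and the `REGION_GROUPS` mapping are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration of the regional rig-parameter grouping
# described in the abstract. Parameter names are invented examples.

REGION_GROUPS = {
    "eyes": ["eye_blink_l", "eye_blink_r", "eye_wide_l", "eye_wide_r"],
    "eyebrows": ["brow_raise_l", "brow_raise_r", "brow_furrow"],
    "nose": ["nose_wrinkle"],
    "mouth": ["jaw_open", "lip_corner_pull_l", "lip_corner_pull_r"],
    "wrinkles": ["nasolabial_fold", "crow_feet"],
}

def split_rig_vector(params):
    """Partition one frame's rig parameters into per-region sub-dicts.

    Missing parameters default to 0.0 (neutral), so each regional
    model always receives a fixed-size input.
    """
    return {
        region: {name: params.get(name, 0.0) for name in names}
        for region, names in REGION_GROUPS.items()
    }

# Example frame: a smile with one raised brow.
frame = {"jaw_open": 0.2, "lip_corner_pull_l": 0.8, "brow_raise_l": 0.5}
by_region = split_rig_vector(frame)
```

Each regional sub-dict could then feed its own retargeting model for a new character, which is the setup the abstract reports generalizing better than joint training.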

Journal Article Type: Article
Acceptance Date: Jan 30, 2023
Online Publication Date: Feb 22, 2023
Publication Date: 2023-05
Deposit Date: Feb 24, 2023
Publicly Available Date: Feb 27, 2023
Journal: IEEE Transactions on Visualization and Computer Graphics
Print ISSN: 1077-2626
Electronic ISSN: 1941-0506
Publisher: Institute of Electrical and Electronics Engineers
Peer Reviewed: Yes
Volume: 29
Issue: 5
Pages: 2527-2535
DOI: https://doi.org/10.1109/tvcg.2023.3247101
Keywords: Virtual reality, audio, emotion, character animation
