Active Learning for Interactive Audio-Animatronic Performance Design

Castellon, Joel; Bächer, Moritz; McCrory, Matt; Ayala, Alfredo; Stolarz, Jeremy; Mitchell, Kenny

Authors

Joel Castellon

Moritz Bächer

Matt McCrory

Alfredo Ayala

Jeremy Stolarz

Kenny Mitchell



Abstract

We present a practical neural computational approach for the interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deformations. To achieve interactive digital pose design, we train a shallow, fully connected neural network (KSNN) that maps input motor activations to the simulated mesh vertex positions. Our fully automatic synthetic training algorithm enables a first-of-its-kind active learning framework (GEN-LAL) for generative modeling of facial pose simulations. With adaptive selection, we reduce training time to less than half that of the unmodified training approach for each new Audio-Animatronic® figure.
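The abstract describes two generic ingredients that can be sketched independently of the paper's specifics: a shallow fully connected regressor from motor activations to mesh vertex positions, and an active-learning loop that adaptively selects which poses to simulate next. The sketch below is a minimal illustration of that pattern only; the names (`simulate`, `fit_shallow`), the network size, and the ensemble-disagreement query criterion are assumptions for illustration, not the paper's KSNN or GEN-LAL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_motors, n_verts = 4, 6
W_true = rng.normal(size=(n_motors, n_verts))

def simulate(motors):
    # Stand-in for the expensive offline quasi-static simulation:
    # maps motor activations to (flattened) mesh vertex positions.
    return np.tanh(motors @ W_true)

def fit_shallow(X, Y, hidden=16, steps=800, lr=0.02):
    # Shallow fully connected network (one tanh hidden layer),
    # trained by plain gradient descent on mean squared error.
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))
    for _ in range(steps):
        H = np.tanh(X @ W1)
        G = (H @ W2 - Y) / len(X)          # dLoss/dPrediction
        W1 -= lr * X.T @ ((G @ W2.T) * (1 - H**2))
        W2 -= lr * H.T @ G
    return W1, W2

def predict(model, X):
    W1, W2 = model
    return np.tanh(X @ W1) @ W2

# Pool-based active learning: in each round, simulate only the candidate
# poses on which a small ensemble of networks disagrees most.
pool = rng.uniform(-1, 1, size=(200, n_motors))
X, pool = pool[:8], pool[8:]
Y = simulate(X)
for _ in range(5):
    models = [fit_shallow(X, Y) for _ in range(3)]
    preds = np.stack([predict(m, pool) for m in models])
    disagreement = preds.std(axis=0).mean(axis=1)
    pick = np.argsort(disagreement)[-4:]   # most uncertain candidate poses
    X = np.vstack([X, pool[pick]])
    Y = np.vstack([Y, simulate(pool[pick])])
    pool = np.delete(pool, pick, axis=0)

err = np.abs(predict(models[0], pool) - simulate(pool)).mean()
```

The adaptive-selection idea is that labels (simulation runs) are the expensive resource, so the loop spends them where the surrogate is least certain rather than on a uniformly dense sampling of motor space.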

Citation

Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. Journal of Computer Graphics Techniques, 9(3), 1-19.

Journal Article Type Article
Acceptance Date Mar 12, 2020
Online Publication Date Oct 11, 2020
Publication Date Oct 11, 2020
Deposit Date Oct 16, 2020
Publicly Available Date Oct 16, 2020
Journal Journal of Computer Graphics Techniques
Print ISSN 2331-7418
Peer Reviewed Peer Reviewed
Volume 9
Issue 3
Pages 1-19
Keywords Deep Learning
Public URL http://researchrepository.napier.ac.uk/Output/2693867
Publisher URL http://jcgt.org/published/0009/03/01/
