Real-Time Multi-View Facial Capture with Synthetic Training

Authors

Klaudiny, Martin; McDonagh, Steven; Bradley, Derek; Beeler, Thabo; Mitchell, Kenny

Abstract
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method achieves high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the specific actor's appearance, and we further condition it on the expected illumination conditions and the physical capture rig by generating the training data synthetically. To leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that regressing on multiple video streams achieves higher quality than previous approaches designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration in which cameras are mounted outside the actor's field of view, which is very beneficial: the cameras are then less of a distraction for the actor and leave an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular given the ever-growing demand for motion-captured facial animation in visual effects and video games.
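The paper's multi-dimensional random-fern regressor is not detailed on this page; for intuition, a generic single random fern for regression, in the spirit of the abstract, might be sketched as follows. All class names, test choices, and parameters here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)


class RandomFern:
    """A single random fern for regression (illustrative sketch).

    F random binary feature-difference tests produce an F-bit index into
    2**F bins; each bin stores the mean regression target of the training
    samples that fall into it.
    """

    def __init__(self, n_tests, n_features, out_dim):
        # Each binary test compares two randomly chosen feature dimensions
        # against a random threshold (a common random-fern construction).
        self.pairs = rng.integers(0, n_features, size=(n_tests, 2))
        self.thresholds = rng.normal(0.0, 0.1, size=n_tests)
        self.bins = np.zeros((2 ** n_tests, out_dim))

    def _bin_index(self, x):
        # Concatenate the F test outcomes into one integer bin index.
        diffs = x[self.pairs[:, 0]] - x[self.pairs[:, 1]]
        bits = (diffs > self.thresholds).astype(int)
        return int(bits @ (1 << np.arange(len(bits))))

    def fit(self, X, Y):
        # Store the mean target per occupied bin.
        idx = np.array([self._bin_index(x) for x in X])
        for b in np.unique(idx):
            self.bins[b] = Y[idx == b].mean(axis=0)

    def predict(self, x):
        return self.bins[self._bin_index(x)]
```

In a full regressor, many such ferns would be cascaded and their outputs combined; the paper additionally extends the tests across multiple camera views, which this single-view sketch omits.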
Citation
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129
| Journal Article Type | Article |
| --- | --- |
| Acceptance Date | Apr 1, 2017 |
| Online Publication Date | May 23, 2017 |
| Publication Date | 2017-05 |
| Deposit Date | Jun 23, 2017 |
| Journal | Computer Graphics Forum |
| Print ISSN | 0167-7055 |
| Electronic ISSN | 1467-8659 |
| Publisher | Wiley |
| Peer Reviewed | Peer Reviewed |
| Volume | 36 |
| Issue | 2 |
| Pages | 325-336 |
| DOI | https://doi.org/10.1111/cgf.13129 |
| Keywords | Computer graphics, facial capture system, synthetic training imagery |
| Public URL | http://researchrepository.napier.ac.uk/Output/951328 |