FaceMagic: Real-time Facial Detail Effects on Mobile
Casas, Llogari; Li, Yue; Mitchell, Kenny
Abstract
We present a novel real-time face detail reconstruction method capable of recovering high-quality geometry on consumer mobile devices. Our system first uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibration. We then capture fine-scale surface details using a patch-based Shape from Shading (SfS) approach. We pre-compute the patch-wise constant Moore–Penrose pseudoinverse of the resulting linear system to achieve real-time performance. Our method achieves high interactive frame rates, and experiments show that our new approach reconstructs high-fidelity geometry with results comparable to off-line techniques. We illustrate this through comparisons with off-line and on-line related works, and include demonstrations of novel face-detail shader effects.
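The real-time performance claim rests on pre-computing the Moore–Penrose pseudoinverse of each patch's constant linear system, so the per-frame Shape-from-Shading solve reduces to a single matrix-vector product. Below is a minimal NumPy sketch of that idea; the matrix sizes, function names (`precompute_patch_pseudoinverse`, `solve_patch_depths`) and the random stand-in data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def precompute_patch_pseudoinverse(A):
    """Pre-compute the Moore-Penrose pseudoinverse of a patch's constant
    SfS system matrix A (shape m x n). Done once, off the per-frame path."""
    return np.linalg.pinv(A)

def solve_patch_depths(A_pinv, b):
    """Per-frame least-squares update for one patch: with A+ cached,
    the solve collapses to one matrix-vector multiply."""
    return A_pinv @ b

# Toy usage for a single patch with hypothetical shading constraints.
rng = np.random.default_rng(0)
m, n = 128, 64                        # constraints vs. unknown per-pixel offsets
A = rng.standard_normal((m, n))       # stand-in for the constant patch matrix
A_pinv = precompute_patch_pseudoinverse(A)   # offline / at calibration time

b = rng.standard_normal(m)            # per-frame shading residual vector
x = solve_patch_depths(A_pinv, b)     # real-time path: no per-frame factorization
```

Because the system matrix stays fixed per patch, only the right-hand side changes each frame, which is what makes this scheme practical on mobile GPUs/CPUs.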
Citation
Casas, L., Li, Y., & Mitchell, K. (2020, December). FaceMagic: Real-time Facial Detail Effects on Mobile. Presented at SA '20: SIGGRAPH Asia 2020, Online [Republic of Korea]
| Field | Value |
| --- | --- |
| Presentation Conference Type | Conference Paper (published) |
| Conference Name | SA '20: SIGGRAPH Asia 2020 |
| Start Date | Dec 6, 2020 |
| End Date | Dec 11, 2020 |
| Acceptance Date | Sep 16, 2020 |
| Online Publication Date | Nov 17, 2020 |
| Publication Date | 2020-12 |
| Deposit Date | Dec 4, 2020 |
| Publicly Available Date | Dec 8, 2020 |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 1-4 |
| Book Title | SA '20: SIGGRAPH Asia 2020 Technical Communications |
| ISBN | 9781450380805 |
| DOI | https://doi.org/10.1145/3410700.3425429 |
| Keywords | Augmented Reality |
| Public URL | http://researchrepository.napier.ac.uk/Output/2708795 |
| Publisher URL | https://dl.acm.org/doi/10.1145/3410700.3425429 |
Files
FaceMagic: Real-time Facial Detail Effects on Mobile (accepted version), PDF, 25.5 MB
You might also like
Structured Teaching Prompt Articulation for Generative-AI Role Embodiment with Augmented Mirror Video Displays (2024) — Presentation / Conference Contribution
DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction (2024) — Presentation / Conference Contribution
Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects (2024) — Presentation / Conference Contribution
DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024) — Presentation / Conference Contribution
MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024) — Presentation / Conference Contribution