
Research Repository


Outputs (59)

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (2024). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2538-2548. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome...

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023, October). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. Presented at 31st ACM International Conference on Multimedia, Ottawa, Canada

Our aim is to improve the efficiency and effectiveness of animation production techniques. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...

Editorial: Games May Host the First Rightful AI Citizens (2023)
Journal Article
Mitchell, K. (2023). Editorial: Games May Host the First Rightful AI Citizens. Games: Research and Practice, 1(2), 1-7. https://doi.org/10.1145/3606834

GAMES creatively take place in imaginative worlds informed by, but often not limited by, real-world challenges, and this advantageously provides an accelerated environment for innovation, where concepts and ideas can be explored unencumbered by physi...

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Presented at the 36th International Conference on Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (2023). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3145-3157. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium which masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a si...

Embodied online dance learning objectives of CAROUSEL + (2021)
Presentation / Conference Contribution
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., Van Der Linden, P., Herrmann, E., Hämäläinen, P., McGregor, I., Slusallek, P., & Mac Williams, C. (2021, March). Embodied online dance learning objectives of CAROUSEL +. Presented at 2021 IEEE VR 6th Annual Workshop on K-12+ Embodied Learning through Virtual and Augmented Reality (KELVAR), Lisbon, Portugal

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL + project, which aims to impact how online immersive technologies influence multiuser interaction and communication with a focus on dancing and learning danc...

FaceMagic: Real-time Facial Detail Effects on Mobile (2020)
Presentation / Conference Contribution
Casas, L., Li, Y., & Mitchell, K. (2020, December). FaceMagic: Real-time Facial Detail Effects on Mobile. Presented at SA '20: SIGGRAPH Asia 2020, Online [Republic of Korea]

We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system firstly uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibrat...

Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis (2020)
Journal Article
Pan, Y., & Mitchell, K. (2021). Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis. International Journal of Human-Computer Studies, 147, Article 102563. https://doi.org/10.1016/j.ijhcs.2020.102563

Anamorphosis for 2D displays can provide viewer centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for othe...

Active Learning for Interactive Audio-Animatronic Performance Design (2020)
Journal Article
Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. The Journal of Computer Graphics Techniques, 9(3), 1-19

We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deform...

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2020)
Presentation / Conference Contribution
Casas, L., Kosek, M., & Mitchell, K. (2017, March). Props Alive: A Framework for Augmented Reality Stop Motion Animation. Presented at 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), Los Angeles, CA, USA

Stop motion animation evolved in the early days of cinema with the aim to create an illusion of movement with static puppets posed manually each frame. Current stop motion movies have introduced 3D printing processes in order to acquire animations more ac...

PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (2020)
Presentation / Conference Contribution
Pan, Y., & Mitchell, K. (2020, March). PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation. Presented at 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA

Augmented reality devices enable new approaches for character animation, e.g., given that character posing is three dimensional in nature it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present Pose...

Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation (2020)
Presentation / Conference Contribution
Pan, Y., & Mitchell, K. (2020, March). Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation. Presented at 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA

Immersive technologies have increasingly attracted the attention of the computer animation community in search of more intuitive and effective alternatives to the current sophisticated 2D interfaces. The higher affordances offered by 3D interaction,...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Presentation / Conference Contribution
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019, October). Photo-Realistic Facial Details Synthesis from Single Image. Presented at 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow...

Deep Precomputed Radiance Transfer for Deformable Objects (2019)
Presentation / Conference Contribution
Li, Y., Wiedemann, P., & Mitchell, K. (2019, May). Deep Precomputed Radiance Transfer for Deformable Objects. Presented at ACM Symposium on Interactive 3D Graphics and Games, Montreal, Quebec, Canada

We propose DeepPRT, a deep convolutional neural network to compactly encapsulate the radiance transfer of a freely deformable object for rasterization in real-time.
With pre-computation of radiance transfer (PRT) we can store complex light interac...

Feature-preserving detailed 3D face reconstruction from a single image (2018)
Presentation / Conference Contribution
Li, Y., Ma, L., Fan, H., & Mitchell, K. (2018, December). Feature-preserving detailed 3D face reconstruction from a single image. Presented at the 15th ACM SIGGRAPH European Conference, London, United Kingdom

Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expre...

Multi-reality games: an experience across the entire reality-virtuality continuum (2018)
Presentation / Conference Contribution
Casas, L., Ciccone, L., Çimen, G., Wiedemann, P., Fauconneau, M., Sumner, R. W., & Mitchell, K. (2018, December). Multi-reality games: an experience across the entire reality-virtuality continuum. Presented at the 16th ACM SIGGRAPH International Conference, Tokyo, Japan

Interactive play can take very different forms, from playing with physical board games to fully digital video games. In recent years, new video game paradigms were introduced to connect real-world objects to virtual game characters. However, even the...

Image Based Proximate Shadow Retargeting (2018)
Presentation / Conference Contribution
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2018, September). Image Based Proximate Shadow Retargeting. Presented at Computer Graphics & Visual Computing (CGVC) 2018, Swansea University, United Kingdom

We introduce Shadow Retargeting which maps real shadow appearance to virtual shadows given a corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from un-occluded real...

GPU-accelerated depth codec for real-time, high-quality light field reconstruction (2018)
Journal Article
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2018). GPU-accelerated depth codec for real-time, high-quality light field reconstruction. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1), 1-15. https://doi.org/10.1145/3203193

Pre-calculated depth information is essential for efficient light field video rendering, due to the prohibitive cost of depth estimation from color when real-time performance is desired. Standard state-of-the-art video codecs fail to satisfy such per...