Research Repository

All Outputs (54)

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Paper presented at SIGCHI GenAI in UGC Workshop, Honolulu

We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respon...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024: 7th IEEE VR International Workshop on Animation in Virtual and Augmented Environments

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023, October). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. Presented at 31st ACM International Conference on Multimedia, Ottawa, Canada

Our aim is to improve animation production techniques' efficiency and effectiveness. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...

Intermediated Reality with an AI 3D Printed Character (2023)
Presentation / Conference Contribution
Casas, L., & Mitchell, K. (2023). Intermediated Reality with an AI 3D Printed Character. In SIGGRAPH '23: ACM SIGGRAPH 2023 Real-Time Live!. https://doi.org/10.1145/3588430.3597251

We introduce live character conversational interactions in Intermediated Reality to bring real-world objects to life through Augmented Reality (AR) and Artificial Intelligence (AI). The AI recognizes live speech and generates short character responses, sy...

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Paper presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...

Generating real-time detailed ground visualisations from sparse aerial point clouds (2022)
Presentation / Conference Contribution
Murray, A., Mitchell, S., Bradley, A., Waite, E., Ross, C., Jamrozy, J., & Mitchell, K. (2022, December). Generating real-time detailed ground visualisations from sparse aerial point clouds. Paper presented at CVMP 2022: The 19th ACM SIGGRAPH European Conference on Visual Media Production

We present an informed kind of atomic rendering primitive, which forms a local adjacency aware classified particle basis decoupled from texture and topology. Suited to visual synthesis of detailed landscapes inferred from sparse unevenly distributed...

MienCap: Performance-based Facial Animation with Live Mood Dynamics (2022)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Chen, N., Qiu, Y., Ding, Y., & Mitchell, K. (2022). MienCap: Performance-based Facial Animation with Live Mood Dynamics. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (645-64

Our purpose is to improve performance-based animation which can drive believable 3D stylized characters that are truly perceptual. By combining traditional blendshape animation techniques with machine learning models, we present a real time motion ca...

Embodied online dance learning objectives of CAROUSEL + (2021)
Presentation / Conference Contribution
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., …Mac Williams, C. (2021). Embodied online dance learning objectives of CAROUSEL +. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL+ project, which aims to impact how online immersive technologies influence multiuser interaction and communication with a focus on dancing and learning danc...

FaceMagic: Real-time Facial Detail Effects on Mobile (2020)
Presentation / Conference Contribution
Casas, L., Li, Y., & Mitchell, K. (2020). FaceMagic: Real-time Facial Detail Effects on Mobile. In SA '20: SIGGRAPH Asia 2020 Technical Communications (1-4). https://doi.org/10.1145/3410700.3425429

We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system firstly uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibrat...

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2020)
Presentation / Conference Contribution
Casas, L., Kosek, M., & Mitchell, K. (2020). Props Alive: A Framework for Augmented Reality Stop Motion Animation. In 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS). https://doi.org/10.1109/SEA

Stop motion animation evolved in the early days of cinema with the aim to create an illusion of movement with static puppets posed manually each frame. Current stop motion movies introduced 3D printing processes in order to acquire animations more ac...

Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation (2020)
Presentation / Conference Contribution
Pan, Y., & Mitchell, K. (2020). Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Immersive technologies have increasingly attracted the attention of the computer animation community in search of more intuitive and effective alternatives to the current sophisticated 2D interfaces. The higher affordances offered by 3D interaction,...

PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (2020)
Presentation / Conference Contribution
Pan, Y., & Mitchell, K. (2020). PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (759-760). https://doi.org/10.1109/vrw50115.202

Augmented reality devices enable new approaches for character animation, e.g., given that character posing is three dimensional in nature it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present Pose...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Presentation / Conference Contribution
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019). Photo-Realistic Facial Details Synthesis from Single Image. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (9429-9439). https://doi.org/10.1109/ICCV.2019.00952

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras (2019)
Presentation / Conference Contribution
Dos Santos Brito, C. J., & Mitchell, K. (2019). Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras. In VRCAI '19: The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry

Preparing datasets for use in the training of real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are accessible for regular photography datasets, but introspectively mounted cameras for VR face tracking have in...

JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories (2019)
Presentation / Conference Contribution
Kapadia, M., Muniz, C. M., Sohn, S. S., Pan, Y., Schriber, S., Mitchell, K., & Gross, M. (2019). JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories. In Interactive Storytelling: 12th International Conference on Interactive Digital Storytelling (ICIDS 2019)

JUNGLE is an interactive, visual platform for the collaborative manipulation and consumption of nonlinear transmedia stories. Intuitive visual interfaces encourage JUNGLE users to explore vast libraries of story worlds, expand existing stories, or co...

Light Field Synthesis Using Inexpensive Surveillance Camera Systems (2019)
Presentation / Conference Contribution
Dumbgen, F., Schroers, C., & Mitchell, K. (2019). Light Field Synthesis Using Inexpensive Surveillance Camera Systems. In 2019 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2019.8804269

We present a light field synthesis technique that achieves accurate reconstruction given a low-cost, wide-baseline camera rig. Our system integrates optical flow with methods for rectification, disparity estimation, and feature extraction, which we t...

Deep Precomputed Radiance Transfer for Deformable Objects (2019)
Presentation / Conference Contribution
Li, Y., Wiedemann, P., & Mitchell, K. (2019, May). Deep Precomputed Radiance Transfer for Deformable Objects. Presented at ACM Symposium on Interactive 3D Graphics and Games, Montreal, Quebec, Canada

We propose DeepPRT, a deep convolutional neural network to compactly encapsulate the radiance transfer of a freely deformable object for rasterization in real-time. With pre-computation of radiance transfer (PRT) we can store complex light interac...

Feature-preserving detailed 3D face reconstruction from a single image (2018)
Presentation / Conference Contribution
Li, Y., Ma, L., Fan, H., & Mitchell, K. (2018). Feature-preserving detailed 3D face reconstruction from a single image. In CVMP '18 Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production. https://doi.org/10.1145/3278471.32784

Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expre...