Research Repository

All Outputs (49)

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Conference Proceeding
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (in press). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments.

We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respo...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Conference Proceeding
Koniaris, B., Sinclair, D., & Mitchell, K. (in press). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences.

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...
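
As a rough illustration of the kind of end-to-end latency telemetry the abstract describes, a minimal Python sketch might look like the following; the `LatencyProbe` class, its methods, and the half-round-trip approximation are illustrative assumptions, not DanceMark's actual API.

```python
import time
import statistics

class LatencyProbe:
    """Hypothetical latency probe: timestamp an event when it is sent,
    record the elapsed time when its echo returns."""

    def __init__(self):
        self.samples_ms = []

    def stamp(self) -> float:
        # Monotonic clock avoids jumps from wall-clock (NTP) adjustments.
        return time.monotonic()

    def record(self, sent_at: float) -> None:
        # One-way latency approximated as half the round trip; a real
        # framework would synchronise clocks across peers instead.
        rtt_ms = (time.monotonic() - sent_at) * 1000.0
        self.samples_ms.append(rtt_ms / 2.0)

    def report(self) -> dict:
        ordered = sorted(self.samples_ms)
        return {
            "mean_ms": statistics.mean(ordered),
            "p95_ms": ordered[int(0.95 * len(ordered))],
        }
```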

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Conference Proceeding
Casas, L., Hannah, S., & Mitchell, K. (in press). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality.

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...
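
A plausible reading of "a state machine embedded in user prompts" is that the avatar's emotional state is tracked explicitly and serialized into each instruction sent to the language model. The sketch below assumes that reading; the mood set, transition table, and prompt wording are all hypothetical.

```python
from enum import Enum

class Mood(Enum):
    NEUTRAL = "neutral"
    HAPPY = "happy"
    SAD = "sad"

# Hypothetical transitions: (current mood, detected user sentiment) -> next mood.
TRANSITIONS = {
    (Mood.NEUTRAL, "positive"): Mood.HAPPY,
    (Mood.NEUTRAL, "negative"): Mood.SAD,
    (Mood.HAPPY, "negative"): Mood.NEUTRAL,
    (Mood.SAD, "positive"): Mood.NEUTRAL,
}

def next_mood(current: Mood, sentiment: str) -> Mood:
    # Unlisted pairs keep the avatar in its current emotional state.
    return TRANSITIONS.get((current, sentiment), current)

def build_prompt(user_text: str, mood: Mood) -> str:
    # The state machine is "embedded in the prompt": the current emotion
    # is written into the instruction sent to the language model.
    return (f"You are an avatar currently feeling {mood.value}. "
            f"Respond in character to: {user_text}")
```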

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Conference Proceeding
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. In MM '23: Proceedings of the 31st ACM International Conference on Multimedia (6851-6859). https://doi.org/10.1145/3581783.3613803

Our aim is to improve animation production techniques' efficiency and effectiveness. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...
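
The abstract names blendshape animation combined with machine learning models. Below is a minimal sketch of that combination, using the standard blendshape deformation formula (neutral mesh plus weighted per-expression offsets) and an assumed linear mix between keyframed and model-predicted weights.

```python
import numpy as np

def blend_weights(keyframe_w: np.ndarray,
                  predicted_w: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Mix artist keyframe blendshape weights with weights predicted by a
    learned model; alpha is an illustrative mixing knob, not the paper's."""
    w = alpha * predicted_w + (1.0 - alpha) * keyframe_w
    return np.clip(w, 0.0, 1.0)   # blendshape weights stay in [0, 1]

def apply_blendshapes(neutral: np.ndarray,
                      deltas: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """Standard blendshape deformation: neutral vertices [n_verts, 3] plus a
    weighted sum of per-shape vertex offsets deltas [n_shapes, n_verts, 3]."""
    return neutral + np.tensordot(weights, deltas, axes=1)
```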

Intermediated Reality with an AI 3D Printed Character (2023)
Conference Proceeding
Casas, L., & Mitchell, K. (2023). Intermediated Reality with an AI 3D Printed Character. In SIGGRAPH '23: ACM SIGGRAPH 2023 Real-Time Live!. https://doi.org/10.1145/3588430.3597251

We introduce live character conversational interactions in Intermediated Reality to bring real-world objects to life through Augmented Reality (AR) and Artificial Intelligence (AI). The AI recognizes live speech and generates short character responses, sy...

MienCap: Performance-based Facial Animation with Live Mood Dynamics (2022)
Conference Proceeding
Pan, Y., Zhang, R., Wang, J., Chen, N., Qiu, Y., Ding, Y., & Mitchell, K. (2022). MienCap: Performance-based Facial Animation with Live Mood Dynamics. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (645-646). https://doi.org/10.1109/vrw55335.2022.00178

Our purpose is to improve performance-based animation which can drive believable 3D stylized characters that are truly perceptually valid. By combining traditional blendshape animation techniques with machine learning models, we present a real-time motion ca...

Embodied online dance learning objectives of CAROUSEL+ (2021)
Conference Proceeding
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., …Mac Williams, C. (2021). Embodied online dance learning objectives of CAROUSEL+. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (309-313). https://doi.org/10.1109/VRW52623.2021.00062

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL+ project, which aims to impact how online immersive technologies influence multiuser interaction and communication, with a focus on dancing and learning danc...

FaceMagic: Real-time Facial Detail Effects on Mobile (2020)
Conference Proceeding
Casas, L., Li, Y., & Mitchell, K. (2020). FaceMagic: Real-time Facial Detail Effects on Mobile. In SA '20: SIGGRAPH Asia 2020 Technical Communications (1-4). https://doi.org/10.1145/3410700.3425429

We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system first uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibrat...
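
The "morphable model" step is a classic linear face model, so its math can be shown concretely: a face is the mean shape plus a linear combination of basis vectors, and calibration amounts to a regularised least-squares fit. The function names and the plain Tikhonov regulariser below are assumptions; the paper's self-calibration also exploits semantic segmentation, which this sketch omits.

```python
import numpy as np

def reconstruct_face(mean_shape: np.ndarray,
                     basis: np.ndarray,
                     coeffs: np.ndarray) -> np.ndarray:
    """Morphable-model reconstruction: mean face [n_verts * 3] plus a linear
    combination of shape basis vectors (basis: [n_coeffs, n_verts * 3])."""
    return mean_shape + coeffs @ basis

def fit_coeffs(observed: np.ndarray,
               mean_shape: np.ndarray,
               basis: np.ndarray,
               reg: float = 1e-3) -> np.ndarray:
    """Least-squares fit of shape coefficients to observed vertex positions,
    with Tikhonov regularisation to keep the reconstruction plausible."""
    A = basis.T                                  # [n_verts * 3, n_coeffs]
    b = observed - mean_shape
    lhs = A.T @ A + reg * np.eye(A.shape[1])
    return np.linalg.solve(lhs, A.T @ b)
```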

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2020)
Conference Proceeding
Casas, L., Kosek, M., & Mitchell, K. (2020). Props Alive: A Framework for Augmented Reality Stop Motion Animation. In 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS). https://doi.org/10.1109/SEARIS41720.2017.9183487

Stop motion animation evolved in the early days of cinema with the aim of creating an illusion of movement from static puppets posed manually each frame. Recent stop motion movies have introduced 3D printing processes in order to acquire animations more ac...

Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation (2020)
Conference Proceeding
Pan, Y., & Mitchell, K. (2020). Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (188-195). https://doi.org/10.1109/vrw50115.2020.00041

Immersive technologies have increasingly attracted the attention of the computer animation community in search of more intuitive and effective alternatives to the current sophisticated 2D interfaces. The higher affordances offered by 3D interaction,...

PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (2020)
Conference Proceeding
Pan, Y., & Mitchell, K. (2020). PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (759-760). https://doi.org/10.1109/vrw50115.2020.00230

Augmented reality devices enable new approaches for character animation, e.g., given that character posing is three dimensional in nature it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present Pose...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Conference Proceeding
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019). Photo-Realistic Facial Details Synthesis from Single Image. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (9429-9439). https://doi.org/10.1109/ICCV.2019.00952

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras (2019)
Conference Proceeding
Dos Santos Brito, C. J., & Mitchell, K. (2019). Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras. In VRCAI '19: The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. https://doi.org/10.1145/3359997.3365690

Preparing datasets for use in the training of real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are accessible for regular photography datasets, but introspectively mounted cameras for VR face tracking have in...
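
One way to "recycle" landmarks annotated on regular photographs for HMD-mounted cameras is to reproject them through a transform approximating the headset camera's extreme viewpoint; whether the paper uses a homography, as assumed below, or a richer warp is not stated in the visible abstract.

```python
import numpy as np

def emulate_hmd_view(landmarks: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Reproject 2D landmarks [n, 2] from a regular photo through a 3x3
    homography H approximating an HMD camera viewpoint, so the existing
    annotations can supervise a VR face tracker."""
    pts = np.hstack([landmarks, np.ones((len(landmarks), 1))])  # homogeneous
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]   # perspective divide
```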

JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories (2019)
Conference Proceeding
Kapadia, M., Muniz, C. M., Sohn, S. S., Pan, Y., Schriber, S., Mitchell, K., & Gross, M. (2019). JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories. In Interactive Storytelling: 12th International Conference on Interactive Digital Storytelling, ICIDS 2019, Little Cottonwood Canyon, UT, USA, November 19–22, 2019, Proceedings (250-266). https://doi.org/10.1007/978-3-030-33894-7_26

JUNGLE is an interactive, visual platform for the collaborative manipulation and consumption of nonlinear transmedia stories. Intuitive visual interfaces encourage JUNGLE users to explore vast libraries of story worlds, expand existing stories, or co...

Light Field Synthesis Using Inexpensive Surveillance Camera Systems (2019)
Conference Proceeding
Dumbgen, F., Schroers, C., & Mitchell, K. (2019). Light Field Synthesis Using Inexpensive Surveillance Camera Systems. In 2019 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2019.8804269

We present a light field synthesis technique that achieves accurate reconstruction given a low-cost, wide-baseline camera rig. Our system integrates optical flow with methods for rectification, disparity estimation, and feature extraction, which we t...
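
View synthesis from a wide-baseline rig typically reduces to warping pixels along estimated disparity. The sketch below shows only that final warping step for a rectified pair, ignoring occlusions and hole filling; the paper's full pipeline additionally covers rectification, disparity estimation, feature extraction, and optical flow.

```python
import numpy as np

def synthesize_view(left: np.ndarray, disparity: np.ndarray,
                    t: float) -> np.ndarray:
    """Forward-warp the left image of a rectified pair toward an intermediate
    viewpoint t in [0, 1] using per-pixel horizontal disparity [h, w]."""
    h, w = disparity.shape
    out = np.zeros_like(left)
    ys, xs = np.mgrid[0:h, 0:w]
    target_x = np.clip((xs - t * disparity).astype(int), 0, w - 1)
    out[ys, target_x] = left[ys, xs]   # nearest-pixel splat; last write wins
    return out
```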

Feature-preserving detailed 3D face reconstruction from a single image (2018)
Conference Proceeding
Li, Y., Ma, L., Fan, H., & Mitchell, K. (2018). Feature-preserving detailed 3D face reconstruction from a single image. In CVMP '18 Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production. https://doi.org/10.1145/3278471.3278473

Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expre...

Multi-reality games: an experience across the entire reality-virtuality continuum (2018)
Conference Proceeding
Casas, L., Ciccone, L., Çimen, G., Wiedemann, P., Fauconneau, M., Sumner, R. W., & Mitchell, K. (2018). Multi-reality games: an experience across the entire reality-virtuality continuum. In Proceedings of VRCAI 2018. https://doi.org/10.1145/3284398.3284411

Interactive play can take very different forms, from playing with physical board games to fully digital video games. In recent years, new video game paradigms were introduced to connect real-world objects to virtual game characters. However, even the...

Image Based Proximate Shadow Retargeting (2018)
Conference Proceeding
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2018). Image Based Proximate Shadow Retargeting. In Proceedings of Computer Graphics & Visual Computing (CGVC) 2018. https://doi.org/10.2312/cgvc.20181206

We introduce Shadow Retargeting which maps real shadow appearance to virtual shadows given a corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from un-occluded real...
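
Reading the abstract, one plausible core operation is to express a real shadow as a per-pixel darkening ratio and resample that ratio through the geometric correspondence induced by the deformation. The sketch assumes single-channel float images and precomputed warp index arrays; both are illustrative, not the paper's stated formulation.

```python
import numpy as np

def retarget_shadow(lit: np.ndarray, shadowed: np.ndarray,
                    warp_ys: np.ndarray, warp_xs: np.ndarray) -> np.ndarray:
    """Estimate per-pixel shadow attenuation from shadow-free vs. shadowed
    observations, then reapply it through a warp following the deformation.
    All images are [h, w] floats; warp_ys/warp_xs are integer index maps."""
    ratio = shadowed / (lit + 1e-6)          # how much each pixel is darkened
    return lit * ratio[warp_ys, warp_xs]     # darkening moved with the geometry
```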

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video (2017)
Conference Proceeding
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017). Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. In CVMP '17: Proceedings of the 14th European Conference on Visual Media Production. https://doi.org/10.1145/3150165.3150173

Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU or main memory unless the capture area, resolution or duration is sufficie...
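
A common pattern for this kind of CPU-GPU streaming is a bounded producer-consumer pipeline: a CPU thread decodes the next chunk of lightfield data while the render thread uploads and draws the previous one. The sketch below illustrates that pattern only; the stub functions and queue depth are assumptions, not the paper's method.

```python
import queue
import threading
import time

def decode_chunk(i: int) -> bytes:
    time.sleep(0.005)                 # stand-in for CPU-side decompression
    return bytes(1024)

def upload_and_draw(chunk: bytes) -> None:
    time.sleep(0.002)                 # stand-in for GPU upload and rendering

# Bounded queue caps CPU read-ahead so decoded chunks never exceed the
# memory budget; a size of 2 gives simple double buffering.
chunks: "queue.Queue[bytes]" = queue.Queue(maxsize=2)

def decode_loop(n: int) -> None:
    for i in range(n):
        chunks.put(decode_chunk(i))   # blocks while both buffers are full

def render_loop(n: int) -> None:
    for _ in range(n):
        upload_and_draw(chunks.get()) # blocks until a chunk is decoded

producer = threading.Thread(target=decode_loop, args=(100,))
consumer = threading.Thread(target=render_loop, args=(100,))
producer.start(); consumer.start()
producer.join(); consumer.join()
```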

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Conference Proceeding
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017). IRIDiuM+: deep media storytelling with non-linear light field video. In SIGGRAPH '17 ACM SIGGRAPH 2017 VR Village. https://doi.org/10.1145/3089269.3089277

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab...