Research Repository

Prof Kenny Mitchell's Outputs (107)

Emotional Voice Puppetry (2023)
Journal Article
Pan, Y., Zhang, R., Cheng, S., Tan, S., Ding, Y., Mitchell, K., & Yang, X. (2023). Emotional Voice Puppetry. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2527-2535. https://doi.org/10.1109/tvcg.2023.3247101

The paper presents emotional voice puppetry, an audio-based facial animation approach for portraying characters with vivid emotional changes. Lip motion and the surrounding facial areas are controlled by the content of the audio, and the facial dyn...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Presentation / Conference Contribution
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019, October). Photo-Realistic Facial Details Synthesis from Single Image. Presented at 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (2023). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3145-3157. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium which masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a si...

Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects (2024)
Presentation / Conference Contribution
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2024, September). Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects. Presented at Audio Mostly 2024 - Explorations in Sonic Cultures, Milan, Italy

Audio plays a key role in the sense of immersion and presence in VR, as it correlates with improved enjoyment of content. We share results of a perception study on the ability of listeners to recognise auditory occlusion due to the presence of a human...

Machine learning for animatronic development and optimization (2025)
Patent
Mitchell, K., Castellon, J., Bacher, M., McCrory, M., Stolarz, J., & Ayala, A. (2025). Machine learning for animatronic development and optimization. US12236168B2

Techniques for animatronic design are provided. A plurality of simulated meshes is generated using a physics simulation model, where the plurality of simulated meshes corresponds to a plurality of actuator configurations for an animatronic mechanical...

NeFT-Net: N-window Extended Frequency Transformer for Rhythmic Motion Prediction (2025)
Journal Article
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (in press). NeFT-Net: N-window Extended Frequency Transformer for Rhythmic Motion Prediction. Computers and Graphics.

Advancements in prediction of human motion sequences are critical for enabling online virtual reality (VR) users to dance and move in ways that accurately mirror real-world actions, delivering a more immersive and connected experience. However, laten...

HoloJig: Interactive Spoken Prompt Specified Generative AI Environments (2025)
Journal Article
Casas, L., Hannah, S., & Mitchell, K. (online). HoloJig: Interactive Spoken Prompt Specified Generative AI Environments. IEEE Computer Graphics and Applications. https://doi.org/10.1109/mcg.2025.3553780

HoloJig offers an interactive speech-to-VR experience that generates diverse virtual reality environments in real time from live spoken descriptions. Unlike traditional VR systems that rely on pre-built assets, HoloJig dynamically creates perso...

Audio Occlusion Experiment Data (2025)
Data
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2025). Audio Occlusion Experiment Data. [Data]

This dataset comprises anonymous user-study participant responses on audio occlusion, collected to investigate the presence response to body occlusion when sound sources lie in the direct path between the person and the audio driver speaker.

DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction (2024)
Presentation / Conference Contribution
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2024, September). DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction. Presented at EG UK Computer Graphics & Visual Computing (2024), London, UK

Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world necessitates improvements in the accuracy of predicting human motion sequences, paving the way for an immersive and connected experience. However, the drawba...

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024: 7th IEEE VR International Workshop on Animation in Virtual and Augmented Environments, Orlando, Florida

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Presented at SIGCHI GenAI in UGC Workshop, Honolulu, Hawaii

We address practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respon...

Method and system for visually seamless grafting of volumetric data (2024)
Patent
Mitchell, K. J. (2024). Method and system for visually seamless grafting of volumetric data

Visually seamless grafting of volumetric data. In some implementations, a method includes obtaining volumetric data that represents a first volume including one or more three-dimensional objects. Planar slices of the first volume are determined and f...

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (2024). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2538-2548. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome...

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023, October). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. Presented at 31st ACM International Conference on Multimedia, Ottawa, Canada

Our aim is to improve the efficiency and effectiveness of animation production techniques. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...