Research Repository

All Outputs (107)

NeFT-Net: N-window extended frequency transformer for rhythmic motion prediction (2025)
Journal Article
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2025). NeFT-Net: N-window extended frequency transformer for rhythmic motion prediction. Computers and Graphics, 129, Article 104244. https://doi.org/10.1016/j.cag.2025.104244

Advancements in prediction of human motion sequences are critical for enabling online virtual reality (VR) users to dance and move in ways that accurately mirror real-world actions, delivering a more immersive and connected experience. However, laten...

HoloJig: Interactive Spoken Prompt Specified Generative AI Environments (2025)
Journal Article
Casas, L., Hannah, S., & Mitchell, K. (2025). HoloJig: Interactive Spoken Prompt Specified Generative AI Environments. IEEE Computer Graphics and Applications, 45(2), 69-77. https://doi.org/10.1109/mcg.2025.3553780

HoloJig offers an interactive, speech-to-VR, virtual reality experience that generates diverse environments in real-time based on live spoken descriptions. Unlike traditional VR systems that rely on pre-built assets, HoloJig dynamically creates perso...

Machine learning for animatronic development and optimization (2025)
Patent
Mitchell, K., Castellon, J., Bacher, M., McCrory, M., Stolarz, J., & Ayala, A. (2025). Machine learning for animatronic development and optimization. US12236168B2

Techniques for animatronic design are provided. A plurality of simulated meshes is generated using a physics simulation model, where the plurality of simulated meshes corresponds to a plurality of actuator configurations for an animatronic mechanical...

Audio Occlusion Experiment Data (2025)
Data
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2025). Audio Occlusion Experiment Data. [Data]

This dataset comprises anonymised user-study responses from an audio occlusion experiment, collected to investigate the presence response when a human body occludes sound sources placed in the direct path between the participant and the audio driver speaker.

DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction (2024)
Presentation / Conference Contribution
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2024, September). DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction. Presented at EG UK Computer Graphics & Visual Computing (2024), London, UK

Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world necessitates improvements in the accuracy of predicting human motion sequences, paving the way for an immersive and connected experience. However, the drawba...

Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects (2024)
Presentation / Conference Contribution
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2024, September). Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects. Presented at Audio Mostly 2024 - Explorations in Sonic Cultures, Milan, Italy

Audio plays a key role in the sense of immersion and presence in VR, as it correlates with improved enjoyment of content. We share results of a perception study on the ability of listeners to recognise auditory occlusion due to the presence of a human...

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024 : 7th IEEE VR Internal Workshop on Animation in Virtual and Augmented Environments, Orlando, Florida

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Presented at SIGCHI GenAI in UGC Workshop, Honolulu, Hawaii

We address practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respon...

Method and system for visually seamless grafting of volumetric data (2024)
Patent
Mitchell, K. J. (2024). Method and system for visually seamless grafting of volumetric data

Visually seamless grafting of volumetric data. In some implementations, a method includes obtaining volumetric data that represents a first volume including one or more three-dimensional objects. Planar slices of the first volume are determined and f...

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (2024). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2538-2548. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome...

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing that overcomes the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023, October). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. Presented at 31st ACM International Conference on Multimedia, Ottawa, Canada

Our aim is to improve the efficiency and effectiveness of animation production techniques. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...

Intermediated Reality with an AI 3D Printed Character (2023)
Presentation / Conference Contribution
Casas, L., & Mitchell, K. (2023, August). Intermediated Reality with an AI 3D Printed Character. Presented at ACM SIGGRAPH 2023 Real-Time Live!, Los Angeles, CA, USA

We introduce live character conversational interactions in Intermediated Reality, bringing real-world objects to life through Augmented Reality (AR) and Artificial Intelligence (AI). The AI recognizes live speech and generates short character responses, sy...

Editorial: Games May Host the First Rightful AI Citizens (2023)
Journal Article
Mitchell, K. (2023). Editorial: Games May Host the First Rightful AI Citizens. Games: Research and Practice, 1(2), 1-7. https://doi.org/10.1145/3606834

Games creatively take place in imaginative worlds informed by, but often not limited by, real-world challenges, and this advantageously provides an accelerated environment for innovation, where concepts and ideas can be explored unencumbered by physi...

Games Futures I (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Games Futures I. Games: Research and Practice, 1(1), Article 5. https://doi.org/10.1145/3585394

Games Futures collect short opinion pieces by industry and research veterans and new voices envisioning possible and desirable futures and needs for games and playable media. This inaugural series features eight of over thirty pieces.