Research Repository

Outputs (107)

NeFT-Net: N-window Extended Frequency Transformer for Rhythmic Motion Prediction (2025)
Journal Article
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (in press). NeFT-Net: N-window Extended Frequency Transformer for Rhythmic Motion Prediction. Computers and Graphics.

Advancements in prediction of human motion sequences are critical for enabling online virtual reality (VR) users to dance and move in ways that accurately mirror real-world actions, delivering a more immersive and connected experience. However, laten...

HoloJig: Interactive Spoken Prompt Specified Generative AI Environments (2025)
Journal Article
Casas, L., Hannah, S., & Mitchell, K. (online). HoloJig: Interactive Spoken Prompt Specified Generative AI Environments. IEEE Computer Graphics and Applications. https://doi.org/10.1109/mcg.2025.3553780

HoloJig offers an interactive speech-to-VR experience that generates diverse virtual reality environments in real time from live spoken descriptions. Unlike traditional VR systems that rely on pre-built assets, HoloJig dynamically creates perso...

Machine learning for animatronic development and optimization (2025)
Patent
Mitchell, K., Castellon, J., Bacher, M., McCrory, M., Stolarz, J., & Ayala, A. (2025). Machine learning for animatronic development and optimization. US12236168B2

Techniques for animatronic design are provided. A plurality of simulated meshes is generated using a physics simulation model, where the plurality of simulated meshes corresponds to a plurality of actuator configurations for an animatronic mechanical...

Audio Occlusion Experiment Data (2025)
Data
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2025). Audio Occlusion Experiment Data. [Data]

This dataset comprises anonymised user-study responses from an audio occlusion experiment, investigating the presence response when a human body occludes the direct path between the participant and the audio driver speaker.

DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction (2024)
Presentation / Conference Contribution
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2024, September). DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction. Presented at EG UK Computer Graphics & Visual Computing (2024), London, UK

Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world necessitates improvements in the accuracy of predicting human motion sequences, paving the way for an immersive and connected experience. However, the drawba...

Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects (2024)
Presentation / Conference Contribution
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2024, September). Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects. Presented at Audio Mostly 2024 - Explorations in Sonic Cultures, Milan, Italy

Audio plays a key role in the sense of immersion and presence in VR, as it correlates with improved enjoyment of content. We share results of a perception study on the ability of listeners to recognise auditory occlusion due to the presence of a human...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024: 7th IEEE VR International Workshop on Animation in Virtual and Augmented Environments, Orlando, FL

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...