
Research Repository


Outputs (22)

NeFT-Net: N-window extended frequency transformer for rhythmic motion prediction (2025)
Journal Article
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2025). NeFT-Net: N-window extended frequency transformer for rhythmic motion prediction. Computers and Graphics, 129, Article 104244. https://doi.org/10.1016/j.cag.2025.104244

Advancements in prediction of human motion sequences are critical for enabling online virtual reality (VR) users to dance and move in ways that accurately mirror real-world actions, delivering a more immersive and connected experience. However, laten...
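The abstract is truncated before any method details, but the idea named in the title, pooling N windows of motion history in a frequency-domain (DCT-style) representation, can be illustrated with a toy sketch. The window length, half-window hop, and naive DCT-II below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def dct_ii(x):
    """Naive DCT-II of a 1-D signal (O(n^2), fine for a sketch)."""
    n = len(x)
    k = np.arange(n)
    return np.array([np.sum(x * np.cos(np.pi * (k + 0.5) * m / n))
                     for m in range(n)])

def windowed_frequency_features(signal, window, n_windows):
    """Take the n_windows most recent overlapping windows of history
    (hop of half a window) and concatenate their DCT coefficients."""
    feats = []
    for i in range(n_windows):
        start = len(signal) - window - i * (window // 2)
        feats.append(dct_ii(signal[start:start + window]))
    return np.concatenate(feats)

# Toy rhythmic signal: a joint angle oscillating at a fixed beat.
t = np.linspace(0, 4 * np.pi, 256)
motion = np.sin(2 * t)
feats = windowed_frequency_features(motion, window=32, n_windows=3)
print(feats.shape)  # (96,)
```

A predictor would consume such per-window frequency features rather than raw samples, which is what makes periodic (rhythmic) structure easy to exploit.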

HoloJig: Interactive Spoken Prompt Specified Generative AI Environments (2025)
Journal Article
Casas, L., Hannah, S., & Mitchell, K. (online). HoloJig: Interactive Spoken Prompt Specified Generative AI Environments. IEEE Computer Graphics and Applications. https://doi.org/10.1109/mcg.2025.3553780

HoloJig offers an interactive speech-to-VR virtual reality experience that generates diverse environments in real time based on live spoken descriptions. Unlike traditional VR systems that rely on pre-built assets, HoloJig dynamically creates perso...

Machine learning for animatronic development and optimization (2025)
Patent
Mitchell, K., Castellon, J., Bacher, M., McCrory, M., Stolarz, J., & Ayala, A. (2025). Machine learning for animatronic development and optimization. US12236168B2

Techniques for animatronic design are provided. A plurality of simulated meshes is generated using a physics simulation model, where the plurality of simulated meshes corresponds to a plurality of actuator configurations for an animatronic mechanical...

Audio Occlusion Experiment Data (2025)
Data
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2025). Audio Occlusion Experiment Data. [Data]

This dataset comprises anonymous user-study participant responses from an audio occlusion experiment, investigating the presence response elicited when a body occludes a sound source lying in the direct path between the person and the loudspeaker driver.

DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction (2024)
Presentation / Conference Contribution
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2024, September). DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction. Presented at EG UK Computer Graphics & Visual Computing (2024), London, UK

Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world necessitates improvements in the accuracy of predicting human motion sequences, paving the way for an immersive and connected experience. However, the drawba...

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024: 7th IEEE VR International Workshop on Animation in Virtual and Augmented Environments, Orlando, Florida

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...
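The abstract mentions a state machine embedded in user prompts for decoding emotional nuance. A minimal sketch of such an emotion state machine is below; the states and transition cues are invented for illustration and are not MoodFlow's actual schema:

```python
# Toy emotion state machine: (current state, conversational cue) -> next state.
# States and cues are hypothetical examples only.
TRANSITIONS = {
    ("neutral", "praise"): "happy",
    ("neutral", "insult"): "upset",
    ("happy", "insult"): "upset",
    ("upset", "apology"): "neutral",
}

def next_state(state, cue):
    """Advance the avatar's emotional state; unknown cues leave it unchanged."""
    return TRANSITIONS.get((state, cue), state)

state = "neutral"
for cue in ["praise", "insult", "apology"]:
    state = next_state(state, cue)
print(state)  # "neutral"
```

Embedding such a table in the prompt lets the language model report which transition fired, so the avatar's expression rig can be driven deterministically from the conversation.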

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...
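Measuring end-to-end latency of the kind DanceMark targets can be sketched with a minimal telemetry probe that timestamps outgoing motion packets and records round-trip samples. The class below is an illustrative assumption and does not model DanceGraph's actual wire format or clock synchronisation:

```python
import time
from collections import deque

class LatencyProbe:
    """Stamp outgoing packets by sequence number and record latency
    when they are acknowledged; keep a sliding window of samples."""

    def __init__(self, window=100):
        self.pending = {}                  # seq -> monotonic send time
        self.samples = deque(maxlen=window)

    def on_send(self, seq):
        self.pending[seq] = time.monotonic()

    def on_ack(self, seq):
        sent = self.pending.pop(seq, None)
        if sent is not None:
            self.samples.append(time.monotonic() - sent)

    def p95_ms(self):
        """95th-percentile latency over the window, in milliseconds."""
        if not self.samples:
            return None
        s = sorted(self.samples)
        return 1000.0 * s[int(0.95 * (len(s) - 1))]

probe = LatencyProbe()
probe.on_send(seq=1)
probe.on_ack(seq=1)
print(probe.p95_ms() >= 0.0)  # True
```

A monotonic clock is used deliberately: wall-clock time can jump under NTP adjustment, which would corrupt latency samples.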

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Presented at SIGCHI GenAI in UGC Workshop, Honolulu, Hawaii

We examine practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respon...

Method and system for visually seamless grafting of volumetric data (2024)
Patent
Mitchell, K. J. (2024). Method and system for visually seamless grafting of volumetric data

Visually seamless grafting of volumetric data. In some implementations, a method includes obtaining volumetric data that represents a first volume including one or more three-dimensional objects. Planar slices of the first volume are determined and f...

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (2024). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2538-2548. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome...

Dense reconstruction for narrow baseline motion observations (2022)
Patent
Mitchell, K., Dumbgen, F., & Liu, S. (2022). Dense reconstruction for narrow baseline motion observations. USPTO

Techniques for constructing a three-dimensional model of facial geometry are disclosed. A first three-dimensional model of an object is generated, based on a plurality of captured images of the object. A projected three-dimensional model of the objec...

Real-time feature preserving rendering of visual effects on an image of a face (2022)
Patent
Mitchell, K. J., Cambra, L. C., & Li, Y. (2022). Real-time feature preserving rendering of visual effects on an image of a face

Embodiments provide techniques for rendering augmented reality effects on an image of a user's face in real time. The method generally includes receiving an image of a face of a user. A global facial depth map and a luminance map are generated based...
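The truncated abstract mentions a global facial depth map and a luminance map. One common way such maps can drive a lighting effect, estimating normals from depth gradients and modulating luminance with a Lambertian term, is sketched below; this is an illustrative assumption, not the patent's claimed method:

```python
import numpy as np

def relight_face(luminance, depth, light_dir=(0.5, 0.5, 0.7)):
    """Toy relighting: estimate surface normals from depth gradients,
    then scale the luminance map by a clamped Lambertian (N . L) term."""
    gy, gx = np.gradient(depth.astype(float))
    normals = np.dstack([-gx, -gy, np.ones_like(depth, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shade = np.clip(normals @ light, 0.0, 1.0)
    return np.clip(luminance * shade, 0.0, 1.0)

# Synthetic cone-shaped depth map standing in for a face scan.
depth = np.fromfunction(lambda y, x: np.hypot(x - 32, y - 32), (64, 64))
lum = np.full((64, 64), 0.8)
out = relight_face(lum, depth)
print(out.shape)  # (64, 64)
```

Keeping luminance separate from geometry is what preserves facial features (pores, wrinkles) while the effect changes only the shading.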

Systems and Methods for Illuminating Physical Space with Shadows of Virtual Objects (2021)
Patent
Velinov, Z. V., Mitchell, K. J., & Hager IV, J. G. (2021). Systems and Methods for Illuminating Physical Space with Shadows of Virtual Objects

A system can be used in conjunction with a display configured to display an augmented reality (AR) environment including a virtual object placed in a real environment, the virtual object having a virtual location in the AR environment. The system inc...

Introducing real-time lighting effects to illuminate real-world physical objects in see-through augmented reality displays (2021)
Patent
Yueng, J. A., Mitchell, K. J., Panec, T. M., Baumbach, E. H., & Drake, C. D. (2021). Introducing real-time lighting effects to illuminate real-world physical objects in see-through augmented reality displays

Embodiments provide for the rendering of illumination effects on real-world objects in augmented reality systems. An example method generally includes overlaying a shader on the augmented reality display. The shader generally corresponds to a three-d...