Prof Jessie Kennedy J.Kennedy@napier.ac.uk
Emeritus Professor
Dr Iain McGregor I.McGregor@napier.ac.uk
Associate Professor
Prof Kenny Mitchell K.Mitchell2@napier.ac.uk
Professor
Synchronous Patterned Motion Avatars (2024)
Patent
Mitchell, K., Sinclair, D., Koniaris, B., & Ademola, A. (2024). Synchronous Patterned Motion Avatars
DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction (2024)
Presentation / Conference Contribution
Ademola, A., Sinclair, D., Koniaris, B., Hannah, S., & Mitchell, K. (2024, September). DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction. Presented at EG UK Computer Graphics & Visual Computing (2024), London, UK
Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world necessitates improvements in the accuracy of predicting human motion sequences, paving the way for an immersive and connected experience. However, the drawba...
Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects (2024)
Presentation / Conference Contribution
McSeveney, S., Tamariz, M., McGregor, I., Koniaris, B., & Mitchell, K. (2024, September). Auditory Occlusion Based on the Human Body in the Direct Sound Path: Measured and Perceivable Effects. Presented at Audio Mostly 2024 - Explorations in Sonic Cultures, Milan, Italy
Audio plays a key role in the sense of immersion and presence in VR, as it correlates with improved enjoyment of content. We share results of a perception study on the ability of listeners to recognise auditory occlusion due to the presence of a human...
MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024: 7th IEEE VR International Workshop on Animation in Virtual and Augmented Environments, Orlando, Florida
MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...
DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL
DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...
Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Presented at SIGCHI GenAI in UGC Workshop, Honolulu, Hawaii
We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respon...
WAVE: Anticipatory Movement Visualization for VR Dancing (2024)
Presentation / Conference Contribution
Laattala, M., Piitulainen, R., Ady, N. M., Tamariz, M., & Hämäläinen, P. (2024, May). WAVE: Anticipatory Movement Visualization for VR Dancing. Presented at CHI '24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA
Dance games are one of the most popular game genres in Virtual Reality (VR), and active dance communities have emerged on social VR platforms such as VR Chat. However, effective instruction of dancing in VR or through other computerized means remains...
Augmented Reality Methods and Systems (2024)
Patent
Cambra, L. C., & Mitchell, K. (2024). Augmented Reality Methods and Systems. US11908068
Methods and systems employing augmented reality techniques via real-world objects for various purposes. Computer implemented methods are provided for animated augmentation of real-time video of static real-world objects. A computing device receives f...
DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus
DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...
Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023, October). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. Presented at 31st ACM International Conference on Multimedia, Ottawa, Canada
Our aim is to improve the efficiency and effectiveness of animation production techniques. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...
About Edinburgh Napier Research Repository
Administrator e-mail: repository@napier.ac.uk