Research Repository

All Outputs (87)

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (in press). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome...

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Conference Proceeding
Koniaris, B., Sinclair, D., & Mitchell, K. (in press). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences.

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...
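As a rough illustration of the kind of end-to-end latency telemetry the abstract describes, the Python sketch below timestamps incoming pose messages and reports per-sender mean, 95th-percentile, and jitter statistics; the class and field names are hypothetical and not DanceMark's actual API.

import time
import statistics
from dataclasses import dataclass, field

@dataclass
class PoseSample:
    sender_id: str
    seq: int
    sent_at: float          # sender clock, seconds (assumed roughly synchronised)
    joints: list            # flattened joint rotations/positions

@dataclass
class LatencyTelemetry:
    """Collects per-sender end-to-end latency observations on the receiver."""
    window: int = 200
    samples: dict = field(default_factory=dict)

    def record(self, msg: PoseSample) -> float:
        latency_ms = (time.time() - msg.sent_at) * 1000.0
        buf = self.samples.setdefault(msg.sender_id, [])
        buf.append(latency_ms)
        if len(buf) > self.window:
            buf.pop(0)
        return latency_ms

    def report(self, sender_id: str) -> dict:
        buf = self.samples.get(sender_id, [])
        if not buf:
            return {}
        return {
            "mean_ms": statistics.fmean(buf),
            "p95_ms": sorted(buf)[int(0.95 * (len(buf) - 1))],
            "jitter_ms": statistics.pstdev(buf),
        }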

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Conference Proceeding
Casas, L., Hannah, S., & Mitchell, K. (in press). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality.

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a...
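The following minimal Python sketch illustrates the general idea of a state machine embedded in user prompts; the emotion set, transition cues, and prompt wording are assumptions for illustration, not MoodFlow's published design.

EMOTIONS = {"neutral", "happy", "sad", "angry"}

# Simple keyword-triggered transitions; a real system would use a classifier.
TRANSITIONS = {
    ("neutral", "praise"): "happy",
    ("neutral", "insult"): "angry",
    ("happy", "bad_news"): "sad",
    ("angry", "apology"): "neutral",
    ("sad", "praise"): "neutral",
}

def next_state(state: str, cue: str) -> str:
    return TRANSITIONS.get((state, cue), state)

def build_prompt(state: str, user_text: str) -> str:
    # The avatar's emotional state is encoded directly in the prompt so the
    # language model both answers and stays in character.
    return (
        f"You are an avatar whose current emotion is '{state}'. "
        f"Reply in one or two sentences, and end with a tag of the form "
        f"[emotion: <one of {sorted(EMOTIONS)}>] describing your updated state.\n"
        f"User: {user_text}"
    )

state = "neutral"
state = next_state(state, "praise")        # e.g. the user compliments the avatar
print(build_prompt(state, "That was a lovely dance!"))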

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Conference Proceeding
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. In MM '23: Proceedings of the 31st ACM International Conference on Multimedia (6851-6859). https://doi.org/10.1145/3581783.3613803

Our aim is to improve animation production techniques' efficiency and effectiveness. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a...

Intermediated Reality with an AI 3D Printed Character (2023)
Conference Proceeding
Casas, L., & Mitchell, K. (2023). Intermediated Reality with an AI 3D Printed Character. In SIGGRAPH '23: ACM SIGGRAPH 2023 Real-Time Live!. https://doi.org/10.1145/3588430.3597251

We introduce live character conversational interactions in Intermediated Reality to bring real-world objects to life in Augmented Reality (AR) using Artificial Intelligence (AI). The AI recognizes live speech and generates short character responses, sy...

Editorial: Games May Host the First Rightful AI Citizens (2023)
Journal Article
Mitchell, K. (2023). Editorial: Games May Host the First Rightful AI Citizens. Games: Research and Practice, 1(2), 1-7. https://doi.org/10.1145/3606834

Games creatively take place in imaginative worlds informed by, but often not limited by, real-world challenges, and this advantageously provides an accelerated environment for innovation, where concepts and ideas can be explored unencumbered by physi...

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Paper presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...
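As a hedged sketch of what bandwidth-efficient pose sharing can look like, the Python below delta-encodes only the joints that changed since the last frame and quantises them to 16-bit integers; the wire layout, scale, and threshold are illustrative assumptions rather than DanceGraph's actual protocol.

import struct

SCALE = 32767.0  # map normalised [-1, 1] joint values to int16

def encode_pose(prev: list, curr: list, eps: float = 1e-3) -> bytes:
    """Delta-encode a flat list of normalised joint values as (index, int16) pairs."""
    changed = [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if abs(v - p) > eps]
    payload = struct.pack("<H", len(changed))
    for i, v in changed:
        payload += struct.pack("<Hh", i, int(max(-1.0, min(1.0, v)) * SCALE))
    return payload

def decode_pose(prev: list, payload: bytes) -> list:
    out = list(prev)
    (count,) = struct.unpack_from("<H", payload, 0)
    offset = 2
    for _ in range(count):
        i, q = struct.unpack_from("<Hh", payload, offset)
        out[i] = q / SCALE
        offset += 4
    return out

# Example: only two of four joints moved, so only two pairs go on the wire.
packet = encode_pose([0.0, 0.5, -0.2, 0.1], [0.0, 0.6, -0.2, 0.3])
print(len(packet), decode_pose([0.0, 0.5, -0.2, 0.1], packet))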

Games Futures I (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Games Futures I. Games: Research and Practice, 1(1), Article 5. https://doi.org/10.1145/3585394

Games Futures collect short opinion pieces by industry and research veterans and new voices envisioning possible and desirable futures and needs for games and playable media. This inaugural series features eight of over thirty pieces.

Inaugural Editorial: A Lighthouse for Games and Playable Media (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Inaugural Editorial: A Lighthouse for Games and Playable Media. Games: Research and Practice, 1(1), Article 1. https://doi.org/10.1145/3585393

In games and playable media, almost nothing is as it was at the turn of the millennium. Digital and analog games have exploded in reach, diversity, and relevance. Digital platforms and globalisation have shifted and fragmented their centres of gravit...

Emotional Voice Puppetry (2023)
Journal Article
Pan, Y., Zhang, R., Cheng, S., Tan, S., Ding, Y., Mitchell, K., & Yang, X. (2023). Emotional Voice Puppetry. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2527-2535. https://doi.org/10.1109/tvcg.2023.3247101

The paper presents emotional voice puppetry, an audio-based facial animation approach to portray characters with vivid emotional changes. The lip motion and the surrounding facial areas are controlled by the contents of the audio, and the facial dyn...
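A minimal sketch of the control split the abstract suggests, where audio content drives the mouth region while an emotion signal shapes the remaining face; the blendshape names and masking scheme are assumptions for illustration, not the paper's model.

import numpy as np

BLENDSHAPES = ["jawOpen", "mouthPucker", "mouthSmile", "browUp", "browDown", "eyeSquint"]
MOUTH = np.array([1, 1, 1, 0, 0, 0], dtype=float)   # mask of speech-driven shapes

def combine(content_weights, emotion_weights):
    """Mouth region follows audio content; remaining shapes follow emotion dynamics."""
    content_weights = np.asarray(content_weights, dtype=float)
    emotion_weights = np.asarray(emotion_weights, dtype=float)
    return MOUTH * content_weights + (1 - MOUTH) * emotion_weights

frame = combine(content_weights=[0.7, 0.1, 0.2, 0.0, 0.0, 0.0],
                emotion_weights=[0.0, 0.0, 0.6, 0.5, 0.0, 0.1])
print(dict(zip(BLENDSHAPES, frame.round(2))))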

Generating real-time detailed ground visualisations from sparse aerial point clouds (2022)
Presentation / Conference
Murray, A., Mitchell, S., Bradley, A., Waite, E., Ross, C., Jamrozy, J., & Mitchell, K. (2022, December). Generating real-time detailed ground visualisations from sparse aerial point clouds. Paper presented at CVMP 2022: The 19th ACM SIGGRAPH European Conference on Visual Media Production, London

We present an informed kind of atomic rendering primitive, which forms a local adjacency aware classified particle basis decoupled from texture and topology. Suited to visual synthesis of detailed landscapes inferred from sparse unevenly distributed...

MienCap: Performance-based Facial Animation with Live Mood Dynamics (2022)
Conference Proceeding
Pan, Y., Zhang, R., Wang, J., Chen, N., Qiu, Y., Ding, Y., & Mitchell, K. (2022). MienCap: Performance-based Facial Animation with Live Mood Dynamics. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (645-646). https://doi.org/10.1109/vrw55335.2022.00178

Our purpose is to improve performance-based animation which can drive believable 3D stylized characters that are truly perceptual. By combining traditional blendshape animation techniques with machine learning models, we present a real time motion ca...

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (2023). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3145-3157. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium which masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a si...
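For context, the sketch below implements the conventional ray-marched single-scattering estimate that the abstract contrasts against, using exponential transmittance in a homogeneous medium and an occlusion mask that produces the crepuscular-ray pattern; it is not the paper's analytic whole-volume method, and all constants are illustrative.

import math

def single_scatter(ray_len, sigma_s, sigma_a, sun_radiance, visible,
                   sun_depth=1.0, n_steps=64):
    """Radiance single-scattered toward the camera along one ray in a homogeneous medium.

    visible(t) -> bool reports whether the collimated light reaches the point at
    distance t; this occlusion mask is what produces the crepuscular-ray pattern.
    sun_depth is an assumed constant distance the light travels inside the medium.
    """
    sigma_t = sigma_s + sigma_a
    dt = ray_len / n_steps
    phase = 1.0 / (4.0 * math.pi)            # isotropic phase function
    T_sun = math.exp(-sigma_t * sun_depth)   # attenuation from the boundary to the sample
    L = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * dt
        if not visible(t):
            continue
        T_cam = math.exp(-sigma_t * t)       # transmittance back to the camera
        L += T_cam * T_sun * sigma_s * phase * sun_radiance * dt
    return L

# Example: a blocker occludes the light over the first quarter of the ray.
print(single_scatter(10.0, 0.2, 0.05, 5.0, visible=lambda t: t > 2.5))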

Embodied online dance learning objectives of CAROUSEL + (2021)
Conference Proceeding
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., …Mac Williams, C. (2021). Embodied online dance learning objectives of CAROUSEL +. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (309-313). https://doi.org/10.1109/VRW52623.2021.00062

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL+ project, which aims to impact how online immersive technologies influence multiuser interaction and communication with a focus on dancing and learning danc...

FaceMagic: Real-time Facial Detail Effects on Mobile (2020)
Conference Proceeding
Casas, L., Li, Y., & Mitchell, K. (2020). FaceMagic: Real-time Facial Detail Effects on Mobile. In SA '20: SIGGRAPH Asia 2020 Technical Communications (1-4). https://doi.org/10.1145/3410700.3425429

We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system firstly uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibrat...

Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis (2020)
Journal Article
Pan, Y., & Mitchell, K. (2021). Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis. International Journal of Human-Computer Studies, 147, Article 102563. https://doi.org/10.1016/j.ijhcs.2020.102563

Anamorphosis for 2D displays can provide viewer centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for othe...
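Viewer-centric perspective of this kind is typically obtained with an off-axis (generalised perspective) projection built from the tracked eye position and the physical screen corners; the numpy sketch below follows that standard formulation and is an assumption about implementation detail, not the paper's code.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def off_axis_projection(pa, pb, pc, pe, near, far):
    """pa, pb, pc: screen lower-left, lower-right, upper-left corners (world space).
    pe: tracked viewer eye position. Returns a 4x4 OpenGL-style projection*view matrix."""
    vr = normalize(pb - pa)                 # screen right axis
    vu = normalize(pc - pa)                 # screen up axis
    vn = normalize(np.cross(vr, vu))        # screen normal, towards the viewer
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                     # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    M = np.eye(4); M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn   # screen basis
    T = np.eye(4); T[:3, 3] = -pe                              # move eye to origin
    return P @ M @ T

# Example: a 1.6 m x 0.9 m wall display with the viewer 2 m away, slightly left.
pa, pb, pc = np.array([-0.8, 0.0, 0.0]), np.array([0.8, 0.0, 0.0]), np.array([-0.8, 0.9, 0.0])
print(off_axis_projection(pa, pb, pc, pe=np.array([-0.3, 0.5, 2.0]), near=0.1, far=100.0))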

Active Learning for Interactive Audio-Animatronic Performance Design (2020)
Journal Article
Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. The Journal of Computer Graphics Techniques, 9(3), 1-19

We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deform...
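In the spirit of the abstract, the sketch below runs an uncertainty-driven active-learning loop in which an expensive offline simulation is queried only where an ensemble of cheap learned surrogates disagrees most; simulate_skin() is a stand-in, scikit-learn is an assumed dependency, and the ensemble-disagreement criterion is an assumption rather than the paper's exact acquisition strategy.

import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_skin(x):
    """Placeholder for the offline quasi-static simulation (actuation -> deformation)."""
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(0)
pool = rng.uniform(-1, 1, size=(2000, 2))          # candidate actuator settings
X = rng.uniform(-1, 1, size=(16, 2))               # small initial design
y = simulate_skin(X)

for round_ in range(5):
    ensemble = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                             random_state=s).fit(X, y) for s in range(5)]
    preds = np.stack([m.predict(pool) for m in ensemble])   # (models, candidates)
    uncertainty = preds.std(axis=0)
    picks = np.argsort(uncertainty)[-8:]            # most-disagreed-upon candidates
    X = np.vstack([X, pool[picks]])
    y = np.concatenate([y, simulate_skin(pool[picks])])
    pool = np.delete(pool, picks, axis=0)
    print(f"round {round_}: {len(X)} simulated samples, "
          f"max ensemble std {uncertainty.max():.3f}")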

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2020)
Conference Proceeding
Casas, L., Kosek, M., & Mitchell, K. (2020). Props Alive: A Framework for Augmented Reality Stop Motion Animation. In 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS). https://doi.org/10.1109/SEARIS41720.2017.9183487

Stop motion animation evolved in the early days of cinema with the aim to create an illusion of movement with static puppets posed manually each frame. Current stop motion movies introduced 3D printing processes in order to acquire animations more ac...

PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (2020)
Conference Proceeding
Pan, Y., & Mitchell, K. (2020). PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (759-760). https://doi.org/10.1109/vrw50115.2020.00230

Augmented reality devices enable new approaches for character animation, e.g., given that character posing is three dimensional in nature it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present Pose...