Research Repository

All Outputs (29)

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (2024). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2538-2548. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome...

Editorial: Games May Host the First Rightful AI Citizens (2023)
Journal Article
Mitchell, K. (2023). Editorial: Games May Host the First Rightful AI Citizens. Games: Research and Practice, 1(2), 1-7. https://doi.org/10.1145/3606834

GAMES creatively take place in imaginative worlds informed by, but often not limited by, real-world challenges, and this advantageously provides an accelerated environment for innovation, where concepts and ideas can be explored unencumbered by physi...

Games Futures I (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Games Futures I. Games: Research and Practice, 1(1), Article 5. https://doi.org/10.1145/3585394

Games Futures collect short opinion pieces by industry and research veterans and new voices envisioning possible and desirable futures and needs for games and playable media. This inaugural series features eight of over thirty pieces.

Inaugural Editorial: A Lighthouse for Games and Playable Media (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Inaugural Editorial: A Lighthouse for Games and Playable Media. Games: Research and Practice, 1(1), Article 1. https://doi.org/10.1145/3585393

In games and playable media, almost nothing is as it was at the turn of the millennium. Digital and analog games have exploded in reach, diversity, and relevance. Digital platforms and globalisation have shifted and fragmented their centres of gravit...

Emotional Voice Puppetry (2023)
Journal Article
Pan, Y., Zhang, R., Cheng, S., Tan, S., Ding, Y., Mitchell, K., & Yang, X. (2023). Emotional Voice Puppetry. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2527-2535. https://doi.org/10.1109/tvcg.2023.3247101

The paper presents emotional voice puppetry, an audio-based facial animation approach for portraying characters with vivid emotional changes. Lip motion and the surrounding facial areas are controlled by the content of the audio, and the facial dyn...
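A minimal sketch of the general audio-driven setup described above (not the paper's architecture; the layer sizes, blendshape counts and emotion labels are assumptions): one small network maps a window of audio features to mouth-region blendshape weights, while a separate emotion embedding drives the remaining facial regions.

```python
# Hypothetical sketch, not the paper's model: audio content drives the mouth,
# an emotion code drives the rest of the face.
import torch
import torch.nn as nn

class AudioToFaceSketch(nn.Module):
    def __init__(self, n_audio_feats=26, window=16, n_mouth=20, n_other=31, n_emotions=6):
        super().__init__()
        self.audio_net = nn.Sequential(             # audio content -> lip/mouth weights
            nn.Flatten(),
            nn.Linear(n_audio_feats * window, 256), nn.ReLU(),
            nn.Linear(256, n_mouth), nn.Sigmoid(),
        )
        self.emotion_net = nn.Sequential(           # emotion label -> remaining weights
            nn.Embedding(n_emotions, 32),
            nn.Linear(32, n_other), nn.Sigmoid(),
        )

    def forward(self, audio_window, emotion_id):
        mouth = self.audio_net(audio_window)        # (B, n_mouth)
        other = self.emotion_net(emotion_id)        # (B, n_other)
        return torch.cat([mouth, other], dim=-1)    # full blendshape weight vector

model = AudioToFaceSketch()
audio = torch.randn(1, 16, 26)                      # e.g. 16 frames of MFCC-like features
weights = model(audio, torch.tensor([2]))           # hypothetical emotion index
```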

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (2023). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3145-3157. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium which masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a si...
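The single-scattering estimate mentioned above can be sketched as a ray march through a homogeneous medium; the visibility callback and the light-path attenuation used here are simplifying assumptions, not the paper's whole-volume method.

```python
# Sketch of single-scattering accumulation along a camera ray in a homogeneous medium.
import numpy as np

def single_scatter(ray_origin, ray_dir, t_max, sigma_t, sigma_s, visible, n_steps=64):
    """visible(p) -> 0/1 shadow test toward the light; 'visible' is a hypothetical hook."""
    dt = t_max / n_steps
    radiance = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * dt
        p = ray_origin + t * ray_dir
        T_cam = np.exp(-sigma_t * t)              # attenuation camera -> sample
        T_light = np.exp(-sigma_t * (t_max - t))  # crude stand-in for sample -> light attenuation
        radiance += T_cam * visible(p) * sigma_s * T_light * dt
    return radiance

# usage: a spherical blocker carving a shadowed gap out of the light shaft
blocker_c, blocker_r = np.array([0.0, 0.0, 2.0]), 0.5
vis = lambda p: 0.0 if np.linalg.norm(p - blocker_c) < blocker_r else 1.0
L = single_scatter(np.zeros(3), np.array([0.0, 0.0, 1.0]), 5.0,
                   sigma_t=0.3, sigma_s=0.2, visible=vis)
```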

Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis (2020)
Journal Article
Pan, Y., & Mitchell, K. (2021). Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis. International Journal of Human-Computer Studies, 147, Article 102563. https://doi.org/10.1016/j.ijhcs.2020.102563

Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for othe...
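Viewer-centric anamorphosis of this kind is typically built on an off-axis ("generalized perspective") projection recomputed each frame from the tracked eye position; a standard construction is sketched below, with the screen corners and eye position chosen as example inputs rather than taken from the paper's setup.

```python
# Standard off-axis projection from tracked eye position and screen corners.
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near, far):
    """pa, pb, pc: lower-left, lower-right, upper-left screen corners (world space)."""
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal (toward viewer)

    va, vb, vc = pa - eye, pb - eye, pc - eye        # corners relative to the eye
    d = -np.dot(va, vn)                              # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    M = np.eye(4)                                    # rotate world into screen space,
    M[:3, :3] = np.vstack([vr, vu, vn])              # then translate the eye to the origin
    T = np.eye(4); T[:3, 3] = -eye
    return P @ M @ T

# usage: a 0.6 m x 0.34 m display, viewer tracked 0.5 m away and off to one side
proj = off_axis_projection(np.array([-0.3, -0.17, 0.0]), np.array([0.3, -0.17, 0.0]),
                           np.array([-0.3, 0.17, 0.0]), eye=np.array([0.2, 0.05, 0.5]),
                           near=0.1, far=100.0)
```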

Active Learning for Interactive Audio-Animatronic Performance Design (2020)
Journal Article
Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. The Journal of Computer Graphics Techniques, 9(3), 1-19.

We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deform...
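The active-learning idea can be illustrated with a generic uncertainty-driven loop; the Gaussian-process surrogate and the toy simulator below are stand-ins (the paper trains a neural model against an offline skin simulation).

```python
# Generic uncertainty-driven active learning: query the expensive simulator
# where the surrogate is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):                            # hypothetical stand-in for the
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])    # quasi-static reference simulation

rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=(2000, 2))         # actuator-parameter candidates
X = rng.uniform(-1, 1, size=(8, 2))                     # small initial design
y = expensive_simulation(X)

gp = GaussianProcessRegressor(normalize_y=True)
for _ in range(20):                                     # active-learning rounds
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)][None, :]         # most uncertain candidate
    X = np.vstack([X, x_new])
    y = np.concatenate([y, expensive_simulation(x_new)])
```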

Intermediated Reality: A Framework for Communication Through Tele-Puppetry (2019)
Journal Article
Casas, L., & Mitchell, K. (2019). Intermediated Reality: A Framework for Communication Through Tele-Puppetry. Frontiers in Robotics and AI, 6. https://doi.org/10.3389/frobt.2019.00060

We introduce Intermediated Reality (IR), a framework for intermediated communication enabling collaboration through remote possession of entities (e.g., toys) that come to life in mobile Mediated Reality (MR). As part of a two-way conversation, each...

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow retargeting maps the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow...

GPU-accelerated depth codec for real-time, high-quality light field reconstruction (2018)
Journal Article
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2018). GPU-accelerated depth codec for real-time, high-quality light field reconstruction. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1), 1-15. https://doi.org/10.1145/3

Pre-calculated depth information is essential for efficient light field video rendering, due to the prohibitive cost of depth estimation from color when real-time performance is desired. Standard state-of-the-art video codecs fail to satisfy such per...
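As a rough illustration of block-based depth compression (not the paper's codec), one can fit a plane to each 8x8 depth block and store only low-precision plane coefficients plus coarsely quantized residuals:

```python
# Toy block-based depth codec: per-block plane fit plus quantized residuals.
import numpy as np

def encode_block(block):
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)     # plane: a*x + b*y + c
    coeffs = coeffs.astype(np.float16)                             # low-precision plane
    residual = block - (A @ coeffs.astype(np.float64)).reshape(h, w)
    return coeffs, np.round(residual * 255).astype(np.int8)        # coarse residuals

def decode_block(coeffs, residual):
    h, w = residual.shape
    ys, xs = np.mgrid[0:h, 0:w]
    plane = coeffs[0] * xs + coeffs[1] * ys + coeffs[2]
    return plane + residual.astype(np.float32) / 255.0

depth = np.clip(np.random.default_rng(1).normal(0.5, 0.02, (8, 8)), 0.0, 1.0)
c, r = encode_block(depth)
err = np.abs(decode_block(c, r) - depth).max()   # bounded by the residual quantization step
```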

From Faces to Outdoor Light Probes (2018)
Journal Article
Calian, D. A., Lalonde, J., Gotardo, P., Simon, T., Matthews, I., & Mitchell, K. (2018). From Faces to Outdoor Light Probes. Computer Graphics Forum, 37(2), 51-61. https://doi.org/10.1111/cgf.13341

Image‐based lighting has allowed the creation of photo‐realistic computer‐generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially to the average digital photography en...
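A heavily simplified stand-in for illumination estimation of this kind is to solve for low-order spherical-harmonic lighting coefficients from observed shading, given normals and albedo; the paper itself recovers full outdoor light probes from faces, which the sketch below does not attempt.

```python
# Least-squares recovery of 9 spherical-harmonic lighting coefficients from shading.
import numpy as np

def sh_basis(n):
    """Real SH basis, bands 0-2, for unit normals n of shape (N, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.column_stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z**2 - 1),
        1.092548 * x * z, 0.546274 * (x**2 - y**2),
    ])

def estimate_lighting(intensity, normals, albedo):
    """Least-squares fit of: intensity ≈ albedo * (SH(n) @ L)."""
    A = albedo[:, None] * sh_basis(normals)
    L, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return L                                            # 9 lighting coefficients

# usage with synthetic data
rng = np.random.default_rng(0)
n = rng.normal(size=(500, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
albedo = rng.uniform(0.3, 0.7, 500)
L_true = rng.normal(size=9)
obs = albedo * (sh_basis(n) @ L_true) + 0.01 * rng.normal(size=500)
L_est = estimate_lighting(obs, n, albedo)               # should be close to L_true
```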

Empowerment and embodiment for collaborative mixed reality systems (2018)
Journal Article
Pan, Y., Sinclair, D., & Mitchell, K. (2018). Empowerment and embodiment for collaborative mixed reality systems. Computer Animation and Virtual Worlds, 29(3-4). https://doi.org/10.1002/cav.1838

We present several mixed‐reality‐based remote collaboration settings using consumer head‐mounted displays. We investigated how two people are able to work together in these settings. We found that the person in the AR system will be regarded as th...

Compressed Animated Light Fields with Real-time View-dependent Reconstruction (2018)
Journal Article
Koniaris, C., Kosek, M., Sinclair, D., & Mitchell, K. (2019). Compressed Animated Light Fields with Real-time View-dependent Reconstruction. IEEE Transactions on Visualization and Computer Graphics, 25(4), 1666-1680. https://doi.org/10.1109/tvcg.2018.2818

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Real-Time Multi-View Facial Capture with Synthetic Training (2017)
Journal Article
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor sp...

Noise Reduction on G-Buffers for Monte Carlo Filtering (2017)
Journal Article
Moon, B., Iglesias-Guitian, J. A., McDonagh, S., & Mitchell, K. (2017). Noise Reduction on G-Buffers for Monte Carlo Filtering. Computer Graphics Forum, 36(8), 600-612. https://doi.org/10.1111/cgf.13

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances...
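The idea of pre-filtering a geometric buffer guided by world positions and their variances can be sketched as a variance-aware cross-bilateral filter; the kernel and bandwidth choices below are assumptions, not the paper's estimator.

```python
# Variance-aware cross-bilateral smoothing of a noisy normal buffer.
import numpy as np

def prefilter_normals(normals, positions, pos_var, radius=3, sigma_p=0.05):
    h, w, _ = normals.shape
    out = np.zeros_like(normals)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dp = positions[y0:y1, x0:x1] - positions[y, x]        # position differences
            # variance-aware bandwidth: noisier positions get a wider kernel
            band = sigma_p**2 + pos_var[y0:y1, x0:x1] + pos_var[y, x]
            wgt = np.exp(-np.sum(dp * dp, axis=-1) / band)
            out[y, x] = np.sum(wgt[..., None] * normals[y0:y1, x0:x1], axis=(0, 1))
            out[y, x] /= np.maximum(np.linalg.norm(out[y, x]), 1e-8)  # renormalize
    return out

# usage on a tiny synthetic buffer
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
pos = np.dstack([xx, yy, np.zeros((32, 32))])                     # a flat patch of positions
nrm = np.tile([0.0, 0.0, 1.0], (32, 32, 1)) + 0.2 * rng.normal(size=(32, 32, 3))
filtered = prefilter_normals(nrm, pos, pos_var=np.full((32, 32), 1e-4))
```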

Pixel history linear models for real-time temporal filtering (2016)
Journal Article
Iglesias-Guitian, J. A., Moon, B., Koniaris, C., Smolikowski, E., & Mitchell, K. (2016). Pixel history linear models for real-time temporal filtering. Computer Graphics Forum, 35(7), 363-372. https://doi.org/10.1111/cgf.13033

We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using...
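A per-pixel illustration of the history-model idea (much simplified relative to PHLM): fit a linear model over the last few frames of a pixel's shading value and trust its prediction only while it explains the history well.

```python
# Per-pixel temporal filtering with a least-squares linear history model.
import numpy as np

def filter_pixel(history, new_sample, residual_threshold=0.02):
    hist = np.append(history, new_sample)[-8:]      # keep the last K = 8 frames
    t = np.arange(len(hist))
    A = np.column_stack([np.ones_like(t), t])
    (a, b), *_ = np.linalg.lstsq(A, hist, rcond=None)   # c(t) ≈ a + b*t
    prediction = a + b * t[-1]
    residual = np.sqrt(np.mean((A @ np.array([a, b]) - hist) ** 2))
    # accept the temporal model only if it fits; otherwise keep the raw sample
    filtered = prediction if residual < residual_threshold else new_sample
    return filtered, hist

# usage: a slowly brightening pixel with per-frame Monte Carlo noise
rng = np.random.default_rng(0)
hist = np.array([])
for frame in range(30):
    noisy = 0.4 + 0.003 * frame + 0.01 * rng.normal()
    value, hist = filter_pixel(hist, noisy)
```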

Integrating real-time fluid simulation with a voxel engine (2016)
Journal Article
Zadick, J., Kenwright, B., & Mitchell, K. (2016). Integrating real-time fluid simulation with a voxel engine. The Computer Games Journal, 5(1-2), 55-64. https://doi.org/10.1007/s40869-016-0020-5

We present a method of adding sophisticated physical simulations to voxel-based games such as the hugely popular Minecraft (2012. http://minecraft.gamepedia.com/Liquid), thus providing a dynamic and realistic fluid simulation in a voxel environment...
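A deliberately crude cellular-automaton water step on a voxel grid conveys the flavour of fluid in a blocky world (the paper's simulation is considerably more sophisticated): fluid falls into empty cells below, then any remainder spreads sideways.

```python
# Toy cellular-automaton water on a 2-D slice of a voxel world.
import numpy as np

SOLID = -1.0   # marker for solid voxels; non-negative values are fluid amounts

def step(grid):
    g = grid.copy()
    h, w = g.shape
    # gravity: move fluid down where the cell below is empty
    for y in range(h - 2, -1, -1):
        for x in range(w):
            if g[y, x] > 0 and g[y + 1, x] == 0:
                g[y + 1, x], g[y, x] = g[y, x], 0.0
    # horizontal spread: share fluid with non-solid side neighbours
    out = g.copy()
    for y in range(h):
        for x in range(w):
            if g[y, x] <= 0:
                continue
            nbrs = [(y, x)] + [(y, x + dx) for dx in (-1, 1)
                               if 0 <= x + dx < w and g[y, x + dx] != SOLID]
            share = g[y, x] / len(nbrs)
            out[y, x] -= g[y, x]
            for ny, nx in nbrs:
                out[ny, nx] += share
    return out

world = np.zeros((8, 8)); world[7, :] = SOLID    # solid floor
world[0, 4] = 4.0                                # a column of water dropped in
for _ in range(20):
    world = step(world)
```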

Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings (2016)
Journal Article
Bitterli, B., Rousselle, F., Moon, B., Iglesias-Guitián, J. A., Adler, D., Mitchell, K., Jarosz, W., & Novák, J. (2016). Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum, 35(4), 107-117. https://d

We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empiric...
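First-order (locally linear) regression denoising can be sketched as a per-pixel weighted fit of noisy color against auxiliary features; the plain Gaussian distance weight below stands in for the paper's nonlinear, noise-aware weights.

```python
# Per-pixel first-order regression of noisy color against feature buffers.
import numpy as np

def denoise(color, features, radius=5, sigma=3.0):
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            F = features[y0:y1, x0:x1].reshape(-1, features.shape[-1])
            C = color[y0:y1, x0:x1].reshape(-1, color.shape[-1])
            Fc = features[y, x]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            wgt = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma**2)).ravel()
            # weighted first-order model: color ≈ b0 + B·(f - f_center)
            A = np.column_stack([np.ones(len(F)), F - Fc])
            W = np.sqrt(wgt)[:, None]
            beta, *_ = np.linalg.lstsq(A * W, C * W, rcond=None)
            out[y, x] = beta[0]                   # fitted value at the center pixel
    return out

# usage with a tiny synthetic image: noisy color, clean feature channels
rng = np.random.default_rng(0)
u, v = np.meshgrid(np.linspace(0, 1, 24), np.linspace(0, 1, 24))
feat = np.dstack([u, v])
clean = np.dstack([u, v, 0.5 * np.ones((24, 24))])
noisy = clean + 0.1 * rng.normal(size=clean.shape)
den = denoise(noisy, feat)
```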