Research Repository

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (in press). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium that masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a si...

Embodied online dance learning objectives of CAROUSEL + (2021)
Conference Proceeding
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., …Mac Williams, C. (2021). Embodied online dance learning objectives of CAROUSEL +. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 309-313). https://doi.org/10.1109/VRW52623.2021.00062

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL + project, which aims to impact how online immersive technologies influence multiuser interaction and communication with a focus on dancing and learning danc...

Performance for Care (2020)
Presentation / Conference
Mahoney, C., & Mermikedes, A. (2020, November). Performance for Care. Presented at Performance for Care, Online

As the use of drama in the education of healthcare professionals becomes more widely accepted, its relationship to the more established practice of simulation based learning merits further examination. Though these two pedagogic approaches stem from...

FaceMagic: Real-time Facial Detail Effects on Mobile (2020)
Conference Proceeding
Casas, L., Li, Y., & Mitchell, K. (2020). FaceMagic: Real-time Facial Detail Effects on Mobile. In SA '20: SIGGRAPH Asia 2020 Technical Communications (pp. 1-4). https://doi.org/10.1145/3410700.3425429

We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system firstly uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibrat...

Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis (2020)
Journal Article
Pan, Y., & Mitchell, K. (2021). Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis. International Journal of Human-Computer Studies, 147. https://doi.org/10.1016/j.ijhcs.2020.102563

Anamorphosis for 2D displays can provide viewer centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for othe...

Active Learning for Interactive Audio-Animatronic Performance Design (2020)
Journal Article
Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. The Journal of Computer Graphics Techniques, 9(3), 1-19

We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deform...

PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (2020)
Conference Proceeding
Pan, Y., & Mitchell, K. (2020). PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 759-760). https://doi.org/10.1109/vrw50115.2020.00230

Augmented reality devices enable new approaches for character animation, e.g., given that character posing is three dimensional in nature it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present Pose...

Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation (2020)
Conference Proceeding
Pan, Y., & Mitchell, K. (2020). Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 188-195). https://doi.org/10.1109/vrw50115.2020.00041

Immersive technologies have increasingly attracted the attention of the computer animation community in search of more intuitive and effective alternatives to the current sophisticated 2D interfaces. The higher affordances offered by 3D interaction,...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Conference Proceeding
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019). Photo-Realistic Facial Details Synthesis from Single Image. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 9429-9439). https://doi.org/10.1109/ICCV.2019.00952

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow...

Deep Precomputed Radiance Transfer for Deformable Objects (2019)
Journal Article
Li, Y., Wiedemann, P., & Mitchell, K. (2019). Deep Precomputed Radiance Transfer for Deformable Objects. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2(1), 1-16. https://doi.org/10.1145/3320284

We propose DeepPRT, a deep convolutional neural network to compactly encapsulate the radiance transfer of a freely deformable object for rasterization in real time. With pre-computation of radiance transfer (PRT) we can store complex light interac...

Feature-preserving detailed 3D face reconstruction from a single image (2018)
Conference Proceeding
Li, Y., Ma, L., Fan, H., & Mitchell, K. (2018). Feature-preserving detailed 3D face reconstruction from a single image. In CVMP '18 Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production. https://doi.org/10.1145/3278471.3278473

Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expre...

Multi-reality games: an experience across the entire reality-virtuality continuum (2018)
Conference Proceeding
Casas, L., Ciccone, L., Çimen, G., Wiedemann, P., Fauconneau, M., Sumner, R. W., & Mitchell, K. (2018). Multi-reality games: an experience across the entire reality-virtuality continuum. In Proceedings of the VRCAI2018. https://doi.org/10.1145/3284398.3284411

Interactive play can take very different forms, from playing with physical board games to fully digital video games. In recent years, new video game paradigms were introduced to connect real-world objects to virtual game characters. However, even the...

Image Based Proximate Shadow Retargeting (2018)
Conference Proceeding
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2018). Image Based Proximate Shadow Retargeting. In Proceedings of Computer Graphics & Visual Computing (CGVC) 2018. https://doi.org/10.2312/cgvc.20181206

We introduce Shadow Retargeting which maps real shadow appearance to virtual shadows given a corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from un-occluded real...

GPU-accelerated depth codec for real-time, high-quality light field reconstruction (2018)
Journal Article
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2018). GPU-accelerated depth codec for real-time, high-quality light field reconstruction. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1), 1-15. https://doi.org/10.1145/3203193

Pre-calculated depth information is essential for efficient light field video rendering, due to the prohibitive cost of depth estimation from color when real-time performance is desired. Standard state-of-the-art video codecs fail to satisfy such per...

From Faces to Outdoor Light Probes (2018)
Journal Article
Calian, D. A., Lalonde, J., Gotardo, P., Simon, T., Matthews, I., & Mitchell, K. (2018). From Faces to Outdoor Light Probes. Computer Graphics Forum, 37(2), 51-61. https://doi.org/10.1111/cgf.13341

Image‐based lighting has allowed the creation of photo‐realistic computer‐generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially to the average digital photography en...

Empowerment and embodiment for collaborative mixed reality systems: Empowerment and Embodiment (2018)
Journal Article
Pan, Y., Sinclair, D., & Mitchell, K. (2018). Empowerment and embodiment for collaborative mixed reality systems: Empowerment and Embodiment. Computer Animation and Virtual Worlds, 29(3-4). https://doi.org/10.1002/cav.1838

We present several mixed‐reality‐based remote collaboration settings by using consumer head‐mounted displays. We investigated how two people are able to work together in these settings. We found that the person in the AR system will be regarded as th...

Compressed Animated Light Fields with Real-time View-dependent Reconstruction (2018)
Journal Article
Koniaris, C., Kosek, M., Sinclair, D., & Mitchell, K. (2019). Compressed Animated Light Fields with Real-time View-dependent Reconstruction. IEEE Transactions on Visualization and Computer Graphics, 25(4), 1666-1680. https://doi.org/10.1109/tvcg.2018.2818156

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Conference Proceeding
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017). IRIDiuM+: deep media storytelling with non-linear light field video. In SIGGRAPH '17 ACM SIGGRAPH 2017 VR Village. https://doi.org/10.1145/3089269.3089277

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow for guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab...

Real-time rendering with compressed animated light fields (2017)
Conference Proceeding
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017). Real-time rendering with compressed animated light fields. In GI '17 Proceedings of the 43rd Graphics Interface Conference (pp. 33-40). https://doi.org/10.20380/GI2017.05

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...