Research Repository

Outputs (24)

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (2023). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3145-3157. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium that masks out portions of the visible scene. Real-time applications commonly estimate this phenomenon by connecting paths between light sources and the camera after a si...

Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis (2020)
Journal Article
Pan, Y., & Mitchell, K. (2021). Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis. International Journal of Human-Computer Studies, 147, Article 102563. https://doi.org/10.1016/j.ijhcs.2020.102563

Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for othe...

Active Learning for Interactive Audio-Animatronic Performance Design (2020)
Journal Article
Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. The Journal of Computer Graphics Techniques, 9(3), 1-19

We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deform...

Intermediated Reality: A Framework for Communication Through Tele-Puppetry (2019)
Journal Article
Casas, L., & Mitchell, K. (2019). Intermediated Reality: A Framework for Communication Through Tele-Puppetry. Frontiers in Robotics and AI, 6. https://doi.org/10.3389/frobt.2019.00060

We introduce Intermediated Reality (IR), a framework for intermediated communication enabling collaboration through remote possession of entities (e.g., toys) that come to life in mobile Mediated Reality (MR). As part of a two-way conversation, each...

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow...

GPU-accelerated depth codec for real-time, high-quality light field reconstruction (2018)
Journal Article
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2018). GPU-accelerated depth codec for real-time, high-quality light field reconstruction. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1), 1-15. https://doi.org/10.1145/3

Pre-calculated depth information is essential for efficient light field video rendering, due to the prohibitive cost of depth estimation from color when real-time performance is desired. Standard state-of-the-art video codecs fail to satisfy such per...

From Faces to Outdoor Light Probes (2018)
Journal Article
Calian, D. A., Lalonde, J., Gotardo, P., Simon, T., Matthews, I., & Mitchell, K. (2018). From Faces to Outdoor Light Probes. Computer Graphics Forum, 37(2), 51-61. https://doi.org/10.1111/cgf.13341

Image‐based lighting has allowed the creation of photo‐realistic computer‐generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially to the average digital photography en...

Empowerment and embodiment for collaborative mixed reality systems (2018)
Journal Article
Pan, Y., Sinclair, D., & Mitchell, K. (2018). Empowerment and embodiment for collaborative mixed reality systems. Computer Animation and Virtual Worlds, 29(3-4). https://doi.org/10.1002/cav.1838

We present several mixed‐reality‐based remote collaboration settings by using consumer head‐mounted displays. We investigated how two people are able to work together in these settings. We found that the person in the AR system will be regarded as th...

Compressed Animated Light Fields with Real-time View-dependent Reconstruction (2018)
Journal Article
Koniaris, C., Kosek, M., Sinclair, D., & Mitchell, K. (2019). Compressed Animated Light Fields with Real-time View-dependent Reconstruction. IEEE Transactions on Visualization and Computer Graphics, 25(4), 1666-1680. https://doi.org/10.1109/tvcg.2018.2818

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...