Research Repository

Outputs (91)

Empowerment and embodiment for collaborative mixed reality systems (2018)
Journal Article
Pan, Y., Sinclair, D., & Mitchell, K. (2018). Empowerment and embodiment for collaborative mixed reality systems. Computer Animation and Virtual Worlds, 29(3-4). https://doi.org/10.1002/cav.1838

We present several mixed-reality-based remote collaboration settings using consumer head-mounted displays. We investigated how two people are able to work together in these settings. We found that the person in the AR system will be regarded as th...

System and method of presenting views of a virtual space (2018)
Patent
Mitchell, K., Koniaris, C., Iglesias-Guitian, J., Moon, B., & Smolikowski, E. (2018). System and method of presenting views of a virtual space. US20180114343

Views of a virtual space may be presented based on predicted colors of individual pixels of individual frame images that depict the views of the virtual space. Predictive models may be assigned to individual pixels that predict individual pixel color...
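One way to picture a per-pixel predictive model is simple temporal extrapolation: predict each pixel's next color from its two previous frames. This is only an illustrative sketch; the patent covers more general predictive models, and the linear rule below is an assumption.

```python
import numpy as np

def predict_frame(prev: np.ndarray, prev2: np.ndarray) -> np.ndarray:
    """Per-pixel linear extrapolation: c_t = 2*c_{t-1} - c_{t-2},
    clamped to the displayable 0..255 range."""
    predicted = 2.0 * prev.astype(np.float64) - prev2.astype(np.float64)
    return np.clip(predicted, 0.0, 255.0)

# Two earlier frames of a 2x2 RGB image whose brightness rises by 10 per frame.
f0 = np.full((2, 2, 3), 100.0)
f1 = np.full((2, 2, 3), 110.0)
print(predict_frame(f1, f0)[0, 0, 0])  # predicts 120.0
```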

Compressed Animated Light Fields with Real-time View-dependent Reconstruction (2018)
Journal Article
Koniaris, C., Kosek, M., Sinclair, D., & Mitchell, K. (2019). Compressed Animated Light Fields with Real-time View-dependent Reconstruction. IEEE Transactions on Visualization and Computer Graphics, 25(4), 1666-1680. https://doi.org/10.1109/tvcg.2018.2818156

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video (2017)
Presentation / Conference Contribution
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017, December). Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. Presented at 14th European Conference on Visual Media Production (CVMP 2017), London, United Kingdom

Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU or main memory unless the capture area, resolution or duration is sufficie...
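The core idea behind such streaming — keeping only the working set of lightfield tiles resident on the GPU and evicting stale ones — can be illustrated with a small LRU cache. Tile IDs and the loader below are hypothetical; the paper's actual CPU-GPU scheme is more elaborate.

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache standing in for GPU-resident lightfield tiles."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tiles = OrderedDict()  # tile_id -> decoded tile data

    def fetch(self, tile_id, load_fn):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)  # mark as most-recently used
            return self.tiles[tile_id]
        data = load_fn(tile_id)              # decode/upload on a cache miss
        self.tiles[tile_id] = data
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict least-recently used tile
        return data

cache = TileCache(capacity=2)
cache.fetch("t0", lambda t: f"decoded:{t}")
cache.fetch("t1", lambda t: f"decoded:{t}")
cache.fetch("t2", lambda t: f"decoded:{t}")  # capacity exceeded: t0 evicted
print("t0" in cache.tiles)  # False
```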

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Presentation / Conference Contribution
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017, July). IRIDiuM+: deep media storytelling with non-linear light field video. Presented at ACM SIGGRAPH 2017 VR Village on - SIGGRAPH '17, Los Angeles, California

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow for guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab...

Real-time rendering with compressed animated light fields (2017)
Presentation / Conference Contribution
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017, May). Real-time rendering with compressed animated light fields. Presented at 43rd Graphics Interface Conference

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Real-Time Multi-View Facial Capture with Synthetic Training (2017)
Journal Article
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor sp...

Noise Reduction on G-Buffers for Monte Carlo Filtering (2017)
Journal Article
Moon, B., Iglesias-Guitian, J. A., McDonagh, S., & Mitchell, K. (2017). Noise Reduction on G-Buffers for Monte Carlo Filtering. Computer Graphics Forum, 36(8), 600-612. https://doi.org/10.1111/cgf.13155

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances...

Rapid one-shot acquisition of dynamic VR avatars (2017)
Presentation / Conference Contribution
Malleson, C., Kosek, M., Klaudiny, M., Huerta, I., Bazin, J.-C., Sorkine-Hornung, A., Mine, M., & Mitchell, K. (2017, March). Rapid one-shot acquisition of dynamic VR avatars. Presented at 2017 IEEE Virtual Reality (VR), Los Angeles, US

We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and fa...
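Blendshape-based customization evaluates the mesh as the template plus a weighted sum of identity deltas. A minimal sketch of that standard evaluation, with made-up vertex data (the system's actual rig is far richer):

```python
import numpy as np

# Template base mesh (4 vertices, xyz) and two identity blendshape deltas;
# the numbers are invented purely for illustration.
base = np.zeros((4, 3))
deltas = np.array([np.full((4, 3), 0.1), np.full((4, 3), -0.05)])

def customise(base, deltas, weights):
    """Standard blendshape evaluation: base + sum_i w_i * delta_i."""
    return base + np.tensordot(weights, deltas, axes=1)

mesh = customise(base, deltas, np.array([0.5, 1.0]))
print(mesh[0, 0])  # 0.5*0.1 + 1.0*(-0.05) = 0.0
```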

Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering (2016)
Presentation / Conference Contribution
Iglesias-Guitian, J. A., Moon, B., & Mitchell, K. (2016, December). Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering. Presented at CVMP, The 13th European Conference on Visual Media Production, London, UK

Area lighting computation is a key component for synthesizing photo-realistic rendered images, and it simulates plausible soft shadows by considering geometric relationships between area lights and three-dimensional scenes, in some cases even account...
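The soft-shadow part of area lighting is commonly estimated by Monte Carlo: sample points across the area light and count how many reach the shading point unblocked. A minimal sketch with a hypothetical occluder test (the paper's contribution is the adaptive polynomial filtering applied on top of such noisy estimates):

```python
import random

def soft_shadow_fraction(occluder_test, n_samples=1000, seed=1):
    """Monte Carlo estimate of the visible fraction of a unit-square area
    light: sample (u, v) points on the light, count those not blocked."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(n_samples):
        u, v = rng.random(), rng.random()
        if not occluder_test(u, v):
            visible += 1
    return visible / n_samples

# Hypothetical occluder blocking the half of the light with u < 0.5.
half_blocked = lambda u, v: u < 0.5
est = soft_shadow_fraction(half_blocked)
print(round(est, 1))  # roughly 0.5
```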