Research Repository

All Outputs (54)

Multi-reality games: an experience across the entire reality-virtuality continuum (2018)
Presentation / Conference Contribution
Casas, L., Ciccone, L., Çimen, G., Wiedemann, P., Fauconneau, M., Sumner, R. W., & Mitchell, K. (2018, December). Multi-reality games: an experience across the entire reality-virtuality continuum. Presented at the 16th ACM SIGGRAPH International Conference, Tokyo, Japan

Interactive play can take very different forms, from playing with physical board games to fully digital video games. In recent years, new video game paradigms have been introduced to connect real-world objects to virtual game characters. However, even the...

Image Based Proximate Shadow Retargeting (2018)
Presentation / Conference Contribution
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2018, September). Image Based Proximate Shadow Retargeting. Presented at Computer Graphics & Visual Computing (CGVC) 2018, Swansea University, United Kingdom

We introduce Shadow Retargeting which maps real shadow appearance to virtual shadows given a corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from un-occluded real...

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video (2017)
Presentation / Conference Contribution
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017, December). Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. Presented at 14th European Conference on Visual Media Production (CVMP 2017), London, United Kingdom

Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU or main memory unless the capture area, resolution or duration is sufficie...

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Presentation / Conference Contribution
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017, July). IRIDiuM+: deep media storytelling with non-linear light field video. Presented at ACM SIGGRAPH 2017 VR Village on - SIGGRAPH '17, Los Angeles, California

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab...

Real-time rendering with compressed animated light fields (2017)
Presentation / Conference Contribution
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017, May). Real-time rendering with compressed animated light fields. Presented at 43rd Graphics Interface Conference

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Rapid one-shot acquisition of dynamic VR avatars (2017)
Presentation / Conference Contribution
Malleson, C., Kosek, M., Klaudiny, M., Huerta, I., Bazin, J., Sorkine-Hornung, A., Mine, M., & Mitchell, K. (2017, March). Rapid one-shot acquisition of dynamic VR avatars. Presented at 2017 IEEE Virtual Reality (VR), Los Angeles, US

We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and fa...

Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering (2016)
Presentation / Conference Contribution
Iglesias-Guitian, J. A., Moon, B., & Mitchell, K. (2016). Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering. In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016)

Area lighting computation is a key component for synthesizing photo-realistic rendered images, and it simulates plausible soft shadows by considering geometric relationships between area lights and three-dimensional scenes, in some cases even account...

Synthetic Prior Design for Real-Time Face Tracking (2016)
Presentation / Conference Contribution
McDonagh, S., Klaudiny, M., Bradley, D., Beeler, T., Matthews, I., & Mitchell, K. (2016). Synthetic Prior Design for Real-Time Face Tracking. In 2016 Fourth International Conference on 3D Vision (3DV). https://doi.org/10.1109/3dv.2016.72

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning, which allows for fast inference of facial geometry from video streams. These learning-based approaches are s...

Real-time Physics-based Motion Capture with Sparse Sensors (2016)
Presentation / Conference Contribution
Andrews, S., Huerta, I., Komura, T., Sigal, L., & Mitchell, K. (2016, December). Real-time Physics-based Motion Capture with Sparse Sensors. Presented at 13th European Conference on Visual Media Production (CVMP 2016) - CVMP 2016

We propose a framework for real-time tracking of humans using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units. A small number of sensors leaves the performer unencumbered by not requiring de...

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences (2016)
Presentation / Conference Contribution
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., Bedford, J., Frisson, C., Huerta, I., Kosek, M., Koniaris, B., & Mitchell, K. (2016, July). Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. Presented at ACM SIGGRAPH 2016 Studio on - SIGGRAPH '16, Anaheim, CA, US

With a recent rise in the availability of affordable head-mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide a seamlessly embodied virtual experience in areas such as education, entertainment,...

IRIDiuM: immersive rendered interactive deep media (2016)
Presentation / Conference Contribution
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., …Moon, B. (2016). IRIDiuM: immersive rendered interactive deep media. https://doi.org/10.1145/2929490.2929496

Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism,...

User, metric, and computational evaluation of foveated rendering methods (2016)
Presentation / Conference Contribution
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016, July). User, metric, and computational evaluation of foveated rendering methods. Presented at the ACM Symposium on Applied Perception - SAP '16

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. W...

Adaptive polynomial rendering (2016)
Presentation / Conference Contribution
Moon, B., McDonagh, S., Mitchell, K., & Gross, M. (2016, July). Adaptive polynomial rendering. Presented at ACM SIGGRAPH 2016, Anaheim, California, US

In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polyn...

Online view sampling for estimating depth from light fields (2015)
Presentation / Conference Contribution
Kim, C., Subr, K., Mitchell, K., Sorkine-Hornung, A., & Gross, M. (2015). Online view sampling for estimating depth from light fields. In 2015 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2015.7350981

Geometric information such as depth obtained from light fields has found increasing application recently. Where and how to sample images to populate a light field is an important problem for maximizing the usability of the information gathered for depth reconstru...

Latency aware foveated rendering in unreal engine 4 (2015)
Presentation / Conference Contribution
Swafford, N. T., Cosker, D., & Mitchell, K. (2015). Latency aware foveated rendering in unreal engine 4. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824863

We contribute a foveated rendering implementation in Unreal Engine 4 (UE4) and a straightforward metric to allow calculation of rendered foveal region sizes to compensate for overall system latency and maintain perceptual losslessness. Our system de...

Real-time variable rigidity texture mapping (2015)
Presentation / Conference Contribution
Koniaris, C., Mitchell, K., & Cosker, D. (2015, November). Real-time variable rigidity texture mapping. Presented at the 12th European Conference on Visual Media Production - CVMP '15

Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are unde...

Carpet unrolling for character control on uneven terrain (2015)
Presentation / Conference Contribution
Miller, M., Holden, D., Al-Ashqar, R., Dubach, C., Mitchell, K., & Komura, T. (2015, November). Carpet unrolling for character control on uneven terrain. Presented at MIG '15 Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games

We propose a type of relationship descriptor based on carpet unrolling that computes the joint positions of a character based on the sum of relative vectors originating from a local coordinate system embedded on the surface of a carpet. Given a terra...

Augmented creativity: bridging the real and virtual worlds to enhance creative play (2015)
Presentation / Conference Contribution
Zünd, F., Ryffel, M., Magnenat, S., Marra, A., Nitti, M., Kapadia, M., Noris, G., Mitchell, K., Gross, M., & Sumner, R. W. (2015, November). Augmented creativity: bridging the real and virtual worlds to enhance creative play. Presented at SIGGRAPH ASIA 2015 Mobile Graphics and Interactive Applications on - SA '15

Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as empl...

Poxels: polygonal voxel environment rendering (2014)
Presentation / Conference Contribution
Miller, M., Cumming, A., Chalmers, K., Kenwright, B., & Mitchell, K. (2014, November). Poxels: polygonal voxel environment rendering. Presented at the 20th ACM Symposium on Virtual Reality Software and Technology - VRST '14

We present efficient rendering of opaque, sparse, voxel environments with data amplified in local graphics memory with stream-out from a geometry shader to a cached vertex buffer pool. We show that our Poxel rendering primitive aligns with optimized r...

L3V: A Layered Video Format for 3D Display (2014)
Presentation / Conference Contribution
Mitchell, K., Sinclair, D., Kosek, M., & Swafford, N. (2014, November). L3V: A Layered Video Format for 3D Display. Presented at Conference on Visual Media Production, London

We present a layered video format for 3D interactive display which adapts and exploits well-developed 2D codecs with layer centric packing for real-time user perspective playback. We demonstrate our 3D video format for both handheld 3D on mobile devi...