Research Repository

All Outputs (88)

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video (2017)
Conference Proceeding
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017). Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. In CVMP 2017: Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017). https://doi.org/10.1145/3150165.3150173

Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU or main memory unless the capture area, resolution or duration is sufficie...
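
The core problem described here is that decoded lightfield tiles exceed GPU (and often main) memory, so they must be streamed on demand. Below is a minimal sketch of a generic double-buffered CPU-to-GPU streaming loop, not the paper's pipeline; `decode_tile` and `upload_to_gpu` are hypothetical stand-ins.

```python
# Minimal sketch of double-buffered CPU->GPU streaming (hypothetical helpers,
# not the paper's implementation): a CPU thread decodes compressed tiles into
# a small ring of staging buffers while the render loop consumes ("uploads")
# the previously decoded buffer, so decode and upload overlap.
import queue
import threading

import numpy as np

NUM_FRAMES = 16
TILE_SHAPE = (256, 256, 4)           # one lightfield tile, RGBA

def decode_tile(frame_index):
    """Stand-in for decompressing one lightfield tile on the CPU."""
    rng = np.random.default_rng(frame_index)
    return rng.random(TILE_SHAPE, dtype=np.float32)

def upload_to_gpu(tile):
    """Stand-in for an asynchronous host-to-device copy."""
    return tile.sum()                 # touch the data so the 'copy' is used

staging = queue.Queue(maxsize=2)      # two staging buffers -> double buffering

def decoder_thread():
    for f in range(NUM_FRAMES):
        staging.put(decode_tile(f))   # blocks when both buffers are full
    staging.put(None)                 # sentinel: no more frames

threading.Thread(target=decoder_thread, daemon=True).start()

while True:
    tile = staging.get()              # render loop consumes decoded tiles
    if tile is None:
        break
    upload_to_gpu(tile)
print("streamed", NUM_FRAMES, "tiles")
```

Bounding the staging queue at two buffers is what allows decoding of the next tile to overlap with uploading of the current one.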

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Conference Proceeding
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017). IRIDiuM+: deep media storytelling with non-linear light field video. In SIGGRAPH '17 ACM SIGGRAPH 2017 VR Village. https://doi.org/10.1145/3089269.3089277

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab...

Real-time rendering with compressed animated light fields (2017)
Conference Proceeding
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017). Real-time rendering with compressed animated light fields. In GI '17 Proceedings of the 43rd Graphics Interface Conference (33-40). https://doi.org/10.20380/GI2017.05

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Noise Reduction on G-Buffers for Monte Carlo Filtering (2017)
Journal Article
Moon, B., Iglesias-Guitian, J. A., McDonagh, S., & Mitchell, K. (2017). Noise Reduction on G-Buffers for Monte Carlo Filtering. Computer Graphics Forum, 36(8), 600-612. https://doi.org/10.1111/cgf.13155

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances...
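
As a rough illustration of pre-filtering a G-buffer using world positions and their variances (not the paper's estimator), the sketch below averages each pixel's normal over a window with weights that fall off with world-position distance scaled by the per-pixel position variance, so blurrier regions are smoothed more.

```python
# Hedged sketch of variance-aware G-buffer pre-filtering (illustrative only):
# each pixel's normal is replaced by a weighted average over a window, where
# weights decay with world-position distance scaled by the pixel's position
# variance, so noisy (defocused / motion-blurred) regions get wider smoothing.
import numpy as np

H, W, R = 64, 64, 3                          # image size and filter radius
rng = np.random.default_rng(0)
world_pos = rng.random((H, W, 3))            # noisy world-position G-buffer
pos_var = 0.01 + 0.1 * rng.random((H, W))    # per-pixel position variance
normals = rng.random((H, W, 3))              # noisy normal G-buffer

filtered = np.zeros_like(normals)
for y in range(H):
    for x in range(W):
        y0, y1 = max(0, y - R), min(H, y + R + 1)
        x0, x1 = max(0, x - R), min(W, x + R + 1)
        dp = world_pos[y0:y1, x0:x1] - world_pos[y, x]
        d2 = np.sum(dp * dp, axis=-1)
        w = np.exp(-d2 / (2.0 * pos_var[y, x]))   # variance-scaled falloff
        filtered[y, x] = np.sum(w[..., None] * normals[y0:y1, x0:x1],
                                axis=(0, 1)) / np.sum(w)
print("filtered normals:", filtered.shape)
```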

Real-Time Multi-View Facial Capture with Synthetic Training (2017)
Journal Article
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor sp...

Rapid one-shot acquisition of dynamic VR avatars (2017)
Conference Proceeding
Malleson, C., Kosek, M., Klaudiny, M., Huerta, I., Bazin, J., Sorkine-Hornung, A., …Mitchell, K. (2017). Rapid one-shot acquisition of dynamic VR avatars. In IEEE 2017 Virtual Reality (VR). https://doi.org/10.1109/vr.2017.7892240

We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and fa...
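
The identity-customisation step described here reduces, at its simplest, to a linear blendshape combination. A generic sketch under that assumption (random placeholder shapes and weights, not the paper's rig):

```python
# Generic linear blendshape combination (illustrative, not the paper's rig):
# customised vertices = template + sum_i w_i * (identity_shape_i - template).
import numpy as np

rng = np.random.default_rng(1)
n_vertices, n_shapes = 1000, 8
template = rng.random((n_vertices, 3))                 # rest-pose body mesh
identity_shapes = rng.random((n_shapes, n_vertices, 3))
weights = rng.random(n_shapes)                         # per-identity weights

deltas = identity_shapes - template                    # per-shape offsets
custom = template + np.tensordot(weights, deltas, axes=1)
print("customised mesh:", custom.shape)
```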

Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering (2016)
Conference Proceeding
Iglesias-Guitian, J. A., Moon, B., & Mitchell, K. (2016). Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering. In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016).

Area lighting computation is a key component for synthesizing photo-realistic rendered images, and it simulates plausible soft shadows by considering geometric relationships between area lights and three-dimensional scenes, in some cases even account...
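
The quantity being estimated, and later filtered, is the fraction of the area light visible from each shading point. A bare-bones Monte Carlo sketch of that soft-shadow factor, with a made-up rectangular light and a single spherical occluder (illustrative geometry only):

```python
# Bare-bones Monte Carlo area-light visibility estimate (illustrative scene):
# sample points on a rectangular light and count how many are visible from a
# shading point past one spherical occluder; the average is the soft-shadow
# factor that adaptive filtering would then denoise.
import numpy as np

rng = np.random.default_rng(2)
shade_p = np.array([0.0, 0.0, 0.0])           # point being shaded
light_corner = np.array([-0.5, 2.0, -0.5])    # rectangular light on y = 2
light_u = np.array([1.0, 0.0, 0.0])
light_v = np.array([0.0, 0.0, 1.0])
sphere_c, sphere_r = np.array([0.1, 1.0, 0.0]), 0.3   # occluder

def occluded(origin, target):
    """Ray-sphere test between origin and target (True if blocked)."""
    d = target - origin
    t_max = np.linalg.norm(d)
    d = d / t_max
    oc = origin - sphere_c
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - sphere_r ** 2
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - np.sqrt(disc)
    return 0.0 < t < t_max

samples = 256
hits = 0
for _ in range(samples):
    u, v = rng.random(2)
    p_light = light_corner + u * light_u + v * light_v
    if not occluded(shade_p, p_light):
        hits += 1
print("soft shadow factor:", hits / samples)
```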

Synthetic Prior Design for Real-Time Face Tracking (2016)
Conference Proceeding
McDonagh, S., Klaudiny, M., Bradley, D., Beeler, T., Matthews, I., & Mitchell, K. (2016). Synthetic Prior Design for Real-Time Face Tracking. In 2016 Fourth International Conference on 3D Vision (3DV). https://doi.org/10.1109/3dv.2016.72

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning, which allows for fast inference of facial geometry from video streams. These learning-based approaches are s...

Real-time Physics-based Motion Capture with Sparse Sensors (2016)
Conference Proceeding
Andrews, S., Huerta, I., Komura, T., Sigal, L., & Mitchell, K. (2016). Real-time Physics-based Motion Capture with Sparse Sensors. In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016). https://doi.org/10.1145/2998559.2998564

We propose a framework for real-time tracking of humans using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units. A small number of sensors leaves the performer unencumbered by not requiring de...

Pixel history linear models for real-time temporal filtering (2016)
Journal Article
Iglesias-Guitian, J. A., Moon, B., Koniaris, C., Smolikowski, E., & Mitchell, K. (2016). Pixel history linear models for real-time temporal filtering. Computer Graphics Forum, 35(7), 363-372. https://doi.org/10.1111/cgf.13033

We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using...
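
A hedged sketch of the underlying idea, fitting a linear model to a pixel's shading history and reading the filtered value off the fit, is shown below; the paper additionally handles model selection and change detection, which this omits.

```python
# Hedged sketch of a per-pixel linear model over shading history (not the
# paper's estimator): fit value ~ a*t + b over the last K frames by least
# squares and evaluate the fit at the newest frame as the filtered value.
import numpy as np

rng = np.random.default_rng(3)
K = 8                                     # history length in frames
t = np.arange(K, dtype=np.float64)
true_signal = 0.05 * t + 0.4              # slowly brightening pixel
history = true_signal + 0.1 * rng.standard_normal(K)   # noisy shading samples

A = np.stack([t, np.ones(K)], axis=1)     # design matrix for a*t + b
(a, b), *_ = np.linalg.lstsq(A, history, rcond=None)
filtered = a * t[-1] + b                  # model prediction at current frame
print(f"raw sample {history[-1]:.3f} -> filtered {filtered:.3f}")
```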

Integrating real-time fluid simulation with a voxel engine (2016)
Journal Article
Zadick, J., Kenwright, B., & Mitchell, K. (2016). Integrating real-time fluid simulation with a voxel engine. The Computer Games Journal, 5(1-2), 55-64. https://doi.org/10.1007/s40869-016-0020-5

We present a method of adding sophisticated physical simulations to voxel-based games such as the hugely popular Minecraft (2012. http://minecraft.gamepedia.com/Liquid), thus providing a dynamic and realistic fluid simulation in a voxel environment...
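
As a much-simplified illustration of fluid on a voxel grid (a cellular-automaton update, far cruder than the paper's simulation), the sketch below lets water cells fall into empty cells beneath them and otherwise slide sideways:

```python
# Hedged cellular-automaton sketch of fluid in a voxel grid (much simpler
# than the paper's simulation): water moves into the empty cell below, or
# into an empty side neighbour when it cannot fall.
import numpy as np

EMPTY, SOLID, WATER = 0, 1, 2
H, W = 8, 10
grid = np.zeros((H, W), dtype=np.int8)      # 2D slice of a voxel world
grid[-1, :] = SOLID                         # floor
grid[4, 3:7] = SOLID                        # a ledge
grid[0, 4:6] = WATER                        # water placed at the top

def step(g):
    new = g.copy()
    for y in range(H - 1, -1, -1):          # bottom-up so falls chain nicely
        for x in range(W):
            if g[y, x] != WATER:
                continue
            if y + 1 < H and new[y + 1, x] == EMPTY:       # fall
                new[y + 1, x], new[y, x] = WATER, EMPTY
            else:                                          # spread sideways
                for dx in (-1, 1):
                    if 0 <= x + dx < W and new[y, x + dx] == EMPTY:
                        new[y, x + dx], new[y, x] = WATER, EMPTY
                        break
    return new

for _ in range(12):
    grid = step(grid)
print(grid)
```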

Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings (2016)
Journal Article
Bitterli, B., Rousselle, F., Moon, B., Iglesias-Guitián, J. A., Adler, D., Mitchell, K., …Novák, J. (2016). Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum, 35(4), 107-117. https://doi.org/10.1111/cgf.12954

We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empiric...
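
The first-order regression at the core of this family of denoisers can be sketched as follows: within each window, fit colour as an affine function of auxiliary features and evaluate the fit at the centre pixel. The sketch uses uniform weights, not the nonlinear weighting the paper contributes.

```python
# Stripped-down sketch of first-order regression denoising (uniform weights,
# unlike the paper's nonlinear weighting): within a window, fit colour as an
# affine function of auxiliary features (here albedo) and evaluate the fit at
# the centre pixel's features to obtain the denoised colour.
import numpy as np

rng = np.random.default_rng(4)
H, W, R = 32, 32, 4
albedo = rng.random((H, W, 3))                        # noise-free feature
color = albedo * 0.8 + 0.05 * rng.standard_normal((H, W, 3))  # noisy render

denoised = np.zeros_like(color)
for y in range(H):
    for x in range(W):
        y0, y1 = max(0, y - R), min(H, y + R + 1)
        x0, x1 = max(0, x - R), min(W, x + R + 1)
        F = albedo[y0:y1, x0:x1].reshape(-1, 3)       # features in the window
        C = color[y0:y1, x0:x1].reshape(-1, 3)        # noisy colours
        A = np.hstack([F, np.ones((F.shape[0], 1))])  # affine design matrix
        coeffs, *_ = np.linalg.lstsq(A, C, rcond=None)
        center = np.append(albedo[y, x], 1.0)
        denoised[y, x] = center @ coeffs
print("denoised image:", denoised.shape)
```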

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences (2016)
Conference Proceeding
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., …Mitchell, K. (2016). Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. In SIGGRAPH '16: ACM SIGGRAPH 2016 Studio. https://doi.org/10.1145/2929484.2970273

With a recent rise in the availability of affordable head-mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptic) are integrated to provide a seamlessly embodied virtual experience in areas such as education, entertainment,...

IRIDiuM: immersive rendered interactive deep media (2016)
Conference Proceeding
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., …Moon, B. (2016). IRIDiuM: immersive rendered interactive deep media. https://doi.org/10.1145/2929490.2929496

Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism,...

User, metric, and computational evaluation of foveated rendering methods (2016)
Conference Proceeding
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016). User, metric, and computational evaluation of foveated rendering methods. In SAP '16 Proceedings of the ACM Symposium on Applied Perception. https://doi.org/10.1145/2931002.2931011

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. W...
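
The mechanism under evaluation is, in essence, a mapping from gaze eccentricity to a rendering quality level. A hedged sketch with made-up thresholds (not the calibrated values from the study):

```python
# Hedged sketch of mapping gaze eccentricity to a rendering quality level
# (thresholds are made-up placeholders, not the study's calibrated values).
import numpy as np

def eccentricity_deg(px, py, gaze, pixels_per_degree):
    """Angular distance of pixel (px, py) from the gaze point, in degrees."""
    return np.hypot(px - gaze[0], py - gaze[1]) / pixels_per_degree

def quality_level(ecc_deg):
    """0 = full quality (fovea); higher = cheaper shading in the periphery."""
    thresholds = [5.0, 15.0, 30.0]        # degrees; illustrative only
    for level, t in enumerate(thresholds):
        if ecc_deg <= t:
            return level
    return len(thresholds)

gaze = (960, 540)                          # gaze at the centre of a 1080p frame
for pixel in [(970, 540), (1200, 600), (1800, 900), (10, 10)]:
    e = eccentricity_deg(*pixel, gaze, pixels_per_degree=40.0)
    print(pixel, f"ecc={e:5.1f} deg -> level {quality_level(e)}")
```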

Adaptive polynomial rendering (2016)
Journal Article
Moon, B., McDonagh, S., Mitchell, K., & Gross, M. (2016). Adaptive polynomial rendering. ACM Transactions on Graphics, 35(4), Article 40. https://doi.org/10.1145/2897824.2925936

In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polyn...
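
The local reconstruction can be pictured as fitting a low-order polynomial in screen coordinates to the noisy samples of a window and evaluating it at the centre. The single-patch sketch below uses a fixed quadratic; the paper's contribution is choosing the polynomial order adaptively.

```python
# Single-patch sketch of polynomial reconstruction (fixed quadratic order; the
# paper selects the order adaptively per region): fit the noisy samples in a
# window with a 2D quadratic in (x, y) and evaluate it at the window centre
# as the reconstructed pixel value.
import numpy as np

rng = np.random.default_rng(5)
R = 5                                             # window radius -> 11x11 patch
ys, xs = np.mgrid[-R:R + 1, -R:R + 1]
true_patch = 0.5 + 0.03 * xs + 0.02 * ys + 0.004 * xs * ys   # smooth signal
noisy_patch = true_patch + 0.05 * rng.standard_normal(true_patch.shape)

# Quadratic basis: [1, x, y, x^2, x*y, y^2] at every pixel of the window.
x, y = xs.ravel().astype(float), ys.ravel().astype(float)
A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
coeffs, *_ = np.linalg.lstsq(A, noisy_patch.ravel(), rcond=None)

reconstructed_center = coeffs[0]                  # basis at (0, 0) is [1,0,...]
print(f"noisy centre {noisy_patch[R, R]:.3f} -> "
      f"reconstructed {reconstructed_center:.3f} (true {true_patch[R, R]:.3f})")
```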

Simulation and skinning of heterogeneous texture detail deformation (2016)
Patent
Koniaris, C., Mitchell, K., & Cosker, D. (2016). Simulation and skinning of heterogeneous texture detail deformation. US2016133040

A method is disclosed for reducing distortions introduced by deformation of a surface with an existing parameterization. In an exemplary embodiment, the method comprises receiving a rest pose mesh comprising a plurality of faces, a rigidity map corre...

Online view sampling for estimating depth from light fields (2015)
Conference Proceeding
Kim, C., Subr, K., Mitchell, K., Sorkine-Hornung, A., & Gross, M. (2015). Online view sampling for estimating depth from light fields. In 2015 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2015.7350981

Geometric information, such as depth obtained from light fields, is finding more applications recently. Where and how to sample images to populate a light field is an important problem for maximizing the usability of the information gathered for depth reconstru...

Real-time variable rigidity texture mapping (2015)
Conference Proceeding
Koniaris, C., Mitchell, K., & Cosker, D. (2015). Real-time variable rigidity texture mapping. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824850

Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are unde...

Latency aware foveated rendering in unreal engine 4 (2015)
Conference Proceeding
Swafford, N. T., Cosker, D., & Mitchell, K. (2015). Latency aware foveated rendering in unreal engine 4. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824863

We contribute a foveated rendering implementation in Unreal Engine 4 (UE4) and a straightforward metric to allow calculation of rendered foveal region sizes to compensate for overall system latency and maintain perceptual losslessness. Our system de...
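
The latency-compensation metric can be pictured as inflating the foveal radius by the angular distance the gaze may travel during the total system latency. A back-of-the-envelope sketch with illustrative numbers (not the paper's measurements):

```python
# Hedged back-of-the-envelope latency compensation for the foveal region
# (numbers are illustrative, not the paper's measured values): the foveal
# radius is widened by how far the gaze can move during the total system
# latency, so the true gaze point stays inside the full-quality region.
base_fovea_deg = 5.0          # full-quality radius with zero latency
eye_speed_deg_per_s = 150.0   # assumed peak gaze speed during the latency window
latency_ms = 45.0             # eye tracker + render + display latency

fovea_deg = base_fovea_deg + eye_speed_deg_per_s * (latency_ms / 1000.0)
pixels_per_degree = 40.0
print(f"foveal radius: {fovea_deg:.1f} deg "
      f"({fovea_deg * pixels_per_degree:.0f} px at {pixels_per_degree} px/deg)")
```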