Research Repository

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2017)
Presentation / Conference Contribution
Casas, L., Kosek, M., & Mitchell, K. (2017, March). Props Alive: A Framework for Augmented Reality Stop Motion Animation. Presented at 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS), Los Angeles, CA, USA

Stop motion animation evolved in the early days of cinema with the aim of creating an illusion of movement with static puppets posed manually each frame. Current stop motion movies introduced 3D printing processes in order to acquire animations more ac...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Presentation / Conference Contribution
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019, October). Photo-Realistic Facial Details Synthesis from Single Image. Presented at 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow...

Real-time rendering with compressed animated light fields (2017)
Presentation / Conference Contribution
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017, May). Real-time rendering with compressed animated light fields. Presented at 43rd Graphics Interface Conference

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Real-Time Multi-View Facial Capture with Synthetic Training (2017)
Journal Article
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor sp...

Noise Reduction on G-Buffers for Monte Carlo Filtering (2017)
Journal Article
Moon, B., Iglesias-Guitian, J. A., McDonagh, S., & Mitchell, K. (2017). Noise Reduction on G-Buffers for Monte Carlo Filtering. Computer Graphics Forum, 36(8), 600-612. https://doi.org/10.1111/cgf.13155

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances...
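
The variance-driven weighting described above can be illustrated with a minimal sketch. The 1-D layout, kernel shape, and Gaussian fall-off are assumptions for exposition, not the paper's actual filter:

```python
import numpy as np

def prefilter_gbuffer(values, positions, variances, radius=3):
    """Variance-weighted smoothing of a 1-D G-buffer channel.

    values:    noisy per-pixel G-buffer samples (e.g. depth), shape (n,)
    positions: per-pixel mean world positions, shape (n,)
    variances: per-pixel world-position variances, shape (n,)
    """
    n = len(values)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        d2 = (positions[lo:hi] - positions[i]) ** 2
        # Wider kernels where the position estimates are noisier.
        bandwidth = 2.0 * (variances[lo:hi] + variances[i]) + 1e-6
        w = np.exp(-d2 / bandwidth)
        out[i] = np.dot(w, values[lo:hi]) / w.sum()
    return out
```

A constant buffer passes through unchanged, while noisy samples whose world positions agree are averaged together.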

Rapid one-shot acquisition of dynamic VR avatars (2017)
Presentation / Conference Contribution
Malleson, C., Kosek, M., Klaudiny, M., Huerta, I., Bazin, J.-C., Sorkine-Hornung, A., Mine, M., & Mitchell, K. (2017, March). Rapid one-shot acquisition of dynamic VR avatars. Presented at 2017 IEEE Virtual Reality (VR), Los Angeles, US

We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and fa...
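
The identity-blendshape customization step can be sketched as a linear blendshape evaluation; the array shapes and function name are illustrative, not the system's actual rig format:

```python
import numpy as np

def customize_avatar(neutral, identity_deltas, weights):
    """Apply identity blendshape weights to a template mesh.

    neutral:         (V, 3) template vertex positions
    identity_deltas: (K, V, 3) per-blendshape vertex offsets
    weights:         (K,) customization weights, one per identity shape
    """
    neutral = np.asarray(neutral, dtype=float)
    deltas = np.asarray(identity_deltas, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Linear blendshape model: rest mesh plus weighted identity offsets.
    return neutral + np.tensordot(w, deltas, axes=1)
```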

Synthetic Prior Design for Real-Time Face Tracking (2016)
Presentation / Conference Contribution
McDonagh, S., Klaudiny, M., Bradley, D., Beeler, T., Matthews, I., & Mitchell, K. (2016, October). Synthetic Prior Design for Real-Time Face Tracking. Presented at 2016 Fourth International Conference on 3D Vision (3DV)

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning, which allows for fast inference of facial geometry from video streams. These learning-based approaches are s...

Real-time Physics-based Motion Capture with Sparse Sensors (2016)
Presentation / Conference Contribution
Andrews, S., Huerta, I., Komura, T., Sigal, L., & Mitchell, K. (2016, December). Real-time Physics-based Motion Capture with Sparse Sensors. Presented at the 13th European Conference on Visual Media Production (CVMP 2016)

We propose a framework for real-time tracking of humans using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units. A small number of sensors leaves the performer unencumbered by not requiring de...

Pixel history linear models for real-time temporal filtering (2016)
Journal Article
Iglesias-Guitian, J. A., Moon, B., Koniaris, C., Smolikowski, E., & Mitchell, K. (2016). Pixel history linear models for real-time temporal filtering. Computer Graphics Forum, 35(7), 363-372. https://doi.org/10.1111/cgf.13033

We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using...
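
A per-pixel linear model over a short shading history can be sketched as an ordinary least-squares fit; this is a simplified illustration of the PHLM idea, not the paper's full method:

```python
import numpy as np

def phlm_filter(history):
    """Fit a per-pixel linear model to a shading-value history.

    history: (T, n_pixels) array of the last T frames' shading values.
    Returns the model's prediction for the newest frame, which acts as
    a temporally filtered value, plus the per-pixel residual that can
    be thresholded to separate real change from noise.
    """
    T, n = history.shape
    t = np.arange(T, dtype=float)
    A = np.stack([np.ones(T), t], axis=1)                 # (T, 2) design matrix
    coeffs, *_ = np.linalg.lstsq(A, history, rcond=None)  # (2, n) per-pixel fits
    pred = coeffs[0] + coeffs[1] * t[-1]                  # evaluate at newest frame
    residual = np.abs(history[-1] - pred)
    return pred, residual
```

A history that really is linear in time is reproduced exactly, so a large residual signals a genuine shading change rather than sampling noise.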

Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings (2016)
Journal Article
Bitterli, B., Rousselle, F., Moon, B., Iglesias-Guitián, J. A., Adler, D., Mitchell, K., Jarosz, W., & Novák, J. (2016). Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum, 35(4), 107-117. https://doi.org/10.1111/cgf.12954

We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empiric...

IRIDiuM: immersive rendered interactive deep media (2016)
Presentation / Conference Contribution
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., Malleson, C., Jamrozy, J., Swafford, N., Guitian, J., & Moon, B. (2016, July). IRIDiuM: immersive rendered interactive deep media. Presented at the ACM SIGGRAPH 2016 VR Village (SIGGRAPH '16)

Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism,...

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences (2016)
Presentation / Conference Contribution
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., Bedford, J., Frisson, C., Huerta, I., Kosek, M., Koniaris, B., & Mitchell, K. (2016, July). Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. Presented at the ACM SIGGRAPH 2016 Studio (SIGGRAPH '16), Anaheim, CA, USA

With a recent rise in the availability of affordable head mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide seamlessly embodied virtual experience in areas such as education, entertainment,...

User, metric, and computational evaluation of foveated rendering methods (2016)
Presentation / Conference Contribution
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016, July). User, metric, and computational evaluation of foveated rendering methods. Presented at the ACM Symposium on Applied Perception (SAP '16)

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. W...
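
Gaze-dependent quality selection of this kind can be sketched as a simple eccentricity-to-tier mapping; the thresholds and display density below are assumed values for illustration, not the paper's evaluated settings:

```python
import math

def quality_level(pixel, gaze, fovea_deg=5.0, periphery_deg=20.0,
                  px_per_deg=40.0):
    """Pick a render-quality tier from angular distance to the gaze point.

    pixel, gaze: (x, y) screen positions in pixels.
    """
    dist_px = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    ecc_deg = dist_px / px_per_deg  # angular eccentricity from gaze
    if ecc_deg <= fovea_deg:
        return 2   # full quality inside the fovea
    if ecc_deg <= periphery_deg:
        return 1   # reduced quality in the near periphery
    return 0       # lowest quality in the far periphery
```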

Adaptive polynomial rendering (2016)
Presentation / Conference Contribution
Moon, B., McDonagh, S., Mitchell, K., & Gross, M. (2016, July). Adaptive polynomial rendering. Presented at ACM SIGGRAPH 2016, Anaheim, California, US

In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polyn...
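
The local polynomial approximation can be sketched in 1-D; the paper additionally estimates the best polynomial order per region, which this simplified version leaves to the caller:

```python
import numpy as np

def local_poly_denoise(signal, radius=4, order=2):
    """Denoise a 1-D signal by local polynomial least-squares fits.

    Each output sample is the value at the window centre of a degree-
    `order` polynomial fitted to the surrounding `2 * radius + 1` samples.
    """
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        x = np.arange(lo, hi) - i                      # centred coordinates
        A = np.vander(x, order + 1, increasing=True)   # [1, x, x^2, ...]
        coeffs, *_ = np.linalg.lstsq(A, signal[lo:hi], rcond=None)
        out[i] = coeffs[0]                             # polynomial value at x = 0
    return out
```

Signals that are locally polynomial of the chosen order (smooth gradients, for instance) are reproduced exactly, while higher-frequency noise is averaged out.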

Online view sampling for estimating depth from light fields (2015)
Presentation / Conference Contribution
Kim, C., Subr, K., Mitchell, K., Sorkine-Hornung, A., & Gross, M. (2015, September). Online view sampling for estimating depth from light fields. Presented at 2015 IEEE International Conference on Image Processing (ICIP)

Geometric information such as depth obtained from light fields is finding more applications recently. Where and how to sample images to populate a light field is an important problem to maximize the usability of information gathered for depth reconstru...

Real-time variable rigidity texture mapping (2015)
Presentation / Conference Contribution
Koniaris, C., Mitchell, K., & Cosker, D. (2015, November). Real-time variable rigidity texture mapping. Presented at the 12th European Conference on Visual Media Production (CVMP '15)

Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are unde...

Latency aware foveated rendering in Unreal Engine 4 (2015)
Presentation / Conference Contribution
Swafford, N. T., Cosker, D., & Mitchell, K. (2015, November). Latency aware foveated rendering in Unreal Engine 4. Presented at the 12th European Conference on Visual Media Production (CVMP '15)

We contribute a foveated rendering implementation in Unreal Engine 4 (UE4) and a straight-forward metric to allow calculation of rendered foveal region sizes to compensate for overall system latency and maintain perceptual losslessness. Our system de...
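
The latency-compensation metric can be sketched as padding the foveal angle by the distance the gaze could travel during one end-to-end latency window; all constants below are illustrative assumptions, not the paper's measured values:

```python
def foveal_radius_px(latency_ms, fovea_deg=5.0, eye_speed_deg_s=300.0,
                     px_per_deg=40.0):
    """Foveal region radius (pixels) padded for end-to-end system latency.

    Enlarges the base foveal angle by how far the gaze could move during
    the latency window, so the full-quality region still covers the fovea
    when the frame finally reaches the eye.
    """
    padded_deg = fovea_deg + eye_speed_deg_s * (latency_ms / 1000.0)
    return padded_deg * px_per_deg
```

Higher latency directly inflates the region that must be rendered at full quality, which is the trade-off the metric makes explicit.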

Guided ecological simulation for artistic editing of plant distributions in natural scenes (2015)
Journal Article
Bradbury, G. A., Subr, K., Koniaris, C., Mitchell, K., & Weyrich, T. (2015). Guided ecological simulation for artistic editing of plant distributions in natural scenes. The Journal of Computer Graphics Techniques, 4(4), 28-53

In this paper we present a novel approach to author vegetation cover of large natural scenes. Unlike stochastic scatter-instancing tools for plant placement (such as multi-class blue noise generators), we use a simulation based on ecological processe...

Carpet unrolling for character control on uneven terrain (2015)
Presentation / Conference Contribution
Miller, M., Holden, D., Al-Ashqar, R., Dubach, C., Mitchell, K., & Komura, T. (2015, November). Carpet unrolling for character control on uneven terrain. Presented at the 8th ACM SIGGRAPH Conference on Motion in Games (MIG '15)

We propose a type of relationship descriptor based on carpet unrolling that computes the joint positions of a character based on the sum of relative vectors originating from a local coordinate system embedded on the surface of a carpet. Given a terra...
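
The relationship-descriptor idea can be sketched as blending world-space positions reconstructed from local frames embedded on the carpet surface; the data layout is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def joint_position(frames, rel_vectors, weights):
    """Reconstruct a joint position from carpet-surface local frames.

    frames:      list of (origin (3,), rotation (3, 3)) local coordinate
                 systems embedded on the deformed carpet surface
    rel_vectors: (K, 3) relative vectors stored in each local frame
    weights:     (K,) blending weights, assumed to sum to one
    """
    p = np.zeros(3)
    for (origin, R), r, w in zip(frames, rel_vectors, weights):
        # Express each stored relative vector in world space, then blend.
        p += w * (origin + R @ r)
    return p
```

When the carpet deforms over terrain, only the frame origins and rotations change; the stored relative vectors are reused, which is what lets one motion adapt to uneven ground.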