Research Repository

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2020)
Conference Proceeding
Casas, L., Kosek, M., & Mitchell, K. (2020). Props Alive: A Framework for Augmented Reality Stop Motion Animation. In 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS). https://doi.org/10.1109/SEARIS41720.2017.9183487

Stop motion animation evolved in the early days of cinema with the aim of creating an illusion of movement from static puppets posed manually for each frame. Current stop motion movies have introduced 3D printing processes in order to acquire animations more ac...

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Conference Proceeding
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019). Photo-Realistic Facial Details Synthesis from Single Image. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (9429-9439). https://doi.org/10.1109/ICCV.2019.00952

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns...

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow-retargeting maps the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow...

Real-time rendering with compressed animated light fields (2017)
Conference Proceeding
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017). Real-time rendering with compressed animated light fields. In GI '17 Proceedings of the 43rd Graphics Interface Conference (33-40). https://doi.org/10.20380/GI2017.05

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

Noise Reduction on G-Buffers for Monte Carlo Filtering (2017)
Journal Article
Moon, B., Iglesias-Guitian, J. A., McDonagh, S., & Mitchell, K. (2017). Noise Reduction on G-Buffers for Monte Carlo Filtering. Computer Graphics Forum, 36(8), 600-612. https://doi.org/10.1111/cgf.13155

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances...
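
Illustrative only: the sketch below is a minimal NumPy caricature of variance-aware G-buffer pre-filtering, not the paper's filter. It assumes hypothetical per-pixel world positions and world-position variances are available, and averages each G-buffer pixel over a small window with weights that fall off with world-space distance normalised by the local variance.

import numpy as np

def prefilter_gbuffer(gbuffer, world_pos, pos_var, radius=3):
    # gbuffer:   (H, W, 3) noisy G-buffer channel, e.g. normals blurred by defocus/motion
    # world_pos: (H, W, 3) per-pixel world positions
    # pos_var:   (H, W, 3) per-pixel variance estimates of the world positions
    H, W, _ = gbuffer.shape
    out = np.zeros_like(gbuffer)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            # Squared world-space distance of each neighbour to the centre pixel,
            # normalised by the centre pixel's position variance (plus a small epsilon).
            d2 = np.sum((world_pos[y0:y1, x0:x1] - world_pos[y, x]) ** 2, axis=-1)
            w = np.exp(-d2 / (2.0 * (np.mean(pos_var[y, x]) + 1e-6)))
            out[y, x] = np.sum(w[..., None] * gbuffer[y0:y1, x0:x1], axis=(0, 1)) / np.sum(w)
    return out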

Real-Time Multi-View Facial Capture with Synthetic Training (2017)
Journal Article
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor sp...

Rapid one-shot acquisition of dynamic VR avatars (2017)
Conference Proceeding
Malleson, C., Kosek, M., Klaudiny, M., Huerta, I., Bazin, J., Sorkine-Hornung, A., …Mitchell, K. (2017). Rapid one-shot acquisition of dynamic VR avatars. In 2017 IEEE Virtual Reality (VR). https://doi.org/10.1109/vr.2017.7892240

We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and fa...
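
As a minimal sketch of the blendshape-template idea (the array names, sizes and weights below are hypothetical stand-ins, not the authors' pipeline), a personalised mesh can be formed as the neutral template plus a weighted sum of identity blendshape offsets:

import numpy as np

def customise_identity(template_verts, identity_deltas, weights):
    # template_verts:  (V, 3) neutral template mesh vertices
    # identity_deltas: (K, V, 3) per-blendshape vertex offsets from the template
    # weights:         (K,) identity weights, e.g. estimated from a single capture of the user
    return template_verts + np.tensordot(weights, identity_deltas, axes=1)

# Stand-in random data in place of a real scan-fitted rig.
V, K = 1000, 8
personalised = customise_identity(np.zeros((V, 3)),
                                  np.random.randn(K, V, 3) * 0.01,
                                  np.random.rand(K))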

Synthetic Prior Design for Real-Time Face Tracking (2016)
Conference Proceeding
McDonagh, S., Klaudiny, M., Bradley, D., Beeler, T., Matthews, I., & Mitchell, K. (2016). Synthetic Prior Design for Real-Time Face Tracking. In 2016 Fourth International Conference on 3D Vision (3DV). https://doi.org/10.1109/3dv.2016.72

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning, which allows for fast inference of facial geometry from video streams. These learning-based approaches are s...

Real-time Physics-based Motion Capture with Sparse Sensors (2016)
Conference Proceeding
Andrews, S., Huerta, I., Komura, T., Sigal, L., & Mitchell, K. (2016). Real-time Physics-based Motion Capture with Sparse Sensors. In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016). https://doi.org/10.1145/2998559.2998564

We propose a framework for real-time tracking of humans using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units. A small number of sensors leaves the performer unencumbered by not requiring de...

Pixel history linear models for real-time temporal filtering (2016)
Journal Article
Iglesias-Guitian, J. A., Moon, B., Koniaris, C., Smolikowski, E., & Mitchell, K. (2016). Pixel history linear models for real-time temporal filtering. Computer Graphics Forum, 35(7), 363-372. https://doi.org/10.1111/cgf.13033

We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using...
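
To give a rough flavour of the idea (the paper's full PHLM method is not reproduced here), the sketch below fits a small linear model to each pixel's recent shading history and uses that model's prediction at the newest frame as the temporally filtered value; the (frames, height, width) history buffer is a hypothetical input.

import numpy as np

def filter_with_pixel_history(history):
    # history: (N, H, W) shading values for the last N frames of each pixel, oldest first.
    N = history.shape[0]
    t = np.arange(N, dtype=np.float64)
    A = np.stack([np.ones(N), t], axis=1)               # (N, 2) design matrix for c(t) = a + b*t
    flat = history.reshape(N, -1)                        # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, flat, rcond=None)    # (2, H*W): a and b for every pixel
    pred = coeffs[0] + coeffs[1] * t[-1]                 # evaluate each model at the newest frame
    return pred.reshape(history.shape[1:])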

Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings (2016)
Journal Article
Bitterli, B., Rousselle, F., Moon, B., Iglesias-Guitián, J. A., Adler, D., Mitchell, K., …Novák, J. (2016). Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum, 35(4), 107-117. https://doi.org/10.1111/cgf.12954

We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empiric...

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences (2016)
Conference Proceeding
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., …Mitchell, K. (2016). Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. In SIGGRAPH '16: ACM SIGGRAPH 2016 Studio. https://doi.org/10.1145/2929484.2970273

With the recent rise in the availability of affordable head-mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide a seamlessly embodied virtual experience in areas such as education, entertainment,...

IRIDiuM: immersive rendered interactive deep media (2016)
Conference Proceeding
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., …Moon, B. (2016). IRIDiuM: immersive rendered interactive deep media. https://doi.org/10.1145/2929490.2929496

Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism,...

User, metric, and computational evaluation of foveated rendering methods (2016)
Conference Proceeding
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016). User, metric, and computational evaluation of foveated rendering methods. In SAP '16 Proceedings of the ACM Symposium on Applied Perception. https://doi.org/10.1145/2931002.2931011

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. W...
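
For readers unfamiliar with the basic mechanism being evaluated, the sketch below shows the common gaze-contingent pattern: map a pixel's angular eccentricity from the tracked gaze point to one of a few quality levels. The thresholds and the pixels-per-degree conversion are illustrative assumptions, not values from the paper.

import math

def quality_level(pixel_xy, gaze_xy, pixels_per_degree, thresholds_deg=(5.0, 15.0)):
    # Small-angle approximation: screen distance / pixels-per-degree ~ angular eccentricity.
    dx = pixel_xy[0] - gaze_xy[0]
    dy = pixel_xy[1] - gaze_xy[1]
    eccentricity_deg = math.hypot(dx, dy) / pixels_per_degree
    if eccentricity_deg < thresholds_deg[0]:
        return 0   # full quality in the foveal region
    if eccentricity_deg < thresholds_deg[1]:
        return 1   # reduced quality in the near periphery
    return 2       # lowest quality in the far periphery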

Adaptive polynomial rendering (2016)
Journal Article
Moon, B., McDonagh, S., Mitchell, K., & Gross, M. (2016). Adaptive polynomial rendering. ACM Transactions on Graphics, 35(4), Article 40. https://doi.org/10.1145/2897824.2925936

In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polyn...
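
To give a concrete, if greatly simplified, picture of locally approximating an image with polynomial functions, the sketch below fits a fixed-order quadratic to each pixel's neighbourhood by least squares and reads off the value at the centre; the paper's adaptive order selection, error analysis and sampling decisions are not reproduced, and the window size is an assumption.

import numpy as np

def local_poly_denoise(noisy, radius=3):
    # noisy: (H, W) Monte Carlo image (single channel for brevity).
    H, W = noisy.shape
    out = np.empty_like(noisy)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            dy, dx = np.mgrid[y0 - y:y1 - y, x0 - x:x1 - x]
            dx = dx.ravel().astype(np.float64)
            dy = dy.ravel().astype(np.float64)
            # Quadratic basis in the local offsets; the constant term is the centre estimate.
            A = np.stack([np.ones_like(dx), dx, dy, dx * dx, dx * dy, dy * dy], axis=1)
            b = noisy[y0:y1, x0:x1].ravel()
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            out[y, x] = coeffs[0]   # polynomial evaluated at the centre (0, 0)
    return out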

Online view sampling for estimating depth from light fields (2015)
Conference Proceeding
Kim, C., Subr, K., Mitchell, K., Sorkine-Hornung, A., & Gross, M. (2015). Online view sampling for estimating depth from light fields. In 2015 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2015.7350981

Geometric information, such as depth obtained from light fields, has recently found more applications. Where and how to sample images to populate a light field is an important problem for maximizing the usability of the information gathered for depth reconstru...

Latency aware foveated rendering in unreal engine 4 (2015)
Conference Proceeding
Swafford, N. T., Cosker, D., & Mitchell, K. (2015). Latency aware foveated rendering in unreal engine 4. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824863

We contribute a foveated rendering implementation in Unreal Engine 4 (UE4) and a straightforward metric to allow calculation of rendered foveal region sizes to compensate for overall system latency and maintain perceptual losslessness. Our system de...
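
The general shape of such a latency compensation can be sketched as follows; every constant here (foveal angle, eye speed, latency, pixels per degree) is an illustrative assumption rather than the paper's calibrated metric.

def foveal_radius_px(base_fovea_deg=5.0, eye_speed_deg_per_s=300.0,
                     latency_s=0.05, pixels_per_degree=20.0):
    # Grow the foveal region by the angle the gaze can travel during one full
    # latency interval, then convert degrees to screen pixels.
    drift_deg = eye_speed_deg_per_s * latency_s
    return (base_fovea_deg + drift_deg) * pixels_per_degree

# e.g. 50 ms of latency at 300 deg/s adds 15 degrees to a 5-degree fovea -> 400 px radius.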

Real-time variable rigidity texture mapping (2015)
Conference Proceeding
Koniaris, C., Mitchell, K., & Cosker, D. (2015). Real-time variable rigidity texture mapping. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824850

Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are unde...

Guided ecological simulation for artistic editing of plant distributions in natural scenes (2015)
Journal Article
Bradbury, G. A., Subr, K., Koniaris, C., Mitchell, K., & Weyrich, T. (2015). Guided ecological simulation for artistic editing of plant distributions in natural scenes. The Journal of Computer Graphics Techniques, 4(4), 28-53

In this paper we present a novel approach to author vegetation cover of large natural scenes. Unlike stochastic scatter-instancing tools for plant placement (such as multi-class blue noise generators), we use a simulation based on ecological processe...
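
To contrast with pure stochastic scattering, the toy step below is a heavily simplified caricature of an ecological simulation: plants grow, overlapping plants compete (the smaller one dies), and a few seedlings are scattered each step. All rules and parameters are assumptions for illustration, not the paper's model.

import random

def ecological_step(plants, domain=(0.0, 100.0), growth=0.05, n_seedlings=5):
    # plants: list of dicts {"x": float, "y": float, "r": float (canopy radius)}
    for p in plants:
        p["r"] += growth                      # every plant grows a little
    survivors = []
    for i, p in enumerate(plants):
        shaded = any(j != i
                     and q["r"] > p["r"]
                     and (p["x"] - q["x"]) ** 2 + (p["y"] - q["y"]) ** 2 < (p["r"] + q["r"]) ** 2
                     for j, q in enumerate(plants))
        if not shaded:
            survivors.append(p)               # plants overshadowed by a larger neighbour die
    lo, hi = domain
    survivors += [{"x": random.uniform(lo, hi), "y": random.uniform(lo, hi), "r": 0.1}
                  for _ in range(n_seedlings)]
    return survivors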

Carpet unrolling for character control on uneven terrain (2015)
Conference Proceeding
Miller, M., Holden, D., Al-Ashqar, R., Dubach, C., Mitchell, K., & Komura, T. (2015). Carpet unrolling for character control on uneven terrain. In MIG '15 Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games. https://doi.org/10.1145/2822013.2822031

We propose a type of relationship descriptor based on carpet unrolling that computes the joint positions of a character from the sum of relative vectors originating from a local coordinate system embedded on the surface of a carpet. Given a terra...
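
A minimal sketch of the descriptor idea (array shapes and the weighting scheme are assumptions; this is not the authors' formulation): each joint stores offsets relative to local frames sampled on the carpet, and its world position is recovered as a weighted sum of those offsets re-expressed in the deformed frames.

import numpy as np

def reconstruct_joint(frame_origins, frame_axes, local_offsets, weights):
    # frame_origins: (S, 3) sample points on the deformed carpet surface
    # frame_axes:    (S, 3, 3) orthonormal local frames, rows = tangent, bitangent, normal
    # local_offsets: (S, 3) the joint's offset expressed in each local frame at setup time
    # weights:       (S,) descriptor weights, assumed to sum to 1
    world_offsets = np.einsum('sji,sj->si', frame_axes, local_offsets)   # frames^T @ offsets
    return np.sum(weights[:, None] * (frame_origins + world_offsets), axis=0)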