Research Repository

Outputs (88)

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Presented at CHI24 - Generative AI in User-Generated Cont

We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respo... Read More about Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments.

Expressive Talking Avatars (2024)
Journal Article
Pan, Y., Tan, S., Cheng, S., Lin, Q., Zeng, Z., & Mitchell, K. (2024). Expressive Talking Avatars. IEEE Transactions on Visualization and Computer Graphics, 30(5), 2538-2548. https://doi.org/10.1109/TVCG.2024.3372047

Stylized avatars are common virtual representations used in VR to support interaction and communication between remote collaborators. However, explicit expressions are notoriously difficult to create, mainly because most current methods rely on geome... Read More about Expressive Talking Avatars.

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orland

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us... Read More about DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences.
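
As a rough illustration of the kind of measurement such a telemetry framework performs, the Python sketch below stamps outgoing pose packets and accumulates capture-to-display latency statistics on the receiver. The class and method names are hypothetical, not DanceMark's actual API, and a real deployment would also need clock synchronisation between peers.

import time
from statistics import mean, quantiles

class LatencyProbe:
    """Hypothetical probe for capture-to-display latency of networked poses."""
    def __init__(self):
        self.samples_ms = []

    def on_pose_sent(self, pose):
        # Sender stamps each pose packet with its capture time.
        pose["t_capture"] = time.time()
        return pose

    def on_pose_rendered(self, pose):
        # Receiver records the end-to-end delay when the pose reaches the display.
        # Assumes sender and receiver clocks are synchronised (e.g. via NTP/PTP).
        self.samples_ms.append((time.time() - pose["t_capture"]) * 1000.0)

    def report(self):
        if len(self.samples_ms) < 2:
            return {}
        return {"mean_ms": mean(self.samples_ms),
                "p95_ms": quantiles(self.samples_ms, n=20)[-1]}

probe = LatencyProbe()
pose = probe.on_pose_sent({"joints": []})   # network transport omitted
probe.on_pose_rendered(pose)
print(probe.report())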

MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality (2024)
Presentation / Conference Contribution
Casas, L., Hannah, S., & Mitchell, K. (2024, March). MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality. Presented at ANIVAE 2024 : 7th IEEE VR Internal Workshop on Animation in Virtual and Augmented Environments,

MoodFlow presents a novel approach at the intersection of mixed reality and conversational artificial intelligence for emotionally intelligent avatars. Through a state machine embedded in user prompts, the system decodes emotional nuances, enabling a... Read More about MoodFlow: Orchestrating Conversations with Emotionally Intelligent Avatars in Mixed Reality.

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics (2023)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Ding, Y., & Mitchell, K. (2023, October). Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics. Presented at 31st ACM International Conference on Multimedia, Ottawa, Canada

Our aim is to improve animation production techniques' efficiency and effectiveness. We present two real-time solutions which drive character expressions in a geometrically consistent and perceptually valid way. Our first solution combines keyframe a... Read More about Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics.

Intermediated Reality with an AI 3D Printed Character (2023)
Presentation / Conference Contribution
Casas, L., & Mitchell, K. (2023). Intermediated Reality with an AI 3D Printed Character. In SIGGRAPH '23: ACM SIGGRAPH 2023 Real-Time Live!. https://doi.org/10.1145/3588430.3597251

We introduce live character conversational interactions in Intermediated Reality to bring real-world objects to life in Augmented Reality (AR) and Artificial Intelligence (AI). The AI recognizes live speech and generates short character responses, sy... Read More about Intermediated Reality with an AI 3D Printed Character.

Editorial: Games May Host the First Rightful AI Citizens (2023)
Journal Article
Mitchell, K. (2023). Editorial: Games May Host the First Rightful AI Citizens. Games: Research and Practice, 1(2), 1-7. https://doi.org/10.1145/3606834

GAMES creatively take place in imaginative worlds informed by, but often not limited by, real-world challenges, and this advantageously provides an accelerated environment for innovation, where concepts and ideas can be explored unencumbered by physi... Read More about Editorial: Games May Host the First Rightful AI Citizens.

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Paper presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of... Read More about DanceGraph: A Complementary Architecture for Synchronous Dancing Online.

Inaugural Editorial: A Lighthouse for Games and Playable Media (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Inaugural Editorial: A Lighthouse for Games and Playable Media. Games: Research and Practice, 1(1), Article 1. https://doi.org/10.1145/3585393

In games and playable media, almost nothing is as it was at the turn of the millennium. Digital and analog games have exploded in reach, diversity, and relevance. Digital platforms and globalisation have shifted and fragmented their centres of gravit... Read More about Inaugural Editorial: A Lighthouse for Games and Playable Media.

Games Futures I (2023)
Journal Article
Deterding, S., Mitchell, K., Kowert, R., & King, B. (2023). Games Futures I. Games: Research and Practice, 1(1), Article 5. https://doi.org/10.1145/3585394

Games Futures collect short opinion pieces by industry and research veterans and new voices envisioning possible and desirable futures and needs for games and playable media. This inaugural series features eight of over thirty pieces.

Emotional Voice Puppetry (2023)
Journal Article
Pan, Y., Zhang, R., Cheng, S., Tan, S., Ding, Y., Mitchell, K., & Yang, X. (2023). Emotional Voice Puppetry. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2527-2535. https://doi.org/10.1109/tvcg.2023.3247101

The paper presents emotional voice puppetry, an audio-based facial animation approach to portray characters with vivid emotional changes. The lips motion and the surrounding facial areas are controlled by the contents of the audio, and the facial dyn... Read More about Emotional Voice Puppetry.

Generating real-time detailed ground visualisations from sparse aerial point clouds (2022)
Presentation / Conference Contribution
Murray, A., Mitchell, S., Bradley, A., Waite, E., Ross, C., Jamrozy, J., & Mitchell, K. (2022, December). Generating real-time detailed ground visualisations from sparse aerial point clouds. Paper presented at CVMP 2022: The 19th ACM SIGGRAPH European Con

We present an informed kind of atomic rendering primitive, which forms a local adjacency aware classified particle basis decoupled from texture and topology. Suited to visual synthesis of detailed landscapes inferred from sparse unevenly distributed... Read More about Generating real-time detailed ground visualisations from sparse aerial point clouds.

MienCap: Performance-based Facial Animation with Live Mood Dynamics (2022)
Presentation / Conference Contribution
Pan, Y., Zhang, R., Wang, J., Chen, N., Qiu, Y., Ding, Y., & Mitchell, K. (2022). MienCap: Performance-based Facial Animation with Live Mood Dynamics. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (645-64

Our purpose is to improve performance-based animation which can drive believable 3D stylized characters that are truly perceptual. By combining traditional blendshape animation techniques with machine learning models, we present a real time motion ca... Read More about MienCap: Performance-based Facial Animation with Live Mood Dynamics.

Collimated Whole Volume Light Scattering in Homogeneous Finite Media (2022)
Journal Article
Velinov, Z., & Mitchell, K. (2023). Collimated Whole Volume Light Scattering in Homogeneous Finite Media. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3145-3157. https://doi.org/10.1109/TVCG.2021.3135764

Crepuscular rays form when light encounters an optically thick or opaque medium which masks out portions of the visible scene. Real-time applications commonly estimate this phenomena by connecting paths between light sources and the camera after a si... Read More about Collimated Whole Volume Light Scattering in Homogeneous Finite Media.

Embodied online dance learning objectives of CAROUSEL + (2021)
Presentation / Conference Contribution
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., …Mac Williams, C. (2021). Embodied online dance learning objectives of CAROUSEL +. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL+ project, which aims to impact how online immersive technologies influence multiuser interaction and communication with a focus on dancing and learning danc... Read More about Embodied online dance learning objectives of CAROUSEL +.

FaceMagic: Real-time Facial Detail Effects on Mobile (2020)
Presentation / Conference Contribution
Casas, L., Li, Y., & Mitchell, K. (2020). FaceMagic: Real-time Facial Detail Effects on Mobile. In SA '20: SIGGRAPH Asia 2020 Technical Communications (1-4). https://doi.org/10.1145/3410700.3425429

We present a novel real-time face detail reconstruction method capable of recovering high quality geometry on consumer mobile devices. Our system firstly uses a morphable model and semantic segmentation of facial parts to achieve robust self-calibrat... Read More about FaceMagic: Real-time Facial Detail Effects on Mobile.

Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis (2020)
Journal Article
Pan, Y., & Mitchell, K. (2021). Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis. International Journal of Human-Computer Studies, 147, Article 102563. https://doi.org/10.1016/j.ijhcs.2020.102563

Anamorphosis for 2D displays can provide viewer centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer’s viewpoint, but at the cost of distorted viewing for othe... Read More about Improving VIP viewer Gaze Estimation and Engagement Using Adaptive Dynamic Anamorphosis.
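
For flavour only: viewer-centric perspective on a fixed 2D display is commonly built from an off-axis (generalized) projection derived from the screen corners and the tracked eye position. The numpy sketch below computes such a frustum; it is a standard textbook construction rather than the paper's anamorphosis or gaze-estimation method, and it omits the accompanying translation and rotation into screen space.

import numpy as np

def off_axis_projection(pa, pb, pc, eye, near, far):
    """pa, pb, pc: lower-left, lower-right, upper-left screen corners (world space)."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal, towards the eye
    va, vb, vc = pa - eye, pb - eye, pc - eye
    d = -np.dot(va, vn)                               # perpendicular eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    # Asymmetric OpenGL-style frustum; the eye must still be translated to the
    # origin and the view rotated into the screen's basis before rendering.
    return np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                     [0, 2*near/(t-b), (t+b)/(t-b), 0],
                     [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                     [0, 0, -1, 0]])

# Example: a 1.6 m x 0.9 m screen with the viewer 0.8 m away and 0.2 m to the right.
P = off_axis_projection(np.array([-0.8, -0.45, 0.0]), np.array([0.8, -0.45, 0.0]),
                        np.array([-0.8, 0.45, 0.0]), np.array([0.2, 0.0, 0.8]),
                        near=0.1, far=10.0)
print(np.round(P, 3))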

Active Learning for Interactive Audio-Animatronic Performance Design (2020)
Journal Article
Castellon, J., Bächer, M., McCrory, M., Ayala, A., Stolarz, J., & Mitchell, K. (2020). Active Learning for Interactive Audio-Animatronic Performance Design. The Journal of Computer Graphics Techniques, 9(3), 1-19

We present a practical neural computational approach for interactive design of Audio-Animatronic® facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deform... Read More about Active Learning for Interactive Audio-Animatronic Performance Design.

Props Alive: A Framework for Augmented Reality Stop Motion Animation (2020)
Presentation / Conference Contribution
Casas, L., Kosek, M., & Mitchell, K. (2020). Props Alive: A Framework for Augmented Reality Stop Motion Animation. In 2017 IEEE 10th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS). https://doi.org/10.1109/SEA

Stop motion animation evolved in the early days of cinema with the aim to create an illusion of movement with static puppets posed manually each frame. Current stop motion movies introduced 3D printing processes in order to acquire animations more ac... Read More about Props Alive: A Framework for Augmented Reality Stop Motion Animation.

PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation (2020)
Presentation / Conference Contribution
Pan, Y., & Mitchell, K. (2020). PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (759-760). https://doi.org/10.1109/vrw50115.202

Augmented reality devices enable new approaches for character animation, e.g., given that character posing is three dimensional in nature it follows that interfaces with higher degrees-of-freedom (DoF) should outperform 2D interfaces. We present Pose... Read More about PoseMMR: A Collaborative Mixed Reality Authoring Tool for Character Animation.

Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation (2020)
Presentation / Conference Contribution
Pan, Y., & Mitchell, K. (2020). Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Immersive technologies have increasingly attracted the attention of the computer animation community in search of more intuitive and effective alternatives to the current sophisticated 2D interfaces. The higher affordances offered by 3D interaction,... Read More about Group-Based Expert Walkthroughs: How Immersive Technologies Can Facilitate the Collaborative Authoring of Character Animation.

Photo-Realistic Facial Details Synthesis from Single Image (2019)
Presentation / Conference Contribution
Chen, A., Chen, Z., Zhang, G., Zhang, Z., Mitchell, K., & Yu, J. (2019). Photo-Realistic Facial Details Synthesis from Single Image. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (9429-9439). https://doi.org/10.1109/ICCV.2019.00952

We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and uns... Read More about Photo-Realistic Facial Details Synthesis from Single Image.

Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras (2019)
Presentation / Conference Contribution
Dos Santos Brito, C. J., & Mitchell, K. (2019). Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras. In VRCAI '19: The 17th International Conference on Virtual-Reality Continuum and its Application

Preparing datasets for use in the training of real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are accessible for regular photography datasets, but introspectively mounted cameras for VR face tracking have in... Read More about Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras.

JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories (2019)
Presentation / Conference Contribution
Kapadia, M., Muniz, C. M., Sohn, S. S., Pan, Y., Schriber, S., Mitchell, K., & Gross, M. (2019). JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories. In Interactive Storytelling: 12th Internat

JUNGLE is an interactive, visual platform for the collaborative manipulation and consumption of nonlinear transmedia stories. Intuitive visual interfaces encourage JUNGLE users to explore vast libraries of story worlds, expand existing stories, or co... Read More about JUNGLE: An Interactive Visual Platform for Collaborative Creation and Consumption of Nonlinear Transmedia Stories.

Depth codec for real-time, high-quality light field reconstruction (2019)
Patent
Mitchell, K., Koniaris, C., Kosek, M., & Sinclair, D. Depth codec for real-time, high-quality light field reconstruction. US20190313080A1

Systems, methods, and articles of manufacture are disclosed that enable the compression of depth data and real-time reconstruction of high-quality light fields. In one aspect, spatial compression and decompression of depth images is divided into the... Read More about Depth codec for real-time, high-quality light field reconstruction.

Light Field Synthesis Using Inexpensive Surveillance Camera Systems (2019)
Presentation / Conference Contribution
Dumbgen, F., Schroers, C., & Mitchell, K. (2019). Light Field Synthesis Using Inexpensive Surveillance Camera Systems. https://doi.org/10.1109/icip.2019.8804269

We present a light field synthesis technique that achieves accurate reconstruction given a low-cost, wide-baseline camera rig. Our system integrates optical flow with methods for rectification, disparity estimation, and feature extraction, which we t... Read More about Light Field Synthesis Using Inexpensive Surveillance Camera Systems.

Intermediated Reality: A Framework for Communication Through Tele-Puppetry (2019)
Journal Article
Casas, L., & Mitchell, K. (2019). Intermediated Reality: A Framework for Communication Through Tele-Puppetry. Frontiers in Robotics and AI, 6, https://doi.org/10.3389/frobt.2019.00060

We introduce Intermediated Reality (IR), a framework for intermediated communication enabling collaboration through remote possession of entities (e.g., toys) that come to life in mobile Mediated Reality (MR). As part of a two-way conversation, each... Read More about Intermediated Reality: A Framework for Communication Through Tele-Puppetry.

Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses (2019)
Journal Article
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2019). Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses. Computers, 8(2), Article 29. https://doi.org/10.3390/computers8020029

Shadow-retargeting maps depict the appearance of real shadows to virtual shadows given corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from unoccluded real-shadow... Read More about Enhanced Shadow Retargeting with Light-Source Estimation Using Flat Fresnel Lenses.

Deep Precomputed Radiance Transfer for Deformable Objects (2019)
Presentation / Conference Contribution
Li, Y., Wiedemann, P., & Mitchell, K. (2019, May). Deep Precomputed Radiance Transfer for Deformable Objects. Presented at ACM Symposium on Interactive 3D Graphics and Games, Montreal, Quebec, Canada

We propose DeepPRT, a deep convolutional neural network to compactly encapsulate the radiance transfer of a freely deformable object for rasterization in real-time. With pre-computation of radiance transfer (PRT) we can store complex light interac... Read More about Deep Precomputed Radiance Transfer for Deformable Objects.

Memory Allocation For Seamless Media Content Presentation (2019)
Patent
Mitchell, K., Koniaris, C., & Chitalu, F. (2019). Memory Allocation For Seamless Media Content Presentation. US20190096028

A system for performing memory allocation for seamless media content presentation includes a computing platform having a CPU, a GPU having a GPU memory, and a main memory storing a memory allocation software code. The CPU executes the memory allocati... Read More about Memory Allocation For Seamless Media Content Presentation.

Feature-preserving detailed 3D face reconstruction from a single image (2018)
Presentation / Conference Contribution
Li, Y., Ma, L., Fan, H., & Mitchell, K. (2018). Feature-preserving detailed 3D face reconstruction from a single image. In CVMP '18 Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production. https://doi.org/10.1145/3278471.32784

Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expre... Read More about Feature-preserving detailed 3D face reconstruction from a single image.

Multi-reality games: an experience across the entire reality-virtuality continuum (2018)
Presentation / Conference Contribution
Casas, L., Ciccone, L., Çimen, G., Wiedemann, P., Fauconneau, M., Sumner, R. W., & Mitchell, K. (2018). Multi-reality games: an experience across the entire reality-virtuality continuum. In Proceedings of the VRCAI2018. https://doi.org/10.1145/3284398.3

Interactive play can take very different forms, from playing with physical board games to fully digital video games. In recent years, new video game paradigms were introduced to connect real-world objects to virtual game characters. However, even the... Read More about Multi-reality games: an experience across the entire reality-virtuality continuum.

Real-time rendering with compressed animated light fields (2018)
Patent
Mitchell, K., Koniaris, C., Kosek, M., & Sinclair, D. (2018). Real-time rendering with compressed animated light fields. US20180322691

Systems, methods, and articles of manufacture for real-time rendering using compressed animated light fields are disclosed. One embodiment provides a pipeline, from offline rendering of an animated scene from sparse optimized viewpoints to real-time... Read More about Real-time rendering with compressed animated light fields.

Image Based Proximate Shadow Retargeting (2018)
Presentation / Conference Contribution
Casas, L., Fauconneau, M., Kosek, M., Mclister, K., & Mitchell, K. (2018, September). Image Based Proximate Shadow Retargeting. Presented at Computer Graphics & Visual Computing (CGVC) 2018, Swansea University, United Kingdom

We introduce Shadow Retargeting which maps real shadow appearance to virtual shadows given a corresponding deformation of scene geometry, such that appearance is seamlessly maintained. By performing virtual shadow reconstruction from un-occluded real... Read More about Image Based Proximate Shadow Retargeting.

GPU-accelerated depth codec for real-time, high-quality light field reconstruction (2018)
Journal Article
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2018). GPU-accelerated depth codec for real-time, high-quality light field reconstruction. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1), 1-15. https://doi.org/10.1145/3

Pre-calculated depth information is essential for efficient light field video rendering, due to the prohibitive cost of depth estimation from color when real-time performance is desired. Standard state-of-the-art video codecs fail to satisfy such per... Read More about GPU-accelerated depth codec for real-time, high-quality light field reconstruction.
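
To make the problem concrete, here is a deliberately naive per-tile depth quantiser in Python: each block stores its depth range plus fixed-bit quantised offsets, which is the general shape of block-based depth coding. It is only a toy for intuition; the paper's codec, its rate and quality decisions, and its GPU decoding path are far more sophisticated.

import numpy as np

def encode_blocks(depth, block=8, bits=8):
    levels = (1 << bits) - 1
    h, w = depth.shape
    out = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = depth[y:y+block, x:x+block]
            lo, hi = float(tile.min()), float(tile.max())
            scale = (hi - lo) or 1.0
            q = np.round((tile - lo) / scale * levels).astype(np.uint16)
            out.append((lo, hi, q))                    # per-tile range + quantised values
    return out

def decode_blocks(blocks, shape, block=8, bits=8):
    levels = (1 << bits) - 1
    depth = np.zeros(shape, dtype=np.float32)
    i = 0
    for y in range(0, shape[0], block):
        for x in range(0, shape[1], block):
            lo, hi, q = blocks[i]; i += 1
            depth[y:y+block, x:x+block] = lo + q / levels * (hi - lo)
    return depth

d = np.random.rand(32, 32).astype(np.float32)
rec = decode_blocks(encode_blocks(d), d.shape)
print("max abs error:", float(np.abs(rec - d).max()))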

From Faces to Outdoor Light Probes (2018)
Journal Article
Calian, D. A., Lalonde, J., Gotardo, P., Simon, T., Matthews, I., & Mitchell, K. (2018). From Faces to Outdoor Light Probes. Computer Graphics Forum, 37(2), 51-61. https://doi.org/10.1111/cgf.13341

Image‐based lighting has allowed the creation of photo‐realistic computer‐generated content. However, it requires the accurate capture of the illumination conditions, a task neither easy nor intuitive, especially to the average digital photography en... Read More about From Faces to Outdoor Light Probes.

Empowerment and embodiment for collaborative mixed reality systems: Empowerment and Embodiment (2018)
Journal Article
Pan, Y., Sinclair, D., & Mitchell, K. (2018). Empowerment and embodiment for collaborative mixed reality systems: Empowerment and Embodiment. Computer Animation and Virtual Worlds, 29(3-4), https://doi.org/10.1002/cav.1838

We present several mixed‐reality‐based remote collaboration settings by using consumer head‐mounted displays. We investigated how two people are able to work together in these settings. We found that the person in the AR system will be regarded as th... Read More about Empowerment and embodiment for collaborative mixed reality systems: Empowerment and Embodiment.

System and method of presenting views of a virtual space (2018)
Patent
Mitchell, K., Koniaris, C., Iglesias-Guitian, J., Moon, B., & Smolikowski, E. (2018). System and method of presenting views of a virtual space. US20180114343

Views of a virtual space may be presented based on predicted colors of individual pixels of individual frame images that depict the views of the virtual space. Predictive models may be assigned to individual pixels that predict individual pixel color... Read More about System and method of presenting views of a virtual space.

Compressed Animated Light Fields with Real-time View-dependent Reconstruction (2018)
Journal Article
Koniaris, C., Kosek, M., Sinclair, D., & Mitchell, K. (2019). Compressed Animated Light Fields with Real-time View-dependent Reconstruction. IEEE Transactions on Visualization and Computer Graphics, 25(4), 1666-1680. https://doi.org/10.1109/tvcg.2018.2818

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr... Read More about Compressed Animated Light Fields with Real-time View-dependent Reconstruction.

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video (2017)
Presentation / Conference Contribution
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017). Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. In CVMP 2017: Proceedings of the 14th European Conference on Visual Media Production (CVMP 2017). https://doi.org

Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU or main memory unless the capture area, resolution or duration is sufficie... Read More about Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video.

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Presentation / Conference Contribution
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017). IRIDiuM+: deep media storytelling with non-linear light field video. In SIGGRAPH '17 ACM SIGGRAPH 2017 VR Village. https://doi.org/10.1145/3089269.3089277

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow for guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab... Read More about IRIDiuM+: deep media storytelling with non-linear light field video.

Real-time rendering with compressed animated light fields. (2017)
Presentation / Conference Contribution
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017, May). Real-time rendering with compressed animated light fields. Presented at 43rd Graphics Interface Conference

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr... Read More about Real-time rendering with compressed animated light fields..

Noise Reduction on G-Buffers for Monte Carlo Filtering: Noise Reduction on G-Buffers for Monte Carlo Filtering (2017)
Journal Article
Moon, B., Iglesias-Guitian, J. A., McDonagh, S., & Mitchell, K. (2017). Noise Reduction on G-Buffers for Monte Carlo Filtering: Noise Reduction on G-Buffers for Monte Carlo Filtering. Computer Graphics Forum, 36(8), 600-612. https://doi.org/10.1111/cgf.13

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances... Read More about Noise Reduction on G-Buffers for Monte Carlo Filtering: Noise Reduction on G-Buffers for Monte Carlo Filtering.
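
A minimal sketch of the general idea follows, assuming a cross-bilateral-style filter whose weights compare world positions normalised by their per-pixel variances; the paper's estimator and feature handling are more principled than this.

import numpy as np

def prefilter(channel, world_pos, pos_var, radius=2):
    """Filter a noisy scalar G-buffer channel using world-position similarity."""
    h, w = channel.shape
    out = np.zeros_like(channel)
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)
                    d2 = float(np.sum((world_pos[y, x] - world_pos[ny, nx]) ** 2))
                    # Variance-aware weight: positions that differ by more than
                    # their estimated noise level are down-weighted.
                    wgt = np.exp(-d2 / (2.0 * (pos_var[y, x] + pos_var[ny, nx]) + 1e-6))
                    acc += wgt * channel[ny, nx]
                    wsum += wgt
            out[y, x] = acc / wsum
    return out

rng = np.random.default_rng(0)
pos = rng.random((16, 16, 3)).astype(np.float32)
var = np.full((16, 16), 0.01, dtype=np.float32)
noisy = rng.random((16, 16)).astype(np.float32)
print(prefilter(noisy, pos, var).shape)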

Real-Time Multi-View Facial Capture with Synthetic Training (2017)
Journal Article
Klaudiny, M., McDonagh, S., Bradley, D., Beeler, T., & Mitchell, K. (2017). Real-Time Multi-View Facial Capture with Synthetic Training. Computer Graphics Forum, 36(2), 325-336. https://doi.org/10.1111/cgf.13129

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor sp... Read More about Real-Time Multi-View Facial Capture with Synthetic Training.

Rapid one-shot acquisition of dynamic VR avatars (2017)
Presentation / Conference Contribution
Malleson, C., Kosek, M., Klaudiny, M., Huerta, I., Bazin, J., Sorkine-Hornung, A., Mine, M., & Mitchell, K. (2017, March). Rapid one-shot acquisition of dynamic VR avatars. Presented at 2017 IEEE Virtual Reality (VR), Los Angeles, US

We present a system for rapid acquisition of bespoke, animatable, full-body avatars including face texture and shape. A blendshape rig with a skeleton is used as a template for customization. Identity blendshapes are used to customize the body and fa... Read More about Rapid one-shot acquisition of dynamic VR avatars.

Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering (2016)
Presentation / Conference Contribution
Iglesias-Guitian, J. A., Moon, B., & Mitchell, K. (2016). Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering. In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016)

Area lighting computation is a key component for synthesizing photo-realistic rendered images, and it simulates plausible soft shadows by considering geometric relationships between area lights and three-dimensional scenes, in some cases even account... Read More about Interactive Ray-Traced Area Lighting with Adaptive Polynomial Filtering.

Synthetic Prior Design for Real-Time Face Tracking (2016)
Presentation / Conference Contribution
McDonagh, S., Klaudiny, M., Bradley, D., Beeler, T., Matthews, I., & Mitchell, K. (2016). Synthetic Prior Design for Real-Time Face Tracking. In 2016 Fourth International Conference on 3D Vision (3DV),. https://doi.org/10.1109/3dv.2016.72

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning, which allows for fast inference of facial geometry from video streams. These learning-based approaches are s... Read More about Synthetic Prior Design for Real-Time Face Tracking.

Real-time Physics-based Motion Capture with Sparse Sensors (2016)
Presentation / Conference Contribution
Andrews, S., Huerta, I., Komura, T., Sigal, L., & Mitchell, K. (2016, December). Real-time Physics-based Motion Capture with Sparse Sensors. Presented at 13th European Conference on Visual Media Production (CVMP 2016) - CVMP 2016

We propose a framework for real-time tracking of humans using sparse multi-modal sensor sets, including data obtained from optical markers and inertial measurement units. A small number of sensors leaves the performer unencumbered by not requiring de... Read More about Real-time Physics-based Motion Capture with Sparse Sensors.

Pixel history linear models for real-time temporal filtering. (2016)
Journal Article
Iglesias-Guitian, J. A., Moon, B., Koniaris, C., Smolikowski, E., & Mitchell, K. (2016). Pixel history linear models for real-time temporal filtering. Computer Graphics Forum, 35(7), 363-372. https://doi.org/10.1111/cgf.13033

We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using... Read More about Pixel history linear models for real-time temporal filtering..
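
As a crude single-pixel illustration of modelling shading history with a linear model over time, the sketch below fits a least-squares line to recent samples and blends the newest sample toward the fit; PHLM itself adds per-pixel model selection, history invalidation and a GPU implementation, so treat the constants and structure here as assumptions.

import numpy as np

def filter_pixel(history, new_sample, keep=8, blend=0.7):
    """Fit value = c0 + c1*t to the recent history and pull the new sample toward it."""
    history = (history + [new_sample])[-keep:]
    if len(history) < 3:
        return history, new_sample                      # not enough history to fit yet
    t = np.arange(len(history), dtype=np.float32)
    c1, c0 = np.polyfit(t, np.asarray(history, dtype=np.float32), 1)
    predicted = c0 + c1 * t[-1]
    return history, blend * predicted + (1.0 - blend) * new_sample

hist, filtered = [], []
for v in [0.50, 0.52, 0.47, 0.55, 0.51, 0.53]:          # a flickering pixel over six frames
    hist, f = filter_pixel(hist, v)
    filtered.append(round(float(f), 3))
print(filtered)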

Integrating real-time fluid simulation with a voxel engine (2016)
Journal Article
Zadick, J., Kenwright, B., & Mitchell, K. (2016). Integrating real-time fluid simulation with a voxel engine. The Computer Games Journal, 5(1-2), 55-64. https://doi.org/10.1007/s40869-016-0020-5

We present a method of adding sophisticated physical simulations to voxel-based games such as the hugely popular Minecraft (2012. http://minecraft.gamepedia.com/Liquid), thus providing a dynamic and realistic fluid simulation in a voxel environment.... Read More about Integrating real-time fluid simulation with a voxel engine.
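
The sketch below shows the simplest cellular-automaton flavour of voxel water, where each cell holds a fluid volume that falls into the non-solid cell beneath it; it only illustrates the genre of solver being discussed, not the paper's method, and real engines add sideways flow, pressure and chunked updates.

import numpy as np

def step(water, solid):
    """One gravity step: fluid volume flows into the non-solid cell below it."""
    nxt = water.copy()
    h, w = water.shape
    for y in range(h - 1):                               # bottom row has no cell below
        for x in range(w):
            if water[y, x] <= 0.0 or solid[y, x] or solid[y + 1, x]:
                continue
            move = min(water[y, x], 1.0 - water[y + 1, x])   # capacity of the cell below
            if move > 0.0:
                nxt[y, x] -= move
                nxt[y + 1, x] += move
    return nxt

solid = np.zeros((5, 5), dtype=bool); solid[4, :] = True     # solid floor
water = np.zeros((5, 5)); water[0, 2] = 1.0                  # one unit of water at the top
for _ in range(4):
    water = step(water, solid)
print(np.round(water, 2))                                    # the water settles just above the floor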

Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings (2016)
Journal Article
Bitterli, B., Rousselle, F., Moon, B., Iglesias-Guitián, J. A., Adler, D., Mitchell, K., Jarosz, W., & Novák, J. (2016). Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum, 35(4), 107-117. https://d

We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empiric... Read More about Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings.

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences (2016)
Presentation / Conference Contribution
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., …Mitchell, K. (2016). Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. In SIGGRAPH '16: ACM SIGGRAPH 2016 Studio. https://doi.org/10.1145/2929484.297027

With a recent rise in the availability of affordable head mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide seamlessly embodied virtual experience in areas such as education, entertainment,... Read More about Stereohaptics: a haptic interaction toolkit for tangible virtual experiences.

IRIDiuM: immersive rendered interactive deep media (2016)
Presentation / Conference Contribution
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., …Moon, B. (2016). IRIDiuM: immersive rendered interactive deep media. https://doi.org/10.1145/2929490.2929496

Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism,... Read More about IRIDiuM: immersive rendered interactive deep media.

User, metric, and computational evaluation of foveated rendering methods (2016)
Presentation / Conference Contribution
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016). User, metric, and computational evaluation of foveated rendering methods. In SAP '16 Proceedings of the ACM Symposium on Applied Perception. https://doi.

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. W... Read More about User, metric, and computational evaluation of foveated rendering methods.

Adaptive polynomial rendering (2016)
Presentation / Conference Contribution
Moon, B., McDonagh, S., Mitchell, K., & Gross, M. Adaptive polynomial rendering. Presented at ACM SIGGRAPH 2016, Anaheim, California, US

In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polyn... Read More about Adaptive polynomial rendering.

Simulation and skinning of heterogeneous texture detail deformation (2016)
Patent
Koniaris, C., Mitchell, K., & Cosker, D. (2016). Simulation and skinning of heterogeneous texture detail deformation. US2016133040

A method is disclosed for reducing distortions introduced by deformation of a surface with an existing parameterization. In an exemplary embodiment, the method comprises receiving a rest pose mesh comprising a plurality of faces, a rigidity map corre... Read More about Simulation and skinning of heterogeneous texture detail deformation.

Online view sampling for estimating depth from light fields (2015)
Presentation / Conference Contribution
Kim, C., Subr, K., Mitchell, K., Sorkine-Hornung, A., & Gross, M. (2015). Online view sampling for estimating depth from light fields. In 2015 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/icip.2015.7350981

Geometric information such as depth obtained from light fields is finding more applications recently. Where and how to sample images to populate a light field is an important problem to maximize the usability of information gathered for depth reconstru... Read More about Online view sampling for estimating depth from light fields.

Latency aware foveated rendering in unreal engine 4 (2015)
Presentation / Conference Contribution
Swafford, N. T., Cosker, D., & Mitchell, K. (2015). Latency aware foveated rendering in unreal engine 4. In CVMP '15 Proceedings of the 12th European Conference on Visual Media Production. https://doi.org/10.1145/2824840.2824863

We contribute a foveated rendering implementation in Unreal Engine 4 (UE4) and a straight-forward metric to allow calculation of rendered foveal region sizes to compensate for overall system latency and maintain perceptual losslessness. Our system de... Read More about Latency aware foveated rendering in unreal engine 4.
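
A back-of-the-envelope version of such a metric is sketched below: the foveal region is enlarged by the maximum angular distance the gaze can drift during one latency interval. The constants (assumed peak eye velocity, base foveal angle, pixels per degree) are illustrative assumptions, not the paper's calibrated values.

import math

def foveal_radius_px(latency_ms, ppd, base_fovea_deg=5.0, eye_speed_dps=300.0):
    """ppd: display pixels per degree of visual angle.
    eye_speed_dps: assumed peak eye velocity (degrees/second) to compensate for."""
    drift_deg = eye_speed_dps * latency_ms / 1000.0   # worst-case gaze drift before the frame lands
    return (base_fovea_deg + drift_deg) * ppd

for latency in (10, 20, 50):
    print(latency, "ms ->", round(foveal_radius_px(latency, ppd=15.0)), "px radius")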

Real-time variable rigidity texture mapping (2015)
Presentation / Conference Contribution
Koniaris, C., Mitchell, K., & Cosker, D. (2015, November). Real-time variable rigidity texture mapping. Presented at Proceedings of the 12th European Conference on Visual Media Production - CVMP '15

Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are unde... Read More about Real-time variable rigidity texture mapping.

Guided ecological simulation for artistic editing of plant distributions in natural scenes (2015)
Journal Article
Bradbury, G. A., Subr, K., Koniaris, C., Mitchell, K., & Weyrich, T. (2015). Guided ecological simulation for artistic editing of plant distributions in natural scenes. The Journal of Computer Graphics Techniques, 4(4), 28-53

In this paper we present a novel approach to author vegetation cover of large natural scenes. Unlike stochastic scatter-instancing tools for plant placement (such as multi-class blue noise generators), we use a simulation based on ecological processe... Read More about Guided ecological simulation for artistic editing of plant distributions in natural scenes.

Carpet unrolling for character control on uneven terrain (2015)
Presentation / Conference Contribution
Miller, M., Holden, D., Al-Ashqar, R., Dubach, C., Mitchell, K., & Komura, T. (2015). Carpet unrolling for character control on uneven terrain. In MIG '15 Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games. https://doi.org/10.1145/2822013.

We propose a type of relationship descriptor based on carpet unrolling that computes the joint positions of a character based on the sum of relative vectors originating from a local coordinate system embedded on the surface of a carpet. Given a terra... Read More about Carpet unrolling for character control on uneven terrain.

Augmented creativity: bridging the real and virtual worlds to enhance creative play (2015)
Presentation / Conference Contribution
Zünd, F., Ryffel, M., Magnenat, S., Marra, A., Nitti, M., Kapadia, M., …Sumner, R. W. (2015). Augmented creativity: bridging the real and virtual worlds to enhance creative play. In SA '15 SIGGRAPH Asia 2015 Mobile Graphics and Interactive Application

Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as empl... Read More about Augmented creativity: bridging the real and virtual worlds to enhance creative play.

Adaptive rendering with linear predictions (2015)
Journal Article
Moon, B., Iglesias-Guitian, J. A., Yoon, S., & Mitchell, K. (2015). Adaptive rendering with linear predictions. ACM transactions on graphics, 34(4), 121:1-121:11. https://doi.org/10.1145/2766992

We propose a new adaptive rendering algorithm that enhances the performance of Monte Carlo ray tracing by reducing the noise, i.e., variance, while preserving a variety of high-frequency edges in rendered images through a novel prediction based re... Read More about Adaptive rendering with linear predictions.

Poxels: polygonal voxel environment rendering (2014)
Presentation / Conference Contribution
Miller, M., Cumming, A., Chalmers, K., Kenwright, B., & Mitchell, K. (2014). Poxels: polygonal voxel environment rendering. In Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology - VRST '14 (235-236). https://doi.org/10.1145/

We present efficient rendering of opaque, sparse, voxel environments with data amplified in local graphics memory with stream-out from a geometry shader to a cached vertex buffer pool. We show that our Poxel rendering primitive aligns with optimized r... Read More about Poxels: polygonal voxel environment rendering.

L3V: A Layered Video Format for 3D Display (2014)
Presentation / Conference Contribution
Mitchell, K., Sinclair, D., Kosek, M., & Swafford, N. (2014, November). L3V: A Layered Video Format for 3D Display. Presented at Conference on Visual Media Production, London

We present a layered video format for 3D interactive display which adapts and exploits well-developed 2D codecs with layer centric packing for real-time user perspective playback. We demonstrate our 3D video format for both handheld 3D on mobile devi... Read More about L3V: A Layered Video Format for 3D Display.

Content aware texture mapping on deformable surfaces (2014)
Patent
Koniaris, C., Cosker, D., Yang, X., Mitchell, K., & Matthews, I. (2014). Content aware texture mapping on deformable surfaces. US2014267306

A method is disclosed for reducing distortions introduced by deformation of a surface with an existing parameterization. In one embodiment, the distortions are reduced over a user-specified convex region in texture space ensuring optimization is loca... Read More about Content aware texture mapping on deformable surfaces.

Error analysis of estimators that use combinations of stochastic sampling strategies for direct illumination (2014)
Journal Article
Subr, K., Nowrouzezahrai, D., Jarosz, W., Kautz, J., & Mitchell, K. (2014). Error analysis of estimators that use combinations of stochastic sampling strategies for direct illumination. Computer Graphics Forum, 33(4), 93-102. https://doi.org/10.1111/cgf.1

We present a theoretical analysis of error of combinations of Monte Carlo estimators used in image synthesis. Importance sampling and multiple importance sampling are popular variance-reduction strategies. Unfortunately, neither strategy improves the... Read More about Error analysis of estimators that use combinations of stochastic sampling strategies for direct illumination.
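
For intuition about the estimators being analysed, this small numerical sketch compares the spread of plain importance sampling under two strategies against the multiple importance sampling balance heuristic on a toy 1D integrand (the integral of x^2 over [0,1] is 1/3); it reproduces none of the paper's analysis, only the standard estimator definitions.

import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x**2
p1 = lambda x: np.ones_like(x)          # uniform pdf on [0,1]
p2 = lambda x: 2.0 * x                  # linear pdf, sampled via sqrt of a uniform

def estimate(n):
    x1 = rng.random(n)                  # samples from strategy 1
    x2 = np.sqrt(rng.random(n))         # samples from strategy 2
    is1 = f(x1) / p1(x1)                # plain importance sampling, strategy 1
    is2 = f(x2) / p2(x2)                # plain importance sampling, strategy 2
    w1 = p1(x1) / (p1(x1) + p2(x1))     # balance heuristic weights
    w2 = p2(x2) / (p1(x2) + p2(x2))
    mis = w1 * f(x1) / p1(x1) + w2 * f(x2) / p2(x2)
    return is1.mean(), is2.mean(), mis.mean()

runs = np.array([estimate(64) for _ in range(2000)])
print("true value 0.3333; std dev of IS1, IS2, MIS:", np.round(runs.std(axis=0), 4))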

Survey of texture mapping techniques for representing and rendering volumetric mesostructure (2014)
Journal Article
Koniaris, B., Cosker, D., Yang, X., & Mitchell, K. (2014). Survey of texture mapping techniques for representing and rendering volumetric mesostructure. The Journal of Computer Graphics Techniques, 3(2), 18-60

Representation and rendering of volumetric mesostructure using texture mapping can potentially allow the display of highly detailed, animated surfaces at a low performance cost. Given the need for consistently more detailed and dynamic worlds rendere... Read More about Survey of texture mapping techniques for representing and rendering volumetric mesostructure.

Iterative image warping (2012)
Journal Article
Bowles, H., Mitchell, K., Sumner, R., Moore, J., & Gross, M. (2012). Iterative image warping. Computer Graphics Forum, 31, 237-246. https://doi.org/10.1111/j.1467-8659.2012.03002.x

Animated image sequences often exhibit a large amount of inter-frame coherence which standard rendering algorithms and pipelines are ill-equipped to exploit, limiting their efficiency. To address this inefficiency we transfer rendering results across... Read More about Iterative image warping.
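
The core trick alluded to here can be sketched as a fixed-point iteration that, for each target pixel, searches for the source pixel whose motion vector lands on it. The motion field and iteration count below are made-up stand-ins, and the paper handles convergence, disocclusions and GPU execution that this snippet ignores.

import numpy as np

def motion(p):
    # Hypothetical smooth per-pixel motion field (in pixels), standing in for
    # the velocity buffer a renderer would provide.
    return np.array([3.0 + 0.01 * p[1], -2.0 + 0.005 * p[0]])

def find_source(target, iterations=8):
    """Solve source = target - motion(source) by fixed-point iteration."""
    source = target.copy()
    for _ in range(iterations):
        source = target - motion(source)
    return source

tgt = np.array([120.0, 80.0])
src = find_source(tgt)
print("source pixel:", np.round(src, 3), "reprojects to:", np.round(src + motion(src), 3))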

Efficient Rasterization for Edge-based 3D Object Tracking on Mobile Devices (2012)
Presentation / Conference Contribution
Kissling, E., Mitchell, K., Oskam, T., & Gross, M. (2012). Efficient Rasterization for Edge-based 3D Object Tracking on Mobile Devices. In SIGGRAPH Asia 2012 Technical Briefs (12:1-12:4). https://doi.org/10.1145/2407746.2407758

Augmented reality applications on hand-held devices suffer from the limited available processing power. While methods to detect the location of artificially textured markers within the scene are commonly used, geometric properties of three-dimensiona... Read More about Efficient Rasterization for Edge-based 3D Object Tracking on Mobile Devices.

Surround Haptics: Sending Shivers Down Your Spine (2011)
Presentation / Conference Contribution
Israr, A., Poupyrev, I., Ioffreda, C., Cox, J., Gouveia, N., Bowles, H., …Williams, T. (2011). Surround Haptics: Sending Shivers Down Your Spine. In ACM SIGGRAPH 2011 Emerging Technologies (14:1-14:1). https://doi.org/10.1145/2048259.2048273

Surround Haptics is a new tactile technology that uses a low-resolution grid of inexpensive vibrating actuators to generate high-resolution, continuous, moving tactile strokes on human skin [1]. The user would not feel the discrete tactile pulses and... Read More about Surround Haptics: Sending Shivers Down Your Spine.

Runtime Implementation of Modular Radiance Transfer. (2011)
Presentation / Conference Contribution
Loos, B., Antani, L., Mitchell, K., Nowrouzezahrai, D., Jarosz, W., & Sloan, P. (2011). Runtime Implementation of Modular Radiance Transfer. In ACM SIGGRAPH 2011 Talks (59:1-59:1). https://doi.org/10.1145/2037826.2037905

Real-time rendering of indirect lighting significantly enhances the sense of realism in video games. Unfortunately, previously including such effects often required time consuming scene dependent precomputation and heavy runtime computations unsuitab... Read More about Runtime Implementation of Modular Radiance Transfer..

Light factorization for mixed-frequency shadows in augmented reality (2011)
Presentation / Conference Contribution
Nowrouzezahrai, D., Geiger, S., Mitchell, K., Sumner, R., Jarosz, W., & Gross, M. (2011). Light factorization for mixed-frequency shadows in augmented reality. In Mixed and Augmented Reality (ISMAR), 2011 10th IEEE International Symposium on (173-179). h

Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radiometric consistency. We focus on the latter of these problems and present an approach that captures and factorizes extern... Read More about Light factorization for mixed-frequency shadows in augmented reality.

Capture and analysis of racing gameplay metrics (2011)
Journal Article
Jimenez, E., Mitchell, K., & Seron, F. (2011). Capture and analysis of racing gameplay metrics. IEEE Software, 28, 46-52. https://doi.org/10.1109/MS.2011.71

This article presents a flexible, extendable system called Tracktivity that can capture gameplay metrics in any type of leaderboard-based video game. This system incorporates novel visualizations, including a dynamic competition balancing (DCB) measu... Read More about Capture and analysis of racing gameplay metrics.

OSCAM - Optimized Stereoscopic Camera Control for Interactive 3D (2011)
Journal Article
Oskam, T., Hornung, A., Bowles, H., Mitchell, K., & Gross, M. (2011). OSCAM - Optimized Stereoscopic Camera Control for Interactive 3D. ACM transactions on graphics, 30, 189:1-189:8. https://doi.org/10.1145/2070781.2024223

This paper presents a controller for camera convergence and interaxial separation that specifically addresses challenges in interactive stereoscopic applications like games. In such applications, unpredictable viewer- or object-motion often compromis... Read More about OSCAM - Optimized Stereoscopic Camera Control for Interactive 3D.

Split Second Motion Blur (2010)
Presentation / Conference Contribution
Ritchie, M., Modern, G., & Mitchell, K. (2010). Split Second Motion Blur. In ACM SIGGRAPH 2010 Talks (17:1-17:1). https://doi.org/10.1145/1837026.1837048

Motion blur is key to delivering a sense of speed in interactive video game rendering. Further, simulating accurate camera optical exposure properties and reduction of temporal aliasing brings us closer to high quality real-time rendering productions... Read More about Split Second Motion Blur.

Using active constructs in user-interfaces to object-oriented databases. (1997)
Presentation / Conference Contribution
Mitchell, K., Kennedy, J., & Barclay, P. J. (1997). Using active constructs in user-interfaces to object-oriented databases. In Proceedings [of the First] International database engineering and applications symposium, (3-12)

This paper examines the use of active constructs in the definition of user-interfaces to object-oriented databases. A development environment for user-interfaces to databases is presented which features the interactive use of active features of an ob... Read More about Using active constructs in user-interfaces to object-oriented databases..

The perspective tunnel: An inside view on smoothly integrating detail and context. (1997)
Presentation / Conference Contribution
Mitchell, K., & Kennedy, J. (1997). The perspective tunnel: An inside view on smoothly integrating detail and context. In W. Lefer, & M. Grave (Eds.), Visualization in scientific computing '97: proceedings of the Eurographics Workshop in Boulogne-sur-Mer,

The perspective tunnel, a general kind of information visualisation artefact, embodies a visual form which exploits natural human visual perception. Perspective tunnels map information on to the floor, ceiling and walls of a tunnel, so that both ever... Read More about The perspective tunnel: An inside view on smoothly integrating detail and context..

Describing and characterising visualisations. (1996)
Presentation / Conference Contribution
Kennedy, J. B., Mitchell, K. J., & Barclay, P. J. (1996). Describing and characterising visualisations. In 3rd FADIVA Workshop

A generic framework for describing and specifying interfaces to databases has been proposed [1]. Currently this framework is being used as a model for the development of an environment for the construction of user interfaces to object oriented datab... Read More about Describing and characterising visualisations..

A framework for information visualisation (1996)
Journal Article
Kennedy, J., Mitchell, K., & Barclay, P. J. (1996). A framework for information visualisation. SIGMOD record, 25, 30-34

In this paper we examine the issues involved in developing information visualisation systems and present a framework for their construction. The framework addresses the components which must be considered in providing effective visualisations. The fr... Read More about A framework for information visualisation.

DRIVE: An environment for the organised construction of user interfaces to data. (1996)
Presentation / Conference Contribution
Mitchell, K., Kennedy, J., & Barclay, P. J. (1996). DRIVE: An environment for the organised construction of user interfaces to data. In J. Kennedy, & P. J. Barclay (Eds.), Interfaces to Databases (IDS-3): Proceedings of the 3rd International Workshop on I

This paper describes a runtime user-interface development environment (UIDE) for the novel capability of interactively using and specifying user-interfaces to object-oriented databases (IDSs). A framework provides the foundation for IDSs constructed... Read More about DRIVE: An environment for the organised construction of user interfaces to data..

A framework for user-interfaces to databases (1996)
Presentation / Conference Contribution
Mitchell, K., Kennedy, J., & Barclay, P. J. (1996). A framework for user-interfaces to databases. In T. Catarci, M. F. Costabile, S. Levialdi, & G. Santucci (Eds.), Proceedings [of the] 3rd International Workshop on Advanced Visual Interfaces, AVI'96 (81

A framework for user-interfaces to databases (IDSs) is proposed which draws from existing research on human computer interaction (HCI) and database systems. The framework is described in terms of a classification of the characteristic components of a... Read More about A framework for user-interfaces to databases.

3D information visualisation: Identifying and measuring success (1995)
Presentation / Conference Contribution
Kennedy, J., Mitchell, K., Barclay, P., & Marshall, B. (1995). 3D information visualisation: Identifying and measuring success. In Proceedings of the 2nd International FADIVA Workshop, 1995

This paper presents some of our views on information visualisation and interfaces to databases with respect to the theme of the workshop.

Using a conceptual data language to describe a database and its interface (1995)
Presentation / Conference Contribution
Mitchell, K., Kennedy, J., & Barclay, P. J. (1995). Using a conceptual data language to describe a database and its interface. In C. Goble, & J. Keane (Eds.), Advances in Databases: Proceedings [of the] 13th British National Conference on Database - BNCOD

We propose a conceptual approach to defining interfaces to databases which uses the features of a fully object oriented data language to specify interface objects combined with database objects. This achieves a uniform, natural way of describing data... Read More about Using a conceptual data language to describe a database and its interface.