Research Repository

All Outputs (12)

DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences (2024)
Presentation / Conference Contribution
Koniaris, B., Sinclair, D., & Mitchell, K. (2024, March). DanceMark: An open telemetry framework for latency sensitive real-time networked immersive experiences. Presented at IEEE VR Workshop on Open Access Tools and Libraries for Virtual Reality, Orlando, FL

DanceMark is an open telemetry framework designed for latency-sensitive real-time networked immersive experiences, focusing on online dancing in virtual reality within the DanceGraph platform. The goal is to minimize end-to-end latency and enhance us...

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Paper presented at SIGCHI GenAI in UGC Workshop, Honolulu, Hawaii

We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respon...

Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments (2024)
Presentation / Conference Contribution
Casas, L., Mitchell, K., Tamariz, M., Hannah, S., Sinclair, D., Koniaris, B., & Kennedy, J. (2024, May). Design Considerations of Voice Articulated Generative AI Virtual Reality Dance Environments. Presented at CHI24 - Generative AI in User-Generated Content, Honolulu, Hawaii

We consider practical and social considerations of collaborating verbally with colleagues and friends, not confined by physical distance, but through seamless networked telepresence to interactively create shared virtual dance environments. In respo...

DanceGraph: A Complementary Architecture for Synchronous Dancing Online (2023)
Presentation / Conference Contribution
Sinclair, D., Ademola, A. V., Koniaris, B., & Mitchell, K. (2023, May). DanceGraph: A Complementary Architecture for Synchronous Dancing Online. Paper presented at 36th International Computer Animation & Social Agents (CASA) 2023, Limassol, Cyprus

DanceGraph is an architecture for synchronized online dancing overcoming the latency of networked body pose sharing. We break down this challenge by developing a real-time bandwidth-efficient architecture to minimize lag and reduce the timeframe of...

Embodied online dance learning objectives of CAROUSEL + (2021)
Presentation / Conference Contribution
Mitchell, K., Koniaris, B., Tamariz, M., Kennedy, J., Cheema, N., Mekler, E., Van Der Linden, P., Herrmann, E., Hämäläinen, P., McGregor, I., Slusallek, P., & Mac Williams, C. (2021, March). Embodied online dance learning objectives of CAROUSEL +. Presented at 2021 IEEE VR 6th Annual Workshop on K-12+ Embodied Learning through Virtual and Augmented Reality (KELVAR), Lisbon, Portugal

This is a position paper concerning the embodied dance learning objectives of the CAROUSEL + project, which aims to impact how online immersive technologies influence multiuser interaction and communication with a focus on dancing and learning danc...

Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video (2017)
Presentation / Conference Contribution
Chitalu, F. M., Koniaris, B., & Mitchell, K. (2017, December). Method for Efficient CPU-GPU Streaming for Walkthrough of Full Motion Lightfield Video. Presented at 14th European Conference on Visual Media Production (CVMP 2017), London, United Kingdom

Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in a compressed form, do not typically fit in GPU or main memory unless the capture area, resolution or duration is sufficie...

IRIDiuM+: deep media storytelling with non-linear light field video (2017)
Presentation / Conference Contribution
Kosek, M., Koniaris, B., Sinclair, D., Markova, D., Rothnie, F., Smoot, L., & Mitchell, K. (2017, July). IRIDiuM+: deep media storytelling with non-linear light field video. Presented at ACM SIGGRAPH 2017 VR Village on - SIGGRAPH '17, Los Angeles, California

We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media (Rose 2012) aim is to allow for guests to physically enter rendered movies with novel non-linear storytelling capability. With the ab...

Real-time rendering with compressed animated light fields. (2017)
Presentation / Conference Contribution
Koniaris, B., Kosek, M., Sinclair, D., & Mitchell, K. (2017, May). Real-time rendering with compressed animated light fields. Presented at 43rd Graphics Interface Conference

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive repr...

IRIDiuM: immersive rendered interactive deep media (2016)
Presentation / Conference Contribution
Koniaris, B., Israr, A., Mitchell, K., Huerta, I., Kosek, M., Darragh, K., …Moon, B. (2016). IRIDiuM: immersive rendered interactive deep media. https://doi.org/10.1145/2929490.2929496

Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism,...

Stereohaptics: a haptic interaction toolkit for tangible virtual experiences (2016)
Presentation / Conference Contribution
Israr, A., Zhao, S., McIntosh, K., Schwemler, Z., Fritz, A., Mars, J., Bedford, J., Frisson, C., Huerta, I., Kosek, M., Koniaris, B., & Mitchell, K. (2016, July). Stereohaptics: a haptic interaction toolkit for tangible virtual experiences. Presented at ACM SIGGRAPH 2016 Studio on - SIGGRAPH '16, Anaheim, CA, US

With a recent rise in the availability of affordable head mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide seamlessly embodied virtual experience in areas such as education, entertainment,...

User, metric, and computational evaluation of foveated rendering methods (2016)
Presentation / Conference Contribution
Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., & Mitchell, K. (2016, July). User, metric, and computational evaluation of foveated rendering methods. Presented at the ACM Symposium on Applied Perception (SAP '16)

Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. W...

Real-time variable rigidity texture mapping (2015)
Presentation / Conference Contribution
Koniaris, C., Mitchell, K., & Cosker, D. (2015, November). Real-time variable rigidity texture mapping. Presented at the 12th European Conference on Visual Media Production (CVMP '15)

Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are unde...