
Research Repository


All Outputs (8)

An exploration of diversity in Embodied Music Interaction (2023)
Presentation / Conference Contribution
Di Donato, B. (2023, May). An exploration of diversity in Embodied Music Interaction. Presented at CHIME: Music Interaction and Physical Disability, University of Edinburgh, Edinburgh

This is an exploration of diversity in Embodied Music Interaction through three case studies entitled (i) Accessible interactive digital signage for the visually impaired, (ii) Human-Sound Interaction and (iii) BSL in Embodied Music Interaction (EMI)...

Procedural Audio For Virtual Environments Workshop (2022)
Presentation / Conference Contribution
Di Donato, B., & Selfridge, R. (2022, September). Procedural Audio For Virtual Environments Workshop. Presented at Audio Mostly, St. Pölten, Austria

Sound design is a crucial aspect of interactive virtual environments (VEs). Often, this activity is constrained by the tools we use or by our perception of the medium. In this on-site workshop, we present methods to model sound-producing...
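To give a flavour of what procedural audio means in practice, here is a minimal, self-contained sketch (not taken from the workshop materials) that synthesises a wind-like texture from band-pass-filtered noise with a slowly drifting gust envelope; all filenames and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

SR = 44100          # sample rate in Hz
DUR = 5.0           # seconds of audio to generate

# White noise is the raw energy source for many procedural wind models.
noise = np.random.uniform(-1.0, 1.0, int(SR * DUR))

# A slowly drifting low-frequency envelope simulates gusts.
t = np.arange(len(noise)) / SR
gusts = 0.5 + 0.5 * np.sin(2 * np.pi * 0.2 * t + np.sin(2 * np.pi * 0.07 * t))

# Band-pass filtering the noise around a few hundred Hz gives the
# characteristic "whoosh"; narrower bands sound more whistly.
b, a = butter(2, [300 / (SR / 2), 900 / (SR / 2)], btype="band")
wind = lfilter(b, a, noise) * gusts

# Normalise and write to disk as 16-bit PCM.
wind /= np.max(np.abs(wind))
wavfile.write("wind.wav", SR, (wind * 32767).astype(np.int16))
```

The appeal of the procedural approach is visible even here: gust rate, band centre and bandwidth are parameters that can be driven live by the virtual environment rather than baked into a recording.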

tiNNbre: a timbre-based musical agent (2021)
Presentation / Conference Contribution
Bolzoni, A., Di Donato, B., & Laney, R. (2021). tiNNbre: a timbre-based musical agent. In 2nd Nordic Sound and Music Conference

In this paper, we present tiNNbre, a generative music prototype system that reacts to timbre gestures. By timbre gesture we mean a sonic (as opposed to bodily) gesture that conveys artistic meaning mainly through timbre rather than through other sonic properties...
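For context, a timbre-driven agent needs a compact timbre representation to react to. The sketch below is a generic illustration of such a front end, not tiNNbre's actual pipeline: it extracts MFCC statistics from a hypothetical recording "gesture.wav" using librosa.

```python
import librosa
import numpy as np

# Load a short "timbre gesture" recording (hypothetical file name).
y, sr = librosa.load("gesture.wav", sr=22050, mono=True)

# MFCCs are a common compact timbre descriptor; a timbre-driven agent
# could condition its generative output on trajectories like this.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Summarise the gesture as per-coefficient means and variances,
# giving a fixed-size vector a model can consume.
descriptor = np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])
print(descriptor.shape)  # (26,)
```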

Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis (2021)
Presentation / Conference Contribution
Zbyszyński, M., Di Donato, B., Visi, F., & Tanaka, A. (2021). Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis. In R. Kronland-Martinet, S. Ystad, & M. Aramaki (Eds.), Perception, Representations,

This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative synthesis...
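The unit-selection step at the heart of corpus-based concatenative synthesis can be illustrated as a nearest-neighbour lookup in a timbre descriptor space. The sketch below uses random placeholder descriptors and assumes the gesture features have already been projected into that space (e.g. by a trained regression model); it is a generic illustration, not the chapter's systems.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical corpus: each row is a timbre descriptor (e.g. MFCC means)
# for one short audio unit; in practice these come from corpus analysis.
rng = np.random.default_rng(0)
corpus_descriptors = rng.normal(size=(500, 13))

# Index the corpus so lookups are cheap at interaction rate.
index = NearestNeighbors(n_neighbors=1).fit(corpus_descriptors)

def select_unit(gesture_features: np.ndarray) -> int:
    """Map a gesture-derived feature vector (assumed to live in the same
    13-D descriptor space as the corpus) to the closest corpus unit."""
    _, idx = index.kneighbors(gesture_features.reshape(1, -1))
    return int(idx[0, 0])

# One incoming gesture frame -> one unit to play back.
print(select_unit(rng.normal(size=13)))
```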

Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach (2020)
Presentation / Conference Contribution
Di Donato, B., Dewey, C., & Michailidis, T. (2020). Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach. In MOCO '20: Proceedings of the 7th International Conference on Movement and Computing. https://doi.org/10.1145/340195

In this paper, we explore human-centred interaction design aspects that determine the realisation and appreciation of musical works (installations, composition and performance), interfaces for sound design and musical expression, augmented instruments...

Designing Gestures for Continuous Sonic Interaction (2019)
Presentation / Conference Contribution
Tanaka, A., Di Donato, B., Zbyszynski, M., & Roks, G. (2019). Designing Gestures for Continuous Sonic Interaction. In M. Queiroz, & A. X. Sedó (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (180-185)

This paper presents a system that allows users to quickly try different ways of training neural networks and temporal modelling techniques to associate arm gestures with time-varying sound. We created a software framework for this and designed three int...
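The underlying technique, regression from gesture features to synthesis parameters, can be sketched with a small multilayer perceptron on toy data. This is an illustration of the general approach, not the paper's actual software framework; all shapes and feature names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy training set (hypothetical): each input row is a frame of arm-gesture
# features (e.g. IMU orientation + EMG amplitudes), each output row is the
# synth parameter vector demonstrated for that frame.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))        # 200 frames, 8 gesture features
y = rng.uniform(size=(200, 3))       # 3 sound parameters in [0, 1]

# A small multilayer perceptron learns the gesture -> sound mapping
# by regression, as in interactive machine learning mapping tools.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

# At performance time, each incoming gesture frame yields parameters
# that can be sent on to a synthesiser.
frame = rng.normal(size=(1, 8))
print(model.predict(frame))          # e.g. [[pitch, brightness, gain]]
```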

Myo Mapper: a Myo armband to OSC mapper (2018)
Presentation / Conference Contribution
Di Donato, B., Bullock, J., & Tanaka, A. (2018). Myo Mapper: a Myo armband to OSC mapper. In D. Bowman, & L. Dahl (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (138-143). https://doi.org/10.5281/zenodo.130270

Myo Mapper is a free and open-source cross-platform application that maps data from the Myo armband gestural device into Open Sound Control (OSC) messages. It represents a ‘quick and easy’ solution for exploring the Myo’s potential for realising new int...
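To show what armband-data-as-OSC looks like on the wire, here is a minimal sender built with the python-osc library. The port number and address patterns are illustrative assumptions for the example, not Myo Mapper's documented namespace.

```python
from pythonosc.udp_client import SimpleUDPClient

# Send to a local OSC receiver (Pure Data, Max, SuperCollider, ...).
client = SimpleUDPClient("127.0.0.1", 7400)   # port is an assumption

# Illustrative messages: normalised EMG and orientation values such as a
# Myo-to-OSC bridge might emit. These address patterns are made up for
# the example.
client.send_message("/myo/emg", [0.12, 0.40, 0.05, 0.33, 0.27, 0.18, 0.09, 0.22])
client.send_message("/myo/orientation", [0.5, -0.1, 0.8])
```

Any OSC-aware audio environment listening on that port can then use the values directly as synthesis or mapping inputs.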

MyoSpat: A hand-gesture controlled system for sound and light projections manipulation (2017)
Presentation / Conference Contribution
Di Donato, B., Dooley, J., Hockman, J., Bullock, J., & Hall, S. (2017). MyoSpat: A hand-gesture controlled system for sound and light projections manipulation. In Proceedings of the 2017 International Computer Music Conference (335-340)

We present MyoSpat, an interactive system that enables performers to control sound and light projections through hand gestures. MyoSpat is designed and developed using the Myo armband as an input device and Pure Data (Pd) as an audiovisual engine. Th...
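One core mapping in a system of this kind is from hand position to spatialisation gains. Below is a generic equal-power panning sketch; MyoSpat itself is implemented in Pure Data, so this Python fragment only illustrates the mapping idea, not the system's implementation.

```python
import numpy as np

def equal_power_pan(x: float) -> tuple[float, float]:
    """Map a normalised hand position x in [0, 1] (0 = left, 1 = right)
    to left/right gains using an equal-power law, the kind of
    gesture-to-spatialisation mapping such a system performs."""
    theta = x * np.pi / 2
    return float(np.cos(theta)), float(np.sin(theta))

for x in (0.0, 0.5, 1.0):
    print(x, equal_power_pan(x))   # gains satisfy L**2 + R**2 == 1
```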