
Research Repository


All Outputs (8)

The digital Foley: what Foley artists say about using audio synthesis (2024)
Conference Proceeding
Di Donato, B., & McGregor, I. (2024). The digital Foley: what Foley artists say about using audio synthesis. In Proceedings of the 2024 AES 6th International Conference on Audio for Games

Foley is a sound production technique where organicity and authenticity in sound creation are key to fostering creativity. Audio synthesis, Artificial Intelligence (AI) and Interaction Design (IXD) have been explored by the community to investigate t...

Proceedings of the 18th International Audio Mostly Conference (2023)
Conference Proceeding
(2023). Proceedings of the 18th International Audio Mostly Conference. https://doi.org/10.1145/3616195

Audio Mostly is an interdisciplinary conference on design and experience of interaction with sound that prides itself on embracing applied theory and reflective practice. Its annual gatherings bring together thinkers and doers from academia and indus...

tiNNbre: a timbre-based musical agent (2021)
Conference Proceeding
Bolzoni, A., Di Donato, B., & Laney, R. (2021). tiNNbre: a timbre-based musical agent. In 2nd Nordic Sound and Music Conference

In this paper, we present tiNNbre, a generative music prototype-system that reacts to timbre gestures. By timbre gesture we mean a sonic (as opposed to body) gesture that mainly conveys artistic meaning through timbre rather than other sonic properti...

Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis (2021)
Conference Proceeding
Zbyszyński, M., Di Donato, B., Visi, F., & Tanaka, A. (2021). Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis. In R. Kronland-Martinet, S. Ystad, & M. Aramaki (Eds.), Perception, Representations, Image, Sound, Music - 14th International Symposium, CMMR 2019, Marseille, France, October 14–18, 2019, Revised Selected Papers (600-622). https://doi.org/10.1007/978-3-030-70210-6_39

This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative syn...

Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach (2020)
Conference Proceeding
Di Donato, B., Dewey, C., & Michailidis, T. (2020). Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach. In MOCO '20: Proceedings of the 7th International Conference on Movement and Computing. https://doi.org/10.1145/3401956.3404233

In this paper, we explore human-centred interaction design aspects that determine the realisation and appreciation of musical works (installations, composition and performance), interfaces for sound design and musical expression, augmented instrumen...

Designing Gestures for Continuous Sonic Interaction (2019)
Conference Proceeding
Tanaka, A., Di Donato, B., Zbyszynski, M., & Roks, G. (2019). Designing Gestures for Continuous Sonic Interaction. In M. Queiroz, & A. X. Sedó (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (180-185). https://doi.org/10.5281/zenodo.3672916

This paper presents a system that allows users to quickly try different ways to train neural networks and temporal modeling techniques to associate arm gestures with time varying sound. We created a software framework for this, and designed three int...

Myo Mapper: a Myo armband to OSC mapper (2018)
Conference Proceeding
Di Donato, B., Bullock, J., & Tanaka, A. (2018). Myo Mapper: a Myo armband to OSC mapper. In D. Bowman, & L. Dahl (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (138-143). https://doi.org/10.5281/zenodo.1302705

Myo Mapper is a free and open source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It represents a ‘quick and easy’ solution for exploring the Myo’s potential for realising new int...

MyoSpat: A hand-gesture controlled system for sound and light projections manipulation (2017)
Conference Proceeding
Di Donato, B., Dooley, J., Hockman, J., Bullock, J., & Hall, S. (2017). MyoSpat: A hand-gesture controlled system for sound and light projections manipulation. In Proceedings of the 2017 International Computer Music Conference (335-340)

We present MyoSpat, an interactive system that enables performers to control sound and light projections through hand-gestures. MyoSpat is designed and developed using the Myo armband as an input device and Pure Data (Pd) as an audiovisual engine. Th...