Research Repository

All Outputs (13)

Scottish Mountain Soundscapes (2024)
Presentation / Conference Contribution
Di Donato, B., & McGregor, I. (2024, June). Scottish Mountain Soundscapes. Invited seminar presented at UniMont, University of Milan, Edolo, Italy

Mountain environments provide an immersive auditory experience shaped by natural forces and human interactions. This seminar presents results from an ongoing project that delves into people’s experiences of mountain soundscapes, with a focus on the i...

The digital Foley: what Foley artists say about using audio synthesis (2024)
Presentation / Conference Contribution
Di Donato, B., & McGregor, I. (2024, April). The digital Foley: what Foley artists say about using audio synthesis. Presented at 2024 AES 6th International Conference on Audio for Games, Tokyo, Japan

Foley is a sound production technique where organicity and authenticity in sound creation are key to fostering creativity. Audio synthesis, Artificial Intelligence (AI) and Interaction Design (IXD) have been explored by the community to investigate t...

Building an Embodied Musicking Dataset for co-creative music-making (2024)
Presentation / Conference Contribution
Vear, C., Poltronieri, F., Di Donato, B., Zhang, Y., Benerradi, J., Hutchinson, S., Turowski, P., Shell, J., & Malekmohamadi, H. (2024, April). Building an Embodied Musicking Dataset for co-creative music-making. Presented at Evostar 2024: The Leading Eur

In this paper, we present our findings of the design, development and deployment of a proof-of-concept dataset that captures some of the physiological, musicological, and psychological aspects of embodied musicking. After outlining the conceptual ele...

Remote Acoustic Soundscape evaluation using Ambisonic recordings (2023)
Presentation / Conference Contribution
Di Donato, B., & McGregor, I. (2023, September). Remote Acoustic Soundscape evaluation using Ambisonic recordings. Paper presented at Basic Auditory Science 2023, London, UK

The Acoustic Soundscape ISO standard has been used in previous work to assess soundscapes during soundwalks, yielding varied results and contributions to the field. However, this approach faces several challenges, such as logist...

Proceedings of the 18th International Audio Mostly Conference (2023)
Presentation / Conference Contribution
(2023). Proceedings of the 18th International Audio Mostly Conference. https://doi.org/10.1145/3616195

Audio Mostly is an interdisciplinary conference on design and experience of interaction with sound that prides itself on embracing applied theory and reflective practice. Its annual gatherings bring together thinkers and doers from academia and indus...

An exploration of diversity in Embodied Music Interaction (2023)
Presentation / Conference Contribution
Di Donato, B. (2023, May). An exploration of diversity in Embodied Music Interaction. Presented at CHIME: Music Interaction and Physical Disability, University of Edinburgh, Edinburgh

This is an exploration of diversity in Embodied Music Interaction through three case studies entitled (i) Accessible interactive digital signage for the visually impaired, (ii) Human-Sound Interaction and (iii) BSL in Embodied Music Interaction (EMI)...

Procedural Audio For Virtual Environments Workshop (2022)
Presentation / Conference Contribution
Di Donato, B., & Selfridge, R. (2022, September). Procedural Audio For Virtual Environments Workshop. Presented at Audio Mostly, St. Pölten, Austria

Sound design is a crucial aspect of interactive virtual environments (VEs). Often, this activity is constrained by the tools we use or our perception of that medium. In this on-site workshop, we present methods to model sound-producing...

tiNNbre: a timbre-based musical agent (2021)
Presentation / Conference Contribution
Bolzoni, A., Di Donato, B., & Laney, R. (2021). tiNNbre: a timbre-based musical agent. In 2nd Nordic Sound and Music Conference

In this paper, we present tiNNbre, a generative music prototype-system that reacts to timbre gestures. By timbre gesture we mean a sonic (as opposed to body) gesture that mainly conveys artistic meaning through timbre rather than other sonic properti...

Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis (2021)
Presentation / Conference Contribution
Zbyszyński, M., Di Donato, B., Visi, F., & Tanaka, A. (2021). Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis. In R. Kronland-Martinet, S. Ystad, & M. Aramaki (Eds.), Perception, Representations,

This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative syn...

Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach (2020)
Presentation / Conference Contribution
Di Donato, B., Dewey, C., & Michailidis, T. (2020). Human-Sound Interaction: Towards a Human-Centred Sonic Interaction Design approach. In MOCO '20: Proceedings of the 7th International Conference on Movement and Computing. https://doi.org/10.1145/340195

In this paper, we explore human-centered interaction design aspects that determine the realisation and appreciation of musical works (installations, composition and performance), interfaces for sound design and musical expression, augmented instrumen...

Designing Gestures for Continuous Sonic Interaction (2019)
Presentation / Conference Contribution
Tanaka, A., Di Donato, B., Zbyszynski, M., & Roks, G. (2019). Designing Gestures for Continuous Sonic Interaction. In M. Queiroz, & A. X. Sedó (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (180-185). https:/

This paper presents a system that allows users to quickly try different ways to train neural networks and temporal modeling techniques to associate arm gestures with time varying sound. We created a software framework for this, and designed three int...

Myo Mapper: a Myo armband to OSC mapper (2018)
Presentation / Conference Contribution
Di Donato, B., Bullock, J., & Tanaka, A. (2018). Myo Mapper: a Myo armband to OSC mapper. In D. Bowman, & L. Dahl (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (138-143). https://doi.org/10.5281/zenodo.130270

Myo Mapper is a free and open source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It represents a ‘quick and easy’ solution for exploring the Myo’s potential for realising new int...

MyoSpat: A hand-gesture controlled system for sound and light projections manipulation (2017)
Presentation / Conference Contribution
Di Donato, B., Dooley, J., Hockman, J., Bullock, J., & Hall, S. (2017). MyoSpat: A hand-gesture controlled system for sound and light projections manipulation. In Proceedings of the 2017 International Computer Music Conference (335-340)

We present MyoSpat, an interactive system that enables performers to control sound and light projections through hand-gestures. MyoSpat is designed and developed using the Myo armband as an input device and Pure Data (Pd) as an audiovisual engine. Th...