Gesture-Timbre Space: Multidimensional Feature Mapping Using Machine Learning and Concatenative Synthesis

Authors: Zbyszyński, Michael; Di Donato, Balandino; Visi, Federico; Tanaka, Atau

Editors: Richard Kronland-Martinet; Sølvi Ystad; Mitsuko Aramaki
Abstract
This chapter explores three systems for mapping embodied gesture, acquired with electromyography and motion sensing, to sound synthesis. A pilot study using granular synthesis is presented, followed by studies employing corpus-based concatenative synthesis, where small sound units are organized by derived timbral features. We use interactive machine learning in a mapping-by-demonstration paradigm to create regression models that map high-dimensional gestural data to timbral data, without dimensionality reduction, in three distinct workflows. First, static regression directly associates individual sound units with static poses (anchor points). Second, whole regression uses a sound tracing method that leverages our intuitive associations between time-varying sound and embodied movement. Third, an assisted interactive machine learning workflow extends interactive machine learning with artificial agents and reinforcement learning. We discuss the benefits of organizing the sound corpus using self-organizing maps to address corpus sparseness, and the potential of regression-based mapping at different points in a musical workflow: gesture design, sound design, and mapping design. These systems support expressive performance by creating gesture-timbre spaces that maximize sonic diversity while maintaining coherence, enabling reliable reproduction of target sounds as well as improvisatory exploration of a sonic corpus. They have been made available to the research community and have been used by the authors in concert performance.
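The mapping pipeline the abstract describes — a regression model trained by demonstration that maps gesture features to timbral descriptors, which then select sound units from an analysed corpus — can be illustrated with a short sketch. This is a minimal illustration assuming a scikit-learn-style workflow; the feature sets, dimensions, and model choice (a small MLP regressor with nearest-neighbour unit selection) are assumptions for illustration, not the authors' implementation.

```python
# Sketch of regression-based gesture-to-timbre mapping with
# nearest-neighbour unit selection. All names, dimensions, and data
# are illustrative placeholders, not the chapter's actual system.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Demonstration data from a mapping-by-demonstration session:
# gesture frames (e.g. 8 EMG channels + 4 orientation values) paired
# with timbral descriptors of the sounds heard while demonstrating.
gesture_train = rng.random((200, 12))   # placeholder gesture features
timbre_train = rng.random((200, 5))     # e.g. loudness, centroid, flatness

# Regression model: high-dimensional gesture in, timbral coordinates out,
# with no intermediate dimensionality reduction.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(gesture_train, timbre_train)

# Analysed corpus: one row of timbral descriptors per sound unit.
corpus_features = rng.random((1000, 5))  # placeholder corpus analysis
index = NearestNeighbors(n_neighbors=1).fit(corpus_features)

def select_unit(gesture_frame: np.ndarray) -> int:
    """Map one incoming gesture frame to the index of the closest unit."""
    target = model.predict(gesture_frame.reshape(1, -1))
    _, unit_id = index.kneighbors(target)
    return int(unit_id[0, 0])

# Example: pick a corpus unit for a new gesture frame.
print(select_unit(rng.random(12)))
```

The nearest-neighbour lookup is where the corpus organization discussed in the chapter would matter: with a sparse corpus, reorganizing units via a self-organizing map can make lookups across the timbral space behave more smoothly.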
Presentation Conference Type | Conference Paper (Published)
---|---
Conference Name | 14th International Symposium, CMMR 2019
Start Date | Oct 14, 2019
End Date | Oct 18, 2019
Acceptance Date | May 1, 2020
Online Publication Date | Mar 10, 2021
Publication Date | 2021
Deposit Date | Aug 9, 2021
Publisher | Springer
Pages | 600–622
Series Title | Lecture Notes in Computer Science
Series Number | 12631
Series ISSN | 0302-9743
Book Title | Perception, Representations, Image, Sound, Music - 14th International Symposium, CMMR 2019, Marseille, France, October 14–18, 2019, Revised Selected Papers
ISBN | 978-3-030-70209-0
DOI | https://doi.org/10.1007/978-3-030-70210-6_39
Keywords | Gestural interaction, Interactive machine learning, Reinforcement learning, Sonic interaction design, Concatenative synthesis, Human-computer interaction
Public URL | http://researchrepository.napier.ac.uk/Output/2791913
Publisher URL | https://www.springer.com/gp/book/9783030702090