Abstract
In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of perceptual qualities of short sound objects through body motion. The model is trained on a dataset of single-point sound-tracings with multimodal input data and learns to generate novel tracings. We use a second neural network classifier to show that the input sound can be identified from generated tracings. This is part of an ongoing research effort to examine the complex correlations between sound and movement and the possibility of modelling these relationships using deep learning.
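The paper's implementation is not reproduced on this page, but a minimal sketch may clarify what a mixture density recurrent network is: a recurrent layer whose outputs parameterise a Gaussian mixture over the next motion point, trained by negative log-likelihood. The sketch below uses PyTorch with a diagonal-covariance mixture; the framework, layer sizes, the two-dimensional tracing representation, and all names (`MDRNN`, `mdn_loss`) are illustrative assumptions, not the authors' code.

```python
# A minimal MDRNN sketch, assuming 2-D tracing points and PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDRNN(nn.Module):
    def __init__(self, input_dim=2, hidden_dim=64, n_mixtures=5):
        super().__init__()
        self.n_mixtures = n_mixtures
        self.input_dim = input_dim
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        # Heads for mixture weights (pi), means (mu), and scales (sigma).
        self.pi = nn.Linear(hidden_dim, n_mixtures)
        self.mu = nn.Linear(hidden_dim, n_mixtures * input_dim)
        self.sigma = nn.Linear(hidden_dim, n_mixtures * input_dim)

    def forward(self, x, state=None):
        h, state = self.rnn(x, state)                    # (B, T, H)
        log_pi = F.log_softmax(self.pi(h), dim=-1)       # (B, T, K)
        mu = self.mu(h).view(*h.shape[:2], self.n_mixtures, self.input_dim)
        sigma = F.softplus(self.sigma(h)).view_as(mu) + 1e-4
        return log_pi, mu, sigma, state

def mdn_loss(log_pi, mu, sigma, target):
    """Negative log-likelihood of target under the Gaussian mixture."""
    target = target.unsqueeze(-2)                        # (B, T, 1, D)
    comp = torch.distributions.Normal(mu, sigma)
    # Sum log-densities over output dims, then mix over components.
    log_prob = comp.log_prob(target).sum(-1) + log_pi    # (B, T, K)
    return -torch.logsumexp(log_prob, dim=-1).mean()

# Usage: predict each next tracing point from the previous ones.
model = MDRNN()
seq = torch.randn(8, 50, 2)                              # toy batch of tracings
log_pi, mu, sigma, _ = model(seq[:, :-1])
loss = mdn_loss(log_pi, mu, sigma, seq[:, 1:])
loss.backward()
```

At generation time one would sample a mixture component from the weights, draw a point from that component's Gaussian, and feed it back as the next input, yielding a novel tracing.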
Original language | English |
---|---|
Title of host publication | MOCO '19: Proceedings of the 6th International Conference on Movement and Computing |
Place of Publication | New York |
Publisher | ACM |
Pages | 1-4 |
Number of pages | 4 |
ISBN (Print) | 978-1-4503-7654-9 |
Publication status | Published - 2019 |
Event | 6th International Conference on Movement and Computing - Tempe, AZ, USA. Duration: 10 Oct 2019 → 12 Oct 2019
Conference
Conference | 6th International Conference on Movement and Computing |
---|---|
Period | 10/10/19 → 12/10/19
Other | October 10 - 12, 2019 |