Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks

Benedikte Wallace, Charles Martin, Kristian Nymoen

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of perceptual qualities of short sound objects through body motion. The model is trained on a dataset of single point sound-tracings with multimodal input data and learns to generate novel tracings. We use a second neural network classifier to show that the input sound can be identified from generated tracings. This is part of an ongoing research effort to examine the complex correlations between sound and movement and the possibility of modelling these relationships using deep learning.
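
    The abstract describes the MDRNN only at a high level. As a rough orientation, the sketch below shows one common way such a model is built: a recurrent layer whose outputs parameterise a Gaussian mixture over the next motion frame, trained by negative log-likelihood. This is a minimal PyTorch illustration under assumed layer sizes and names, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MDRNN(nn.Module):
    """Minimal mixture density recurrent network: an LSTM whose output
    parameterises a Gaussian mixture over the next motion frame."""
    def __init__(self, input_dim, hidden_dim, output_dim, n_mixtures):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.n_mixtures = n_mixtures
        self.output_dim = output_dim
        # One set of mixture weights, means and scales per component.
        self.pi = nn.Linear(hidden_dim, n_mixtures)
        self.mu = nn.Linear(hidden_dim, n_mixtures * output_dim)
        self.log_sigma = nn.Linear(hidden_dim, n_mixtures * output_dim)

    def forward(self, x, hidden=None):
        # x: (batch, time, input_dim) sequence of audio/motion features.
        h, hidden = self.rnn(x, hidden)
        pi = torch.log_softmax(self.pi(h), dim=-1)        # mixture log-weights
        mu = self.mu(h).view(*h.shape[:2], self.n_mixtures, self.output_dim)
        sigma = torch.exp(self.log_sigma(h)).view_as(mu)  # positive scales
        return pi, mu, sigma, hidden

def mdn_nll(pi, mu, sigma, target):
    """Negative log-likelihood of target motion frames under the mixture."""
    target = target.unsqueeze(-2)                         # broadcast over components
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(target).sum(-1) + pi         # per-component log-likelihood
    return -torch.logsumexp(log_prob, dim=-1).mean()
```

    At generation time one would sample a component from the mixture weights, draw a motion frame from that component's Gaussian, and feed it back to the network together with the next audio features to produce a tracing step by step.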
    Original language: English
    Title of host publication: MOCO '19: Proceedings of the 6th International Conference on Movement and Computing
    Place of publication: New York
    Publisher: ACM
    Pages: 1-4
    Number of pages: 4
    ISBN (Print): 978-1-4503-7654-9
    DOIs
    Publication status: Published - 2019
    Event: 6th International Conference on Movement and Computing - Tempe, AZ, USA
    Duration: 1 Jan 2019 → …

    Conference

    Conference: 6th International Conference on Movement and Computing
    Period: 1/01/19 → …
    Other: October 10 - 12, 2019
