Towards Movement Generation with Audio Features

Benedikte Wallace, Charles Martin, Jim Torresen, Kristian Nymoen

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    Sound and movement are closely coupled, particularly in dance. Certain audio features have been found to affect the way we move to music. Is this relationship between sound and movement something which can be modelled using machine learning? This work presents initial experiments wherein high-level audio features calculated from a set of music pieces are included in a movement generation model trained on motion capture recordings of improvised dance. Our results indicate that the model learns to generate realistic dance movements which vary depending on the audio features.
    Original language: English
    Title of host publication: Proceedings of the 11th International Conference on Computational Creativity
    Editors: F. Amílcar Cardoso, Penousal Machado, Tony Veale and João Miguel Cunha
    Place of Publication: Coimbra, Portugal
    Publisher: Association for Computational Creativity
    Pages: 284-287
    ISBN (Print): 978-989-54160-2-8
    Publication status: Published - 2020
    Event: 11th International Conference on Computational Creativity - Coimbra, Portugal
    Duration: 1 Jan 2020 → …

    Conference

    Conference: 11th International Conference on Computational Creativity
    Period: 1/01/20 → …
    Other: September 7-11
