Learning Embodied Sound-Motion Mappings: Evaluating AI-Generated Dance Improvisation

Benedikte Wallace, Charles P. Martin, Jim Tørresen, Kristian Nymoen

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    3 Citations (Scopus)


    Through dance, a wide range of emotions can be expressed. As virtual agents and robots continue to become part of our daily lives, the need for them to efficiently convey emotion and intent increases. When trained to dance, to what extent can AI learn to model the tacit mappings between sound and motion? Here, we explore the creative capacity of a generative model trained on 3D motion capture recordings of improvised dance. We perform a perceptual judgment experiment wherein respondents rate movement generated by our model as well as human performances. While the sound-motion mappings remain somewhat elusive, particularly when compared to examples of human dance, our study shows that in certain aspects related to perceived dance-likeness and expressivity, the model successfully mimics human dance movement. By employing a perceptual study to evaluate our generative model, we aim to further our ability to understand the affordances and limitations of creative AI.

    Original language: English
    Title of host publication: C and C 2021 - Proceedings of the 13th Conference on Creativity and Cognition
    Publisher: Association for Computing Machinery
    ISBN (Electronic): 9781450383769
    Publication status: Published - 22 Jun 2021
    Event: 13th Conference on Creativity and Cognition, C and C 2021 - Virtual, Online, Italy
    Duration: 22 Jun 2021 – 23 Jun 2021

    Publication series

    Name: ACM International Conference Proceeding Series

    Conference: 13th Conference on Creativity and Cognition, C and C 2021
    City: Virtual, Online


