Abstract
Sound and movement are closely coupled, particularly in dance. Certain audio features have been found to affect the way we move to music. Can this relationship between sound and movement be modelled using machine learning? This work presents initial experiments wherein high-level audio features calculated from a set of music pieces are included in a movement generation model trained on motion capture recordings of improvised dance. Our results indicate that the model learns to generate realistic dance movements that vary depending on the audio features.
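The abstract does not detail the model itself, but the core idea, conditioning a motion-generation model on per-frame audio features, can be sketched briefly. Below is a minimal, illustrative PyTorch example of one way such conditioning could work; the LSTM architecture, feature dimensions, and all names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AudioConditionedDanceModel(nn.Module):
    """Toy next-pose predictor conditioned on per-frame audio features.

    Illustrative only: the paper's actual architecture is not given
    in the abstract, so every choice here is an assumption.
    """

    def __init__(self, pose_dim=63, audio_dim=5, hidden_dim=256):
        super().__init__()
        # Concatenating pose and audio at every time step lets the
        # audio features influence each step of the recurrence.
        self.rnn = nn.LSTM(pose_dim + audio_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, poses, audio, state=None):
        # poses: (batch, frames, pose_dim), audio: (batch, frames, audio_dim)
        x = torch.cat([poses, audio], dim=-1)
        h, state = self.rnn(x, state)
        return self.out(h), state

# One training step on dummy data: predict the pose at frame t+1
# from the pose and audio features at frame t.
model = AudioConditionedDanceModel()
poses = torch.randn(8, 120, 63)  # e.g. 21 mocap joints x 3 coordinates
audio = torch.randn(8, 120, 5)   # e.g. 5 high-level audio features
pred, _ = model(poses[:, :-1], audio[:, :-1])
loss = nn.functional.mse_loss(pred, poses[:, 1:])
loss.backward()
```

At generation time the same network would be run autoregressively, feeding each predicted pose back in alongside the audio features of the next frame, so that different audio inputs steer the generated movement differently.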
Original language | English |
---|---|
Title of host publication | Proceedings of the 11th International Conference on Computational Creativity |
Editors | F. Amílcar Cardoso, Penousal Machado, Tony Veale and João Miguel Cunha |
Place of Publication | Coimbra, Portugal |
Publisher | Association for Computational Creativity |
Pages | 284-287 |
ISBN (Print) | 978-989-54160-2-8 |
Publication status | Published - 2020 |
Event | 11th International Conference on Computational Creativity - Coimbra, Portugal. Duration: 7 Sep 2020 → 11 Sep 2020 |