TY - GEN
T1 - Towards playing in the 'Air'
T2 - 17th Sound and Music Computing Conference, SMC 2020
AU - Erdem, Çağrı
AU - Lan, Qichao
AU - Fuhrer, Julian
AU - Martin, Charles
AU - Torresen, Jim
AU - Jensenius, Alexander Refsum
N1 - Publisher Copyright:
Copyright © 2020 Çağrı Erdem et al.
PY - 2020
Y1 - 2020
N2 - In acoustic instruments, sound production relies on the interaction between physical objects. Digital musical instruments, on the other hand, are based on arbitrarily designed action-sound mappings. This paper describes the ongoing exploration of an empirically-based approach for simulating guitar playing technique when designing the mappings of 'air instruments'. We present results from an experiment in which 33 electric guitarists performed a set of basic sound-producing actions: impulsive, sustained, and iterative. The dataset consists of bioelectric muscle signals, motion capture, video, and audio recordings. This multimodal dataset was used to train a long short-term memory network (LSTM) with a few hidden layers and relatively short training duration. We show that the network is able to predict audio energy features of free improvisations on the guitar, relying on a dataset of three distinct motion types.
AB - In acoustic instruments, sound production relies on the interaction between physical objects. Digital musical instruments, on the other hand, are based on arbitrarily designed action-sound mappings. This paper describes the ongoing exploration of an empirically-based approach for simulating guitar playing technique when designing the mappings of 'air instruments'. We present results from an experiment in which 33 electric guitarists performed a set of basic sound-producing actions: impulsive, sustained, and iterative. The dataset consists of bioelectric muscle signals, motion capture, video, and audio recordings. This multimodal dataset was used to train a long short-term memory network (LSTM) with a few hidden layers and relatively short training duration. We show that the network is able to predict audio energy features of free improvisations on the guitar, relying on a dataset of three distinct motion types.
UR - http://www.scopus.com/inward/record.url?scp=85097440256&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85097440256
T3 - Proceedings of the Sound and Music Computing Conferences
SP - 177
EP - 184
BT - SMC 2020 - Proceedings of the 17th Sound and Music Computing Conference
A2 - Spagnol, Simone
A2 - Valle, Andrea
PB - CERN
Y2 - 24 June 2020 through 26 June 2020
ER -