TY - GEN
T1 - Deep models for ensemble touch-screen improvisation
AU - Martin, Charles P.
AU - Ellefsen, Kai Olav
AU - Torresen, Jim
N1 - Publisher Copyright:
© 2017 Copyright held by the owner/author(s).
PY - 2017/8/23
Y1 - 2017/8/23
AB - For many, the pursuit and enjoyment of musical performance goes hand-in-hand with collaborative creativity, whether in a choir, jazz combo, orchestra, or rock band. However, few musical interfaces use the affordances of computers to create or enhance ensemble musical experiences. One possibility for such a system would be to use an artificial neural network (ANN) to model the way other musicians respond to a single performer. Some forms of music have well-understood rules for interaction; however, this is not the case for free improvisation with new touch-screen instruments, where styles of interaction may be discovered in each new performance. This paper describes an ANN model of ensemble interactions trained on a corpus of such ensemble touch-screen improvisations. The results show realistic ensemble interactions, and the model has been used to implement a live performance system where a performer is accompanied by the predicted and sonified touch gestures of three virtual players.
KW - Deep learning
KW - Ensemble interaction
KW - Mobile music
KW - RNN
KW - Touch screen performance
UR - http://www.scopus.com/inward/record.url?scp=85038368363&partnerID=8YFLogxK
U2 - 10.1145/3123514.3123556
DO - 10.1145/3123514.3123556
M3 - Conference contribution
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the 12th International Audio Mostly Conference
PB - Association for Computing Machinery
T2 - 12th International Audio Mostly Conference, AM 2017
Y2 - 23 August 2017 through 26 August 2017
ER -