TY - JOUR
T1 - A laptop ensemble performance system using recurrent neural networks
AU - Proctor, Rohan
AU - Martin, Charles Patrick
N1 - Publisher Copyright:
© 2020, Steering Committee of the International Conference on New Interfaces for Musical Expression. All rights reserved.
PY - 2020
Y1 - 2020
N2 - The popularity of applying machine learning techniques in musical domains has made freely accessible, pre-trained neural network (NN) models readily available for use in creative applications. This work outlines the implementation of one such application: an assistance tool designed for live improvisational performances by laptop ensembles. The primary intention was to leverage off-the-shelf pre-trained NN models as a basis for assisting individual performers, either as musical novices looking to engage with more experienced performers or as a tool to expand musical possibilities through new forms of creative expression. The system builds on ideas from several research areas, including new interfaces for musical expression, generative music and group performance, to produce a networked performance solution served via a web-browser interface. The final implementation offers performers a mixture of high- and low-level controls to influence the shape of the note sequences output by locally run NN models in real time, and allows performers to define their level of engagement with the assisting generative models. Two test performances showed that the system can feasibly support four performers over a four-minute piece while producing cohesive and engaging music. Iterations on the design of the system exposed technical constraints on the use of a JavaScript environment for generative models in a live music context, largely derived from inescapable processing overheads.
AB - The popularity of applying machine learning techniques in musical domains has made freely accessible, pre-trained neural network (NN) models readily available for use in creative applications. This work outlines the implementation of one such application: an assistance tool designed for live improvisational performances by laptop ensembles. The primary intention was to leverage off-the-shelf pre-trained NN models as a basis for assisting individual performers, either as musical novices looking to engage with more experienced performers or as a tool to expand musical possibilities through new forms of creative expression. The system builds on ideas from several research areas, including new interfaces for musical expression, generative music and group performance, to produce a networked performance solution served via a web-browser interface. The final implementation offers performers a mixture of high- and low-level controls to influence the shape of the note sequences output by locally run NN models in real time, and allows performers to define their level of engagement with the assisting generative models. Two test performances showed that the system can feasibly support four performers over a four-minute piece while producing cohesive and engaging music. Iterations on the design of the system exposed technical constraints on the use of a JavaScript environment for generative models in a live music context, largely derived from inescapable processing overheads.
KW - Laptop ensemble
KW - Machine learning
KW - Recurrent neural networks
KW - Web audio
UR - http://www.scopus.com/inward/record.url?scp=85150280854&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85150280854
SN - 2220-4792
SP - 43
EP - 48
JO - Proceedings of the International Conference on New Interfaces for Musical Expression
JF - Proceedings of the International Conference on New Interfaces for Musical Expression
T2 - 20th International Conference on New Interfaces for Musical Expression, NIME 2020
Y2 - 21 July 2020 through 25 July 2020
ER -