Enhancing the Social Impact of Contemporary Music with Neurotechnology

I am a contemporary classical music composer interested in developing new technologies to aid musical creativity and harness the role of music in social development. After having worked as a research scientist for Sony for a number of years, I moved to Plymouth University in 2003, where I founded the Interdisciplinary Centre for Computer Music Research (ICCMR) to conduct research into these topics. ICCMR is one of the main contributors to the development of a new discipline, which I refer to as Music Neurotechnology [1]. Research into Music Neurotechnology is truly interdisciplinary: it combines musical research with artificial intelligence, bioengineering, neurosciences and medicine. ICCMR’s research outcomes have been published in learned journals of all these fields; for example [2, 3, 4, 5, 6]. This paper introduces one of ICCMR’s most successful projects to date, which demonstrates the social impact of Music Neurotechnology research: the brain-computer music interfacing (BCMI) project. This project is aimed at the development of assistive music technology to enable people with severe physical disabilities to make music controlled with brain signals. In addition to building the technology, I am particularly interested in developing approaches to compose music with it and creating new kinds of contemporary music.


Introduction
I am a contemporary classical music composer interested in developing new technologies to aid musical creativity and harness the role of music in social development. After having worked as a research scientist for Sony for a number of years, I moved to Plymouth University in 2003, where I founded the Interdisciplinary Centre for Computer Music Research (ICCMR) to conduct research into these topics. ICCMR is one of the main contributors to the development of a new discipline, which I refer to as Music Neurotechnology [1]. Research into Music Neurotechnology is truly interdisciplinary: it combines musical research with artificial intelligence, bioengineering, neurosciences and medicine. ICCMR's research outcomes have been published in learned journals of all these fields; for example [2, 3, 4, 5, 6]. This paper introduces one of ICCMR's most successful projects to date, which demonstrates the social impact of Music Neurotechnology research: the brain-computer music interfacing (BCMI) project. This project is aimed at the development of assistive music technology to enable people with severe physical disabilities to make music controlled with brain signals. In addition to building the technology, I am particularly interested in developing approaches to compose music with it and creating new kinds of contemporary music.


The BCMI Project

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music representing brain activity? What would the music of our brains sound like? These are some of the questions addressed by Music Neurotechnology research.
I am interested in developing Brain-Computer Interfacing (BCI) technology for music aimed at special needs and music therapy, in particular for people with severe physical disability. A BCI is generally defined as a system that enables direct communication pathways between the brain and a device to be controlled. Currently, the most viable and practical method of scanning brain signals for BCI purposes is to read the brain's electroencephalogram, abbreviated as EEG, with electrodes placed on the scalp [7] (Figure 1). The EEG expresses the overall electrical activity of millions of neurones, but it is a difficult signal to handle because it is extremely faint and is filtered by the meninges (the membranes that separate the cortex from the skull), the skull and the scalp. This signal needs to be amplified significantly and analyzed in order to be of any use for a BCI. In BCI research, it is often assumed that: (a) there is information in the EEG that corresponds to different cognitive tasks, or at least to a function of some sort; (b) this information can be detected; and (c) users can be trained to produce EEG with such information voluntarily [8]. I have coined the term Brain-Computer Music Interface, or BCMI, to refer to a BCI system for music [8]. My research into BCMI is motivated by the extremely limited opportunities for active participation in music making available to people with severe physical disability, despite advances in music technology. For example, severe brain injury, spinal cord injury and locked-in syndrome result in weak, minimal or no active movement, which prevents the use of gesture-based devices. These patient groups are currently either excluded from music recreation and therapy, or are left to engage in a less active manner, through listening only.
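To illustrate assumption (b) above, the sketch below shows how information can be detected in a faint EEG signal: a weak oscillation buried in noise still stands out when the signal's power is examined band by band in the frequency domain. This is a minimal illustration, not the processing chain used in our system; the sampling rate, amplitudes and band limits are invented for the example.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within the band [f_lo, f_hi] Hz, via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

# Simulated scalp EEG: a faint 10 Hz alpha rhythm (microvolt scale)
# buried in comparable background noise.
fs = 256                      # sampling rate in Hz (illustrative)
t = np.arange(fs * 4) / fs    # 4 seconds of samples
rng = np.random.default_rng(0)
eeg = 1e-6 * np.sin(2 * np.pi * 10 * t) + 5e-7 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)   # band containing the rhythm
beta = band_power(eeg, fs, 13, 30)   # a neighbouring band, noise only
print(alpha > beta)  # the alpha band dominates despite the noise
```

Because the oscillation's energy is concentrated in a few frequency bins while the noise is spread across the whole spectrum, the band comparison recovers information that is invisible in the raw trace.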
My collaborators and I have recently developed a BCMI, which we tested with a locked-in syndrome patient at the Royal Hospital for Neuro-disability in London [6]; her condition was caused by a severe stroke (Figure 1, photograph on the right-hand side). Our BCMI is based on a neurological phenomenon known as Steady State Visually Evoked Potentials, abbreviated as SSVEP. These are signals that can be detected in the EEG as natural responses to visual stimulation at specific frequencies. For instance, when a person looks at patterns flashing at different frequencies on a computer screen, the frequency of the pattern he or she is staring at shows up in the EEG, and a computer can be programmed to infer which pattern it is; see, for instance, the four patterns shown on the screen in the right-hand side photograph of Figure 1. We created musical algorithms to translate EEG signals associated with different flashing frequencies into distinct musical processes. For example, looking at one flashing pattern would sound a certain note, looking at another would produce a certain rhythm, staring at yet another would change the tempo, and so on. The forthcoming full paper will describe this system in detail and will introduce the composition Activating Memory, which I composed with the technical assistance of Joel Eaton, who is currently in the final stages of his doctoral thesis on BCMI at Plymouth University's ICCMR.
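The core of an SSVEP-based selector of this kind can be sketched in a few lines: compare the EEG's spectral response at each pattern's flicker frequency and fire the musical command associated with the strongest one. This is an illustrative sketch only; the four flicker frequencies, the command names and the signal parameters are invented here and do not describe the actual ICCMR system.

```python
import numpy as np

# Hypothetical flicker frequencies (Hz) for four on-screen patterns,
# each paired with an invented musical command for illustration.
TARGETS = {
    7.0: "play note",
    9.0: "trigger rhythm",
    11.0: "change tempo",
    13.0: "switch timbre",
}

def classify_ssvep(eeg, fs):
    """Return the command whose flicker frequency evokes the strongest
    response in the EEG spectrum (a minimal SSVEP decoder)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def response(f):
        # Spectral power at the bin closest to the flicker frequency.
        return spectrum[np.argmin(np.abs(freqs - f))]

    return TARGETS[max(TARGETS, key=response)]

# Simulated EEG while the user stares at the 11 Hz pattern.
fs = 256
t = np.arange(fs * 2) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 11 * t) + 0.5 * rng.standard_normal(t.size)

print(classify_ssvep(eeg, fs))  # prints "change tempo"
```

In a working system this decision loop would run continuously on short windows of incoming EEG, so that each sustained gaze at a pattern triggers its musical process in near real time.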
Activating Memory is an unprecedented piece for a string quartet and a BCMI quartet. Each member of the BCMI quartet is furnished with an SSVEP-based BCMI system that enables him or her to generate a musical score in real time. Each of them generates a part for the string quartet, which is displayed on a computer screen for the respective string performer to sight-read on the fly during the performance (Figure 2); a short video documentary is available [9].

Concluding Remarks
The technology and the compositional method developed for Activating Memory illustrate the interdisciplinary nature of Music Neurotechnology research and the benefits that such research can bring to humanity. This is an unprecedented piece of music, which is aimed at being much more than mere entertainment or a commodity for the music industry. Here, eight participants can engage in collaborative music making, four of whom are not required to move. This forms a suitable creative environment for engaging severely physically disabled patients in music making: they are given an active musical voice with which to playfully interact among themselves and with the musicians of the string quartet. The first public performance of Activating Memory took place in February 2014 at the Peninsula Arts Contemporary Music Festival, Plymouth [10]. Physically disabled patients were not involved in this first performance. Currently, I am working with colleagues back at the hospital to trial the new technology with patients, with the objective of staging a concert performance of Activating Memory with them. On the research front, my team and I are developing techniques to expand the SSVEP approach. We are developing ways to detect EEG patterns related to emotional states in order to control algorithms that generate music.

Figure 1. A BCI system extracts information from the EEG to control devices.

Figure 2. For Activating Memory the parts for each string player are generated from the brains of four participants and displayed on a computer screen for sight-reading during the performance.