What does your brain sound like? Does it sound like "Ride of the Valkyries" or more like "Hit me baby one more time"?
Step 1: Acquire a human brain (alive)
We are interested in capturing brain waves, specifically:
- Delta waves: Deepest stages of sleep.
- Beta waves: Normal waking consciousness.
- Alpha waves: Relaxation and meditation (creativity).
- Theta waves: REM sleep (dreams).
- Gamma waves: Hyper-alertness, perception, and integration of sensory input.
Step 2: EEG machine
I am using an EEG machine bought from NeuroSky which is rated as research grade (whatever that means). It measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. While EEG machines are not the most accurate, they are now reasonably cheap.
Step 3: EEG -> Overtone
In order to generate music I want to import the EEG brainwave data into Overtone.
We interact with the EEG machine over a serial port. The most mature library for this interface is in Python, so there is a little jiggery-pokery required to get the data into Overtone.
The Brainwave Poller
We use https://github.com/akloster/python-mindwave to interface with the EEG machine's data.
We write all the data out to a FIFO file as JSON.
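A sketch of the poller: the FIFO and JSON plumbing is just the standard library, while `read_packet` stands in for the python-mindwave calls that pull the band powers off the serial port.

```python
import json
import os
import time

# read_packet stands in for the python-mindwave calls that read the
# current band powers off the serial port; it should return a dict
# such as {"high-beta": 7194, ...}.
from mindwave_poller import read_packet  # hypothetical helper module

FIFO_PATH = "/tmp/mindwave"  # any path works; both sides must agree on it

if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)  # create the named pipe once

# Opening a FIFO for writing blocks until a reader (Clojure) connects.
with open(FIFO_PATH, "w") as fifo:
    while True:
        packet = read_packet()
        fifo.write(json.dumps(packet) + "\n")  # one JSON object per line
        fifo.flush()                           # push each packet immediately
        time.sleep(1)                          # the headset updates ~once a second
```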
Reading from the FIFO file is simple in Clojure.
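A sketch of the reading side, assuming the poller writes one JSON object per line to /tmp/mindwave:

```clojure
(require '[clojure.java.io :as io]
         '[clojure.data.json :as json])

(declare handle-packet)  ; what we do with each packet; defined in Step 4

;; Run the reader on another thread so the REPL stays free.
(future
  (with-open [rdr (io/reader "/tmp/mindwave")]
    (doseq [line (line-seq rdr)]
      (handle-packet (json/read-str line :key-fn keyword)))))
```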
Step 4: Sonification
Sonification is the process of taking data and turning it into sound. Here is an example of the data we are now receiving:
A single JSON brainwave packet:
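(The field names below follow the band powers the NeuroSky protocol reports; the values are illustrative.)

```json
{
  "timestamp": 1383061837.99,
  "attention": 53,
  "meditation": 61,
  "delta": 103326,
  "theta": 12786,
  "low-alpha": 2507,
  "high-alpha": 7408,
  "low-beta": 2998,
  "high-beta": 7194,
  "low-gamma": 1906,
  "mid-gamma": 2369
}
```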
We will focus on the beta-waves for simplicity. Beta-waves fall between 16.5–20 Hz.
Beta waves are related to:
- Alertness
- Logic
- Critical reasoning
We need to map a signal in the 16.5–20 Hz range onto the pitches of a sampled piano (MIDI notes 21–108).
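A minimal linear rescaling does the trick; `linearise` and `beta->pitch` are names for this sketch rather than anything canonical:

```clojure
;; Map x linearly from [in-min in-max] onto [out-min out-max].
(defn linearise [x in-min in-max out-min out-max]
  (+ out-min (* (- out-max out-min)
                (/ (- x in-min) (- in-max in-min)))))

;; Map a beta reading onto the piano's MIDI notes, clamped to the
;; keyboard in case a reading falls outside the 16.5-20Hz band.
(defn beta->pitch [beta]
  (-> (linearise beta 16.5 20.0 21 108)
      (max 21)
      (min 108)
      int))
```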
Now we extract the beta-waves from the brainwave data and play them live as they arrive, rather than worrying about scheduling notes:
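A sketch using Overtone's bundled sampled piano; `:high-beta` is the packet field assumed earlier:

```clojure
(use 'overtone.live)
(use 'overtone.inst.sampled-piano)  ; Overtone's bundled sampled piano

;; Trigger a note per packet as it arrives: no scheduling, just play.
(defn handle-packet [packet]
  (sampled-piano (beta->pitch (:high-beta packet))))
```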
Would you like to hear my brain?
The results: please listen to my brain.
Not really music, is it? With beta-waves we get a series of high-to-low transitions. While we can control the pitch at which the transitions occur by performing activities that shape our brain waves, the transitions don't provide the order or structure we need to recognize this as music.
Brain-controlled Dubstep
The only logical path left is to try to control Dubstep with our brain. Rather than generating music, we can use our brain waves to control the tempo and volume of existing synthesized music.
Here is a Dubstep synth taken from Overtone:
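(Reproduced approximately from the Overtone examples; treat the exact UGen parameters as a sketch.)

```clojure
(defsynth dubstep [bpm 120 wobble 1 note 50 snare-vol 1 kick-vol 1 volume 1]
  (let [trig    (impulse:kr (/ bpm 120))
        freq    (midicps note)
        swr     (demand trig 0 (dseq [wobble] INF))
        sweep   (lin-exp (lf-tri swr) -1 1 40 3000)
        wob     (apply + (saw (* freq [0.99 1.01])))
        wob     (lpf wob sweep)
        wob     (* 0.8 (normalizer wob))
        wob     (+ wob (bpf wob 1500 2))
        wob     (+ wob (* 0.2 (g-verb wob 9 0.7 0.7)))
        kickenv (decay (t2a (demand (impulse:kr (/ bpm 30)) 0
                                    (dseq [1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0] INF))) 0.7)
        kick    (* (* kickenv 7) (sin-osc (+ 40 (* kickenv kickenv kickenv 200))))
        kick    (clip2 kick 1)
        snare   (* 3 (pink-noise) (apply + (* (decay (impulse (/ bpm 240) 0.5) [0.4 2]) [1 0.05])))
        snare   (+ snare (bpf (* 4 snare) 2000))
        snare   (clip2 snare 1)]
    (out 0 (* volume (clip2 (+ wob (* kick-vol kick) (* snare-vol snare)) 1)))))
```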
Once the synth is running we can send it control signals which will vary any of the properties defined in the arguments to the dubstep function:
- bpm
- wobble
- note
- snare-vol
- kick-vol
- volume
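We start an instance and poke its parameters live with ctl (the handle `d` is just my name for the running synth):

```clojure
(def d (dubstep))  ; start an instance and keep a handle on it

(ctl d :wobble 8)
(ctl d :note 40)
(ctl d :bpm 250)
(ctl d :volume 0.8)
```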
We again have to linearise the beta-wave signal, this time onto the volume range 0.0–1.0 and the bpm range 0–400.
Now all that's left to do is connect it to our brain.
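A sketch of that glue, reusing `linearise` and the assumed `:high-beta` field: redefine `handle-packet` so each packet drives the running synth instead of the piano.

```clojure
;; Each beta reading nudges the running synth's volume and tempo.
(defn handle-packet [packet]
  (let [beta (:high-beta packet)]
    (ctl d :volume (linearise beta 16.5 20.0 0.0 1.0))
    (ctl d :bpm    (linearise beta 16.5 20.0 0 400))))
```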
Here’s what brain-controlled Dubstep sounds like:
And for comparison, here is what playing Go does to your brain activity (I turned the Dubstep down while playing; concentrating with that noise is hard):
Discovery through sound
Mapping brain waves into live music is a challenging task. While we can control music through an EEG machine, that control is hard, since we are using the brain to do many other things at the same time. What is interesting about this experiment is not in fact the music generated but the use of sound to provide a way to hear the differences in datasets.
Hearing the difference between playing Go and sleeping, or between young people and old people.
Sound as a means of discovering patterns is a largely untapped resource.