Sounds of the Human Brain

What does your brain sound like? Does it sound like “Ride of the Valkyries” or more like “Hit me baby one more time”?

Step 1: Acquire a human brain (alive)

We are interested in capturing brain waves, specifically:

  • Delta waves: Deepest stages of sleep.
  • Beta waves: Normal waking consciousness.
  • Alpha waves: Relaxation and meditation (creativity).
  • Theta waves: REM sleep (dreams).
  • Gamma waves: Hyper-alertness, perception, and integration of sensory input.
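
As a rough guide (the exact band boundaries vary between sources), those bands sit in frequency ranges along these lines:

;; Approximate EEG band boundaries in Hz (textbook values; cut-offs vary by source).
(def brainwave-bands
  {:delta [0.5 4]    ; deepest stages of sleep
   :theta [4 8]      ; REM sleep (dreams)
   :alpha [8 12]     ; relaxation and meditation
   :beta  [12 30]    ; normal waking consciousness
   :gamma [30 100]}) ; hyper-alertness and sensory integration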

Step 2: EEG machine

I am using an EEG machine bought from NeuroSky, which is rated as research grade (whatever that means). It measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. While EEG machines are not the most accurate, they are now reasonably cheap.

Step 3: EEG -> Overtone

In order to generate music I want to import the EEG brainwave data into Overtone.

We interact with the EEG machine over a serial port. The most mature library for this interface is in Python, so there is a little jiggery-pokery involved in getting the data into Overtone.

The Brainwave Poller

We use https://github.com/akloster/python-mindwave to interface with the EEG machine's data.

We write all the data out to a FIFO file as JSON.

import time
import json

import gevent
from gevent import monkey

from pymindwave import headset
from pymindwave.pyeeg import bin_power

monkey.patch_all()

# Connect to the headset over the serial port.
hs = headset.Headset('/dev/tty.MindWave')
hs.disconnect()
time.sleep(1)
print('connecting to headset...')
hs.connect()
time.sleep(1)
while hs.get('state') != 'connected':
    print(hs.get('state'))
    time.sleep(0.5)
    if hs.get('state') == 'standby':
        hs.connect()
        print('retrying connecting to headset')

def raw_to_spectrum(rawdata):
    # Bin the raw 512Hz samples into a 50-band power spectrum.
    flen = 50
    spectrum, relative_spectrum = bin_power(rawdata, range(flen), 512)
    return spectrum

# Poll the headset roughly every 0.4s and write each reading
# to the FIFO as a single JSON document.
while True:
    t = time.time()
    waves_vector = hs.get('waves_vector')
    meditation = hs.get('meditation')
    attention = hs.get('attention')
    spectrum = raw_to_spectrum(hs.get('rawdata')).tolist()

    with open("/tmp/brain-data", "w") as fp:
        s = {'timestamp': t,
             'meditation': meditation,
             'attention': attention,
             'raw_spectrum': spectrum,
             'delta_waves': waves_vector[0],
             'theta_waves': waves_vector[1],
             'alpha_waves': (waves_vector[2] + waves_vector[3]) / 2,
             'low_alpha_waves': waves_vector[2],
             'high_alpha_waves': waves_vector[3],
             'beta_waves': (waves_vector[4] + waves_vector[5]) / 2,
             'low_beta_waves': waves_vector[4],
             'high_beta_waves': waves_vector[5],
             'gamma_waves': (waves_vector[6] + waves_vector[7]) / 2,
             'low_gamma_waves': waves_vector[6],
             'mid_gamma_waves': waves_vector[7]}

        fp.write(json.dumps(s))
    gevent.sleep(0.4)
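
One gotcha: /tmp/brain-data must exist as a FIFO before either side opens it. You can create it with mkfifo from a shell, or from Clojure (a sketch, assuming a Unix-like system with mkfifo on the path):

(require '[clojure.java.shell :as shell])

;; Create the named pipe the Python poller writes to.
;; Assumes a Unix-like system with mkfifo available.
(shell/sh "mkfifo" "/tmp/brain-data")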

Reading from the FIFO file is simple in Clojure:

(require '[cheshire.core :as json])

(while true
  (with-open [reader (clojure.java.io/reader "/tmp/brain-data")]
    ;; Blocks until the poller writes the next JSON packet.
    (brainwave->music (json/decode (first (line-seq reader)) true))))

Step 4: Sonification

Sonification is the process of taking data and turning it into sound. Here is an example of the data we are now receiving:

A single JSON brainwave packet:

 {"gamma_waves": 95408,
  "high_beta_waves": 205681,
  "beta_waves": 293928,
  "low_beta_waves": 382176,
  "mid_gamma_waves": 84528,
  "low_alpha_waves": 172417,
  "delta_waves": 117933,
  "low_gamma_waves": 106288,
  "alpha_waves": 112605,
  "theta_waves": 635628,
  "high_alpha_waves": 52793,
  "attention": 0,
  "meditation": 0,
  "timestamp": 1.375811275696894E9}

We will focus on the beta waves for simplicity. Beta waves fall between 16.5–20 Hz.

[Image: EEG beta waves]

Beta waves are related to:

  • Alertness
  • Logic
  • Critical reasoning

We need to map a signal within 16.5–20 Hz onto the pitches of a sampled piano (MIDI notes 21–108).

(require '[clojure.math.numeric-tower :as math])

(defn linear-map [x0 x1 y0 y1 x]
  (let [dydx (/ (- y1 y0) (- x1 x0))
        dx (- x x0)]
    (+ y0 (* dydx dx))))

;; Piano range (MIDI notes): 21..108
;; Beta wave range: 16500..20000

(defn beta-wave->pitch [signal]
  (-> (linear-map 16500 20000 21 108 signal) float math/round))

(beta-wave->pitch 16500) ;-> 21
(beta-wave->pitch 20000) ;-> 108
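
Note that readings outside 16500–20000 map to notes beyond the piano's range; a defensive variant (a sketch, not part of the original pipeline) clamps the signal first:

;; Clamp stray readings into the expected range before mapping,
;; so every note lands on the piano.
(defn beta-wave->pitch-clamped [signal]
  (beta-wave->pitch (-> signal (max 16500) (min 20000))))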

Now we extract the beta waves from the brainwave data and play them. We play notes live as the data arrives rather than worrying about scheduling; the poller's 0.4-second sleep sets the pace:

(use 'overtone.live)
(use 'overtone.inst.sampled-piano)
(require '[cheshire.core :as json])

(while true
  (with-open [reader (clojure.java.io/reader "/tmp/brain-data")]
    (let [beta-wave (-> (first (line-seq reader)) (json/decode true) :beta_waves)]
      (println beta-wave)
      (sampled-piano :note (beta-wave->pitch beta-wave) :sustain 0.2))))

Would you like to hear my brain?

The results: please listen to my brain.

Not really music, is it? With beta waves we get a series of high-to-low transitions. While we can control the pitch at which the transitions occur by performing activities that shape our brain waves, the transitions don't provide the order or structure we need to recognise this as music.

Brain-controlled Dubstep

The only logical path left is to try to control Dubstep with our brains. Rather than generating music, we can use our brain waves to control the tempo and volume of existing synthesized music.

Here is a Dubstep synth taken from Overtone:

(use 'overtone.live)

(defsynth dubstep [bpm 120 wobble 1 note 50 snare-vol 1 kick-vol 1 volume 1 out-bus 0]
  (let [trig (impulse:kr (/ bpm 120))
        freq (midicps note)
        swr (demand trig 0 (dseq [wobble] INF))
        ;; Wobble bass: two detuned saws swept by a low-pass filter
        ;; whose cutoff follows a triangle LFO at the wobble rate.
        sweep (lin-exp (lf-tri swr) -1 1 40 3000)
        wob (apply + (saw (* freq [0.99 1.01])))
        wob (lpf wob sweep)
        wob (* 0.8 (normalizer wob))
        wob (+ wob (bpf wob 1500 2))
        wob (+ wob (* 0.2 (g-verb wob 9 0.7 0.7)))

        ;; Kick: a pitched sine burst fired by a 16-step pattern.
        kickenv (decay (t2a (demand (impulse:kr (/ bpm 30)) 0 (dseq [1 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0] INF))) 0.7)
        kick (* (* kickenv 7) (sin-osc (+ 40 (* kickenv kickenv kickenv 200))))
        kick (clip2 kick 1)

        ;; Snare: decaying bursts of band-passed pink noise.
        snare (* 3 (pink-noise) (apply + (* (decay (impulse (/ bpm 240) 0.5) [0.4 2]) [1 0.05])))
        snare (+ snare (bpf (* 4 snare) 2000))
        snare (clip2 snare 1)]

       (out out-bus (* volume (clip2 (+ wob (* kick-vol kick) (* snare-vol snare)) 1)))))

Once the synth is running, we can send it control signals to vary any of the properties defined in the arguments of the dubstep function:

  • bpm
  • wobble
  • note
  • snare-vol
  • kick-vol
  • volume

(def d (dubstep))

(ctl d :snare-vol 0)
(ctl d :kick-vol 0)
(ctl d :wobble 0)
(ctl d :bpm 20)
(ctl d :volume 0.2)

We again have to linearise the beta wave signal, this time onto the volume range 0.0–1.0 and the bpm range 0–400.

Now all that's left to do is connect it to our brain.
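
Something like the following does the wiring (a sketch reusing linear-map, the FIFO reader and the running synth d from earlier; the exact ranges are assumptions):

;; Map the beta-wave signal onto the synth's volume and bpm controls.
(defn beta-wave->volume [signal] (float (linear-map 16500 20000 0.0 1.0 signal)))
(defn beta-wave->bpm [signal] (float (linear-map 16500 20000 0 400 signal)))

(while true
  (with-open [reader (clojure.java.io/reader "/tmp/brain-data")]
    (let [beta-wave (-> (first (line-seq reader)) (json/decode true) :beta_waves)]
      (ctl d :volume (beta-wave->volume beta-wave))
      (ctl d :bpm (beta-wave->bpm beta-wave)))))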

Here’s what brain-controlled Dubstep sounds like:

And, for comparison, here is what playing Go does to your brain activity (I turned the Dubstep down while playing; concentrating with that noise is hard):

Discovery through sound

Mapping brain waves into live music is a challenging task, and while we can control music through an EEG machine, that control is hard because we are also using our brains for many other things. What is interesting about this experiment is not the music generated but the use of sound as a way to hear differences in datasets.

Hearing the difference between playing Go and sleeping, or between young people and old people.

Sound as a means of discovering patterns is a largely untapped resource.
