
Moving Towards Synchrony

By Johnny Tomasiello



Artist's Note

Moving Towards Synchrony is an immersive work that explores the reciprocal relationship between electrical activity in the brain and external stimuli that have been generated by, and are defined by, those same physiological events.

It investigates the neurological and corresponding physiological effects of modulating brain waves through a Brain-Computer Music Interface (BCMI), which sonifies data captured by an electroencephalogram (EEG), translating it into musical stimuli in real time.

The research methodology explores how to collect and quantify physiological data through non-invasive neuroimaging, using the subject’s brainwaves to produce real-time interactive soundscapes that, as the subject experiences them, can alter the subject’s physiological responses. The melodic and rhythmic content is derived from, and constantly influenced by, the subject’s EEG readings. A subject focusing on the stimuli attempts to elicit a change in their physiological systems through experience of the bidirectional feedback.

This project records EEG signals from the subject using four non-invasive dry extracranial electrodes on a commercially available Muse EEG headband. Measurements are recorded from the TP9, AF7, AF8, and TP10 electrodes, as specified by the international standard EEG placement system, and the data is converted to absolute band powers, based on the logarithm of the Power Spectral Density (PSD) of the EEG data for each channel. Heart rate data is obtained through photoplethysmography (PPG) measurements, although that data is not used in the current version of this project. EEG band power is recorded in Bels (1 Bel = 10 decibels), representing the PSD within each frequency range.
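As a point of reference, the sketch below shows how an absolute band power of this kind might be computed from raw EEG samples. It assumes Python with NumPy and SciPy, the Muse's nominal 256 Hz sample rate, and illustrative band edges; in practice, the patch receives these values already computed by Mind Monitor.

    import numpy as np
    from scipy.signal import welch

    FS = 256  # nominal Muse EEG sample rate, in Hz

    # Illustrative band edges in Hz; exact boundaries vary by convention.
    BANDS = {
        "delta": (1.0, 4.0),
        "theta": (4.0, 8.0),
        "alpha": (7.5, 13.0),
        "beta": (13.0, 30.0),
        "gamma": (30.0, 44.0),
    }

    def absolute_band_powers(samples):
        """Log10 of the PSD summed within each band, i.e. power in Bels."""
        freqs, psd = welch(samples, fs=FS, nperseg=FS)
        return {
            name: float(np.log10(psd[(freqs >= lo) & (freqs < hi)].sum()))
            for name, (lo, hi) in BANDS.items()
        }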

The EEG readings are translated into music in real time, and the subjects are instructed to employ deep breathing exercises while they focus on the musical feedback. Great care was taken in defining the compositional strategies of the interactive content in order to deliver a truly generative composition that was also capable of producing musically recognizable results.

All permutations of the scales, modes and chords being used, as well as rhythms, and performance characteristics, needed to be considered beforehand so the extraction of a finite set of parameters from the EEG data set could be parsed and used to produce a well-formed and dynamic piece of music.

There are three main sections of this Max patch:
1. The EEG data capture section.
2. The EEG data conversion section.
3. The sound generation and DSP section.

The EEG data capture section receives EEG data from the Muse headband, which is converted to OSC data and transmitted over WiFi via the iOS app Mind Monitor. That data is then split into the five separate brainwave frequency bandwidths: delta, theta, alpha, beta, and gamma. Additional data is also captured, including accelerometer, gyroscope, blink, and jaw-clench readings, in order to control for artifacts in the data capture. Sensor connection data is used to visualize the integrity of each sensor’s attachment to the subject. PPG data is also captured for use in a future iteration of the project.
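For readers working outside Max, a minimal sketch of receiving this stream follows, assuming the python-osc package and Mind Monitor's absolute-band-power address scheme (/muse/elements/<band>_absolute); the port number is whatever Mind Monitor is configured to transmit to.

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_band(address, *values):
        # e.g. "/muse/elements/alpha_absolute" arrives with one float
        # per electrode channel
        band = address.split("/")[-1].replace("_absolute", "")
        print(band, values)

    dispatcher = Dispatcher()
    for band in ("delta", "theta", "alpha", "beta", "gamma"):
        dispatcher.map(f"/muse/elements/{band}_absolute", on_band)

    # 5000 is a commonly used port; match it to Mind Monitor's setting.
    server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
    server.serve_forever()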

The EEG data conversion section accepts the EEG bandwidth data representing specific event-related potentials and translates them into musical events.

First, significant thresholds for each brainwave frequency bandwidth are defined. These are chosen based on average EEG measurements taken prior to the use of the musical feedback. When a threshold is reached or exceeded, an event is triggered. Depending on the mappings, those events can perform one or more of several types of operations: the sounding of a note; a change in pitch, scale, or mode; changes to note values and timings; and/or other generative performance characteristics.
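A minimal sketch of this threshold logic follows, in Python; the additive margin of 0.1 Bels and the re-arming behavior are illustrative choices, not the patch's actual values.

    import statistics

    class BandTrigger:
        def __init__(self, baseline_readings, margin=0.1):
            # Threshold = pre-feedback baseline mean plus a margin (in Bels).
            self.threshold = statistics.mean(baseline_readings) + margin
            self.armed = True

        def update(self, band_power):
            """Return True once per excursion at or above the threshold."""
            if band_power >= self.threshold and self.armed:
                self.armed = False  # fire once, then wait to re-arm
                return True
            if band_power < self.threshold:
                self.armed = True   # re-arm after falling below threshold
            return False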

This section comprises three subsections that format their data output differently, depending on the use case:
1. Internal sound generation and DSP, for use entirely within the Max environment.
2. External MIDI, for use with MIDI-equipped hardware or software, and
3. External frequency and gate, for use with modular synthesizer hardware.
Each of these can be used separately or simultaneously, depending on the needs of the piece.
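As an illustration of the second subsection, the sketch below sends a triggered note to external hardware or software, assuming the mido package (with a python-rtmidi backend); the port name is hypothetical and platform-dependent.

    import time
    import mido

    out = mido.open_output("IAC Driver Bus 1")  # hypothetical macOS virtual port

    def play(note, velocity=96, duration=0.25):
        # Send a note-on, hold for the duration, then release.
        out.send(mido.Message("note_on", note=note, velocity=velocity))
        time.sleep(duration)
        out.send(mido.Message("note_off", note=note))

    play(60)  # e.g. sound middle C when an alpha event fires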

For the data conversion, the event-related potentials are mapped in the following way:
Changes in alpha, relative to the predefined threshold, govern the triggering of notes, as well as the scale and mode.
Changes in theta, relative to the threshold, influence note value.
Changes in beta, relative to the threshold, influence spatial qualities like reverberation and delay.
Changes in delta, relative to the threshold, influence the degree of spatial effects.
Changes in gamma, relative to the threshold, influence timbre.
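Expressed as a compact routing table, again as a sketch in Python; the handler descriptions are labels for the operations above, not identifiers from the patch.

    # Routing a threshold event on a given band to its musical operation.
    MAPPINGS = {
        "alpha": "trigger notes; select scale and mode",
        "theta": "influence note value",
        "beta": "influence spatial qualities (reverberation, delay)",
        "delta": "influence the degree of spatial effects",
        "gamma": "influence timbre",
    }

    def on_threshold_event(band):
        print(f"{band}: {MAPPINGS[band]}")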

Any of these mappings or threshold decisions can be easily changed to accommodate a different thesis or set of standards.

The third section is sound generation and DSP. It is responsible for the sonification of the data translated by the EEG data conversion section. This section includes synthesis models, timbre characteristics, and spatial effects.

This project uses three synthesized voices created in Max 8 for the generative musical feedback. There are two subtractive voices that each use a mix of sine, sawtooth, and triangle waves, and one FM voice.
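For comparison outside Max, here is a minimal offline sketch of an FM voice of this kind, assuming Python with NumPy; the carrier-to-modulator ratio, modulation index, and envelope are illustrative choices rather than the patch's settings.

    import numpy as np

    FS = 44100  # sample rate in Hz

    def fm_voice(freq, duration=1.0, ratio=2.0, index=3.0):
        """Render one FM tone: a sine carrier phase-modulated by a sine."""
        t = np.arange(int(FS * duration)) / FS
        envelope = np.exp(-3.0 * t)  # simple exponential decay
        modulator = index * np.sin(2 * np.pi * freq * ratio * t)
        return envelope * np.sin(2 * np.pi * freq * t + modulator)

    tone = fm_voice(220.0)  # A3, ready to be written to a sound file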

The timbral effects employed are waveform mixing, frequency modulation, and high-pass, band-pass, and low-pass filters. The spatial effects used include reverberation and delay. In addition to the initial settings of the voices, each of the timbral and spatial effects is modulated by separate event-related potential data captured by the EEG.
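A sketch of how one such modulation might be derived from a band-power reading, with smoothing so the control signal does not jump; the normalization range of -1 to +1 Bels is an assumed example, not a measured one.

    class SmoothedControl:
        """Map a band-power reading (in Bels) to a 0..1 effect amount."""

        def __init__(self, lo=-1.0, hi=1.0, smoothing=0.9):
            self.lo, self.hi, self.smoothing = lo, hi, smoothing
            self.value = 0.0

        def update(self, band_power):
            # Normalize into 0..1, then low-pass filter across updates.
            target = (band_power - self.lo) / (self.hi - self.lo)
            target = min(max(target, 0.0), 1.0)
            self.value += (1.0 - self.smoothing) * (target - self.value)
            return self.value  # e.g. drives reverb wet level or delay mix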

Johnny Tomasiello is a multidisciplinary artist and composer-researcher living and working in NYC. Drawing on custom-built instruments and software, he references the present sociopolitical landscape, using music and new technologies as important mechanisms of expression and identity. His projects are an embodiment of “living art” that is of service to people and society.