Michael Breidenbrücker

EARME

portable reactive music ::: player-composer-platform-protocol for mixed reality software music

Content Description

Imagine
You are walking around the city and everything you hear is part of a composition. The car passing by is the bass line, the distant hum of human activity turns into a melody, clicks and noises trigger a rhythm, its position bouncing like pictures mirrored in a shop window. Someone talking turns into melodic rap. Your acoustic surrounding is digitally enhanced; your ears are part of a virtual as well as the real world. Your aural awareness can't distinguish between reality and virtuality anymore.

Multiple realities - augmented reality.
You are wearing headphones with built-in microphones. Everything you hear is processed by a small Walkman-like computer, which is the interface between you and the world. This computer analyses the incoming sound in real time, detecting its position in space as well as its frequency range, harmonic consistency, volume, peaks and attack energy. Depending on these input values, a composition decides how the digital system reacts. It could trigger samples and place them in the same spatial position as the analysed input (which makes a car sound like an ocean wave), pitch the input sound according to an internal melody, or sample the sound for later use. All this depends on the loaded composition.
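As a purely illustrative sketch - the actual player is a Max/MSP patch, not Python, and every name below (AnalysisFrame, react and so on) is invented for this example - the mapping from analysed input to reaction could look like this:

    # Hypothetical sketch only; EARME itself is a Max/MSP patch.
    from dataclasses import dataclass

    @dataclass
    class AnalysisFrame:
        azimuth: float        # estimated direction of the sound source, in degrees
        attack_energy: float  # how sharp/percussive the onset is
        harmonic: bool        # roughly harmonic input, suitable for re-pitching

    def react(frame: AnalysisFrame):
        """Decide a reaction the way a loaded composition might."""
        if frame.attack_energy > 0.5:
            # Trigger a sample at the same spatial position as the analysed
            # input: a passing car becomes an ocean wave from that direction.
            return ("play_sample", "ocean_wave", frame.azimuth)
        if frame.harmonic:
            # Pitch the input sound according to an internal melody.
            return ("pitch_to_melody",)
        # Otherwise sample the sound for later use.
        return ("record_for_later",)

Loading a different composition replaces this mapping entirely.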

The composition is the software, the software is music.
You are back home, connecting your device to http://www.EARME.net. Somebody has composed a new scene and uploaded it to the platform. 300k, downloaded in about 30 seconds, gives you music which doesn't repeat itself for months. The new dogma: code based, not time based.

Compelling conception / How it works.
EARME software is programmed in Max/MSP, courtesy of IRCAM Paris. Currently it is implemented on Apple Macintosh computers. The scenario above was experienced using an Apple PowerBook (EARME_Player), Soundman binaural microphones and Sony noise-cancelling headphones.
Rather than being static, time-based music, EARME is a dynamic setup producing music that reacts to your immediate acoustic surrounding in real time. Because this reactive music is based on - or overlaid with - your real acoustic surrounding, we call the music it produces mixed reality software music.
With EARME, music returns to being a unique event: an experience collaboratively produced by people and their surroundings at a certain time.

In five steps from you through EARME and back into your ears.
Step one:
The binaural microphones mounted on the noise-cancelling headphones digitise all incoming sound.
Step two:
The EARME_Player (Apple PowerBook + EARME software player) analyses the incoming sound in real time, extracting attributes such as the volume in eleven frequency ranges, attack energy and spatial sound location (a sketch of this analysis follows the list below).
Step three:
According to these events, the EARME_Scene (software on the PowerBook) determines how the player reacts.
Step four:
Possible reactions are filters or effects on the real sound (e.g. pitch shifting), triggering samples and determining the spatial location of the output. Which reaction is triggered is composed by musicians using the EARME_Composer software and written into an EARME_Scene, which can be downloaded via the internet.
Step five:
The EARME reactions are mixed with the surrounding sound. The result is an auditory environment with completely altered features: your individual soundtrack for your life, for the situation you are in right now. A unique live concert, performed by your life.
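To make step two more concrete, here is a minimal sketch in Python with NumPy - purely for illustration, since the actual analysis happens inside Max/MSP, and the band edges here are an assumption - of measuring volume in eleven frequency ranges plus a crude attack-energy value per audio block (spatial location, which needs both binaural channels, is omitted for brevity):

    import numpy as np

    SR = 44100                                              # sample rate in Hz
    EDGES = np.logspace(np.log10(50), np.log10(16000), 12)  # 12 edges = 11 bands

    def analyse(block, prev_rms=0.0):
        """Return per-band energy, overall level and a crude attack measure."""
        spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
        freqs = np.fft.rfftfreq(len(block), 1.0 / SR)
        band_energy = [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
                       for lo, hi in zip(EDGES[:-1], EDGES[1:])]
        rms = float(np.sqrt(np.mean(block ** 2)))
        attack = max(0.0, rms - prev_rms)  # a rising level reads as attack energy
        return band_energy, rms, attack

Values like these are what an EARME_Scene listens to in step three.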
EARME_Player (software player) / EARME_Composer / EARME_Scenes are free to download at www.EARME.net.

Background:
The paradigm shift:
Composition, definition and perception of music have changed radically since the incorporation of electronic technology in the late 19th century. Advances in the technology of music production and creation directly triggered the development of new musical directions, of new genres of music. Until recently this was driven by hardware (tape deck, synthesiser etc.); now the focus has shifted to developments in digital music and software.
This vast and accelerating technological development leads to a major paradigm shift in music. EARME takes this shift into account and is an attempt to define a genre of music for the very near future.
Shift 1: perception and reception
What is perceived as music has been immensely expanded by technological progress. For example, not only are most people accustomed to hearing concrete, recorded sounds in compositions (an aesthetic made possible by the development of the microphone, the tape deck and, later, the sampler), but we have also witnessed a whole genre emerge out of the use of faulty digital sounds (clicks & cuts).
All this is in tune with the demands of the Futurists, who called for the surrounding sounds of the new industrial age to be included in the compositions of the 20th century. We have extended our soundscape and hearing range more and more, from harmonic tones to disharmonic noises. EARME incorporates the immediate surrounding sound into its output. It amplifies and extends the listener's aural awareness of their immediate surroundings.
Shift 2: real-time audio instead of MIDI sequencing.
Increasing computing power enables us to analyse and synthesise sound in real time, which makes it possible to recognise rhythmical structures as well as harmonic chords in the surrounding sound. The same innovation plays a major part in the field of speech recognition.
"The new real-time patchable software synthesizers have finally brought audio signal processing out of the ivory tower and into the home of working computer musicians. Now audio can be placed at the centre of real-time music production, and Midi, which for a decade was the backbone of the electronic music studio, can be relegated to its appropriate role as a low-bandwidth I/O solution for keyboards and other input devices. Many other sources of control input can be imagined than are provided by Midi devices." (Miller S Puckette)
EARME takes advantage of real-time sound analysis to build a virtual representation of the auditory surrounding.
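As an illustration of what this makes possible, a simple spectral-flux onset detector - one common way to recognise rhythmical structure in real time, and not necessarily EARME's own method - fits in a few lines of Python:

    import numpy as np

    def onsets(blocks, threshold=0.1):
        """Yield indices of blocks whose spectral energy rises sharply -
        a rough real-time marker for rhythmic events in the surrounding sound."""
        prev = None
        for i, block in enumerate(blocks):
            mag = np.abs(np.fft.rfft(block * np.hanning(len(block))))
            if prev is not None:
                flux = float(np.maximum(mag - prev, 0.0).sum()) / mag.size
                if flux > threshold:
                    yield i
            prev = mag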
Shift 3: time based to code based.
Music is not time-based anymore but process-based; music is processor-based, music is software-based, music is software, software is music.
This tendency became obvious with the rise of software tools like Logic Audio, SuperCollider and Reaktor, using a variety of plug-ins from Hyperprism to Max. These programs influenced music production so drastically that they are directly responsible for the emergence of new genres like drum and bass / jungle, which was made possible by the strict quantising grid of Cubase.
With current music software designed in a very computer-game-like manner, it is the actual process of interacting with the software, and thus creating music, that becomes important for the recipient. A growing number of users might not want to listen to the recorded outcome over and over; the act of creating already supplies the entertainment.
EARME is Code Based Media.