Michael Breidenbrücker


EARME

Wearable, reactive music ::: Player-Composer-Platform protocol for mixed reality software music


EARME [link 01]


Summary

Short Description

In contrast to static, time-based music, EARME is a dynamic setup that produces music which reacts in real time to its immediate acoustic surroundings.
Because this reactive music is based on, or overlaid with, the real acoustic environment, I call the resulting music mixed reality software music.
You can listen to EARME by downloading EARME_SCENES from www.earme.net onto a wearable minicomputer (e.g. an iPAQ) and connecting headphones with built-in microphones to the device.
EARME then analyses the incoming sound in real time (localisation of the acoustic source, pitch detection, attack detection, vocal detection, etc.) and reacts according to the composition (EARME_SCENE) downloaded from www.earme.net.
The best results are achieved in urban environments, where every acoustic object (e.g. cars, construction sites, people talking) is acoustically and semantically reprogrammed.
For example, EARME can place a sample of an ocean wave at the acoustic position of an approaching car, making the real car sound like an ocean wave (sketched below).
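
A minimal Python sketch of the idea behind this example, assuming the interaural level difference as the localisation cue; the function names, frame size and 20 dB scaling are illustrative assumptions, not EARME's actual Max/MSP implementation:

import numpy as np

def estimate_azimuth(left, right):
    """Rough azimuth in [-1 (left) .. +1 (right)] from the interaural level difference."""
    l_rms = np.sqrt(np.mean(left ** 2)) + 1e-12
    r_rms = np.sqrt(np.mean(right ** 2)) + 1e-12
    ild_db = 20.0 * np.log10(r_rms / l_rms)
    return float(np.clip(ild_db / 20.0, -1.0, 1.0))  # ~20 dB difference -> hard left/right

def pan_sample(mono, azimuth):
    """Equal-power pan of a mono sample to the estimated azimuth (stereo out)."""
    angle = (azimuth + 1.0) * np.pi / 4.0            # map [-1, 1] -> [0, pi/2]
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

# A car approaching from the right yields a positive azimuth; the ocean-wave
# sample is then rendered at that same position in the headphone mix.
frame_l = np.random.randn(1024) * 0.1                        # stand-in mic input
frame_r = np.random.randn(1024) * 0.4
wave = np.sin(2 * np.pi * 110 * np.arange(1024) / 44100)     # stand-in sample
stereo_out = pan_sample(wave, estimate_azimuth(frame_l, frame_r))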

Authors

  • Michael Breidenbrücker

Created

Austria, 2000-2001

Entry Submitted

michael@earme.net, 11.06.2002

Category

  • Hardware |
  • Software

Keywords

  • Topics:
    • Wearable Computing |
    • Perception |
    • Mixed Reality |
    • Non-linear Narration |
    • Music |
    • Urban Space |
    • Mobile Computing |
    • Sound |
    • Augmented Reality |
    • Narrative Intelligence
  • Formats:
    • Software |
    • Virtual Environment |
    • Audio
  • Technology:
    • Acoustic Tracking

Additions to the Keyword List

  • Real-time Audio Analysis

Content

Description

Imagine
You are walking around the city and everything you hear is part of a composition. The car passing by is the bass line, the distant hum of human activity turns into a melody, clicks and noises trigger a rhythm whose position bounces like pictures mirrored in a shop window. Someone talking turns into melodic rap. Your acoustic surroundings are digitally enhanced; your ears are part of a virtual world as well as the real one. Your aural awareness can no longer distinguish between reality and virtuality.

Multiple realities - augmented reality.
You are wearing headphones with built-in microphones. Everything you hear is processed by a small walkman-like computer, which is the interface between you and the world. This computer analyses the incoming sound in real time, detecting its position in space as well as its frequency range, harmonic consistency, volume, peaks and attack energy. Depending on these input values, a composition decides how the digital system reacts. It could trigger samples and place them in the same spatial position as the analysed input (which makes a car sound like an ocean wave), pitch the input sound according to an internal melody, or sample the sound for later use. All this depends on the loaded composition.
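
Of the analyses named above, attack-energy detection is the simplest to sketch. The following Python fragment is a hedged stand-in for what the EARME software does in Max/MSP; the frame size and threshold are assumptions:

import numpy as np

FRAME = 512        # samples per analysis frame (assumption)
THRESHOLD = 1.8    # an attack is a sudden jump in frame energy (assumption)

def attack_events(signal):
    """Yield the indices of frames whose energy jumps sharply above the previous one."""
    prev_energy = 1e-12
    for i in range(0, len(signal) - FRAME, FRAME):
        energy = float(np.sum(signal[i:i + FRAME] ** 2))
        if energy / prev_energy > THRESHOLD and energy > 1e-4:
            yield i // FRAME          # this frame carries an attack -> trigger a reaction
        prev_energy = max(energy, 1e-12)

# Example: a burst of noise in otherwise quiet input registers as one attack.
sig = np.zeros(44100)
sig[22050:22050 + 512] = np.random.randn(512)
print(list(attack_events(sig)))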

The composition is the software, the software is music.
You are back home, connecting your device to http://www.EARME.net. Somebody has composed a new scene and loaded it onto the platform. 300 KB, downloaded in about 30 seconds, gives you music that does not repeat itself for months. The new dogma: code-based, not time-based.

Compelling conception / How it works.
EARME software is programmed in Max/MSP, courtesy of IRCAM Paris. It is currently implemented on Apple Macintosh computers. The scenario above was experienced using an Apple PowerBook (EARME_Player), Soundman binaural microphones and Sony noise-cancelling headphones.
Rather than being static, time-based music, EARME is a dynamic setup producing music that reacts to your immediate acoustic surroundings in real time. Because this reactive music is based on - or overlaid with - your real acoustic surroundings, we call the music it produces mixed reality software music.
With EARME, music returns to being a unique event: an experience produced collaboratively by people and their surroundings at a particular time.

In five steps from you through EARME and back into your ears.
Step one:
The binaural microphones mounted on the noise-cancelling headphones digitize all incoming sound.
Step two:
The EARME_Player (Apple PowerBook + EARME software player) analyses the incoming sound in real time, extracting attributes such as the volume in eleven frequency bands, attack energy and spatial sound location (see the sketch after this list).
Step three:
Based on these events, the EARME_Scene (software on the PowerBook) determines how the player reacts.
Step four:
Possible reactions include filters or effects applied to the real sound (e.g. pitch shifting), triggering samples, and setting the spatial location of the output. Which reaction is triggered is composed by musicians using the EARME_Composer software and written into an EARME_Scene, which can be downloaded via the Internet.
Step five:
The EARME reactions are mixed with the surrounding sound. The result is an auditory environment with completely altered features. It is your individual soundtrack for the situation you are in, right now: a unique live concert, performed by your life.
EARME_Player (software player), EARME_Composer and EARME_Scenes are free to download at www.EARME.net.
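
Read together, steps two to four form a small processing loop: measure levels, look them up in the loaded scene, emit reactions. The Python sketch below illustrates this loop under invented assumptions; the real band layout and the EARME_Scene file format are not documented here, so the band edges, the scene mapping and all names are hypothetical:

import numpy as np

SR = 44100
N = 2048
# Eleven logarithmically spaced bands (an assumption about the band layout).
EDGES = np.geomspace(40, 16000, 12)

def band_levels(frame):
    """Step two: volume in eleven frequency bands from one windowed FFT frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SR)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(EDGES[:-1], EDGES[1:])])

# Steps three and four: a "scene" is reduced here to a mapping from band index
# to a reaction name -- a toy stand-in for a composed EARME_Scene.
scene = {0: "trigger bass sample", 7: "pitch-shift input"}

def react(levels, threshold=5.0):
    """Return the reactions whose bands exceed the threshold."""
    return [action for band, action in scene.items() if levels[band] > threshold]

frame = np.random.randn(N)          # stand-in for one microphone frame
print(react(band_levels(frame)))    # step five would mix these reactions back in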

Background:
The paradigm shift:
Composition, definition and perception of music have changed radically since the incorporation of electronic technology in the late 19th century. Advances in the technical field of music production and creation directly triggered the development of new musical directions, of new genres of music. Until recently this was based on hardware (tape deck, synthesizer, etc.); the focus has now shifted to developments in digital music and software.
This vast and still increasing technological development leads to a major paradigm shift in music. EARME takes this shift into account and attempts to define a genre of music for the very near future.
Shift 1: perception and reception
What is perceived as music has been immensely expanded by technological progress. For example, not only are most people accustomed to hearing concrete, recorded sounds in compositions (an aesthetic made possible by the development of the microphone, the tape deck and, later, the sampler), but we have also witnessed a whole genre emerging out of the use of faulty digital sounds (clicks & cuts).
All this is in tune with the demands of the Futurists, who called for the surrounding sounds of the new industrial age to be included in the compositions of the 20th century. We have extended our soundscape and hearing range more and more, from harmonic tones to disharmonic noises. EARME incorporates the immediate surrounding sound into its output. It amplifies and extends the listener's aural awareness of their immediate surroundings.
Shift 2: real-time audio instead of MIDI sequencing.
Increasing computing power enables us to analyse and synthesise sound in real time, which means it is possible to recognise rhythmical structures as well as harmonic chords in the surrounding sound. This innovation also plays a major part in the field of speech recognition.
"The new real-time patchable software synthesizers have finally brought audio signal processing out of the ivory tower and into the home of working computer musicians. Now audio can be placed at the centre of real-time music production, and Midi, which for a decade was the backbone of the electronic music studio, can be relegated to its appropriate role as a low-bandwidth I/O solution for keyboards and other input devices. Many other sources of control input can be imagined than are provided by Midi devices." (Miller S Puckette)
EARME takes advantage of real-time sound analysis to build a virtual representation of the auditory surroundings.
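
As a concrete instance of the control sources Puckette has in mind, the sketch below derives a pitch from live audio by autocorrelation, so that a detected frequency, rather than a MIDI note number, drives a melody. This is an illustrative approximation in Python, not the analysis EARME actually runs:

import numpy as np

SR = 44100

def detect_pitch(frame, fmin=60.0, fmax=1000.0):
    """Rough fundamental frequency (Hz) via autocorrelation."""
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(SR / fmax), int(SR / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))   # strongest periodicity in range
    return SR / lag

# The detected pitch, not a MIDI message, becomes the control signal:
t = np.arange(2048) / SR
frame = np.sin(2 * np.pi * 220.0 * t)        # stand-in for a voice in the street
print(detect_pitch(frame))                   # ~220 Hz could now drive a melody
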
Shift 3: time-based to code-based.
Music is no longer time-based but process-based; music is processor-based, music is software-based, music is software, software is music.
This tendency became obvious with the rise of software tools like Logic Audio, SuperCollider and Reaktor, using a variety of plug-ins from Hyperprism to Max. These programs influenced music production so drastically that they are directly responsible for the emergence of new genres like drum and bass / jungle, which was made possible by the strict quantising grid of Cubase.
With current music software designed in a very computer-game-like manner, it is the actual process of interacting with the software, and thus creating music, that becomes important for the recipient. A growing number of users may not want to listen to the recorded outcome over and over; the act of creating already supplies the entertainment.
EARME is Code Based Media.

Technology

Hardware / Software

Hardware:
Sony noise-cancelling headphones.
Soundman binaural microphones.
Portable computer (preferably an Apple PowerBook + Griffin audio2usb).
Software:
EARME_PLAYER (www.earme.net)
EARME_SCENES (www.earme.net)
The EARME software is programmed in Max/MSP, courtesy of IRCAM Paris.

License

  • other

Context

University / Department

Universität für angewandte Kunst Wien
Audio-Visuelle Mediengestaltung

University URL

» http://www.vis-med.ac.at [link 02]

Project Supervisor

Prof. Karel Dudesek

Supervisor's Comment

Without exaggeration, Earme can be called the best project of my time as head of the media class at the University of Applied Arts. It is a mixed reality project that uses real-time sound analysis to overlay virtual acoustic spaces onto the listener's real acoustic environment.
For the listener, the semantics of object and object sound are redefined. For composers, Earme offers the possibility of a dynamic arrangement that takes the immediate acoustic surroundings into account.
In this project, Michael Breidenbrücker has not only succeeded in combining mixed reality, net application and wearable computing; he has also created a new music genre, which he has named mixed reality software music.
The conception and realisation of Earme impress through outstanding competence in, and understanding of, media platforms, hardware and software. In this spirit, I give the project Earme by Michael Breidenbrücker my highest recommendation for the Digital Sparks competition.

Seminar / Short Description

Earme was created as part of Michael Breidenbrücker's diploma thesis.

Research Area

Mixed Reality / Wearable Computing / Media Platforms

  • › digital sparks 2002 [link 03]

» http://www.earme.net [link 04]

  • ›  [Microsoft® Word | 430 KB ] [link 05]
  • › earme_projectdescription [Microsoft® Word | 1 MB ] [link 06]