
I congresso internacional de música electroacústica de aveiro

1st international congress for electroacoustic music in aveiro

a tecnologia ao serviço da criação musical


technology as a tool for creating music

coordenação / coordinator
Diana Ferreira
Isabel Soveral
Helena Santana

actas / proceedings
eaw2015
eaw2015 a tecnologia ao serviço da criação musical 1


Título: EAW2015 - A Tecnologia ao Serviço da Criação Musical

Coordenação do Congresso: Isabel Soveral (INET-MD)

Coordenação do livro de actas: Helena Santana (INET-MD); Isabel Soveral (INET-MD); Diana Ferreira (Arte no Tempo)

Capa e paginação: Nuno Dias (ID+LED)

Edição: UA Editora / Universidade de Aveiro – INET-MD / Arte no Tempo

1ª edição: novembro de 2015

ISBN: 978-972-789-462-8

Catalogação recomendada

Congresso Internacional de Música Electroacústica de Aveiro, 1, Universidade de Aveiro, 2015

EAW2015 [Recurso eletrónico]: a tecnologia ao serviço da criação musical: actas / 1º Congresso Internacional de Música Electroacústica de Aveiro; coord. Isabel Soveral, Helena Santana, Diana Ferreira. - Aveiro: UA Editora, 2015. - 196 p.: il.

Requisitos do sistema: Adobe Acrobat

ISBN 978-972-789-462-8

Música electroacústica // Tecnologia musical // Teoria da música

CDU 789.1/.9

Este livro não foi escrito ao abrigo do novo acordo ortográfico

http://eaw.web.ua.pt/

Todas as imagens e gráficos foram cedidos pelos autores dos artigos e pertencem aos autores ou foram retirados de espaços da web onde se encontravam disponíveis.

Os conteúdos disponibilizados nos diferentes artigos são da responsabilidade dos seus autores.

Comissão Científica
Isabel Soveral
Helena Santana
John Chowning
Jean-Claude Risset
António Sousa Dias
Michael Iber
Ludger Brümmer
Luís Antunes Pena
Pedro Rodrigues
Monika Streitová
Madalena Soveral
Daria Semegen
João Pedro Oliveira
Jorge Correia
José Tomás Henriques
Flo Menezes
Alexander Mihalič
Benoit Gibson
Miguel Azguime

Comissão Executiva
Isabel Soveral
Helena Santana
Diana Ferreira
Helder Caixinha
Susana Caixinha
Monika Streitová
Nuno Dias
António Veiga
Tiago Lestre
Aoife Hiney
Francisco Berény
Henderson Rodrigues
Márlou Vieira

Comissão Organizadora
Isabel Soveral
Helena Santana
Pedro Rodrigues
Diana Ferreira

Índice | Index

a tecnologia ao serviço da criação musical ..................................................................................................... 5


technology as a tool for creating music ............................................................................................................ 6

I. Síntese | Synthesis ............................................................................................................................... 7
1.1 The Power of Digital Sound Synthesis – a personal view .......................................................................................................................................... 8
1.2 Some recollections of the early days of computer music ......................................................................................................................................... 14
1.3 Instruments Obedient to His Thought: Edgard Varèse's sound ideals and the actual capabilities of electronic resources he had access to ......... 24
1.4 A Pure Data Spectro-Morphological Analysis Toolkit for Sound-Based Composition .............................................................................................. 31
II. Som e Imagem | Sound and Image ......................................................................................................... 39
2.1 Vertiges de l’image: a personal account on an audiovisual improvisation project ................................................................................................... 40
2.2 Low Cost, Low Tech – composing music with electroacoustic means, not being an electroacoustic composer ..................................................... 47
2.3 Som, corpo, movimento e cena na trilogia de solos "música do pensamento" para piano semi-preparado e técnicas expandidas de Joana Sá .. 51
2.4 Poème électronique: Uma obra hipertextual ............................................................................................................................................................ 59
2.5 Particle system ......................................................................................................................................................................................................... 65
III. Composição e ruído | composition and noise ........................................................................................ 68
3.1 Concealed Rhythmic Interactions Between Live-Electronics and Instrument in “Spacification” ............................................................................... 69
3.2 E se o ruído estruturar a forma? .............................................................................................................................................................................. 74
3.3 Quando os dialetos da língua portuguesa europeia falada se transformam em elementos composicionais........................................................... 81
3.4 Autómatos da Areia (1978/84), Lendas de Neptuno (1987) e Oceanos (1978/79) de Cândido Lima: como o som concreto se torna outro nos
primeiros trabalhos electroacústicos do compositor ........................................................................................................................................................... 89
IV. Espacialização | Spatialization ................................................................................................................ 95
4.1 Polytopes de Iannis Xenakis; determinações arquiteturais de som, luz e cor ......................................................................................................... 96
4.2 A interação relacional existente entre o espaço construído e o espaço percebido no discurso da música eletroacústica, sob a visão de filósofos
do século XX ..................................................................................................................................................................................................................... 102
4.3 Parametric loudspeakers array technology: a 4th dimension of space in electronic music? ................................................................................. 109
4.4 Spatial Hearing and Sound Perception in Musical Composition ............................................................................................................................ 116
4.5 Why Yet Another Sound Spatialisation Tool? ........................................................................................................................................................ 126
4.6 Introducing the Zirkonium MK2 System for Spatial Composition ........................................................................................................................... 130
V. Interpretação/performance | Interpretation/performance .................................................................... 137
5.1 Desiring Machines – A Decentred Approach to Interactive Composition ............................................................................................................... 138
5.2 Instalação Sonora: Transformação A Partir Dos Contextos Espaciais .................................................................................................................. 143
5.3 Performance instrumental aliada à música eletroacústica: transformações na relação intérprete–composição ................................................... 149
5.4 New advancements of the research on the augmented wind instruments: Wind-back and ResoFlute ................................................................. 155
5.5 Soundgrounds: An approach to the evolutionary aesthetics of computer mediated music performance ............................................................... 164
VI. Outras áreas | Other fields ..................................................................................................................... 169
6.1 A inversão das distâncias [1] – do som do corpo ao corpo do som ....................................................................................................................... 170
6.2 Forme Immateriali by Michelangelo Lupone: structure, creation, interaction, evolution of a permanent adaptive music work. ............................. 176
6.3 “Breakfast Serialism” for laptop orchestra and improvisers: A study on twelve-tone indeterminacy using computer mediation ............................ 184
6.4 From live to interactive electronics. Symbiosis: a study on sonic human-computer synergy. ................................................................................ 190

eaw2015 a tecnologia ao serviço da criação musical


Em 1997, os compositores Isabel Soveral e João Pedro Oliveira criam o Centro de Investigação em
Música Electroacústica (CIME), como espaço de criação e investigação nesta área.
Atualmente dirigido por Isabel Soveral, o CIME lançou em 2014 a plataforma de comunicação
‘Electroacoustic Winds’, procurando difundir o trabalho realizado neste estúdio, assim como
estabelecer relações com parceiros internacionais visando a divulgação e o desenvolvimento da
música nacional.
Em 2015, o ‘Electroacoustic Winds 2015’ (eaw2015) – organizado sob a coordenação de Isabel
Soveral, pelo CIME, o Instituto de Etnomusicologia – Centro de Estudos em Música e Dança (INET-
MD) e a Arte no Tempo – assume-se como um Congresso que inclui não só a apresentação oral de
diversos trabalhos de pesquisa nacionais e internacionais, como a realização de diferentes
actividades de formação e divulgação da música electroacústica, procurando cobrir as áreas de
criação, teoria e tecnologia da música.
Neste contexto, o eaw2015 conta com a participação especial dos compositores americanos John
Chowning (1934) e Daria Semegen (1946), bem como do compositor francês Jean-Claude Risset
(1938). Inclui ainda um conjunto de acções de formação para músicos e técnicos, conferências e
concertos e outras actividades de divulgação. Deste modo, acredita-se cumprir o duplo propósito de
incentivar a investigação e a partilha de conhecimento e, simultaneamente, divulgar e fazer fruir um
género musical de que o público se encontra ainda pouco próximo, perseguindo a ideia de que é no
confronto com objetos artísticos do mais elevado interesse que se conquista um novo público.
Incluindo uma abordagem historiográfica à investigação na área, o congresso cobre um variado
conjunto de temáticas relevantes no panorama da atual investigação em música electroacústica,
nomeadamente a sonificação, o som e imagem, a composição e ruído, a composição algorítmica, a
espacialização e a síntese, bem como diferentes questões relativas à interpretação/performance e à
preservação em electrónica ao vivo.
Estas áreas definem a estrutura da publicação ora apresentada, onde constam artigos propostos por
oradores convidados, bem como por todos os comunicantes que apresentaram as suas propostas à
Comissão Científica do Congresso.
A Comissão Científica, composta por Isabel Soveral, Helena Santana, John Chowning, Jean-Claude
Risset, António Sousa Dias, Michael Iber, Ludger Brümmer, Luís Antunes Pena, Pedro Rodrigues,
Madalena Soveral, Daria Semegen, João Pedro Oliveira, Jorge Correia, José Tomás Henriques,
Flo Menezes, Alexander Mihalič, Benoît Gibson e Miguel Azguime, selecionou os referidos trabalhos
segundo as regras da arbitragem duplamente cega, de que resulta a presente publicação.

A Comissão Organizadora

eaw2015 technology as a tool for creating music


In 1997, the composers Isabel Soveral and João Pedro Oliveira founded the Centro de Investigação
em Música Electroacústica (CIME - Research Centre for Electroacoustic Music), as a space for
creation and for research in this field.
Currently directed by Isabel Soveral, in 2014 CIME launched the communication platform
‘Electroacoustic Winds’, aiming to promote the work produced in this studio, but also to establish
relations with international partners, in order to disseminate and develop Portuguese music.
In 2015, ‘Electroacoustic Winds 2015’ (eaw2015) – coordinated by Isabel Soveral and organized in
conjunction with CIME, the Instituto de Etnomusicologia – Centro de Estudos em Música e Dança
(Institute of Ethnomusicology, Centre for Studies in Music and Dance - INET-MD) and Arte no Tempo -
is a Congress which not only includes oral presentations of national and international research, but
also several educational and promotional activities in the area of electroacoustic music, which aim to
address issues such as creating music, music theory and music technology.
In this context, eaw2015 has invited the American composers John Chowning (b. 1934) and Daria
Semegen (b. 1946) to participate, as well as the French composer Jean-Claude Risset (b. 1938). The
congress also includes a set of educational events for musicians and technicians, conferences,
concerts and other promotional activities. In this way, we hope to achieve our aims of encouraging
research and the sharing of knowledge whilst simultaneously promoting and disseminating a musical
genre with which the public is still largely unfamiliar, in accordance with the idea that it is only through being
confronted with artistic objects of the utmost interest that a new public is won.
Including a historiographical approach to research in this area, the congress covers a variety of
relevant themes in the panorama of current investigation in Electroacoustic music, such as
sonification, sound and image, composition and noise, algorithmic composition, spatialization and
synthesis, as well as addressing different questions relating to interpretation/performance and the
preservation of live electronics.
These areas define the structure of this publication, which includes the articles presented by the
guest speakers, in addition to contributions from all the delegates who submitted their proposals to
the Congress’ Scientific Committee.
The Scientific Committee, comprised of Isabel Soveral, Helena Santana, John Chowning, Jean-
Claude Risset, António Sousa Dias, Michael Iber, Ludger Brümmer, Luís Antunes Pena, Pedro
Rodrigues, Madalena Soveral, Daria Semegen, João Pedro Oliveira, Jorge Correia, José Tomás
Henriques, Flo Menezes, Alexander Mihalič, Benoît Gibson and Miguel Azguime, selected the
proposals for inclusion through a double blind refereeing process.

The Organizing Committee



I. Síntese | Synthesis

1.1 The Power of Digital Sound Synthesis – a personal view

John M. Chowning
Center for Computer Research in Music and Acoustics (CCRMA), Stanford University

Abstract

There were no sound generating options other than synthesis in the early years of computer music. While the sampling theorem defined by Claude Shannon (Shannon & Weaver, 1949) applied equally to digital recording and digital synthesis, the cost of memory prohibited recording, focusing attention on synthesis. The originators of computer music at Bell Telephone Laboratories (BTL), having strong backgrounds in the engineering sciences and knowledge of music, saw early on that longstanding questions about acoustics, perception, tuning and timbre could be posed. They not only found answers, but created short musical works as demonstrations. Those who followed found in their work paths that extended the scope of sound synthesis, auditory perception, and music composition.

Keywords: Digital Sound Synthesis, Additive Synthesis, Non-linear Synthesis, FM Synthesis, Spectra, Tuning, Timbre.

Introduction

In the contemporary world of creating music in the digital domain, the primary source of acoustic material is sampled sounds, whether originally recorded by the composer, or more often, copied from the millions of sampled sounds available online.

There is a rich history of transforming recorded sounds that reaches back to the years following World War II, when Pierre Schaeffer first embraced the tape recorder as a medium for creating music from transformed sounds — musique concrète. E. Gayou has fully documented this important musical movement and ongoing institution in Le GRM, Groupe de recherches musicales: cinquante ans d'histoire (Gayou, 2007), which remains a fully active part of the French Radio.

However, in the first two decades of computer music, the sole means of producing sound was synthesis on computers that were huge, costly and had little memory. Sound generating computer programs synthesized sound samples that were written to memory — digital magnetic tape — for playback through a digital-to-analog converter (DAC). Typically, the DAC was connected to a digital tape reader that included a buffer memory to produce a continuous sample stream. The time from the beginning of the process to the sound output could be hours, or even days.

Why then did any composers/researchers choose to use computers rather than the realtime analog systems that were present in electronic music studios at many universities, and even becoming available in the 1960s as synthesizers? One important reason was cost. Curiously, although computers cost many tens of thousands of dollars, they are general purpose and can be shared across many different sectors. Their cost was thus justified, and they became increasingly present at colleges and universities. Composers/researchers were able to gain free access at off hours, when the funded researchers were not at work, whereas a “synth” costing even a few thousand dollars was beyond the means of young composers.

A second important reason was the conceptual simplicity of computers. If a composer could learn to program, the only special equipment required other than the computer and audio system were the DACs, which were also useful to funded research in the hearing sciences and engineering, for example — in short, the expensive but fixed machinery was already there. To create music for loudspeakers, therefore, there was no requirement to understand the complexity of cables and devices typical of electronic music studios. But there were requirements: understanding the auditory system, acoustic theory, and patience. This article will focus on the intrinsic power of synthesis, especially that which is exclusive to the digital domain.

Synthesis and Timbre

There were very simple means of synthesis in the early days of computer music fifty years ago. The simplest waveforms were based upon pulses and connected lines, as in triangular, saw-tooth and square waves. They required little memory, which was a critical limitation in computers of the era. One additional and essential waveform was the sine wave or pure tone. But it was costly, as a table of values had to be stored, typically 512 in number. By the time that Mathews had advanced his music programs to Music IV (Mathews, 1963), the technology of computers had also advanced such that the processing was sufficiently fast to allow many simultaneous waves to be computed and stored on digital tapes for later playback through digital-to-analog converters.

Additive Synthesis

There is little doubt that the most important breakthrough in the early days of computer music occurred when Jean-Claude Risset and Max Mathews began detailed computer studies in the analysis, synthesis and perception of acoustic instrument tones. The most important work in regard to synthesis of complex timbres began with the detailed analysis of trumpet tones by Risset (1965). Making use of analytical tools available to him at BTL, Risset’s study revealed the “signature” of the trumpet timbre — and by extension all brass tones — such that synthesized tones were indistinguishable from the recorded tones. (Risset’s insight regarding the correlation of intensity and spectral bandwidth led to a major advance in my development of FM Synthesis.)

Risset created natural sounding complex timbres by summing a number of sinusoids, where each sinusoid had its own independent control over amplitude envelope and frequency through time and could be harmonically or inharmonically related.
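The principle Risset exploited, a tone as a sum of sinusoids each under independent control, can be sketched in a few lines. This is an illustrative toy, not Risset's actual Music V instruments; the partial frequencies, amplitudes and decay rates below are invented for the example.

```python
import math

def additive_tone(partials, dur=1.0, sr=8000):
    """Sum of sinusoids, each with its own frequency and its own
    amplitude envelope (here a simple exponential decay), after the
    principle of Risset's analysis-by-synthesis instrument tones.
    `partials` is a list of (freq_hz, peak_amp, decay_rate) triples."""
    n = int(dur * sr)
    out = [0.0] * n
    for freq, amp, decay in partials:
        for i in range(n):
            t = i / sr
            # each partial carries its own envelope, independent of the rest
            out[i] += amp * math.exp(-decay * t) * math.sin(2 * math.pi * freq * t)
    return out

# An inharmonic, gong-like stack: nothing requires the partials to be
# whole-number multiples of a fundamental.
gong = additive_tone([(220.0, 1.0, 3.0), (362.1, 0.7, 2.0), (571.3, 0.5, 1.2)])
```

In a real instrument simulation the exponential decays would be replaced by the measured, line-segment envelopes extracted from the analysis stage.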

Following this line, Risset (1971) then developed a catalog of computer-synthesized sounds that specified sensitively rendered tones and sound textures in full detail. This catalog is included in a booklet accompanying a Wergo CD with sound examples (Pierce, Mathews & Risset, 1995, p. 109-254). Risset’s computer code and functions are written in the Music V input language developed by Mathews, Miller, Moore & Pierce (1969). The closest synthesis program to Music V in a modern language is Csound, fully described by Boulanger (2000).

Risset’s capability of simulating natural-sounding tones presupposes an understanding of the perceptual relevance of the physical stimuli, only some of which have been “selected” as meaningful by the auditory system. In his work Risset relied upon synthesis to confirm that his keen ear was on target, winnowing out any extraneous signal information. His work not only set the benchmark for high quality sound synthesis, but he left his method, analysis by synthesis, on the workbench. Thus began the era of careful listening.

With Risset’s work the medium of computer music reached a level that gave promise to Mathews’ correct but abstract assertion that computers (coupled with loudspeakers) can produce any perceivable sound (Mathews, 1963).

Non-linear Synthesis

Because non-linear synthesis has no direct relationship to standard acoustic theory, analysis by synthesis became of great importance to this family of synthesis methods. As I worked on FM synthesis through the 1960s and 1970s, I made full use of this method. Beginning in the late 1970s other researchers also made use of the analysis by synthesis method in developing other non-linear methods of synthesizing complex sounds, including wave-shaping, granular synthesis, etc.

Because FM synthesis is the simplest of the non-linear methods, it is the most widely used, and because I have used it exclusively in my compositions, I treat it separately.

FM Synthesis

The discovery of FM synthesis was in 1967. It was not a purposeful search — that is, stemming from a realization, from looking at the equation below, that there were interesting experiments to try. Rather, it was altogether a discovery of the “ear.”

In its simplest form, frequency modulation synthesis has few parameters to control the formation of waveforms (Chowning, 1973), and it is fortuitous that these few parameters have such remarkable relevance to the auditory system.

e(t) = A sin(2π fc t + I sin(2π fm t))

Equation XX – The FM equation from which both FM synthesis and FM radio theory are derived.

While experimenting with extreme vibrato, I discovered that complex dynamic spectra could be produced with only two oscillators (Figure 1). Furthermore, the spectra could vary through time from a pure tone to a complex tone having either harmonic or inharmonic partials (Figure 2).

Figure 1 – The Music IV “instrument” unit generators that were used in the discovery of FM synthesis in the Autumn of 1967. This was well known to the community of users at BTL and at a few universities, but no one had realized the hidden richness to be found through four control parameters and two functions.

The modulation in digital domain FM synthesis is linear, which explains why analog synthesizers of the era (Moog, Arp, Oberheim, etc.) were unable to produce the same complex tones with voltage control modulation, which is log frequency.

FM synthesis was first produced in real-time by Barry Truax in 1973 on a Digital Equipment PDP-9, while he was pursuing graduate work at the Institute of Sonology in Utrecht. A few years later, Truax (1977) wrote an important paper elevating the understanding of carrier-to-modulating frequency ratios in FM synthesis. Also in 1973-74, YAMAHA Corporation in Japan began work on FM synthesis in real-time using purpose-built processing hardware.

Figure 2 – A dynamic FM spectrum where the carrier and modulating frequency ratio c:m = 1:1 and the modulation index increases from 0 to 4. The increase in bandwidth is shown as the partials increase in number with increase of the modulation index. The envelopes of the partials are determined by Bessel functions as described by Chowning (1973) in his FM paper and by Schottstaedt: https://ccrma.stanford.edu/software/snd/snd/fm.html
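The simple FM equation translates almost line for line into code. The sketch below is a minimal, non-real-time illustration; the sample rate, frequencies and index are arbitrary choices for the example, not values from Chowning's paper, and a time-varying index, as in Figure 2, would simply replace the constant with an envelope.

```python
import math

def fm_tone(fc, fm, index, dur=0.5, sr=8000, amp=1.0):
    """Simple FM: e(t) = amp * sin(2*pi*fc*t + index * sin(2*pi*fm*t)).
    Sidebands appear at fc +/- k*fm with amplitudes governed by the
    Bessel functions J_k(index); a c:m ratio of 1:1 therefore yields
    a harmonic spectrum built on fc."""
    n = int(dur * sr)
    return [amp * math.sin(2 * math.pi * fc * (i / sr)
                           + index * math.sin(2 * math.pi * fm * (i / sr)))
            for i in range(n)]

# c:m = 1:1 with a fixed index of 4 -- one instant of the dynamic
# spectrum shown in Figure 2, where the index sweeps from 0 to 4.
tone = fm_tone(200.0, 200.0, 4.0)
```

Note that the modulation is applied inside the sine's argument in linear frequency, which is exactly the property the analog, log-frequency voltage-controlled oscillators of the era could not reproduce.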

Other Non-linear Synthesis Methods

The success of FM synthesis awakened a number of researchers, well schooled in mathematics and/or engineering, to other methods of synthesis that would rely upon analysis by synthesis to reach target goals. I will only mention these methods with their reference, while noting that each has its own unique set of attributes that are distinctly different from any other.

Arfib (1979), working with Risset in France, and Le Brun (1979), working as a guest researcher with James (Andy) Moorer at CCRMA, concurrently invented the same method for synthesizing complex spectra. Arfib’s paper, “Digital Synthesis of Complex Spectra by Means of Multiplication of Nonlinear Distorted Sine Waves,” and Le Brun’s “Digital Waveshaping Synthesis” both became known as “Waveshaping Synthesis,” for reasons of parsimony. A number of successful works have been composed using this method.

In the first years of IRCAM, Gerald Bennet, director of the department Diagonal, invited Johan Sundberg, from the Technical University in Stockholm, to work with X. Rodet in developing a novel method of synthesizing the singing voice referred to as FOF (Rodet, 1980). The method grew into a program, Chant, and part of a composition complex known as Chant-Formes. See http://anasynth.ircam.fr/home/english/media/singing-synthesis-chant-program

Another major development was by C. Roads (1988), who carefully specified the synthesis method known as granular synthesis. This method, too, has been used in many compositions, especially those that are highly textural, where boundaries between familiar and unfamiliar sounds are blurred.

Timbre and Tuning

From the very beginning of computer music at BTL, there was the idea that the relationship between timbre and tuning could be explored in a manner that was never before possible. Theories of tuning and spectra could be tested and listened to, with a precision that exceeded the auditory system’s ability to discriminate.

In fact, the very first piece of music produced at BTL using Mathews’ Music I program was titled In the Silver Scale (1957), by Newman Guttman, one of Mathews’ colleagues. No one remembers what the scale was, nor wants to listen to the short piece enough times to figure it out.

Tuning & Complementary Spectra

Among the earliest compositions produced with Max Mathews’ music synthesis programs by computer (Mathews, 1963, p. 553-557), there were a few works that demonstrated the unique capability of the computer to produce sounds that could not be produced by any other means — sounds that were abstract, detached from the laws of physics governing vibrating bodies and columns of air. There are two that I will focus on because they served as root ideas in three of my own works.

Pierce’s Eight-Tone Canon

In his Eight-Tone Canon (1966) Pierce divided the octave into eight equal steps: the even-numbered steps (equal to 0, 3, 6 and 9 in a twelve-step division of the octave) and the odd-numbered steps, each forming a diminished seventh chord. But what is interesting about this short piece is that Pierce used tones composed not of the harmonic series, but of sums of sinusoids that progress from sinusoids at the octave to half-octave (tritone) to quarter-octave, with each iteration of the canon. Odd pitches with odd pitches, or even with even, are consonant, while odd with even are dissonant. See Figure 3. Except for the octave, the spectra are inharmonic, but composed of frequencies that are common to the uncommon pitch space (Pierce et al., 1995, p. 8, CD Track 17). Of course, with some instruments it is possible to arrive at a tuning that is very close to eight equal divisions of the octave, but what is not possible is to perform that tuning with instruments whose spectra are constructed of partials having the same frequencies.

Figure 3 – With a harmonic series tone as a reference, Pierce’s artificial series tones are shown at the octave, half-octave and quarter-octave. The equal-division eight-note scale and the tones are composed of the same frequencies and therefore complementary (Pierce et al., 1995, p. 8, CD Track 17).

Risset’s Mutations

In his analysis-synthesis studies, Risset realized, in creating natural sounding complex timbres by summing numbers of sinusoids, that he had, for the first time, unlocked a sound spectrum from any physical constraints, as each sinusoid had its own independent control over intensity and frequency through time, and its partials could be either harmonic or inharmonic.

Risset created tones that cannot exist in the natural world, complex timbres where the partials themselves are a part of the pitch space. In the opening passage of Mutations we hear a melodic pattern, the pitches of which are sustained and heard as harmony, and then we hear a gong-like tone, but one that has very special properties, because the partials are at the very same frequencies as the pitches from the melodic pattern. Because of the Gestalt perceptual law of “common fate,” the attack envelopes of the partials all begin together and fade away in order from high to low, forcing the percept of a single sound object: the partials are heard not as pitches but as timbre, a single inharmonic

spectrum “imprinted” with the pitches, as shown in Figure 4.

Quite different from the tones in Pierce’s canon, these tones are complex, having attributes of real-world sound, where only with careful listening in context will the listener hear that a physical and perceptual divide between pitch and timbre has been bridged — for the first time ever.

As the piece unfolds, Risset extends this idea by the manipulation of amplitude envelopes, causing partials to cohere as sound objects and then separate into supple textures.

It is this work that nearly a decade later provided the idea for Phonē (1981).

Figure 4 – In 1969 Jean-Claude Risset composed Mutations, in which he creates a seamless link between pitch and timbre. Because of “common fate,” the partials are heard not as pitches but as timbre, a single inharmonic spectrum “imprinted” with the pitches (Pierce et al., 1995, p. 251–252, CD Track 45).

Structured Spectra and Tuning

All spectra have structure, but not all spectra are structured. Indeed, nature provides structure, but from the point of view of the instrument builder, nature is sometimes capricious: instruments intended to be fine instruments are sometimes left partially built and abandoned, or reduced in price, because of unseen flaws in the material.

Every musical instrument, including the singing voice, has a unique signature that sets it apart from every other in its class. But in normal modes of vibration there are commonalities in the manner that nature arranges the partials: in columns of air, strings, pipes and tubes, the partials fall into the harmonic series, whereas in membranes and formed metal the partials are not arranged in a similarly ordered manner. As we have shown, in Mutations Risset has created the sound of a formed metal object, a gong, where the partials are perfectly ordered in a manner that in nature could not be — but the percept of a gong is perfectly preserved. This fact is implicative and in 1979 led me directly to the preparatory work for Phonē (1981).

Mathews and Pierce (1980) studied the question of “Harmony and nonharmonic [sic] partials,” perhaps motivated by Pierce’s early work and Risset’s work, as well as my own work in synthesis of the singing voice in 1978-79 (Chowning, 1980) and study of continuous signal-level metamorphosis of sung vowels to metallophone (Chowning, 1990).

Stria, Phonē and Voices

Three of my own pieces are based upon ordered inharmonic spectra and a scale based upon powers of the Golden Ratio. In Phonē and Voices there is a mix of structured inharmonic spectra and harmonic spectra based upon the singing voice. Voices is a computer and live performance piece for solo soprano.

Stria (1977) has been thoroughly described in the Autumn 2007 issue of the Computer Music Journal (CMJ), in articles by Baudouin (2007), Dahan (2007), Meneghini (2007) and Zattra (2007).

I will explain here, therefore, the relation between the structured spectra and the scale and tuning only, as it applies to all three works.

As shown in Figure 5, the spectra of tones generated by c:m ratios of powers of ϕ can have common inharmonic partials, similar to common practice scales and the tones composed of partials in the harmonic series that have common harmonic partials. Stria is based upon such combinations of structured spectra and complementary scale degrees, following directly the idea of Pierce’s canon.
An important addition to the idea of structured spectra and
alternative scales and tuning is found in Sethares (2005)
book Tuning, Timbre, Spectrum, Scale, in which the
author, a mathematician, provides details of pitch
structures, tunings and spectra in, especially, non-western
music
Further work in the direction taken in the Eight-Tone
Canon, was initiated by Mathews and Pierce (1989) in
developing the Bohlen Pierce Scale. There is currently a
great interest in this scale with important articles in
progress and in Chapter 6 of Sethares’ (2005) book.
Figure 5 – The pseudo octaves are powers of the Golden Ratio, φ,
and are divided into 9 equal steps, slightly less than a semitone. The
spectra in Stria and Voices are also built on FM ratios of powers of φ.
Four of the lowest partials are also powers of φ. The different line
lengths correspond to side-band pairs, e.g. the lowest and 4th
partials are the 1st lower (236-382) and upper (236+382) side-bands.
The longest, at 236, is the carrier frequency.
eaw2015 a tecnologia ao serviço da criação musical 12

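As a rough numerical sketch of this relation between scale and spectrum (the function names and the choice of three side-band pairs are my own illustrative assumptions, not data taken from Stria itself):

```python
import math

# Golden Ratio, used in Stria/Voices for both the scale and the spectra
PHI = (1 + math.sqrt(5)) / 2

def phi_scale(base_hz, steps_per_pseudo_octave=9, n_steps=10):
    """Scale degrees dividing pseudo-octaves of ratio phi into equal steps."""
    step = PHI ** (1.0 / steps_per_pseudo_octave)  # slightly less than a semitone
    return [base_hz * step ** i for i in range(n_steps)]

def fm_partials(carrier_hz, modulator_hz, n_sidebands=3):
    """Partial frequencies of simple FM: the carrier plus |c ± k*m| side-band pairs."""
    freqs = {carrier_hz}
    for k in range(1, n_sidebands + 1):
        freqs.add(abs(carrier_hz - k * modulator_hz))  # lower side-band (reflected)
        freqs.add(carrier_hz + k * modulator_hz)       # upper side-band
    return sorted(freqs)

# Carrier 236 Hz, with the modulator one power of phi above it, as in Figure 5
partials = fm_partials(236.0, 236.0 * PHI)
```

With carrier 236 Hz and modulator 236·φ ≈ 382 Hz, the first lower and upper side-bands fall near 146 Hz and 618 Hz, the 236-382 and 236+382 pairs mentioned in the Figure 5 caption.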
The computer accompaniment of Voices (Chowning, 2008) is composed of the same ϕ-based structured spectra as in Stria. The live soprano's tones are, of course, composed of partials in the harmonic series. Because both kinds of spectra progress from widely spaced partials to more closely spaced partials from low to high in the pitch space (see Figure 5), the interaction of the two is not dissonant. The soprano easily finds her pitch frequency in the accompaniment and then tunes according to the context to produce the pitch blend. The surface allure of Stria and Voices is surprisingly different, although both are based on the same spectral and scale structures.

Phonē began with research on the synthesis of the singing voice by FM synthesis. Working at IRCAM with Johan Sundberg at hand, with whom I could consult, I made rapid progress in creating soprano tones that had a surprising degree of naturalness.

In order to demonstrate the source of the naturalness, I produced a sound example that began with a pure tone at a desired pitch frequency. After 5 seconds I ramped up the FM spectrum that modeled a given vowel. However, our "ears" did not perceive a sung vowel; it was more like an organ stop being pulled on an electronic organ. After 5 more seconds I ramped up a mixture of periodic vibrato mixed with a piece-wise linear random function. Instantly, the components fused, and not only did we hear a soprano but we recognized her vowel!

This example has become well known in the hearing and perceptual sciences (Chowning, 1980), because it shows that small imperfections in frequency light up the auditory system's recognition capacity.

After some days, I thought of Risset's Mutations and a composition began to take shape. Extending Risset's example of partial coherence because of simultaneous onsets and decays (common fate), I took the idea to the next level. The envelopes rise again in amplitude, while at the same time the vowel spectra for each voice are ramped up, represented by the change in grey density in Figure 6, as the periodic and random vibrato are faded in. This is the kernel idea in Phonē, one that can only be realized in the synthesis of sound spectra.

Figure 6 – At the beginning of this example the simultaneously occurring sinusoids cohere because they have the same amplitude envelopes that all begin together (common fate) and there is no synchronous micro-modulation. We therefore hear a bell tone. With the introduction of voice harmonics and vibrato for each group (common fate transferred), the micro-modulation enables us to separate out the individual singers.

Conclusion

While sound synthesis holds a small position in the enormous quantity of digital music produced and composed in today's onslaught of sound media, it presents, nonetheless, one of the most exciting paths to music that is yet to be composed, and for which the theoretical underpinnings are already taking form. The seeds were sown nearly sixty years ago, so…

Notes

[1] Risset's insight regarding the correlation of intensity and spectral bandwidth led to a major advance in my development of FM Synthesis.

Bibliography

Arfib, D. (1979). Digital Synthesis of Complex Spectra by Means of Multiplication of Nonlinear Distorted Sine Waves. Journal of the Audio Engineering Society, 27(10), 757-768.

Baudouin, O. (2007). A reconstruction of Stria. Computer Music Journal, 31(3), 75-81.

Boulanger, R. C. (2000). The Csound book: perspectives in software synthesis, sound design, signal processing, and programming. MIT Press.

Chowning, J. M. (1973). The synthesis of complex audio spectra by means of frequency modulation. Journal of the Audio Engineering Society, 21(7), 526-534.

Chowning, J. (1980). Synthesis of the singing voice by means of frequency modulation. Swedish Journal of Musicology. Reprinted in Chowning, J. M. (1989). Frequency modulation synthesis of the singing voice. In Current Directions in Computer Music Research (pp. 57-63). MIT Press.

Chowning, J. (1990). Music from Machines: Perceptual Fusion & Auditory Perspective. Für György Ligeti—Die Referate des Ligeti-Kongresses Hamburg 1988.

Chowning, J. (2008). Fifty Years of Computer Music: Ideas of the Past Speak to the Future. In Computer Music Modeling and Retrieval. Sense of Sounds (pp. 1-10). Springer Berlin Heidelberg.

Dahan, K. (2007). Surface Tensions: Dynamics of Stria. Computer Music Journal, 31(3), 65-74.

Gayou, É. (2007). Le GRM, Groupe de recherches musicales: cinquante ans d'histoire. Fayard.

Le Brun, M. (1979). Digital waveshaping synthesis. Journal of the Audio Engineering Society, 27(4), 250-266.

Mathews, M. V. (1963). The digital computer as a musical instrument. Science, 142(3592), 553-557.

Mathews, M. V., Miller, J. E., Moore, F. R., Pierce, J. R., & Risset, J. C. (1969). The technology of computer music (p. 178). Cambridge: MIT Press.

Mathews, M. V., & Pierce, J. R. (1980). Harmony and nonharmonic partials. The Journal of the Acoustical Society of America, 68(5), 1252-1257.

Mathews, M. V., & Pierce, J. R. (1989). The Bohlen-Pierce Scale. In Current directions in computer music research (pp. 165-173). MIT Press.

Meneghini, M. (2007). An analysis of the compositional techniques in John Chowning's Stria. Computer Music Journal, 31(3), 26-37.

Pierce, J. R., Mathews, M. V., & Risset, J. C. (1995). The Historical CD of Digital Sound Synthesis. Computer Music Currents 13, Schott Wergo.

Risset, J. C. (1965). Computer study of trumpet tones. The Journal of the Acoustical Society of America, 38(5), 912-912.

Risset, J. C. (1971). An introductory catalogue of computer synthesized sounds. Bell Telephone Laboratories.

Roads, C. (1988). Introduction to granular synthesis. Computer Music Journal, 11-13.

Rodet, X. (1980). Time-Domain Formant-Wave-Function Synthesis. In Spoken Language Generation and Understanding (pp. 429-441). Springer Netherlands.

Schottstaedt, B. (1977). The simulation of natural instrument tones using frequency modulation with a complex modulating wave. Computer Music Journal, 46-50.

Sethares, W. A. (2005). Tuning, Timbre, Spectrum, Scale. Springer Science & Business Media.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana: University of Illinois Press.

Truax, B. (1977). Organizational techniques for c:m ratios in frequency modulation. Computer Music Journal, 39-45.

Zattra, L. (2007). The assembling of Stria by John Chowning: A philological investigation. Computer Music Journal, 31(3), 38-64.
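The fusion effect described above for Phonē, where partials cohere into a single voice only once a common micro-modulation is applied to all of them, can be sketched numerically as follows (all parameter values, rates and names here are illustrative assumptions, not Chowning's published data):

```python
import math
import random

def vibrato_trajectory(f0, seconds=1.0, sr=1000, rate_hz=5.0,
                       depth=0.01, rand_depth=0.003, seed=1):
    """Pitch trajectory: periodic vibrato plus a piece-wise linear random function.
    Applying the SAME relative trajectory to every harmonic of a voice makes the
    partials fuse (common fate); omitting it leaves an organ-like, unfused tone."""
    rng = random.Random(seed)
    n = int(seconds * sr)
    seg = int(0.1 * sr)  # a new random breakpoint every 100 ms
    bp = [rng.uniform(-1, 1) for _ in range(n // seg + 2)]
    traj = []
    for i in range(n):
        j, frac = divmod(i, seg)
        rnd = bp[j] + (bp[j + 1] - bp[j]) * frac / seg  # piece-wise linear interpolation
        vib = math.sin(2 * math.pi * rate_hz * i / sr)  # periodic vibrato
        traj.append(f0 * (1.0 + depth * vib + rand_depth * rnd))
    return traj

# All harmonics share the same relative deviation, so they move together:
traj = vibrato_trajectory(440.0)
harmonic_3 = [3 * f for f in traj]
```

Because every harmonic is a fixed multiple of the one trajectory, any frequency wobble is perfectly correlated across partials, which is the "common fate" cue the text describes.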
1.2 Some recollections of the early days of computer music

Jean-Claude Risset, Laboratoire de Mécanique et d'Acoustique, CNRS and Aix-Marseille University, Marseille, France
Abstract

In this text, after mentioning some precursors, I shall enumerate my recollections of the early days of computer music, concentrating on the use of the computer for the elaboration of the musical sound. I shall also give details about the ways I used the computer in my own music. In my talks at the Electroacoustic Winds conference, entitled The early days of computer music – Bell Labs and beyond and Computer synthesis and processing, mixed works, real time in my work, I shall illustrate my presentation with figures and sound examples.

Keywords: Computer music, Sound synthesis, Sound processing, Auditory illusions

Introduction: music and computing precursors

Music has also been very inspiring in the field of computers. One may say that the idea of artificial intelligence was first proposed in the field of musical composition. Around 1840, Ada Lovelace, who worked with Babbage, the designer of the Analytical Engine, a mechanical precursor of the digital computer, wrote the following: "(The Engine's) operating mechanism might act upon other things besides numbers, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the signs of harmony and of musical composition were susceptible of such expressions and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent..." (text quoted by E.A. Bowles, Musicke's Handmaiden: of Technology in the Service of the Arts. In H.B. Lincoln, editor, The Computer and Music, Cornell University Press, p. 3-20 (1970)).

Computer-composed music

In fact the first application of the computer to music was in the domain of musical composition: computer programs can perform mathematical and logical operations and implement compositional rules. The programming of compositional constraints was the first application of the computer in music, with the experiments of Lejaren Hiller in 1956. Hiller and his colleagues used the computer to make random choices of musical symbols representing the pitches and durations of notes, and then to submit them to a filter rejecting the inappropriate ones, those violating certain compositional rules: in particular they programmed the rules of the XVIIIth-century counterpoint treatise «Gradus ad Parnassum».

Even though disappointing in terms of musical results, the experiments of Hiller have been insightful. Milton Babbitt remarked that the rules of counterpoint specify what you should not do rather than what you should do. The idea of delegating the compositional choices to a program was inspiring to several composers such as Pierre Barbaud, Iannis Xenakis, Gottfried-Michael Koenig, Fausto Razzi, Sever Tipei, Denis Lorrain ...

Analog sound synthesis

Around 1950, electronic music used waves that were not generated by acoustic vibrations but by electrical devices which had not been designed for making music. Modular electronic synthesizers appeared around 1964 with Moog, Buchla, Ketoff and others – after the first modular programs Music3 and Music4 of Mathews.

In the late 1950s, a sort of predecessor of computer sound synthesis was the huge RCA synthesizer designed by Harry Olson. However inflexible, with its punched paper rolls, this device was used by Olson himself and by Milton Babbitt, who left us at 95 four years ago. Babbitt realized several electronic works with the RCA synthesizer, including Philomel, which was premiered by Bethany Beardslee – the wife of the late computer music pioneer Godfrey Winham.

Computers and sound: the early days

Computers can produce sound by controlling sound-producing devices, for instance generating rhythms by triggering printers at times controlled by programming. Sound-producing devices tend to be quite specific; however, electroacoustic technology has produced a very general sound-producing tool, the loudspeaker. By controlling a loudspeaker with a computer, Max Mathews implemented at Bell Telephone Laboratories – Bell Labs – an unprecedentedly general sound generation process.

This requires not only the generality of the computer through the boundless possibilities of programming, but also the representation of continuous curves by discontinuous numbers. This is possible without information loss for curves which do not vary too quickly, provided that one uses enough numbers. If the Fourier frequency content of a curve is limited to a certain value f (in Hz), an accurate «sampling» (the representation of the curve by a string of numbers) does not produce information loss provided the «sampling rate» – the number of measurements of the ordinate of the curve – is at least twice the maximum frequency contained in the Fourier spectrum. Since the audible frequency spectrum is limited to about 20 000 Hz, it is possible to represent by samples any audible sound if one uses sampling rates of at least 40 000 samples per second. This principle is often referred to as Shannon's theorem, but it was already understood earlier, by Nyquist, by Whittaker in an interpolation theorem circa 1918, and even by Cauchy in the early XIXth century, according to certain mathematicians.
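The folding behaviour implied by this sampling condition can be checked with a few lines (a simplified model of an ideally sampled sinusoid; the helper name is my own):

```python
def alias_frequency(f_hz, sr_hz):
    """Apparent frequency of an ideally sampled sinusoid, folded into [0, sr/2]."""
    f = f_hz % sr_hz
    return min(f, sr_hz - f)

# A 1 kHz tone sampled at 44.1 kHz is represented faithfully:
print(alias_frequency(1000, 44100))   # 1000
# A 30 kHz tone violates the "at least twice" condition and folds down:
print(alias_frequency(30000, 44100))  # 14100
```

Any component above half the sampling rate is not lost silently: it reappears at a lower, wrong frequency, which is why the rate must be at least twice the highest frequency present.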
In the 1950s and 1960s, Bell Labs was an extraordinary place – the greatest research laboratory in the world, with a staff including 1,500 PhDs. Contrary to what people might believe, the biggest effort was spent on fundamental research rather than on applied research. For instance, the transistor (a word coined by John Pierce) was discovered as part of an exploration of the quantum effect in solid-state physics, not at all as a result of a project to make better triodes – just as the electric bulb was not discovered by trying to make better candles. Bell Labs brought considerable innovations – the diffraction of electrons; the discovery of the big bang residual noise; information theory; solar cells; and computer synthesis for music.

Shannon and his colleagues Bernard Oliver and John Pierce worked at Bell Labs after the Second World War: in 1948, they published an important paper, «The Philosophy of PCM» (PCM stands for «Pulse Code Modulation»), about the implications of representing continuous functions, such as sound waveforms, as a succession of pulses. As computers came of age, the idea developed at Bell Labs to simulate on the computer electronic speech-processing devices such as vocoders, instead of painstakingly constructing them only to then realize that their specifications were insufficient to ensure a good quality. A few years later, Max Mathews joined Bell Labs after completing his thesis on analog computers: he started working on the computer simulation of speech-processing devices.

In the mid 1950s, John Pierce and Max Mathews attended together a piano concert with works by Schoenberg, which they liked, and works by Schnabel, which they did not like; at the intermission, they both had the same thought: «perhaps we can do better ...». John Pierce, who was the head of research on communication at Bell Labs, was an incredible inventor and animator, but also a musical person: he suggested that Max Mathews take some time to explore the synthesis of musical sounds. He helped Max develop computer music, and he consistently protected him against those who thought music was not Bell Labs' business.

Thus, around 1957, Max Mathews developed a converter with Henk McDonald, and he set out to program the computer, thus starting a most exciting venture: harnessing the computer for making sound and music. He did not only give birth to computer music: throughout his life, he nurtured it with considerable competence and generosity.

The first computer sound synthesis and the first digital recording happened in 1957 at Bell Laboratories, when Max Mathews wrote the first program for computer music synthesis, called Music1, and his colleague Newman Guttman realized the first computer music piece, In the silver scale. This first attempt at computer synthesis was disappointing, especially considering the potential of the process, in principle unlimited.

Max admitted that the result was not extremely rewarding, and that he had to write a more powerful program. Thus came Music2, with four-voice polyphony. With this version, Guttman realized a short Pitch Piece, still somewhat simplistic, but which Varèse decided to present to the public in a carte blanche at the Village Gate in New York City in 1959, as a kind of manifesto. Varèse was friendly with Pierce, Guttman and Mathews, and he wished to come work at Bell Labs with the computer, hoping he would have more control over the sound generation than with the analog electroacoustic tools. He died in 1965 before having a chance to use the computer musically.

With different programs, one could synthesize sounds in many different ways: Max soon realized that he would have to spend his life writing different programs to implement his musical ideas and to respond to the desires of various composers.

So Max undertook to write a really flexible program, as universal as possible, Music3, followed by Music4 and Music5. The main key to the flexibility of the Musicn programs was:

Modularity

Max implemented a modular approach. Starting with Music3 (1959), the Musicn programs – written by Max and by others – would be compilers, that is, programs that could generate a multiplicity of different programs. The user can decide about the kind of sound synthesis he or she wants to implement: he then makes his own choice among a repertoire of available modules, each of which corresponds to an elementary function of sound production or transformation (oscillator, random number generator, adder, multiplier, filter). The user then assembles the chosen modules at will, as if he were patching a modular synthesizer.

Contrary to a common belief, Max's modular conception did not copy that of synthesizers: on the contrary, it inspired the analog devices built by Moog, Buchla or Ketoff using voltage control – but these appeared after 1964, while Music3 was written in 1959. In fact, this modular concept has influenced most of the synthesis programs – Max's Music4 and Music5, but also Music10, Music360, Music 11, CMusic, Csound – most analog or digital synthesizers – such as Arp, DX7, 4A, 4B, 4C, 4X, SYTER – compilers for physical modeling – such as CORDIS-ANIMA, Genesis, Mimesis, Modalys – real-time programs like MaxMSP and Pure Data, much used today – and, more widely, the flow-based and object-oriented programming used in languages for electronic circuit simulation or computing software such as MATLAB.

The Musicn programs are software toolboxes. The modules that the user selects and connects are virtual and correspond to portions of program code. Connections are stipulated by a declarative text that must follow conventions specific to the program. It is more insightful to represent the connections in terms of a diagram: the Musicn programs are block-diagram compilers. In the early 1960s, Max Mathews drew such diagrams for a Gravesaner Blätter article commissioned in 1965 by Hermann Scherchen, who had an active interest in electroacoustics and computer music. In certain later implementations, the connection between modules can be defined graphically, as in a MaxMSP patch.
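The modular, patch-cord style of the Musicn programs can be evoked with a toy sketch (the module set and names are my own, and this is ordinary Python, not Music V's declarative note-list syntax):

```python
import math

def oscillator(freq_hz, amp, seconds, sr=8000):
    """Sine oscillator module: produces one block of samples."""
    return [amp * math.sin(2 * math.pi * freq_hz * i / sr)
            for i in range(int(seconds * sr))]

def envelope(signal, attack=0.25):
    """Envelope module: applies a linear attack to the first `attack` fraction."""
    n_att = max(1, int(len(signal) * attack))
    return [s * min(1.0, i / n_att) for i, s in enumerate(signal)]

def adder(a, b):
    """Mixer module: sample-wise sum of two signals."""
    return [x + y for x, y in zip(a, b)]

# "Patch" an instrument by connecting modules, as with patch cords:
note = adder(envelope(oscillator(440.0, 0.5, 0.1)),
             envelope(oscillator(660.0, 0.3, 0.1)))
```

Each function plays the role of a unit generator; the nesting of calls plays the role of the patch, the declared connections between modules.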
By selecting among a collection of modules and connecting them in various ways, one can implement a large number of possibilities, as in construction sets such as Meccano or Lego.

The modular approach is at work in human languages, in which a small number of basic elements – the phonemes – are articulated into words and phrases, allowing an immense variety of utterances from a limited elementary repertoire. In fact, the idea of a system articulated from a small number of distinct, discontinuous elements – "discrete" in the mathematical sense – had been clearly expressed in the early XIXth century by Wilhelm von Humboldt, linguist and brother of the celebrated explorer Alexander von Humboldt.

Chemistry builds all possible types of material from a few dozen substances, namely the chemical elements, formed of a single type of atoms. Biology also gives rise to an incredible diversity of animals and plants: the common living "bricks" of life were only identified some fifty years ago.

Iannis Xenakis once objected that a more ambitious approach would be a top-down approach, starting with the global rather than with the elementary. Max Mathews answered that he admired the brave attitude of Xenakis, but that he did not find himself capable of acting efficiently this way.

Programming efficiency

In those days, programming efficiency was a must. Computers were huge and extremely slow by present standards (typically 1000 times slower than today); the amount of storage was very small. Computer centers were accessible in batch processing: one had to submit a job as punched cards, and the turn-around time was often one or two hours. Moreover, one had to pay for the hour of computer time – hundreds or thousands of dollars per hour. Hence Max was intent on programming in the most efficient way.

Instead of letting the computer calculate every sample of a periodic waveform, Max Mathews devised the stored waveform oscillator, producing quasi-periodic waveforms of any shape with prescribed amplitude and frequency by reading a table of values specifying one period of the wave. This principle was used universally thereafter.

Any connection of modules corresponds to a particular synthesis model: it is called an instrument by analogy. An instrument can play different notes corresponding to instantiations of that instrument. In the context of the program, a note statement stipulates its starting time, its length, and the values of the other parameters that can vary from note to note, for instance the successive pitches. In addition, synthesis requires certain specifications, for example the sampling rate and the specification of the functions used as wave shape, as envelope, or to control the evolution of another parameter such as the metronomic tempo.

Thus, to use a synthesis program like Music V, one must define the instruments and provide a list of notes that activate these instruments. The terms instrument and note are meaningful, but one should not take them too literally, since they could lead one to believe that the program is tailored to a "music of notes": this is not so. A note, in the sense of the program, can last a hundredth of a second or ten minutes. One single note can span a complex evolution comprising thousands of notes in the usual sense; it can also fuse with other notes to give rise to a single sound entity.

I have thus used Music4 and Music5 to build sounds by additive synthesis, for example imitations of trumpets or bells, in which each partial is defined by a separate note, and also sound textures in which the notion of note vanishes. This affords wide and precise possibilities to define and transform sound – to compose the sound itself.

Until the middle of the 70s, computer centers were working in "batch processing": one had to wait in line – sometimes several hours – to get the computed sound samples, and a special device was needed to convert numbers to sound. The first such devices were built by Max and his collaborators at Bell Labs: Max made them accessible to musicians interested in computer music. In the 1960s, Hubert Howe, Jim Randall, Godfrey Winham or Charles Dodge brought their digital tapes of samples from Princeton or Columbia to turn them into sounds on the converters "Hare Gear" or "Tapex," installed offline at Bell Labs. A historic picture shows Max and Joan Miller, in 1964, in front of what seems a big machine: it is only the Hare Gear, an off-line digital-to-analog converter. The computers, extremely expensive although modest in performance, occupied big, carefully air-conditioned rooms.

For a long time, it was difficult to equip computers with adequate digital-to-analog converters (as Tom Oberheim can remember). It was not until 1987 that one could find a commercial computer equipped with high quality sound output, namely the "Cube" from the NeXT company. Today, Apple makes considerable profits with music.

Now, some recollections from the early days.

For several years, computer music happened only in Bell Laboratories. From the beginning, Edgard Varèse followed the advent of computer music with interest. He became friendly with Guttman, Pierce and Mathews. As early as 1959, he gave the first public presentation of a computer music piece in a carte blanche at the Village Gate, close to his residence.

The composer David Lewin came to Bell Labs to realize two synthetic Studies «sonifying» musical structures of his own design.

Max managed to arrange for having a composer in residence. The first one was the late Jim Tenney, from 1962 to 1964. Jim bravely resorted to sound synthesis to realize compositions using random choices – at that time, John Cage was very influential in the New York avant-garde. Tenney synthesized Phases, Stochastic Quartet, Noise Study.

Tenney's Dialogue has two voices, one with definite pitch and one with a noisy character. Mid-values and ranges of pitch and duration of successive notes in each voice are specified by functions of time. The computer chooses the parameters at random, within the specified range centered about the specified mean. Thus, at the beginning, the noisy voice is fast and the pitched voice is slow. The score
consists of the functions prescribed: it just controls the broad outlines of the composition, and the computer fills in the details. These functions could be specified within the Music4 program with so-called PLF and PLS compositional subroutines, which the composer can add to make decisions about the notes to be synthesized.

Tenney wrote an excellent manual for the brave composers who would use the Music4 program. Max Mathews decided to donate the program to universities and to help them with the technical problems. It was not easy to port to other processors programs written in low-level languages, close to the internal structure of the machines. In Princeton and Queens College, Hubert Howe and Godfrey Winham made a Fortran version from Max's version, written in the specific Bell Labs assembly language BeFap.

Computer synthesis gradually developed in other places. In UCLA, the geologist Leon Knopoff implemented computer music with the late composer Gerald Strang, who had been Schoenberg's first assistant in the United States. John Gardner programmed an efficient version on a powerful IBM computer. Tom Oberheim designed digital-to-analog converters.

In 1963, Max Mathews wrote in the Science journal an article called The digital computer as a musical instrument, which included this intriguing statement: «There are no theoretical limitations to the performance of the computer as a source of musical sounds, in contrast with the performance of ordinary instruments.» This article played a decisive role for both John Chowning and myself. In Stanford, John Chowning started in 1964 to work toward the implementation of computer music synthesis. The same year, my research director Pierre Grivet wrote to John Pierce, and I succeeded Jim Tenney as composer in residence at Bell Labs.

I was myself extremely fortunate to be among the first to take advantage of Max Mathews' early music programs, Music4 and Music5, and to participate in early explorations of sound synthesis for music. This shaped all my activity in electroacoustic music and mixed music. When I arrived at Bell Labs in September 1964, Max Mathews proposed that I work either on compositional algorithms or on musical instrument simulation – a necessary step to learn more about the cues required by our hearing to appreciate the identity of sounds. I wanted to work on sound, and I recorded trumpet tones in Bell Labs' anechoic chamber to try to imitate them, which had not been possible using the data from traditional treatises about spectrum and attack and decay times.

In one of my first tests of computer synthesis, I used functions of time describing the evolution of frequencies in bird songs – blackbird, nightingale, wren – to modulate sine waves or noise bands. Such use of arbitrary functions is a strength of the synthesis programs. With Music4, the possibility to specify pitch with great precision was enticing. It is easy to generate a scale with 13 steps per octave: many listeners do not notice the strange octave division.

At that time, the psychologists Paul Boomsliter and Warren Creel asked Max Mathews to synthesize three

zarlinian and pythagorean. They then played them to listeners, who did not notice the difference in tuning but reported that the versions had different tone qualities. Most degrees of the scales are close, but the thirds and the sixths are lower with Zarlin and higher with Pythagoras. The latter tends to be preferred melodically, but Zarlin is often used in chords by choruses.

In the early 1960s, Max had worked with Jim Tenney, Joan Miller and John Pierce on the imitation of the violin. He made major progress later toward the electronic violin, but he early on mimicked a beginner scratching the violin string with his bow: the initial chaotic phase is evoked by an exaggerated random modulation of the wave.

I worked on trying to imitate brass tones with the computer. Some people objected that this is a way to enter the future backwards, as McLuhan said. But Varèse approved of this choice: he insisted on the need to inject life and identity into synthetic sounds. Copying the spectral evolution in detail works, but I also tried a simpler recipe. Louder trumpet sounds have brighter spectra. I devised a Music4 instrument to perform this feature by rule: the amplitude of the nth harmonic would increase faster than the amplitude of the 1st one, with a slope proportional to the rank of the harmonic. This process yielded a fair evocation of a brassy sound, which indicated that variability throughout the sound was a key to liveliness, and also that relationships between different sonic parameters might be a key to identity.

In the late 1960s, Robert Moog implemented this rule by designing a filter in which the input voltage would control the bandwidth of the filter: by ganging the voltage to the amplitude, one produces brassy tones, used in the trumpets of the record Switched-on Bach by Walter Carlos (now Wendy Carlos).

In the 1970s, John Chowning, using his invention of audio FM, realized this spectral evolution with extreme ease and elegance, and Dexter Morrill handcrafted beautiful artificial brasses for his work Studies for trumpet and computer.

I was to go back to France for my military service in the autumn of 1965, before I could complete a piece of music. Max and Varèse tried in vain to help me get a deferment, hoping we would work together. Varèse died after surgery two months later.

Around 1966, there was a catastrophic change of computer generation. For the sake of efficiency, many programs had been initially written in a low-level language close to the structure of the processor: these programs had to be rewritten from scratch, then debugged – a tedious and uninspiring work. It became clear that computers would dramatically evolve with technology, and the problem of portability became everyone's concern for the survival of software.

Max Mathews decided to write a new version of Musicn, called Music5. He wrote most of the code in Fortran. He carefully separated the loops in which the computer spent most of its time computing each sample, and this smaller part of the program could be converted for each computer into the more efficient machine language. Also the program was devised so as to adapt to the available memory space
with regard to the specific desire of each composer: there
versions of some familiar melodies with the same simple
could be a trade-off in term of the number of simultaneous
timbre but with different tunings; equally tempered,
voices versus the length of the functions, the latter determining precision.

A thorough documentation was provided. Max wrote a book which came out in 1969, The Technology of Computer Music, in which Music5 was described in great detail. The book also included an introduction to the fundamentals of digital sound processing and a sequence of tutorial examples for sound generation.

I came back to Bell Labs from the fall of 1967 until September 1969. I collaborated in the completion of Music5, I experimented extensively on synthesis, and I realized Little Boy and Mutations with Music5. I shall give a few sonic snapshots of that period.

In the fall of 1967, John Chowning visited Bell Labs to discuss with Max Mathews. This was my first encounter with John and my first acquaintance with his work. John demonstrated his impressive illusions of moving sound sources, and he also explained his research on the powerful possibilities for sound synthesis of audio frequency modulation – a process that could only be performed in the digital domain. John produced striking spectral scans through a global control – I could only have produced similar effects by controlling each frequency component individually. This opened the way to a considerable increase in efficiency, but also to a strong control of the timbre, as compared to the weak effect of the control of each component.

John generously gave his synthesis data, which I noted together with Pierre Ruiz, who was very interested in computer music (one year later, he helped me adapt Music5 on a dedicated computer). I had just to introduce into Music5 the capacity to accept negative increments in the code for the oscillator module, and I could replicate the spectral scans. This was a strong demonstration of the ease of exchanging synthesis data with Musicn scores (John had used Music10, his adaptation of Music4 for the Stanford PDP10 computer). In 1969, Max Mathews went to Stanford to participate in one of the first courses on sound synthesis: he asked me to provide him with data on my synthesis experiments on instrument simulation, sound textures and pitch illusions, and I wrote an «Introductory Sound Catalogue of Computer Synthesized Sounds» to share my synthesis recipes, accompanied with an explanation of the processes and a recording of the sonic result. I took advantage of John's synthesis data to introduce spectral scans in my 1969 synthesis work Mutations, between 7mn40s and 8mn08s. John later told me this was the first piece which used FM synthesis. A few years ago, I found in my archive the sheet where I had noted John's early FM data, with the date, which clearly established the precedence of John's exploration of audio FM over some abusive pretenses.

Wladimir Ussachevsky, the pioneer of tape music at Columbia with Otto Luening in the early 1950s, often visited Bell Labs. He realized his Computer Piece #1 in two parts. For the first part, he processed in his Columbia studio the harmonic arpeggios I generated; for the second part, he inaugurated a special program designed to edit digitized sound that was being developed at Bell by Steve Johnson and Sandra Pruzansky – the program was called Supersplicer.

I myself had to synthesize a number of specific episodes for the play Little Boy by Pierre Halet – a fantasmatic revival of the Hiroshima bombing raid. My music also comprised instrumental episodes. At some point, a kind of ghost flute would appear, and I tried to achieve some phrasing by properly changing some synthesis parameters along the phrase. These performance nuances are recorded in the Music5 score. I built a bell-like tone by trial and error: first, non-evenly spaced partials with a sharp attack and a resonant decay – but the decay sounded artificial. In a second test, the low frequencies had a longer decay than the high ones. In a third approximation, the sound was made more lively by introducing beats. The complete structure can be read in the Music5 score.

Additive synthesis permits composing bell or gong tones like chords. For the beginning of my piece Mutations, I had a melodic motive turn into a chord, and then a gong-like klang echoes this chord, with the same harmonic structure. Thus harmony is prolonged into timbre.

Partly to achieve dramatic effects for the play Little Boy, I worked on

Auditory illusions

The Musicn synthesis programs allow one to contrive very precisely sounds with complex physical structures. This has made it possible to produce acoustic illusions: so did Shepard, Chowning, Knowlton, Wessel and myself. Beyond their musical utility, illusions demonstrate the specificity of auditory perception and the need to take it into account: otherwise musical intentions might be warped by their incarnation into sound.

I pursued Roger Shepard's work on pitch circularity: after his staircase to heaven, I produced continuous glissandi, and a paradoxical sound gliding down in pitch, yet ending higher, similarly to Escher's cascade illusion. The Music5 scores for these examples are fairly simple, as one can check in my 1969 Sound Catalogue.

I also demonstrated that a physical octave does not always sound as such: I produced tones that seem to go down by about a semitone when one doubles all their frequencies, going up a physical octave. I made a similar example with rhythm.

I also synthesized a paradoxical sound that goes down the scale yet ends higher, and one that slows down yet ends faster. I took advantage of John Chowning's work on the illusions of moving sound sources to make it rotate in space.

Meanwhile, there were other important

Software developments

At Stanford University, John Chowning worked to implement Music10, with the collaboration of the late Dave Poole. Music10 was an adaptation of Music4 implemented on the DEC PDP10, an advanced computer used in time-sharing by the Artificial Intelligence Laboratory. The program was written in SAIL, a language similar to Algol.

Chowning developed spatial illusions and his powerful FM synthesis algorithms, which made it easy to perform continuous transformations in timbral space for his works Sabelithe and Turenas.
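The principle behind these FM algorithms can be sketched in a few lines. The following is a minimal illustration only, not Chowning's Music10 code: a carrier sine wave whose phase is modulated by another sine wave, with a time-varying modulation index that sweeps energy across the sidebands – the kind of global spectral scan described above. The envelope shapes and numeric values here are illustrative assumptions.

```python
import math

def fm_tone(fc, fm, index_env, amp_env, dur=1.0, sr=44100):
    """Simple FM in the spirit of Chowning: y(t) = A(t)*sin(2*pi*fc*t + I(t)*sin(2*pi*fm*t)).

    A rising index I(t) pushes energy into the sidebands at fc +/- k*fm,
    so one global control produces a continuous spectral scan.
    """
    n = int(dur * sr)
    out = []
    for i in range(n):
        t = i / sr
        u = i / n  # normalized time (0..1) for the envelope functions
        phase = 2 * math.pi * fc * t + index_env(u) * math.sin(2 * math.pi * fm * t)
        out.append(amp_env(u) * math.sin(phase))
    return out

# Brass-like setting: fc = fm gives a harmonic spectrum, and the index is
# "ganged" to the amplitude envelope, echoing the brassy-tone rule above.
env = lambda u: min(u / 0.1, 1.0) * (1.0 - 0.3 * u)  # fast attack, gentle decay
tone = fm_tone(220.0, 220.0, index_env=lambda u: 5.0 * env(u), amp_env=env)
```

With an irrational ratio between carrier and modulator frequencies, the same formula yields inharmonic, bell-like spectra of the kind Chowning exploited in his pieces.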
In Princeton, Howe and Winham developed Music4BF, written in Fortran, hence easily portable but slow. This was described in a book published in 1975.

Barry Vercoe performed an awesome work in writing Music360 for the extremely complex operating system of the IBM 360.

Shortly before 1970, intermediate-size computers appeared, such as the DEC PDP 11 series. This made it possible to set up sessions for individual users to access a dedicated computer, with unconventional input and output devices.

At Bell Labs, Peter Denes and Max Mathews set up a Honeywell DDP 224 this way, starting in 1967. Together with Pierre Ruiz, I adapted Music5 to this computer, and the dedicated session with audio feedback permitted me to synthesize my piece Mutations much faster than Little Boy, realized in batch processing.

Pierre Ruiz worked on the DDP 224 to implement the first synthesis by acoustic modeling: the resolution of the differential equations (difference equations in the computer) governing the motion of bowed strings gave an approximation to the sound of the instrument. The work by Ruiz, Lejaren Hiller and James Beauchamp inspired Cadoz, Luciani and Florens in their pioneering implementation of synthesis by physical modelling.

With Richard Moore, Max Mathews implemented Groove, a hybrid real-time system, on the DDP 224 in 1968. This was a great set-up, which produced experiments and pieces. Its thoughtful design inspired Jon Appleton and Sydney Alonso of Dartmouth for their Synclavier, the first transportable digital synthesizer. Unfortunately, Groove was relatively short-lived. This is the problem with hybrid systems which interface a specific device to the computer – an analog synthesizer or a digital processor: these devices evolve too quickly with technology, and they do not last long enough.

Back in France in 1969, I adapted Music5 in Orsay with the late Gérard Charbonneau, and I took a University music position in Marseille-Luminy, where I had a hard time getting a stand-alone computer for music. I had to make a joint application with Alain Colmerauer, pioneer of logic programming with Prolog. We got a slow Télémécanique T1600 with a disk holding 5 MB – hence the sectional structure of my piece Dialogues for 4 instruments and synthetic sounds, completed in 1975 just before I took leave for IRCAM. That year, Jim Lawson had helped me adapt Music5 on this computer.

In the 1970s, Barry Vercoe wrote Music11, an effective program for the PDP11. Then came Csound: the advent of the Csound software, written by Barry Vercoe around 1986, was a major step. C, developed at Bell Laboratories, is a high-level language, hence easily readable, but it can also manage specifics of the computer processor. In particular, the UNIX operating system – the ancestor of Linux – is written in C. Compositional subroutines can be written in C. Csound is an heir of Music4 rather than Music5: the unit generators transmit their output sample by sample, which makes it easier to combine them.

With these developments of Musicn and Csound, synthesis could envision building practically any kind of sonic structure, providing if needed additional modules to implement processes not foreseen earlier. The Musicn programs draw on the wide potential of programming, and they put at the disposal of the user a large variety of tools for virtual sound creation.

The issues migrate from hardware to software, from technology to know-how. The difficulty is no longer the construction of the tools, but their effective use, which must take into account the characteristics of perception and the imperatives of the musical purpose. One has to specify the physical structure of the desired sound, and one must predict how it will sound. So there is a

Need for developing know-how in synthesis, psychoacoustics and sensory esthetics

The musical purpose is the prerogative of the user, but one must in any case take perception into account. The specification of a sound is made by describing all of its physical parameters and not by stipulating the desired effect: so one must be able to predict the auditory effect of a physical structure. The experience of computer sound synthesis shows that the relation between cause and effect, between physical structure and sensory effect, is much more complex than we had believed.

The physical structure of a synthetic sound is known by construction – the "score" of Music V, more than a notation, constitutes a complete structural representation of sound. As my example of paradoxical pitch shows, prescribed relations between certain physical parameters do not translate into similar – isomorphic – relations between the corresponding perceptual attributes. Listening to a synthesized sound allows one to experience the psychoacoustic relation between the physical structure and the auditory effect. It is the auditory effect that counts for music. John Chowning has proposed the expression sensory esthetics for a new field of musical inquiry relating to the quest for perceived musicality – including naturalness.

Since the beginning of synthesis, the exploration of musical sound has produced real scientific advances concerning sound and hearing, leading to a better understanding of musical sound and its psychoacoustics. The knowledge garnered is transmittable, as I mentioned above relating to the 1967 visit of John Chowning to Bell Labs.

As Marc Battier wrote in 1992, the use of the software toolboxes Musicn and its derivatives supported the development of

An economy of exchanges

regarding sonic know-how. The growth of know-how provides cues that help to build one's own virtual tools for sound creation with the help of these software toolboxes.

In 1969, John Chowning organized one of the first computer music courses in Stanford, and he invited Max Mathews to teach the use of Music5. Max asked me if I could pass him some of my synthesis research that he could present. I had been impressed by the ease of replicating John's early FM experiments: I hastily assembled some synthesis results which I thought could
be of interest, and I gave Max a document which I called An Introductory Catalogue of Computer Synthesized Sounds. For each sound example, the catalog would provide the Music5 score to produce the sound – in effect an operational recipe – the recording of the sound (initially on an enclosed vinyl disc), and an explanation of the purpose and the details. This was widely diffused. The document was reprinted without changes in Wergo's The Historical CD of Digital Sound Synthesis (1995).

In the 1980s, John Philip Gather from Amsterdam partly transcribed my 1969 Sound Catalogue into Csound. I have not yet checked it in detail – shame on me – but I know there are problems with some of these transcriptions.

In 1971 and 1973, John Chowning published his seminal articles The Simulation of Moving Sound Sources and The Synthesis of Complex Audio Spectra by Means of Frequency Modulation, giving the information permitting one to completely replicate, for instance, the simulation of musical instruments he had accomplished. I myself used these instrument recipes in my piece Dialogues.

In 1985, Charles Dodge and Thomas Jerse published a treatise on computer music describing several synthesis techniques we had used.

In 1989, Mathews and Pierce published Current Directions in Computer Music Research, a compilation by different authors of synthesis and processing experiences. I also mention Eduardo Miranda's Computer Sound Design.

Users of Csound can benefit from extended know-how: the manual The Csound Book compiled by Richard Boulanger documents a number of interesting synthesized or processed musical examples by various contributors. I was impressed with the skill of Richard's students when I visited his synthesis class. Boulanger with Lazzarini published a useful audio programming book.

Musicn and Csound thus favor:

Musical work survival and reconstitution

From the computer score and with some help from the composer, works by John Chowning have recently been reconstituted: Stria by Kevin Dahan – a version that will be heard in concert here – and also by Olivier Baudouin; and Turenas by Laurent Pottier.

In the late 1970s, for one of the first composer courses at IRCAM, Denis Lorrain wrote an analysis of my work Inharmonique for soprano and computer-synthesized sounds, realized in 1977 when I was at IRCAM. This IRCAM report (26/80) included the Music5 recipes of significant excerpts, in particular bell-like sounds that were later turned into fluid textures by simply changing the shape of an amplitude envelope.

About 20 years later, with the help of António de Sousa Dias and Daniel Arfib, I could generate such sounds in real time with MaxMSP and use a host of them in my work Resonant Sound Spaces. MaxMSP is a great modular program oriented toward real time; however, it is not easy to generate a prescribed score with it. António de Sousa Dias also worked to convert several of my syntheses to Csound. A member of his research team, José Luís Ferreira, is working in Porto on a reconstitution of Inharmonique.

Back to the late 1970s. Stanley Haynes, a gifted composer and researcher, produced at IRCAM a work for piano and computer synthesis, Pyramid-Prisms. Haynes developed a piano-like example of my Sound Catalog to produce a «throng of pianos» for his piece. Haynes documented his work as well as the compositions he helped other composers realize at IRCAM. His IRCAM Report 25/80 was entitled The Computer as a Sound Processor.

I want to evoke here the memory of Jonathan Harvey, who left us in December 2012. Harvey studied in England and in Princeton with Milton Babbitt. In 1970 he realized Timepoints, a computer-synthesized work for tape that was introduced to me by Barry Vercoe, who composed Synthesism that same year. I presented Timepoints on the occasion of the first concerts of IRCAM. I persuaded IRCAM to invite Jonathan Harvey to realize a computer piece, with the assistance of Stanley Haynes. The work composed by Harvey, Mortuos Plango, Vivos Voco, used Music5 as a precise sound processor, which treated exclusively «concrete» prerecorded material, namely a bell of Winchester cathedral and the voice of Harvey's son. The structure of the piece is derived from the partials of the bell.

I shall now evoke Musicn in the early days of

IRCAM

where Pierre Boulez invited me to be one of the 4 composer heads of department (later 5, with Michel Decoust heading the department of pedagogy). I took a 4-year leave from my position in Marseille to head the computer department from 1975 to 1979. The computer had a central role, unprecedented in a musical institution. Initially I worked with Jim Lawson, Brian Harvey and John Gardner. Brian helped adapt the Stanford version of Music10, and John Gardner developed the sound inputs of Music5 to make it a genuine sound mixer and sound processor.

I used Music5 this way for an episode of my work Inharmonique: the voice of Irène Jarsky sings a motive recorded in a small studio; then the acoustic space seems to expand – simply by the addition of delayed echoes.

For my pieces Mirages and Songes, I have used the mixing facility of Music5 to realize a precise score by assembling instrumental motives recorded separately into what could be called an illusory ensemble – the performers never played together. In Inharmonique, I used the same curve as an envelope to modulate amplitude and frequency, with a frequency quantization.

I also applied an unusual process to make up a complex frequency envelope: Fourier synthesis, usually used to make up sound waves with harmonics, serves here to draw a supple curve. I used that at the end of Songes, with Chowning spatialization techniques and pseudo-Doppler effects caused by the frequency glides.

In 1977, Jean-Louis Richer came from Montreal to complete a French-speaking version of Music5. Philippe Prévôt also joined IRCAM.
In 1979, John Chowning worked at IRCAM: he accomplished beautiful simulations of the singing voice, deceiving even Pierre Boulez, and amazing metamorphoses between timbres. Through analysis by synthesis, he revealed the cues that permit distinguishing two tones in unison, namely the ear's sensitivity to the vibratory coherence between the partials of each instrument. These breakthroughs were exploited in his work Phonē.

In the late 1970s, the preoccupation with real time became dominant at IRCAM: the public relations claimed that the real-time digital synthesizer being developed would make non-real time obsolete. Initially Luciano Berio wanted hardware engineer Giuseppe Di Giugno to construct a synthesizer with thousands of fixed-frequency oscillators, but Max Mathews showed him by Music5 simulation that glissandi would have audible steps. There was then a project of oscillators with envelopes: Max showed that the rate of 4 kHz for the envelopes (the control rate of Music11 and Csound) would be insufficient. The so-called 4X processor was gradually developed.

IRCAM claimed that its digital synthesizers would make non-real time obsolete: but today most works realized with these synthesizers can only be heard as recordings. Real-time demands make it hard to ensure the sustainability of the music. Boulez's work Répons, emblematic of IRCAM, was kept alive after the 4X was no longer operational, but only thanks to man-years of work by dedicated specialists: nothing is left of Balz Trumpy's real-time version of Wellenspiele (1978, Donaueschingen), whereas Harvey's Mortuos Plango remains as a prominent work of that period. The obsolescence of the technology should not make musical works ephemeral.

In the 1980s, the Yamaha DX7 synthesizer, based on the know-how developed by Chowning, was very promising – it could be programmed to some extent, and it had been well documented. CCRMA organized courses in Stanford to teach composers to program their own sounds on the DX7, but Yamaha soon stopped production to replace the DX7 by other models. In contradistinction, Csound, a descendant of Music4 written by Barry Vercoe, maintained and documented by Richard Boulanger and a group of people, remains in active use today – it can emulate the DX7 as well as other synthesizers or Fender Rhodes electric pianos.

In 1979, I did not renew my 4-year leave at IRCAM. I found that at IRCAM the research resources were too much subordinated to the demands of immediate music production, whereas innovative research demands a detour. I came back to

Marseille

in the Laboratoire de Mécanique et d'Acoustique of CNRS. Daniel Arfib wrote several versions of the Music5 program for small computers, and he developed the synthesis method of non-linear distortion – called waveshaping by Marc Le Brun, who developed it independently in Stanford. In Arfib's Le souffle du doux, harmonics unfold as the index of distortion increases.

In Marseille we worked on various topics. Arfib and Richard Kronland-Martinet developed the use of Gabor grains and wavelets, in connection with Alex Grossmann. Arfib developed software for intimate sonic processing through analysis-synthesis, Sound Mutations, which inspired IRCAM's AudioSculpt. I took advantage of this research in works such as Invisible (1994-1996).

In the late 1960s, Max Mathews and I had talked about the interest of marrying Music5 and its wide timbral possibilities with Groove, which made it easy to introduce performance nuance. This has been achieved in Csound, since performance information can be introduced using MIDI.

Also, I was very intent, in my early years of composing, on trying to marry electronic music and musique concrète. In my view, the sounds available to musique concrète were rich and varied, but hard to gather in a compositional project, whereas the sounds of musique électronique were easier to control but dull, lacking life and identity.

Intimate sonic transformations

Resorting to sound processing as sound material opens up the wide range of natural or instrumental sounds, endowed with liveliness and clear identity. However, such sound material is not as flexible as synthetic sound material. To perform intimate transformations on this sound material, one may have to resort to elaborate analysis-synthesis procedures. Musical research on such protocols led to my piece Sud, an attempt to marry through hybridization musique concrète and electronic music (remembering Cézanne, who wanted to unite curves of women with shoulders of hills), and also to Elementa, Invisible Irène, Resonant Sound Spaces and The Other Isherwood.

Close encounters

It is interesting to stage close encounters between two worlds of sound: the instrumental world, with strong identities and constraints, produced by a performer and a device which are visible on stage, and the invisible synthetic world, which can get close to the instrumental world but also diverge from it. This is exemplified by works of "musique mixte" such as my Dialogues, Passages, Voilements and Nature contre nature for live instruments and computer-synthesized sounds.

MIDI – as well as the compact disc – appeared around 1982. Frédéric Boyer realized in Marseille a transcoding between MIDI and Music5. A musical phrase could be played on a MIDI keyboard, with supple tempo, and resynthesized in Music5 with a clarinet-like sound: it blends very well with the live saxophone in my work Voilements. The MIDI note numbers can be made to correspond to non-tempered scales, as in another section of Voilements: the arpeggios have also been recorded on a MIDI keyboard and turned into non-harmonic synthetic tones.

In my own work, I have tried to merge synthesis, gestures from performers and fluxes from acoustic sound, especially in Sud and in more recent works like Oscura or Kaleidophone.

I wish to mention a genre which I value highly,
Mixed music There has been a number of developments as a


consequence of real-time operation in the early 1980s.
which allies performers on stage with digital sound coming
Barry Vercoe has introduced the synthetic performer
from the loudspeakers. This genre stages encouters of the
concept for a live instrument accompanied by a computer,
third kind: physical contact between two sound worlds,
which has led to score following: the computer
that of performers, with visible acoustic instruments having
accompaniment must synchronize to the soloist by
a strong identity, and the virtual world of synthesis, devoid
«listening» to it and comparing the result with the musical
of material counterpart and permitting continuous path in
score. The interest in this process has led Miller Puckette
timbral space. The precision in time and pitch permitted by
to design and implement his Max (later Max-MSP) real-
synthesis helps the blending, especially since computer
time graphical software, a powerful modular resource very
sound can be dry and blend with the instruments within
useful to set up refined real-time scheduling interactions
the same reverberation, that of the room.
and to perform real-time sound synthesis and processing.
It is much easier to set up mixed pieces with a «tape» - a
Although most of my use of computer synthesis has
prerecorded acoustic part. Most pieces with live reactive
avoided real-time operation, I have implemented a novel
electronics have a problem with durability. People often
process: live interaction in the acoustic domain. During a
object that tapes contrives the performer in a temporal
residence at MIT Media Lab, where I had access to a
corset. But performers are contrived in any ensemble.
Yamaha Disklavier specially adapted to this end, I have
Many soloists perform with a tape with the suppleness of
written pieces which I called Duets for one pianist: the
chamber music.
pianist has a "partner" - but an invisible, virtual one: in
In his piece Traettoria, Marco Stroppa stages subtle addition to the pianist's part, a second part is played on
relations between the piano score, played by Pierre- the same acoustic piano by a computer which follows the
Laurent Aimard, and the computer synthesis. Marco pianist's performance. The computer program "listens" to
Stroppa shares my reservation about real-time mixed what the pianist plays, and instantly adds its own musical
pieces. part on the same piano: this part is not a mere recording, it
depends upon what the pianist plays and how he plays.
My work Attracteurs Etranges for clarinet and tape was
Here is probably the first piano "duet" for a single pianist.
dedicated to clarinettist Michel Portal. The electroacoustic
sounds were synthesized in Marseille’s LMA. At the The Duets for one pianist require a special piano – such
beginning of the 3d section, after the initial gesture, the as a Yamaha Disklavier – equipped with MIDI input and
figures of the clarinet are echoed by the harp-like output. On such a piano, each key can be played from the
response of a digital filter. Then the computer weaves keyboard, but it can also be activated by electrical signals:
harmonic material made of successive harmonics of these signals trigger motors which actually depress or
chords. Thanks to the Musicn programs, I often used this release the keys. Each key also sends out information as
process of generating harmonic clouds emerging from to when and how loud it is played. The information to and
harmonies, as in the mutation stops of an organ, but dispersed in time just as a prism disperses the colors of the rainbow. At the beginning of the third section, such harmonic clouds cast some shadow and maybe some mystery after an energetic beginning.

Real-time: live interaction in the acoustic domain

Real-time was not possible in the early days of computer music: the computers were too slow, and one had to wait for the end of the computation to hear the sound, so that one could not alter the sound while it was being produced. Max Mathews and Richard Moore succeeded in getting real-time operation, permitting the introduction of performance nuance, in their hybrid system Groove, whereby a dedicated computer controlled a set of voltage-controlled analog sound oscillators and amplifiers. This was an exemplary system, in which the user could select the amount of control he or she would have over the music, between that of an instrument – maximal control and maximal difficulty – and that of a mere playback system – minimal difficulty and minimal control. Several intermediate amounts of control could be chosen, for example the Music minus one mode. The conductor mode was especially interesting: the conductor plays nothing but can control everything, especially the beat and the balance, as well as all details that could be set up separately through preliminary sessions similar to orchestral rehearsals. Pierre Boulez was very interested in this process, since he wanted electroacoustic music to be performed live just as instrumental music.

… from the piano is in the MIDI format, used for synthesizers. A Macintosh computer receives this information and sends back the appropriate signals to trigger the piano playing: the programming determines in what way the computer part depends upon what the pianist plays.

I realized eight Sketches and three Etudes, in which I have tried to explore and demonstrate different kinds of live interaction between the pianist and the computer. A computer program "listens" to what the pianist plays, and instantly adds its own musical part on the same piano: this part is not a mere recording; it depends upon what the pianist plays and how he or she plays. The competent gestures of the performer influence the music and its performance in novel and unconventional ways. Hence we have a genuine duet: the pianist's partner, although unreal and computerized, is sensitive and responsive.

Conclusion

Sound synthesis may appear laborious, but the musical stakes are high: applying compositional processes down to the level of the sound microstructure, composing the sound itself. It gives us the keys to a sensitive constructivism of sound. The art of sound color shifts from tapestry – a choice of fixed colors – to painting – making one's own color. Monet painted impressions; Cézanne wrote: «when color is at its richest, form is at its plenitude». With digital synthesis and processing, the composer can make the listener attend to time in sound rather than to sounds in time.
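The pianist–computer duet described in this section can be sketched in code. The sketch below is hypothetical, not Risset's actual program: a pure function maps each incoming MIDI note-on event to the notes the computer adds on the same piano, so that the added part depends both on what is played and on how loudly it is played. The response rule itself (a soft "shadow" two octaves up, plus an upper fifth on loud notes) is an arbitrary illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NoteOn:
    """A simplified MIDI note-on event."""
    note: int      # MIDI note number, 0-127 (60 = middle C)
    velocity: int  # key velocity, 1-127

def computer_part(played: NoteOn) -> List[NoteOn]:
    """Decide what the computer adds on the same piano for one incoming
    note.  The point is that the added part is computed, not played back:
    it depends on what the pianist plays and on how he or she plays it."""
    responses: List[NoteOn] = []
    if played.velocity > 80:
        # loud playing triggers an extra upper fifth, half as loud
        responses.append(NoteOn(min(127, played.note + 19), played.velocity // 2))
    # every note gets a soft "shadow" two octaves higher
    responses.append(NoteOn(min(127, played.note + 24), max(1, played.velocity // 3)))
    return responses

# a loud middle C gets two added notes; a quiet one only the shadow
print(computer_part(NoteOn(60, 100)))
print(computer_part(NoteOn(60, 60)))
```

In a live setup the same function would sit in a loop, reading note-on events from a MIDI input port and writing its responses back to the player piano, possibly after a chosen delay.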
eaw2015 a tecnologia ao serviço da criação musical 23
Varèse dreamed of the liberation of sound. I think the XXth century has fairly well liberated sound, to the extent of giving us the vertigo of the white page. Promising but unpredictable developments are expected for the XXIst century, with a host of young researchers and creators eager to further explore the world of sound and to foster musical expression.

Bibliography

Bacon, F. (circa 1620). New Atlantis. Revised edition, 1980 (Jerry Weinberger, editor). Arlington Heights, Illinois: Harlan Davidson.
Boulanger, R. (2000). The Csound book. Cambridge, Massachusetts: MIT Press.
Cadoz, C., Luciani, A., & Florens, J. L. (1984). Responsive input devices and sound synthesis by simulation of instrumental mechanisms: the Cordis system. Computer Music Journal, 8(3), p. 60-73.
Chadabe, J. (1997). Electric Sound – The Past and Promise of Electronic Music. New Jersey: Prentice Hall.
Chowning, J. (1971). The simulation of moving sound sources. Journal of the Audio Engineering Society, 19, p. 2-6.
Chowning, J. (1973). The synthesis of complex audio spectra by means of frequency modulation. Journal of the Audio Engineering Society, 21, p. 526-534.
Chowning, J. (1980). Computer synthesis of the singing voice. In Sound generation in winds, strings, computers. Stockholm, Sweden: Royal Swedish Academy of Music, p. 4-13.
Chowning, J., & Bristow, D. (1987). FM theory and applications: by musicians for musicians. Tokyo: Yamaha Foundation.
Dodge, C., & Jerse, T. A. (2000). Computer Music: Synthesis, Composition and Performance (2nd edition, paperback). Cambridge, Massachusetts: MIT Press.
Hiller, L., & Isaacson, L. (1959). Experimental Music. New York: McGraw Hill.
Hiller, L., & Ruiz, P. (1971). Synthesizing musical sounds by solving the wave equation for vibrating objects. Journal of the Audio Engineering Society, 19, p. 463-470.
Jean-Claude Risset (2013). Monograph in English: Polychrome Portraits, n° 19. Paris: INA, with the support of CCRMA, Stanford University.
John Chowning (2013). Monograph in English: Polychrome Portraits, n° 18. Paris: INA, with the support of CCRMA, Stanford University.
Mathews, M. V. (1963). The digital computer as a musical instrument. Science, 142, p. 553-557.
Mathews, M. V. (1969). The technology of computer music. Cambridge, Massachusetts: MIT Press.
Mathews, M. V., & Kohut, J. (1973). Electronic simulation of violin resonances. Journal of the Acoustical Society of America, 53, p. 1620-1626.
Mathews, M. V., Miller, J. B., & David, B. B., Jr. (1961). Pitch synchronous analysis of voiced sounds. Journal of the Acoustical Society of America, 33, p. 179-186.
Mathews, M. V., Miller, J. B., Pierce, J. R., & Tenney, J. (1965). Computer study of violin tones. Journal of the Acoustical Society of America, 38, p. 912 (abstract only).
Mathews, M. V., & Moore, F. R. (1970). Groove – a program to compose, store and edit functions of time. Communications of the ACM, 13, p. 715-721.
Mathews, M. V., Moore, F. R., & Risset, J. C. (1974). Computers and future music. Science, 183, p. 263-268.
Mathews, M. V., & Pierce, J. R. (eds.) (1989). Current Directions in Computer Music Research (with a compact disk of sound examples). Cambridge, Massachusetts: MIT Press.
Max Mathews (2013). Monograph in English: Polychrome Portraits, n° 18. Paris: INA, with the support of CCRMA, Stanford University.
Oliver, B. M., Pierce, J. R., & Shannon, C. E. (1948). The Philosophy of PCM. Proceedings of the IRE, 36, p. 1324-1331.
Pierce, J. R. (1983). The science of musical sound. San Francisco: Freeman/Scientific American (with sound examples on disk).
Risset, J. C. (1969). An introductory catalog of computer-synthesized sounds. Murray Hill, New Jersey: Bell Laboratories. Reissued as part of The historical CD of digital sound synthesis, Mainz, Germany: Wergo, 1995.
Risset, J. C. (1985). Computer Music Experiments 1964-... . Computer Music Journal, 9(1), p. 11-18 (with 5 mn of sound examples on disc). Reprinted in Roads, C. (ed.) (1989), The Music Machine. Cambridge, Massachusetts: MIT Press.
Risset, J. C. (1996). Composing sounds, bridging gaps – the musical role of the computer in my music. In Helga de la Motte-Haber & Rudolf Frisius (eds.), Musik und Technik. Mainz: Schott, p. 152-181.
Risset, J. C. (2014). Composer le son – repères d'une exploration du monde sonore numérique. Paris: Hermann (reprints of 30 articles, 23 in French and 7 in English, 442 p.).
Risset, J. C., Arfib, D., De Sousa Dias, A., Lorrain, D., & Pottier, L. (2002). De Inharmonique à Resonant Sound Spaces: temps réel et mise en espace. Actes des 9èmes Journées d'Informatique Musicale, Marseille, p. 83-88.
Torra-Mattenklott, C. (2000). Illusionisme musical. Dissonance, n° 64 (mai 2000), p. 4-11.
Veitl, A. (2010). Falling notes / La chute des notes (text in English and in French). Editions Delatour France.
Wessel, D. L., & Risset, J. C. (1979). Les illusions auditives. Universalia (Encyclopedia Universalis), p. 167-171.
1.3 Instruments Obedient to His Thought: Edgard Varèse's sound ideals and the actual capabilities of electronic resources he had access to

Pedro Bento, Conservatório de Música de Aveiro de Calouste Gulbenkian, Portugal
Abstract

Shortly after his arrival in the USA Varèse attended a demonstration of the telharmonium, the instrument his mentor Busoni had envisaged as an answer for microtonal intervals. At about the same time he stated: "what I'm looking for are new technical means which can lend themselves to every expression of thought."

On several occasions Varèse elaborated on what those means could be, and on what they should be able to produce. The present paper discusses some of the nuances of his discourse, parallels being drawn with the instruments he was acquainted with at different times: Bertrand's dynaphone, the fingerboard theremins specially commissioned for Ecuatorial and the ondes Martenot used on the 1961 version, his Ampex 401A, used to collect sound materials for Déserts, and the diffusion system of the 1958 Philips Pavilion.

The capabilities of these resources and their control interfaces are analysed from an organological perspective, and the way they relate to Varèse's sound ideals and demands is discussed, leading to the conclusion that he felt attracted towards very basic instruments with versatile interfaces.

He used a tape recorder to explore the idiosyncrasies of existing sounds, but he also composed for the Philips Pavilion, itself a musical instrument with a highly idiosyncratic spatialization system.

Keywords: Edgard Varèse; Theremin; Dynaphone; Ecuatorial; Poème Electronique

Introduction

After arriving in the USA Edgard Varèse expressed his need for new sound producing resources. From the early 1930s he tried to get a studio-laboratory, in which he succeeded only for brief periods in the 1950s.

Varèse was able to overcome the lack of such resources by the way he wrote for traditional instruments. But he was also aware of the potential of a number of instruments he came across. It is the object of this paper to discuss the sound producing and controlling capabilities of these instruments, and the way they relate to Varèse's own ideas on organized sound.

Varèse's Ideals

While in Berlin (1907–1915) Varèse was influenced by Ferruccio Busoni, whose Sketch for a New Esthetic of Music [1] he considered "a milestone" in his own musical development, stating:

… when I came upon "music is born free; and to win freedom is its destiny," it was like hearing the echo of my own thoughts. (E. Varèse, 1966, p. 73)

This yearning for freedom is ubiquitous in Varèse's thought, together with his quest for new resources. In March 1916 he declared:

New instruments must be able to lend [themselves to] varied combinations and must not simply remind us of things heard time and time again. Instruments, after all, must only be temporary means of expression. Musicians must take up this question in deep earnest with the help of machinery specialists. In my own work I have always felt the need for new mediums of expression. I refuse to limit myself to sounds that have already been heard. What I am looking for is new mechanical mediums which will lend themselves to every expression of thought and keep up with thought. (E. Varèse, 1916a)

In May 1917 he proposed:

Notre alphabet est pauvre et illogique. La musique ... a besoin de nouveaux moyens d'expression et la science seule peut lui infuser une sève adolescente.
Pourquoi, futuristes italiens, reproduisez-vous servilement la trépidation de notre vie quotidienne en ce qu'elle n'a que de superficiel et de gênant?
Je rêve les instruments obéissants à la pensée—et qui avec l'apport d'une floraison de timbres insoupçonnés se prêtent aux combinaisons qu'il me plaira de leur imposer et se plient à l'exigence de mon rythme intérieur. (E. Varèse, 1917)

These texts present a rich and dense line of thought, a core of ideas Varèse would elaborate upon over time. Schematically:

1. Musical alphabet:
a) must be enriched [1916];
b) is poor and illogical [1917];

2. No point in:
a) being limited by sounds already heard [1916];
b) reproducing "the superficial and annoying of daily life" [1917].

3. Instruments must be only "temporary means of expression" [1916], a practical solution while a more direct bridge between composer and listener is not found.

4. New mediums / instruments must:
a) allow every expression of thought [1916]; be obedient to the thought [1917];
b) lend themselves to combinations;
c) originate a wealth of "unsuspected timbres" [1917].

5. The answer lies in:
a) "mechanical mediums," collaboration between musician and "machinery specialists" [1916];
b) science [1917].

Earning the musician more freedom is implied in the identification of current limitations (1, 2a, 3), the need of extended sound producing capabilities, such as new, unknown timbres (4c), and the composer's degree of
control (4a). The media allowing "every expression of thought" of 1916, stressing an unlimited range of sound results, become "instruments obedient to the thought" in 1917, emphasizing control given to the composer, which implies an accurate and responsive interface, ideally dispensing with the performer altogether (3).

Concerning the technical implementation of the new resources, while in 1917 he referred broadly to "science" as providing the answer, in 1916 he specified "mechanical mediums" and "machinery specialists." Variants of this idea are also found in a letter from 26-03-1916, where he wrote to his former mother-in-law, Mrs Kaufmann:

I'm looking into the question of getting new electrical instruments made of my own invention. (L. Varèse, 1972, p. 122)

And in a 1922 newspaper interview:

What we want is an instrument that will give us continuous sound at any pitch. The composer and electrician will have to labour together to get it. (E. Varèse, 1922)

The word "electronics" had not yet come into general use, but the concept was beginning to emerge. Electronics is related to active components, the earliest ones being three-electrode valves. These devices had amplification capabilities, and their nonlinear transfer function allowed them to produce sustained electric oscillations. "Continuous sound" was thus within reach, although a suitable interface would be necessary to get it "at any pitch."

Electronics became more visible just after WW1: in May 1919, during the Victory Liberty Loan, "112 loud-speaking telephone receivers" were suspended over Victory Way, and in the following years similar public address systems were employed in other civic events in New York. (Bento, 2005, p. 22-24)

References to the electrician by Varèse may thus indicate awareness of the potential of whatever resources he came across.

Cahill's Telharmonium

In Berlin, Varèse learned from Busoni about an "electrical" instrument, which he saw "demonstrated in New York and was disappointed." (E. Varèse, 1966, p. 74) This instrument was the telharmonium, patented by Thaddeus Cahill as an "apparatus for generating and distributing music electrically." (Cahill, 1915) It produced electrical signals sent through telephone lines, but no means were available then for amplifying those signals, requiring strong electrical currents to be produced. The result was a cumbersome and expensive instrument.

The basis of the telharmonium was a set of rotating shafts moving at constant speeds, proportional to the frequencies of the notes in equal temperament. Those speeds were obtained from a master shaft through a set of belts or gears. Frequency ratios between notes were thus fixed at the building stage.

Each note had a number of alternators with a different number of poles, producing frequencies n times the rotation frequency of the corresponding shaft. Near-sinusoidal currents were available for the exact harmonics of each note and its octaves. Mixing them allowed complex, continuous periodic signals with a given strength for each harmonic to be generated.

Three telharmoniums were actually built:

1. a small working model (1898) (Weidenaar, 1995, p. 34);

2. a much larger instrument put to commercial use from 26-09-1906 to 16-02-1908, when financial difficulties led to its dismissal (ibid., p. 53, 82, 121, 222, 245);

3. a downsized instrument, built in Holyoke (NJ), first demonstrated on 09-04-1910 (ibid., p. 232) and moved to New York in 1911, where its premises were built on wet soil, creating a number of technical problems. (ibid., p. 245)

Cahill's company went bankrupt in 1914, but the third telharmonium probably lingered there to about 1918. (ibid., p. 246, 253, 255) The demonstration Varèse attended must have been of an instrument in deplorable condition, not least because of the dampness of the place.

Varèse's expectations were certainly influenced by Busoni's perception of the telharmonium. On 16-07-1906, the latter had written to Vianna da Motta that he had just heard of:

… a "perfect" musical instrument, which sounds are produced by electrical currents regulated according to the number of vibrations ... for each sound there is a device that provides the fundamental note, another the first overtone, a third the second—and so on—; then you may 'regulate', as you like, the number—the volume of the overtones and the relations among themselves, such combinations being apparently unlimited. (Beirão, Beirão & Archer, 2003, p. 42)

In his Entwurf ..., Busoni references a 1906 article by Ray Stannard Baker (Baker, 1906), who states Cahill's idea was:

… to construct a machine which would give the player absolute control of the tones produced … a perfect instrument, giving as he says "a sustained tone controlled by the touch" … suppose he could mold that tone under his hands as a potter molds clay … the player has unlimited volume at his instant command (ibid., p. 298)

The ideas of moulding a continuous sound and of "unlimited volume" are very close to the way Varèse would compose later. Moreover, the telharmonium had not only pedals for continuous volume control, but also a "dynamic manual," by which "the loudness of the notes can be increased or decreased by greater or lesser steps as required, and with absolute instantaneousness." (Cahill, 1915, p. 22f)

Busoni presents the telharmonium as a solution to produce sounds in his microtonal alternative to equal temperament, describing it as:

...einem umfangreichen Apparat ... welcher es ermöglicht, einen elektrischen Strom in eine genau berechnete, unabänderliche Anzahl Schwingungen zu verwandeln. Da die Tonhöhe von der Zahl der Schwingungen abhängt und der Apparat auf jede gewünschte Zahl zu 'stellen' ist, so ist durch diesen die unendliche Abstufung der Oktave einfach das Werk eines Hebels, der mit dem Zeiger eines Quadranten korrespondiert. (Busoni, 1916, p. 44-45)

Busoni must have misunderstood something: the telharmonium never had any dials controlling the frequencies of individual notes. Perhaps he imagined
such dials as the way to regulate the "number of vibrations."

Cahill was a champion of just tuning, and for the second telharmonium he developed a keyboard where one manual sounded equally tempered notes, while the one below produced slightly flat and the one above slightly sharp versions of some notes, allowing chords with thirds in just tuning to be used. A fourth manual gave very flat versions of some notes, tuned as true 7th harmonics for dominant seventh chords. (Weidenaar, 1995, p. 63)

This keyboard was also nonstandard because there was a black key between every two white keys, rather than the standard 2+3 pattern. It was developed with the help of Edwin Hall Pierce (Weidenaar, 1994, p. 60f), a performer who also had the task "to devise a practical system of fingering … and to solve the problem of correctly indicating … the manner in which music was to be rendered in just intonation." (Pierce, 1924, p. 328) Pierce's connection with the telharmonium lasted until early 1907. Later he wrote:

The younger players whom I taught ... at first followed out my instructions in regard to intonation, but as time went on they ... relapsed more and more into the modern tempered scale. (ibid., p. 330)

The third telharmonium had a standard double manual keyboard with a 2+3 pattern (Weidenaar, 1995, p. 235), and although Cahill still found a way to play in just tuning this must have been very limited. Also, while the second telharmonium had 18 alternators per note, there were only 11~12 (ibid., p. 99, 233) on the third, resulting in a compass of around five octaves [2].

The wide compass and the possibilities of experimenting with small differences in pitch were thus no longer available in the instrument seen by Varèse. Nothing of the dials imagined by Busoni, and no way to use but predefined frequencies. It could provide "continuous sound," but not "at any pitch."

Varèse's visit to the telharmonium, his requirement for "mechanical media" and his reference to "electrical instruments" of his own invention must have occurred at about the same time, although the exact sequence of events is not clear. Was he in anticipation of seeing the telharmonium and just dreaming of having custom-made instruments for himself? Had he seen the instrument and, although disappointed, was considering possible variations, more suitable to his purposes?

A Clearer Definition of Varèse's Sound Targets

In a 1930 interview Varèse gives some cues about the sound results he wanted to achieve:

Le système temperé actuel me parait périmé ... de nouveaux moyens nous offrent une spéculation illimitée sur les lois de l'acoustique et de la logique ...
Les instruments que les ingénieurs doivent mettre au point avec la collaboration des musiciens permettront l'emploi de tous les sons ... Ils pourront reproduire tous les sons existants et collaborer à la création de timbres nouveaux ... Adaptés à l'acoustique des salles actuelles, ils pourront être doués d'une énergie illimitée ... Prenant en masse les éléments sonores, il y a des possibilités de subdivision par rapport à cette masse: celle-ci se divisant en d'autres masses, en d'autres volumes, en d'autres plans, ceci de par des diffuseurs disposés en des lieux différents, donnant un sens de mouvement dans l'espace ...
Dans le grave ... nous sommes presque arrivés au maximum de ce que l'organisme humain peut enregistrer ... au sujet du pouvoir d'enregistrement de hautes fréquences par les oreilles moyennes ... je crois qu'on pourrait se baser sur une moyenne de 18 000 en toute sûreté, et ajouter aux limites des instruments d'aujourd'hui au moins 2 octaves ...
Une chose que je désirerais voir se réaliser est la création des Laboratoires acoustiques où compositeurs et physiciens collaboreraient ... On n'a pas, jusqu'ici, assez considéré le problème des sons résultants inférieurs: a) sons différentiels ... b) sons additionnels. (E. Varèse et al., 1930, p. 123, 125-126, 128)

The main ideas are:

1. Freedom from equal temperament;

2. The ability to create new timbres;

3. Unlimited dynamics (namely on the fff side);

4. Diffusion of sound to different points in space through loudspeakers;

5. Expansion of higher pitches up to 18,000 Hz;

6. Physicists and musicians collaborating in laboratories;

7. Exploring difference and sum heterodyne components.

Using loudspeakers as a means to spatialize sound may have been suggested by the aforementioned public events in New York. Varèse had plans to use such resources in his unfinished project The one all alone. In 1931, Alejo Carpentier, who was to write its text based on an original idea from Varèse, revealed that it was intended to use "several ondes Martenot" and exploit "every possibility offered by the use of electricity on stage, with superimposed planes." His description of the action includes the following passage:

The hour of dawn arrives, and the sun does not appear. Loudspeakers situated on different parts of the stage, and in the auditorium, announce that "the sun has not been seen anywhere on the planet [3]."

Varèse imagined the bass part of Ecuatorial for Fyodor Chaliapin, who had a very powerful voice. For the première (15-04-1934) another singer was engaged instead who, according to Slonimsky (1983, p. 211), "almost disintegrated when confronted with the sound of the theremin." A hand-held megaphone had to be used. (MacDonald, 2003, p. 265)

The reason here was getting the proper dynamic level, but on 06-12-1936 an article appeared in The New York Times under the title "Varese Envisions 'Space' Symphonies: Says Orchestra Music of the Future Will Be Re-Blended by Scattered Amplifiers." The idea of spatialization through loudspeakers would have its ultimate materialization in 1958 with Poème Electronique — where, as in Ecuatorial, very high frequencies are in evidence.

Varèse's aims are also clarified in a letter dated 06-02-1933 to the Guggenheim Foundation:

The acoustical work which I have undertaken and which I hope to continue in collaboration with René Bertrand consists of experiments which I have suggested on his invention, the Dynaphone ... the technical results I look for are as follows:
1 To obtain absolutely pure fundamentals.
2 By means of loading the fundamentals with certain series of harmonics to obtain timbres which will produce new sounds.
3 To speculate on the new sounds that the combination of two or more interfering Dynaphones would give if combined in a single instrument.
4 To increase the range of the instrument so as to obtain high frequencies which no other instrument can give, together with adequate intensity.
The practical result of our work will be a new instrument which will be adequate to the needs of the creative musician and musicologist. I have conceived a system by which the instrument may be used not only for the tempered and natural scales, but one which also allows for the accurate production of any number of frequencies and consequently is able to produce any interval or any subdivision required by the ancient or exotic modes [4].

His requirements are consistent with the 1930 interview, but now he speaks of "absolutely pure fundamentals" and of "loading" them with "certain series of harmonics." He also speaks of nonstandard tunings.

Bertrand's Dynaphone

Given the importance attributed by Varèse to the Dynaphone, it is most vexing that so little is known about it. One photo (L. Varèse, 1972, p. 146-147) shows two half-cylindrical boxes with a hand-controlled dial moving on a half-disc shaped quadrant, some 30 cm in diameter. Photo details of the dial and quadrant are presented in an article by Dermée, who states he is not able to explain its principle because Bertrand "l'a couvert par un brevet provisoire et il n'a cure qu'on lui enlève le bénéfice de son invention," but that it produces "oscillations … directement audibles" (Dermée, 1928), implying it was based on an audio oscillator. According to Givelet, the apparatus consists of a single vacuum-tube oscillating circuit comprising "une forte self-inductance à noyau de fer, et un condensateur tournant [i.e., variable]," adding that several intermediate endings of the inductance would be used to extend to different octaves. (Givelet, 1928, p. 276) Dermée specifies a five octave range for a single instrument, two instruments being able to cover over seven octaves, and states that the frequency may range from "une période à près de 12.000 par seconde" (Dermée, 1928)—Varèse's "high frequencies which no other instrument can give." There is also a reference to levers allowing one to vary "la puissance du son et aussi le timbre par l'adjonction d'harmoniques." (B., 1928, p. 43)

Bertrand's brevet (Bertrand, 1928) has the title "Commande d'appareils de musique à ondes électriques" and it does not specify any concrete electronic circuit. His oscillators must have been fairly standard. The invention itself is defined as a generic control device, able to act upon different components of an oscillator:

... un levier dont les déplacements angulaires règlent la capacité d'un condensateur ou la valeur de la self-induction d'une bobine ou l'intensité du courant du chauffage de lampe, ce levier étant solidaire d'un index se déplaçant devant un cadran en indiquant ainsi la note émise par l'appareil. (ibid., p. 1)

A small switch on the lever allowed the sound to be discontinued, in order to separate one note from the next; but the nature of the interface made glissandi particularly easy to obtain.

Bertrand was engaged in giving the player as much ease and accuracy of control as possible. He stated the lever should be long enough for its axis to be turned "sans effort appréciable," and suggested the use of a spring so it could be moved with a single finger. (ibid., p. 2) For a five octave range, a diameter of some 30 cm means 7~8 mm for each semitone: not a lot, but still enough to allow for some microtonal experimenting.

The instrument had the standard note locations marked on it, but a removable quadrant could be superposed, with the locations of the successive pitches to be played previously marked. (ibid., p. 2) Such a graphic interface would make it easy for the composer to deal with pitches as frequencies, independently of any preexisting system. It also reminds us of Varèse's remarks in 1936:

I am sure that the time will come when the composer, after he has graphically realized his score, will see this score automatically put on a machine that will faithfully transmit the musical content to the listener ... The new notation will probably be seismographic [5].

The dynaphone is thus a signal generator with a simple but effective interface, much in accordance with Varèse's needs in the early 1930s. It was invented in 1927, but Varèse first met Bertrand in May 1913. (MacDonald, 2003, p. 6) Could it be that they talked about the dials imagined by Busoni?

Ecuatorial, the Theremin Instruments and the Ondes Martenot

Dynaphones were unavailable to Varèse when he returned to New York in 1933. At this time, Lev Termen was producing electronic instruments based on the heterodyne principle, where an audio oscillation is obtained through interference between two radio-frequency oscillators, one of a fixed frequency, the other subject to variation according to the capacitance of the player's right hand as it approaches a vertical antenna. Pitch could thus be freely controlled by right hand movements. A second antenna, horizontal and loop-shaped, allowed the left hand to control dynamics.

Termen also produced fingerboard or cello-theremins, based on a similar principle but with a different interface: pitch was controlled by pressing a plastic tape at different places, while the weight of the right hand on a spring-loaded lever controlled the amplitude. The visual effect was not as dramatic as in the space-controlled theremin, but physical contact with the instrument provided better control. This might have pleased Varèse, who on 14-10-1933 wrote to Jolivet:

J'ai trouvé le laboratoire—et ce qu'il me faut pour mes nouveaux instruments… nous sommes avec Thérenin [sic]—qui a un magnifique Laboratoire—en plein travail pour mes nouveaux instruments. (Varèse & Jolivet, 2002, p. 63, 65)

Termen often made instruments by special order [6], and for Ecuatorial two fingerboard theremins were commissioned to Varèse's specifications, with a very high upper frequency limit, which on 20-02-1959 Varèse quoted as 12,544.2 Hz [7]—about the same as on the dynaphone. On 09-11-1958, however, he gave 6,000 Hz as the highest frequency [8].
Could Varèse be trying to get a dynaphone clone? And did the commissioned instruments really attain that specification? At this time loudspeakers were large and of the full range type, so 6,000 Hz might have been a practical limit. This is still above e''''' (5,274 Hz), the highest note in the 1961 score of Ecuatorial [48], where Varèse used a pair of ondes Martenot instead of the theremins.

Was this replacement a matter of necessity rather than choice? Varèse seems to have been quite happy with the instruments by Termen, whom he tried in vain to reach in a letter dated 05-05-1941:

I have just begun a work [where] I want to use several of your instruments—augmenting their range as in those I used for my Equatorial—especially in the high range. Would you be so kind as to let me know if it is possible to procure these and where ... (Varèse, 1961)

The ondes use the same heterodyne principle as the theremin, but with a luthier's rather than an engineer's approach. Attached to a string, a ring is placed on the player's index finger, and its movement left and right controls pitch, while a pressure sensitive key allows the left hand to control dynamics and mould amplitude envelopes. Further left hand commands change the waveform and switch between differently sounding loudspeakers.

Varèse was familiar with the ondes. He attended the New York première of Jolivet's Concerto (09-11-1949), which he thought "admirablement présenté." (Varèse & Jolivet, 2002, p. 178) The ondes part was played by Martenot's sister Ginette, who on 15-11-1949 visited Varèse. (ibid., p. 178, 180)

The 1961 score does not explore the timbre modifying possibilities of the left hand controls. It should be playable on either dynaphones or the original theremins, but no text by Varèse seems to exist expressing the same degree of yearning for the ondes as for those instruments.

The ondes parts have constant crescendos and diminuendos modulating long notes, and dynamic levels from pppp to ffff (plus fp and sffp). Their compass is B-e''''' (117-5,274 Hz), with pitches below g' (396 Hz) mostly doubling the voice. Higher frequencies dominate. On the organ part, if a 32' register is available [9], frequencies as

Tape Recording and Spatialization: The Ampex 401A, Déserts and the Poème Electronique

From 22-03-1952 Varèse owned a tape recorder with accessories (MacDonald, 2003, p. 329, 338), an Ampex 401A portable single-track model, with a frequency response of 30-15,000 Hz (±2 dB) at the highest of two speeds (7.5 and 15 ips) and an input allowing the synchronous motor to be fed from an external oscillator. (Ampex, 1953)

The Ampex was used to collect material for Déserts, but although a single track would allow easy tape reversing and collage work, very little further experimenting could have been done without further equipment, and no spatial effects were possible. The Club d'Essai allowed him to complete the interpolations for the 1954 première, where they were diffused in stereo, and in 1961 he revised them at the Columbia-Princeton Electronic Music Centre. (Ussachevsky & Bayly, 1982/83, p. 149-150)

In 1936 Varèse spoke of a new dimension in music:

... sound projection—that feeling that sound is leaving us with no hope of being reflected back, a feeling akin to that aroused by beams of light sent forth by a powerful searchlight—for the ear as for the eye ... [10]

While the term has a metaphorical or synaesthetic use here, in 1959 Varèse said of Poème Electronique [11]: "For the first time I heard my music literally projected into space [12]." Composed for the Philips Pavilion at the 1958 Brussels World Fair, it was diffused through Philips 9710M loudspeakers, which had a very wide frequency response with a substantial emphasis between 2,200 and 15,000 Hz, quite adequate for Varèse's taste for higher frequencies.

The Philips Pavilion was shaped like a three-peaked tent, the loudspeakers being placed on the inside of its curved surfaces in two ways:

1. sound routes, where a track is successively routed to new loudspeakers in a row, so the sound source appears to have a linear movement;

2. loudspeaker groups, where sounds can be sent to specific points.

Sounds could rotate around the public, move vertically, or
low as 16 Hz will be produced. An unusually wide even change from a horizontal to a vertical movement
spectrum is thus explored. along the way. Loudspeaker groups placement allowed
sounds to be bounced either horizontally or vertically.
Since the largest organ pipes usually alternate between (Bento, 2005, p. 136-138)
the left and right side of the console, spatial effects may
result. Also, the use of very high frequency sounds with Three single track and one stereo magnetic tapes were
changing dynamics may originate subjective illusions of combined at the final stage on a three track tape. A
movement, since our directional perception depends on second, synchronized tape contained control signals,
the selective behaviour of our pinna for these frequencies. routing each track to a specific loudspeaker group, or
through a sound route at a given rate.
What Varèse apparently required for the theremin / ondes
parts was thus a means of providing continuous sounds Varèse worked in an improvised studio on a large empty
up to very high frequencies, with the possibility of pavilion, and some loudspeaker rows were placed along
progressing by glissando, and the ability to constantly its walls, in order to allow some experimenting with
modulate their intensity with great precision and through a horizontal sound routes (ibid., p. 139) [13]. According to
wide dynamic range. Willem Tak, who assisted the composer:
Varèse concentrated primarily on the character of the tonal
pattern, and for the most part left us to decide the “intonation”
(the distribution of sound over the loudspeakers). (Tak,
1958/59, p.43)
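The "sound route" mechanism described above, in which a track is handed from one loudspeaker to the next along a row so that the source seems to travel, can be illustrated with a simple time-varying gain per speaker. The sketch below is a present-day Python illustration of the idea only; the function name and the linear crossfade scheme are our own assumptions, not a reconstruction of the Philips control-tape system.

```python
import numpy as np

def sound_route_gains(n_speakers, rate, t):
    """Gain of each loudspeaker in a row at time t, for a virtual source
    travelling along the row at `rate` speakers per second. Adjacent
    speakers are linearly crossfaded, so the sound appears to move in a
    line, as in the Pavilion's "sound routes"."""
    pos = (rate * t) % n_speakers      # current position along the row
    i = int(pos)                       # speaker the source is leaving
    frac = pos - i                     # progress toward the next speaker
    gains = np.zeros(n_speakers)
    gains[i] = 1.0 - frac              # fade out the speaker being left
    gains[(i + 1) % n_speakers] = frac # fade in the next one
    return gains

# a route over 5 loudspeakers, traversed at 2 speakers per second
print(sound_route_gains(5, 2.0, 0.0))   # all energy on speaker 0
print(sound_route_gains(5, 2.0, 0.25))  # halfway between speakers 0 and 1
print(sound_route_gains(5, 2.0, 0.5))   # all energy on speaker 1
```

Switching abruptly between fixed subsets of speakers instead of crossfading would correspond to the second placement strategy, the loudspeaker groups.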
eaw2015 a tecnologia ao serviço da criação musical 29

Perhaps projecting individual sound masses differently into space was more important for Varèse than their exact spatialization. Or he might have needed to experience the effects in situ to appreciate the possibilities offered by the highly idiosyncratic nature of the spatialization system itself, in which case his role here would only become visible at a final stage.

The inner wall surfaces of the Philips Pavilion were covered with asbestos, creating a very dry acoustic (Bento, 2005, p. 124-128). As Tak put it, "the space ... was to seem at one moment to be narrow and dry, and at another to seem like a cathedral" (Tak, 1958/59, p. 43)—another way of creating spatial suggestions in music which Varèse explored.

A Final Note

The instruments and resources which Varèse came across seem to have stimulated his thoughts on how to develop, in specific ways, the general aesthetic ideals of freedom of expression inherited from Busoni. By the mid-1930s, Varèse's ideas had evolved into a solid and consistent discourse, incorporating the idea of sound masses moving in space, either metaphorically or literally.

Varèse demanded absolute control over sound and no limits on the available materials. He wanted to deal with pitch, traditionally based on discrete values, as frequency, a continuous variable, and he particularly liked to explore the extreme treble. He required flexibility in modulating amplitude and was keen on exploring dynamic levels from the barely audible to the deafeningly loud.

His enthusiasm towards the dynaphone and the fingerboard theremin stems from the liberty of pitch and dynamics afforded by their simple interfaces.

Notes

[1] First published in 1907, enlarged German edition in 1916. Varèse acquired in the USA the 1911 English translation (Busoni, 1911).

[2] Assuming the 3rd and 5th harmonics were produced for every note.

[3] "Edgard Varèse Escribe Para El Teatro". Sócial, vol. 14, no. 4 (April 1931). Quoted from (MacDonald, 2003, p. 227-228).

[4] Letter to Henry Allen Moe, of the Guggenheim Foundation, dated 06-02-1933. Quoted from (Manning, 1993, p. 9).

[5] Conference in Santa Fe, 23-08-1936. Quoted from (Schwartz & Childs, 1998, p. 198).

[6] A fingerboard instrument producing particularly low frequencies, served by a huge loudspeaker, was commissioned by Leopold Stokowski to reinforce the basses of the Philadelphia Orchestra (Glinsky, 2000, p. 109-111, figure between p. 202 and 203).

[7] Conference at Sarah Lawrence College on 20-02-1959, quoted from (Schwartz & Childs, 1998, p. 206). There may be some mistake in this number: for a' = 440 Hz, g'''''' = 12543.85395 Hz, which would never round to 12544.2.

[8] Conference at The Village Gate Café (09-11-1958), quoted from (Mattis, 1992, p. 50).

[9] Both the 1934 and the 1961 presentations were at New York's Town Hall, where a 32' register based on difference tones was introduced into the organ in 1935; before that it had nothing larger than 16'.

[10] Conference in Mary Austin House, Santa Fe, 23-08-1936, quoted from (Schwartz & Childs, 1998, p. 197).

[11] Only the sound part of this multimedia project is discussed here. For its other dimensions see (Treib, 1996).

[12] Lecture at Sarah Lawrence College, 1959 (Schwartz & Childs, 1998, p. 207).

[13] Also, reverberation times would have been much higher than in the Pavilion.

Bibliography

Ampex (1953). Series 400 Operation and Maintenance Manual. Redwood City, CA: Ampex Electric Corporation.

B., P. (1928). "Les Dynaphones". Le Ménestrel, no. 18 (04-05-1928), p. 197.

Baker, R. S. (1906). "New Music for an Old World: Dr. Thaddeus Cahill's Dynamophone an Extraordinary Electrical Invention for Producing Scientifically Perfect Music". McClure's, vol. 27, no. 3 (July 1906), p. 291-301.

Beirão, C. W., Beirão, J. M. M., & Archer, E. (orgs.) (2003). Vianna da Motta e Ferrucio Busoni: Correspondência—1898-1921. Lisboa: Caminho da Música.

Bento, P. (2005). Recursos, Ideias, Concepção e Realização Material no Alvorecer da Música Electroacústica: O Poème Electronique de Edgard Varèse. Master Dissertation, Universidade de Aveiro, Aveiro, Portugal.

Bertrand, R. (1928). Commande d'appareils de musique à ondes électriques. French Brevet d'invention no. 664,305, submitted 15-02-1928, granted 22-04-1929.

Busoni, F. (1911). Sketch of a New Esthetic of Music. New York: G. Schirmer.

Busoni, F. (1916). Entwurf einer neuen Ästhetik der Tonkunst, 2nd ed. Leipzig: Insel-Verlag.

Cahill, T. (1915). Art of and Apparatus for Generating and Distributing Music Electrically. US Patent 1,213,804, granted 23-01-1917.

Dermée, P. (1928). "De l'Etherophone au Dynaphone". La France Radiophonique, year 1, no. 2, p. 12.

Givelet, A. (1928). "Les Instruments de Musique a Oscillations Éléctriques". Le Génie Civil, tome XCIII, no. 12, p. 272-276.

Glinsky, A. (2000). Theremin: Ether Music and Espionage. Urbana: University of Illinois Press.

MacDonald, M. (2003). Varèse: Astronomer in Sound. London: Kahn & Averill.

Manning, P. (1993). Electronic and Computer Music, 2nd ed. Oxford: Clarendon Press.

Mattis, O. (1992). Edgard Varèse and the Visual Arts. PhD Dissertation, Stanford University, s.l.

Pierce, E. H. (1924). "A Colossal Experiment in 'Just Intonation'". The Musical Quarterly, vol. 10, no. 3 (July 1924), p. 326-332.

Schwartz, E., & Childs, B. (eds.) (1998). Contemporary Composers on Contemporary Music. Expanded edition. S.l.: Da Capo Press.
Slonimsky, N. (1983). "Géométrie Sonore: Edgard Varèse". In Writings on Music, vol. 3. New York: Routledge, 2005.

Tak, W. (1958/59). "The Sound Effects". Philips Technical Review, vol. 20, no. 2/3, p. 43-44.

Treib, M. (1996). Space Calculated in Seconds: The Philips Pavilion, Le Corbusier, Edgard Varèse. Princeton, NJ: Princeton University Press.

Ussachevsky, V., & Bayly, R. (1982/83). "Ussachevsky on Varèse: An Interview April 24, 1979 at Goucher College". Perspectives of New Music, vol. 21, no. 1/2, p. 145-151.

Varèse, E. (1916a). [Interview to The Morning Telegraph, March 1916]. In (L. Varèse, 1972, p. 123).

Varèse, E. (1916b). [Letter to Mme Kaufmann, dated 26-03-1916]. In (L. Varèse, 1972, p. 122).

Varèse, E. (1917). "Verbe". 391, no. 5 (05-06-1917), [p. 2].

Varèse, E. (1922). [Interview to The Christian Science Monitor]. In (Manning, 1993, p. 7).

Varèse, E. (1961). Ecuatorial. New York: Colfranc Music Publishing.

Varèse, E. (1966). "Ferruccio Busoni - A Reminiscence". Varsity Graduate, vol. 13, no. 1 (December 1966), p. 73-74.

Varèse, E., et al. (1930). "La Mécanisation de la Musique: Conversation Sténographiée à Bifur". Bifur, no. 5 (31-07-1930), p. 121-129.

Varèse, E., & Jolivet, A. (2002). Correspondance: 1931-1965. Genève: Contrechamps.

Varèse, L. (1972). Varèse: A Looking-Glass Diary. Volume I: 1883-1928. New York: W. W. Norton & Company.

Weidenaar, R. (1995). Magic Music from the Telharmonium. Metuchen, NJ: The Scarecrow Press.
1.4 A Pure Data Spectro-Morphological Analysis Toolkit for Sound-Based Composition

Gilberto Bernardes Sound and Music Computing Group, INESC TEC, Portugal
Matthew E. P. Davies Sound and Music Computing Group, INESC TEC, Portugal
Carlos Guedes Sound and Music Computing Group, INESC TEC, Portugal; NYU Abu Dhabi, United Arab Emirates

Abstract

This paper presents a computational toolkit for the real-time and offline analysis of audio signals in Pure Data. Specifically, the toolkit encompasses tools to identify sound objects from an audio stream and to describe sound object attributes adapted to music analysis and composition. The novelty of our approach in comparison to existing audio description schemes relies on the adoption of a reduced number of descriptors, selected on the basis of perceptual criteria for sound description by Schaeffer, Smalley and Thoresen. Furthermore, our toolkit addresses the lack of accessible and universal labels in computational sound description tasks by unpacking terminology drawn from statistical analysis and signal processing techniques. As a result, we improve the usability of these tools for people with a traditional music education background and expand the possibilities for music composition and analysis.

Keywords: Sound morphology, Content-based audio processing, Audio descriptors, Concatenative sound synthesis.

1. Introduction

Sound description is an essential task in many disciplines, from phonetics and psychoacoustics to musicology and audio processing, which address it for a variety of purposes and through very distinct approaches (Gouyon et al., 2008). Among these, the computational approach to sound description is the most relevant to our work. Computational approaches to sound description have gained increased attention in recent years given the large expansion of multimedia content over personal and public databases and the consequent need for effective algorithms for browsing, mining, and retrieving these huge collections of multimedia data (Grachten et al., 2009). Music creation has rapidly followed these approaches and has been incorporating these tools into composition processes (Humphrey, Turnbull, & Collins, 2013).

The output quality of content-based audio processing systems commonly depends on the audio representations they adopt, because most processing relies on such data. The most common approach to representing audio in such systems is the adoption of audio descriptors, which measure properties of audio signal content. For example, the brightness of an audio sample can be extracted by the audio descriptor spectral centroid, which measures the "center of mass" of the spectral representation of an audio signal.

Most content-based audio processing systems that extensively use audio descriptors tend to prevent users from accessing them. For example, the use of audio descriptors in applications like Shazam [1] and Moodagent [2] takes place during the implementation phase of the algorithm and is hidden from the system's interface. However, creative applications like the Echo Nest Remix API [3], CataRT (Schwarz, 2006) and earGram (Bernardes et al., 2013) give users access to audio descriptors and even encourage them to experiment with their organization in order to retrieve and generate different audio sequences and outputs.

However, even if many computationally extracted audio descriptors—in particular those computed from audio data by simple means, commonly referred to as low-level audio descriptors—measure musical or perceptual properties of sound, they are not adapted to the terminology of musical practice and are meaningless without an understanding of the underlying statistical analysis and signal processing techniques. Therefore, one can conclude that one of the most evident and prominent barriers to operating audio descriptors in creative music applications is the lack of accessible and meaningful labels adapted to particular application contexts and user preferences. By unpacking the terminology, we believe that the usability of content-based audio systems can increase considerably and appeal to a larger audience, most importantly including musicians.

Inspired by the work of Ricard (2004), Peeters and Deruty (2008), and Schnell, Cifuentes, and Lambert (2010), our goal is to develop a computational toolkit for real-time and offline segmentation and description of sound objects [4] in Pure Data (Puckette, 1996) targeted toward users more familiar with music theory and practice than with music technology. Furthermore, contrary to the recent tendency in music information retrieval (MIR) to adopt large numbers of audio features in content-based audio processing systems, our toolkit purposefully encompasses a very limited number of descriptors, in line with recent studies by Mitrovic, Zeppelzauer, and Eidenberger (2007) and Peeters et al. (2011), which have shown that the information expressed by the totality of audio descriptors developed exhibits a high degree of redundancy. To this end, we rely on criteria of musical perception grounded in the sound-based theories of Schaeffer (1966), Smalley (1986, 1997), and Thoresen (2007) to select an appropriate set of computational audio descriptors for music analysis and composition.

The remainder of this paper is organized as follows. Section 2 introduces our research as well as the grounding principles of three musicological sound-based theories, which we summarize along with their criteria for sound description in Section 3. Section 4 presents the computational strategies implemented for identifying sound objects automatically. Section 5 details at length the proposed sound descriptors included in our toolkit. Section 6 briefly presents musical applications that adopt our toolkit. The paper concludes in Section 7.
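As a concrete illustration of the kind of measurement such a descriptor performs, the spectral centroid mentioned above can be computed in a few lines. This is a minimal Python/NumPy sketch of the standard definition, not the toolkit's actual Pure Data implementation:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Frequency-weighted mean ("center of mass") of the magnitude spectrum."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * mags) / np.sum(mags))

sr = 44100
t = np.arange(sr) / sr                               # one second of audio
dark = np.sin(2 * np.pi * 220 * t)                   # a single low partial
bright = dark + 0.5 * np.sin(2 * np.pi * 6600 * t)   # add a strong high partial

print(spectral_centroid(dark, sr))    # ~220 Hz
print(spectral_centroid(bright, sr))  # well above 220 Hz: a "brighter" sound
```

A higher centroid corresponds to a perceptually brighter sound, which is the intuition behind labelling such a measurement "brightness" for musicians.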
2. A Musicological Approach to Sound Description

Until the 1940s, music composition was confined to acoustic instrumental and vocal models and closely tied to the concept of the musical note. Within this paradigm, pitch and rhythm were understood as the primary elements of musical structure, with timbre (restricted almost exclusively to orchestration) and other attributes of sound thought of as secondary (Thoresen, 2007).

The appearance of new electronic instruments and sound manipulation strategies around that time broke the paradigm linking sound to the physical object producing it, and allowed composers to work with dimensions that were previously inaccessible or totally disregarded in music composition, particularly the use of all sonic phenomena as raw material. In electroacoustic music, the basic structural unit of the composition is no longer the musical note. Instead, the concept of the sound object comes to the fore, significantly extending the spectrum of possibilities (from note to noise) and the exploration of timbre as a compositional strategy.

As a result, much electroacoustic music was particularly resistant to traditional analysis and categorization. In addition, the new dimensions explored in electroacoustic music existed for some decades without any theoretical ground or formal definition that could articulate the relevant shift within musical composition. Clearly, a unique set of terms and concepts was needed to discuss, analyze, and interpret electroacoustic music (Smalley, 1986).

In the early years of electroacoustic music theory, the discourse was largely monopolized by engineering terminology, consequently lacking theoretical and aesthetic reflection. Schaeffer's Traité des Objets Musicaux (TOM) was the first substantial treatise on the subject, addressing the correlation between the world of acoustics and engineering and that of the listener and musical practice. While the technology used by Schaeffer is now old, his overall approach to the listening, description, characterization and organization of sound remains a reference.

For more than four decades, Schaeffer's TOM had little impact on (electronic) musical analysis (Thoresen, 2007; Landy, 2007). This neglect is commonly attributed to the difficulty of Schaeffer's writings and their unavailability in English until 2004. In this regard, note should be made of the work of Chion (1983), who systematized Schaeffer's work, as well as of the spectromorphology and aural sonology theories by Smalley (1986, 1997) and Thoresen (2007), which acknowledge and reformulate Schaeffer's morphological criteria within a simpler yet more concise and operable framework. In what follows, we briefly detail the guiding principles of Schaeffer's morphological criteria of sound perception and then delve into their definitions. Whenever pertinent, we interleave the definition of Schaeffer's morphological criteria with perspectives from Smalley and Thoresen, towards the ultimate goal of establishing a theoretical basis to support our toolkit.

3. Schaeffer's Solfeggio and After

In TOM, Schaeffer reframes the act of listening to sound by articulating a phenomenological theory that is primarily concerned with the abstracted characteristics of sounds, rather than their sources and causes (Chion, 1983); an attitude that he refers to as reduced listening. This listening attitude ultimately establishes the basis of a solfeggio for sound-based works, or in Schaeffer's words, a "descriptive inventory which precedes musical activity" (Schaeffer, 1966, as cited in Chion, 1983, p. 124).

Schaeffer's solfeggio is divided into five stages, of which the first two, commonly addressed together as typo-morphology, are the most relevant to our work. These two stages aim (i) to identify sound objects from an audio stream, (ii) to classify them into distinctive types, and (iii) to describe their morphology. Typology takes care of the first two operations and morphology of the third.

Despite the lack of a systematic musicological approach for identifying sound objects in the sound continuum (Smalley, 1986), it is important to understand the conceptual basis of these unitary elements, as their identification is a required processing stage prior to their morphological description. A sound object can be identified by the particular and intrinsic perceptual qualities that unify it as a sound event on its own and distinguish it from all other sound events (Chion, 1983).

The morphological criteria are defined as "observable characteristics in the sound object" (Chion, 1983, p. 158), and "distinctive features [...or] properties of the perceived sound object" (Schaeffer, 1966, p. 501), like the mass of a sound (e.g. sinusoidal or white noise), its granularity, and its dynamics.

Two main concepts, matter and form, organize Schaeffer's morphology. For Schaeffer, matter refers to the characterization of stationary spectral distributions of sound: sound matter is what we would hear if we could freeze the sound in time. Form exposes the temporal evolution of the matter.

Matter encompasses three criteria: mass, harmonic timbre, and grain. Mass is the "mode of occupation of the pitch-field by the sound" (Schaeffer, 1966, as cited in Chion, 1983, p. 159). By examining the spectral distribution of a sound object, it is possible to define its mass according to classes that range from noise to a pure sinusoidal sound.

Harmonic timbre is the most ambiguous criterion presented in Schaeffer's morphology. Its definition is very vague and closely related to the criterion of mass, complementing it with additional qualities of the mass (Schaeffer, 1966). Smalley (1986) avoids this criterion altogether, and Thoresen (2007) presents a sound descriptor, spectral brightness, that clearly belongs to the harmonic timbre criterion, within the mass criteria (sound spectrum, in Thoresen's terminology).

Grain defines the microstructure of the sound matter, such as the rubbing of a bow. Even though it describes an intrinsically temporal dimension of the sound, it falls under the criterion of matter because it examines a micro time scale of music, which the human ear does not distinguish as separate entities (Schaeffer, 1966).
Sound shape/form encompasses two criteria: dynamic and pace. The dynamic criterion exposes and characterizes the shape of the amplitude envelope. Schaeffer distinguished several types of dynamic profile (e.g. unvarying, impulsive, etc.), as well as several types of attack (e.g. smooth, steep, etc.).

Pace (allure in French) is another ambiguous concept in Schaeffer's TOM, defined as fluctuations in the sustain of the spectrum of sound objects—a kind of generalized vibrato. Smalley avoids this criterion. Thoresen (2007) adopts it and clarifies its definition by providing simpler, yet reliable, categories for describing both the nature (pitch, dynamic, and spectral) and the quality of possible undulations (e.g. their velocity and amplitude). Still, we find Thoresen's definition of pace unsystematic in light of a possible algorithmic implementation, since it does not offer a concise description of the limits of the criterion.

4. Identifying Sound Objects Computationally

We adopt two main methods to identify and segment an audio stream into sound objects: onset detection and beat tracking.

Onset detection algorithms aim to find the locations of notes or similar sound events in the audio continuum by inspecting the audio signal for sudden changes in energy, spectral energy distribution, pitch, etc. Current algorithms for onset detection adopt quite distinct audio features, or combinations of them, in order to convey improved results for specific types of sounds, such as percussive, pitched instrumental, or soundscapes (Bello et al., 2004). Figure 1 illustrates two detection functions computed from a monophonic pitched audio input, which yield different results when inspected for onsets. While the drops in the amplitude function group events into larger segments, the different "steps" in the continuous pitch-tracking function provide a better means of segmenting the given example into note events.

In order to address a variety of sounds, we adopted three distinct onset detection methods in our framework. The first is a perceptually based onset detection algorithm by Brent (2011), intended for pitched (polyphonic) sounds. The second is based on a pitch-tracking method by Puckette, Apel and Zicarelli (1998) and aims to detect note onsets in monophonic pitched audio. The third is an adaptive onset detection function largely based on the work of Brossier (2006), which inspects the audio for spectral changes. A particular feature of this method, intended primarily for environmental sounds, is its ability to adapt to the local properties of the signal to improve the onset estimates.

Beat tracking is a computational task that aims to automatically find the underlying tempo and detect the locations of beats in audio files. It corresponds to the human action of tapping a foot to perceptual music cues that reflect a locally constant inter-beat interval. Although beats cannot always be considered sound objects according to Schaeffer's theory, we adopt these temporal units because of their relevance in music with a strong beat, such as electronic dance music.

A comprehensive description of our audio beat tracking algorithm—largely based on Dixon (2007)—is out of the scope of this paper. For a description of all segmentation strategies applied in the toolkit, please refer to Bernardes (2014, p. 45-49).

Figure 1 – Amplitude and pitch detection functions for audio onset detection. Vertical lines indicate the note onsets as indicated in the musical notation representation.

5. A Computational and Musician-Friendly Audio Description Scheme

In choosing the audio descriptors that integrate our toolkit, we relied on perceptual criteria for sound description from the three musicological theories detailed above: Schaeffer's typo-morphology, Smalley's spectromorphology, and Thoresen's aural sonology. We did not incorporate them fully into the toolkit, for reasons of simplicity, usability, and technical feasibility. Instead, we selected the criteria that are best adapted to music composition and whose technical implementation is feasible. A major concern was the use of terminology from music theory and practice to denote the descriptors in the toolkit. Therefore, without disregarding the use of concise concepts, the terms used attempt to facilitate usability for musicians with a traditional Western music education.

Table 1 organizes the descriptors included in our toolkit according to two principles. The first and topmost organization level splits the descriptors into two categories borrowed from Schaeffer: matter and form.

The criteria related to matter describe the sound object's spectrum as a static phenomenon, representing it by a single numerical value which is meaningful in relation to a finite space constrained by boundaries that represent specific types of sounds. For example, the criterion of noisiness ranges between two typological limits (sinusoidal sounds and white noise), and within these boundaries sound objects are defined on a continuous scale of real numbers. The form criteria expose the temporal evolution of the matter, or the contour of the audio features' evolution, and are expressed as lists.
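The matter/form split can be made concrete with a toy example: a matter-style criterion reduces a sound object to one bounded number, while a form-style criterion keeps a list tracing its temporal evolution. The sketch below is our own Python illustration, using plain RMS as a stand-in for the toolkit's actual loudness model:

```python
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

def matter_loudness(sound_object):
    """Matter-style descriptor: a single value for the whole sound object."""
    return rms(sound_object)

def form_dynamic_profile(sound_object, frame_size=1024):
    """Form-style descriptor: a contour (list) of frame-by-frame values."""
    n_frames = len(sound_object) // frame_size
    return [rms(sound_object[i * frame_size:(i + 1) * frame_size])
            for i in range(n_frames)]

sr = 44100
t = np.arange(sr) / sr
fading = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)  # a decaying tone

print(matter_loudness(fading))           # one number for the whole object
print(form_dynamic_profile(fading)[:4])  # start of a decreasing contour
```

The scalar answers "how loud is this object overall?", while the contour captures the kind of dynamic profile (e.g. impulsive versus unvarying) that the form criteria are meant to describe.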
Matter is further divided into two other categories: main and complementary. While the criteria under the main category provide meaningful descriptions for the totality of sounds that are audible to humans, the criteria under the complementary category provide meaningful results only for limited types of sounds. For example, pitch—a complementary criterion of mass—only provides meaningful results for pitched sounds.

The second categorization adopted in our toolkit is equally borrowed from Schaeffer and corresponds to three of his five perceptual criteria for sound description: mass, harmonic timbre, and dynamic. This categorization is used to organize the following sections, in which each descriptor is detailed individually. Emphasis is given to the conceptual basis and musical application of each descriptor rather than to its mathematical definition, which relies on algorithms from Brent's (2009) timbreID library—to extract low-level audio features from the audio—and an altered [5] version of Porres's (2011) Dissonance Model Toolbox—to extract sensory dissonance features from the audio.

                 MATTER                                     FORM
                 Main                  Complementary
Mass             Noisiness             Pitch                Spectral variability
                                       Fundamental bass
Harmonic         Brightness            Harmonic Pitch
Timbre           Width                 Class Profile
                 Sensory dissonance
                 (roughness)
Dynamic          Loudness                                   Dynamic profile

Table 1 – Audio descriptors included in our toolkit.

5.1 Criteria of Mass

The mass criteria examine the spectral distribution of a sound object in order to characterize the organization of its components. They not only attempt to detect spectral patterns (e.g. pitch, fundamental bass) but also provide a general characterization of the spectral distribution (e.g. noisiness). The criteria of mass encompass four descriptors: noisiness, pitch, fundamental bass, and spectral variability. The first is a main descriptor of matter, the second and third are complementary descriptors of matter, and the last falls into the form category.

5.1.1 Noisiness

The noisiness descriptor measures the amount of noisy components in the signal as opposed to pitched components. Inspired by Smalley's musicological theory, our measure of noisiness is given by a value that falls on a limited linear continuum, instead of Schaeffer's discrete typology.

The novelty of our descriptor in relation to related research (Ricard, 2004; Peeters and Deruty, 2008) is its computation as a combination of four low-level descriptors—spectral flatness, tonalness, spectral kurtosis, and spectral irregularity—with the aim of providing a better distinction between pitched and noisy sounds. In what follows, we define each low-level descriptor used, along with its contribution to the overall computation of noisiness.

Spectral flatness provides an indicator of how noise-like a sound is, as opposed to tone-like, by computing the ratio of the geometric mean to the arithmetic mean of the spectrum. The output of the descriptor is close to 0 for sinusoidal sounds and 1 for a fully saturated spectrum. Within this interval, pitched sounds roughly occupy the range [0, 0.1], a narrow band compared to that of noisy sounds, which inhabit the rest of the scale.

Tonalness measures the "perceptual clarity of the pitch or pitches evoked by a sonority" (Parncutt & Strasburger, 1994, p. 93) and can be understood as the reverse indicator of the spectral flatness descriptor. However, contrary to spectral flatness, it provides a more refined description over the range of pitched sounds as opposed to noisy sounds. The output of the descriptor is high for sounds that evoke a clear perception of pitch, and gradually lower for sounds of increasingly inharmonic character.

Spectral kurtosis gives a measure of the "peakedness" of the spectrum around its mean value (Peeters, 2004). It is particularly good at distinguishing among pitched sounds, which range from pure tones (low kurtosis values) to heavy frequency modulations (high kurtosis values).

Spectral irregularity inspects the spectrum from low to high frequencies and denotes how each bin compares to its immediate neighbors (Jensen, 1999). It provides a clear distinction between jagged spectra, i.e. tones with harmonic or inharmonic spectra (e.g. a piano or violin tone), and smooth spectra, i.e. spectral distributions formed by "bands" of sound that are non-locatable in pitch (e.g. sea sounds and filtered noise).

The combination of the descriptors detailed above was heuristically weighted towards a balanced definition between pitched and noisy sounds. The noisiness descriptor ranges between zero and one. Zero represents a fully saturated (noisy) spectrum and one represents a pure sinusoid without partials. Within these two extremes the descriptor covers the full range of audible sounds, including instrumental, vocal, and environmental sounds.
important element in the composition process when dealing with pitched audio signals.

Pure Data’s built-in object sigmund~ by Puckette is used to compute the fundamental frequency of (monophonic) sounds. The output of the descriptor is in MIDI note numbers.

5.1.3 Fundamental Bass

The fundamental bass descriptor reports the probable fundamental frequency or chord root of a sonority. Similar to the pitch criterion, it is a secondary criterion of mass, because it is constrained to the specific range of pitched sounds. However, contrary to the pitch descriptor, it can be applied to polyphonic audio signals. The fundamental bass corresponds to the highest value of the pitch salience profile of the spectrum. The pitch salience of a particular frequency is the probability of perceiving it, or the clarity and strength of the tone sensation (Porres, 2012). The fundamental bass is expressed in MIDI note numbers.

5.1.4 Spectral Variability

Spectral variability provides a measure of the amount of change in the spectrum of an audio signal. It is computed by the low-level audio descriptor spectral flux (Peeters, 2004), which calculates the Euclidean distance between adjacent spectra. Spectral variability is a form descriptor because it provides a description of the temporal evolution of the sound object’s spectrum at regular intervals of 11.6 ms (analysis windows encompass 23.2 ms). The output of this descriptor is threefold: a curve denoting the spectral variability of the sound object; basic statistical values (e.g. maximum and minimum values, mean, standard deviation, and variance) that express characteristics extracted from the aforementioned curve; and finally a single value that expresses the overall spectral variability throughout the sound object (computed by the accumulated difference between analysis windows of 11.6 ms).

5.2 Criteria of Harmonic Timbre

The three musicological theories presented earlier provide little guidance for the formulation of algorithmic strategies to describe the harmonic timbre content of a signal. Schaeffer’s criteria of harmonic timbre are very misleading and too inconsistent to be encoded algorithmically. Smalley (1986, 1997) does not provide a specific set of criteria for harmonic timbre, even if he considers this dimension while describing the mass of sound objects under spectral typology. Thoresen’s sound spectrum criteria (e.g. spectral brightness) are better adapted for computational use. Additionally, his criteria served as the main inspiration for our work, in particular concerning the adoption of psychoacoustic models of sensory dissonance as harmonic timbre descriptors [6]. The following sections will further detail the four descriptors adopted in our toolkit to characterize harmonic timbre: brightness, width, sensory dissonance (roughness), and harmonic pitch class profile. All harmonic timbre descriptors fall under the main category because they can measure properties of all perceivable sounds, and offer a representation of the units with a single numerical value.

5.2.1 Brightness

The brightness of a sound is correlated to the centroid of its spectral representation and is expressed by the magnitude of the spectral components of a signal in the high-frequency range (Porres, 2011). Although the root of this descriptor resides in psychoacoustics, one can also find it in Thoresen’s (2007) musicological theory, which pinpoints its importance in linguistics—in order to distinguish between the sounds of vowels and consonants—and in music—as a distinguishing factor in the perception of different traditional acoustic instruments.

Brightness is computationally expressed by the “center of mass” of the spectrum (Peeters, 2004) in units of Hertz, and its range has been limited to the audible range of human hearing, which is roughly from 20 Hz to 20 kHz.

5.2.2 Width

Width [7] expresses the range between the extremities of the spectral components of a sound object. In more empirical terms, we may say that the width characterizes the density, thickness, or richness of the spectrum of a sound.

An exact computational model of the width of the spectral components of a sound poses some problems, because the spectral representation of the audio signal may encompass an amount of uncontrollable noise, even if ideal conditions were met during the recording stage. Instead of devising a solution for this long-standing problem, we adopted a simpler, yet effective workaround: the use of the low-level descriptor spectral spread to measure the dispersion (or amount of variance) of the spectrum around its centroid. In such a way, it does not take into account the extreme frequencies of the spectrum, but rather a significant part of it, to express the frequency range of a sonority’s components. Like brightness, the output of spectral spread is in units of Hertz.

5.2.3 Sensory Dissonance

The descriptor sensory dissonance models innate aspects of human perception that regulate the “pleasantness” of a sonority. Even if sensory dissonance is regulated by several psychoacoustic factors, it is expressed in the current framework by what is considered to be its most prominent factor: auditory roughness. In detail, sensory dissonance describes the beating sensation produced when two frequencies are less than a critical bandwidth apart, which is approximately one third of an octave in the middle range of human hearing (Terhardt, 1974). The partials of complex tones can also produce a beating sensation when the same condition is met, that is, when they are within a critical bandwidth of each other.

5.2.4 Harmonic Pitch Class Profile

The harmonic pitch class profile (HPCP), also commonly referred to as the chroma vector (Serrà et al., 2008) [8], is particularly suitable for representing the pitch content of polyphonic music signals by mapping the most significant peaks of the spectral distribution to 12 bins, each denoting a note of the equal-tempered scale (pitch classes). Each bin value represents the relative intensity of a frequency range around a particular pitch class, which results from
accumulating the 25 highest peaks of the spectrum warped to a single octave.

5.3 Criteria of Dynamics

The criteria of dynamics describe the energy of the sound objects in two distinct ways: by a single value that offers a rough representation of their overall loudness, and by a curve denoting the dynamic profile of the unit. The first measure is given by the loudness descriptor and the second representation by the dynamic profile.

5.3.1 Loudness

The loudness descriptor expresses the amplitude of a unit by a single value and is defined by the square root of the mean of the squared sample values, commonly addressed as root-mean-square (RMS). The loudness descriptions are computed by Puckette’s object sigmund~, which is included in the software distribution of Pure Data.

5.3.2 Dynamic Profile

The dynamic profile represents the evolution of the sound object’s amplitude. It can be useful in cases where the single value of the loudness descriptor is too crude or oversimplifying, such as in the retrieval of sound objects with similar amplitude envelopes.

The output of the descriptor is twofold: a curve that indicates the evolution of the energy of the sound object (measured by its RMS at regular intervals of 11.6 ms with an analysis window of 23.2 ms) and some basic statistics extracted from the curve, such as minimum, maximum, mean, standard deviation, and variance (see Figure 2 for an example of the dynamic profile and extracted statistics).

Figure 2 – Dynamic profile of a sound object drawn on top of its waveform. Basic statistics extracted from the curve are presented at the bottom of the figure.

6. Applications

This toolkit has been developed to extend the terminological and operational level of earGram (Bernardes, Guedes, & Pennycook, 2012), a framework based on concatenative sound synthesis for computer-assisted algorithmic composition that manipulates audio descriptors. For this reason, the programming environment of our choice to implement the toolkit was the same as earGram’s, i.e. Pure Data. The tools can nevertheless be accessed and used independently of the software earGram, and are available at the following address: https://goo.gl/1Pa0KH.

Within the scope of earGram, where these analytical strategies have been evaluated, we were able to apply them to identify and model higher structural levels of the audio data by grouping sound objects into recognizable patterns up to the macro-temporal level. These representations were then used to feed algorithmic music strategies commonly applied to the manipulation of symbolic music representations. The resulting framework can be inserted in, and expands upon, the early computational approaches to music analysis-synthesis (Hiller & Isaacson, 1959; Rowe, 1993) on the analysis and automatic generation of music encoded as symbolic representations, towards the use of (digital) audio signals.

We additionally encouraged several composers to explore the toolkit assuming a perspective in which all sonic parameters or criteria for sound description, such as brightness and sensory dissonance, can act as fundamental “building blocks” for composition. This is not to say that every piece composed by these means must use all sonic parameters equally, but that all sonic parameters may be taken into careful consideration when designing a musical work and seen as primary elements of musical structure. For a comprehensive description of some of these works please refer to Bernardes et al. (2012), Gomes et al. (2014), Bernardes (2014), and Beyls, Bernardes, and Caetano (2015).

7. Conclusion

In this paper we presented a Pure Data toolkit for the computational analysis of the morphology of sound objects in real-time and offline modes. The analysis tools include methods for segmenting an audio stream into sound objects using MIR strategies for onset detection and beat tracking, and a set of audio descriptors that characterize several (morphological) attributes of an audio signal.

The main contributions of this paper are primarily at the conceptual rather than the technical level. Our toolkit adopts a reduced number of descriptors in comparison to analogous audio descriptor schemes, selected based on perceptual criteria from the phenomenological theories by Schaeffer, Smalley, and Thoresen for the analysis of sound-based compositions. By establishing mappings between MIR low-level descriptors and perceptual criteria defined by terms from music theory and practice, we offer a more user-friendly experience for users with a music education background, thus allowing this group to manipulate audio signals through the indirect use of low-level audio descriptors.

A distinctive feature of our toolkit is the adoption of psychoacoustic dissonance models as audio descriptors, which proved to provide robust characterizations of the harmonic timbre of sound objects in generative music applications such as earGram (Bernardes et al., 2013).

Finally, the toolkit shows great possibilities for music composition by offering composers the chance to explore dimensions of sound other than the typical primary pitch and duration attributes of acoustic instrumental and vocal models.
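To make the harmonic timbre criteria concrete, the brightness and width correlates of Sections 5.2.1 and 5.2.2 reduce to two short computations over a magnitude spectrum. The following Python sketch renders the standard spectral centroid and spread formulas (Peeters, 2004); it is an illustration under our own assumptions (function names, list-based spectrum representation), not the toolkit's actual Pure Data implementation:

```python
import math

def spectral_centroid(freqs, mags):
    """Brightness correlate: the "center of mass" of the magnitude
    spectrum in Hz (amplitude-weighted mean of the bin frequencies)."""
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total

def spectral_spread(freqs, mags):
    """Width correlate: dispersion of the spectrum around its centroid
    in Hz (amplitude-weighted standard deviation of the bin frequencies)."""
    mu = spectral_centroid(freqs, mags)
    total = sum(mags)
    var = sum((f - mu) ** 2 * m for f, m in zip(freqs, mags)) / total
    return math.sqrt(var)
```

A spectrum dominated by high-frequency components yields a higher centroid (brighter), and a spectrum whose energy lies far from the centroid yields a larger spread (wider).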
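The sensory dissonance descriptor (Section 5.2.3) rests on pairwise roughness between partials. Since the Porres (2011) models cannot be reproduced here, the sketch below substitutes the widely cited Plomp-Levelt roughness curve in Sethares' parameterization; the constants and the function name are assumptions, not values taken from the toolkit:

```python
import math

def pair_roughness(f1, a1, f2, a2):
    """Roughness contributed by two sinusoidal partials (frequencies in
    Hz, linear amplitudes): near zero at unison and at wide intervals,
    maximal when the pair is roughly a quarter of a critical band apart
    (Plomp-Levelt curve, Sethares' parameterization)."""
    b1, b2 = 3.51, 5.75                   # steepness of the two exponentials
    d_star, s1, s2 = 0.24, 0.0207, 18.96  # place the roughness maximum
    s = d_star / (s1 * min(f1, f2) + s2)  # scales the curve with register
    df = abs(f2 - f1)
    return a1 * a2 * (math.exp(-b1 * s * df) - math.exp(-b2 * s * df))
```

Summing pair_roughness over all pairs of partials of a sonority gives a rough total-roughness estimate; in the middle register, two tones about a semitone apart score far higher than a unison or an octave.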
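Likewise, the loudness and dynamic profile measures of Section 5.3 amount to windowed RMS plus summary statistics. The sketch below is an illustrative Python version (in the toolkit itself, sigmund~ performs the computation); the window and hop sizes are our assumption: at 44.1 kHz, a 1024-sample window hopped by 512 samples matches the quoted 23.2 ms analysis windows at 11.6 ms intervals:

```python
import math

def rms(block):
    """Loudness measure: root of the mean of the squared sample values."""
    return math.sqrt(sum(x * x for x in block) / len(block))

def dynamic_profile(samples, win=1024, hop=512):
    """Dynamic profile: RMS curve over overlapping windows, plus the
    basic statistics (min, max, mean, standard deviation, variance)."""
    curve = [rms(samples[i:i + win])
             for i in range(0, len(samples) - win + 1, hop)]
    mean = sum(curve) / len(curve)
    var = sum((v - mean) ** 2 for v in curve) / len(curve)
    return {"curve": curve, "min": min(curve), "max": max(curve),
            "mean": mean, "std": math.sqrt(var), "var": var}
```

For a decaying sound the curve falls from its maximum towards silence, which is exactly the kind of envelope similarity the dynamic profile is meant to capture.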
Acknowledgments

This work is financed by the ERDF – European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT – Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-037281 and by the FCT post-doctoral grant SFRH/BPD/88722/2012.

Notes

[1] Shazam, http://www.shazam.com, last access on 16 August 2015.

[2] Moodagent, http://www.moodagent.com, last access on 16 August 2015.

[3] Echo Nest Remix API, http://echonest.github.io/remix, last access on 16 August 2015.

[4] A sound object denotes a basic unit of musical structure analogous to the concept of a note in traditional Western music approaches, but encompassing all perceivable sonic matter (Schaeffer, 1966).

[5] All modifications to the original algorithms were made to improve their computational efficiency.

[6] Please note that Schaeffer clearly rejected the use of psychoacoustics in his solfeggio because (in his opinion) the in vitro psychoacoustic experiments did not fully apprehend the multidimensional qualities of timbre (Chion, 1983). Given the space constraints of this paper, we cannot describe at length the psychoacoustic dissonance models employed here. To this end, please refer to Parncutt (1989) and Porres (2011).

[7] Please note that Thoresen (2007) adopts a related term, spectral width, to characterize a different characteristic of the spectrum: the mass (called noisiness in our toolkit). Although width can be seen as a “satellite” descriptor of the mass (or noisiness), the two concepts can offer different characterizations of the spectra.

[8] The adoption of the term HPCP instead of chroma vector is due to its widespread use in musical contexts.

Bibliography

Bello, J. P., Duxbury, C., Davies, M. E., & Sandler, M. B. (2004). “On the Use of Phase and Energy for Musical Onset Detection in the Complex Domain”. IEEE Signal Processing Letters, vol. 11 (nr 6), p. 553-556.

Bernardes, G., Peixoto de Pinho, N., Lourenço, S., Guedes, C., Pennycook, B., & Oña, E. (2012). “The Creative Process Behind ‘Dialogismos I’: Theoretical and Technical Considerations”. Proc. of the ARTECH – 6th Int. Conf. on Digital Arts, (p. 263-268).

Bernardes, G., Guedes, C., & Pennycook, B. (2013). “EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data”. In M. Aramaki, M. Barthet, R. Kronland-Martinet, & S. Ystad (Eds.) From Sounds to Music and Emotions, (p. 110-129). Berlin-Heidelberg: Springer-Verlag.

Bernardes, G. (2014). Composing Music by Selection: Content-based Algorithmic-assisted Audio Composition. Ph.D. dissertation, University of Porto, Portugal.

Beyls, P., Bernardes, G., & Caetano, M. (2015). “The Emergence of Complex Behavior as an Organizational Paradigm for Concatenative Sound Synthesis”. Proc. of the 2nd xCoAx Conf., (p. 184-199).

Brent, W. (2009). “A Timbre Analysis and Classification Toolkit for Pure Data”. Proc. of the Int. Computer Music Conf., (p. 224-229).

Brent, W. (2011). “A Perceptually Based Onset Detector for Real-time and Offline Audio Parsing”. Proc. of the Int. Computer Music Conf., (p. 284-287).

Brossier, P. (2006). Automatic Annotation of Musical Audio for Interactive Applications. PhD dissertation, Centre for Digital Music, Queen Mary University of London, UK.

Chion, M. (1983). Guide des objets sonores: Pierre Schaeffer et la recherche musicale. Paris: INA/Buchet-Chastel.

Dixon, S. (2007). “Evaluation of the Audio Beat Tracking System BeatRoot”. Journal of New Music Research, vol. 36 (nr 1), p. 39-50.

Gomes, J. A., Peixoto de Pinho, N., Costa, G., Dias, R., Lopes, F., & Barbosa, Á. (2014). “Composing with Soundscapes: An Approach Based on Raw Data Reinterpretation”. Proc. of the 3rd xCoAx Conf., (p. 260-273).

Gouyon, F., Herrera, P., Gómez, E., Cano, P., Bonada, J., Loscos, A., ... & Serra, X. (2008). “Content Processing of Music Audio Signals”. In P. Polotti & D. Rocchesso (Eds.) Sound to Sense, Sense to Sound: A State of the Art in Sound and Music Computing, (p. 83-160). Berlin: Logos Verlag.

Grachten, M., Schedl, M., Pohle, T., & Widmer, G. (2009). “The ISMIR Cloud: A Decade of ISMIR Conferences at Your Fingertips”. Proc. of the Int. Conf. on Music Information Retrieval, (p. 63-68).

Hiller, L. & Isaacson, L. (1959). Experimental Music: Composition With an Electronic Computer. New York, NY: McGraw-Hill.

Humphrey, E. J., Turnbull, D., & Collins, T. (2013). “A Brief Review of Creative MIR”. Late-Breaking News and Demos presented at the Int. Conf. on Music Information Retrieval.

Jensen, K. (1999). Timbre Models of Musical Sounds. Doctoral dissertation, Department of Computer Science, University of Copenhagen, Denmark.

Landy, L. (2007). Understanding the Art of Sound Organization. Cambridge, MA: The MIT Press.

Mitrovic, D., Zeppelzauer, M., & Eidenberger, H. (2007). “Analysis of the Data Quality of Audio Descriptions of Environmental Sounds”. Journal of Digital Information Management, vol. 5 (nr 2), p. 48-55.

Parncutt, R. (1989). Harmony: A Psychoacoustical Approach. Berlin: Springer-Verlag.

Parncutt, R. & Strasburger, H. (1994). “Applying Psychoacoustics in Composition: ‘Harmonic’ Progressions of ‘Nonharmonic’ Sonorities”. Perspectives of New Music, vol. 32 (nr 2), p. 88-129.

Peeters, G. (2004). A Large Set of Audio Features for Sound Description (Similarity and Classification) in the Cuidado Project. Ircam, Cuidado Project Report.

Peeters, G. & Deruty, E. (2008). “Automatic Morphological Description of Sounds”. Proc. of Acoustics ’08, (p. 5783-5788).

Peeters, G., Giordano, B. L., Susini, P., Misdariis, N., & McAdams, S. (2011). “The Timbre Toolbox: Extracting Audio Descriptors From Musical Signals”. Journal of the Acoustical Society of America, vol. 130 (nr 5), p. 2902-2916.

Puckette, M. (1996). “Pure Data”. Proc. of the Int. Computer Music Conf., (p. 224-227).
Porres, A. (2011). “Dissonance Model Toolbox in Pure Data”. Proc. of the 4th Pure Data Convention.

Puckette, M., Apel, T., & Zicarelli, D. (1998). “Real-time Audio Analysis Tools for Pd and MSP”. Proc. of the Int. Computer Music Conf., (p. 109-112).

Ricard, J. (2004). Towards Computational Morphological Description of Sound. Master thesis, Pompeu Fabra University, Barcelona, Spain.

Rowe, R. (1993). Interactive Music Systems: Machine Listening and Composing. Cambridge, MA: The MIT Press.

Schaeffer, P. (1966). Traité des objets musicaux. Paris: Le Seuil.

Schnell, N., Cifuentes, M. A. S., & Lambert, J. P. (2010). “First Steps in Relaxed Real-time Typo-morphological Audio Analysis/Synthesis”. Proc. of the Sound and Music Computing Conf.

Schwarz, D. (2006). “Real-time Corpus-based Concatenative Synthesis with CataRT”. Proc. of the Int. Conf. on Digital Audio Effects, (p. 279-282).

Serrà, J., Gómez, E., Herrera, P., & Serra, X. (2008). “Chroma Binary Similarity and Local Alignment Applied to Cover Song Identification”. IEEE Trans. on Audio, Speech and Language Processing, vol. 16 (nr 6), p. 1138-1152.

Smalley, D. (1986). “Spectro-morphology and Structuring Processes”. In S. Emmerson (Ed.) The Language of Electroacoustic Music, (p. 61-93). Basingstoke: Macmillan.

Smalley, D. (1997). “Spectromorphology: Explaining Sound-shapes”. Organised Sound, vol. 2 (nr 2), p. 107-126.

Terhardt, E. (1974). “Pitch, Consonance, and Harmony”. Journal of the Acoustical Society of America, vol. 55 (nr 5), p. 1061-1069.

Thoresen, L. (2007). “Spectromorphological Analysis of Sound Objects: An Adaptation of Pierre Schaeffer’s Typomorphology”. Organised Sound, vol. 12 (nr 2), p. 129-141.
II. Som e Imagem | Sound and Image


2.1 Vertiges de l’image: a personal account on an audiovisual improvisation project

António de Sousa Dias, Escola das Artes / Universidade Católica Portuguesa (EA/UCP), Portugal; Instituto Superior de Educação e Ciências (ISEC), Portugal

Abstract

“Vertiges de l’Image” is a project in audiovisual computer experimentation, bringing together music improvised by Les Phonogénistes and images generated live by António de Sousa Dias. The project started in 2010 and its last performance took place in 2013. Over three years several experiments were made, and this text represents a first personal attempt to give an account of the different phases of the project, presenting and discussing the artistic decisions made and carried out in order to achieve a set of audiovisual improvisation proposals that are coherent yet always willing to preserve performance freedom.

Keywords: Audiovisual improvisation, Image and sound interaction and relationships, Live electronic music, Performance.

Introduction

The use of digital systems and other electromechanical devices for image and sound generation prompts new challenges to the concert as a standardized audiovisual form. Moreover, the fact that computers can integrate audiovisual instruments in improvisation also leads to the exploration of different forms of expression in this field. Hence, “Vertiges de l’Image” explores the exchanges between improvisation on image synthesis by António de Sousa Dias and improvised electroacoustic music by the French group Les Phonogénistes (Laurence Bouckaert, Pierre Couprie, Francis Larvor).

The collaboration between us dates from 2009, and was developed through the project “Vertiges de l’Espace” (Sousa Dias, 2010; Couprie & Sousa Dias, 2011).

In “Vertiges de l’image”, the exploration of the interaction between the four artists, who alternately become soloists or tutti, either responding to or following through the images and sounds, aims to create configurations and incessantly renewed spaces: image and sound sculptures involving the listener, in order to guide him through a universe that, we hope, combines refinement and complexity, constantly seeking to surprise the eye and the ear of the audience.

This interaction places the composition of image and sound at the centre of the improvisation device, allowing for an exploration of intersections between different artistic disciplines. “Vertiges de l’image” aims to represent one possible answer to the challenges prompted by the new instrumental settings. Hence, the base configuration provided by Les Phonogénistes is completed by the instruments brought by Sousa Dias, taking as starting points the flexibility required in real-time performance situations and the knowledge coming from electroacoustic music studies on diffusion, projection and audiovisual interpretation.

This project is also intended as a proposal for renewing the art of improvisation, and its basis can be traced back to the trajectory of Les Phonogénistes. This French group, based in Paris, started in 1998 and is dedicated to electroacoustic music improvisation, experimenting with new forms, materials and instruments: from the use of the didgeridoo, sheet metal plates or a bread knife alongside samplers, to instruments such as the recorder, the Lemur (JazzMutant) or, more recently, the Karlax (DA FACT), through interface software such as ProTools (Avid) or Max for Live (Ableton; Cycling74). Moreover, the personal and professional paths of its members promoted an eclectic practice of the group regarding contemporary musical genres and collaboration with artists from new media, cinema, theatre, visual and literary arts.

Hence, after “Vertiges de l’espace”, we decided, in 2010, to start the “Vertiges de l’image” project. At that time I had become more and more involved with the visual aspects of media art in close relationship with sound, e.g. Monthey’04 (Sousa Dias 2012) and Tonnetz09 (cf. e.g. Santana & Santana 2015), and this project represented the possibility of taking a step further.

The first presentation of “Vertiges de l’image” took place in a concert dedicated to audio-video performances, in Rennes (Bouckaert et al. 2010). It went through subsequent development and we can trace its important key moments, such as:

• the first performance, presented at “Performances Audio-Vidéo” (JIM 2010, Rennes);

• the artistic residency at the Groupe de Recherches Musicales (INA-GRM) in August 2010;

• the DVD production, from September 2010 until summer 2011, and the performances held in 2011, from September onwards;

• the artistic residency at La Tour de Guet - Visions contemporaines (La Beaudelie – Le Saillant de Voutezac) in August 2012 and the subsequent performances in 2012 and 2013.

The next sections present an account of the visual artistic aims of the project, alongside the solutions found, and are organised around these key moments.

First steps, first directions

Once the group agreed on the project’s starting point, an audiovisual performance based upon sound and image improvisation, we started rehearsing and researching forms of communication between music and image. The attempts to create meaningful situations led me to explore the possibility of generating visual material directly from the music being improvised, improvising also either with some base image elements or some image
processing over the elements generated by the music. This process was achieved through programming in Max (Cycling74), in particular through its Jitter library (Figure 1).

Figure 1 – Rehearsal setup. The laptop in the foreground runs the Max/Jitter patch used to generate, process and control the image; the screen above shows the rendered image.

I was interested in exploring intimate aspects of the image without being distracted by my musical experience. Hence, the project seemed to me an excellent opportunity, as Les Phonogénistes would produce the music, leaving me totally free to concentrate on the image.

In continuity with other works exploring possible relationships between music and image, I wanted to concentrate on abstract visual material, material whose meaning could convey the musicians’ gestures and musical style. This choice of an “experimental film” approach is justified by the absence of explicit definitions for the use of music in the cinema

“known as ‘experimental’, even abstracted graphics, in which the concepts of frame, off screen, spatiotemporal coherence, bodies and objects consistency are, either non-existent, or much less essential” (Chion 1985, p. 89).

On the other hand, also according to Chion, the “simple presence of music can create the sense of a scene” (Chion 1995, p. 121). Hence, this opens a field of possibilities as, regardless of the image, the music provided by the musicians contributes to some sort of characterization, even if it is not clearly defined, imposing a process of subjectivation on the eye.

Regarding the choice of visual materials, there were two main approaches:

• the use of produced textures to be subjected to further image processing. I was interested in exploring the concept of skin, as I was very impressed by Thierry Kuntzel’s (1948-2007) installation La peau (2007);

• the generation of material through Max/Jitter, based upon image transformation/deformation driven by the musical gestures made in real time (cf. Figure 2 & Figure 3).

Figure 2 – Example of an animated texture obtained by processing a sonogram analysis taken in real time.

Figure 3 – Example of a sphere deformed by the musicians’ sound amplitude.

These materials were presented in the first “Vertiges de l’image” performance (Bouckaert et al. 2010). Although conceived to be produced in real time, they proved rather disappointing. In fact, most of the time, the image generated through the real-time system designed for the event was rendered at poor resolution, at two to ten frames per second.

A turning point: the INA-GRM artistic residency and the DVD production

The INA-GRM artistic residency

The above-mentioned problems led to careful consideration among the members of the project. During the artistic residency at INA-GRM, in August 2010, we decided to totally revise the visual approach, as it had not proved efficient. Moreover, the most important aspects to consider, a good image rendering quality as well as a good amount of freedom in the exchange between image and sound in performance, were not being fulfilled as desired.

In fact, in these audiovisual improvisation performances we wanted the audiovisual proposals to fulfil us, and that, along with artistic initiative, there would be a joyful exploitation of the potentialities of the instruments developed for these situations. Hence, these instruments
should respond to different situations where the and “freeze” sounds through granulation techniques or the
watchwords are: reuse, adaptability and confidence in the exploitation of inner and suggestions of an off-screen
response of the equipment. Indeed, one of the main space either from the point of view of music or sound,
problems found in the use of video in real time was the specially in conjunction with cinema.
development of a strategy that could guarantee this
One possible example is shown in the following figures.
desired flexibility, speed and reliability.
First, we present a moving figure clearly separated from a
black background (Figure 5).
The visual production setup
In “Vertiges de l’image” there is a theoretical and aesthetic
framework underneath the choices made in terms of
image and sound, but also the options specific to each
domain. Regarding the visual tools, and preserving an
“experimental film” approach, we decided to use a more
effective software for video improvisation, such as
Resolume Avenue (Resolume) a software for “VJs, AV
performers and video artists”.
The final image is created through superimposition of
several layers of images (clips or still images) with
transformations or processing applied at different levels. In
this case, the workflow is arranged in a way more suitable
for performance (Figure 4) and some of its parameters are
controlled by an Evolution UC33 MIDI control interface to
provide more flexibility in real time.

Output Figure 5 – Figure (animated column) against black background.

In a second moment, the figure enlarges itself occupying


General Effects
the entire screen, revealing that the background was in
& processing stage
fact a foreground cache (Figure 6).

Layer 1 Layer 2 Layer 3 Layer 4


Effects Effects Effects Effects

AV Material AV Material AV Material AV Material


Clips and stills Clips and stills Clips and stills Clips and stills

Figure 4 - Vertiges de l'image: Resolume Avenue general patch


structure.

This approach was supported by the reuse of previously


generated or recorded, post-produced and rendered
material and the addition of new material obtained trough
ArtMatic Designer (U&I Software), a modular graphics
synthesizer.

The main visual directions


The idea of skin, in its multiple meanings, was enlarged.
Taken, for example, as an equivalent for frontier, yet being
itself part of the territories it is supposed to divide, it led us
to the exploration of links and ambiguities in territories
defined by poles such as:
• figure and background;
• still image and moving image;
• frame (painting) and cache (cinema), in the sense
of (Bazin 2005), either concentrating on the
image or suggesting the existence of an off-
screen space.
These themes were rather important as they can configure
musical counterparts in a broad sense: we can find
examples of the entanglement of figure and ground in music
(cf. Hofstadter 1979, p.70-71), bridges between “moving”
and “still” images.

Figure 6 – Initial figure occupying the screen, background revealed
as a foreground cache.

Finally, this moving image gradually stops its movement
and reveals itself to be a still taken from a photo (Figure 7).

As the main objective of “Vertiges de l'image” is to
associate the improvisation of both music and video, we
seek to create relationships between the music and the
image. One of the main strategies is based on the
perception of movement, either by concurrence or by
contrast: music and image are both produced using very
agitated and nervous materials, or both evolve at a slow
pace (concurrence); or music and image complement
each other – very agitated music against still images
(contrast).
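The concurrence/contrast strategy between music and image can be caricatured as a decision rule over the "agitation" of each medium. The sketch below is only a toy model – the threshold, scale and names are invented, not part of the performance setup:

```python
def relate(music_activity, image_activity, threshold=0.3):
    """Toy rule: similar activity levels read as 'concurrence';
    strongly diverging ones (agitated music against still images,
    or vice versa) read as 'contrast'. Activities lie in [0, 1]."""
    if abs(music_activity - image_activity) <= threshold:
        return "concurrence"
    return "contrast"

# Agitated/agitated and calm/calm pair as concurrence;
# agitated music against a still image pairs as contrast.
labels = [relate(0.9, 0.85), relate(0.1, 0.2), relate(0.95, 0.05)]
```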
The DVD production

One of the ideas behind the project was the possibility of
exploring other means of communication. After the INA-
GRM residency, and after reviewing and listening to all the
produced material, we decided to produce a DVD based
upon edited material. This was an almost year-long
production process, from September 2010 until July 2011.

All the materials were subjected to fine-tuning and
rendering to obtain six short-length movies, in the manner
of audiovisual études.

These films had a first presentation in the form of an
audiovisual installation on six screens, at La Fonderie de
l'image (Bouckaert et al. 2011b) (Figure 8), and were
released in the summer of 2011.

Figure 7 – Original still image obtained from a photo with a blur effect.

Figure 8 – Vertiges de l'image, installation at Visions du sonore, La
fonderie de l'image (2011).

The short films obtained and their main directions can be
freely described as follows:
• Track #1: Saros. Quick-paced movement, playing
with colourful objects.
• Track #2: “...do destino dos anjos.”. Movement
based upon an interplay of figure and
background.
• Track #3: Les chemins furtifs. Slow movement,
based upon the transformation of synthetically
generated clouds and clusters of stars.
• Track #4: Inex. Exploration of the illusion of
stereoscopy on a 2D image.
• Track #5: Pile je fonce, face je pile. Free
exchanges in image, exploring colour inversion,
contraction and expansion.
• Track #6: Tout devient rouge à zéro. A nod to
charcoal drawing through the presence of gray-
toned, deformed round forms “responding” to the
music against a white background.
These films also became the ground upon which we built
the improvisation setups, and allowed us to perform
improvisations of about one hour, such as the Lisbon
performance (Bouckaert et al. 2011a).

The artistic residency at La Tour de Guet -
Visions contemporaines: One step further

In 2012, we took part in an artist residency at La Tour de
Guet - Visions contemporaines (La Beaudelie – Le Saillant
de Voutezac), followed by a participation in the
Rencontres Internationales de Création Temp'óra at
Cenon, France (Temp'óra association 2012). During the
residency, two directions were carried out. The first was
the participation of Jean-Marc Chouvel at the piano
(Bouckaert et al. 2012); the other was the search for and
addition of new materials. The previous directions were
then enriched with the idea of moving towards figurative
and “concrete” visual materials. In consequence, three set
proposals were added: one more figuration-oriented, one
more “concrete”-material oriented, and one presenting an
ambiguity between the previous two.

On the figurative side, we explored video footage
recorded in La Petite Grange's garden.

The set proposes a first moment of “puzzled” image
(Figure 9), followed by the garden footage clearly identified
(Figure 10) and several processing techniques (Figure 11),
as well as inserts in the form of close shots taken from the
main frame.

Figure 9 – “The garden” (provisory title). Multiscreen effect.
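A multiscreen effect like the one in Figure 9 can be approximated by tiling a downsampled copy of the frame into a grid of small "screens". A minimal sketch, assuming NumPy and nearest-neighbour downsampling by stride slicing; the function name is invented:

```python
import numpy as np

def multiscreen(img, rows, cols):
    """Tile a downsampled copy of `img` into a rows x cols grid,
    producing an output with the same size as the input frame."""
    small = img[::rows, ::cols]          # nearest-neighbour downsample
    return np.tile(small, (rows, cols, 1))

frame = np.random.rand(120, 160, 3)      # a dummy video frame
grid = multiscreen(frame, 3, 4)          # a 3 x 4 wall of small screens
```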
Figure 10 – “The garden” (provisory title). The main image setup.

Figure 11 – “The garden” (provisory title). Symmetry effect.

Along with the exploration of this rather figurative material,
I decided to also explore the idea of micro-narrative
forms, defined by the appearance of a blurred walking
character taken at different moments of its trajectory. All
these elements try to contribute to a subjectivation process
where each member of the audience can build his or her
own large-scale narrative.

Figure 12 – “The garden” (provisory title). “Mysterious” character
appearing on the stairs and simultaneously on the left side (suggestion
of an off-screen space).

The second setup, oriented towards “concrete” material, is
based upon clips presenting footage taken from celluloid
film, hence the idea of the “concreteness” of the material
used. This main idea is complemented by the “revelation”
of the nature of the material itself, as the clips present
celluloid in its normal presentation way. The materials
present either footage showing editing marks (in or out
edit marks) or celluloid film burning during projection (a
twist, as the burning film is itself filmed from the projection
point of view) (Figure 13).

Figure 13 – “Concrete” material example: effect processing #1. Base
material: a superimposition of clips containing footage either of marks
on edited celluloid film or of film burning.

The third setup presents an ambiguity between the
previous two, in the sense that its base material is rather
figurative, as it is based upon the animation and
processing of an SMPTE colour bars test pattern. Despite
the geometrical and abstract form it reveals, this is a form
with which every viewer in our tradition is acquainted.
This test pattern represents either the need to fine-tune an
AV device, the lack of a video signal, or the announcement
that the program on the tape will follow; its presentation in
an animated and disruptive way creates a tension
between what we see and what we think we should be
seeing (Figure 14 and Figure 15).

Figure 14 – “Concrete” material example: processing of a SMPTE
colour bars test pattern #1.

These setups were added to the previous ones and given
in performance during the last quarter of 2012 and the first
semester of 2013.
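An animated, disruptive treatment of a colour bars pattern like the one just described can be sketched as follows. This is a hypothetical NumPy illustration using approximate 75% bar colours and invented function names, not the actual performance patch:

```python
import numpy as np

# Approximate top-band colours of SMPTE bars (75% white, yellow,
# cyan, green, magenta, red, blue) -- for illustration only.
BARS = np.array([
    [191, 191, 191], [191, 191, 0], [0, 191, 191], [0, 191, 0],
    [191, 0, 191], [191, 0, 0], [0, 0, 191],
], dtype=np.uint8)

def colour_bars(height=90, width=140):
    """Build a simple 7-bar test-pattern frame (no lower bands)."""
    cols = np.linspace(0, len(BARS), width, endpoint=False).astype(int)
    return BARS[cols][np.newaxis, :, :].repeat(height, axis=0)

def animate(frame, t):
    """A disruptive treatment in the spirit described above: roll the
    bars horizontally over time and invert the colours on odd ticks."""
    out = np.roll(frame, shift=5 * t, axis=1)
    if t % 2:
        out = 255 - out
    return out

pattern = colour_bars()
frames = [animate(pattern, t) for t in range(4)]
```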
Figure 15 – “Concrete” material example: processing of a SMPTE
colour bars test pattern #2.

Conclusion and further directions

“Vertiges de l'image” represents an important turning point
in my career, as its issues still echo in my works and
productions, and even in my approach to teaching sound
and image editing. It helped me to clarify the
psychophysiological motivations underlying each treatment
and choice of fixed or moving image, the relationship
between intended and obtained effects, and their
consequences, advantages and disadvantages in
performance, as well as giving me a deeper understanding
of performance techniques and of the setup and
processing for performance.

However, some issues remain open. Even if I explored
some of the editing figures and tried to enlarge our
discourse strategies, there are still many possibilities for
further experimentation. Moreover, I think that staging can
be further considered: the actual setup of “Vertiges de
l'image” implies that the musicians sit in front of a
projection screen like the audience, which can result in the
feeling of music for a silent movie (ciné-concert). In this
regard, we are considering the possibility of a different
position for the musicians, on stage, and a change in the
status of the image, for example by making the video
projection around or even over the musicians, providing a
fully audiovisual integration through videomapping.

Finally, I would like to stress that even if “Vertiges de
l'image” remains a personal reading of possible
relationships between image and sound, the passion we
put into every rehearsal and every performance was our
manner of putting into practice Baudelaire's verse:

“Les parfums, les couleurs et les sons se répondent.”

Final note

I would like to thank Laurence Bouckaert, Pierre Couprie
and Francis Larvor (Les Phonogénistes): their complicity
over all these years and the rich discussions and
exchanges of ideas were fundamental to accomplishing
the visual part of this project. I am also indebted to
António de Macedo and Susana de Sousa Dias for their
advice and directions in the production of the DVD short
films: their cinema knowledge and experience were very
important, especially in the editing phase of the project
DVD.

Bibliography

ArtMatic Designer, U&I Software llc. Last access on July 30th
2015, at http://uisoftware.com/

Bazin, A. (2005, 1st ed. 1967). Painting and Cinema. What Is
Cinema? (Vol. I), London: University of California Press,
p. 164-169.

Bouckaert, L., Couprie, P., Chouvel, J.-M., Larvor, F. & Sousa
Dias, A. (2012) Vertiges de l'Image #3 - Performance
audiovisuelle (version with piano). Presented at Rencontres
temp'óra, Rocher de Palmer - Cenon, 30th August. Video
example: Vertiges #3 par Les Phonogénistes. Last access on
July 30th 2015, at
http://www.dailymotion.com/video/xtfstu_vertiges-3-par-les-
phonogenistes_music

Bouckaert, L., Couprie, P., Larvor, F. & Sousa Dias, A. (2010)
Vertiges de l'image - Performance audiovisuelle. Presented at
"Performances Audio-Vidéo" in Journées d'Informatique
Musicale, Rennes: Université Rennes 2, 27th May.

Bouckaert, L., Couprie, P., Larvor, F. & Sousa Dias, A.
(2011a) Vertiges de l'image - Performance audiovisual.
Presented at Festival Música Viva, Lisbon, September.

Bouckaert, L., Couprie, P., Larvor, F. & Sousa Dias, A.
(2011b) Vertiges de l'image - installation audiovisuelle – 6
écrans. Presented at Visions Sonores, Campus de La
Fonderie de l'image, Bagnolet, 17th to 26th June. Last access
on July 30th 2015, at
http://www.campusfonderiedelimage.org/agenda/visions-
sonores-au-campus

Bouckaert, L., Couprie, P., Larvor, F. & Sousa Dias, A. (2012
& 2013) Vertiges de l'Image #2 - Performance audiovisuelle.
Presented at Paris: Les Voûtes, November 2012, March &
May 2013.

Chion, M. (1985). Le son au cinéma. Paris: Ed. de l'étoile.

Chion, M. (1995). La musique au cinéma. [Paris]: Fayard.

Couprie, P. & Sousa Dias, A. (2010) "Vertiges de l'espace:
analyse d'une performance électroacoustique improvisée".
Communication presented in Comment analyser
l'improvisation - Colloque international, Ircam, Paris, 12-13
February.

Couprie, P. (2011), Vertiges de l'images: quelques photos.
Last access on July 30th 2015, at
http://www.pierrecouprie.fr/?p=113

Hofstadter, D. (1979). Gödel, Escher, Bach: An Eternal
Golden Braid. Harmondsworth: Penguin.

Karlax, DA FACT. Last access on July 30th 2015, at
http://www.dafact.com/

Lemur, JazzMutant. Last access on July 30th 2015, at
http://www.jazzmutant.com/lemur_overview.php

Les Phonogénistes - Musique électroacoustique improvisée,
art multimédia. Last access on July 30th 2015, at
http://www.phonogenistes.fr/

Max for Live, Ableton. Last access on July 30th 2015, at
https://www.ableton.com/en/live/max-for-live/

Max, Cycling74. Last access on July 30th 2015, at
https://cycling74.com/

ProTools, Avid. Last access on July 30th 2015, at
http://www.avid.com/us/products/family/pro-tools

Resolume Avenue, Resolume. Last access on July 30th
2015, at https://resolume.com/

Santana, H. M. S. & Santana, M. R. S. (2015) A instalação
sonora como espaço de arte plural: a eletrónica ao serviço da
determinação das obras Tonnetz 09-B (2010) e A Dama e o
Unicórnio (2013) de António de Sousa Dias. J. Bidarra, T.
Eça, M. Tavares, R. Leote, L. Pimentel, E. Carvalho, M.
Figueiredo (Editors) 7th International Conference on Digital
Arts – ARTECH 2015, Óbidos.

Sousa Dias, A. (2010) "Vertiges de l'Espace: un instrument
pour la performance électroacoustique improvisée".
Communication presented in Actes des Journées
d'Informatique Musicale, AFIM/Université Rennes 2, Rennes.

Sousa Dias, A. (2012) Installation: Monthey'04 (version 2012).
Communication presented in Congrès Mondial d'Écologie
Sonore #2, Arc et Senans. Last access on July 30th 2015, at
http://www.architecturemusiqueecologie.com/Programme_fr.html

Temp'óra association (2012). Rencontres Internationales de
Création Temp'óra, Cenon, France. Last access on July 30th
2015, at http://www.tempora-site.org/spip.php?rubrique112

Vertiges de l'image (2011). Bouckaert, L., Couprie, P., Larvor,
F. & Sousa Dias, A. France: PNGNTS. DVD.

2.2 Low Cost, Low Tech – composing music with electroacoustic means, not being an
electroacoustic composer

Francisco Monteiro C.E.S.E.M.; E.S.E./IPP, Portugal

Abstract

Some years ago the visual artist and friend Albuquerque
Mendes asked me to do the music for a film he produced,
called A Última Ceia – “The Last Supper”.

The film is a 20-minute-long, slow tracking shot along a
big table with many guests, back and forth, in front and
behind. No sound was recorded at that 2001 dinner; the
ambiance is relaxed, Albuquerque Mendes among friends;
all the elements, from lights to décor, are wonderfully
cared for. It is a silent film of a performance.

I never had any electroacoustic experience as a creator;
nonetheless, at that time, I had some 12 years of
continuous work as a composer. And I really wanted to do
the music for this film, having in mind that it was meant to
be presented in exhibitions and on other visual art
occasions.

This paper proposes the view of a composer with no
particular knowledge of, or even interest in, electroacoustic
means, composing music that can be understood as
electroacoustic. Therefore it intends:
1 – to describe the basic creative and compositional
processes used in “A Última Ceia”;
2 – to describe how simple software (Finale™ and
Audacity™) was used;
3 – to discuss what it means to be a composer and/or an
electroacoustic composer – the differences, the
obsessions, the problems, the fetishes, the needs.

Keywords: Aesthetics, video, instruments.

Introduction

Around 2011 I received a phone call from a visual artist and
performer, my friend Albuquerque Mendes (A.M.), asking
me to make the music for a film he did in 2001. The film –
A Última Ceia (“The Last Supper”) – is the result of a
performance A.M. did with some of his friends and
coworkers: a table full of men (one of the long sides of the
table is empty), fully equipped for a supper, with A.M. in the
middle, people eating, drinking, apparently chatting in a
friendly way with each other.

The event – the performance – was video recorded: the
lights and all the recording conditions were wonderfully
cared for; no sound was recorded – there is no sound
during this 20-minute-long video. The camera makes long
and slow tracking shots along the dining table, back and
forth, focusing on the faces, the table, the dishes and the
food, the backs of the chairs, the hands, etc.

The music, as was clear in the discussions with A.M.,
would unify the film, giving it some kind of movement, of
drama, perhaps a new sense. The end result would be
presented in visual art exhibitions, perhaps at an
experimental film festival.

Composing

What music?

The first – immediate – problem I remember was how to
deal with such a long and mesmerizing film. But soon
enough several other problems arose:
1) The role of music:
a. as an illustration of the pictures;
b. or as a contrasting element, or as a
mixture of both;
c. or as something aside from the movie,
although both appearing together, as
with Merce Cunningham's choreography
and J. Cage's music.
2) But it was also very important to consider:
3) the friendly and soft ambiance of the movie – the
chats, the friends, the looseness of those
moments;
4) my experience as a friend of A.M., with whom I
had long aesthetic discussions, always very free
and calm;
5) the music he likes to hear, the music I like to
hear and play, the music we talked about on long
trips in France and Germany;
6) space and time, pictures and sound, avant-garde
and post-modernism, modern and Egyptian art (I),
performance art – some of the themes we
discussed.

It is interesting to compare the first of these preoccupations
– the role of music – with the functions of music in film as
described by Annabel Cohen (Cohen, 2001, p. 258) and
used in A Última Ceia:
1) masking superfluous noises (not applicable in A
Última Ceia);
2) providing continuity (a central feature in A Última
Ceia);
3) directing attention to specific features (interesting
throughout the work on A Última Ceia);
4) inducing mood (not relevant in A Última Ceia);
5) communicating meaning (not really applied);
6) enabling association through Leitmotiv (very
important in A Última Ceia);
7) intensifying the sense of reality (somehow
applicable);
8) adding aesthetic value (a clear intention in A
Última Ceia).

The first thing I did was what I easily thought of as “small
talk” music, a kind of continuum that would unify the entire

movie, mixing the individual chats, evolving in time: a
twelve-tone matrix that was used to compose music for a
string quartet in pizzicato.

How to compose and present the music?

Then I realized that what I was doing was music to be
performed, played, for musicians, with notes, rhythms,
textures, harmonies, etc. But what I wanted and what I
was writing was hardly playable: not fit for musicians but
for virtual instruments – a writing program that could also
“play” the written music, Finale™.

I wondered whether there were enough funds to hire real
musicians, even for a studio recording. And, I have to
confess, I felt quite excited about composing music with
electroacoustic means while not being an electroacoustic
composer.

I developed these first musical moments: I enlarged them,
retrograded them, used them simultaneously, transformed
them in a plastic way. Audacity™ – the only software I
dared to use – helped me in this: it was very clear to me
that the result would be an Audacity™ file, then
transformed into a sound format. But it continued to sound
instrumental – some ultra-virtuoso string quartet playing
pizzicato. And I felt very happy about it.

The composing process

The composition process developed in a way that the
musical materials were dialectically controlled: their
properties as expressive features, as meaningful ashes of
music history, as ground for further (compositional and
electronic) developments, as part of a specific film
narrative; and always attentive to their contexts and their
paradoxes in 21st-century post-modern – or better, in this
case, post-avant-garde – culture.

The video shows a performance made in 2001, designed
by an – à peine – post-avant-garde painter and performer,
showing men (no women are present) somehow connected
to contemporary art (marchands, curators, artists, critics,
friends): everything seems to be symbolic.

The music materials

The elaboration of the music materials became important.
First of all, a fake “Schubert theme” was made, very
important for me and, perhaps, for A.M.: this theme is
clearly the result of the discussions and of the musical
tastes of A.M. and myself. The fake Schubert was really
hard to make, as the intention was to give the almost
arrhythmic image of the expressiveness of musicians
performing Schubert in Wiener style, especially the well-
known Moments Musicaux – perhaps iconic of the
Biedermeier style. The fake Schubert later also became a
transformed echo, as heard through a fairground
loudspeaker system: a fake fake-Schubert, nothing more
than an exhalation of Biedermeier kitsch, or any kind of
kitsch.

Another important material was done as a serial theme:
percussive sounds appearing in different contrasting ways,
constructed with series of rhythms, dynamics and pitches.
This percussive material served as a dramatic contrast in
the whole piece: the dynamic series became important,
and the marimba+vibraphone+harp sounds were also
transformed into very low frequency percussive elements,
a kind of reality element amidst the slightly sensual,
delightful supper ambiance.

A final material was the trombone theme, appearing only
in the second half. It was thought of as another contrasting
theme, the most electronic and acousmatic of all.

The initial twelve-tone materials – the “small talk”
continuum – generated reiterations of different sorts, with
passages of different textures. All the materials were
organized according to the sequence of visual events,
producing different interactions with the video, sometimes
as Leitmotive, at other times as contrasting elements,
mesmerizing or awakening the public, but also interacting
internally. The whole became a structure, following
proportional temporal principles and, at the same time, the
succession of the different takes of the movie. In fact the
movie became, in my view, much more dramatic,
expressive, and much more effective in its transient
particularities. The elements, the different image
perspectives, all became parts of a narrative involving
light, objects, food, parts of bodies, music.

The music file was reworked (mastered) by a sound
professional and added to the video.

Rethinking the process

Rethinking the process, it is important to highlight that:
1) the main parts – the materials – were thought for
instruments, or for groups of instruments, for
instrumental sounds; but
2) the music was not thought for musicians,
because of its performance difficulties; and
3) the work was indeed done with electroacoustic
means, although very simple ones: the instrumental
sounds were electronically modified, developed,
transformed.
It could be “re-composed” for real instruments, but
perhaps it would be very different.

The use of instrumental sounds

I used instrumental sounds because that was my interest,
my idea, the way I really began quickly to understand the
pictures with sounds; perhaps that is the way I imagine
music, the way I reflect on what I want to hear, the way I
am used to making music. Those instrumental sounds were
sometimes difficult to choose (the serial theme was a
mixture of vibraphone, harp and marimba sounds), but
they corresponded to what I really wanted to hear. And they
limited the way I composed, most of the time very
traditional in what concerns the infrastructure (scales,
chords, series) and the techniques (counterpoint,
transposition, segmenting): those first instrumental
choices limited the whole creative work.

The use of technologies

But, nevertheless, I used Audacity™. It served to do
counterpoint (sometimes I had 4 layers at the same time
carrying the “small talk” theme), cutting and transforming
the materials, transposing them to impossible octaves,

transforming the sounds (with GVerb™, fading, velocity
changes, etc.), but rarely transforming them to a point where
there was no link to the initial material: acousmatic sounds
were not in my mind most of the time.

Instrumental, Electroacoustic and Sonic art – the
differences

The categories

A group of categories was thought over in order to
understand the differences between instrumental,
electroacoustic and sound art creation. These categories
try to describe and analyze the process of creation on
different levels:
1. Basic Materials – what music is made with;
although “sound(s)” could be the unquestionable
answer common to all kinds of music, this
question is directed to the type of sounds and to
their origin.
2. The Focus/Obsessions – this question reflects
the main interests and the obsessions of the
composer.
3. The Process – how does the composer work
with the materials?
4. The Means – which artifacts does the composer
use in his creative process?
5. The Results – what is the result of the creative
process?
6. How is it presented/performed – meaning the
way people can hear the results.

The answers

1 – Basic Materials

Sonic art and electroacoustic music use recorded and/or
created sounds: the relation of the composer to sound
creation, sound recollection and transfiguration is
fundamental; the materials are sounds (electronically
made, recorded, transformed). For instrumental music the
possible sounds are already chosen: the sounds of the
instruments. And the sounds are, most of the time,
already part of a previously organized construction, such
as a group of pitches (e.g. the 12 notes of the
temperament) or a narrower set; or reduced to the group
of sounds it is possible to produce with a specific
instrument (a short list, even using extended techniques).

A Última Ceia uses the sound of instruments pre-recorded
in Finale™, and also transformations of these sounds
through Audacity™.

2 – The Focus/Obsessions

Unlike écoute traditionelle, the traditional musical model,
where a ‘repertoire of timbres’ (écouter) and a system of
musical values (comprendre) leads to the types of listening
appropriate to a traditional sound-world, la recherche
musicale would ideally be derived from the development of a
system of musical values and structures based upon a return
to the sound itself, mostly through the type of listening
Schaeffer identifies as entendre. (Windsor, 2000, p. 8-9)

The main focus of electronic music and sonic art is sound:
sound creation, sound recollection, sound transformation,
sounds in space; but also the structuring of sounds in
time, traditionally thought of on 2 levels: 1. the opposition
of discrete elements, as also in instrumental music, and 2.
the continuous variation of sound objects, specific to
electroacoustic music (Aguilar Salgado, 2005, p. 51). As
concerns sonic art, several other items can also be
essential, such as the source of the sounds, the
environment and the objects producing sounds (as in
sound sculptures).

In what concerns instrumental music, the focus is the
music materials used and transformed – themes,
harmonies, timbres and groups of instruments, etc.:
already-made sound organizations. And, of course, the
structure – the way the sound organizations appear and
are transformed in time and space.

A Última Ceia uses themes as in any conservative
instrumental piece.

3 – The Process

The analysis of computer works unveils a certain tendency
among composers to choose, on the one hand, between
attention to the creation of computer instruments and
timbres (“to compose the sound itself”, to refer to J. C.
Risset's definition; the concept is as valid for music in
studio time as for real time or live electronic music) or on
the other hand to consider the computer software as a
means to realise ideas relating to form. Nevertheless, that
radical subdivision is relative. (Zattra, 2006, p. 115)

Through structural elaborations and/or through sound
transformations, the composing processes of
electroacoustic music and sonic art are far from the
material development and structuring usual in
instrumental music. It seems that the process of
composing is utterly connected to the basic materials and
to the focus of composition.

In A Última Ceia the compositional processes were, as in
instrumental music, the development and structuring of
music materials. But, sometimes, the use of sound
transformations (the fake Schubert theme transformation,
the cutting and transposing of the serial “reality” theme,
the trombone theme elaborations) is very present,
although very incipient compared to electronic music.

4 – The Means

Electroacoustic and sonic art use technology for all
purposes (making, recording and transforming sounds).
Instrumental music uses written materials (paper and
pencil) and/or scoring software such as Finale™.

In A Última Ceia it began with paper and with
experimenting at the piano. Then the scoring software
was added and the results were transformed through
Audacity™.

5 – The Results

Clearly the result of electroacoustic music and sonic art
composition is a recording; in instrumental music the result
is a score, so that musicians can play it; and then, perhaps,
they will be recorded, becoming also a recording.

In A Última Ceia the result is a recording.
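The Audacity™ work described earlier under “The use of technologies” – stacking several layers of the “small talk” theme and transposing it to impossible octaves – can be sketched numerically. The following is a hypothetical stand-in, assuming NumPy; naive decimation replaces Audacity's resampling, and all names are invented:

```python
import numpy as np

SR = 44100  # sample rate, Hz

def tone(freq, dur=0.5):
    """A sine 'theme' standing in for an instrumental sample."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def octave_up(signal):
    """Transpose one octave up by naive resampling: keeping every
    other sample doubles the pitch and halves the duration."""
    return signal[::2]

def mix(layers):
    """Overlay layers of unequal length and peak-normalise, the way
    several tracks might be stacked and mixed down in an editor."""
    n = max(len(layer) for layer in layers)
    out = np.zeros(n)
    for layer in layers:
        out[:len(layer)] += layer
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out

theme = tone(220.0)
layers = [theme, octave_up(theme), tone(330.0), octave_up(tone(330.0))]
result = mix(layers)
```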

6 – The Presentation

The main way instrumental music is presented is in live
performances, though we should not neglect recordings of
live and studio performances. Electroacoustic music and
sonic art are presented as a recording: live with a public,
or in any other way where a recording can be reproduced.

In A Última Ceia the result is presented with the video
recording.

A Última Ceia versus electroacoustic music and
sonic art

As a non-electroacoustic composer I did not care for sound
construction, never thought of transforming the sound
materials into others completely different, never used or
transformed curves: I directly used basic music
infrastructures, I used ready-made instrumental sounds
(Finale™ sounds), I developed the materials according to
their potential in instrumental music, although sometimes
transforming them using electroacoustic means (Audacity™).

My preoccupations were based not on sound but on sound
constructions, on the transformation and structuring of
music materials. And when I thought about sound, I did it
as a response to structural preoccupations, resolving
problems focused on the materials.

What I heard in my mind from the beginning was not
sounds – everyday sounds or created sounds – but
already constructed sound materials: themes that were
repeated, transformed, reorganized.

And this perhaps obscure, limited way of doing music is
utterly different from the unconditional contact with
sounds, any sound, created or existing. Contrary to J.
Cage, what gave me pleasure was not sound but sound
structures, not sound per se but meaningful constructions
made with specific sounds, thought of in social, historic,
cultural and musical terms. These constructions were the
result of a specific agglomerate of personal and cultural
implications: sounds people hear only when they hear
Western and other civilizations' music, played on acoustic
instruments.

And this is altogether different from electroacoustic music
and from sonic art.

Music as a cultural symbol

This specific perspective of creativity with sounds is a
symbol of a specific kind of music, a human creation
thought to be done with the voice and/or instruments,
perhaps recorded and reproduced: it confirms the
symbolic condition of a piece of music as something that
somehow sounds as expected – as music; it corroborates
its cultural significance.

Very different are electroacoustic and sonic art pieces,
with sounds that most of the time have no link to acoustic
instruments and/or to the voice: the link to traditional
music culture has to be mediated through aesthetic help.
The same problem seems to appear in pieces that
exhaustively use instrumental extended techniques: their
link to traditional music culture becomes elusive; the gap
is, perhaps, bridged by the presence of the musicians in a
live performance.

Conclusions

A Última Ceia was a challenge in several ways, not only
technically but also aesthetically. Making the music for an
experimental film and being a friend of the film's author
was particularly important. The process of composing,
sliding between instrumental music techniques and
electroacoustic results, was peculiarly interesting, as it
raised the several questions discussed in this paper.

This discussion emphasizes the very different concerns,
activities and results of instrumental, electroacoustic and
sonic art composers: although they share the same
ground – sounds in space and in time – the materials, the
needs, the perspectives, the obsessions and the results
can be very different.

Music composing in the 21st century encompasses a very
wide range of activities, involving tradition, culture, our
day's various needs for sound-constructions, technology,
creative impulses, aesthetic questions, and people hearing
in different ways. Perhaps in the future there will be the
need to understand what we now call music as different
activities, each of them joined to a specific social behavior:
music for cellphones, music for dancing, music for singing,
music for airports, music for the concert hall, music for
film, music for musicians.

Nevertheless I believe that a composer of any kind of
music needs to have various experiences with different
techniques, from easy-listening melody making to
counterpoint and to sonic art processing: the capacity to
change, to innovate, to adapt to different means will be
much wider; the results in a specific creative music activity
can be much more interesting and meaningful.

As concerns my personal experience, I am just beginning.

Notes

I – A private joke about the contrast between the sparse use
of elements in modern art and the profusion of forms in
Jugendstil / Art Nouveau.

Bibliography

Aguilar Salgado, A (2005). Processos de estruturação na
escuta de música eletroacústica. Dissertação apresentada ao
Curso de Mestrado em Música do Instituto de Artes da
Universidade Estadual de Campinas, UNICAMP, Campinas,
Brasil.

Cohen, A. (2001). "Music as a source of emotion in film". In
Juslin, P., Sloboda, J. (ed.). Music and emotion: Theory and
research. New York: Oxford University Press. p. 249–272.

Windsor, L. (2000). "Through and around the acousmatic: the
interpretation of electroacoustic sounds". In Emmerson, S.
(ed.) Music, Electronic Media and Culture. Ashgate e-book.
Accessed on 28th July 2015,
http://static1.squarespace.com/static/50e79ec7e4b07dba60068e4d/t/51570575e4b06ce8229e26c9/1364657525203/Windsor.pdf

Zattra, L (2006). "The Identity of the Work: agents and
processes of electroacoustic music". Organized Sound - An
International Journal of Music Technology. Vol. 11, n. 2, p.
113-118. Cambridge: Cambridge University Press.
eaw2015 a tecnologia ao serviço da criação musical 51

2.3 Sound, body, movement and scene in Joana Sá's trilogy of solos "music of thought" for semi-prepared piano and extended techniques

Joana Sá, Universidade de Aveiro / INET-MD, Portugal

Abstract

"Music of thought", an expression by George Steiner, is used in this trilogy of solos as an idea that unifies all the pieces – "Is there in some kindred sense 'a poetry, a music of thought' deeper than that which attaches to the external uses of language, to style?" "Music of thought", in this context, is meant as a personal language encompassing multiple dimensions and forms of expression (music, visuals, movement, words), where music undertakes a central role, articulated with Novalis's notion that "the exterior is an interior distributed through space". Here, music and scene create this "interior distributed through space". The body in performance is thought of as a "spatial body" and movement as "acting thoughts", influencing and being influenced by space and scene, as in Bachelard: "the inhabited house is no longer a space, but that which surrounds a body".

Sound, movement, words and scene are approached as a unified structure, where listening and seeing are interlocked, following Bachelard's idea of "dreaming devices": "to see and listen, ultra-see and ultra-listen, listen to yourself seeing".

The music resorts to extended piano techniques: preparations, installations, sound processing, sound recordings, other instruments, props and sound amplification. Through these techniques, the performance often assumes an idea of choreography, of a "theatre of action". The visual aspects spring from collaborations with the film director Daniel Neves and the visual artists Rita Sá and Pedro Diniz Reis. I therefore propose to present in detail the use of sound and the cinematic approach to scene in this trilogy.

Keywords: Music of thought, Prepared piano, Extended techniques, Through this looking glass, In Praise of Disorder.

Introduction

The trilogy of solos whose ideas I present in this article is still in progress: the first two pieces – through this looking glass and Elogio da Desordem (In Praise of Disorder) – are complete, their recordings have been released [1] and the pieces have been performed live [2] in several concert halls. In this trilogy I take on the roles of both composer and performer, and I am currently in the process of creating the last piece.

"Music of thought" is the poetic idea that unifies this trilogy; it has been imagined and constructed throughout the creative process, in a back-and-forth between artistic practice and theoretical research. The expression "music of thought" is used by George Steiner in his book A Poesia do Pensamento – Do Helenismo a Celan (The Poetry of Thought: From Hellenism to Celan), and I appropriated it because I felt it could best illustrate the language of this set of works.

The following question, posed by George Steiner, opens the formulation of this "music of thought":

"Is there [...] 'a poetry, a music of thought' deeper than that which attaches to the external uses of language, to style?" (Steiner, 2012, p. 15).

My notion of music of thought refers to a language with a personal, dreamlike imagery of its own, with several dimensions and forms of expression (musical, visual, verbal, of movement), in which music plays the central role. In this language, body and movement fulfil important functions both in the compositional act and in performance, since these two actions are interconnected. Implicit in the music there is thus a choreography or, as Janet Halfyard calls it, a "theatre of action" (Halfyard, 2007).

The language of the trilogy of solos seeks to approach the notion of thought, that more abstract action which occurs within us "without any possibility of sharing" (Tavares, 2013, p. 274), and the most immediate mechanisms of imagination and of the creation of the "poetic image" (Bachelard, 1957, p. 2). This language comes close in several respects to the ideas of Artaud and his "theatre of cruelty": in the search for a "unique language halfway between gesture and thought" [3] (Artaud, 1964, p. 138); in the search for "a state prior to language which can choose its own language: music, gestures, movements, words" [4] (Artaud, 1964, p. 94); in the conception of a "spectacle that addresses the entire organism" [5] (Artaud, 1964, p. 135) and, as Artaud adds,

a spectacle that does not fear to go as far as it can in the exploration of our nervous sensibility, with rhythms, sounds, words, resonances. [6] (Artaud, 1964, p. 135);

and also in a conception of the spectacle that is not bound to a pre-defined text but is conceived from the outset with all its dimensions.

This language thus forms itself around sound and music, in a synaesthetic relationship with all the other elements, seeking to share an intense and unusual sensory experience. With this kind of synaesthetic relationship I associate the expression "dreaming devices", used by Bachelard with reference to the poet Loys Masson, as if it were a different way of perceiving: "dreaming devices" meaning "to see and listen, ultra-see and ultra-listen, listen to yourself seeing" (Bachelard apud Tavares, 2013, p. 445).

For this synaesthetic relationship, and for the notion of music of thought, Novalis's idea, reformulated by M. Tavares, that the "exterior is an interior distributed through space" (Tavares, 2013, p. 463) is also important.

This language thus has a dynamic of its own between interior and exterior, frequently provoking limit situations through movements of opening and exteriorization of maximum sonic density, and through movements of closing in which it seeks the threshold of the audible.

1. The inside and the outside

I shall begin by addressing some poetic and philosophical ideas and literary images concerning the dialectics interior/exterior, open/closed, inside/outside that are important for my music of thought. I refer first to the idea already evoked, that the "exterior is an interior distributed through space" (Tavares, 2013, p. 463). Implicit in this idea is an intimate relation between inside and outside, a kind of material sharing of contents between exterior and interior – an idea I want to bring into the conception of music of thought: an exterior language that evokes an interior language (thought). Bachelard also addresses this question, linked to his "phenomenology of imagination", in the chapter "La dialectique du dehors et du dedans" [7] of his book The Poetics of Space, opposing a dialectic of simple dichotomy and stating that "Le dedans et le dehors ne sont pas laissés à leur opposition géométrique" [8] (Bachelard, 1957, p. 206). In this context Bachelard speaks of the complexity of the movements of opening and closing inherent to Man, defining him as the "half-open being" [9] (idem).

These ideas of Novalis and Bachelard already contain in themselves an implicit vision of movement, and it is on this movement, at various scales and in various contexts, that I shall keep thinking and writing.

2. Thought and imagination – interior movement vs. exterior movement

Thought has been related to movement and dance by authors such as Wittgenstein, Valéry, Gil and also M. Tavares, who writes that "thought moves, walks, accelerates, jumps, dances – thought, as it were, practises sport", and who calls it "the human movement par excellence" (Tavares, 2013, p. 274). The expression "movements of thought" is used by Wittgenstein, who writes:

during the year 1913-14, I had some thoughts of my own [...]. I mean that I have the impression that at that time I gave life to new movements of thought [...], whereas now I seem merely to apply old movements. (Wittgenstein apud Tavares, 2013, p. 274)

Valéry, in turn, asks:

"What is a metaphor if not a kind of pirouette of the idea, bringing its various images or names close together?" (Valéry apud Tavares, 2013, p. 273)

In the opposite direction, on the relation of exterior movement to thought, M. Tavares, referring to Bachelard's "theory of the step", writes:

"Movement as thought that acts, that makes itself explicit, that occupies space; movement as thought made visible." (Tavares, 2013, p. 209)

We may relate these ideas to Novalis's idea mentioned above and suspect that exterior and interior may indeed share matter, and that their main constituent may possibly be movement itself: according to these theories, thought may be seen as interior movement and, in turn, exterior movement may be seen as the exteriorization of thought.

In my research on movement and on this relation between interior and exterior, I was interested in analysing the possible continuity or discontinuity between the movement of thought of the original musical idea or gesture and the exterior movement (the performative gesture, the execution of the idea).

Regarding this relation between interior and exterior movements (and of first importance for the notion of music of thought), it is essential to mention the role of imagination as movement – a movement, however, of a peculiar nature.

Bachelard writes:

Imagination is, above all, a kind of spiritual mobility – the greatest, liveliest and most vivid kind of spiritual mobility. [10] (Bachelard, 1943, p. 7)

For Bachelard, imagination is "the very experience of opening, the experience of novelty" [11] (Bachelard, 1943, pp. 5-6). Relating it to the "dialectic of inside and outside", I quote M. Tavares again:

imagination is an interior movement that projects that same interior movement onto the things that constitute its object of action. (Tavares, 2013, p. 401)

And, quoting Bachelard's The Poetics of Reverie, M. Tavares writes:

Man at work in imagination is "in an inside that no longer has an outside". (Tavares, 2013, p. 374)

In this trilogy, imagination and its implicit creative movement are perhaps the most important principles, since the music seeks to situate itself in a primordial state of ebullition, where categorizations and impossibilities do not yet exist: everything seethes in a kind of dreamlike, utopian state, in that "inside that no longer has an outside" (idem).

Associated with this state, my imagery as composer and performer ends up having a childlike character in its approach to sound and to the instrument itself: I compose in that state of primary curiosity before an instrument loaded with History and "seriousness". In this context I try to establish an opening: the institutionalized rituals and approaches I have assimilated are set aside, and a new, personal relationship with sound, instrumental techniques, performance and space is sought.

3. Musical idea and performative gesture – discontinuity

In the context of the history of Western music, the beginning of the 20th century accentuates a tendency toward a growing distance between the roles of composer and performer, their relationship often becoming a source of conflict.

In this period, music

deliberately entered into conflict in the universe of symbols, as we would call them today, and of ideas. The composer became, like the poet and the painter, an "artist" whose ideals and whose way of seeing the world seem to disdain the artisanal bric-a-brac of professional musicians. [12] (Berio, 1985, p. 18)

The growing abstraction of the musical "idea" and the distance created between this idea and musical practice (Berio, 1985) – that is, between the composer's mind and the performer's capacities – led notation to become ever more complex and diverse, gaining a new and very important status and role. This written or explicit dimension – generally privileged in our society as a form of knowledge [13] – sought more and more to become "the representative" of the "musical idea". Schoenberg, in this regard, stated that the performer is

totally unnecessary except as his interpretations make the music understandable to an audience unfortunate enough not to be able to read it in print. [14] (Schoenberg apud Cook, 2009, p. 204)

This discontinuity between musical idea and execution frequently adds to the difficulty of interpretation on the performer's part, and these two musical dimensions often come into "conflict". In this process, the performer often has to "reinvent" him- or herself in the way of thinking about music and technique. Margarethe Maierhofer-Lischka addressed this question in her lecture-recital "Approaching the liminal in the performance of Iannis Xenakis' instrumental solo works" at PERFORMA 2015 (Universidade de Aveiro). Of Xenakis' piece "Theraps" for double bass she writes: "this piece poses a specific challenge to the performer's physical, mental and emotional limits, transporting him into another mental state" [15] (Correia, Carvalho & Pestana, 2015, p. 111). Maierhofer-Lischka described this process as a deep inner struggle, out of which a different relationship with the instrument was eventually born, together with greater musical and personal maturity and a new quality of movement and of mental state in performance.

This type of score corresponds, in a way, to the Theatre of Virtuosity as a form of Janet Halfyard's Theatre of Action: the idea of virtuosity as the reinvention and rehabilitation of virtuosity (Halfyard, 2007) through techniques of sonic expansion of the instrument.

As for the discontinuity between idea and musical gesture, its nature differs from composer to composer. Xenakis states:

I do take into account the physical limitations of performers. [...] In order for the artist to master the technical requirements he has to master himself. Technique is not only a question of muscles, but also nerves. [16] (Maierhofer-Lischka, 2015, p. 7)

For Xenakis there is not exactly a departure from performative thought; rather, he sees the musical idea as a deep inner challenge that should be posed to the performer on several levels.

Referring to Berio and his Sequenzas, Halfyard describes the Theatre of Action as the instrumentalist's use of unusual instrumental techniques, and the consequent unusual gestures and actions, which lead to a double deconstruction of the spectator's expectations: the expectation of how a musician behaves on stage, and the deconstruction of the expectation of how the instrument sounds (similar to Brecht's Verfremdung effect). In his Theatre of Virtuosity the performative gesture is integrated into the compositional process, since Berio worked with specific performers, and in this kind of work there is a sharing of knowledge and techniques between composer and performer.

In collaborations of this kind there is, in my opinion, no total discontinuity between the musical idea and its execution: the final musical gesture can be made from the performer's gestures and techniques, rethought and reformulated by the composer, or the other way round – the composer's ideas can be reformulated by the performer.

4. Musical idea and musical gesture – continuity

In the trilogy at issue, in which I begin by assuming the role of composer, there is no discontinuity between the musical idea and the musical gesture: they are born and develop together.

Thus the musical idea often arises allied to a choreography: the key musical idea of the piece that gives the work Elogio da Desordem its name was born from a choreographic idea in which a situation of physical and mental impotence before a simultaneity of actions to be performed is implicit. This idea arose from a partnership with the visual artist Pedro Diniz Reis, author of the Livro dos AA [17], from which I selected some pages, from then on determining rules and musical materials for the graphic events. In this way, three sets of selected pages were transformed by me into graphic scores. In one of these sets, a continuous and intermittent line of "AA" was turned into the rhythmic ostinato that propels the piece from beginning to end. I defined zones of action on the piano keyboard, to the left and to the right (low and high registers) of the ostinato, which occurs in the central zone; these correspond to the graphic spatialization of other groups and figurations of "AA" in the score. The score, as I conceived and established it, was impossible to play in its entirety, since all the graphic events occurring simultaneously in different registers (spatial and sonic) were impossible to realize with only two arms: the idea of the piece was thus that, with the ostinato always frenetic and unidirectional, I would try to carry out the maximum number of possible simultaneous actions, also placing myself in a limit situation. Later I wanted to return to this idea, but without being tied to a specific graphic work that had not originally been conceived as a score. I was thus able to add still more layers of sound to the basic ostinato and to develop other techniques whose ideas did not derive from the graphic figurations. I conceived an installation of bells and sirens, which I play with a pedalboard and which makes it possible to create another level of actions with continuous or intermittent sounds; I also used noise boxes [18], which are played with the hands. In this way I increased the number of possible simultaneous actions, transforming the piece into a kind of mechanical choreography of the impossible.

Contrary to the situation in which a machine tries to play the part of a person, here it is the person who desperately tries to perform machine-like simultaneity, speed and insensibility. In the end the task becomes impossible, leading to the collapse of the performer – in this case, myself.

In the aforementioned piece, Elogio da Desordem, I find myself, as performer, in a limit situation, one imposed and created by myself and not by another composer – which is new in relation to what is mentioned by Xenakis and Berio. In my music I frequently explore, on several levels, the binomials or opposites interior/exterior, often resorting to limit situations of sound and performance; it is these limit situations that I am most fascinated to work with.

In Halfyard's already cited "theatre of actions" or "theatre of virtuosity" I find concepts that can also be applied to the pieces of the trilogy; for convenience, I shall henceforth refer to them in my specific case as a choreography of actions, since my musical thinking is closer to a choreographic than to a theatrical ideal.

The continuity between musical idea and performative gesture in this trilogy means that there is also continuity between the movement of thought and the exterior movement or musical gesture: they arise together, in a single gesture that creates the musical idea.

The first relation between sound and image in the trilogy is therefore this music/movement relation: it is music to be seen and heard. The full sense of the music is present only when all the dimensions of the language are present.

5. Thought / Language / Sound / Sense

In the conception of a language with several dimensions in the music of thought, the word is used in a secondary role, as something that emerges from the music but never quite materializes in an effective way. The word has a poetic, abstract role of co-creating an imaginary, and the relation I establish between thought and language is very close to the one established by Artaud.

Gonçalo M. Tavares writes in his Atlas do Corpo e da Imaginação that for Artaud

the thought-word relation is (...) a relation of constant loss: thought loses when it expresses itself in words. There is a diminution of intensity, a diminution of rational force, of the force of understanding [...]. (Tavares, 2013, p. 260)

Artaud takes this idea to the limit when he writes in The Theatre and Its Double that words "stop and paralyse thought instead of permitting and fostering its development" [19] (Artaud, 1964, p. 172).

As for his use of the word and the role attributed to it, Artaud states that

"it is not a matter of suppressing the word in the theatre, but of changing its purpose and, above all, of reducing its importance" [20] (Artaud, 1964, p. 111)

Steiner writes in the introduction to the same book, A Poesia do Pensamento – Do Helenismo a Celan, that "this essay is an attempt to listen better", and says further on:

It is conceivable that spoken discourse, to say nothing of the written, is a secondary phenomenon. Perhaps these two forms of language embody a decline of primordial totalities of psychosomatic consciousness which continue to intervene in music. Too often we "get it wrong" when we speak. Shortly before dying, Socrates sings. (Steiner, 2012, p. 21)

In his conception of Listening, Jean-Luc Nancy establishes a relation between sound and sense that greatly fascinated me in this research and contributed much to my progressive formulation of the idea of music of thought. Beyond the relation between sound and sense itself, I am interested in the way it involves timbre, movement (and the inside/outside relation), body and space (as a place of resonance).

On this relation, Nancy writes:

perhaps sense must not content itself with making sense (...), but must also resound. (Nancy, 2014, p. 17)

and, advancing in his discourse, he goes further, stating that one may

treat "pure resonance" not only as the condition but as the very sending and opening of sense, as beyond-sense or sense that exceeds signification (Nancy, 2014, p. 54)

According to Nancy, "sense and sound share the space of a referral" (Nancy, 2014, p. 22), this being the "place of resonance" (Nancy, 2014), also called the "between or cavern of sound" (Nancy, 2014, p. 32) and presented as a non-"phenomenological" "subject", a "resonant subject" (Nancy, 2014, pp. 41-42). This subject or "self" refers "to the form or structure and to the movement of an infinite referral" (Nancy, 2014, p. 23), or again to the "rhythmic unfolding of an envelopment between 'inside' and 'outside'", which are in the end "translated" into the subject as "attack of time" (Nancy, 2014, p. 68), into the subject as timbre itself.

As for timbre, Nancy calls it "sonorous matter" or the "real of sound" (Nancy, 2014, p. 69) – ideas I would like to relate to Bachelard's "material imagination" and to my own way of approaching timbre. Bachelard's "material imagination" "will think matter, dream matter, live in matter or – (...) – materialize the imaginary" [21] (Bachelard, 1943, p. 13).

In a "materializing gesture" of my own upon Bachelard's idea of "material imagination", I now relate it to my approach to timbre. This approach has an almost "alchemist" character: it starts from a search for materials, for different resources, and from experiments made with them in interaction with the instrument. These experiments seek sounds that surprise me (the "magic of alchemy") and that interest me as raw material for the music. M. Tavares, regarding Bachelard's ideas, speaks of the "potential for activating the imaginary" of objects (Tavares, 2013, p. 383).

The connection Nancy draws between timbre and sense, and the notion of "material imagination", are ideas I want to bring into the conception of the notion of music of thought and into my "material" approach to sound. In the way sound is thought in this trilogy, timbre is of fundamental importance – it is the core of the whole question: each work has a "unique instrument", that is, the piano "is", and sounds like, a "different piano" in each work of the trilogy. Thus, in through this looking glass, the piano is semi-prepared with various objects (springs, screws, rivets); on several occasions it is processed with effects (guitar pedals); several utensils and techniques for manipulating the inside of the piano are used; and a toy piano is also used as a sonic extension. In Elogio da Desordem the piano is another one: there is no sound processing of the piano, but there is a sonic prolongation of it through the installation of bells and sirens [22], controlled by me through its pedalboard, and also through noise boxes [23]. The piano is again semi-prepared, but this time with magnets of various sizes and shapes [24]. As for the toy piano, it is also present and is used as a prolongation of the high register.

Still related to this question of timbre and listening, another characteristic of these pieces arises: the sound amplification of the piano.

The piano is amplified for several reasons, the most important being an aesthetic one – a question of scale associated with electroacoustic thinking: I want the effects produced on the piano (at times very quiet effects) to be perceived by the audience in the same way I hear them when I am leaning inside the piano, and I want to be the one to manipulate the scale of each register or effect. For practical rather than aesthetic reasons, the sound space of these pieces is always built through a stereo system.

6. Prolonging the musical gesture: the electroacoustic recording

What is common to all the works of the trilogy (I intend to do it in the third work as well) is the use of electroacoustic recordings that I launch in concert. These recordings function as a prolongation of the sound of the instrument and of the musical gesture. In performance, sound is almost always associated with movement; when these recordings are launched and I stop playing, the sound gains an autonomous, almost ethereal status, detached from any specific action – it "continues the movement" of the musical idea, it "rises" in space.

M. Tavares quotes Sloterdijk on the question of the "kinetic surplus" in movement: "Whoever moves, always moves more than himself" (Tavares, 2013, p. 109).

Relating this to the question of the "trail" of movement or sound, M. Tavares, referring to Artaud, writes:

Only the body and its movement (...) express themselves in space in an exciting way, that is: in a way that does not end there, that continues, that makes things continue. (Tavares, 2013, p. 258)

7. A body that thinks / a surrounded or spatial body

Still in connection with this approach of mine to the musical idea, built largely with the body and its movement, in an almost "carnivorous" act, I would like to associate, among others, the notion of the body as an organism that thinks as a whole.

As M. Tavares says:

There is an immediate tendency to place thought in its medium par excellence – where the ground will be most propitious to it – the brain. Yet the human body has other media, other places, where thought gets on well, so to speak. (Tavares, 2013, p. 485)

According to Wittgenstein, thinking is

done by the hand, when we think by writing; by the mouth and the larynx, when we think by speaking" (Wittgenstein apud Tavares, 2013, p. 484).

M. Tavares speaks of a "hand that thinks", a "larynx that thinks". Valéry states in his philosophical studies: "This hand is a philosopher". Linked to this idea is also Artaud's concept of "affective athleticism", described by him as a

kind of affective musculature corresponding to specific localizations of feelings". (Artaud, 1964, p. 199)

The actor is seen as an "athlete of the heart", a conception that inspired many artists, among them one of the most important performers of the 20th century, the pianist and composer David Tudor (Cabañas, 2012).

Related to this notion of a body that thinks, and linked to the notion of "kinetic surplus" mentioned in the previous section, I would also like to mention M. Tavares's notion of the "surrounded body" or "spatial body", "influenced by and influencing space", a body that moreover "surrounds and is surrounded by time" (Tavares, 2013, p. 189).

These ideas of a body that does not end in itself, of music that does not end with movement, and of movement that continues in space are essential in my language: there is a kind of incendiary fuse running between all the elements present.

8. Space

My relationship with space as a performer is, however, a more complex one. I was never a born performer, one of those people who always felt the pleasure of the stage – quite the contrary: my relationship with the "stage-space" was one of absolute panic and terror. My pleasure in playing in public was conquered and built with the help of several "crutches" imagined and created by myself, one of which is linked to the transformation of my relationship with space.

In changing this relationship, the scenic space, in order to be the space of my music in particular, came to play a more active role: it has to "continue" the music, it has to bring other dimensions to its imaginary.
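The "question of scale" raised in section 5 – making barely audible inside-the-piano effects reach the audience at the level the performer hears them – can be illustrated by a minimal gain computation over a stereo pair. This is a generic sketch under assumed numbers (the target level and the sample values are invented for illustration), not the actual amplification chain used in the trilogy.

```python
import math

# Sketch of a "question of scale": compute the linear gain that brings a
# very quiet captured signal up to a chosen target loudness (RMS) before
# it is sent to the stereo pair.

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_to_target(samples, target_rms):
    """Linear gain that would bring `samples` to `target_rms`."""
    current = rms(samples)
    return target_rms / current if current > 0 else 1.0

def amplify_stereo(left, right, target_rms=0.25):
    # One shared gain for both channels keeps the stereo image intact.
    g = gain_to_target([*left, *right], target_rms)
    return [s * g for s in left], [s * g for s in right]

quiet = [0.01, -0.02, 0.015, -0.01]   # a barely audible gesture
l, r = amplify_stereo(quiet, quiet)
print(round(rms(l + r), 3))           # → 0.25
```

Using a single gain shared by both channels, rather than normalizing each channel separately, is the simplest way to raise the level without disturbing the stereo image.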

Por outro lado, há uma outra característica relativa a esta relação no meu pensamento musical: ele acaba por “ocupar” (literalmente) espaço, ele “molda” o espaço. Assim, as extensões sonoras idealizadas (como a instalação de campainhas e sirenes, no caso de Elogio da Desordem) habitam o espaço cénico comigo, tornam o solo num solo quase “acompanhado”. Passo a não estar completamente “sozinha” em palco: aqueles objectos sonoros ganham contornos humanos, rodeiam-me, “protegem-me”, estão comigo, tornam-se parte do meu imaginário e, no fundo, quase fazem música de câmara comigo.

Como exemplo de pensamento musical que molda o espaço, dou um exemplo de uma peça de through this looking glass na qual toco debaixo do piano, com um bilro e uma baqueta no tampo e nas traves do piano com o pedal sustain em baixo, sendo o som captado misturado com processamento electrónico (um delay com pitch-shifter). Nesta peça não se trata apenas de explorar o potencial sonoro que está, de facto, “escondido” debaixo de qualquer piano. Trata-se de trabalhar este potencial aliado a um imaginário mais alargado – eu vou para baixo do piano (o espaço é moldado, condensado a este recanto do piano) num gesto semelhante ao de uma criança que vai para baixo de uma mesa e procura um abrigo fora do real, uma porta/abertura para um mundo imaginário (e eu vou vestida com um figurino que se assemelha ao de Alice de Carroll). Neste contexto, o potencial sonoro é exponenciado: o que se descobre debaixo de uma mesa (que neste caso é um piano) é, para o nosso imaginário infantil, um tesouro precioso, “magia pura”. Este tipo de imagens faz parte do nosso imaginário colectivo, tal como Bachelard refere na sua “Poética do espaço”. Aqui, a música aliada à acção com o espaço ganha, desta forma, uma outra dimensão.

A minha ideia de espaço nesta trilogia não acaba com estas relações (corpo/música/movimento/espaço) referidas, ela “continua o movimento” destas dimensões e relações.

O espaço precisa de ser concebido e construído como uma “casa onírica” (Bachelard), um abrigo imaginário, uma utopia pessoal. Para Bachelard “o espaço habitado transcende o espaço geométrico” (Bachelard, 1957, p. 58), escrevendo M. Tavares a este respeito:

Uma casa habitada deixa de ser um espaço para passar a ser aquilo que rodeia um corpo, o que é diferente. (Tavares, 2013, p. 414)

À ideia de “corpo rodeado” já referida, alio assim a ideia de “casa que rodeia um corpo” - transformando-as numa única estrutura de interação que pode ser transposta para as dinâmicas no palco: o aspecto “carnívoro” desta música é sempre presente.

Louppe, a propósito da coreógrafa Mary Wigman, refere as noções de «absolute space», “um espaço para além do espaço e um espaço-matéria” (Louppe, 2012, p. 189) e escreve “Trata-se de um espaço que o corpo encara como outro corpo, um espaço como parceiro” (Louppe, 2012, p. 189).

Sobre a deliciosa descrição de ninho de Michelet referida por Bachelard na “Poética do espaço”, M. Tavares afirma algo que vai na mesma direcção:

vemos aqui o espaço assumir um papel animalesco (de animal apenas), de ser vivo: o espaço é um animal. (Tavares, 2013, p. 414)

Estas noções interessam-me relacionar com a minha forma de pensar a música e performance onde este lado “animalesco” está muito presente a nível sonoro, performativo, visual e textual.

Para finalizar e, como factor mais importante para a concepção de espaço e para a dimensão visual da trilogia, passo a referir as colaborações que tenho feito com artistas visuais específicos (Daniel Costa Neves, Rita Sá e Pedro Diniz Reis) com competências que eu não possuo, mas com os quais partilho profundamente sensibilidades estéticas e artísticas.

Rita Sá criou um mobile de criaturas gigante para through this looking glass, criaturas estas que se relacionam com o imaginário musical da peça (a primeira parte desta obra denomina-se de 13 mini-creatures for R. Schumann, na qual é feita uma evocação simultânea das Cenas Infantis do compositor referido e do imaginário de Alice). As miniaturas desta primeira parte são pensadas como estudos sonoros para aquele “piano recriado”. Como cada estudo utiliza técnicas e sonoridades pouco convencionais, e como a minha abordagem ao piano tem um carácter muito “orgânico” (na qual manipulo todas as suas partes e uso vários objetos nesta manipulação), eu imaginei estas peças como sendo quase “organismos vivos”, como sendo uma espécie de “criaturas bizarras”. Assim, a sequência das 13 miniaturas assemelha-se à sequência de encontros de Alice (de Carroll) com criaturas estranhas e de comportamentos fora das lógicas estabelecidas. Estas “minhas criaturas” acabam por “ganhar corpo” no final da performance num momento em que o mobile desce sobre o toy piano e eu começo a tocar, manipulando-as.

Esta concepção da música pode aproximar-se do espaço “animalesco” referido anteriormente, mas aqui é o som que ganha contornos “animalescos” que se envolvem e se corporalizam no espaço de várias formas.

Quanto a Pedro Diniz Reis, já aqui foi referido o seu Livro dos AA como tendo sido um ponto de partida para ideias de Elogio da Desordem. Na concepção visual desta peça, trabalhou com Daniel Costa Neves nos vídeos de Elogio da Desordem (que explicarei de seguida) relacionando as texturas criadas por Daniel com a ideia de desconstrução ou de limiar da palavra no pensamento, usando procedimentos parecidos com os do Livro dos AA (sobre fragmentos de textos do livro Animalescos de Gonçalo M. Tavares que são ditos por Rosinda Costa e incorporados nos registos electroacústicos por mim lançados).

Por último, mas com a maior importância na concepção visual das peças como um todo, está o realizador e diretor de fotografia Daniel Costa Neves. Os métodos de trabalho foram diferentes nas duas peças: em through this looking glass, e já que a dimensão performativa era tão importante, decidi fazer um filme como registo, que foi realizado por Daniel. O filme, a preto e branco, tem uma
grande preocupação na qualidade estética da fotografia (já que Daniel é também diretor de fotografia) e dá uma outra dimensão onírica à música e ao gesto. Nas duas partes são estabelecidos espaços abstractos diferentes, nos quais nunca se percebe um “espaço real”, um “espaço concreto” com perspectiva, limites, etc. O espaço da primeira parte é um espaço a negro, com pouquíssima luz e reflexos de água – é um espaço “para dentro”, imaginado, com o poder da máxima condensação da intimidade que Bachelard atribui à miniatura. O espaço da segunda parte é o inverso, um branco quase ofuscante, que reflete o carácter “para fora” da própria música e performance. Este espaço continua a não ter limites, perspectiva, é uma espécie de espaço de “elevação”. Para o palco, Daniel trouxe as ideias do filme: a estética e o carácter mantiveram-se e alguns excertos do filme são projetados na primeira parte.

Quanto a Elogio da Desordem, não foi feito ainda nenhum filme (eventualmente será feito depois). Nesta obra o espaço cénico foi pensado como sendo um espaço textural: Daniel criou vídeos de texturas para cada peça, texturas estas que nunca se impõem à música, antes vivem com ela, alimentam-na. As texturas tornam a música quase “orgânica” e “palpável” e são projetadas em grande dimensão sobre mim e o piano com dupla projeção (de frente e de cima).

Duas características destas peças fazem com que estas estejam mais próximas de um ideal cinemático do que teatral: a primeira é a abordagem de Daniel e a sua estética totalmente ligada ao cinema, a segunda é o facto de cada peça começar e finalizar sem mim em palco. As peças começam sempre com som/música (registos lançados) e vídeo e criam um espaço onírico no qual entro e saio quase de forma velada. Procura-se uma ideia mais próxima do cinema do que do teatro de que o que está a acontecer ali não está realmente a acontecer ali: estará dentro de um écran que não tem fora ou fora de um écran que não tem dentro?

Coda

A formulação da ideia de música do pensamento procura nestas ideias uma linguagem própria mas, o seu desejo maior será o de poder ser uma apologia à imaginação, e à criação de imaginários próprios, subjetivos. Como diz ainda M. Tavares, a imaginação “é uma produtora de metros quadrados íntimos” e tem sido, a todos os níveis, o motor do desenvolvimento humano. Numa sociedade em que é dada à imaginação uma importância cada vez menor (tanto na educação e nos seus modelos adoptados, como na nossa vivência diária cada vez mais carregada de imagens e informação, e na qual temos cada vez menos espaço mental para pensar, imaginar e criar) é para mim uma necessidade fazer esta apologia do movimento criativo a todos os níveis.

Como dizia Pina Bausch: “Dance, dance otherwise we’re lost!”

Notas

[1] TTLG foi editado em DVD+CD pela blinker – Marke für Rezentes em 2011. ED foi editado em CD pela Shhpuma em 2013.

[2] Ambas as obras foram estreadas no Maria Matos Teatro Municipal (em 2011 e 2013 respectivamente), sendo que Elogio da Desordem esteve enquadrado no ciclo Teatro e Música Maria Matos/Gulbenkian.

[3] Tradução da autora: une sorte de langage unique à mi-chemin entre le geste et la pensée.

[4] Tradução da autora: un état d’avant le langage et qui peut choisir son langage: musique, gestes, mouvements, mots.

[5] Tradução da autora: un spectacle qui s’adresse à l’organisme entier.

[6] Tradução da autora: un spectacle qui ne craigne pas d’aller aussi loin qu’il faut dans l’exploration de notre sensibilité nerveuse, avec des rythmes, des sons, des mots, des résonances.

[7] Autora opta por deixar o original porque a sua tradução não consegue ser totalmente consistente ou exprimir a ideia de forma satisfatória: A dialéctica do dentro e do fora.

[8] idem: o dentro e o fora não devem ser entregues à sua oposição geométrica.

[9] Tradução da autora: l’être entre-ouvert.

[10] Tradução da autora: L’imagination, (...), est, avant tout, un type de mobilité spirituelle, le type de la mobilité spirituelle la plus grande, la plus vive, la plus vivante.

[11] Tradução da autora: “l’expérience même de l’ouverture, l’éxpérience de la nouveauté.”

[12] Tradução da autora: deliberately embroiled itself in the universe of signs, as we would now call them, and of ideas. The composer became like the poet and the painter, an “artist” whose ideals and whose world view appeared to disdain the artisan bric-a-brac of professional musicians);

[13] Collins, 2010; Conquergood, 2002; Santos, 2007, 2008.

[14] Tradução da autora: was totally unnecessary except as his interpretations make the music understandeble to an audience unfortunate enough not to be able to read it in print.

[15] Tradução da autora: this piece poses a specific challenge to the bodily, mental and emotional limits of the performer into another mental state.

[16] Tradução da autora: Eu tenho em consideração as limitações físicas dos performers. [...] Para que o artista possa dominar as questões técnicas do instrumento, tem de se dominar a si mesmo. Técnica não é apenas uma questão de músculos, mas também de nervos.

[17] Explicação da obra no site do autor: “O Livro dos AA = The Book of A’s (2011) is a work, produced within the framework of the exhibition "One dictionary, four alphabets and a decimal system" at Culturgest in Porto. The book lists all the words of a Portuguese dictionary, more precisely 96,715 words. The words were ordered alphabetically (A-Z) in four columns by page. All the letters from the list were deleted except the A's.” Mais informações sobre a obra em: http://www.pedrodinizreis.net/Work.aspx?ID=112#
[18] Caixas construídas por André Castro a partir do princípio/técnica de “circuit bending”.

[19] Tradução da autora: ils (les mots) arrêtent et paralysent la pensée au lieu d’en permettre, et d’en favoriser le développement.

[20] Tradução da autora: Il ne s’agit pas de supprimer la parole au théâtre mais de lui faire changer sa destination, et surtout réduire sa place.

[21] Tradução da autora: va penser la matière, rêver la matière, vivre dans la matière ou bien – (...) – matérialiser l’imaginaire.

[22] Construída por Luís José Martins.

[23] Construídas por André Castro.

[24] Técnica inventada pela pianista e compositora brasileira Michelle Agnès.

Bibliografia

Artaud, A. (1964), Le théâtre et son double, Paris: Éditions Gallimard;

Artaud, A. (2007), trad. e apresentação Aníbal Fernandes, Eu, Antonin Artaud, Lisboa: Assírio e Alvim;

Bachelard, G. (2014), La poétique de l’espace, Paris: Quadrige, Presses Universitaires de France;

Bachelard, G. (1943), L’Air et les Songes, Paris: Librairie José Corti;

Berio, L., Dalmonte, R. & Varga, A. B. (1985), Luciano Berio – Two Interviews with Rossana Dalmonte and Bálint András Varga, New York/London: Marion Boyars Publishers;

Cabañas, K. M. (ed.) (2012), Espectros de Artaud – Lenguaje y arte en los años cincuenta, Madrid: Museo Nacional Centro de Arte Reina Sofía;

Collins, H. (2010), Tacit & Explicit Knowledge, Chicago and London: The University of Chicago Press;

Conquergood, D. (Summer 2002), Performance Studies: Interventions and Radical Research, in TDR, Vol. 46, No. 2, The MIT Press;

Cook, N. & Pettengill, R. (2009), Music as Performance: New Perspectives Across the Disciplines, Ann Arbor: University of Illinois Press, forthcoming;

Halfyard, J. (2007), “Provoking acts: the Theatre of Berio’s Sequenzas”, in Halfyard, J. (ed.), Berio’s Sequenzas: Essays on Performance, Composition and Analysis, Aldershot: Ashgate;

Louppe, L. (2012), Poética da Dança contemporânea, Lisboa: Orfeu Negro;

Maierhofer-Lischka, M. (2015), Approaching the Liminal in the Performance of Iannis Xenakis' Instrumental Solo Works. Acedido em 28 de Julho em https://www.academia.edu/13206513/Approaching_The_Liminal_In_The_Performance_of_Iannis_Xenakis_Instrumental_Solo_Works;

Nancy, J. L. (2014), trad. Fernanda Bernardo, À escuta, Belo Horizonte: Edições Chão da Feira;

Salgado Correia, J., Carvalho, S. & Pestana, M. R. (eds) (2015), Performa 2015: Abstracts of the International Conference on Musical Performance, Aveiro: Universidade de Aveiro;

Santos, B. S. (Outubro 2007), “Para além do Pensamento Abissal: Das linhas globais a uma ecologia de saberes”, Lisboa: Revista Crítica de Ciências Sociais, 78;

Santos, B. V. e Meneses, M. P. (Março 2008), Epistemologias do Sul, in Lisboa: Revista Crítica de Ciências Sociais, 80;

Steiner, G. (2012), trad. Miguel Serras Pereira, A Poesia do Pensamento – Do Helenismo a Celan, Lisboa: Relógio d’Água Editores;

Tavares, G. M. (2013), Atlas do corpo e da imaginação – Teoria, fragmentos, e imagens, Alfragide: Editorial Caminho.
2.4 Poème électronique: Uma obra hipertextual

Leandro Pereira de Souza, Universidade Federal de Minas Gerais, Brasil

Abstract

This paper deals with the relationship between the concept of hypermedia / hypertext and musical creation. Therefore, some fundamental ideas such as nonlinearity, media heterogeneity and interactivity are discussed. Based on these concepts and ideas, a conceptual tool for musical analysis and creation is proposed, exploring the characteristics of a hypermedia / hypertext in the process of creation. The electroacoustic work “Poème électronique”, in which hypertext features were observed, is then analyzed.

The “Poème électronique” is an electroacoustic work by Edgard Varèse composed to run along with interaction between lights, images and the architecture of the Philips Pavilion, built for the Universal Exhibition of 1958 in Brussels. The work is a precursor of many current multimedia works.

Hypermedia is a development of hypertext. The different sign construction processes in a hypermedia operate with linked information, narrative interconnections, multiplicity, immediacy and non-linear structure. Thus the construction of the senses presents itself openly through user interaction. The “Poème électronique” work was developed from the idea of interaction between media and artistic languages, as in a hypermedia creation. The immersion experience offered to the public reflects the possibility of enjoying the work by means of a hypertext operation, in which each element of the work acts as a node that associates heterogeneous and / or homogeneous elements, allowing the public to build new associations, enjoying the work as a navigator who creates their own routes between the nodes of the work's structure.

Keywords: Music creation, Sound, Image, Hypertext, Interaction.

Introdução

A interação entre música e outras linguagens artísticas tem longa data em diversas culturas. Porém a música (e também as outras linguagens artísticas) tem em seu percurso histórico um período polarizado pela especialização, no qual tendeu a se isolar das outras linguagens. A arte contemporânea promoveu um rompimento no processo de especialização e potencializou formas híbridas de criação (Basbaum, 2008).

A música contemporânea buscou novas formas de conceber o som, viabilizando assim outras possibilidades de escuta e percepção. Aspectos texturais são mais explorados nessas poéticas, assim é potencializado um tipo de escuta que presentifica as sensações visuais e táteis do som. Dentre essas novas poéticas, a música eletroacústica, desenvolvida com base no processo de colagem e manipulação do dado sonoro, potencializou processos de criação, os quais concebem o som de forma similar ao objeto plástico, passível de modelagem. Desse modo influenciou novas formas de criação musical, aproximando e agregando procedimentos antes circunscritos às artes visuais. As relações sonoras das obras eletroacústicas são estabelecidas pelo valor plástico do dado sonoro, por sua textura, sua matéria, energia, ou seja, por agenciamentos sinestésicos (CAZNOK, 2008). A acusmática é um processo adotado pela música eletroacústica, no qual a fonte geradora e/ou o gesto físico gerador do som não estão presentes visualmente; geralmente o som é emitido por uma fonte (alto-falante) que não faz referência visual à sua fonte originária. A música acusmática potencializou a possibilidade da criação de imagens virtuais pela mente do ouvinte; desse modo os compositores têm sido estimulados a criar estratégias de referência visual, relacionadas ao potencial imagético da escuta eletroacústica. Caesar (2010) propõe que todo som gera ou é uma imagem e tal relação não foi anteriormente muito explorada devido à falta de suporte físico de fixação do som.

Durante minha pesquisa de mestrado busquei ampliar a reflexão acerca da interação entre música e outras linguagens em contextos não hierárquicos, além de apresentar uma proposta de uma ferramenta conceitual para análise e desenvolvimento de obras que almejam a interação entre música e outras linguagens de forma não hierarquizada, denominada de criação musical hipermidiática.

Para tanto, foram revistos conceitos importantes para o estabelecimento da proposta do modelo hipermidiático de criação, tais como hipertexto, mídia e interação entre mídias; foram analisadas obras que apresentam aspectos importantes para a reflexão acerca de criações hipermidiáticas. Nesse artigo será abordada uma das obras analisadas na pesquisa: Poème électronique.

1. Hipermídia e hipertextos

O termo hipermídia decorre do conceito de hipertexto formulado nos anos 60 por Theodor Nelson, um sistema complexo de interconexões de textos pertencentes a mídias diferentes, promovendo um complexo de produção significante não sequencial.

A hipermídia é um desenvolvimento do hipertexto, designando a narrativa com alto grau de interconexão, a informação vinculada (...) Pense na hipermídia como uma coletânea de mensagens elásticas que podem ser esticadas ou encolhidas de acordo com as ações do leitor. As idéias podem ser abertas ou analisadas com múltiplos níveis de detalhamento (Negroponte, 1995, p.66).

Metaforicamente, um sistema hipermídia é a nossa memória expandida através de mediações técnicas cuja carga de informações se atualiza e potencializa a cada segundo, formando uma tapeçaria sígnica de textos que dialogam com outros textos, remetem à outras realidades,
interagem com sons e imagens, formando um tecido imaterial que denominamos de hipermídia.

Se entendemos a consciência e a imaginação como processos de associação contínua e de reestruturação de imagens e conceitos selecionados pela memória, não é difícil perceber que a hipermídia resulta em uma representação mais adequada dessa mesma consciência ou dessa imaginação do que os códigos sequenciais restritivos das escrituras lineares (Machado, 1997, p.147).

A hipermídia tem um caráter não linear. Os diferentes processos de construção sígnica operam com informações vinculadas, interconexões de narrativas, multiplicidade, instantaneidade e estruturação não linear. Desse modo a construção dos sentidos se apresenta de forma aberta por meio da interação do usuário. Para Santaella (2003) a não linearidade é homóloga aos modos contemporâneos de viver.

Enfim, a não linearidade das mídias já está encarnada na própria maneira de viver. É certo, porém, que essa descontinuidade é levada aos extremos nas mídias [...] (Santaella, 2003, p. 97).

A interação é outro aspecto fundamental de um ambiente hipermídia. A interatividade está relacionada com a possibilidade de reapropriação, recombinação e personificação das mensagens recebidas, assim quanto maior é a presença dessas possibilidades maior é o grau de interatividade do sistema.

Enfim, o caráter interativo é elemento constitutivo do processo hipertextual. À medida que a hipermídia se corporifica na interface entre os nós da rede e as escolhas do leitor este se transforma em uma outra personagem. Dentro dessa perspectiva, minha tese é: o leitor é agora um construtor de labirintos (Leão, 1999, p. 41).

Mediante as mais diversas opções de leitura e interação, a hipermídia prevê a criação de roteiros e programas que sejam capazes de guiar o usuário no processo de navegação.

Esses roteiros servem para sinalizar algumas rotas de navegação do usuário, para que uma imersão compreensiva se dê. (Santaella, 2003, p. 95)

Segundo Machado (1997) a melhor metáfora para hipermídia é o labirinto, pois ele reproduz a estrutura intricada e descentrada da hipermídia. Três traços que definem um labirinto podem ser também os traços básicos de uma hipermídia. O primeiro é que o labirinto convida à exploração. Resolver um labirinto era percorrê-lo como um todo, era conhecê-lo por inteiro. O segundo traço é a exploração sem mapa: não tendo a visão global de um labirinto, o navegante precisa fazer cálculos locais, de curto alcance, para decidir por onde trilhar. O terceiro traço é a inteligência astuciosa na qual o navegante avança por meio da experiência, aprendendo com os erros e constituindo os roteiros de navegação.

Instantaneidade é uma característica da hipermídia: devido ao caráter digital da informação as trocas simbólicas ocorrem em tempo real, gerando permanente processo de construção de novas metáforas e sentidos.

1.2 Criação musical hipermidiática

Vivemos em um mundo repleto de metáforas, criamos e usamos metáforas diariamente. Trata-se de uma tendência humana que tem origem em nossos processos cognitivos, pois criar uma metáfora corresponde ao processo de associar elementos dissimilares, criar relações, dessa forma buscamos sentidos para o mundo que nos cerca.

As metáforas formam grande parte do sistema conceitual e afetam a maneira como se dá o pensamento, interferem na forma como o ser humano percebe as coisas no mundo e como age diante disto. O pensamento forma a base de novas combinações metafóricas tanto para a questão poética como para a ação comum do cotidiano. Desta forma, é necessário que se perceba a responsabilidade de cada signo colocado no mundo. (Santana, 2006, p.169)

Essa forma de estruturar o pensamento sugere uma grande rede metafórica na qual várias informações são associadas.

Tecnologias são as ferramentas materiais ou conceituais utilizadas na realização de alguma tarefa. Segundo Lévy (1993) as tecnologias podem exteriorizar e reificar uma função cognitiva, uma atividade mental; essas tecnologias são denominadas por ele de tecnologias intelectuais. Desse modo podemos observar que a hipermídia está relacionada com nossa forma de significar, correspondendo ao processo cognitivo da construção de metáforas.

Tecnicamente, um hipertexto é um conjunto de nós ligados por conexões. Os nós podem ser palavras, páginas, imagens, gráficos ou partes de gráficos, sequências sonoras, documentos complexos que podem eles mesmos ser hipertextos (Lévy, 1993, p.33).

Podemos investigar alguns processos de criação musical com base no conceito de hipermídia. Como já mencionado anteriormente, um ambiente hipermídia pode ser considerado um hipertexto. Desse modo vamos apresentar algumas características de um hipertexto e observar suas possíveis implicações no processo de criação musical. Lévy (1993) apresenta alguns princípios fundamentais de um hipertexto:

1) Princípio de metamorfose: a rede hipertextual está em constante renegociação e construção.

2) Princípio de heterogeneidade: os nós e as conexões de uma rede hipertextual são heterogêneos. Na memória são encontradas imagens, sons, palavras, diversas sensações, modelos, etc. e as conexões serão lógicas, afetivas, etc.

3) Princípio de multiplicidade e de encaixe das escalas: qualquer nó ou conexão, quando analisado, pode revelar-se como sendo composto por toda uma rede.

4) Princípio de exterioridade: o crescimento e sua diminuição, sua composição e sua recomposição permanente dependem de um exterior indeterminado: adição de novos elementos, conexões com outras redes.

5) Princípio da topologia: a rede é o espaço, tudo funciona por proximidade.
6) Princípio da mobilidade de centros: a rede não tem centro permanente.

O princípio de metamorfose pode ser verificado quando temos um processo de criação coletivo ou quando se busca a interação entre diferentes linguagens. Os participantes estão em constante renegociação e construção, muitas conexões podem ser estabelecidas com os sons.

Os sons podem se conectar a emoções, subjetividades, palavras, diversas sensações, imagens, cores, movimentos etc, de forma lógica ou intuitiva. A heterogeneidade pode se expressar até mesmo entre elementos de uma mesma mídia (p. ex., gêneros musicais distintos).

O princípio de multiplicidade e de encaixe das escalas pode ser observado em processos criativos com interação de diferentes linguagens artísticas que, quando analisados, podem revelar uma rede de metáforas, na qual qualquer nó ou conexão pode ser composto por toda uma rede hipertextual.

O princípio de exterioridade pode ser compreendido através da ligação que cada agente da criação tem com outras redes metafóricas e conceituais, que sempre estão presentes em sua interação com a obra em desenvolvimento. Também são possíveis outras formas (físicas) de conexão tais como a Web, redes celulares etc.

Podemos associar o princípio da topologia aos processos cronológicos (pré-definidos ou não) de uma criação musical: se a topologia das redes oferece diferentes possibilidades de percurso, uma versão específica de uma criação musical sempre realiza um desses percursos, criando afinidades temporais entre nós que em outras versões talvez não seriam tão evidentes.

Durante performances e obras com interação entre diferentes linguagens artísticas podemos observar um permanente deslocamento do centro da rede metafórica, o que converge com o princípio da mobilidade de centros e não linearidade de uma hipermídia.

A interação é um aspecto importante de um ambiente hipermídia que também pode ser observado em algumas criações musicais. Em uma criação hipermidiática a interação pode ocorrer em vários níveis: interação entre mídias diferentes e interação entre os participantes mediante a conexão entre os dispositivos utilizados na performance. Devemos atentar que também podem ocorrer interações mesmo sem a conexão direta dos dispositivos (troca de dados), mas mediante diversos tipos de comunicação estabelecidas entre os participantes, já que os corpos podem ser considerados mídias comunicacionais em constante troca com o ambiente. (Santana, 2006)

Quando observamos ou participamos de processos de criação musical com as características apresentadas acima, podemos denominá-los de criação musical hipermidiática. A criação coletiva potencializa esse tipo de criação, pois as possibilidades de interação serão mais amplamente exploradas, podendo integrar participantes ligados a diferentes linguagens artísticas e mídias. Nesse contexto é possível observar o processo de estabelecimento de uma inteligência coletiva. A criação pode se tornar um espaço de interações entre conhecimentos e conhecedores, no qual a experiência de cada participante é valorizada.

Para que ocorra a criação hipermidiática de forma efetiva é preciso desenvolver os nós da rede. Esses garantem que as conexões fundamentais da rede sejam estabelecidas. A estruturação desse processo pode ocorrer a partir de uma poética já estabelecida ou pela experimentação.

2. Dificuldades na documentação e análise

Encontramos algumas dificuldades de análise quando estamos pesquisando obras multimídias devido à falta de documentação, dentre outros fatores. "A preservação e manutenção das artes baseadas em tecnologia, tais como instalações multimídias, música eletroacústica, ou multimídia, é um desafio atual" (Lombardo et al, 2009, p.25).

Para Lombardo (2009) os principais problemas de documentação e preservação podem ser caracterizados em três níveis: no nível institucional, nível conceitual e nível técnico.

Problemas surgem em nível institucional, pois geralmente vários domínios disciplinares estão envolvidos nessas obras, então a responsabilidade pela documentação e preservação da obra fica difusa. No nível conceitual, ocorre algo semelhante devido à fragmentação do processo de criação entre vários agentes humanos e tecnológicos, múltiplos momentos de criação, realização e performance, além de vários conceitos teóricos. No nível técnico, os problemas são devido à rápida evolução da tecnologia que dificulta o acesso a obras concebidas apenas algumas décadas atrás. Lombardo (2009) apresenta o Poème Électronique, de E. Varèse, como um exemplo dessas dificuldades de análise e preservação desses tipos de obra.

Com a sua complexidade artística e tecnológica, o Poème Électronique pode ser considerado como um exemplo para o problema da preservação, porque o pavilhão foi desmontado após o fim da feira mundial, a posteridade foi confrontada com uma documentação fragmentada e com os componentes individuais da instalação (Lombardo et al, 2009, p. 25).

Geralmente essas obras são desenvolvidas mediante um processo de interação de todos os elementos possíveis de participar da obra, tais como som, imagens, arquitetura, espaço, público entre outros. Segundo Campesato (2009) muitas dessas obras podem ser compreendidas como arte sonora.

Por outro lado, na arte sonora o espaço real em que a obra se apresenta é parte da própria obra. E não são apenas os elementos “acústicos” do espaço que entram em jogo, mas sim a totalidade de sentidos que o espaço gera: dimensão, cor, textura, imagem, superfície, forma, projeção, etc. Cada um desses elementos pode adquirir um significado especial dentro da obra. Há também a possibilidade de elaboração de um espaço representacional, em que a idéia de “lugar”, de ambiente, pode formar parte do significado da obra. Em muitas obras são os elementos contextuais (o público, a iluminação, objetos) constituintes do espaço que vão ajudar a montar a obra (Campesato, 2006, p. 3).
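Os princípios hipertextuais de Lévy discutidos nesta secção podem ser esboçados, a título meramente ilustrativo, como uma pequena estrutura de grafo. O esboço abaixo é uma construção livre, apenas para ilustração (os nomes `No`, `ligar` e os conteúdos de exemplo são hipotéticos, não provêm das obras analisadas): nós heterogêneos ligados por conexões, um nó que pode conter ele próprio uma rede (multiplicidade e encaixe das escalas) e ausência de centro permanente.

```python
# Esboço ilustrativo (nomes hipotéticos): um hipertexto como rede de nós
# heterogêneos ligados por conexões, em que um nó pode conter outra rede.

class No:
    def __init__(self, conteudo, tipo, subrede=None):
        self.conteudo = conteudo  # p. ex. um som, uma imagem, uma palavra
        self.tipo = tipo          # princípio de heterogeneidade dos nós
        self.subrede = subrede    # princípio de multiplicidade/encaixe
        self.conexoes = []        # conexões lógicas, afetivas, etc.

    def ligar(self, outro, relacao):
        # conexões bidirecionais: a rede não tem centro permanente
        self.conexoes.append((relacao, outro))
        outro.conexoes.append((relacao, self))

# pequena rede: um som ligado a uma imagem e a uma palavra
som = No("sino", "som")
imagem = No("pavilhão", "imagem")
palavra = No("espaço", "palavra")
som.ligar(imagem, "evocação visual")
palavra.ligar(som, "associação")

# um nó pode revelar-se, ele mesmo, toda uma rede (encaixe das escalas)
rede_interna = [No("ataque", "som"), No("ressonância", "som")]
sino_detalhado = No("sino (detalhe)", "som", subrede=rede_interna)

# "navegar" é escolher um percurso entre nós, como o leitor-construtor
# de labirintos referido acima
percurso = [som.conteudo] + [n.conteudo for _, n in som.conexoes]
print(percurso)  # ['sino', 'pavilhão', 'espaço']
```

Numa criação hipermidiática, cada versão da obra realizaria um percurso particular entre esses nós, no espírito do princípio da topologia.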
Uma reflexão importante acerca da análise dessas obras


decorre do fato de que sua documentação tende a ser
fragmentada devido aos motivos já apresentados. Isto
dificulta uma visão geral da obra, fundamental para uma
boa análise, já que mesmo uma documentação por vídeo
não possibilita uma visão clara do processo de interação
da obra com o espaço.

2.1 Poème électronique


Poème électronique foi uma obra eletroacústica composta
Edgard Varèse para ser executada juntamente com
interação de luzes, imagens e arquitetura do Pavilhão
Philips, construído para a exposição universal de 1958,
em Bruxelas. A proposta de Le Corbusier, que foi o
idealizador e supervisor do Pavilhão, era de realizar uma
síntese de som, luz, cor, imagem e ritmo no seu interior.
Iannis Xenakis projetou o pavilhão baseando-se na
geometria descritiva, tendo concebido as paredes na
forma de parabolóides hiperbólicas e seu interior no
formato semelhante a um estômago.
Figura 2 - Tabela com os temas, respectivas imagens e
instrumentação. Fonte: Lombordo et al, 2009, p.28.

Poème électronique was played back over three channels to create the illusion of three sound sources moving inside the pavilion. To this end, the pavilion contained around 400 loudspeakers spread throughout the building, fed by 20 different amplifiers that were controlled by a magnetic tape, which also controlled the projections. Several trajectories were designed for the sounds to follow during the presentation.

Figure 1 - Hyperbolic paraboloid walls and the stomach-shaped interior. Source: Lombardo et al., 2009, p. 29.

Varèse's composition was built from various sound sources, such as machine noises, the sound of airplanes, bells, electronic sounds, singing, piano and organ, and it also employed processes of pitch transformation, filtering, and modification of the attack and decay of these sounds. According to Varèse, it was divided into seven thematic sections: Genesis, Spirit and Matter, From Darkness to Dawn, Man-Made Gods, How Time Moulds Civilizations, Harmony, and To All Mankind. The form of the piece derives from a concept of attraction and repulsion between patterns and groups of sounds, referring to a spatial conception of the work's structure. According to Lombardo et al. (2009), spatiality in Varèse's work manifests itself in two ways: through sounds conveying location, and/or through reverberation providing cues for a symbolization of space.

Figure 3 - Several trajectories designed for the work. Source: Lombardo et al., 2009, p. 37.

For Lombardo et al. (2009), Poème électronique is the precursor of many of today's multimedia works.

Figure 4 - Interior of the Pavilion. Source: Lombardo et al., 2009, p. 31.

The Pavilion can be divided into three main environments, as can be seen in figure 4: the main surface (centre, the dark area in the figure), below the horizon (the grey area) and the higher accents (the two areas of light at the top of the figure). Inside, two illuminated objects were present: a female mannequin and a geometric sculpture built from metal tubes.

On the walls there were coloured spots, at times filled with photographs projected onto the screens. For example, in Figure 5 the main screen displays a photograph of the head of Michelangelo's statue Day, while the two screens on the left each contain a photograph of a baby. Other elements were also projected: a red circle (the "sun" in the scheme above, also visible in Figure 5), a white circle (the "moon"), coloured patches ("clouds") and shining bulbs ("stars").

Figure 5 - Screens and image projections. Source: Lombardo et al., 2009, p. 31.

The internal structure, built with sound-absorbing materials, allowed a complex exploration of sound spatialization; this characteristic, combined with an intricate synchronization with projections of photographs, films and lights, provided an experience of immersion to the audience entering the pavilion.

The screens deformed the still black-and-white images projected by the film, and the centrality of the projected content was obscured by four other visual effects. The result of all these four effects was a visual disorientation, and the audience, immersed in darkness, could not really recognize the internal structure of the pavilion. (Lombardo et al., 2009, p. 8)

For Lombardo et al. (2009), the Pavilion was a realization of the Wagnerian hypothesis of a Gesamtkunstwerk in the modern era, in which the scenic space is transformed into a part of the work of art.

2.3 Poème électronique and hypertext/hypermedia

This work presents some characteristics that can be related to the proposal of hypermedia creation. Le Corbusier conceived the work as a synthesis of light, colour, image, rhythm and sound. The work was thus developed from the idea of interaction between media and artistic languages, as in a hypermedia creation. The experience of immersion offered to the audience reflects the possibility of enjoying the work through a hypertextual operation, in which each element of the work functions as a node associating heterogeneous and/or homogeneous elements; these nodes are also open to the construction of new associations by the audience, who experience the work as navigators creating their own routes through the nodes that structure it (the principles of metamorphosis and of mobility of centres).

Le Corbusier presented the architectural design of the pavilion to Varèse, who from it began the process of composing the music. In this way it is possible to observe the associative and metaphorical process of the composition, which starts from the architectural relations, from the form and the space where the music would be performed (the principle of heterogeneity). According to Lombardo et al. (2009), a paradoxical relation can be perceived throughout the work: images of masks and primitive objects set against the rationalism of the architecture; the complex geometry of the external surface in relation to the interior, which resembles a dark cave; and, in the musical composition, synthetic sounds set against the primitivism of vocal sounds without verbal meaning. We can thus infer that the structure and the nodes of the creation developed from this paradoxical concept present in the architecture of the pavilion. In the development of this node, other associative networks were developed, as can be noted in the way the projections, lights, objects and sonorities are structured. This structure was established by means of themes, as presented in figure 2. We can therefore recognise a creation developed through the hypertextual operation, associating architecture, images, lights and sounds by means of nodes such as concepts (the paradox) and themes (Genesis, Spirit and Matter, From Darkness to Dawn, Man-Made Gods, How Time Moulds Civilizations, Harmony, and To All Mankind). The process of collective creation is also noticeable in the work: Xenakis was responsible for the conception of the building and for a short musical interlude ("Concret PH"), E. Varèse for the main musical composition, the film-maker P. Agostini for shooting and editing the material to be projected, and Le Corbusier acted as supervisor.

A fifteen-channel tape controlled and synchronized the projections and the spatialization of the sound, thus providing the process of interaction between sound, images and space. We can highlight the possibility of

developing a hypermedia work in analogue contexts, even without the use of these technologies.

3. Final considerations

A hypermedia is a reification of a cognitive process: our memory and the metaphorical process present in the signification of things (Negroponte, 1995). In this way we can observe characteristics of a hypermedia in works such as Poème électronique. Different processes of sign construction are present in the work. Through a hypertextual operation there occur associations of information, interconnections of narratives, multiplicity and non-linear structuring.

The work presents a nodal structure with hypertextual properties. The nodes are fixed from concepts and themes: the paradox present in the architecture (cave versus paraboloids, primitive versus modern) and the themes (see table 1). Some principles of a hypertext can be related to the work: the interactions between different media relate to the principles of metamorphosis and heterogeneity; the possibility of immersion and of diverse associations made by the audience can be related to the principles of mobility of centres and multiplicity; the principle of topology relates to the structure of synchronization between sound, image and space; and the high associative potential of the work provides a process of metaphorical association in a network external to the work, namely the personal experiences present in the audience's memory. This is the principle of exteriority.

Through these analyses it is possible to affirm that the proposal of a strategic scheme for hypermedia musical creation can be a useful and viable resource for the analysis, and also for the development, of works that aim at a non-hierarchical interaction between media and artistic languages. With the development of digital technologies, the processes of interaction between music and other artistic languages have expanded. The use of motion-capture technology, image analysis and so on is increasingly common, along with a growing readiness of people linked to art and technology to work collectively. It is thus possible to see hypermedia creation processes more and more often in works of this context.

Bibliography

Basbaum, Sérgio (2008). Percepção Digital: sinestesia, hiperestesia, infosensações. http://www.rua.ufscar.br/site/?p=662 (accessed 9 January 2014).

Caesar, Rodolfo (2010). O som como imagem. In: IX Seminário Música, Ciência e Tecnologia. São Paulo, pp. 255-262.

Campesato, Lilian (2006). Som, espaço e tempo na arte sonora. In: XVI Congresso da Associação Nacional de Pesquisa e Pós-graduação em Música. Brasília.

Leão, Lúcia (1999). O labirinto da Hipermídia: arquitetura e navegação no ciberespaço. São Paulo: Iluminuras.

Lévy, Pierre (1993). Tecnologias da inteligência. São Paulo: Editora 34.

Lévy, Pierre (2000). Cibercultura. São Paulo: Editora 34.

Lévy, Pierre & Authier, Michel (1995). As árvores do conhecimento. São Paulo: Editora Escuta.

Lombardo, Vincenzo; Valle, Andrea; Fitch, John; Tazelaar, Kees; Weinzierl, Stefan; Borczyk, Wojciech (2009). A Virtual-Reality Reconstruction of Poème Électronique Based on Philological Research. Computer Music Journal, 33(2), pp. 24-47.

Machado, Arlindo (1997). Hipermídia: o labirinto como metáfora. In: Domingues, Diana (ed.), A arte no século XXI. São Paulo: Editora Unesp.

Negroponte, Nicholas (1995). A Vida Digital. São Paulo: Companhia das Letras.

Santaella, Lucia (2003). Culturas e artes do pós-humano. São Paulo: Editora Paulus.

Santana, Ivani (2006). Dança na cultura digital. Salvador: EDUFBA. http://www.books.scielo.org (accessed 10 October 2013).

2.5 Particle system

Roberto Zanata, Conservatory of Ferrara, Italy; Associazione Spaziomusica, Italy

Abstract
In this paper we examine the project of a particle system and the approach to the NUI (Natural User Interface) applied to a multimedia interactive installation made with Processing [1] and SuperCollider [2]. SuperCollider is a programming language for real-time audio synthesis and algorithmic composition. Processing is an open-source programming language and environment for people who want to create images, animations and interactions. It is designed to be used by artists, so it does not require deep programming knowledge, and it makes the practical implementation of ideas rather simple and immediate. It is very close to the Java language, but implementing interaction and 2D/3D graphics or the animation of a particle system is much easier. A particle system, as defined by William Reeves in 1983, is a collection of many minute particles that together represent a fuzzy object. Over a period of time, particles are generated into a system, move and change from within the system, and die from the system. A particle system usually behaves like a highly chaotic system, similar to a natural phenomenon, or simulates processes caused by chemical reactions. A ParticleSystem object manages a variable-size (ArrayList) list of particles.

Keywords: Multimedia, Particle, Processing, Supercollider, Audiovideo

Introduction

SuperCollider is a programming language for real-time audio synthesis and algorithmic composition. The language interpreter runs in a cross-platform IDE (OS X/Linux/Windows) and communicates via Open Sound Control with one or more synthesis servers. The SuperCollider synthesis server runs in a separate process, or even on a separate machine, so it is ideal for real-time networked music.

Processing is an open-source programming language and integrated development environment (IDE) built for the electronic arts, new media art and visual design communities, with the purpose of teaching the fundamentals of computer programming in a visual context and of serving as the foundation for electronic sketchbooks.

The NUI consists of a Kinect connected to the software of the installation (developed in Processing and SuperCollider) through a proxy application (OSCeleton) and the OpenNI driver, which convert the hand gestures of the users into 3D coordinates. The deformation and displacement of the particle system determines the transformation of the sounds, produced by different synthesis techniques, and this particular interactive gesture leads to unusual modes of performance.

Particle system

A particle system is a collection of point masses (a collection of a number of individual elements, the particles) that obeys some physical laws (e.g. gravity or spring behaviours). Particle systems can be used to simulate all sorts of physical phenomena:

– Smoke
– Snow
– Fireworks
– Hair
– Cloth
– Snakes
– Fish

This section looks at implementation strategies for coding a particle system. How do we organize our code? Where do we store information related to individual particles versus information related to the system as a whole? The examples we will look at focus on managing the data associated with a particle system. We use simple shapes for the particles and apply only the most basic behaviours (such as gravity). However, by building on this framework in more interesting ways, changing how the particles are rendered and how their behaviours are computed, we can achieve a variety of effects.

We have defined a particle system as a collection of independent objects, often represented by a simple shape or dot. First, we are going to deal with a flexible quantity of elements: sometimes we will have zero particles, sometimes one, sometimes ten and sometimes ten thousand. Then we are going to take a more sophisticated object-oriented approach.

Typical particle systems involve something called an emitter. The emitter is the source of the particles and controls their initial settings: location, velocity, etc. An emitter might emit a single burst of particles, a continuous stream of particles, or both. The point is that, in a typical implementation like this, a particle is born at the emitter but does not live forever. If it were to live forever, our Processing sketch would eventually grind to a halt as the number of particles grew to an unwieldy number over time. As new particles are born, we need old particles to die. This creates the illusion of an infinite stream of particles, while the performance of our program does not suffer.

Without forces, the movement of the particles would be a straight line conditioned only by their initial positions and velocities. So simulating the usual forces, such as gravity or air resistance, is a necessity in order to obtain realistic effects. One of the most important aspects of particle systems is therefore that particles are subjected to forces. Indeed, it is these forces alone that are responsible for the particles' movements.
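The emitter-and-lifespan scheme described above can be sketched as follows. This is a minimal, library-free Java sketch, not the installation's actual Processing code; all class names, constants and the one-particle-per-frame emission policy are illustrative choices:

```java
import java.util.ArrayList;
import java.util.Iterator;

// One point mass: position, velocity, and a finite lifespan in frames.
class Particle {
    double x, y, vx, vy;
    int lifespan;

    Particle(double x, double y, double vx, double vy, int lifespan) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
        this.lifespan = lifespan;
    }

    void update(double gx, double gy) {
        vx += gx; vy += gy;   // a force (here: gravity) changes the velocity
        x += vx; y += vy;     // the velocity changes the position
        lifespan--;           // the particle ages toward death
    }

    boolean isDead() { return lifespan <= 0; }
}

// The system manages a variable-size list of particles and acts as the emitter.
class ParticleSystem {
    final ArrayList<Particle> particles = new ArrayList<>();
    final double emitterX, emitterY;

    ParticleSystem(double x, double y) { emitterX = x; emitterY = y; }

    // Update and cull the existing particles, then emit one new particle.
    void step() {
        for (Iterator<Particle> it = particles.iterator(); it.hasNext(); ) {
            Particle p = it.next();
            p.update(0.0, 0.1);           // constant downward gravity
            if (p.isDead()) it.remove();  // death keeps the list bounded
        }
        particles.add(new Particle(emitterX, emitterY, 1.0, -2.0, 50));
    }
}
```

After enough frames the population stabilizes: with one particle emitted per frame and a 50-frame lifespan, the list settles at 50 live particles no matter how long the sketch runs, which is exactly the "new particles are born, old particles die" illusion of an endless stream.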

Image 1 – Forces.

There can be various types of forces, some acting independently on each particle and some others using other particles' attributes. There are also special types of forces, like the bounce on the surface of an object. Every particle has a life cycle: it can be created and it can die.

Particle System Representation

Today, particle systems are intensively used in many domains of industry, and they are still an active domain of research. Some domains of application are scientific simulation, scientific visualization, movies and video games. Apart from special effects, particle systems are also used to simulate complex interactions, like the forces that make a textile move in a natural way, or to model the physical properties of deformable objects.

• A particle system controls a set of particles that act autonomously but share some common attributes.
• A particle system is dynamic: particles change form and move with the passage of time.
• A particle system is not deterministic: its shape and form are not completely specified.
• The shape of a particle system changes over time.
• Particles are generated using processes with an element of randomness.
• A particle's position is found by simply adding its velocity vector to its position vector. This can be modified by forces such as gravity.
• The amount of blur is related to a particle's trail life. As a particle moves through space, it may leave a visible trail behind.
• Each particle has two attributes dealing with its length of existence: age and lifetime.
• When the particle's age matches its lifetime, it is destroyed.
• Particles might be generated at random (clouds), in a constant stream (waterfall) or according to a script (fireworks).
• A script typically refers to neighbouring particles and the environment.

The particle system consists of a system of generation of particles (or granules) in which the particles come into contact with each other, in a process of accumulation and dispersion, within a simulated field of magnetic forces. This manifests itself, visually, in a series of models of particle shapes, resulting from the use of the noise-generating sequence, such as spirals or more or less circular forms travelling through space, as well as explosions.

Image 2 – particles.

Animation of a particle system

I will start from some animation techniques, "fishing" from some external libraries for Processing.

First of all, the library developed by Jonathan Feinberg called PeasyCam [3] which, as the name suggests, provides a sort of camera wizard, simple but with effective effects of rotation, zoom (in and out), turbulence of the generated image, and panning from one axis to another. This library provided an immediate solution for moving through a 3D space in an intuitive and stable manner. PeasyCam is meant to provide intuitive mouse-driven camera movement, not programmatic control (there are a few excellent libraries for non-interactive camera control).

Another good library is the one developed by Karsten Schmidt, whose collection goes under the general name of Toxiclibs [4] and can be exploited in areas such as animation and graphic design. It allows various possibilities of manipulation of, and interaction with, the image, such as colour (toxi.color), geometrical aspects (toxi.geom) and even audio aspects (toxi.audio).

The parameters affecting the generation and animation of the image can be defined, in the Processing language, with the words dofRatio, neighborhood, speed, viscosity, spreads, independence, rebirth, rebirthRadius, turbulence, averageRebirth and cameraRate. For the control of these parameters we can use the external library ControlP5 by Andreas Schlegel, which allows us to build a GUI with sliders, buttons, toggles and more.

ControlP5 [5] is a GUI and controller library for Processing that can be used in authoring, application and applet mode. Controllers are easily added to a Processing sketch; they can be arranged in separate control windows and organized into tabs or groups.

Image 3 – GUI.
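The accumulation-and-dispersion behaviour inside a simulated force field can be illustrated with a pairwise force rule: particles attract at a distance and repel when too close, a simple stand-in for the "magnetic" field described above. This is a hedged sketch in plain Java, not the installation's code; the force law, constants and names are invented for the example:

```java
// Pairwise force: repulsive at short range, attractive at long range,
// so particles accumulate into clumps without collapsing to a point.
class FieldSim {
    double[] x, y, vx, vy;
    final int n;

    FieldSim(double[] x0, double[] y0) {
        n = x0.length;
        x = x0.clone(); y = y0.clone();
        vx = new double[n]; vy = new double[n];
    }

    void step(double dt) {
        for (int i = 0; i < n; i++) {
            double fx = 0, fy = 0;
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dx = x[j] - x[i], dy = y[j] - y[i];
                double d = Math.sqrt(dx * dx + dy * dy) + 1e-9;
                // attraction ~ 1/d, repulsion ~ 1/d^2: the net sign flips at d = 1
                double f = 1.0 / d - 1.0 / (d * d);
                fx += f * dx / d;
                fy += f * dy / d;
            }
            vx[i] += fx * dt; vy[i] += fy * dt;  // forces change velocities...
        }
        for (int i = 0; i < n; i++) { x[i] += vx[i] * dt; y[i] += vy[i] * dt; }  // ...then velocities change positions
    }
}
```

With this rule, two particles placed far apart drift together (accumulation) and two placed very close push apart (dispersion), settling around the equilibrium distance where the two terms cancel.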

Image 4 – animation of particles.

Sounds and particles

SuperCollider has over 250 unit generators (UGens). The UGens I used to generate an audio particle system are mainly noise generators (micro-events). The organisation of such micro-events into discrete structures allows the formation of sonic clouds. Such structures may sound similar to those formed through the process of granular synthesis, the segmentation of audio material into micro-level grains. The organisation of such particles into micro-event formations may be deemed sound objects or meso-level structures. It is clear, therefore, that strong parallels exist between the formation of audio particles and visual particle systems, and that some compositional time scales exist within which both audio and visual structures may be organised.

Coding noise particles in SuperCollider

If we take a look at the SuperCollider code, we can immediately notice that it is necessary to take the precaution of limiting the amplitude of the signal with a Limiter (since we are dealing with noise generators, it is important to control the peak output amplitude level to which the input is normalized). After, and only after, it is possible to apply to the synth (in other words, our instrument) a general reverb (the UGen GVerb), with a relative dosage of some of its parameters (roomsize, damping, drylevel). The code of the instrument can basically be divided into two parts. The first part is based upon a band-pass filter (BPF) which filters a white noise (WhiteNoise), to which is added an oscillator with a band-limited impulse called Blip (Band Limited ImPulse generator, in which all harmonics have equal amplitude), which has the ability to generate a controllable number of harmonic components of a fundamental frequency at equal amplitude.

Image 5 – code.

The second part can be created starting from a low-pass filter (LPF) connected to a bank of resonators called DynKlank (a bank of sine oscillators; unlike Klang, the parameters in specificationsArrayRef can be changed after it has been started), whose parameters are also organized into arrays. The resonator has the potential to change its frequencies even after it has been initialized. It is then possible to use Dust (which generates random impulses from 0 to +1; density is the average number of impulses per second) as a trigger, and a pulse oscillation (Impulse) to simulate a sort of dissemination of noise particles.

Image 6 – code.

Conclusion

The main point of this work is that no real equivalent approach seems to exist, so it is hard to compare it with other similar works done using other software. Of course, many design or rendering packages include a particle engine in order to model special effects like smoke or dust. These particle engines are most of the time very powerful and complete, but they are not designed to run in real time or to be reactive to the user's interactions.

Notes

[1] http://processing.org/

[2] http://supercollider.sourceforge.net/

[3] http://mrfeinberg.com/peasycam/

[4] http://toxiclibs.org/

[5] http://www.sojamo.de/libraries/controlP5/

Bibliography

Greenberg, I. (2007). Processing: Creative Coding and Computational Art. New York: Apress.

Greenberg, I., Xu, D., Kumar, D. (2013). Processing: Creative Coding and Generative Art in Processing 2. New York: Apress.

Reeves, W. (1983). Particle Systems—A Technique for Modeling a Class of Fuzzy Objects. ACM Transactions on Graphics.

Wilson, S., Cottle, D. and Collins, N. (eds) (2011). The SuperCollider Book. Cambridge, MA: MIT Press.
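The role Dust plays here, random unipolar impulses at a given average density, can be approximated outside SuperCollider. Below is a hedged Java sketch of that behaviour, not the actual SynthDef; the class name and the per-sample Bernoulli sampling strategy are invented for the example:

```java
import java.util.Random;

// Dust-like trigger: at each sample, fire an impulse with probability
// density / sampleRate, giving on average `density` impulses per second.
class DustTrigger {
    final Random rng;
    final double probPerSample;

    DustTrigger(double density, double sampleRate, long seed) {
        rng = new Random(seed);
        probPerSample = density / sampleRate;
    }

    // Returns a random impulse amplitude, or 0.0 when silent.
    double next() {
        return (rng.nextDouble() < probPerSample) ? rng.nextDouble() : 0.0;
    }
}
```

At a density of 50 and a 44.1 kHz sample rate, ten seconds of output yield on the order of 500 impulses, irregularly spaced; each impulse can then gate a resonator or filter bank, much as Dust triggers the noise particles in the piece.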

III. Composição e ruído | composition and noise

3.1 Concealed Rhythmic Interactions Between Live-Electronics and Instrument in "Spacification"

Diogo Alves Batista, Escola de Música Nossa Senhora do Cabo, Portugal

Abstract

In a recently composed work I used periodic variations of amplitude as the main rhythmic texture in the first part of "Spacification". This is a result of the proximity of frequencies between the vibraphone and the oscillators (live-electronics), of the tuning pitch of each timpano, of the vibraphone's motor speed and of the electric guitar's wah-wah pedal. The complete instrumentation features a French horn with one piano and three timpani as resonators, an electric guitar with distortion and wah-wah pedal, a tenor saxophone, a vibraphone, a narrator and live-electronics programmed in Pure Data.

The first section features no regularly occurring rhythmic figures written for any instrument. In fact, the length of the figures is so long that rhythm is completely imperceptible. The rhythm which is perceived by the audience is a result of the co-interactions between the different sound sources. This idea was a result of studying the usage of "beats" by many composers, such as Warren Burt in his "Beat Generation in the California Coastal Ranges" (1998-1999), Giacinto Scelsi in his "L'âme ailée / L'âme ouverte" (1973) and Iannis Xenakis in his "Nomos Alpha" (1964). This paper presents a detailed comparative study of "beats" not as an acoustic phenomenon but as part of the whole rhythmic structure of a work.

Keywords: Beats; Microtonality; Live-Electronics; Rhythmic Perception; Spacification.

Introduction

Through the 20th century a desire to explore new conceptions of rhythm in one's work has been manifested by many composers (Burkholder, Grout & Palisca, 2014, p. 945 and Morris, 1996, p. 30-32). In an initial stage, this desire was fulfilled by the usage of less common tuplets. Composers started changing the tempo in their works very frequently, or using two or more different metronomic tempos simultaneously. Another stage of this pursuit was the development of complex metronomic tempo scales (used by Karlheinz Stockhausen), which feature the already mentioned frequent changes of tempo. Stockhausen elaborated a "chromatic" tempo scale which is described in his 1956 article "…wie die Zeit vergeht…". This scale places Stockhausen in a serial environment which now includes the serial composition of rhythm (Kohl, 1983, p. 147-185).

In 1974 Stockhausen composed Inori, in which the "chromatic" tempo-scale described in his 1956 article, "…wie die Zeit vergeht…", reappeared. This tempo-scale had completely vanished in his works composed after the writing of the article, but since 1974 has appeared with increasing frequency. In fact, nearly every detail of Stockhausen's present rhythmic techniques is described, explicitly or implicitly, in wdZv, now a classic of twentieth-century rhythmic theory and Stockhausen's most celebrated and controversial theoretical work (Kohl, 1983, p. 147-185).

In "Spacification" the main focus was to obtain less obvious rhythmic textures, and this was done through the acoustical phenomenon called beats.

If two tuning forks of slightly different pitch are struck simultaneously, the resulting sound waxes and wanes periodically. The modulations are referred to as beats; their frequency is equal to the difference between the frequencies of the original tones. For example, tuning forks of 440 and 446 hertz, if struck at the same time, will produce beats with a frequency of six hertz (Oster, 1973, p. 94-103).

Through these periodic variations of amplitude I was able to project to the listener the feeling of rhythm.

General Applications of Beats

Beats are used in the tuning of instruments. For example, when tuning harpsichords, the tuner tunes the first note of the middle octave using a tuning fork (traditionally one that gives C4) and then the other notes of that octave by counting beats. The number of beats, and the notes he tunes first and last, depend on the temperament that is being used. The notes of the other octaves are tuned by achieving beatless octaves. The piano's strings for the same note are tuned with a slight difference in order to create beats, which is commonly said to give a "brighter" timbre. "Voix céleste" is an organ stop that uses beats to produce a "pulsating" effect (Stauff, 2006). Beats are sometimes used to transmit signals between places (Babaj, 1988, p. 64).

The phenomenon of beats is of great practical importance. Beats can be used to determine the small difference between frequencies of two sources of sound. Musicians often make use of beats in tuning their instruments. A piano tuner uses beats to tell whether his standard tuning fork has the same frequency as the string of his instrument. If the two differ in frequency, i.e. are out of tune, he will hear beats. He adjusts the tension in the string and thus changes the frequency of the note emitted by the string and matches it with his fork. Sometimes beats are deliberately produced in a particular section of an orchestra to give a pleasing tone to the resulting sound. A more complex beat phenomenon, resulting from the superposition of many harmonic oscillations of different frequencies, is employed to transmit a signal from one place to another. The beats called wave groups or packets propagate in space (Babaj, 1988, p. 64).

About "Spacification"

"Spacification" was composed recently (2015) and features one narrator, vibraphone, electric guitar (with distortion and wah-wah pedal), tenor saxophone, French horn with a piano and timpani working as resonators, and live-electronics. The piece is divided into four sections (A, B, C and D), since this makes it simpler for the performers to understand it, although what is perceived by the listener is two main sections. "Spacification" opens with the narrator reading a short story. The text, in which the spiritual journey of a balloon is described, was written by
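The quoted rule, that the beat frequency equals the difference between the two frequencies, can be checked numerically. The sketch below is illustrative Java, not part of the piece's Pure Data patch: it sums two sine waves and counts the amplitude dips in one second of signal, using 440 Hz against 442 Hz (the A=440/A=442 tuning gap between guitar and vibraphone that the paper describes):

```java
// Two close sine waves, sin(2*pi*f1*t) + sin(2*pi*f2*t)
// = 2*cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t),
// pulse at |f1 - f2| beats per second: the slow cosine is the envelope.
class BeatDemo {
    // Count amplitude dips (beats) in one second of the summed signal.
    static int countBeats(double f1, double f2, double sampleRate) {
        int n = (int) sampleRate;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            double t = i / sampleRate;
            x[i] = Math.sin(2 * Math.PI * f1 * t) + Math.sin(2 * Math.PI * f2 * t);
        }
        // mean energy in 20 ms frames; a beat shows up as a run of quiet frames
        int frames = 50, size = n / frames, beats = 0;
        boolean quiet = false;
        for (int k = 0; k < frames; k++) {
            double e = 0;
            for (int i = k * size; i < (k + 1) * size; i++) e += x[i] * x[i];
            boolean isQuiet = (e / size) < 0.2;  // long-run mean energy is ~1.0
            if (isQuiet && !quiet) beats++;      // count each dip once
            quiet = isQuiet;
        }
        return beats;
    }
}
```

With 440 Hz against 442 Hz the count is 2, matching the two beats per second reported for the guitar/vibraphone pairing; 440 Hz against 446 Hz gives the six beats of the quoted tuning-fork example.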

the Portuguese painter and writer Sara Pestana. While the narrator's voice is being modified by the live-electronics, the vibraphone starts playing alongside the sine wave oscillators. The narrator stops; the vibraphone continues with the electric guitar. The sound of the electric guitar is modified exclusively by the pedals and by the amplification, while the vibraphone undergoes modifications from the live-electronics, in addition to the vibraphone's motor, which causes an amplitude modulation. Then the horn appears, creating contrast between the horn's timbre alone and the blend between the resonances of the piano and/or timpani and the horn. In the end, right before the major part of "Spacification" finishes, a very simple method of producing beats is used. It is used only once and it is very subtle and ephemeral: the vibraphone's tuning pitch is A=442 Hz while the guitar is tuned to A=440 Hz. This produces 2 beats per second for as long as the two sounds are heard. The second section of "Spacification" creates a huge contrast with the first. Harmonically speaking, this section contains a richer and denser texture; traditional rhythmic textures are the ones used to create disparity between the sections, and the sound heard is exclusively the one being played by the performers, except for the electric guitar, which features the regular amplification and distortion pedal (no wah-wah in this section).

Methods

The main method used to explore the rhythmic properties of beats was the interaction between the vibraphone and the oscillators (live-electronics). While exploring this method the vibraphone's sound does not undergo any type of modulation from its motor or any modifications from the live-electronics. Four ordinary sine wave oscillators were created in Pure Data [1]. The time needed for one sine wave to slide from one frequency to another is fixed, and the moment when the slide between frequencies must occur is notated in the score. The oscillators are controlled by the live-electronics performer, who triggers the previously explained "slide". The proximity between the frequencies generates the variations of amplitude resulting in a pulse. Since the frequencies are not stable, this pulse is rarely periodic, creating the sensation of an irregular rhythm. Although an irregular rhythm was intended, one must be careful, because the constant slide between frequencies may destroy the perception of rhythm. If one sine wave does not stay on the same frequency for a certain period of time, this compromises the existence of a beat tone. This "latency period" needs to be longer is the

resonators. The three timpani are tuned in such a way that when they resonate a resulting pulse exists, similar to the interference between the vibraphone and the oscillators.

A very subtle method of creating these beats occurs when, at the end of the first main section, the same note appears in the guitar and the vibraphone. Since one is tuned to A=440 Hz while the other is tuned to A=442 Hz, the resulting effect will be a pulse rate of two beats per second for as long as the sound of each instrument is heard.

The resulting pulse of the vibraphone's motor also blends with the beat phenomena from the timpani, from the electric guitar/vibraphone pair and from the oscillators/vibraphone pair. The final method used was the guitar's wah-wah. Like the resulting pulse of the vibraphone's motor, these do not represent the main rhythmic texture but just a complementary aspect. Often the guitarist is instructed to match the rhythm of another "rhythm source".

Methodological Path

All of the methods used can be placed on a perception scale, from the least perceptible, such as common beats (interference between two sounds of slightly different frequencies), to the one that is easily perceived as two similar sounds "out of tune". The guitar's wah-wah is the one furthest from the acoustic phenomenon that has been discussed. The guitar's wah-wah effect is a spectral glide (Erickson, 1975, p. 72-75). That said, and although this effect resembles a beat tone because the player has the possibility of creating a pulse, the guitar's wah-wah does not create a beat tone. In second place at the bottom of our scale is the vibraphone's motor. Although its effect of creating a pulse is unquestionably clear and audible, the way it is created is not the same as the one intended to be dominant in "Spacification". The tremolo effect of the vibraphone's motor is not induced by two very closely pitched sounds "colliding" but by a metal disk which rotates inside the resonators, creating an amplitude modulation. The proximity between the pitches to which the vibraphone and electric guitar are tuned causes the third effect on the scale. For the first time, these are actual beats, created by two sound waves with very close frequencies. Having said that, the fourth method is the timpani one. The reason why I find the timpani method more perceptible is the proximity of the sound sources. Since the timpani are really close to each other, while the guitar and vibraphone are supposed to have a considerable distance between them, the receiver will more easily perceive this timpani method. The top of our
proximity between frequencies is greater. For example, if scale is the vibraphone/oscillators method. This is
two sine waves are being heard, one with 440 Hz and indisputably when the beats are recognised more
another with 441 Hz one would need a whole second to efficiently. The importance of this methodological path is to
hear one beat, but if the second frequency had not 441 Hz understand that the rhythmic exploration in “Spacification”
but 445 Hz one would only need 1/5 of a second to hear follows a metaphorical aisle through traditional rhythmic
one beat, while in a whole second five beats would be textures to idiosyncratic concealed rhythmic interactions.
heard. The perception of close frequencies separating is Although this “aisle” exists I chose not to expose it,
described by E. Schubert and R. Parncutt . What they allowing the listener to formulate it in his imagination, while
describe as a tone “in tune” has two different ranges: listening to the “oddly shaped interlocking and tessellating
Category with corresponding to scale step, ± 50 cents; In- pieces of a jigsaw puzzle”.
tune (within category) range, ± 10-30 cents (Schubert &
Parncutt, 2006).
Contextualizing “pacification"
Two timpani are required for the performance of
“Spacification”. They are placed near the horn, working as When composing for live-electronics there are two main
things one must take into account: the use of technology
eaw2015 a tecnologia ao serviço da criação musical 71

which is expensive or difficult to acquire and the lack of Investigation


information the composer gives about the “patch” he
The idea to develop a piece using as main rhythmic
created for the performance of the piece, resulting in
context acoustical beats begun from the study of three
difficulties for the performers who has to perform it
(Bullock, 2005). pieces: Warren Burt’s “Beat Generation in the California
Coastal Ranges” (1998-1999), Giacinto Scelsi’s “L'âme
Many twentieth century works composed for instruments and ailée / L'âme ouverte” (1973) and Iannis Xenakis’ “Nomos
live electronics are seldom performed due to their use of near Alpha” (1964). I focused mainly in the first piece, which
obsolete technology. Some performing bodies avoid such
was briefly explained to me by the composer.
works because the necessary technology is either unavailable
or too expensive to hire (Bullock, 2005). Warren Burt’s “Beat Generation in the California Coastal
Ranges” features vibraphone (or piano) and tape. The
In “Spacification” the “patch” was created always
loudspeakers should be placed very close to the
regarding the performer and to make his job easier a very
instrument. The piece consists in many chords that are
detailed manual was written. The patch is very effortless to
played and sustained for as long as the interference
understand and the performance requires no external
between the vibraphone or piano and the sine waves are
controllers, facilitating its performance since no hardware
heard. The player chooses the chord from which he starts
equipments are required.
and proceeds following the “path” that Warren Burt
After this brief introduction about the use of live-electronics composed. This work may be inserted in a “open form”
in “Spacification” follows the actual contextualisation of category (Burt, 1998).
this piece in the live-electronics background. One finds
The aim is to have the instrument and the electronics merge
that computer based live-electronics are very attractive for into one sound, which is neither mainly acoustic nor mainly
the composer when regarding all its possibilities. It has electronic. Brief electronics only panses between decay of
been used frequently in the creation of s installations, to each acoustic note and the beginnings of the next are
create complex electronic instruments, etc. (Collins, desirable (Burt, 1998).
Rincón, 2007, p. 38-53). Another curious use of computer
“Beat Generation in the California Coastal Ranges” is
generated live-electronics is the preservation of analogical
closely related to “Spacification” since the first part uses
generated live-electronics (Dias, 2009, p. 38). Some of
both vibraphone and sine waves and it is also intended for
these analogical devices have become obsolete and/or
hard to acquire. the vibraphonist to sustain the chords for as long as beats
are heard. Warren Burt did not specified which “pulse rate”
These works require the use of analogue technology that was intended for each chord. These can be very
has become obsolete or difficult to access by the average interesting since in each performance of Warren Burt’s the
performer. We think that migration from electronics to
performer would have the opportunity to explore beat
software, also referred as recast represents a necessary step
to preserve live electroacoustic music (Dias, 2009, p. 38-37). combinations and pulse rates not explored in previous
performances..
The usage of the live-electronics in “Spacification" had two
In Scelsi’s “L'âme ailée / L'âme ouverte” (1973) the usage
main reasons. The first reason was to achieve the exact
of subtle microtonal inflections were also an inspiration to
rhythms desired without forcing the performer to use a
“Spacification”. Scelsi’s approach to microtones may
click track. To create sections in the piece where the
change from a very subtle frequency difference resulting in
tempo was not solid was an ambition and this couldn’t be
a rhythmic perceptible pulse to a very dense pulse or even
achieved using a click track. Tempo in the first main
two different perceptible tones. This type of obsessive
section of “Spacification” should be free so that the
analysis of one single note and the various possible
receiver perceive this texture where there is no tempo or
pulses of two notes perceived as one contributed the birth
rhythm from the instruments but all those characteristics of
of the idea of creating “concealed rhythmic interactions”.
music came from the interaction between instruments.
From “Fanfare” magazine: “The masterworks in this regard
This is the first main reason to use live-electronics: to
are the String Trio, Elegia per Ty, for viola and violin,
avoid a strict metronomic tempo and to make the piece
L’Âme Ailée and L’Âme Ouverte, both for violin solo, in
more accessible for the younger performers. The second
which Scelsi’s microtonalities range in subtle,
reason that made me chose live-electronics over a “tape”
understanded luxuriance; (…)” (Flegler, 1990, p. 369).
was due to the fact that I intended to include a narrator in
the beginning of the piece. That desire caused a conflict: Xenakis’ piece has a wider “palette” of textures and
the text had to be perceptible, so only simple effects could techniques than the other pieces. While Scelsi’s piece
be used, nevertheless every time the piece was performed focus on the various possible microtonal overlays when
the text should sound different, so recording a voice and dealing with notes with very close frequency and Burt’s
editing was not a solution. Live-Electronics was the piece targets the relation between sine waves and
solution for these problem. I could conceive simple vibraphone, Xenakis’ “Nomos Alpha” has a dialectic that
oscillators that had the desired frequencies programmed incorporates many different timbres, exploring effectively
and I could create “transparent” effects for the narrator’s contrast and juxtaposition (DeLio, 1980, p.63-95 and
voice. Recorded sounds and other effects ended to be Stowell, 1999, p. 219).
used in the piece to modify some of the instruments.
Xenakis used the cello in Nomos alpha to contrast status
Although these effects were used, they do not constitute a (through the use of near-unisons with subtly inflected
main focus, in fact they are very simple since it was never microtones) with movement (rapid vertical movement of
an ambition to achieve differentiability in “Spacification” harmonic glissandi, normal glissandi, tremolos and moving
through the use of idiosyncratic effects in the live- fragments). Innovative, too, was his use of notation for
electronics. beatings between micro-intervals, the measured glissandi in
eaw2015 a tecnologia ao serviço da criação musical 72

harmonics, slow glissandi defined against a beat which exploration of instrument tunings and rhythmic aspects of
controls the exact speed, and extreme range made possible one’s piece.
by the use of a gut C string (retuned throughout the work)
extended to a low purring (Stowell, 1999, p. 219). In doing so, it treats a range of microtonal approaches and
philosophies ranging from duplex subdivision of tempered
scales to the generation of intervals in just–intonation– based
Comparative Study schemes, including systems derived directly from the
When comparing Warren Burt’s piece to mine I like to structure of the harmonic series (Bridges, 2012, p. 20).
stress the fact that in Warren Burt’s piece he explores a The possibilities are countless and the number of
single method of producing beats (interference between resources to do this are vast, not only using electronics
vibraphone and sine waves). In “Spacification” I but also using acoustic instruments.
aggregated four different methods of producing a pulse
and my ambition was to compose a section in which the
Notes
receiver would hear various different timbres but the
resulting effect of all those timbres was the same, or at [1] Pure Data, also known as Pd, is an open source visual
least very similar. Another important thing to stress is that programming language. Pd enables musicians, visual artists,
for “Spacification” the frequencies of the sine waves are performers, researchers, and developers to create software
graphically, without writing lines of code. Pd is used to
precise, in other words It is my intention that in a precise
process and generate sound, video, 2D/3D graphics, and
chord, the frequency of the waves is the one notated, that
interface sensors, input devices, and MIDI.
is one of the reasons I opted to choose live-electronics
instead of a tape.
Bibliography
In comparison to Scelsi’s piece I do explore different pulse
Babaj, N. (1988). Physics of oscillations and waves. McGraw.
rates on the same note or chord, nevertheless I avoided
being so obsessive with one single note as Scelsi is. The Bridges, B. (2012). Towards a Perceptually grounded Theory
main reason I did this is a personal aesthetic judgement. of Microtonality: issues in sonority, scale construction and
This was not the only reason, if I opted to choose Scelsi’s auditory perception and cognition. PhD Thesis, National
approach to beats “Spacification” would sound quite akin University of Ireland, Galaway, Ireland.
to “L'âme ailée / L'âme ouverte” as it would sound quite Bullock, J. (2005). “Modernising live electronics technology in
identical to “Beat Generation in the California Coastal the works of Jonathan Harvey”. Proceedings of the
Ranges” if I hadn’t implemented the different timbres and International Computer Music Conference, Barcelona, Spain
the live-electronics.
Burkholder, J., Grout, D., Palisca, C. (2014). A History of
Xenakis’s cello piece “through the use of near-unisons Western Music. Maribeth Payne.
with subtly inflected microtones” incorporates a similar
texture to the one I intended to reflect with “Spacification” Burt, W. (1998). Beat Generation in the California Coastal
Ranges.
(Stowell, 1999, p. 219): a blend between extended
techniques, rich harmonic fields but preserving beats as Collins, N., Rincón, J. (2007). The Cambridge Companion to
the main texture of my piece. “Spacification” definitely Electronic Music. Cambridge; New York: Cambridge
sounds more distant from “Nomos Alpha” than it sounds University Pres.
from “L'âme ailée / L'âme ouverte” or “Beat Generation in DeLio, T. (1980), “Iannis Xenakis' "Nomos Alpha": The
the California Coastal Ranges” and that is because Dialectics of Structure and Materials”. Journal of Music
Xenakis’ “Nomos Alpha” gives a great to timbres that are Theory, Vol. 24 (nr 1), p. 63-95
not possible to include in “Spacification” and if I did the
major focus would be lost Dias, A. (2009). “Case Studies in Live Electronic Music
Preservation: Recasting Jorge Peixinho’s Harmónicos (1967-
Although “Spacification” was immensely inspired in the 1986) And Sax-Blue (1984 - 1992)”. Journal of Science and
works previously talked about if it hadn’t featured this Technology of the Arts, vol. 1 (nr 1), p. 38-37
rather simple idiosyncrasies it wouldn’t sound as personal
Erickson, R. (1975). Sound structure in music. Berkeley:
as it sounds now.
University of California Press.

Conclusion and Future Regards Flegler, J. (1990). “The Common-Sense Audiophile”. Fanfare,
vol. 14 (nr 1), p. 369
The main point of this article is to present a new approach
to rhythmic exploration. This new approach suggests a Kohl, J. (1983). “The Evolution of Macro- and Micro-Time
Relations in Stockhausen's Recent Music”. Perspectives of
rhythmic perception which occurs in the receiver as
New Music, vol. 22 (nr 1/2), p. 147-185
realisation of a phenomenon that has been present in
music for a long time. “Spacification” idiosyncratic Morris, M. (1996). Guide to Twentieth Century Composers.
characteristics come from a untying between my personal Methuen Publishing Ltd.
ambition and the research I did in terms of other Oster, G. (1973). “Auditory Beats and the Brain”. Scientific
composers’ works. This method of rhythmic American, vol. 229 (nr 4)
exploration is still very undeveloped and the opportunities
myriad. Many instruments have the opportunity to easily Schubert, E. & Parncutt, R. (2006). “Perception and
tune it in many different ways and play them Psychoacoustics of Tuning “. Sonic Connections 2006.
“microtonally”. Elaborating unique and unusual Stauff, E. (2006). Celeste/Céleste/Schwebung. Last access
temperaments using this new method could merge the on July the 28th 2015, at
http://www.organstops.org/c/Celeste.html
eaw2015 a tecnologia ao serviço da criação musical 73

Stowell, Robin. (1999). The Cambridge Companion to the


Cello. Cambridge; New York: Cambridge University Press.
eaw2015 a tecnologia ao serviço da criação musical 74

3.2 E se o ruído estruturar a forma?

Eduardo Luís Patriarca Portugal

Abstract 1. Ruído?
Precisamos de definir dois tipos de ruído: um meramente
Based on Kandinsky thought that: “form is the exterior
auditivo, com catalogações subjectivas e físicas; outro
manifestation of inner meaning” and the notion that the
ideológico assente em estrutura e pensamento.
distinction between pure sound and noise is variable, we’ll
follow three stages to question the importance of noise, as Iniciemos com o mais comumente caracterizado, o
acoustic or structural phenomenon supported by acústico. Ainda que de uma forma geral possamos basear
electronics, in the formal structure. So will determine o este conceito de ruído na oposição sons puros/ sons
different ways of noise use, and noise importance in the complexos, nem sempre as definições são concordantes.
path of a musical work, creating five categories of noise,
Subjectivamente entende-se o ruído como algo
regardless of their origin.
desagradável em oposição ao som, este agradável. Esta
Starting on the historical movement “Bruitisme”, then the classificação é naturalmente vaga, classificando
definitions and cataloging of musical object created by indiferentemente estruturas sonoras com as mesmas
Pierre Schaeffer (2003), passing through the appropriation características nos dois lados da percepção.
of noise as tensioning element in the work of Saariaho Os sons podem-se classificar também pela sua resposta
(McAdams, Saariaho, 1991) and Murail (2004), subjectiva, assim os mais usuais como por exemplo a
demonstrates to its application in formal organization and palavra, podem ser considerados como sons, sempre que os
dichotomy relaxation / stress in the cycle “Rituals”. níveis de pressão sonora que produzam não sejam
excessivos, já que nesse caso se teriam de chamar ruídos,
Supported by the creation of the different elements of entendendo para tal, todo o som não desejado. Certos sons
musical speech and formal aggregation of each piece and agradáveis classificam-se normalmente como musicais,
of the cycle. ainda que possam converter-se em ruído, de acordo com a
definição anterior. (López, 2000, p. 373)
Each sound can be noise, each noise can have different
kinds of function, either a sound or a noise can be a sound Ao nível puramente físico definimos o som como sendo as
object, their use schematizes the continuity of speech and estruturas sonoras que obedecem à Lei de Fourier, ou
subsequent formal structure. seja, aquelas que partindo de uma frequência
The final example is up by using different instruments fundamental obtém a sua série de harmónicos
relating to the same sounds / noises and different sound matematicamente estruturada. Esta noção afastaria da
objects arising and its relations with the cataloging definição som musical todos aqueles produzidos pelos
proposals. instrumentos de percussão em geral, por isso
encontramos em Calvo-Manzano a seguinte explicação:
Keywords: Noise, Sound objects, Tension, Form,
Fisicamente o ruído é um som de grande complexidade,
Composition
resultante da sobreposição inarmónica de sons provenientes
de variadas fontes, É uma composição confusa que não
Introdução admite, nem segue, nenhuma lei ou ordem de formação.
(1991, p. 84)
Assumimos frequentemente como a grande revolução
musical que permite a passagem do tonalismo ao As premissas que determinam o som/ ruído acústico
atonalismo com o chamado acorde de Tristão. Este, determinam igualmente o ruído estrutural. Este baseia-se
surgido no Prelúdio da ópera “Tristan und Isolde” de nas diferentes noções de tensão/ relaxamento,
Richard Wagner, trazia essencialmente um elemento de consonância/ dissonância, regular/ irregular, etc. Na
indefinição harmónica, aos olhos do tonalismo. Em 1976, prática estaremos a assumir que tudo o que não faz parte
durante uma conferência sobre a música electro-acústica, da organização regular, tudo o que incomoda, o que é
Luciano Berio afirma que o acorde de Tristão “era apenas indesejado é ruído.
ruído, ou seja, uma configuração sonora que os hábitos Esta característica, não sendo recente, é definitivamente
harmónicos do tempo não podiam admitir." (Nattiez, 1984, mais usada na criação contemporânea, assumindo o
p. 216). Esta noção de “ruído” como elemento estranho, papel de elemento contrastante, e por isso mesmo
de surpresa ou de tensão, ganhou musicalmente força na estrutural. Entendendo a forma como a manipulação das
segunda metade do século XX, passando de mero efeito a estruturas, facilmente integramos este ruído como
elemento estrutural. elemento formal.
Neste artigo passaremos pelas diferentes apropriações do Ao longo deste últimos 100 anos foi ganhando diferentes
ruído iniciadas pelo movimento futurista, até à sua função formas e assumindo importância consciente por parte dos
estrutural da definição da forma enunciada no eixo músicos. Entre usos por efeito, e ainda assim estruturais,
tímbrico de Kaija Saariaho, exemplificada no meu ciclo manipuladores de forma, conscientes da sua função; e
Rituais. Para tal usaremos as diferentes noções de ruído uso causal, meramente provável, mas criador de conceito
para definir a sua condição estrutural, tanto sonora quanto absoluto, a percepção das suas possibilidades,
formal.
associadas aos recursos emergentes e a uma boa
eaw2015 a tecnologia ao serviço da criação musical 75

definição dos seus elementos determinaram o percurso por obter sucessões complicadas de acordes dissonantes,
das renovações estéticas e filosóficas da criação musical. preparando assim o ruído musical. (Russolo, 1913)

O conceito global de ruído associa-se aos diferentes Como som-ruído assumem-se os acusticamente
parâmetros da composição, sejam eles o ritmo, a considerados ruídos, para os quais o Manifesto apresenta
harmonia, a melodia ou mesmo a instrumentação. A uma divisão em seis grupos.
diferenciação de regularidade/ irregularidade permite Neste movimento pretende-se essencialmente que o ruído
integrar esta noção, assumir que se introduziu “ruído” em substitua o som musical, que a tradição instrumental seja
um dos parâmetros. obliterada pelos ruídos mecânicos e pertencentes ao
ambiente. A classificação em grupos nasce, ainda assim,
2. Que Ruído? da tradição e da existência de classificações orquestrais.
A apropriação do ruído como elemento manipulável não Não trazendo uma integração do ruído enquanto estrutura
foi sempre igual. No decorrer dos anos e de diferentes musical abre as portas à definição de objecto sonoro.
abordagens foi criando bases por vezes antagónicas mas
fundamentais para a apreciação final do conceito 2.2. Ruído-Objecto
enquanto estrutura organizadora. As definições criadas pelo movimento italiano associadas
Consideremos brevemente três percursos fundamentais: a desenvolvimentos tecnológicos como o aparecimento da
a apropriação do ruído enquanto ruído por si mesmo fita magnética, e consequente gravador, permitiram a
(raiando as duas definições apresentadas), enquanto manipulação dos elementos gravados, sejam eles quais
objecto (e assim elemento primordial) e enquanto forem.
estrutura. Na década de 1950 várias criações realizadas no Estúdio
de Paris por Pierre Schaeffer e Pierre Henry, estabelecem
2.1. Ruído-Ruído os princípios básicos do que se virá a chamar Musique
Nos inícios do séc. XX o movimento artístico-literário Concrète. Estes ganham a sua definição teórica no
denominado Futurismo rejeitava o passado, baseando-se trabalho de Schaeffer, Tratado dos Objectos Musicais,
nos desenvolvimentos tecnológicos do final do séc. XIX, onde surge e se define o conceito de objecto musical
levando a uma utilização estrutural das onomatopeias na como sendo “todo o fenómeno sonoro que se entenda
literatura, como bem demonstra a “Ode Triunfal” de Álvaro como um conjunto, como um todo coerente, e que se oiça
de Campos [1]. O movimento desenvolveu-se noutras mediante uma escuta reduzida aproximada a si mesmo,
artes, surgindo na Música em 1913 com o Manifesto independentemente da sua procedência ou significado”
“L’Arte dei Rumori" de Luigi Russolo. (Schaeffer, 1988).

Além da criação de instrumentos específicos e de alguns O objecto ouvido pode ter qualquer origem acústica, seja
eventos, sempre envoltos em polémica e violência, como ruído ou som. Na verdade a ausência dessa origem
pretendiam os próprios autores [2] o texto de Russolo enquanto audição, permite que o objecto perca
define dois conceitos para o ruído: o ruído musical e os referências tradicionais, podendo ser relacionados
sons-ruído. objectos tais como o galope de um cavalo e uma nota de
um clarinete. Existem por si mesmas, sem relação com o
1 cavalo ou o clarinete. Aqui a noção de ruído desvanece-se
rumores água a cair mergulho ao criarmos um elemento único, contínuo que permite a
junção de qualquer estrutura sonora. Ainda que o autor
2 pretenda definir os conceitos de estrutura e forma
assobios roncos etc.
apresenta uma diferenciação das relações tradicionais
exclusivamente ao nível das nomenclaturas, “já não se
3 tratará de um conjunto organizado (…) mas sim de
múrmurios burburinhos sussurros
actividades que tendem a organizar os conjuntos.”
(Schaeffer, 1988, p. 167)
4
estridentes estalidos etc.
Enquanto forma a Musique Concrète não traz qualquer
inovação, mantendo-se fiel a uma imagem de miniatura
5 percussão em metais, madeiras, peles, integrada numa determinada macro-estrutura. Acaba, sim,
pedra por criar mais um elemento da catalogação de estruturas
sonoras, aqui mais minuciosa do que nos antecessores
6 vozes de homens, animais: gritos, italianos, e aproximada já a conceitos acústica físico-
gemidos musical.
Figura 1 - Grupos de ruídos enunciados por Russolo (adaptado) Estes elementos criam mesmo assim contradições,
descritas pelo próprio Schaeffer:
O ruído musical na verdade corresponde ao
desenvolvimento de toda a tradição anterior, é a música O quadro analítico que propomos está viciado à partida,
devido à dupla contradição de pretender analisar e
nova e as suas consequentes emancipações da
descontextualizar um objecto musical, quando não pode
dissonância. existir valor musical em mais do que um contexto. (1988, p.
Para excitar a nossa sensibilidade, a música desenvolveu-se 289)
procurando uma polifonia mais complexa e uma variedade
maior de timbres e de coloridos instrumentais. Esforçou-se
eaw2015 a tecnologia ao serviço da criação musical 76

1 2.3. Ruído-Estrutura
Massa
Enquanto que os movimentos descritos anteriormente
usam o ruído, enquanto característica física, como
2 elemento de manipulação e anulação sem grandes
Dinâmica
consequências formais, a corrente espectral determina já
3 o uso de diferentes espectros para consequências de
Timbre estruturação do discurso temporal. A divisão dos
espectros utilizados em harmónicos e inarmónicos, bem
4 como a determinação das suas funções é um passo
Perfil Melódico fundamental para a criação de um pensamento formal.
Variação
Os compositores espectrais determinam um caminho de
5 Perfil de massa trabalho que por si só indica a noção de esquema formal,
o processo. Este, “instável, é limitado por fenómenos
6
Granulado estáveis. É um objecto temporal fechado (…).” (Baillet,
Estagnação 2000, p. 66) “A sucessão de processos é frequentemente
7 Marcha englobada num envelope dinâmico e tensional que
caracteriza a forma da obra.” (idem, p. 70). Esta utilização
exclui o uso de elementos sem relação global na
Figura 2 - Critérios de percepção musical
estrutura.
A transição de elementos é feita através desta fórmula,
1 permitindo dar papéis de tensão e relaxamento aos
Tipos diferentes espectros utilizados. Neste sentido a passagem
de uma estrutura a outra funciona como pontos de apoio e
2 de movimento. Consideremos os apoios como a chegada,
Classes
e consequente partida, de um dado espectro, seja ele
harmónico ou não, e movimento como o uso dos seus
3 elementos transitórios, e formates, que gradualmente
Géneros
misturam um espectro noutro.
4 No inicio de Partiels, o espectro harmónico do mi do
Tessitura contrabaixo e do trombone é actualizado por dezoito
Altura instrumentos. Este espectro natural deriva a cada repetição
para um espectro de parciais inarmónicos. A zona
5 Separação
formântica, progressivamente deslocada para o grave, é
colorida por frequências cada vez mais inarmónicas (…) A
6 Peso duração dos transitórios de ataque e de extinção evoluem,
Espécies Intensidade também elas, em cada repetição na razão inversa:
7 relevo os transitórios de ataque crescem, os de extinção decrescem.
As durações de zonas estáveis flutuam à volta de uma
8 Impacto constante.
Duração As mudanças de componentes do espectro, as mudanças de
timbre de cada parcial e os acontecimentos transitórios que
9 Módulo ocorrem no corpo do som são adicionados. Constituem o
grau de alteração entre um estado de evolução espectral e o
Figura 3 - Qualificação e Evolução dos critérios dos objectos seguinte. (Grisey, 2008, p. 94)
musicais
A relação com o ruído físico, nas suas diversas
definições, é assumido no espectralismo como elemento-
Estas classificações servem como estruturação dos
chave do processo harmónico, enquanto que a noção de
objectos, não permitindo uma relação de oposições
ruído estrutural, adaptado a diferentes parâmetros ganha
estruturais, acentua a ausência de contrastes no âmbito
igual importância para a definição dos limites funcionais
da dicotomia tensão/ relaxamento. Encontramos vários
da obra.
exemplos na produção dos principais compositores desta
escola, mas ficaremos pelas Variations pour une porte et Encontramos esta última definição de diferentes formas
un soupir de Pierre Henry, obra na qual a estrutura é feita em Gérard Grisey e em Tristan Murail, bem como, em
pela sequência de diferentes andamentos/ variações que cada um, nas obras de diferentes fases. Enquanto numa
usam, cada um, uma transformação/ elemento retirado fase inicial a partida de um espectro e chegada a outro
das bases primordiais, a porta e o suspiro. As ideias de determinavam pontos da construção, ou a estruturação de
tensão e relaxamento passam despercebidas ao longo do elementos ritmos regulares a irregulares, e de elementos
discurso micro-estrutural, criando-se estas percepções ao estáveis a caóticos validam esta noção, nas obras mais
nível macro-estrutural [3] por oposição dos dois ruídos- recentes [4], a fusão dos elementos cria um contínuo
objecto. formal mais equilibrado e mais fluído.
Em obras como Vortex Temporum de Grisey ou L’esprit
des dunes de Murail o elemento ruído ganha diferentes
contextos. Na primeira obra a utilização de ruídos criados
pelo excesso de pressão nos instrumentos de cordas
eaw2015 a tecnologia ao serviço da criação musical 77

(violino, viola e violoncelo), que gradualmente se vão sobrepondo à estrutura regular dos sopros, ganhando em dinâmica e definição rítmica, determina a passagem da secção A à secção B. Na segunda, a elaboração de objectos sonoros complexos determina os materiais e as suas relações no contexto das tensões e relaxamentos.

Aqui é muito clara a diferenciação de elementos e as suas funções:

Tensão: acumulação de objectos sonoros, aumento de intensidade, aumento da velocidade de encadeamento dos objectos direccionais e aceleração do processo;

Relaxamento: atraso do desenvolvimento da trama, aumento dos valores rítmicos, diminuição de intensidade e distensão dos objectos.

Estes objectos são criados com base na gravação de ruídos e amostras de sons instrumentais e vocais; após análise e modelização dos harmónicos, procede-se à selecção de parciais, à quantificação de frequências e à escolha de partes de espectros para criar novos espectros, derivados, dilatados ou comprimidos.

A relação, no tempo, destes objectos determina, finalmente, a forma.

3. Qual Ruído?

Os desenvolvimentos da apropriação do ruído físico e a sua transformação e fusão com o ruído estrutural ganham forma no pós-espectralismo, de forma muito particular com Kaija Saariaho. Aqui o ruído é visto como analogia da relação dissonância-consonância que presidiu a toda a música tonal. Enquanto Murail, na obra citada, se aproxima da organização formal clássica, pela apresentação, manipulação (desenvolvimento) e retorno dos objectos sonoros, Saariaho introduz a noção de relaxamento e tensão no plano duplo de consonância-dissonância e complexo-simples.

No plano da experiência auditiva, podemos comparar, por um lado, a percepção de uma tensão que se descarrega na tónica (ou numa consonância se o contexto não for tonal) e, por outro, uma textura ruidosa que, ao se amplificar, se transforma em sons claros: existe aqui uma certa analogia. (Saariaho, 1991, p. 413)

3.1. Ruído-Timbre

Na obra de Saariaho não só o ruído é integrado no discurso como faz parte da organização e do pensamento estrutural. Ao uso de sons de pássaros e vento (sem manipulações tais que mascarem a origem) em Lohn ou L'Amour de loin, juntam-se as estruturas faladas e os vulgarmente chamados efeitos em Laconisme de l'aile ou L'Aile du songe.

Nestas obras surge a noção de eixo tímbrico, que determina os elementos estruturantes de forma, tensão/ dinâmica/ movimento e repouso.

Comecei a utilizar o eixo som/ ruído para elaborar, sejam frases musicais, sejam formas mais importantes, e determinar por aí as tensões interiores da música. Num sentido abstracto e atonal, o eixo som/ ruído pode, em qualquer altura, substituir-se à noção de consonância/ dissonância. Uma textura ruidosa (bruitée) e granulada será assimilada à dissonância, enquanto que uma textura lisa e límpida corresponderá a uma consonância. (Saariaho, 1991, p. 413)

Não só a noção de intervalo harmónico serve para criar a dinâmica temporal da obra, como dentro do plano inarmónico surge a mesma situação. No início de Laconisme de l'aile, a passagem falada correspondente ao "ruído" transforma-se em som com ar e flatterzunge e termina num som estável em nota grave da flauta. Este processo (usando aqui a mesma definição do conceito em Grisey) encerra a noção absoluta do eixo.

Ganha a Harmonia, com esta noção, uma nova identidade, diferente da renovação tentada pelo Espectralismo. Aqui o timbre transforma-se na verdadeira harmonia e cria a essência da relação entre estruturas:

"ao utilizarmos o timbre para criar a forma musical, é precisamente este que toma o lugar da harmonia como elemento progressivo da música." (Saariaho, 1991, p. 413)

3.2. Ruído-Forma

A noção de eixo tímbrico, associada ao uso do processo espectral, define a base da construção das peças que constituem o ciclo Rituais. Constituído por treze peças, após uma breve apresentação centrar-nos-emos nas três primeiras. O ciclo usa um total de cinco instrumentos, distribuídos em diferentes organizações e divididos em duas grandes partes, num total de quatro blocos.

A primeira parte junta as peças de I a VII, organizadas entre a I à IV e a V à VII. A segunda parte divide-se pelo conjunto das peças VIII a XII e pela peça XIII final. Distinguem-se estas duas partes pela inclusão e a não inclusão de electrónica, elemento fundamental para a estruturação da primeira parte, como se verá.

As peças I a VII partilham um mantra tibetano que, dividido em quatro partes, organiza o discurso de diferentes conjuntos: I a III, IV, V e VI, VII.

As peças I a III estão escritas para flauta, clarinete-baixo e violoncelo, respectivamente, enquanto a IV junta o trio (mudando o clarinete-baixo para clarinete), todas com electrónica.

Juntam-se as três primeiras peças visto usarem exactamente as mesmas estruturas-base e o mesmo processo. Este caracteriza-se pelo uso de uma estrutura, à vez, dinâmica e de repouso. O que varia em cada peça é o momento transitório, devido precisamente às diferenças do ruído utilizado.

Consideram-se duas estruturas, paralelas, ao mesmo tempo interligadas e opostas: uma define a tensão e outra o repouso. Para cada uma existe um elemento correspondente que transformará o som ruidoso num som claro. Cada estrutura usa três elementos possíveis entre dinâmica, timbre e ruído.

É precisamente esta organização das estruturas que se situa no limiar do eixo tímbrico.
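A criação de espectros derivados, "dilatados ou comprimidos" a partir de um espectro harmónico, atrás referida, pode esboçar-se com uma simples lei de potência sobre os números dos parciais. O esboço seguinte é meramente ilustrativo (nomes, valores e registos são hipotéticos, da nossa responsabilidade, e não retirados das obras citadas):

```python
# Esboço hipotético: espectros derivados por dilatação/compressão
# de um espectro harmónico (o expoente alfa e os registos são ilustrativos).
def espectro(f0, n_parciais, alfa=1.0):
    """alfa = 1: espectro harmónico; alfa > 1: dilatado (inarmónico);
    alfa < 1: comprimido. Devolve as frequências dos parciais (Hz)."""
    return [f0 * k ** alfa for k in range(1, n_parciais + 1)]

def seleccionar(parciais, f_min=20.0, f_max=4000.0):
    """Selecção simples dos parciais dentro de um registo útil."""
    return [f for f in parciais if f_min <= f <= f_max]

harmonico = espectro(110.0, 8)             # 110, 220, 330, ..., 880
dilatado = espectro(110.0, 8, alfa=1.1)    # parciais progressivamente mais afastados
comprimido = espectro(110.0, 8, alfa=0.9)  # parciais progressivamente mais próximos
```

O mesmo expoente pode variar no tempo, fazendo um espectro deslizar entre os pólos do eixo som/ ruído.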
Figura 4 - Eixo tímbrico (dos sons "sinusoidais", passando pelo espectro harmónico e pelos sons inarmónicos, até ao "ruído branco")
Partindo desta organização determinaram-se os possíveis objectos para cada situação e as suas relações. Obviamente que muitas das decisões são arbitrárias, não só devido às características de cada instrumento, como às decisões pessoais e à integração destes elementos na electrónica utilizada. Essencialmente, cada decisão é tomada em cada momento pela sobreposição do som instrumental e da sua ligação aos objectos da electrónica. A constituição dos diferentes espectros/ materiais determina as decisões formais.

Entenda-se que a forma aqui corresponde ao discurso e não a uma forma tradicional, ainda que seja fundamental a noção de retorno. Este, não sendo absoluto, é suficientemente próximo para activar elementos de memória. Esta ideia de forma tem raízes no "determinismo temporal: material novo, acontecimento, ruptura…" (Baillet, 2000, p. 39).

Nos Rituais cada repetição destes passos está interligada, já que a ruptura implica a criação de um material novo. Em dado momento, a sobreposição de um som com um elemento da electrónica implica uma análise específica feita com o programa Orchids, que mostra os parciais resultantes bem como o seu comportamento no tempo. Estes parciais constituem o material que se deslocará no tempo, como elemento transitório, usando os diferentes recursos do eixo e permitindo que o ruído criado (aqui a natural alteração de um espectro natural) crie o desenvolvimento formal. Neste aspecto o ruído-forma utilizado varia obviamente de peça para peça, já que a sobreposição de um espectro de flauta com um som sintetizado de taça tibetana e a mesma sobreposição baseada num espectro de clarinete-baixo ou violoncelo irão criar análises completamente distintas. Assim, o ruído-forma existente é diferenciado em cada peça.

Figura 5 - Elemento 1 de Rituais

Juntamente com cada uma destas situações encontram-se os elementos que podem fazer variar o material, através de cada estrutura do eixo.

As figuras 5, 6 e 7 demonstram o atrás descrito. Na 5 vemos as frequências usadas para a análise do momento 1 e os diferentes espectros criados. Facilmente percebemos que os diferentes instrumentos criam espectros mais harmónicos ou mais inarmónicos, o que, aliado às escolhas das situações apresentadas nas figuras 6 e 7, leva a transitórios completamente diferenciados.
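A leitura de parciais atrás referida (no artigo, feita com o programa Orchids) pode ilustrar-se de forma genérica, sem reproduzir a ferramenta citada, com uma DFT directa e uma detecção de picos espectrais; o sinal e o limiar abaixo são hipotéticos:

```python
# Esboço genérico de leitura de parciais (não reproduz o Orchids):
# DFT directa e detecção de máximos locais num sinal sintético curto.
import math

def dft_mag(x):
    """Magnitude da DFT (implementação directa, adequada a sinais curtos)."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def parciais(mags, sr, n, limiar):
    """Frequências (Hz) dos máximos locais do espectro acima do limiar."""
    return [k * sr / n for k in range(1, len(mags) - 1)
            if mags[k] > limiar and mags[k] >= mags[k - 1] and mags[k] >= mags[k + 1]]

sr, n = 2048, 256
sinal = [math.sin(2 * math.pi * 440 * t / sr)
         + 0.5 * math.sin(2 * math.pi * 880 * t / sr) for t in range(n)]
mags = dft_mag(sinal)
picos = parciais(mags, sr, n, limiar=max(mags) * 0.25)  # → [440.0, 880.0]
```

Aplicado trama a trama, o mesmo procedimento daria também o comportamento dos parciais no tempo.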
Figura 6 - Elementos de tensão para a flauta (Dinâmica: cresc., decresc., sfz; Timbre: bisbigliando, multifónicos, eólicos; Ruído: tongue-ram, chaves, flatterzunge)

Figura 7 - Elementos de relaxamento para a flauta (Dinâmica: cresc. gradual, decresc. gradual, estável; Timbre: microtons, harmónicos, voz, whistle tones; Ruído: pizzicato, staccato, ordinário)

Por aproximação de elementos semelhantes, a transição e mesmo a substituição dos conceitos é fácil de atingir. Num dado momento, o elemento de ruptura, que pode ser o atingir de um relaxamento ou de uma tensão, permite-se ser transformado num material novo por inversão da sua função; por exemplo, uma estrutura em sons eólicos facilmente se transforma em whistle tones [5] e vice-versa, criando assim num mesmo momento elementos distintos, simultaneamente ponto de partida e ponto de chegada.

Na situação descrita facilmente se atingem ruídos-forma distintos, pelo processo da análise. O que é verdade ao mudar o emissor da frequência base também o é ao mudar o ataque desse mesmo emissor. Assim, os transitórios surgidos são naturalmente distintos.

Conclusão

As questões levantadas ao longo do artigo prendem-se essencialmente com o uso das novas (algumas não tão novas) técnicas de execução. Durante bastante tempo, o uso destas como meros efeitos levantava dúvidas sobre a pertinência do seu uso. Na prática, ao não terem uma razão formal/ estrutural, serviam exclusivamente como elementos exteriores, mas que, dentro dos conceitos apresentados, acabavam por criar o ruído estrutural descrito por Berio na Introdução.

O uso destas duas formas de ruído levou a uma obrigatória integração do ruído no discurso musical, e as preocupações de organização/ composição que se foram desenvolvendo facilmente abriram espaço para esta mesma integração.

Por vezes os movimentos novos, também eles ruidosos, não progridem como estéticas nem ganham um verdadeiro sentido de estabilidade, mas podem criar bases para movimentos futuros.

Há alguns anos não seria certo que as variantes de ruídos, nas diversas classificações surgidas na acústica, viessem a criar elementos compositivos e a integrar o discurso musical de tantas obras.

Compositores como Helmut Lachenmann e Salvatore Sciarrino foram obviamente fundamentais para a integração do ruído/ efeito no discurso musical, fazendo-o parte da própria estrutura. Por diversas razões foram ficando na definição de experimentalismo e não suscitaram o interesse imediato. Hoje são exemplos fundamentais de como materiais ruidosos podem criar uma obra na íntegra, permitindo contrastes que se podem encaixar no conceito de forma.

As incursões destes usos no campo teórico, estas sim mais recentes, permitiram criar definições mais precisas e a integração da dicotomia som/ ruído num pensamento estético, esbatendo as suas fronteiras.

Não sendo um elemento exclusivo da composição, e frequentemente associado a outros elementos criadores, o ruído permite, no entanto, estabelecer limites contrastantes e direccionais no desenrolar da obra, ainda que por vezes possa levar à noção de mero efeito. Mas a audição de um efeito acaba sempre por criar a sensação de alteração, de mudança, e por isso de dinâmica. Quando pensado como estrutural, a sensação de equilíbrio aumenta e estabelece com o auditor um maior conforto e entendimento dos recursos.

Notas

[1] Ainda que esta fuja a alguns dos preceitos de Marinetti, autor do Manifesto de 1909, por exemplo nas evocações do passado e da infância, ilustra bem a utilização de elementos onomatopaicos.

[2] Esta violência e polémica faziam parte da doutrina enunciada pelos Futuristas.

[3] As noções de micro e macro-estrutura surgem aqui para diferenciar os diferentes andamentos e a obra integral, precisamente para afastar a noção tradicional de andamento.

[4] Entendemos aqui como mais recentes as obras de uma fase mais próxima deste texto, por oposição às primeiras referências, sendo que no caso de Grisey corresponde já à sua última fase de composição.

[5] A produção dos sons eólicos e dos whistle tones é basicamente a mesma. Altera-se através do controlo do lábio, estando mais ou menos relaxado. Na verdade, os whistle tones ficam no centro entre sons eólicos e
harmónicos, definindo o verdadeiro eixo tímbrico como descrito na figura 4.

Bibliografia

Baillet, J. (2000). Gérard Grisey: Fondements d'une écriture. Paris: L'Harmattan.

Calvo-Manzano, A. (1991). Acústica físico-musical. Madrid: Real Musical.

Garant, D. (2011). Tristan Murail: Les objets sonores complexes, analyse de L'Esprit des dunes. Paris: L'Harmattan.

Grisey, G. (2008). Écrits, ou l'invention de la musique spectrale. Paris: Éditions MF.

López, M. R. (2000). Ingeniería acústica. Madrid: Editorial Paraninfo.

Murail, T. (2004). Modèles & Artifices. Strasbourg: Presses Universitaires de Strasbourg.

Nattiez, J.-J. (1984). "Som/ ruído". In R. Romano (org.), Enciclopédia Einaudi: volume 3, Artes - Tonal/ atonal (pp. 212-228). Lisboa: Imprensa Nacional-Casa da Moeda.

Saariaho, K. (1991). "Timbre et harmonie". In J.-B. Barrière (ed.), Le timbre, métaphore pour la composition (pp. 412-453). Paris: Ircam.

Schaeffer, P. (1988). Tratado de los objetos musicales. Madrid: Alianza Editorial.
3.3 Quando os dialetos da língua portuguesa europeia falada se transformam em elementos composicionais

José Luís Postiga, Universidade de Aveiro, Portugal

Abstract: Sound elements are what make us identify the geographic region of origin of a particular speaker. Since language is the first process of sound organization through noise, the bases for music composition on different platforms and instrumental casts are searched for, by spectral analysis, in the pronunciation of a poetic text in the different dialects of European Portuguese, according to Cintra's classification (1971), as adapted by Saramago (2001). To do this, the research starts with a categorization of the different permutations of the sound phonemes resulting from the geographic region where they are verbalized, addressing them as timbral, rhythmic, harmonic, textural and formal definition elements for composition. On the one hand, it seeks to define a map of the musicality of the European Portuguese language; on the other, it interprets the data as elements for carrying the identifying characteristics of nationality into a universal language. The development of computational tools that allow a careful analysis of the recorded sound data, first, as well as the mutation of elements from the domain of language study into material for musical setting, then, becomes the main research methodology, following the developments achieved in Speakings (2008), by J. Harvey, and grounded in speech synthesis procedures developed in the last century. The results are presented in "Eternity", a composition cycle of 5 pieces: the first for piano four hands; the second for percussion and electronics; the third for percussion, alto saxophone and electronics; the fourth for chamber group; the fifth for symphony orchestra.

Keywords: Language, Noise, Spectral Analysis, Computer Assisted Composition

Introdução

Segundo Ferdinand de Saussure (2006), a língua é um sistema de signos onde, para cada signo, existe um significante e um significado. No entanto, ela expressa-se verbalmente como um conjunto de diferentes sons organizados, transformados em elementos de comunicação através da categorização neurológica de cada objeto sonoro. Neste sistema podem encontrar-se elementos tão pequenos como uma consoante ou uma vogal, que formam palavras de diferentes dimensões e sonoridades. A forma como são articuladas, acentuadas e conjugadas, juntamente com o contorno melódico que apresentam, é responsável pela sua classificação enquanto afirmação, interrogação, imperativo, exclamação, etc. É por isso que Carvalho (1910 apud Mateus 2005b, p. 79) afirma que "Falar é tocar um instrumento de música, o mais perfeito de quantos harmónicos têm sido inventados".

De entre as diferentes áreas de estudo da linguística, a fonologia, a fonética, a morfologia e a semântica são aquelas que maior ligação apresentam com a música. A criação do Studio di Fonologia Musicale di Radio Milano, por Luciano Berio e Bruno Maderna, em 1955, bem como a contribuição de Umberto Eco nas investigações desenvolvidas, são apenas um dos exemplos que se podem dar da relação entre as duas áreas de conhecimento. Por outro lado, a adoção do IPA (International Phonetic Alphabet) por Dieter Schnebel, em für stimmen (…missa est) (1956/68), e por Brian Ferneyhough, com o objetivo de melhor codificarem os sons vocais resultantes do uso de textos não semânticos (Gee, 2013, p. 175), demonstra a partilha de conceitos nas diferentes disciplinas.

Contudo, uma língua falada possui diferentes variedades de pronunciação, denominadas dialetos. Segundo Mateus (2005a, pp. 6-7),

o dialecto não é hoje considerado uma forma 'diferente' (e até desprestigiada) de falar uma língua, mas é 'qualquer' forma de falar uma língua conforme a região a que pertence o falante.

Assim, cada região geográfica portuguesa apresenta diferenciações fonéticas, fonológicas e semânticas que a identificam. No entanto, Mateus (2005a, p. 7) afirma que as diferenças entre eles são basicamente de caráter fonético. Pela análise FFT do discurso procuram-se demonstrar neste artigo as características sonoras de cada um, bem como um caminho de representação musical que melhor os identifique. Para melhor caracterizar a problemática em estudo, é usado como base um texto único, no caso Explicação da Eternidade de José Luís Peixoto (2002), registado na pronúncia nativa de três regiões dialetais do país: o norte-litoral, o centro-sul e o insular (no caso, dos Açores), a partir do qual se buscam as características harmónicas e tímbricas que melhor os caracterizam.

São definidos então parâmetros de organização musical dos elementos linguísticos, realçando as características impostas pela articulação de diferentes tipos de consoantes - os ruídos que definem a métrica e o ritmo de uma determinada frase - e de vogais - os sons que definem harmónica, tímbrica e melodicamente o espectro sonoro. Os conceitos de ataque (consoante), corpo (vogal) e queda (vogal ou consoante), característicos do som verbal, são então manipulados e transformados com vista à estruturação formal da obra musical.

A representação sonora de texto falado apresenta diversos obstáculos, oriundos da complexidade do sistema fonológico humano. Para melhor os ultrapassar, são identificados e trabalhados elementos de informática musical que mais fielmente representam os elementos sonoros. Neste sentido, é observado o sistema desenvolvido no IRCAM para a composição de Speakings (2008) de Jonathan Harvey, cujo patch desenvolvido para OpenMusic serve de base à representação musical dos elementos registados, quer ao nível da estrutura rítmica quer da harmónica.
O produto artístico da investigação realizada é aqui apresentado na obra A eternidade não existe, para grupo de câmara, quarta peça do ciclo Eternidade.

Ressalve-se que não é objetivo desta investigação desenvolver um modelo de síntese de fala, quer eletrónico quer acústico [1], mas antes procurar bases de informação musical nos diferentes dialetos da língua portuguesa que permitam a criação de materiais composicionais que os distingam e representem.

1. Os dialetos do português europeu

Segundo Cintra (1972, adaptado por Segura & Saramago 2002), o território português é dividido em cinco regiões dialetais:

1. Setentrional transmontano e alto minhoto
2. Setentrional baixo-minhoto duriense e beirão
3. Centro-meridionais do centro litoral
4. Centro-meridionais do centro interior e do sul
5. Insulares do centro litoral

Figura 1 - Mapa dos dialetos portugueses (http://cvc.instituto-camoes.pt/hlp/geografia/mapa06.html)

Cintra refere ainda a existência de sub-regiões dialetais: Baixo-Minho e Douro Litoral - região sub-dialetal do setentrional baixo-minhoto; Beira Baixa e Alto Alentejo - do sul da região setentrional beirã e norte da centro-meridional do interior; Barlavento algarvio - da região centro-meridional do sul.

Mateus (2005a, p. 7) caracteriza-os da seguinte forma:

1. os dialetos setentrionais demonstram a fusão numa única consoante de /b/ e /v/, a permanência das fricativas ápico-alveolares /ʂ/ e /ʑ/, cuja representação gráfica é <s> ou <ss> (de saber e passo), a conservação do ditongo /ow/ (de soube ou pouco) e a permanência da diferenciação entre a africada /tʃ/ (graficamente o <ch> de chave) e a fricativa palatal /ʃ/ (o <x> de xaile);

2. os dialetos centro-meridionais refletem a substituição das consoantes ápico-alveolares /ʂ/ e /ʑ/ pelas dentais [s] e [z], a diminuição do ditongo /ow/ a [o] (de pouco/pôco) e de /ej/ a [e] (de leite/lête ou feira/fêra);

3. as características específicas dos dialetos insulares: no micaelense, as vogais palatais [ü] e [ö], correspondendo diretamente a /u/ (de uva/[ü]va) e /o/ (de boi/b[ö]i), e a elevação de /o/ em posição tónica para [u] (como em doze/d[u]ze); no madeirense, o /a/ de posição tónica velariza-se em aproximação a [ɔ] (como em casa/c[ɔ]sa), o /i/ de posição tónica passa a [ɐj] (como em ilha/[ɐj]lha ou jardim/jard[ɐ̃j]) e dá-se ainda a palatização do /l/ quando precedido de [i] (como em filetes/fi[ʎ]etes).

Além disso, realce-se ainda a existência de registos linguísticos que são característicos do contexto em que determinada fala é produzida. Assim, é comum, num registo coloquial ou familiar, a supressão da vogal [ɨ] (de devagar/d[ɨ]vagar), que de resto diferencia o português europeu dos restantes espalhados pelo mundo, tal como a produção da semivogal [j] em palavras com o ditongo /ia/ (como em criado/kr[j]ádu).

1.1 Análise dialetal em A eternidade não existe

Para a composição desta peça do ciclo escolheu-se o seguinte excerto:

por si só, o tempo não é nada
a idade de nada é nada
a eternidade não existe
no entanto, a eternidade existe (Peixoto, 2002)

O texto foi lido por três pessoas representativas de três regiões dialetais diferentes: uma do norte-litoral, outra do centro interior e uma última do arquipélago dos Açores, nomeadamente micaelense. Do registo resultante apresenta-se a seguinte transcrição fonética:

1. Norte-litoral:

pur si suó, o tiêmpu num é náda
a idáde de náda é náda
a itrenidáde num izixte
nu intántu, a itrenidáde izixte

(em IPA)

puɾ si suɔ, u tje͂mpu nũ ɛ nadɐ
ɐ idadɨ dɨ nadɐ ɛ nadɐ
ɐ itɾɨnidadɨ nũ iziʃtɨ
nu ĩtãtu ɐ itɾɨnidadɨ iziʃtɨ

2. Centro Interior:

pur si só, u têmpu não é nádâ
â idáde de nádâ é nádâ
â êternidád' não êzixt'
nu êntânt', a êternidád' izist'

(em IPA)

puɾ si sɔ, u te͂mpu nɐ͂w̃ ɛ nadɐ͂
ɐ idadɨ dɨ nadɐ͂ ɛ nadɐ͂
ɐ e͂tɨɾnidadɨ nɐ͂w̃ e͂ziʃtɨ
nu e͂tãtɨ ɐ e͂tɨɾnidadɨ iziʃtɨ

3. Micaelense:

p'r ssi ssó, u tsémp' nâ é nadsã
a idáds' d' nadã é nadsã
êtsernidad' nâ êzists
nü êntãnts, êtsernidad' êzists

(em IPA)

p[]ɾ i ɔ, u te͂p nɐ͂ ɛ nadʂɐ͂
ɐ idad d' nadɐ͂ ɛ nadʂɐ͂
e͂tɨɾnidad' nɐ͂ e͂ziʃt
nü e͂tãt ɐ e͂tɨɾnidad' e͂ziʃt

Da análise fonética resultante verifica-se no dialeto do norte-litoral uma maior abertura da vogal /a/, exceto quando ela se encontra no final da palavra, bem como uma ditongação de vogais como /o/ (em [uo]) e [ê] (em [iê]). Além disso, o ditongo [aõ] é transformado em [um], dá-se a transformação do [e] no início da palavra em [i] e a transformação do fonema [er] em [ɾ]. No dialeto do centro interior constata-se, acima de tudo, a supressão parcial ou total das vogais no final das palavras e a palatização de /a/ e /e/ quando não em posição tónica. Em relação ao micaelense sobressai a maior sibilação das consoantes, com maior intensidade na dental /t/ do que na ápico-dental /d/. Além disso, constata-se o maior fecho na produção da vogal /a/, sendo anasalada no final das frases, a palatização de /e/ no início das frases e a supressão de [ɨ] no final das palavras e do [u] em por. Refira-se ainda a alteração do ditongo [aõ] na palatizada [ã].

2. Do texto ao material composicional

Do ponto de vista sonoro, os conceitos de consoante e vogal têm correspondência direta com o grau de definição harmónica do som. Segundo Reyes & Muñoz (2015), a análise fonética a um texto permite transportar para o âmbito musical os elementos de ataque e duração, pelas consoantes, e de melodia e harmonia, pelas vogais. Por outro lado, a análise FFT permite verificar a maior indefinição harmónica e menor vocalização da consoante em comparação com a vogal. Na figura 2 é possível verificar essa relação no fonema "só": enquanto o /s/, consoante não vozeada, pode ser visto como um ruído, já a vogal /o/ apresenta uma maior definição harmónica, realçando a sua formante característica.

Contudo, é a vogal a primeira a ser parcial ou integralmente suprimida sem que tal afete a compreensão da palavra, como se verifica na articulação de "por" no texto anteriormente definido. A análise espectral comparativa entre os dialetos do norte-litoral e micaelense, na qual existe uma acentuação do [u] no primeiro (figura 3), em oposição à supressão da vogal no último (figura 4), revela um comportamento que demonstra como o ruído provocado pela articulação da consoante bilabial /p/ e a passagem para a consoante alveolar /r/ (sem que as duas sejam conjugadas conjuntamente) é suficiente para a significação cerebral da palavra.

Figura 2 - Sonograma representativo da vocalização de "só"

Figura 3 - Sonograma da fonação da palavra "por" no dialeto do norte-litoral

2.1 Prosódia: melodia, intensidade, duração e ritmo

Ainda referindo Reyes & Muñoz (2015, p. 4), o estudo de diferentes dialetos permite observar alterações ao nível da acentuação das palavras e frases, bem como das cadências rítmicas e dos contornos melódicos que cada região demonstra, isto é, a sua prosódia. Pereira (1992 apud Mateus 2005b, p. 82) define:

"Prosódia é um termo que vem do grego προσοδια (formado por προσ pros, junto, e οδη odé, canto). Tal etimologia atribui à prosódia a significação de melodia que acompanha o discurso e, na língua grega, mais precisamente, o acento melódico que a caracteriza"
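A diferença atrás descrita entre o ruído de uma consoante como /s/ e a definição harmónica de uma vogal pode quantificar-se, por exemplo, pela planura espectral (spectral flatness) obtida de uma análise de Fourier. O esboço seguinte é hipotético (usa sinais sintéticos e não corresponde à análise das figuras citadas):

```python
# Esboço hipotético: planura espectral como medida ruído/som harmónico.
# Aproxima-se de 1 num ruído (~ consoante /s/) e de 0 num som com
# energia concentrada em poucos parciais (~ vogal).
import math, random

def planura_espectral(x):
    """Razão entre as médias geométrica e aritmética do espectro de potência."""
    n = len(x)
    pot = []
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        pot.append(re * re + im * im + 1e-12)  # piso para evitar log(0)
    geo = math.exp(sum(math.log(p) for p in pot) / len(pot))
    return geo / (sum(pot) / len(pot))

random.seed(0)
n = 128
consoante = [random.uniform(-1.0, 1.0) for _ in range(n)]      # ruído ~ /s/
vogal = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]  # parcial estável ~ /o/
```

Num sinal real, o descritor seria calculado trama a trama sobre o registo gravado, distinguindo os segmentos ruidosos dos harmónicos.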
Figura 4 - Sonograma da fonação da palavra "por" no dialeto micaelense

Mateus (idem) refere que a prosódia se define pelos seus traços - tom, intensidade, duração e ritmo - e constituintes - a sílaba, a palavra, os sintagmas fonológico e entoacional. Obin (2011, pp. 43-44), por seu turno, apresenta como dimensões acústicas da prosódia: F0, ou variações da frequência fundamental no tempo, que corresponde à intonação frásica, referenciada como contorno melódico; Timing, as variações métricas e rítmicas do discurso, desde a sílaba à oração, passando pelas alterações inerentes resultantes das diferenças dialetais; Intensity, referindo-se à intensidade do sinal sonoro ao longo da fala, podendo ser medida através de uma unidade convencional de curta duração (short-term intensity measure) ou pela medição da integração global da intensidade sobre as unidades prosódicas (long-term integration); Voice quality, pela descrição de parâmetros como a tensão e a respiração, características que correspondem à formação do som na glote; Articulation Degree, que corresponde à qualidade fonética resultante da combinação do contexto fonético, velocidade e dinâmica dos movimentos de articulação no trato vocal.

O estudo destes parâmetros apresenta-se como basilar para a diferenciação acústica dos diferentes dialetos. Assim, trabalhando-se sobre os aspetos linguísticos de um texto e a sua verbalização, existe uma série de dados sonoros que são apresentados no próprio corpo do texto. O excerto poético em análise possui uma estrutura eneassilábica, sendo os versos inicial e final compostos de dois segmentos, o primeiro de três e o segundo de seis sílabas. Contudo, a prosódia apresenta características rítmicas próprias de cada dialeto, como se exemplifica na primeira frase, representada na figura 5.

Figura 5 - Articulação rítmica e contorno melódico aproximado, obtidos pela análise espectral: a) dialeto setentrional norte-litoral; b) dialeto centro-meridional alto-alentejano; c) dialeto insular de S. Miguel dos Açores

Do ponto de vista da métrica (duração e ritmo), o norte-litoral apresenta uma prosódia com articulação regular binária, enquanto o centro-interior manifesta uma divisão ternária, demonstrando o micaelense uma maior irregularidade, quer do ponto de vista métrico (com mudança de divisão binária para ternária) quer do ponto de vista da acentuação (com maior articulação sincopada). Além disso, a velocidade de articulação dos fonemas possui um sentido inverso ao apresentado, ou seja, é mais rápida na pronúncia insular, sem interrupção no discurso, e mais lenta e marcada no norte-litoral.

Melodicamente (tom), para este verso, verifica-se que o maior âmbito de registo é atingido no dialeto centro-interior (aproximadamente de uma oitava), seguido pela pronúncia do norte-litoral (de 7ª), sendo o dialeto micaelense o de contorno menos acentuado (4ª).

Por outro lado, a acentuação melódica (intensidade) é também semelhante nos dialetos continentais e diferente no insular: em a) e b) a elevação atinge o pico máximo de registo em "si" e apoia em relaxamento o "só", do primeiro segmento, para se repetir o movimento no início do segundo, desta feita com apoio métrico e ascensão de registo no fonema [tem] de "tempo", com pequena elevação em [po] em a), e em "é" em b); em c) o movimento de elevação termina em "só", no final do primeiro segmento, sendo a ascensão do segundo quase inexistente e seguida de imediato por uma distensão melódica até ao final do verso. Verifica-se então que o contorno é mais linear no discurso micaelense e mais quebrado no dialeto norte-alentejano, onde existe uma maior articulação por intervalos musicais mais largos.

2.2 Harmonia: Consoantes e vogais

Observadas as relações entre prosódia e elementos verticais do discurso sonoro, passou-se para o estudo das diferentes harmonias resultantes dos diferentes dialetos. Para além do contexto harmónico em que se desenvolve o discurso melódico, a diferente fonação quer de consoantes quer de vogais, de acordo com os parâmetros observados na pronúncia de cada região analisada, foi alvo de trabalhos específicos realizados com ferramentas de composição musical por computador.

Mateus (2005b, p. 81) cita a Gramática Filosófica de Jerónimo Soares Barbosa para realçar a diferença fisiológica que se encontra na produção dos diferentes contextos sonoros da fala:

Os sons fundamentais, assim vogais como consoantes, formam-se todos no canal da boca, onde só se articula e forma em vozes o som informe e confuso da glote, pelas diferentes posturas imóveis da boca [...] As modificações prosódicas, porém, [...] têm outro órgão, que é o da glote em que se termina o tubo inferior da traqueia artéria
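A dimensão F0 (contorno melódico) referida por Obin pode estimar-se, de forma elementar, por autocorrelação. O esboço seguinte é hipotético, com um sinal sintético em vez dos registos dialetais analisados:

```python
# Esboço hipotético: estimação de F0 por autocorrelação, restrita ao
# intervalo de períodos plausíveis para a voz falada.
import math

def f0_autocorrelacao(x, sr, f_min=60.0, f_max=500.0):
    """Devolve a F0 estimada (Hz): o atraso (lag) que maximiza a
    autocorrelação define o período fundamental da trama."""
    lag_min = int(sr / f_max)
    lag_max = int(sr / f_min)
    melhor_lag, melhor_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(x[t] * x[t - lag] for t in range(lag, len(x)))
        if r > melhor_r:
            melhor_lag, melhor_r = lag, r
    return sr / melhor_lag

sr = 8000
trama = [math.sin(2 * math.pi * 200 * t / sr) for t in range(800)]
f0 = f0_autocorrelacao(trama, sr)  # ≈ 200 Hz
```

Repetido sobre tramas sucessivas, o procedimento devolve o contorno de F0 de que se aproximam os gráficos da figura 5.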
Constata-se então que existe um complexo sistema físico a eternidade não existe
na produção de sons vocais, o que dificulta a análise dos
v n n n vv v v v nn
diferentes conceitos que os compõem. Tal acontece pelo
facto de existir um discurso contínuo, não só entre no entanto, a eternidade existe
consoantes e vogais (dentro de determinado fonema),
Legenda: n - Não-Vozeada; v - Vozeada
as between phonemes (within the production of words) and between words (in the production of phrases, clauses and sentences). This challenge led Jonathan Harvey and a team of researchers at Ircam to develop an OpenMusic patch (figure 6) which, through transient detection and a partial-reading module, transposes the harmonies realised in speech into musical language and rhythmically quantifies the duration of each one.

In line with what was said above, consonants and vowels present considerably different harmonic parameters. Among other phonetic classifications [2], consonants may be voiced or unvoiced, both of them musically regarded as noise: the former with a sounding body, the latter belonging to the domain of the articulation of sound.

The behaviour of the consonants is very similar across the different dialects. As the analysis of Peixoto's text showed, the only significant differences concern the pronunciation of [tr] in [etrenidade] in the northern-coastal dialect, as well as the aforementioned [p'r] and the sibilation of /t/ and of some /d/ in micaelense speech.

Figure 6 - OpenMusic patch developed for the composition of Jonathan Harvey's Speakings.
(http://forumnet.ircam.fr/tribune/making-an-orchestra-speak/)

For the poetic excerpt under analysis, this classification is as follows (n = unvoiced, v = voiced):

n vn n n n v v v
por si só, o tempo não é nada

v v v v v v v
a idade de nada é nada

n vv v v v v nn

The text is seen to open with a sequence of unvoiced consonants, while the second line stands out for using voiced consonants only. Thus, taking the sonogram of figure 4 as an example, the difference between these types of sound can be realised as a set of sound objects with different characteristics. In the first case, [p], its intensity level and indefinition mean that it can only be used as an element of articulation: an attack immediately followed by the element [r], whose vocalisation produces the harmony resulting from its inharmonic and non-harmonic partials, shown in figure 7. The harmonic character is thus attributed to consonants that use sound produced by the vocal folds, notwithstanding their articulatory qualities, which merge with the unvoiced antecedent.

Figure 7 - Result of the partial reading of [ɾ] in the phoneme [p'r] in the micaelense dialect.

Figure 8 - Harmonic representation of the phoneme [wɔ] in the northern-coastal dialect.

In the harmonic definition of a dialect, the main role of regional identification [4] is assigned to the vowels, the semivowels or glides [3], and the diphthongs. Phonetically, vowels are classified by height - high [i, ɨ, u], mid [e, ɐ, o] or low [a, ɛ, ɔ] - and by point of articulation - front or palatal [i, e, ɛ], central [ɨ, ɐ, a] and back or velar [u, o, ɔ] - and may further be distinguished by lip position: rounded [u, o, ɔ], unrounded for the remaining vowels.
eaw2015 a tecnologia ao serviço da criação musical 86
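The "partial reading" mentioned above (as in figure 7) amounts to picking the strongest spectral peaks of a frame and quantising them to notatable pitches. The sketch below is a deliberately naive, standard-library-only illustration of that idea; it is not the Ircam OpenMusic patch (whose internals the text does not give), and a real tool would add windowing, peak interpolation and partial tracking:

```python
import cmath
import math

def strongest_partials(frame, sample_rate, n_partials=3):
    """Naive DFT peak-picking: return the n strongest partial frequencies
    (in Hz) found in a mono frame, coarsely mimicking a 'partial reading'
    step. O(n^2), for illustration only."""
    n = len(frame)
    mags = []
    for k in range(1, n // 2):  # skip DC, stay below Nyquist
        s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append((abs(s), k * sample_rate / n))
    mags.sort(reverse=True)  # strongest bins first
    return sorted(freq for _, freq in mags[:n_partials])

def freq_to_midi(freq):
    """Quantise a frequency to the nearest equal-tempered MIDI note,
    the kind of mapping needed to notate detected partials as harmony."""
    return round(69 + 12 * math.log2(freq / 440.0))
```

A frame containing sinusoids at 220 Hz and 660 Hz yields those two frequencies, which quantise to MIDI notes 57 (A3) and 76 (E5).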

Keeping the first line of the poem Explicação da Eternidade as our example, consider the pronunciation of "só" in the three dialects: in the northern-coastal pronunciation the low back vowel [ɔ] is transformed into the diphthong [wɔ], while it remains equally stressed in northern Alentejo and Azorean speech. The same happens with the phoneme [em] of "tempo": it becomes the diphthong [jɛ̃] in the north, remains [ɛ̃] in Alentejo, and opens into [ɛm] in the micaelense dialect.

An interval study of the harmonies represented in figures 8 and 9 reveals the presence of certain harmonic subgroups that persist across different dialects. This happens because vowels have formant regions common to all speakers, which makes it possible to establish relations of variation and, in some cases, of translation of the interval content of each harmonic field created. Just as in verbal discourse, it is the harmonic nuances that create the distinct elements identifying each dialect, while still forming the same linguistic/harmonic unity.

Figure 9 - Harmonic representation of the vowel [ɔ] of "só" in the central-interior dialect.

2.3 Timbre, form and textual correlations

A eternidade existe is scored for flute, clarinet (alternating with bass clarinet), harp, piano, string trio (violin, viola and cello), percussion (marimba, tam-tam, snare drum and suspended cymbal) and voice (mezzo-soprano). Since the aim of the cycle is to develop musical materials representative of the dialects of the Portuguese language, the instrumentation was not chosen according to any criterion specific to that goal, for the intention is that the resulting musical objects should themselves be capable of functioning within different musical "dialects".

Accordingly, for each instrument or instrumental combination, timbral solutions are sought that match musical objects to linguistic objects. To this end, the software Orchids (Ircam, 2014) offers extensive instrumental solutions at the levels of register, playing technique and types of sonority for the instrumental representation of sound signals, including those originating in verbal discourse. Its use in the composition of this work was based on a map of timbral organisation imposed by the text itself: first, the patterning of noises according to their criteria of greater or lesser harmonic definition, that is, the voiced and unvoiced consonants; second, the spectral emphasis of frequency bands, that is, the representation of the vowel formants in an instrument or group of instruments; third, the prosodic definition of the phoneme, word, clause or phrase, corresponding to different points of greater tension or relaxation in the domains of timbre, register, contour and articulation.

As has already been shown, although they present lower intensity levels in the harmonic spectrum, it is the consonants that are responsible for defining the rhythmic and metric parameters of the work. Since they are regarded as noise, given their high degree of harmonic indefinition, it is noise that is responsible for the timbral and formal structure of the piece. In this sense, three types of presence of these objects were considered: the first relates to the unvoiced consonants [p], [t] and [ʃ], and works analogously to the attack and articulation of a given sound - they are onset transients of the vowel or voiced consonant that follows them; in the second, this attack is also a sounding body, that is, it possesses some harmonic definition, being related to the voiced consonants [ɾ], [s], [ʂ], [ʑ], [n] - they correspond to the sounding body of a given object; the third comprises the noises produced in the phonation of consonants that end a word or phrase, whether voiced or not - they work as ending transients of the sound.

Noise thus presents itself as the central element of the overall organisation of the musical discourse, for the timbral categorisation outlined above is reflected in the overall discourse of the text and, consequently, of the work: the excerpt of the poem begins with a predominance of unvoiced consonants - attack; moves to a more sonorous discourse (the result of the articulation of [d] and [n]) - body; and ends once again on unvoiced consonants ([ʂ] and [t]) - decay and final transients, thus forming the macrostructure of the piece.

The vowels behave as elements of differentiation. On the one hand, they arise as the result of the timbral fusion of different instruments, grounded in the registers of formant definition. On the other, these regions work as band filters governing the entry of new consonants or vowels, that is, of new noises and harmonic regions. Within this scope, the diphthongs and the glides are further distinguished. The different dialects are then represented musically in three distinct ways: individually, where they demonstrate their musical identity; as echoes of one another, where they exhibit their differences; and simultaneously, where they merge into the single language they represent.

Since this is a work for acoustic forces, less conventional sounds of each instrument are used to recreate situations of noise articulation, while traditional playing is used as the element sustaining the sounding body. Figure 10 is representative of how the musical discourse can work. In this case, the prosodic rhythm of the central-northern dialect serves as the articulation basis of the orchestral discourse, over which the three phonemes "por", "si" and "só" are articulated according to the harmonic analysis of the micaelense dialect. The string articulations in Bartók pizzicato, together with the étouffé piano and the marimba with wooden mallets, articulate the noise [p], followed by tremolos and trills over the harmony of [r]. The connection between the phonemes is obtained by means of the gong, scraped with a metal rod, and the harp glissando, highlighting the high elements of [ss], followed by [i] and [o]. This is thus an example of the unification of the materials obtained through FFT and transient analysis with the dialects studied.

Figure 10 - Composition resulting from the fusion of the different materials analysed: fusion of discourses in "por si só".

3. Conclusion

Through the linguistic study of a given text, an in-depth study is proposed of the potential and sonic qualities of the results of spoken discourse in European Portuguese. To this end, the different oral linguistic signs are transposed into their acoustic characteristics and, with computer-assisted composition, the best way is sought of representing their potential as musical objects.

Once the work in the domain of the phonetic study is done, questions arise concerning the representation of sonorities that belong to the sphere of noise and that so strongly define verbal discourse. The characteristics of each noise type are studied and framed under the prisms of harmony, form and timbre, meeting the metre imposed by the text.

As mentioned, this type of work does not aim to produce a model of acoustic or electronic speech synthesis, but rather to find paths towards the phonetics responsible for the auditory identification of the region of origin of the person producing it.

This is a topic that has been the object of increasingly active research, above all in the domain of information and communication theories, with the goal of creating software tools that allow the computer to perceive different types of discourse more equitably. Examples include the work developed by Obin (2011) on the analysis and modelling of speech prosody, as well as the composition-assistance tools developed at Ircam (such as Orchids, or several vocoders).

Within linguistics, too, a special interest has developed around a better definition of the elements on which regional dialects rest. This stems from the fact that a given language is no longer considered to have places where it is spoken well and others where it is spoken less well (as Cabeleira, 2006, shows), since geographical conditions strongly influence the various changes a language undergoes, and it is in this way that it remains alive (as noted by Mateus, 2005b).

Joining methodologies from the areas of linguistics and musical composition may be the way forward, uniting processes and exchanging information in an interdisciplinary attitude of producing new scientific objects in both fields.

Notes

[1] In Speakings, too, "making the orchestra speak" does not correspond to a representation of the semantic values of speech through computer-based musical processes, but rather to the emphasis of non-verbal structures of discourse, produced both by the instrumental writing and by the real-time electronics (Nouno, Cont, Carpentier & Harvey, 2009).

[2] Consonants may be classified according to the following parameters: point of articulation - bilabial, labiodental, apico-dental, alveolar, palatal, velar; manner of articulation - plosive, fricative, lateral and vibrant; nasality and orality; voiced and unvoiced (Cristófaro-Silva, 2012; Instituto Camões, 2015).

[3] Glides are phonemes characterised by being neither vocalic nor consonantal, standing halfway between vowel and consonant (Dubois, 2014, p. 308). Examples are [j] and [w] in [serju] (sério) and [agwa] (água), respectively; having characteristics similar to the vowels [i] and [u], they are distinguished from these by being shorter and by "always occurring after the vowels with which they form falling diphthongs" (Instituto Camões, 2015).

[4] Recall the importance of the consonants mentioned at the beginning of this article, namely the exchange of /v/ for /b/ and the production of /tʃ/ in the definition of the northern-coastal (setentrional) dialect.

References

Cintra, L. (1972). Nova proposta de classificação dos dialetos portugueses. Boletim de Filologia, XXII (1964-1971), pp. 81-116.

Gee, E. (2013). The notation and use of the voice in non-semantic contexts: phonetic organization in the vocal music of Dieter Schnebel, Brian Ferneyhough and Georges Aperghis. In C. Utz & F. Lau (eds.), Vocal Music and Contemporary Identities. New York: Routledge, pp. 175-201.

Mateus, M.H.M. (2005a). A mudança da língua no tempo e no espaço. In Mateus e Bacelar (orgs.), A língua portuguesa em mudança.

Mateus, M.H.M. (2005b). Estudando a melodia da fala - traços prosódicos e constituintes prosódicos. Palavras - Revista da Associação de Professores de Português, 26, pp. 79-98.

Obin, N. (2011). MeLos: Analysis and Modelling of Speech Prosody and Speaking Style. PhD thesis, Université Pierre et Marie Curie - Paris VI.

Peixoto, J.L. (2002). A casa, a escuridão. Lisboa: Temas e Debates.

Reyes, J., & Muñoz, M. (2015). The influence of text in computer music composition: an approach to expression modeling. Retrieved 20 July 2015 from https://ccrma.stanford.edu/~juanig/articles/textInComp.html

Saussure, F. (2006). Course in General Linguistics. Illinois: Open Court Publishing Company.

Segura, L., & Saramago, J. (2002). Variedades dialectais portuguesas. In M.H.M. Mateus (org.), Catálogo da Exposição Caminhos do Português. Lisboa: Biblioteca Nacional, pp. 221-240.

3.4 Autómatos da Areia (1978/84), Lendas de Neptuno (1987) and Oceanos (1978/79) by Cândido Lima: how concrete sound becomes other in the composer's first electroacoustic works

Maria do Rosário da Silva Santana - Unidade de Investigação para o Desenvolvimento do Interior, Instituto Politécnico da Guarda, Portugal

Helena Maria da Silva Santana - INET-MD, Universidade de Aveiro, Portugal

Abstract vão exteriorizando ideias, formas e materiais ao longo da


sua produção artística. De notar a reutilização sucessiva
The work of Cândido Lima and his creative process are que nos é proposta de uma forma contínua. Revelando-se
imbued with the principles of unity and identity who spend no novo, o material não perde a sua identidade, pois que
all their production. For him, the work has to have a a combinatória no sentido lato e estrito, o indeterminismo e o
"nuanced link", a sound unit achieved through diversity controlo na sua ampla acepção convivem com estruturas
and simultaneously identity, it is imperative that the livremente tratadas ao longo do devir musical. A
listener can develop a sense of identity while enjoying a fenomenologia do som, a monadologia como filosofia do
piece yours. intervalo, do som-timbre e da cor, da espacialização e do
tempo teatral e cinematográfico, tudo coexiste com técnicas
Through the works - Automatic Sand (1978/84), Neptune diversas oriundas da prática musical específica e de outros
Legends (1987) and Oceans (1978/79) - we aim to show domínios (Lima, 2002, p.79),
how this correlation is manifested in its early examples of
transpondo-se na obra em quadros expressivos de
electroacoustic works as well as the various techniques
inigualável beleza e rigor. É nestes quadros que o autor
and discursive compositional proposes that these musical
mostra a sua mestria na construção de estruturas
pieces shows. These works are part, according to the
musicais tendo por base um só intervalo que, assim,
author, of "a series that has its origins in the fine arts and
surge como monada definidora de toda a estrutura
the ongoing transformation processes (topology) [whereas
musical da obra. É nestes quadros obra que o autor
here] music is interpreted as energy, such as speed, as
espelha o conhecimento das técnicas compositivas mais
pure movement, as "beings" in itself ". To characterize
tradicionais como a repetição, a variação ou a
these beings from a technical point of view, stylistic and
transposição, mas também a derivação ou a mutação, e
aesthetic allow us to contextualize these works not only in
as combina com princípios de construção oriundos de
the author's production, as national production, defining
outros ramos do conhecimento como as ciências
the processes and means of building unity and identity on
nomeadamente a biologia, a física, a matemática, etc.
these musical pieces. Understand the nature of its sound
as well. Lima has a sound nature arising from the use of Autómatos da Areia (1978/84), Lendas de Neptuno (1987)
concrete sounds. Understand how handles is also our goal e Oceanos (1978/79) enquadram-se na sua “época das
and our proposal. Telas”. Esta referência alude a um conjunto de obras que
se apoia no conjunto das obras denominadas de Toiles, e
Keywords: Automatic Sand, Neptune Legends, Oceans,
que para o autor são “painéis que se cruzam, como os
Cândido Lima, Contemporary Portuguese music
tranquilos tapetes aquáticos dos mares de Bissau,
memórias de 1966 a 1968” (Lima, 2002, p.88), altura em
Introdução que permaneceu na Guiné, na Guerra do Ultramar. Se
Depois da audição e análise, estudo e reflexão, de esta realidade surge, ora oculta ora manifesta, em toda a
diversas obras de Cândido Lima, verificamos que a sua produção musical, não podemos escamotear o seu
construção de um universo sonoro de autor, leva o interesse pelo sempre novo, pelos “novos meios” de
compositor a uma demonstração de lealdade não só à produção e manipulação sonora, pela tecnologia de uma
noção de sensorialidade, como a um conjunto de ideias e forma geral [1] e pelas outras artes [2]. Assim se expressa
ideais enquanto ser pensante e questionador do universo e se espelha o autor, utilizando, desbravando,
que o rodeia. Neste ser e fazer, entra em diálogo com o aprisionando, entregando não só sons, como cores,
público através da obra que, por ele escrita, é, de forma identidades, alteridades, tanto de si, como em si, e no
inquestionável, destinada a ele. Desta forma, a obra é, outro, do outro... É neste fazer artístico, ou musical, que a
exteriormente, um processo de impressões e agitações, relação se dá e a interação se exprime, na unidade e na
choques que se descrevem na perturbação que se cria a identidade que aqui apontamos como elementos de
nível cerebral, e na necessidade que o compositor tem de análise, conduzindo uma dialética que, aqui expressa, se
exteriozar esses elementos, partilhando-os com o público; pretende esclarecedora para o entendimento da obra de
partilha interiormente ligada a inúmeros processos Cândido Lima.
mentais que se exteriorizam como bastante complexos
entre si. A sua obra como que está, assim, carregada de Como o som concreto se torna outro nos
aparentes contradições, que seguem uma linha primeiros trabalhos electroacústicos do
composicional bem definida mas nunca limitada. Saturada compositor
de referências históricas e enquadrada dentro de uma
linhagem musical própria, a sua obra impele as fronteiras Concomitante com a sua expressão musical nas três
do novo e do moderno, construindo-se outra. E assim se obras referidas, Cândido Lima, cujo espírito criador
eaw2015 a tecnologia ao serviço da criação musical 90

irrequieto busca resposta nas diversas manifestações almejando o reconhecimento da descoberta, do uso
artísticas do seu tempo, interessa-se pelos meios primeiro da técnica, se determina em obras singulares e
audiovisuais e pela música electroacústica como forma de originais (à época, e sobretudo, em contexto nacional).
atingir o irrepetível em termos de criação. Assim,
Por outro lado o uso de sons concretos e a sua
aparecem na sua obra exteriorizações que denotam o
transformação ao longo do seu dizer artístico leva, no
gosto pelas artes no geral [3]. São leituras que resultam
nosso entender, aquilo que Menezes em seguida
de relações especiais de sons e espaços e os
expressa:
transformam em várias das suas dimensões. Nesta
relação, e em particular para Toiles, o autor refere que A bem da verdade, o que se tem aqui é o esboço de uma
necessária dialéctica entre Matéria e Forma, dialéctica cujas
foi pensada enquanto obra sonora, enquanto espaço sonoro relações serão preconizadas por Schaeffer em vista de uma
e espaço plástico. Tem uma série de rolos em forma de concepção “concreta” da música. Ao realçar o processo que
pergaminhos com várias partituras, com a linear, a 3 consiste em repetir um elemento sonoro até à perda de sua
dimensões, com 5 cores para destrinçar as 5 famílias das identificação e, com essa, de sua identidade, e cujo método
cordas…mas foi pensado, de facto, como se eu pudesse não é nada mais nada menos que o da saturação semântica
estar ao mesmo tempo a escrever em pauta e em clave e ao (saturazione semantica de Fernando Dogana) ao qual se
mesmo tempo a desenhar aquilo e a fazer uma espécie de refere Fónagy no domínio da linguagem verbal, Schaeffer
Kandinskiana ou talvez Mondriana ou aquele outro… afirma que “todo fenómeno sonoro pode, pois, ser visto
[pensamos nós que se refere a Malevitch] que é uma espécie (assim como as palavras da linguagem ou pela sua
de quadro branco a partir do qual se iriam fazendo significação relativa, ou por sua substância própria. Enquanto
iluminações na partitura… (Azguime, 2004, s.p.) predomine a significação, e na medida em que se opere
sobre esta, há literatura, não música”. Para que se possa
O conjunto das obras revela-se fundamental para o nosso
“esquecer a significação”, isolando o “fenómeno sonoro em
trabalho, sendo que algumas são reutilizadas, si”, Schaeffer descreve-nos duas operações preliminares:
manipuladas, transformadas, reescritas, ou não, nas “Distinguir um elemento (ouvi-lo em si, por sua textura, sua
obras que agora nos propomos tratar. Assim, e do seu matéria, sua cor)”; e repeti-lo. Repita duas vezes o mesmo
conjunto, verificamos que Toiles II (1978-80; música por fragmento sonoro: não haverá mais evento sonoro, haverá
computador (UPIC-I, fita magnética) aparecerá em música (Menezes, 1996, p.18).
Oceanos, e Toiles IV (1978-80; música por computador
A reutilização de materiais, produzida a partir de leituras
UPIC – I e II, síntese analógica) em Lendas de Neptuno.
distintas e transformadas é uma constante nos grandes
Toiles III (1978-81; música por computador UPIC-II, fita
autores do século vinte. Cândido Lima não foge a esta
magnética) permanecerá, originalmente, como peça
regra e algumas das suas obras são disso o exemplo.
autónoma, sendo que o seu processo criativo contempla a
Segundo o autor,
determinação também de um espaço visual, pictural e
plástico. os materiais de Lendas de Neptuno e de Autómatos de Areia
repousaram em arquivo ao longo de 10 anos, aparentemente
Enquanto ser criador, o Homem comunica com o seu irrecuperáveis, como aconteceu com Meteoritos (1973-74)
semelhante na moldagem de pensamentos e ideias, na para piano e fita magnética. Reouvidos, foram assumidos
criação de utopias e quimeras que se juntam em espaços pelo compositor como um documento, como um risco, como
comuns de audições internas e externas, na confluência uma aventura. Essas obras constituem uma espécie de
de seres que se aglutinam e conjugam para a emanação manifesto sobre a problematização do fenómeno sonoro face
à pureza absoluta das técnicas recentes: pela subversão da
de sons singulares e originais. A plasticidade da partitura,
infalibilidade tecnológica por fenómenos periféricos,
representando a fluidez do gesto criador, faz renascer o parasitas, ou por acidentes que a própria máquina gera ou
Homem enquanto criador. E a plasticidade surge na recebe, e que o homem rejeita como fator negativo de
interligação de sons, de cores que renascem e se organização estética [...]. Impurezas fundiram-se e
transformam, moldam, adaptam e se transfiguram nos fragmentaram-se em personagens, num desafio à alta
desejos da criação. E o movimento transcorre destas fidelidade dos meios de sínteses, de produção e de difusão,
ações enérgicas cujos movimentos se entrelaçam em pela recuperação e integração de resíduos, de restos, de
telas e teias cujos fios são as palavras ocultas dos detritos, de fugas de som [...]. [Parafraseando Picasso que
afirma “se não procuro encontro”, Cândido Lima diz “não
pensamentos inquietos de um pensador de emoções.
recuso, integro”] (Lima, 2002, p. 89).
Despontam assim obras onde “a fluidez, a moldagem, a
contração e a dilatação em todos os sentidos, surge como Neste sentido, o homem é um ser mutante cujo
uma extensão das transformações clássicas” (Lima, 2002, entendimento de si e do mundo se altera em face do
p. 88). conhecimento que possui. Assim, o real e o ilusório, o
Modular (moldar) o movimento de Oceanos, modular (moldar) belo e o torpe, a apropriação ou rejeição dos objetos
um feixe de sons em Autómatos de Areia, modular (moldar) artísticos na obra de arte resultam da leitura que o autor
vagas de som em Lendas de Neptuno; com botões, com tem de si e do outro, da evolução que lhe é impressa no
potenciómetros, com lápis eletromagnético, com tábuas sonho e nas realizações que faz da obra e da arte. E o
gráficas. Moldar o movimento, moldar o som sinusoidal o belo altera-se, o feio integra-se e a obra renasce em
complexo, modelar uma textura ou uma massa sonora e dizeres ocultos, ditames da criação. Vejamos como
dominar o mundo que se esconde nas técnicas e nas artes perscrutar unidade e identidade no ambiente dinâmico e
da harmonia e do contraponto (Lima, 2002, p. 89),
diverso em que se constroem e se definem as obras
eis o que Cândido Lima procurou e enfrentou. E o propostas.
homem/criador revela-se, mostra o dom que possui de
conceber. Nele, e relativamente à construção e definição
da obra deste autor, procuramos a unidade e a identidade
num conjunto específico de obras. Também o compositor,
eaw2015 a tecnologia ao serviço da criação musical 91

Autómatos de Areia sonora podem servir a fins menos previsíveis, a percursos


mais criativos. (Pousseur, 1970, p.102)
Obra composta entre 78 e 84, com a duração de 11’20’’,
apresenta, segundo o compositor, No nosso entender para Lima também.
configurações automáticas programadas e não programadas
[...] ordenadas com sons estáticos e pequenas texturas Lendas de Neptuno
escritas ao computador. Curtas fórmulas no registo “médio” e De uma maneira geral, os processos de Lendas de Neptuno,
“grave” (electrónico), sons e configurações no registo agudo [obra de 1987, com a duração de 11’12’’] seguiram o mesmo
(computador) intercalam-se com longos fios sonoros, estes percurso, mas, aqui, o compositor agiu da seguinte maneira:
também no registo sobre agudo, obtidos pelo baloiçar e pelo em primeiro lugar escreveu uma série de páginas de música
friccionar de quatro seixos ao longo das cordas de um piano; no computador (UPIC-A), de que resultou uma pequena peça
como distinguir o som mecânico e o som concreto (o piano e de 8 minutos (1978), que, com 8 instrumentos, constitui a
os seixos), o som analógico (o sintetizador) e o som digital obra Mare-a-mare (1980)” (Lima, 2002, p. 91).
(os computadores) é um desafio para o ouvinte. Cada uma
destas interações resulta da improvisação e do controle Estreada nos Encontros Gulbenkian de Música
prévio, pelos potenciómetros, na electrónica, em primeiro Contemporânea pelo Grupo de Música Contemporânea
lugar, pelo domínio do “contraponto”, no trabalhar dos seixos, de Lisboa, sob a direção de Cândido Lima e Jorge
depois pela escolha final e pelo controle das misturas, Peixinho ao piano, esta obra reflete mais uma vez os
sucessivamente” (Lima, 2002, p. 89).
princípios de unidade e a identidade enquanto meios de
Assim se dá corpo a uma obra que desenvolve uma construção e definição de obra que transparecem nas
estrutura estático-pontilhista numa estrutura estático- obras em análise. Permitimo-nos esta afirmação pois a
contínua construída com base no som glissando. unidade e a identidade transparece na maneira como o
Segundo Shaeffer entramos no domínio de perda de compositor determina uma e outra obra, definindo e
significação por parte da matéria sonora. Como o próprio redefinindo materiais que surgem de um núcleo base não
afirma: mutante, assim como na forma como emprega a obra
primeira para determinação da seguinte. O compositor
Se extraio um elemento sonoro qualquer e o repito sem me
preocupar com a sua forma, mas fazendo com que a sua
não pretende, no nosso entender, restituir a obra original
matéria varie, anulo praticamente esta forma, e ele perde sua modificando certos detalhes, mas reinventá-la guardando
significação; somente sua variação de matéria emerge, e a mesma distribuição dos espaços, embora tendo por
com ela o fenómeno musical (Schaeffer, 1973, p. 21). base um movimento gerador musical diferente.
Posteriormente desenvolvida num continuum estático, a Unificadas por uma estrutura que unifica e determina a
harmonia resultante destaca-se numa frequência importância de cada uma no interior do todo, as
dominante que agrupa todas as outras num centro interações surgem graças às similitudes melódicas,
polarizador. Este centro aglutinador cristaliza as forças rítmicas e tímbricas, determinando a conciliação de
criadoras e geradoras do movimento abrindo caminho à elementos musicais diversos, assim como a sua
inovação e à polarização criativa. Autómatos de Areia sobreposição e transmutação. A sua manifestação em
permite que o autor nos traga a inconstância na conjuntos instrumentais múltiplos origina, em função das
constância, a tensão na distensão, o som e o ruído, a características de cada instrumento, ou meio de criação,
harmonia e a enarmonia num continuum criativo, onde a diferentes hierarquias, e consequentemente, estratos de
sucessão de estádios sonoros se concretiza de forma significação. Lima procura sempre a coerência, a unidade
contínua e gradual. O discurso nunca se estabelece num e a identidade. Através da citação, no caso autocitação,
abrupto sonoro. A clareza e subtileza das estruturas realiza um conjunto de obras que integram o antigo e o
assomam dos princípios técnicos e estéticos que o autor novo, o reconhecível e o inatingível, o processo e a
patenteia em toda a sua produção artística. A tensão emoção.
cumulativa desenvolve estados de ansiedade crescente Por outro lado o computador, por princípio, não compõe. É
que não ultrapassam todavia o suportável. O sonoro o compositor presente em todos os momentos da escrita
criado é tenso, não possibilitando um estado de deleite musical que elabora a obra musical. A obra que assim
relaxado. A natureza sonora do objecto não o permite pois nasce, será o produto de um homem criador atento aos
que o som-ruído é integrado e assumido enquanto veículo novos meios de informação e comunicação e cuja
de uma interação expressiva e artística, do tipo homem- originalidade e individualidade de pensamento
máquina. O erro é aqui manifesto objecto, e como o autor permanecem suas. Instrumento de auxilio à elaboração
afirmou: “não recuso, integro” (Lima, 2002, p. 89). Por desta obra, o sistema UPIC, foi concebido por Iannis
outro lado leva-nos a pensar que o autor revela uma Xenakis quando o compositor pesquisa não só nos
forma de estar compatível com as afirmações de domínios da música por computador mas, e também, nos
Pousseur pois através do uso de sons concretos parte domínios da síntese sonora [4]. Neste contexto, surge a
para o uso e aplicação de formas de utilizar o som pré- componente electrónica da primeira obra Mare-à-Mare
gravado mais ousadas e definidoras de um espírito (1978-80; para fita magnética e 8 instrumentos – fl, cl, trp,
inquieto e criativo. Para Henri Pousseur: vl, vla, vc e pno), o visual determinando e influenciando a
A “música concreta” tem o grande e incontestável mérito de produção do sonoro e do musical. A plasticidade do som e
chamar a atenção tanto sobre as possibilidades gerais dos do sonoro não se concretiza agora somente ao nível das
meios electroacústicos, conhecidos já antes mesmo da suas estruturas internas mas, e também, nas suas
guerra, mas sim sobre os novos horizontes musicais determinações visuais. Em finais dos ano 80,
tornados acessíveis a ela pela invenção da gravação
magnética. Ela mostra assim que os meios de reprodução a “partitura” do computador é injetada novamente no
sintetizador analógico VCS3, de que resultou uma nova obra
eaw2015 a tecnologia ao serviço da criação musical 92

de características completamente diferentes: texturas, timbres, ritmos, etc. Mais tarde, para a inauguração dos Novos Paços do Concelho de Matosinhos, Cândido Lima cruzou as duas obras, isto é, misturou Toiles IV (inédita na época) com a obra original, ou obra-mãe, Mare-a-Mare” (Lima, 2002, p. 91),

reutilizando, reavaliando, reafirmando, reagrupando e reescrevendo, reformulando os espaços sonoros e plásticos dos seus materiais. A nível discursivo o seu desenvolvimento faz-se de forma bastante lenta e gradual contribuindo para o estatismo das estruturas musicais propostas. Do ponto de vista sonoro, o universo fruído está datado relativamente à tipologia de sons propostos pelo autor, bem como às técnicas de criação, definição e manipulação do som e do discurso musical. O ato criativo massivo, a continuidade, o glissando, o relevo textural criado pela manipulação dinâmica sobressaem, sendo fruto não só de uma intenção, como de uma aniquilação expressiva. O homem revela-se aqui sujeito ao poder e aos limites técnicos e expressivos da máquina. E nós perguntamos se não é na superação dos limites que se adivinha o génio. Não é, afinal, no saber utilizar-se que a inteligência se mostra?

Em Lendas de Neptuno, bem como nas outras obras em estudo, o arquétipo água domina. Segundo o compositor, sabemos que: “de uma série de obras que têm como tema a água como manifestação simbólica (Oceanos, Autómatos de Areia, Meteoritos, A-mèr-es, Mar-a-mare), Lendas de Neptuno impôs-se a posteriori como uma paisagem nítida do mundo subaquático dos grandes oceanos”. Cândido Lima afirma que sente esta música como “um écran onde passam as cenas da vida real ou imaginária” como que num “painel sonoro” que nos envolve em torno do mar. “Como um documentário ou uma reportagem, mesmo se é integralmente na sua essência uma obra abstrata” (Lima, 2002, p. 91-92).

Neptuno, equivalente latino do deus grego Poseidon, “é o senhor dos mares e das águas calmas [...]. O mar como mito, paisagem ou espaço onde se desenrolam o maravilhoso e o trágico, é o cenário de marinheiros e de pescadores [...]. um dos arquétipos com que nasço e durmo é o mar” (Lima, 2002, p. 91). O mar, fonte de vida, fecunda e imensa, onde a descoberta e a fluidez das águas descobrem paisagens únicas e mutantes a cada instante com a luminosidade translúcida dos raios do sol em mares de sal, é o quadro cinético, onde o prisma de cores se revela na imaginação do autor. Recordações de uma África distante, como que (ir)real, juntam-se-lhe. E dessa inquietude se renova no sublime e na outorga de si ao mundo, presente nas obras que, pétalas orvalhadas, são o retrato das perdas, das lágrimas cerradas em duras recordações. A fluidez líquida das águas transparece na obra. A continuidade a isso conduz, assim como a natureza dos materiais projetados instrumentalmente ou em fita. E essa fluidez induz o autor num discurso onde a confluência de materiais de origens diversas se transforma pelo uso de instrumentos de auxílio à criação musical vindos de outros domínios que não o musical. Assim, e

das várias fases de misturas da macro-composição [o compositor salienta] o recurso à transformação que geradores electrónicos impuseram à música original composta ao computador – Mare-à-Mare, sendo a mistura definitiva resultado da combinação da obra-mãe com a transfiguração que dela foi feita – Toiles IV. As vagas que emergem de grandes massas sonoras têm as suas raízes numa visceral ligação às artes do espaço, o que conduz à plasticidade (tactilidade) do som (Lima, 2002, p. 92).

O compositor socorre-se destes elementos e desta sobreposição de técnicas para fazer renascer uma obra única onde o espaço é um elemento preponderante na definição de um discurso novo e original, reutilizando, reafirmando, dizendo. Simultaneamente trabalha o som ao nível da micro e da macroestrutura, pois que não só cria o som génese como, e depois de o trabalhar seja de forma individual ou em camadas sucessivas, assoma ao nível máximo de modificação da estrutura primeira da obra. No dizer do autor,

foram percorridos vários níveis ou graus de micro e de macro-composição, desde a síntese do primeiro som digital e analógico, passando sucessivamente por camadas intermédias até às etapas finais de misturas e de espacialização. [Cândido Lima interessou-se] tanto pelo som em si, enquanto matéria bruta ou neutra, como pelos processos composicionais da sua elasticidade no devir musical. Aqui, a música é interpretada como energia, como velocidade, como movimento puro enquanto dimensões em si (Lima, 2002, p. 92).

O som torna-se assim o gérmen criador de estruturas sonoras e musicais usadas para formalizar o conjunto das obras tanto ao nível dos seus conteúdos como da forma, seguindo no nosso entender o pensar de Menezes.

Utilizando-se de objetos sonoros concretos provenientes dos mais diversos contextos e, em consequência, irrevogavelmente evocadores de situações significantes, a aventura concreta pretendia, apesar de todo reconhecimento potencial, constituir uma música essencialmente não-referencial, na qual nos encontraríamos defronte da recusa absoluta de toda e qualquer linguagem (Menezes, 1996, p. 22).

E assim se manifesta a plasticidade do som aliada ao condicionamento dos espaços e ao desenho de formas musicais, bem como da intenção criativa e expressiva do seu autor. De complexidade variável, as suas obras exigem do ouvinte a inclusão para as perceber e fruir. E assim surge também a inovação em espaços de criação mais tradicionais onde

descobrir as fronteiras entre a liberdade e a submissão aos modelos e aos sistemas – gramaticais e tecnológicos –, descobrir onde está a subversão do determinismo e da racionalidade, é um desafio [que o autor] coloca ao ouvinte, ao analista [e a ele próprio] (Lima, 2002, p. 92).

Um desafio a todo aquele que se interessa por arte no espaço contemporâneo e que já vem de longe. Para Schaeffer, já nos anos 40 e 50:

Mesmo quando o material do ruído me garantia uma certa margem de originalidade em relação à música, eu era... reconduzido ao mesmo problema: privar o material sonoro de todo contexto, dramático ou musical, antes de conferir-lhe uma forma (Schaeffer, 1952, p. 46).

Oceanos

Oceanos, segundo a mitologia grega, é o primeiro deus das águas. A obra que Lima nos propõe foi composta entre 1978 e 1979, para fita magnética, e tem a duração de 26’25’’. Na sua apresentação, surge como transformação da obra Toiles II (1978-80; música para
computador UPIC-I e fita magnética). No catálogo de obras, Cândido Lima lista um conjunto de duas obras denominadas Oceanos. A primeira intitula-se Oceanos Cósmicos (1975-76-79, para orquestra e fita magnética). Segundo refere:

os princípios em que se apoia esta composição são os que provêm, por analogia, da noção de topologia, ramo das matemáticas que estuda as transformações das figuras sem perda das suas características essenciais. [No caso de Oceanos] trata-se de uma modulagem das massas sonoras, da pulsação rítmica e da estratificação de vários níveis de composição quanto à geração do som (electrónica tradicional – som linear – e informática – som massa), e quanto à arquitetura, formada por dois grandes planos: a pulsação ininterrupta que se transforma sem perda da sua individualidade, e as grandes massas de sons, ora estáticas, ora em movimento, que se transformam de forma contínua [...]. [As grandes massas de som] escondem, submersos, espaços e micro organismos que estão na origem da obra: um filamento aleatório descendente (um nome escrito musicalmente), um motivo de dois sons repetido ao longo de intervalos de 3ª, um som, ao mesmo tempo ascendente e descendente [...] (Lima, 2002, p. 99).

Por outro lado, constatamos, no conjunto destas obras, uma dissensão clara relativamente ao que o compositor vinha a desenvolver ao nível da sua produção musical para o universo puramente instrumental ou vocal. Aquilo que a tecnologia podia trazer e fomentar enquanto sonoro foi aqui agarrado e reforçado como objecto de arte. Nota-se, no entanto, uma limitação clara ao nível da progressão de um discurso veiculativo que perpassa a capacidade de determinação das estruturas formais da obra; a construção de um objecto fruível nas dimensões propostas, só possível graças ao engenho tecnológico que, no entanto, não se sobrepõe ao engenho arte. E assim:

Com efeito, na busca dos materiais que se identificassem directamente com as matérias, restava ao músico concreto a única alternativa de se livrar das convenções próprias da música escrita, na qual a matéria (ou seja, o instrumento musical tradicional) era abstraída e se anulava em favor do material – ou ainda (considerando-se o factor timbre como componente fundamental do material), na qual o instrumento se anulava enquanto fenómeno causal para aí constituir um factor estrutural intrínseco do material.

Para a música concreta, matéria e material constituiriam apenas um único aspecto da mesma coisa. Contrariamente às leis da escrita musical, para as quais a matéria engendra o material para informar a obra – cujo processo de concretização, para se atingir este último estágio, evoca um caminho percorrido pela ideia musical até à sua realização acabada, que vai da matéria à forma por meio do material e das suas evoluções –, a música concreta procurava anular não somente o instrumento enquanto fenómeno causal, mas sobretudo o material em si, devido à sua identificação absoluta com a matéria, renunciando-se à forma e às suas significações (Menezes, 1996, p. 26).

E Lima, o que faz? Utilizando o instrumento e a fita, o som concreto e o transformado, informará ele a obra? Engendrará a matéria o material, nas palavras de Menezes e no fazer do músico concreto? Renunciará ele à forma?

Sabemos que a tipologia de sons não se distancia muito daquela proposta em Autómatos da Areia, conferindo unidade e continuidade entre as duas obras na produção do autor. A unidade e a identidade transparecem, por outro lado também, na reutilização de obras (na totalidade ou em parte) ou em objetos sonoros de natureza diversa, presentes em outras obras do autor. A continuidade parece aqui absoluta, e a circularidade também. A presença avassaladora do sonoro pode interpretar-se numa tentativa de dominar o ouvinte que, como que imerso num oceano de som, se dilui a pouco e pouco no imaginário e nas sensações que assim se propõe vivenciar. Sublinhamos que a obra se propõe sobretudo para quem a frui; se determina no tempo e no espaço de uma fruição; e que cada momento fruitivo é único e irrepetível. Assim sendo, quais dos conteúdos emergentes afinal influenciam mais a fruição da obra? Os do autor ou os dos fruidores? Afinal, quem constrói? Quem propõe? E quem define? E a identidade, quem a manifesta ou determina?

Conclusão

A obra de Cândido Lima é as mais das vezes problemática e profundamente livre. Os seus processos criativos são caracterizados pelas ideias de unidade e de identidade. Para o compositor, a obra tem que possuir uma “conexão matizada”, como referimos, ou, por outras palavras, uma unidade sonora conseguida através da diversidade e, ao mesmo tempo, da identidade. A obra tem que ser coerente em termos de identidade e esta identidade tem que ser transparente. De acordo consigo, é absolutamente imperativo que o ouvinte formule uma noção de identidade ao apreciar uma peça sua. Fruindo e analisando as primeiras obras eletroacústicas de Cândido Lima percebemos que essa identidade se expressa nas obras e se percebe na sua respectiva fruição. Manifesta-se não só nas técnicas como nos materiais, de modo perceptível relativamente não só ao conteúdo musical mas também à sua forma. As obras referidas neste artigo emergem de um núcleo duro de matérias que foram sucessivamente propostas, analisadas, decompostas, recompostas, reavaliadas e submersas em mundos sonoros aparentemente diversos do original. Contudo, a identidade nunca é traída, bem como a sua unidade. A obra surge num ato de construção e composição baseado na responsabilidade de se declarar como tal.

No dizer do compositor:

criar, [...], é um acto puro de liberdade e de responsabilidade, onde os limites se estabelecem por códigos de ética pura e de princípios normativos impostos pelos próprios mecanismos da linguagem e do comportamento psicológico do indivíduo. Para mim, compor não é um acto de combate público, é um acto de combate interior. Compor, como acto de competição ou de concorrência, valor tão impregnado no ser humano, está nos antípodas do caminho que foi, e será, o meu acto criativo. Se a minha natureza produz arte que pode gerar conflito, será da sua essência provocá-lo por forças de trajecto interior do indivíduo e do trajecto exterior dos que rodeiam esse trajecto interior. A expressão musical é, para mim, um acto livre que não se submete às construções e normas da sociedade, quaisquer que elas sejam: tendências, escolas, instituições, grupos, círculos, sistemas, políticas, pressões, censuras, economias, mecenas, protectores (encomendadores...): o risco da liberdade inata e das fronteiras entre a essência do indivíduo e a existência do indivíduo” (Lima, 2003, s.p.).

A sua obra é, por isso, repleta de aparentes contradições, seguindo uma linha composicional definida mas nunca
limitada. É uma obra impregnada de referências históricas e enquadrada numa linhagem musical muito bem definida, alargando as fronteiras do novo e do moderno. Neste fazer se manifestam os materiais ao longo dos tempos. A reutilização sucessiva e simultânea dos mesmos ergue o novo. A identidade encontra-se nos objetos, a unidade nos princípios de manipulação do proposto, sendo que, e no nosso entender, o seu sonoro, em Autómatos da Areia, Lendas de Neptuno e Oceanos, se projeta como modulação contínua de elementos, de uma mónade, manifestação do arquétipo água, tão próximo afinal do ser português.

Simultaneamente, a forma como desenvolve algumas das suas estruturas a nível rítmico e temporal, a forma como elabora e estratifica os seus conteúdos linguísticos e imagísticos, a forma como concebe, transforma e diversifica os seus coloridos sonoros e a violência de alguns dos gestos musicais e de alguma da gestualidade compositiva que necessita para realizar os seus intentos, ou seja, o sucesso na veiculação das realidades que constrói e leva a percepcionar pelo público, revelam um ser imbuído do saber e fazer de um mundo em transformação.

Esta diversidade apresenta-se não só ao nível da forma, como dos conteúdos. Criativo, Cândido Lima transmuda-se constantemente na obra de arte sendo que, e através do ato de criar, surge sempre igual, contudo, sempre diverso. Se olharmos ao conjunto da sua obra, a diversidade e multiplicidade de formas e cores apresenta-se também nas diversas formas que utiliza para disseminar o seu pensar. Assim,

As palavras ditas à volta destas obras, mesmo neste contexto, são o menos importante, porque nenhuma chave pode revelar o último sopro de uma obra de arte, por mais modesta que seja. Estas palavras podem orientar, mas nunca substituirão a própria essência da criação musical, a natureza da sensibilidade, da emotividade, da “situação” e da “circunstância” do emissor e do receptor, do compositor e do ouvinte. Nenhuma ferramenta de análise o conseguirá! (Lima, 2003, p. 23).

No entanto, e como nos diz Boulez:

A bem da verdade, no coração de qualquer evolução do pensamento musical encontra-se a escritura; não se pode escapar desta a não ser sob o risco de precariedade e de obsolescência (Boulez, 1989, p. 377).

Permitimo-nos dizer que de toda a criação musical.

Notas

[1] Salientamos o interesse que manifesta mas também o uso que faz de diversos meios como o computador, o sintetizador analógico VCS3, ou o UPIC, por exemplo, meios disponíveis na altura e que se revelam fundamentais na construção das obras em análise.

[2] Em particular as artes plásticas, o teatro e a arte multimédia.

[3] Relativamente a este aspecto em particular, o compositor refere duas peças: Polignos em Som e Azul (1988-89; para 16 instrumentos e fita magnética) e Toiles I (1977-78; para orquestra de cordas). Em relação a Toiles, Cândido Lima diz que foi pensada atendendo ao plástico e ao sonoro, ao espaço e ao visual, ao carácter pictural de uma partitura, onde o traço se revela som, e o som imagem, onde o ouvinte se perde na noção de tempo e viaja nas imagens que lhe chegam de imaginários outros que não o seu.

[4] Neste sentido, Xenakis cria em Paris o EMAMu – Équipe de Mathématiques et d'Automatique Musicales – em 1966, que se tornará o CEMAMu – Centre d'Études Mathématiques et Automatiques Musicales – em 1972, em parceria com a École Pratique des Hautes Études. Este centro investiga nos domínios da síntese de som, da psico-acústica, da informática, da pedagogia e da composição musical. Graças a uma equipa de pesquisadores pluridisciplinar, constitui um instrumento interdisciplinar para a extensão da composição musical a outros domínios do conhecimento, permitindo a generalização dos procedimentos de produção musical através dos meios técnicos disponíveis. No CEMAMu, Xenakis cria o UPIC – Unité Polyagogique du CEMAMu –, um sistema informático gráfico. Graças a este sistema, o compositor unifica o processo de composição: a micro-composição – síntese do som – e a macro-composição – o todo – fundem-se sobre o mesmo princípio, o mesmo método: o gráfico.

Bibliografia

Azguime, Miguel (2004), Entrevista a Cândido Lima.

Boulez, Pierre (1989), Jalons (Pour une décennie), Paris: Christian Bourgois Éditeur.

Lima, Cândido (2002), “Livre. Sem Limites (Quase…)”, in Cândido Lima, Porto, ed. Pedro Junqueira Maia, Porto: Atelier de Composição.

Lima, Cândido (2003), Origens e Segredos da Música Portuguesa Contemporânea – Música em Som e Imagem, Porto: Edições Politema.

Maia, Pedro Junqueira (2002), Cândido Lima, Porto, Porto: Atelier de Composição.

Menezes, Flo (1996), “Um Olhar Retrospectivo sobre a História da Música Electroacústica”, in Música Electroacústica – História e Estética, São Paulo: Editora da Universidade de São Paulo.

Pousseur, Henri (1970), Fragments théoriques I sur la musique expérimentale, Bruxelles: Éditions de l'Institut de Sociologie, Université Libre de Bruxelles.

Schaeffer, Pierre (1952), À la recherche d'une musique concrète, Paris: Éditions du Seuil.

Schaeffer, Pierre (1973), La musique concrète, Paris: Presses Universitaires de France.
IV. Espacialização | Spatialization
4.1 Polytopes de Iannis Xenakis: determinações arquiteturais de som, luz e cor
Helena Maria da Silva Santana Dep. de Comunicação e Arte, Universidade de Aveiro, Portugal
Abstract

In Polytopes, we find many examples of spatial sound architectures composed of different figures and transformed sound constellations, exchanged and combined on an ongoing basis throughout the work. The mobility of these figures and sound architectures is present in their structure, location and spatial position. The opposition and fusion of sound elements, a constant, creates a moving space by imposing a multidirectional and multidimensional listening.

In these works, Xenakis also associates light with sound, creating a second space. It consists of points, lines and luminous figures, using bright flashes and laser beams. In the Polytopes, Iannis Xenakis conveys a new way of thinking about space and about the work. For the time in which they were built, they reveal the innovation of the composer's way of working. Simultaneously, they introduce a light component, which sets them apart from a simple musical activity and displays an audiovisual dimension unusual at that time.

Through this paper we intend to show how these art objects differ from each other, as well as how they differ from those built at the time. The pre-recorded component carries the innovative nature of the sound brought to fruition, and two sound universes take place: on the one hand electronic, on the other concrete. Reflecting on these aspects, we will try to understand how to classify them. We also reflect on the size and shape of the proposed works, trying to understand what in them has become dated.

Keywords: Iannis Xenakis, Polytopes, Diatope, Sound, Space.

Introdução

Ao longo da história percebemos várias formas de espacializar o som e a obra musical. O eco, a organização responsorial, a distribuição das diferentes fontes sonoras no local de concerto obedecendo a um esquema prévio, e a combinatória entre as várias formas de espacializar o som são disso um exemplo. A forma da sala, essencial, conduz à construção de espaços próprios, com objectivos e formas novas. O espaço torna-se, a par do tempo, um elemento constitutivo da obra musical, sendo concebido segundo exigências estéticas e acústicas que se revelam determinantes na definição de um estilo. No século XX eleva-se à categoria de parâmetro do discurso, sendo trabalhado e articulado segundo princípios e técnicas próprios, obedecendo a princípios estéticos e filosóficos rígidos. A sala/local de concerto, fundamental na percepção da obra musical, será estudado, transformado e empregue de uma forma nova. A disposição dos instrumentistas em diferentes locais e a diferentes níveis em altura, assim como a difusão de música pré-gravada, permite ao compositor a criação de formas e arquiteturas espaciais e sonoras de relevo. Concebidas por estruturas sonoras cuja origem se encontra nas interações criadas entre as diferentes fontes sonoras, estas formas e arquiteturas são moduláveis e móveis quanto à sua estrutura e mobilidade. Em Polytopes, encontramos diversos exemplos de arquiteturas sonoras espaciais compostas por diferentes figuras e constelações de som. Estas arquiteturas são continuamente transformadas, permutadas e combinadas ao longo da obra, do seu tempo e espaços, tornando-se assim móveis. Esta mobilidade está presente não só na sua estrutura, como na localização e posição espacial. A oposição e fusão de elementos sonoros, uma constante, cria uma arquitetura espacial móvel impondo uma escuta multidireccional e multidimensional. Nestas obras, Xenakis associa ainda a luz, criando uma segunda arquitetura espacial composta de pontos, linhas e figuras luminosas, utilizando flashes luminosos e raios laser [1].

Como nos afirma Sterken:

In the Polytopes, Xenakis inserts - by means of loudspeakers and flashing lights - several layers of light and sound into existing architecture or a given historical site. The resolution of these layers is such that they almost allow him to draw, or even to construct in these superimposed, immaterial spaces. Transposing his abstract and geometrical vocabulary (based on the axiomatic entities of point and line) to the sphere of light and sound in the Polytopes, Xenakis realizes a global and parallel formalization in the spaces of architecture, light and sound. Doing so, he pursues in a certain way Kandinsky's theories as exposed in Point and Line to Plane, where the latter developed the vocabulary of abstract painting as based on the elementary notions of point, line and movement. (Sterken, 2001, p. 267)

Pontos, linhas, planos constituem o material da obra de arte, tornando-se a materialidade de um dizer sensível.

Os Polytopes – timbres e luzes no espaço

Os Polytopes são obras chave de uma arte do espaço e do tempo onde o tempo rege e se submete ao espaço, “onde o espaço é ordenado para revitalizar o tempo” (Revault D'Allones, 1975, p. 19). Para criar os Polytopes, e para além de combinar som e luz, Xenakis organiza e utiliza elementos musicais e cénicos simples, e conhecidos, inovando não no tipo de materiais, mas, e sobretudo, na forma como os utiliza. “Ce qu'il faut changer, ce n'est pas la réalité, c'est la façon de s'en servir” (Revault D'Allones, 1975, p. 72). Os materiais, e as ferramentas, funcionam de maneira distinta, sob a condição de que “cette intelligence où se retrouveront l'artiste, l'oeuvre et le spectateur, s'en empare” (Revault D'Allones, 1975, p. 72). Nestas obras, o compositor mostra-nos

une réalité inconnue, incongrue, extra-terrestre, bien au contraire; ce qu'il montre a un air de quotidienneté, tout ou moins dans les sociétés technologiquement avancées. Ce qui est radicalement autre, c'est la façon dont fonctionne le système des stimuli sensoriels, les sons et les lumières, qui ne sont plus régis par l'utilité, mais par les lois qui, même si elles ne sont pas comprises dans leur structure en
mouvement, apparaissent immanquablement comme tout autres que les lois du faires et de l'utile” (Revault D'Allones, 1975, p. 72).

Xenakis concebe vários Polytopes. No seu artigo Towards a Space-Time Art: Iannis Xenakis's Polytopes, Sterken informa-nos que o termo Polytopes

is the collective name of a series of multimedia installations, including sound, light and architecture, conceived by Iannis Xenakis during the 1960s and 1970s. The word "polytope" is Greek; in this context it has to be interpreted literally: poly means "a lot, several," while topos means "place." [E que,] every Polytope bears the name of the site or the city where it has been installed. (Sterken, 2001, p. 262)

Neste fazer, Xenakis realizou 7 espetáculos onde vai desenvolvendo diferentes formas de formalizar, concretizar e exteriorizar as suas ideias e processos criativos: Polytope de Montréal (1967), Hibiki-Hana-Ma (1969-70), Persépolis (1971), o 1º Polytope de Cluny (1972), o 2º Polytope de Cluny (1973), La Légende d'Eer (1977 e 1978) e Polytope de Mycènes (1978).

Na sua determinação, Xenakis trabalha pela primeira vez com raios laser, utilizando o computador para controlar as estruturas concebidas. A posição de escuta é estática [2]. Pela disposição das colunas, o compositor favorece a audição de diferentes relevos sonoros criando arquiteturas sonoras espaciais, uma escuta multidirecional e multidimensional do som. Como esculturas espaciais, são um movimento de espaço e tempo, onde formas visuais, acústicas e arquiteturais valorizam o espaço acústico, abrindo novos horizontes de criação e fruição.

Polytope de Monte Real

Foi em Monte Real que foi criada a primeira destas obras. Para 4 orquestras disseminadas pelo público, Polytope de Montréal faz parte de um espetáculo realizado num espaço físico particular – o Pavilhão Francês da Exposição Universal de Monte Real –, uma sala de exposições composta por várias galerias a diferentes níveis, com um sistema de escadas que se ergue a partir de uma zona central. Neste espaço, Xenakis fixa uma estrutura de cabos de aço semelhante ao esqueleto do Pavilhão Philips [3]. Esta estrutura composta de 5 superfícies será o suporte de 1200 flashes [4]. Xenakis concebe com esta estrutura uma arquitetura "transparente". Preenchendo o vazio do interior do pavilhão, serve de suporte aos pontos luminosos. Baseada em formas rígidas, esta estrutura permite criar relações entre as suas subestruturas e os diferentes andares do Pavilhão. As formas, móveis, obedecem a leis de progressão matemáticas. Xenakis diferencia cada andar pelos ritmos de iluminação que são característicos às diversas zonas espaciais. Depois de estabelecidos, cruzam-se criando uma dinâmica própria na definição e coloração do espaço circundante.

As cores, em número de 5, são introduzidas a pouco e pouco, e uma por uma: primeiro o vermelho, depois o amarelo, o branco, o verde e o azul. No decorrer da obra, são deslocadas no espaço caracterizando zonas diferentes à medida que a obra se desenvolve, sendo tratadas como alturas sonoras. Depois de atribuído um ritmo de iluminação a cada uma das estruturas, andares e cores, o compositor cria relações entre os diferentes elementos. Operações matemáticas controlam o material através da criação de conjuntos e subconjuntos de pontos luminosos. A composição luminosa, concebida com base na teoria dos conjuntos, está registada numa partitura luminosa, uma sequência de ordens nos seus princípios base comparável aos rolos de papel dos pianos mecânicos [5]. Assim, quando projetada num écran constituído por centenas de células fotoeléctricas, representando cada uma delas um circuito luminoso da estrutura de cabos portadora dos pontos luminosos, dá origem ao espetáculo [6].

Em oposição encontramos a parte musical, constituída por glissandos. Difundida no decurso das representações por diversas colunas disseminadas no espaço, a música, contínua, é independente do espetáculo luminoso, contrastando com este último. “La lumière occupe le temps, car son effet dépend du rythme et de la durée, alors que la musique donne force à l'espace” (Matossian, 1981, p. 272). Contrastando com a complexidade do programa luminoso, a banda sonora é bastante simples. Contendo vários timbres modulantes e pulsações muito variadas, percebe-se em contraponto com o ritmo e a densidade dos pontos luminosos. O jogo espacial de diferentes materiais sonoros é uma constante, resultando numa longa modulação tímbrica criada pelas diferentes formas de ataque, dinâmicas, registos e harmonias. Encontramos igualmente diversas texturas e cores espaciais.

Hibiki-Hana-Ma

Concebido para o Pavilhão da Federação Japonesa do Ferro e do Aço da Exposição Universal de Osaka em 1970, Hibiki-Hana-Ma é um espetáculo de luz e som, um polytope. Difundida por 800 colunas dispersas por toda a superfície do pavilhão, a banda magnética foi gravada no estúdio NHK de Tóquio. Paralelamente, assistimos a um espetáculo de raios laser concebido por K. Usami. A parte sonora compreende uma banda magnética que alterna sons não trabalhados de instrumentos de cordas com sons trabalhados de vários instrumentos da música orquestral e tradicional japonesa como o koto, que Xenakis transforma criando um bombardeamento metálico de sons. Rudes e ásperas, as texturas apresentam conjuntos de formas de ataque e de ritmos que variam constantemente de forma anárquica.

Persépolis

Persépolis, espetáculo de luz e som com música electroacústica para banda magnética de 8 pistas, foi uma encomenda do V Festival Internacional de Artes de Chiraz – Persépolis, Irão. A obra, criada em 26 de Agosto de 1971 nas ruínas do palácio de Darius I – o Apadana –, tem uma duração de 56 minutos. O espaço físico do palácio oferece ao público a possibilidade de se movimentar em 6 áreas de escuta diferentes, sendo a música difundida por um conjunto de colunas dispostas em três círculos nas ruínas do palácio. Na montanha que se encontra em frente, e perto dos túmulos reais de Darius II e de Artaxerxes I, encontram-se vários projetores que difundem as suas luzes para o universo. No cume, estão dispostas várias fogueiras. Ao longo da montanha, vários pontos de fogo aparecem lentamente, descendo de forma mais ou
menos lenta e desordenada a montanha. Vários grupos de jovens transportam tochas de fogo criando linhas de fogo que se dispersam e movimentam pela montanha, formando um conjunto de figuras geométricas e de constelações de luz. Em seguida, juntam-se entre os dois túmulos e escrevem com o fogo, de forma legível do Apadana, o palácio, a frase – “Nós trazemos a luz da terra”. No final do espetáculo, 150 estudantes do liceu da vila, trazendo tochas de fogo, passam a ravina, entram pelo público, desaparecendo a pouco e pouco na floresta de colunas do Palácio de Darius.

Gigantesco, Persépolis é um espetáculo "ouvert sous le ciel de l'Orient, et incarné par des enfants, par des hommes de demain" (Revault D'Allones, 1975, p. 22) e uma obra “abstraite, dense, complexe, dont la puissance abrupte investit autant les sens que l'intellect. [Para Xenakis], elle correspond au rocher sur lequel ont été gravés des messages hiéroglyphiques ou cunéiformes, d'une manière compacte et hermétique, au point de ne délivrer leurs secrets qu'à ceux qui veulent et savent comment les lire" [7].

Polytopes de Cluny

Na Europa, Xenakis produz alguns destes espetáculos: o primeiro em 1972 e o segundo em 1973, para as termas romanas de Cluny em Paris. O espaço em forma de T oferece ao compositor um novo desafio: como conceber uma estrutura para fixar os pontos de luz e som. A estrutura, dupla, e diferente da de Polytope de Montréal, permite ao compositor uma grande liberdade na disposição das estruturas luminosas e sonoras. No entanto, o compositor dispõe estes pontos de uma forma bastante simples, utilizando uma estrutura ortogonal. Em seguida questiona-se: o que fazer com todos estes pontos? Como estruturar tudo isto? Que figuras sonoras e luminosas utilizar? Quais as melhores face ao resultado pretendido?...

Inicialmente concebe um conjunto de figuras, estruturas e

continua ao longo de toda a obra. O espectador, confrontado com a simultaneidade de diferentes realidades sonoras e visuais, performativas e formativas de uma nova dimensão artística, contribui ativamente para a construção da obra. Neste sentido, Sterken afirma:

The audience has to contribute actively to the construction of the sense of these art works; the spectator himself has to effect the operation of synthesizing the poly-temporality of the proposed spectacle. Therefore, instead of focusing the spectator's attention by simply playing with his reflexes or his corporality, or hypnotizing him with sequences of familiar images, Xenakis's abstract and multi-layered Polytopes try to open the audience's mind to diversity and simultaneity. This way, these electronic poems express the idea of an intelligent space, long before it became a fashionable concept in contemporary architectural theory. (Sterken, 2001, p. 271)

La Légende d'Eer

La Légende d'Eer, uma das obras mais longas de música electroacústica do compositor, tem a duração de 46 minutos e foi criada num espaço de características únicas – o Diatope [8]. A concepção desta obra foi inspirada pela leitura de vários textos: A Lenda de Er da República de Platão, Poimandre de Hérmes Trismégistre, um texto sobre o infinito incluído em Pensamentos de Pascal e um texto sobre a Supernova de Kirschener. Na nota de programa para o espetáculo dado no Diatope, Xenakis escreve uma afirmação controversa e muito importante. Nela o compositor afirma:

Music is not a language. Every musical piece is like a complex rock, formed by ridges and designs engraved within and without, that can be interpreted in a thousand different ways without a single one being the best or the most true. By virtue of this multiple exegesis, music inspires all sorts of fantastic imaginings, like a Crystal catalyst. (Xenakis, 2006, p. 261)

Esta afirmação de Xenakis releva, segundo Meric,

an important change in music conception, which has little by little gained ground during the twentieth-century. When he
elementos que nomeia metaforicamente de nuvens, said that “Music is not a language”, Iannis Xenakis probably
criticizes the idea that music can only be structured - and
labirintos, rios, lagos tentáculos... Inicialmente trata-se
conceived - depending on a temporal and chronological axis.
somente de utilizar termos que descrevam uma vontade. In other words, like language, music should be a succession
Esses termos dão-nos a aparência de um fenómeno "en- of entities, of well-defined phenomena. In this way, we can
temps". Em seguida, Xenakis inicia um trabalho que lhe use these entities to establish a global sense of the musical
permitirá formalizar as estruturas segundo uma work. (Meric, 2011, p. 2).
representação científica. Tomando como exemplo o rio,
Estes fatos, bem como aquele em que percebemos que
podemos afirmar ser constituído por moléculas, um
no Diatope, se assiste à remoção virtual de fronteiras,
conjunto de partículas muito pequenas devendo
remoção essa que foi iniciada com a concepção do
necessariamente ter uma corrente, uma direção e um
Pavilhão Philips, fazem com que a experiência artística
débito específico. Cada molécula, representada por um
fruida seja única. Neste fazer, e segundo Nunzio: “o chão
conjunto de dados que pode ser tratado matematicamente
também parecia estar ausente, pois foi feito de quadrados
através das leis da estocástica e representada por pontos
de vidro, o que fez com que o visitante parecesse estar
de luz, é transportada a vários pontos do espaço
flutuando na sala” (Nunzio, 2006, p. 981). Por outro lado,
favorecendo, no entanto, sempre uma direção
determinada, sem impor, contudo, um itinerário. o espectador, suspenso pelo som, pela luz, pela
envolvente espacial, adquire uma nova percepção e
Para além dos pontos de luz, surgem ainda 3 raios laser, vivência do espaço que se vivencia, para Solomos e
um verde, um vermelho e um azul. Reflectidos por Raczinski, nas suas três dimensões (Solomos e
espelhos animados segundo 2 planos perpendiculares por Raczinski, 2002). Por outro lado, verificamos que o
pequenos motores eléctricos, desenham no espaço espaço do Diatope era aberto para o exterior. De vidro,
figuras geométricas. Realizadas à fracção de segundo, a também, eram seis colunas. Por outro lado, a concha
sua variação cria diferentes esculturas cinéticas. A banda externa era de uma membrana de vinil vermelho
sonora, composta no Estúdio Acousti em Paris, possui semitransparente, que filtrava e modulava não só o som,
diversos espaços de timbres modulados de forma como a luz e a temperatura. A filtragem, passiva, era
eaw2015 a tecnologia ao serviço da criação musical 99

complemented by another, active structure: an inner membrane, a metal mesh, to which the light and sound sources were fixed.

According to P. Oswalt, the sonic and visual space of this work, of this polytope, was not organised by masses and concavities; instead it develops fields of energy of different densities across the space and time of the work (Oswalt, 1991). Commissioned by the Westdeutscher Rundfunk, La Légende d'Eer comprises electronic sounds conceived at CEMAMu with the aid of digital-to-analogue converters, micro-sounds, concrete sounds of various traditional instruments such as the African mouth harp and the Japanese tsuzumi, and noises of objects and materials struck against one another [9]. In conceiving this work, Iannis Xenakis makes the listener and his hearing penetrate the proposed sonic, visual and imagistic space. At the same time, the listener is led to confront another space, the space that emerges from the sounds themselves.

In order to grasp and understand the composer's way of approaching musical composition and the sonic phenomenon, it is important to note that Xenakis works the physical and virtual space in which the sound and the sound complex are built in a way that was innovative for its time. Thus, to understand the different aspects of his electronic and electroacoustic work it is first necessary:

to understand the advent of some Iannis Xenakis's composing techniques like, for instance, granular synthesis, stochastic systems, probability theory, cellular automaton, and so on. In these complex systems, the smallest constituent element cannot be listened as a musical data (and, most often, it is individually imperceptible). These objective elements don't constitute the musical phenomena, which can be listened to. […], we are not looking at the atomic structure, but we are looking at the endless number of "ridges and designs engraved within and without". In other words, we don't perceive a static musical structure composed of elements set in time, but we perceive a dynamical space. The smallest elements are only milestones (or an imperceptible spatial background). (Meric, 2011, p. 3)

At the same time, owing to the plurality and diversity of the sound sources, listening to the work permits and gives rise to a vast set of mental images. The work is based on and "joue sur cette pluralité de significations, incluant et associant l'infiniment petit et l'infiniment grand, le quotidien et le sublime" (Xenakis, 1995, s. p.). Epic, it plunges the audience into the sound matter that surrounds it. Xenakis wants to "traiter des abîmes qui nous entourent et parmi lesquels nous vivons. [The composer tells us:] quand j'ai composé la légende d'Eer, je pensais à quelqu'un qui se trouverait au milieu de l'Océan. Tout autour de lui, les éléments qui se déchaînent, ou pas, mais qui l'environnent" (Xenakis, 1995, s. p.).

The continuity of the work is absolute, achieved through the use of seven simultaneous sonic strata that progress imperceptibly in relation to one another. The formal structure, a classical one, develops in an arch, moving from silence to a maximum of tension and back to the initial silence; what predominates is the modulation of timbres and of a sound that moves continuously, remodelling the space into spirals and atmospheres of sounds of various roughnesses.

The sound spiral is obtained by transforming a rotational movement of the sound. This action affects its register, its intensity and its speed of rotation. Combining and modifying these various parameters yields different spirals. By modifying the register, the composer acts on the timbre, on the sound colour, making the spirals brighter or darker by using higher or lower frequencies respectively. If the rotation speed and kinetic energy are stronger, the spirals become ever more open, and vice versa. Modifying the intensity of a sound acts on the number of partials it contains: depending on whether this is higher or lower, the spirals become more or less brilliant. In La Légende d'Eer Xenakis combines several simultaneous spirals that overlap, rise and fall in space, more or less present, strong and independent according to the process of construction, the dynamics and the register used, giving rise to a contrapuntal texture.

Diffused from 11 loudspeakers dispersed throughout the space of the Diatope, in this work the localisation of sound in space, the volume changes of each of the 7 tracks of the magnetic tape, the state of the 1680 light flashes and the position of the 400 mirrors and optical prisms reflecting the 4 laser beams are commanded by the composer by means of an instruction sheet: a numerical description of the 1200 lighting orders for the various points of light, together with all the information concerning the position of the reflecting mirrors and prisms [10]. Its realisation involves a change of state every 25 hundredths of a second, decoded by a device that turns each description into a command. The light spectacle contains different mobile luminous configurations: points (light flashes) and lines (laser beams). The organisation of the different luminous movements, continuous or discontinuous, is governed by various mathematical functions, from functions of imaginary numbers to various probability distributions. The composer also produces mass effects, rapid movements of sets of points through the space and across the walls of the hall.

The more or less free projection of laser beams onto the ceiling of the hall and the control of the brightness levels of the various flashes produce a spectacle in which the curved surfaces of the tent's walls throw into relief movements of luminous points, and in which we observe a continuous movement of the two components of the spectacle, light and sound. Spaces of timbre and light cover and invade the entire visual and acoustic space of the tent, the Diatope. As Meric once again tells us:

Iannis Xenakis leans on different spaces—sound space, visual space and architectural space— that he creates. These different spaces, which are always dense or complex, are dependant and confront each other. So, he builds up a global perceptive space, which is extremely unstable. The audience cannot consider—listen and watch—the work—all the events—in its entirety. Each member of the audience can elaborate the work in a different way and from different experiences. So we can say that music—and its meaning(s)—emerges from the confrontation between the different dynamical spaces and listening (or imagination). (Meric, 2011, p. 3-4)
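The spiral motion described above, with rotation speed controlling how open the spiral becomes and register controlling its brightness, can be sketched numerically. The following is a minimal illustration, not Xenakis's actual control system: it traces an opening spiral trajectory and converts its angle into constant-power gains for a ring of loudspeakers (11 speakers here, echoing the Diatope's layout; the panning law, the opening coefficient and the function names are assumptions made for the example).

```python
import math

def spiral(t, rot_speed, r0=0.1):
    """Position on an opening spiral at time t: faster rotation -> wider opening."""
    angle = rot_speed * t                 # accumulated rotation, in radians
    radius = r0 + 0.05 * rot_speed * t    # radius grows with rotational speed
    return angle, radius

def ring_gains(angle, n_speakers=11):
    """Constant-power gains for a ring of speakers, given a source azimuth."""
    gains = []
    width = 2 * math.pi / n_speakers      # angular spacing between speakers
    for k in range(n_speakers):
        sp_angle = 2 * math.pi * k / n_speakers
        # angular distance between source and speaker, wrapped to [0, pi]
        d = abs((angle - sp_angle + math.pi) % (2 * math.pi) - math.pi)
        # cosine taper: full gain at the speaker, zero beyond one spacing away
        g = math.cos(min(d / width, 1.0) * math.pi / 2)
        gains.append(g)
    # normalise so the squared gains sum to 1 (constant perceived power)
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]

angle, radius = spiral(t=2.0, rot_speed=1.5)
gains = ring_gains(angle)
```

At each control tick the sound's azimuth advances and its spiral opens; only the two or three loudspeakers nearest the current azimuth receive signal, which is what makes the rotation audible as a movement around the audience.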

Polytope de Mycènes

Xenakis visited Mycenae for the first time at the age of fourteen, on a school trip. The natural beauty of the site fascinated him immediately; the ruins and tombs he found there left him deeply perplexed.

Ce que je voyais me paraissait familier, mais en même temps extraordinaire, comme appartenant à un autre monde. J'enfouis ce souvenir très profondément en moi. Puis, plus tard, dès que je fus libre de visiter le même endroit, conduit par ce que je sentais instinctivement comme quelque chose de nécessaire et de primordial, j'eus l'idée de tenter une "renaissance artistique" à l'échelle de la citadelle de Mycènes, un Polytope de Mycènes (Gill in: Fleuret, 1981, p. 294-295).

In this work, Polytope de Mycènes, the audience stands on the flank of a mountain facing the city. Between them lies a great valley from which Mount Elias can be seen. The work combines 18 sonic and dramatic points in space, recitations of Homer, hymns of Sophocles, verses of Euripides, choruses of Aeschylus, 12 DCA anti-aircraft searchlights, a procession of children, a herd of goats with bells, fire torches and a soundtrack. At the beginning of the spectacle, texts from Euripides' Helen are intoned by a choir of women and children [11]. Then, through a set of loudspeakers arranged so that the whole valley is flooded with sound, declamations are heard in the ancient language, later translated into modern Greek, along with several musical works, among them Mycènes Alpha for two-track magnetic tape, created especially for this artistic event, Persephassa and Psappha. The percussionists are placed in front of and around the audience.

From a stage that allows the sound to echo from one mountain to the other, several orchestral and choral works are performed, the spectacle ending with Oresteia for choirs and instruments. In parallel, a procession of children passes through the audience offering flowers (cf. Fleuret, 1988, p. 159-188, and Lacouture, 1981, p. 291-293). Conceived especially for this spectacle, Mycènes Alpha is the first work written with the UPIC system. Its macrostructure is based on linear glissandi and on arborescence.

The luminous part of the spectacle comprises several moments. It begins with the creation of a luminous fabric issuing from several anti-aircraft searchlights. Placed near the towns of Tiryns and Argos, they gradually form a pyramid of static light [12]. Next, a set of torches, points of fire, appears in the valley, drawing various plastic motifs. An immense fire rises at regular intervals on the summit of Mount Elias, and a film presenting the treasures of the ancient tombs is projected onto the walls of the city. Xenakis sends a herd of 200 goats up the mountain, creating yet another constellation [13]. A group of soldiers descending the mountain carrying lit torches announces the end of the spectacle. Polytope de Mycènes was the largest sound-and-light spectacle ever realised by the composer (cf. Matossian, 1981, ch. 11).

Conclusion

Alors on peut dire que Xenakis, souvent qualifié comme le dit M. Philippot de "cambrioleur de l'inspiration", a en effet dérobé au monde contemporain certaines données qu'il renvoie ensuite, mais organisées, restructurées, et l'opération finale de remise en spectacle n'est pas exempte de malignité. Il y a de l'ironie dans les Polytopes comme dans toute l'œuvre de Xenakis peut-être. – O. Revault d'Allonnes

In the Polytopes, sound is diffused throughout the entire site where the work is performed. Circular, spiralling, right to left, front to back or vice versa, the sonic movements exist through the displacement of sound in space, its control being carried out in several ways and in unusual venues such as the Diatope [14]. Xenakis also conceived open-air spectacles of light and sound (Persépolis and Mycènes), in which the sound is free, unfolding without constraint in space. Depending on where he stands, each spectator perceives the various sonic effects differently and may even have sensations contrary to those of another spectator placed at an opposite point. Unique in the way they conceive of and work with space, the polytopes allow the sound sources to be arranged at several different heights. This makes it possible to create specific spatial forms, with their own procedures and processes of unfolding in space and time. The joining of sound and light produces the fusion of two distinct architectures, one sonic and one visual. Surfaces, waves, spirals and surprising sonic movements throw the audience's attention towards different points in space, submerging it in a whirlwind of sound and light without precedent in the history of Western music. Transfigured by sound and light, the space is made plastic through the composer's magic.

We can see that one of the greatest concerns of the creators of the last century, the twentieth, is the conquest of space. Each work develops in that space a form of its own. In the Polytopes, and for Sterken,

Xenakis […] builds in light and sound space. His architecture is not defined solely by columns, beams or walls, but by atmospheres and energetic waves that provoke dynamic and spatial experiences". (Sterken, 2001, p. 270)

The creation of timbre spaces, their transformation and their movement through the physical and acoustic space of the performance site, through the metamorphosis of the different forms of sound attack, produce various timbres which, applied to specific textures, give rise to diverse colours and luminosities, resulting, in our view, in unique sonic and spatial experiences. Associating them with other elements, such as the extreme registers of the instruments and of the orchestra, the use of glissando, the superposition of rhythms of a stochastic type or otherwise, and the use of large instrumental or other forces, Xenakis conceives original sonorities and spatialises them in a new way. Timbre spaces are mixed, transformed, confronted or fused, producing vast sonic choreographies in which space becomes a function of timbre.

Through dynamics and forms of attack, phrasing, accentuation, rhythms, harmonies and diverse sound objects, the composer further distinguishes the various strata and planes that make up his music. The surfaces, sonic, visual or otherwise, possess well-defined and delimited timbral and rhythmic zones, obeying, at a second level of composition and analysis, a rhythm of their own, a rhythm of timbres. Our perception of the sound,
of the light, of the colour, of the texture and of the musical work is conditioned by our location in space and by the timbre employed as a means of characterising that same space. Space exists as a function of timbre, and timbre as a function of space.

Nevertheless, and despite these claims of ours, we wish to point out, as Meric does, that:

In most of the Iannis Xenakis's electroacoustic works, there was no stage, which the listener were facing and, so, there was no landmark like in a stereophonic situation, where the listener could spot a right side and a left side, the front (where the music normally happens) and the back (which normally remains silence). In the Polytopes or in the Philips Pavillon, the audience could look—and hear—in the direction that he choses, he could stand— or sit—where he wanted and could move around the structure. Furthermore, for La légende d'Eer and for Concret PH, the buildings created by Xenakis were asymmetric. At last, these works were conceived like multimedia art forms (Solomos 2005) (with sound, light, video screening, architectural structure): perceived space was not only sound space (visual space interacted with music and it forms a part of it). Here, we can say that Iannis Xenakis composes a dynamic space with no strong landmark. (Meric, 2011, p. 4).

Might this not prove to be yet another factor in the creation and hearing of his work? Might it not be a factor of provocation, felt and expressed in the work like so many others? We leave the provocation open...

Notes

[1] In some of these works the audience lies on the ground, finding itself "outside" the spectacle, in contrast to other works by the composer.

[2] The relation between music and architecture is very strong in the composer's work, notably between the glissandi of Metastaseis and the surfaces of this pavilion.

[3] 800 white and 400 coloured: 50% warm colours, 50% cold colours.

[4] If a flash lights up according to a given rhythm, it may modify its rhythm when it is invaded by another rhythm, or keep only what is common to the two. We find ourselves here in the presence of operations of mathematical logic: conjunction, disjunction or complementation. Xenakis also uses other mathematical processes, such as the calculus of probabilities, logical structures and group structures.

[5] When the film is projected onto the light-sensitive plate, the projector's beam passes through the white patch of the film and falls on a photoelectric cell that triggers the corresponding light circuit.

[6] When the film is projected onto the light-sensitive plate, the projector's beam passes through the white patch of the film and falls on a photoelectric cell that triggers the corresponding light circuit.

[7] Iannis XENAKIS, Persépolis, Philips 6521 045.

[8] The Diatope is a mobile structure that can be erected in any space, provided the space offers the minimum conditions to house it.

[9] The calculations needed to conceive the work were carried out at CEMAMu; the synthesis of the sounds, at the WDR studios in Cologne.

[10] Laser beams, thanks to the physical properties of light, allow multiple reflections to be obtained through various optical elements. The light beam can be sent several times from one point of the space to another without its intensity, colour or diameter being modified.

[11] The choir stands on a platform set up at the base of the town walls. At the summit stand the Acropolis and the Royal Palace of Agamemnon, which is flooded with light several times during the spectacle.

[12] These towns are ten kilometres from Mycenae.

[13] In the sky there are several fixed constellations; on the ground, several mobile constellations.

[14] Its characteristic shape is a primordial factor in the kind of sonic result conceived.

References

Fleuret, Maurice (1981), Regards sur Iannis Xenakis, Paris: éd. Stock.

Fleuret, Maurice (1988), "Il teatro de Xenakis", Xenakis, Torino: E.D.T. Edizioni di Torino.

Gill, Dominic (1981), "Le Polytope de Mycènes", in: Fleuret, Maurice, Regards sur Iannis Xenakis, Paris: éd. Stock, p. 294-295.

Lacouture, Jean (1981), "Le Polytope de Mycènes", in: Fleuret, Maurice, Regards sur Iannis Xenakis, Paris: éd. Stock, p. 291-293.

Matossian, Nouritza (1981), Iannis Xenakis, Paris: Fayard/Sacem.

Meric, Renaud (2011), "'Music is not a language...': Listening to Xenakis's electroacoustic Music", Proceedings of the Xenakis International Symposium, Southbank Centre, London, 1-3 April 2011, www.gold.ac.uk/ccmc/xenakis-international-symposium; accessed 12 May 2015 at http://www.gold.ac.uk/media/05.2%20Renaud%20Meric.pdf

Nunzio, Mário del (2006), "Obras multimídia: solução de Xenakis à apresentação pública de música eletroacústica", in: XVI Congresso da Associação Nacional de Pesquisa e Pós-graduação em Música (ANPPOM), Brasília.

Oswalt, P. (1991), "Iannis Xenakis' Polytopes", in: Arch+, no. 107, Aachen: Archplus Verlag.

Revault d'Allonnes, Olivier (1975), Xenakis/Les Polytopes, Paris: Balland.

Solomos, M. and Raczinski, J.M. (1999), La synthèse des arts à l'ère du multimédia. A propos du Diatope de Iannis Xenakis, http://www.iannis-xenakis.org/solom.htm

Sterken, Sven (2001), "Towards a Space-Time Art: Iannis Xenakis's Polytopes", in: Perspectives of New Music, Vol. 39, No. 2 (Summer, 2001), pp. 262-273, accessed 29 May 2015 at http://www.urbain-trop-urbain.fr/wp-content/uploads/2012/03/Iannis-Xenakis_s-Polytopes.pdf

Xenakis, Iannis (2006), Music and architecture, edited, translated and presented by Sharon Kanach, New York: Pendragon Press.

Discography

La légende d'Eer (1995), Xenakis, Iannis, AUVIDIS/SALABERT, MO 782058 WDR, CD.

Electronic Music (1997), Xenakis, Iannis, EMF CD 003, CD.

Persépolis (s. d.), Xenakis, Iannis, Philips 6521 045, CD.
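The rhythmic logic described in note [4] above, where flash patterns are combined by conjunction, disjunction or complementation, can be sketched directly. This is a minimal illustration of the idea, not a reconstruction of Xenakis's actual hardware: rhythms are modelled as on/off sequences sampled on a common time grid, and the two example patterns are invented for the demonstration.

```python
# Two flash rhythms on a common grid of time steps: 1 = flash on, 0 = off.
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 1, 0, 1, 0, 1, 0, 0]

# Conjunction: a flash survives only where both rhythms coincide
# ("keep only what is common to the two").
conjunction = [x & y for x, y in zip(a, b)]

# Disjunction: one rhythm is "invaded" by the other, keeping every flash.
disjunction = [x | y for x, y in zip(a, b)]

# Complementation: flashes exactly where the invading rhythm is silent.
complement_b = [1 - y for y in b]
```

Applied step by step to a grid of 1680 flashes, such Boolean combinations are enough to make one luminous rhythm absorb, interrupt or negate another, which is the behaviour the note attributes to the flash fields of the Polytopes.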

4.2 The relational interaction between constructed space and perceived space in the discourse of electroacoustic music, in the view of twentieth-century philosophers

Henderson J. Rodrigues Santos, Universidade do Rio Grande do Norte, Brasil

Abstract:

The spatial movement of sound, ordinarily called sound spatialization, has been thought about and practised in electroacoustic music since its inception. The importance of the spatial and timbral plane is undeniable not only for electroacoustic music but also for instrumental music, although in the latter this relationship is not always clear. Musical spatialization is constructed basically in two dimensions: the construction of compositional space, which involves the techniques and approaches to space adopted by the composer and performers, whether in real time or pre-recorded; and the listener's perspective, which directs us towards understanding some aspects of the possibilities and limits of the act of listening and of how spatialization fits the listening experience. This paper aims to present the relationship between these two dimensions, supported by the concepts of space of philosophers such as Martin Heidegger, Michel Foucault, Gilles Deleuze and Félix Guattari, in order to elucidate the interaction between the constructed and the perceived space, allowing us to delineate possible limits on the use of sound-spatialization techniques, based on the construction of an intelligible musical discourse. In this context, the spatial representation of sound is observed as a heterotopic place; its identification builds a network of rhizomatic relations, sometimes territorializing the sound qualities and sometimes deterritorializing them. This multiplicity of possible meanings is selected by the cognitive, physiological and subjective dispositions of the listener.

Keywords: Sound Spatialization, Musical Discourse, Electroacoustic Music, Gilles Deleuze, Félix Guattari.

Introduction

The compositional process of the last century developed through diverse approaches to sound and has been driven by a growing search for an ever more significant new. In the construction of the works of this period we wish to highlight the use of spatialization. The spatial movement of sound, ordinarily called sound or musical spatialization, has been practised and thought about within electroacoustic music (EM) since its beginnings, placing the former at the centre of the latter's discussion.

The intensified use of space as a formal element in music occurred particularly after the Second World War. Spatialization in electroacoustic music bears a direct relation to the musical gesture and to the intentionality of the structural construction of the work. We see this in Penha's words:

Placing sound in movement in space, whether during composition or in the diffusion of electroacoustic music, can be done in such a way as to convey an intelligible intentionality, forming a spatial gesture. [...] The relation between the musical gesture, the traditional vehicle of the composer's intentions, and the spatial gesture with which it is conveyed in space thus becomes particularly relevant in the analysis of the different perspectives on spatialization in electroacoustic music (Penha, 2014, p. 15)

Musical spatialization takes place basically in two dimensions: the compositional construction of space, which involves the techniques and approaches to space of the composer and the performers, whether in real time or pre-recorded; and the dimension of the listener's perspective, which directs us towards understanding some aspects of the possibilities and limits of the act of listening and of how spatialization fits the auditory experience. The listener's dimension seems to be highly significant for the composer's construction of the spatial perspective. Penha, showing how Denis Smalley's proposal centres on the listener's perception for the definition of the musical gesture, states:

In the particular case of purely electronic music, and in the absence of the movements of an instrumentalist, an unavoidable contribution to the definition of the musical gesture is the proposal of spectromorphology by Denis Smalley (1986; 1997). This too centres on the perception and hearing of the receiver, drawing directly on the typo-morphology (Schaeffer, 1966) and the reduced listening (Chion, 1983) proposed by Pierre Schaeffer. Spectromorphology refers to the evolution of the spectral content of a sound over time (Penha, 2014, p. 16)

It is important to stress that the process of appreciating the sonic aspects that unfold in space and time appears to be a more detailed form of the appreciation of timbre and texture. This is because the diffusion of timbre and texture cannot manifest itself detached from space, and their full expression therefore passes through the appreciation of the space that surrounds them; this relation permeates much of the work of Henriksen (2002). For this author, perceived and constructed space are located at an intermediate level of his explanatory model. According to him, there is a difference between the listening space and the perceived space, the latter being based on the interaction between the listening space and the constructed space. In this text we will treat perceived space as an act of musical comprehension on the part of the listener, this being, in the last analysis, a volitional act of the listener.

This context makes evident the need to elucidate the interaction between constructed and perceived spaces, which allows us to mark out possible limits on the use of sound-spatialization techniques, grounded in the construction of an intelligible musical discourse. We therefore feel the need to know how the theoretical parameters of the concepts of space of philosophers such as Martin Heidegger, Michel Foucault, Gilles Deleuze and
Félix Guattari contribute to understanding the scope of spatialization in music.

Space in the philosophical perspective of the twentieth century

Heidegger was a German philosopher known for refounding ontology and for subordinating space to time. Heidegger grounds himself in his phenomenological and hermeneutic method and builds an ample consideration of space, being-in-the-world (Dasein) and time in relation to language and art. For Heidegger, space becomes a question to be explained and made explicit, which is not the case with concepts such as place, locality or region, for example. This is because, in his phenomenological approach, the philosopher turns to the point of view of the being who experiences places, and is thus led to conceptualise two kinds of space, one related to a spatial ontology and the other phenomenological, as Saramago puts it:

For the author, unlike place, or topos, space, in the singular, is that which is devoid of time, of the heterogeneity of the world's landscapes, of the different directions; the invisible and intangible, even if easily objectified by the exact sciences. As a counterpoint to this space-object, there is the interior space of the consciousness of a subject who places himself "before" the world, establishing an interior-exterior schema quite contrary to the inseparable unity between Dasein and world, for example; a unity that always remained unaltered for Heidegger, even when broken by subjective proximity. In the places of the world there are spaces, but never space (Saramago 2008, p. 66).

Heidegger dwells especially on phenomenological space and seeks to understand the scope of present-day being-in-the-world and how places give rise to spaces. His thinking begins by noting the existence of place, referring to the things that are, in themselves, places. Places therefore exist prior to spaces, and are thus confounded with the things themselves, hence affirmed as being-place. Spaces are then the result of a need created by the place necessary to being-in-the-world. The work of art creates a distinct reality, commented on by Saramago:

Because they always refer to themselves and to nothing outside themselves, works of art install their own spaces, on the basis of their being "work-place", if we may say so. In the same way, […]

[…] (2012, p. 86), "we find ourselves through the actions we intend, which are nothing more than exercises of willing", and it is in this perspective that the action of being-there and of being-with-the-other becomes evident in the appreciation of art, that is, through the shortening of distances between beings, "because existence is interior movement: intentionality and volition" (Marandola 2012, p. 86). It is in this perspective that we can describe an ontological spatialization, not only of being but on the basis of it. Expanding on this theme, Marandola comments:

Heidegger speaks of a directed proximity, composed of distancing (the de-distancing founded on the possibility of approaching or diminishing distances), of the region or aroundness (the environment in which a particular thing can move) and of orientation (the guide of being-in-the-world). [...] Aroundness refers explicitly to spatiality, being directly associated, in Heidegger's thought, with place. [...] This spatiality has nothing to do with geometric localisation, being, on the contrary, temporal and spatial. This is because existence is interior movement: intentionality and volition. [...] Our willing bears a relation analogous to the being-thus of things, just as our intellect leads us to their essentia. (Marandola Jr 2012, p. 86-87).

The fact that space (and time) are linked to the interior and exterior perception of the being makes it possible to go beyond the notion of a physical space par excellence and to redimension the notions of objects, proximity, limits, place and territory. Physical distance can be formulated in different ways. In this perspective, art can redimension places, bring us closer or distance us, or even create new spaces.

Post-war modernity operationalises the multiplication of micro- and macro-technologies and the subjection of man and nature to scientific knowledge, turning them into objects of a metaphysical humanism. It is in this perspective that Foucault's and Heidegger's analyses of modernity and of the relations between the being and its spaces are situated. The management of the relations between the social and the political, individuality and collectivity, subjectivity and the geographical objectivity of the being implies movements that are regulated by disciplinary forces. Foucault describes this relation by defining the scope of discipline in the regulation of movements. Duarte states that:

In short, Foucault demonstrates that discipline is a form
as obras da escultura in-corporam em si lugares, instalando- de organização do espaço e de disposição dos homens no
os e abrindo a partir de si seus espaços (Saramago 2008, p. espaço, visando otimizar sua atividade, bem como é uma
69). forma de organização, divisão e controle do tempo em que as
atividades humanas são desenvolvidas, com o objetivo de
Heidegger amplia sua abordagem apresentando que não produzir rapidez e precisão de movimentos [...] Foi assim que
apenas a escultura, mas a música possui a capacidade de Foucault descobriu um corpo individual produzido pelo
evidenciar o espaço. Sendo assim “a possibilidade de se investimento produtivo de uma complexa rede de micro-
pensar a arte sem que se recorra à oposição banalizante poderes disciplinares, que atuavam de maneira a tornar
entre artes temporais e espaciais é aqui sutilmente possível a utilização dos corpos em nome da exploração
otimizada de suas capacidades e potencialidades (Duarte,
anunciada, uma vez que o som, para que seja ouvido,
2006, p. 111)
requer a mesma abertura de espaço” (Saramago 2008:
71). Neste contexto, a disciplina pode ser compreendida como
aquela imposta por relações sociais, políticas, etc., ou
Com Heidegger, o espaço não é visto como um ente
como a disciplina auto-imposta. Esta última nos interessa,
realizado e acabado, mas por meio de um processo de
pois demanda um ato de escolha, de filtragem cognitiva
dar-lugar, e assim o espaço é fluido, ora se evidenciando
da espacialidade possível em dado momento. Sendo
e ora se esvaziando, a este processo o filósofo chama de
assim, a espacialização musical só é percebida por meio
espacialização. A espacialização desenvolve-se por meio
de um ato disciplinar consciente e auto-imposto pelo
de ações de aproximação e de diminuir distâncias,
ouvinte em busca de elementos que dêem sentido ao
embora não necessariamente físicas. Para Marandola
fenômeno auditivo experimentado. A ação auditiva
eaw2015 a tecnologia ao serviço da criação musical 104
has already been described by Pierre Schaeffer, when he set out his concepts of the sound object and of the types of listening. Thus, more than a reduced listening, a self-imposed discipline on the part of the listener is required in order to support the natural gestalt cognitive process. However, spatialization, although an element intrinsic to the unfolding of a piece of music, does not always take an active part in the musical discourse; even if perceived in the gestalt process mentioned above, it may therefore have no relevance to the final meaning of the sound event being heard. It follows that spatialization can occur in two basic forms, as a discursive or a non-discursive element. These forms become elementary in the relation between constructed and perceived space.

For Foucault, the genesis of spaces lies in the understanding of possible places. These can be perceived in the institutional relations of society. Foucault describes the types of places as real, utopian and heterotopian. The latter are the "other places" of society, existing as nameable places but displaced from the realities that surround them, since they constitute places of multiple experiences and multiple meanings, where dimensions intersect and times are reduced to the moment not lived but extracted, set apart, divested, so to speak, of the continuous flow of reality. Space manifests itself in the construction and deconstruction of being-in-the-world, being moreover endowed with multiple meanings and therefore open to a whole range of adjectivations, as the philosopher exemplifies:

The space of our primary perception, that of our reveries, that of our passions, holds qualities that are, as it were, intrinsic; it is a light, ethereal, transparent space, or else an obscure, chaotic, saturated space: it is a space of above, a space of summits, or it is, on the contrary, a space of below, a space of mud; it is a space that can flow like living water; it is a space that can be fixed, immobilized like stone or crystal. [...] We do not live inside a void that would be coated with different shimmerings; we live inside a set of relations that define emplacements irreducible to one another and absolutely not superimposable (Foucault, 2013, n.p.).

Space becomes a personal and subjective prerogative which assumes a dimension that is not only plural, as regards the types of possible spaces and their connections with the real world, but also encompasses relations with social and individual institutions, while manifesting itself as multiple in meaning. Spatial experience is bound to the formation of individuality itself. Heterotopias are forms of layers of disjunct places that overlap. This phenomenon was further explored by Deleuze and Guattari in texts such as A Thousand Plateaus and in the formulation of their concept of inclusive disjunction. These authors explain coexistence in the process of disjunction by imposing the notion of layers, when they speak of two segmentarities, the supple and the rigid:

If they are distinct, it is because they do not have the same terms, nor the same correlations, nor the same nature, nor the same type of multiplicity. But if they are inseparable, it is because they coexist, they pass into one another, according to different figures [...] the one always presupposing the other (Deleuze and Guattari, 1996, p. 90).

And so we see part of Foucault's conceptualization being elaborated by Deleuze and Guattari. The heterogeneous multiplicity of possible spaces in heterotopian places engenders the correlation between conjunct, linear possibilities and disjunct, multiple ones. The existence of multiple realities, or of multiplicity as the basic element of the creation of realities, becomes a theme of elaboration in Deleuze and Guattari. On this elaboration, Haesbaert and Bruce enlighten us:

The philosophy of Deleuze and Guattari is called by the authors themselves a 'theory of multiplicities'. These multiplicities are reality itself, thus overcoming the dichotomies between conscious and unconscious, nature and history, body and soul. Although the authors recognize that subjectivations, totalizations and unifications are 'processes that are produced and appear in multiplicities', these 'do not presuppose any unity, do not enter into any totality, nor do they refer back to a subject'. Their 'model of realization', therefore, is not the hierarchy of the root-tree but the plurality of the rhizome (Haesbaert and Bruce, 2002, p. 09).

It would thus be the concept of the rhizome that balances and explains the functioning of multiplicities. The rhizome would be the principle of location of the multiple, in opposition to the linear (or arborescent) Cartesian system. For Deleuze and Guattari, "a rhizome has no beginning or end; it is always in the middle, between things, interbeing, intermezzo. The tree is filiation, but the rhizome is alliance, uniquely alliance" (Deleuze and Guattari, 1995, p. 39). Being "the cartography, the map of multiplicities" (Haesbaert and Bruce, 2002, p. 10), the rhizome would not negate the arborescent model, in which development proceeds along multiple mutually exclusive paths starting from a central point, but combines with it, forming a web of paths. "The rhizome is a proposal for constructing thought in which concepts are not hierarchized and do not depart from a central point, from a centre of power or of reference to which the other concepts must refer back" (Haesbaert and Bruce, 2002, p. 10).

In opposition to linear, arborescent logic, Deleuze and Guattari describe the rhizome as manifesting itself through principles, so that it would be possible to identify rhizomatic realities whenever such principles could be observed. Principles such as those of connection and heterogeneity, of multiplicity, of asignifying rupture, of cartography and of decalcomania (cf. Deleuze and Guattari, 1995, p. 17-24) would almost always manifest themselves concomitantly to characterize the rhizome.

The rhizome is the image of the change of paradigm. Its potential extends to all reality, whether factual or cognitive. It is perceptible in music from its beginnings, but especially demonstrable in the music of the twentieth century. The authors refer to the rhizome in music when they write:

When Glenn Gould speeds up the performance of a passage, he is not acting merely as a virtuoso; he is transforming the musical points into lines, making the whole proliferate. The number has ceased to be a universal concept measuring elements according to their place in some dimension, to become itself a variable multiplicity according to the dimensions considered (the primacy of the domain over a
complex of numbers attached to that domain) (Deleuze and Guattari, 1995, p. 19).

The variable multiplicity, or the multiple variety of meanings, potentially present in musical lines and points, builds a complex network of possible paths in performance and in interpretative perception. This rhizome of possibilities is related to the presence of a general meaning of a work, which makes all interpretations be understood as a single object of art, even though the differences between performers and performances, as well as the possibilities of listening, are perceived. This broad, rhizomatic meaning will be present, in singularized form, in all the possible dimensions of a focused perception.

We could speak in terms of a rhizome of pitches, of rhythm, of timbre and, in our case, of spatialization. The rhizome is in itself a multiplicity of paths, being in this case the constellation formed by the rhizomatic multiplicity of possible rhizomes in each dimension. Deleuze and Guattari go on building the concept of the rhizome by relating it to that of multiplicities; for the authors:

Multiplicities are reality itself, and do not presuppose any unity, do not enter into any totality, nor do they refer back to a subject. Subjectivations, totalizations and unifications are, on the contrary, processes that are produced and appear in multiplicities. The characteristic principles of multiplicities concern their elements, which are singularities; their relations, which are becomings; their events, which are haecceities (that is, individuations without subject); their space-times, which are free spaces and times; their model of realization, which is the rhizome (in opposition to the model of the tree); their plane of composition, which constitutes plateaus (zones of continuous intensity); and the vectors that traverse them, which constitute territories and degrees of deterritorialization (Deleuze and Guattari, 1995, p. 11).

Multiplicities are the multiplicities of possible realities within known reality, the heterogeneity imposed upon the perceptible world, capable of relating real, utopian and heterotopian places. Multiplicities are realized through rhizomes, by allowing their planes of composition or signification (plateaus) to be traversed by vectors that are constituted of territories. "The territory is the first assemblage, the first thing that makes assemblage; the assemblage is territorial before anything else" (Deleuze and Guattari, 1995, p. 16).

Each path of a rhizome can be created by a network of territories and processes of territorialization. But it is important to understand that the concept of territory "is understood here in a very broad sense, which goes beyond the use made of it by ethology and ethnology [...] Territory is synonymous with appropriation, with subjectivation closed in upon itself" (Guattari and Rolnik, 1986, p. 323).

Territories are the sets of elements that organize themselves around themselves. Territories are necessary for the organization of a worldview and, therefore, of spaces and places. Thus, territories organize themselves within real, utopian or heterotopian spaces, grounding the relations between these spaces. But being in a territory, or rather, being territorialized, is not an eternal phenomenon but a transitory one, depending on the circumstances to be managed. A territory can then be deterritorialized, when its specific assemblage is left behind.

For Deleuze and Guattari, the dynamics of the territory imposes its deterritorialization, this process being inseparable from reterritorialization. Thus, deterritorialization will not occur without reterritorialization in another territory. Put simply, we can say that deterritorialization is the movement by which the territory is abandoned, 'it is the operation of the line of flight', and reterritorialization is the movement of construction of the territory (Deleuze and Guattari, 1997, p. 224).

Territorialization, like deterritorialization (and the concomitant reterritorialization), requires a force that develops the movement towards a new assemblage. This force or movement is what Deleuze and Guattari call the ritornello (refrain). For the authors, "in a general sense, we call a ritornello any aggregate of matters of expression that draws a territory and develops into territorial motifs and territorial landscapes (there are motor, gestural, optical ritornellos, etc.)" (Deleuze and Guattari, 1997, p. 115). Thus, there is no territory without a ritornello.

The ritornello, however, remains a force of dynamism, imposing movement on territories. The ritornello imposes a constant movement, a "line of flight" from one territory towards another, towards a new vector. Ritornellos can moreover be classified, though always in comparison with other ritornellos. This comparison leads back to a classification of the ritornello into several possible categories; in this sense, "a ritornello is always in relation with other ritornellos. [...] There is always a bad and a good use of the ritornello, a small and a great ritornello, a malevolent and a benevolent ritornello, a territorial and a cosmic ritornello" (Costa, 2006, p. 3). A ritornello can thus be, as we have seen, small and territorial, or malevolent and great, always in comparison with other ritornellos. When Deleuze and Guattari describe the ritornello and the territory, they often do so in musical terms. In their words:

We should say, rather, that territorial motifs form rhythmic faces or characters, and that territorial counterpoints form melodic landscapes. There is a rhythmic character when we no longer find ourselves in the simple situation of a rhythm that would be associated with a character, a subject or an impulse: now it is the rhythm itself that is the whole character, and which, as such, may remain constant, but may also grow or diminish, by the addition or subtraction of sounds, of ever increasing and decreasing durations, by amplification or elimination that make it die and be resurrected, appear and disappear. Likewise, the melodic landscape is no longer a melody associated with a landscape; it is the melody itself that makes the sound landscape, taking up in counterpoint all its relations with a virtual landscape (Deleuze and Guattari, 1997, p. 110).

Characters and landscapes, seen as sound territories, are figures with immense signifying and spatial potential. Spatiality is intimately related to the being-present, the being-in-the-world, of these characters or landscapes. However, for their relevance to be perceived, their ritornellos must establish their importance, their quality, their relations with the other sound dimensions of the landscape or of the characters.
The process of spatialization is carried out through the ritornello, which is the driving force behind the change in the characterization of the space (or territory), and through territorialization or assemblage, which is the process that makes the new characterization real. Thus, the ritornello imposes a movement away from the axis of one characteristic of the space, and territorialization allows anchorage in a new one.

In music, the ritornello manifests itself in various situations, whether auditory, performative or related to memory. In this dynamic, the composer apparently has the possibility of proposing ritornellos directly, by means of resources related to musical and performative construction. Territorialization, however, is a process that occurs through the listener's interpretation and therefore escapes the composer's control. It is the listener who, according to his or her subjectivity, carries out the assemblage (and with it the territorialization).

We stress, therefore, that spatialization can also occur outside the composer's control, since ritornellos can form without being induced by the composer. For this, it would suffice for the listener to allow himself to territorialize new interpretative spaces. In this perspective, the act of spatialization proposed by the composer aims at the construction of a musical discourse; the composer, however, controls only part of the process and therefore needs to understand the possible territorializations in order to induce in the listener the spatial perspectives expected in the work.

Perceived space is part of the act of interpretation carried out by the listener in the appreciation of the work. This process is enveloped among the various elements present in the music, or extra-musical elements. The act of reconstructing the composition, accepting or not the composer's spatial or discursive proposals, is presented by Ferraz when he describes this relation with the sound object:

The composition is only realized with listening (be it live, inner listening, analytical reading of the score, etc.), and this relation concerns not only the sound object and the observer but also the environment that the components of this system allow: the limits and specificities of the receiving apparatus (the ear, in this case), the reproduction conditions of the physical space (indices of resonance and reverberation), prior knowledge regarding the composition, knowledge regarding the composer, personal affections for specific sounds, etc. Apparently we are faced with an already quite common solution. But what we mean here is that the listener does not close a communicative link with the sound object. The listener literally constructs what he hears; it is he who composes. The sound object merely triggers; it does not determine this cognitive process (Ferraz, 1998, p. 11).

In this perspective, spatialization can be understood from the point of view of the composer or from the point of view of the listener. Thus, the perspective of the space constructed by the composer, in which a favourable environment is built for the understanding of the proposed discourse, concerns the collaboration among the various parameters and dimensions of a composition, be they rhythmic, timbral, temporal or other aspects.

The concepts presented so far build a map in which we perceive the relation between the individual and space. We can understand that the places experienced and perceived allow the creation of a background that makes the understanding of space possible. Space can be perceived as real, utopian or heterotopian, or again, according to its connections with other spaces or internally, as arborescent or rhizomatic, the heterotopian space being always rhizomatic. However, the space, once territorialized, does not remain so indefinitely, but can deterritorialize itself, driven by the ritornello.

We shall take these concepts in abstraction from their original context and generalize the use of these terms in the understanding of perceived (territorialized) space. We believe this widening of context to be possible, since the authors present them within a context of grounding for the individuality and subjectivity of being-in-the-world.

The mechanism of perception of space, as we have seen, is related to the experience of understanding the world. This can be exemplified by the auditory mechanism of locating the sound sources that surround us. For if, on the one hand, our auditory system possesses an acuity related to the timbre of frontal sound sources, this same acuity is not present in the perception of sounds coming from behind. This mechanism demonstrates the selectivity in which we are embedded when we recognize the location of sound sources in space. In other words, the perception of space has become something natural and implicit in everyday experience.

The broadest process within spatial dynamics concerns spatialization, which we shall define as the process of making space evident, or the process of change of spatial characteristics. The process of spatialization is carried out through the ritornello. Thus, the ritornello imposes a movement away from the axis of the assemblage, and territorialization allows anchorage in a new one.

The ritornello in music can manifest itself through various resources, whether auditory, performative or related to memory. In this dynamic, the composer apparently has the possibility of proposing ritornellos directly, by means of resources related to musical and performative construction. Territorialization, however, is a process that occurs through the listener's interpretation and therefore escapes the composer's direct control.

We stress that spatialization can also occur outside the composer's control, since ritornellos can form without being induced by the composer. For this it would suffice for the listener to allow himself to territorialize new spaces. Thus, the perspective of the space constructed by the composer, in which a favourable environment is built for the understanding of the proposed discourse, would provide the listener with a path towards his spatial perception.

Space as a discursive element in electroacoustic music

The process of spatialization in acousmatic electroacoustic music, although it imposes the perception of sound movement, takes place, for the most part, without a
visual referent compatible with the movements perceived. Even in so-called mixed electroacoustic music, the sound movements will correspond to only a small part of the visual referent. This fact imposes a greater prerogative on the act of listening, since it implies a greater use of memory and imagination. This phenomenon must be taken into account in the construction of spatial ritornellos.

We understand that the process of comprehension and creation of spatial discourse in music is subordinated, in a proposal in which space is thought of as a structural part of the work, to the musical idea proposed or perceived. Thus, in the terms presented, the idea manifests itself in the world through a territorialization of this dimension. In this perspective, everything that is brought into existence, including the understanding of the spatial dimension, manifests itself through the ritornello-territorialization process.

In this perspective we can single out and conceptualize three types of sound space that can be perceived. The first, real geographic space, in which the sound makes evident the real location of a sound source, or its mobility in space. The second, induced (or virtual) geographic space, in which the sound makes evident a location where there is in fact no sound production, that is, the sound seems to be produced outside the limits of the performance. The third would be referenced space, in which the sound makes evident a locality referenced by the qualities of the sound itself, as in the soundscape. The first two are more closely tied to the acoustic or physiological characteristics of sound perception, and the third is bound to constructed spatial memory.

Considering the nature of the ritornellos that would induce each of the types of space above, we can infer that, in general, the tendency to territorialize real geographic space would be the most likely, followed by induced geographic space and referenced space. Thus, the act of territorializing referenced space would tend to be observed in the absence of stimuli towards the territorialization of the other types.

In this context the composer, in using the available techniques to create spatial perspectives, proposes means of inducing sound mobility in space. However, when the use of spatialization techniques creates multi-spatial perspectives, it is possible to infer that real geographic space would have greater adherence as far as its comprehension is concerned, that is, the ritornello would build a territorialization more easily in the case of this space, and the listener would tend to perceive this space in the foreground, to the detriment of the others. Thus, the construction of a spatial demand from the composer's point of view, when interpretative ambiguities on the part of the listener are not desired, must take into account the type of space created and the resources through which these spaces will be proposed by the choices of spatial manipulation.

A second inference can be drawn from the fact that, in most cases, electroacoustic music has been composed by means of sound sources fixed in space, where the sound creates the sensation of spatial mobility. In this reality the sound source does not move, and there is therefore a certain alternation between moments in which the sound is perceived in its real geographic position and other moments in which the listener's sound perception is directed towards a virtual sound locality (in the case of locating the sound between two real sound sources).

Thus, real geographic space and virtual space are in constant relation, and sound mobility in this performance perspective will depend directly on the form of transition between the perception of real and virtual space.

Conclusion

The concepts drawn from the philosophers discussed, if on the one hand they point us towards various possible approaches to space and, therefore, to the known techniques of sound mobility, on the other hand allow us to observe how spatial perception is one of the challenges yet to be understood in the context of electroacoustic music.

The inferences made in this text are possible lines of questioning that may give continuity to the understanding of the perception and creation of spatial perspective. These paths should consider the specific characteristics of the choices of sound spatialization techniques, the differences between the characteristics of the spaces created, and the limitations and tendencies in the perception of these proposed spaces.

Finally, it would be important to examine these concepts within the scope of spatialization in strictly instrumental music and in mixed electroacoustic music. Such an examination could, we hope, feed new questions and new approaches in the field of acousmatic electroacoustic music.

Bibliography

Costa, L. (2006) "O ritornelo em Deleuze e Guattari e as três éticas possíveis". Paper presented at the II Seminário Nacional de Filosofia e Educação, Santa Maria, 27-29 September. Accessed 20.03.2015. Available at: http://coral.ufsm.br/gpforma/2senafe/PDF/005e2.pdf

Deleuze, G. & Guattari, F. (1995) Mil Platôs: capitalismo e esquizofrenia. Vol. 1. Rio de Janeiro: Ed. 34.

Deleuze, G. & Guattari, F. (1996) Mil Platôs: capitalismo e esquizofrenia. Vol. 3. Rio de Janeiro: Ed. 34.

Deleuze, G. & Guattari, F. (1997) Mil Platôs: capitalismo e esquizofrenia. Vol. 5. Rio de Janeiro: Ed. 34.

Duarte, André (2006) "Heidegger e Foucault, críticos da modernidade: humanismo, técnica e biopolítica". Trans/Form/Ação, 29(2), p. 95-114. Accessed 20.03.2015. Available at: http://www.scielo.br/scielo.php?script=sci_arttext&pid=s0101-31732006000200008&ing=en&tlng=pt.10.15907s0101-31732006000200008.

Henriksen, Frank Ekeberg (2002) "Space in Electroacoustic Music: Composition, Performance and Perception of Musical Space". Doctoral thesis, City University, London.

Ferraz, Sílvio (1998) Música e repetição. São Paulo: Educ.

Foucault, Michel (2013) "De espaços outros". Estudos Avançados, 27(79), p. 113-122. Accessed 15.01.2015. Available at:
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-40142013000300008&lng=en&tlng=pt. 10.1590/S0103-40142013000300008.

Guattari, F. & Rolnik, S. (1986) Micropolítica: cartografias do desejo. Petrópolis: Vozes.

Haesbaert, R. & Bruce, G. (2002) "A desterritorialização na obra de Deleuze e Guattari". Revista GEOgraphia, ano IV, n. 7, p. 7-31.

Marandola Jr, E. (2012) "Heidegger e o pensamento fenomenológico em geografia: sobre os modos geográficos de existência". Geografia, v. 37, n. 1, p. 81-94. Accessed 15.01.2015. Available at: https://fenomenologiaegeografia.files.wordpress.com/2012/11/marandola-jr-heidegger-e-o-pensamento-fenomenolc3b3gico-em-geografia-2012.pdf

Penha, Rui (2014) Modelos de espacialização: integração no pensamento composicional. Doctoral thesis. Universidade de Aveiro, Aveiro, Portugal.

Saramago, Ligia (2008) "Sobre A arte e o espaço, de Martin Heidegger". Revista Artefilosofia, n. 5, p. 61-72. Accessed 14.01.2015. Available at: http://www.raf.ifac.ufop.br/pdf/artefilosofia_05/artefilosofia_05_01_dossie_heidegger_06_ligia_saramago.pdf
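As an illustrative aside to the notion of induced (virtual) geographic space discussed above, in which a sound is perceived between two real, fixed loudspeakers, the standard equal-power panning law shows how such a phantom location is produced. This is a generic sketch added for illustration, not part of the original text; the function name is ours, and the law shown is the common constant-power formulation rather than any specific system described by the author.

```python
import math

def equal_power_pan(position):
    """Gains for two fixed loudspeakers creating a phantom
    ('induced' or virtual) source between them.

    position: 0.0 = source at the left loudspeaker,
              1.0 = source at the right loudspeaker,
              values in between place the phantom source between them.
    Returns (gain_left, gain_right)."""
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

# A phantom source midway between the two real sources:
gl, gr = equal_power_pan(0.5)

# Total radiated power stays constant for every position,
# which is what keeps the phantom source perceptually stable:
assert abs(gl * gl + gr * gr - 1.0) < 1e-9
```

Sweeping `position` from 0.0 to 1.0 over time produces exactly the alternation described above: at the endpoints the sound is territorialized in its real geographic position (a single loudspeaker), while intermediate values direct perception towards a virtual sound locality between the two sources.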
4.3 Parametric loudspeakers array technology: a 4th dimension of space in electronic music?

Jaime Reis, INET-MD (FCSH-UNL), Portugal
Abstract

In late December of 1962, a Physics Professor from Brown University, Peter J. Westervelt, submitted a paper called Parametric Acoustic Array (Westervelt, 1963) that considered primary waves interacting within a given volume and calculated the scattered pressure field due to the non-linearities within a small portion of this common volume in the medium (Croft & Norris, 2003, p. 6). Since then, many outputs of this technology have been developed and applied in contexts such as military, tomography, sonar technology, artistic installations and others.

Such technology allows perfect sound directionality and therefore peculiar expressive techniques in electroacoustic music, allowing a very particular musical dimension of space. For such reason, it is treated here as an idiosyncrasy worth discussing on its own terms.

In 2010-2011 I composed the piece “A Anamnese das Constantes Ocultas”, commissioned by Grupo de Música Contemporânea de Lisboa, which used a parametric loudspeakers array developed by engineer Joel Paulo. The same technology was used in the 2015 acousmatic piece “Jeux de l'Espace” for eight loudspeakers and one parametric loudspeakers array.

This paper is organized as follows. A theoretical framework of the parametric loudspeakers array is first introduced, followed by a brief description of the main theoretical aspects of such loudspeakers. Secondly, there is a description of practices that use such technology and their applications. The final section describes how I have used it in my music compositions.

Keywords: Parametric loudspeakers array, Space, Electroacoustic music, Directional sound, Spectrum spatial distribution.

Introduction

The fundamental theoretical principles of the parametric loudspeakers array (PLA) were discovered and explained by Westervelt (1963). Interestingly, this was the same year as the publication of the article by Max Mathews in which the author said there were “no theoretical limits to the performance of the computer as a source of musical sounds” (1963, p. 553), a text later mentioned as very promising by composers who changed the history of computer music, such as John Chowning (Chowning, 2000, p. 1), and which certainly influenced this and other composers.

The relation between Westervelt's discoveries and further developments in parametric loudspeakers array technology was described by Croft and Norris (2003, pp. 6–12), including the technological developments by different scientists in different countries and how the technology moved from theory and experimentation to implementation and application.

It is important to make clear that such terminology isn't fixed and that it is possible to find different definitions for similar projects (commercial, scientific or of other nature), uses, products and implementations of this theoretical background, sometimes even by the same authors and in the same articles. Among them are “parametric loudspeakers” (Croft & Norris, 2003; Shi & Gan, 2010), “parametric speakers” (F. J. Pompei, 2013; “SoundLazer,” n.d.), “parametric acoustic array” (Gan, Yang, & Kamakura, 2012; Westervelt, 1963), “parametric array” (F. J. Pompei, 2013; Shi & Gan, 2010), “parametric audio system” (F. J. Pompei, 2011), “hypersonic sound” (Norris, 2004), “beam of sound” (Westervelt, 1963), “audible sound beams” (F. J. Pompei, 1999), “superdirectional sound beams” (Roads, 2015), “super directional loudspeaker” (Nakashima, Yoshimura, Naka, & Ohya, 2006), “focused audio” (“BrownInnovations,” n.d.), “audio spotlight” (“Holosonics,” n.d.; Yoneyama, Fujimot, Kawamo, & Sasabe, 1983), “phased array sound system” (Milsap, 2003), among others. The term PLA is used here since it seems to unite the main concepts that converge in this technology. Nevertheless, it isn't meant to be presented as an improved terminology over the others. This discussion solely has the purpose of showing that one who might not be familiar with such technology, and wishes to research more about it, will find different terms that originated from particular historical contexts, manufacturers' patents and arbitrary grounds.

Theoretical framework

A parametric loudspeaker is guided by a principle described by Westervelt:

two plane waves of differing frequencies generate, when traveling in the same direction, two new waves, one of which has a frequency equal to the sum of the original two frequencies and the other equal to the difference frequency (1963, p. 535).

However, to trace a proper theoretical framework of the parametric acoustic array in modern applications, Gan et al. offer a clearer description, based on Westervelt's theory:

When two sinusoidal beams are radiated from an intense ultrasound source, a spectral component at the difference frequency is secondarily generated along the beams due to the nonlinear interaction of the two primary waves. At the same time, spectral components such as a sum-frequency component and harmonics are generated. However, only the difference-frequency component can travel an appreciable distance because sound absorption is generally increased with frequency, and amplitudes of higher-frequency components decay greatly compared with the difference frequency. The secondary source column of the difference frequency (secondary beam) is virtually created in the primary beam and is distributed along a narrow beam, similar to an end-fire array reported in antenna theory. Consequently, the directivity of the difference-frequency wave becomes very narrow. This generation model of the difference frequency is referred to as the parametric acoustic array (2012, p. 1211).
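The sum- and difference-frequency generation described by Westervelt and Gan et al. can be checked numerically. The sketch below is only an illustration of the underlying trigonometric identity, not a model of a real PLA driver: it applies a quadratic nonlinearity (a crude stand-in for the air's nonlinear response at high amplitudes) to two superposed hypothetical ultrasonic primaries and inspects the resulting spectrum.

```python
import numpy as np

fs = 192_000                 # sample rate high enough for ultrasonic content
f1, f2 = 40_000, 41_000      # two hypothetical primary (ultrasonic) frequencies
t = np.arange(fs) / fs       # one second of signal

# Two primary waves traveling together
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Quadratic nonlinearity: p**2 contains DC, 2*f1, 2*f2, f2 - f1 and f1 + f2
secondary = p ** 2

spectrum = np.abs(np.fft.rfft(secondary))
freqs = np.fft.rfftfreq(len(secondary), 1 / fs)

def level(freq_hz):
    """Spectral magnitude at the bin closest to freq_hz."""
    return spectrum[np.argmin(np.abs(freqs - freq_hz))]

# The only strong component in the audible range is the difference frequency
audible = (freqs > 20) & (freqs < 20_000)   # skip DC, stop at 20 kHz
loudest_audible = freqs[audible][np.argmax(spectrum[audible])]
print(loudest_audible)   # 1000.0, i.e. f2 - f1
```

With sufficient amplitude the 1 kHz difference component becomes audible along the beam, while the sum frequency (81 kHz) and the harmonics remain ultrasonic and, as Gan et al. note, are absorbed much more quickly with distance.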
The result is that the sound projection from a PLA becomes very narrow, much more than with the use of a regular moving-coil loudspeaker (figure 1).

The dispersion pattern of a loudspeaker may vary broadly, from omnidirectional to superdirectional, although it is rare for a speaker to have a truly constant directionality across its entire passband, in part because most are at least somewhat directional at mid and high frequencies and, because of the long wavelengths involved, almost unavoidably omnidirectional at low frequencies (Rossing, 2014, p. 765). Loudspeaker systems exhibit their own radiation patterns, characterized by the technical specification called dispersion pattern. The dispersion pattern of a front-projecting loudspeaker indicates the width and height of the region in which the loudspeaker maintains a linear frequency response (Roads, 1996, p. 469). Most conventional loudspeakers are broadly directional and one can say they typically project sound forward through a horizontal angle spanning 80 to 90 degrees (Roads, 2015, p. 272).

Tests on PLA systems have demonstrated angles of circa 15 to 30 degrees at 1 kHz, depending on the model used (Pokorny & Graf, 2014). Loudspeakers that act as superdirectional sound beams behave like an audio spotlight, focusing sound energy on a narrow spot, typically about 15 degrees in width, making it possible for a person to hear a sound while someone nearby, but outside the beam, does not (Roads, 2015, p. 272). Such systems are quite peculiar, even when compared to the so-called narrow coverage loudspeakers that feature dispersion in the 50 degree range, such as some Meyer Sound speakers (“UPQ-2P : Narrow Coverage Loudspeaker,” 2008), and potentially have new applications in many diverse fields.

Figure 1 – Comparison between hypothetical dispersion patterns for a conventional loudspeaker and for a PLA

Parametric Loudspeakers applications

The proposed applications for such technology vary greatly among the manufacturers of PLA and scientific and artist based proposals, creating a rich interdependence between all fields and hopefully inspiring all involved actors in the creation of new products and synergies. Proposals range from applications in museums or art galleries; private messaging in vending and dispensing machines, exhibition booths, billboards and multilanguage teleconferencing (Shi & Gan, 2010, p. 20); acoustic metrology in non destructive testing used on ancient paintings (De Simone, Di Marcoberardino, Calicchia, & Marchal, 2012); estimation of acoustical parameters (Paulo, 2012); mobile communication environments creating possibilities for stereo phone calls with a high level of privacy (Nakashima et al., 2006); public safety, security / alarm systems and public speaking (“SoundLazer,” n.d.); digital signage, hospitals and libraries (“Holosonics,” n.d.); control rooms and tradeshows (“BrownInnovations,” n.d.); automotive applications, slot machines and mobile applications (“Ultrasonic-audio,” n.d.); to underwater acoustics, measurement of environmental parameters, sub-bottom and seismic profiling and other naval appliances (Akar, 2007); and many others, some of them to be further discussed.

While many of the applications use self-built devices, there are commercial products that sell PLA, namely ® Soundlazer (“SoundLazer,” n.d.), ® Holosonics (“Holosonics,” n.d.), ® Brown Innovations (“BrownInnovations,” n.d.), ® Acouspade (by Ultrasonic Audio) (“Ultrasonic-audio,” n.d.), ® Hypersonic Sound (LRAD corporation) (“Hypersonic Sound,” n.d.), and others.

Defining the application of PLA in artistic fields or within musical practices isn't obvious. In that sense, Blacking makes clear that

“no musical style has ‘its own terms’: its terms are the terms of its society and culture, and of the bodies of the human beings who listen to it, and create and perform it" (2000, p. 25).

In such terms, is Hiroshi Mizoguchi's human-machine interface (named ‘Invisible Messenger’), which integrates real time visual tracking of faces and sound beam forming by a speaker array (Mizoguchi, Tamai, Shinoda, Kagami, & Nagashima, 2004), an art work? For the purpose of the present paper, Mizoguchi's work will not be considered as an art form, since the authors don't consider themselves as doing art. As Bourdieu mentions, one may view the ‘eye’ as being a product of history reproduced by education, this being true for the mode of

“artistic perception now accepted as legitimate, that is, the aesthetic disposition, the capacity to consider in and for themselves, as form rather than function, not only the work designated for such apprehension, i.e. legitimate works of art, but everything in the world, including cultural objects which are not yet consecrated” (Bourdieu, 1998, p. 433).

For the purpose of the present paper, the perspective of the creators will be the base to integrate the use of PLA technology as an application in their artistic expression or as another form of expressive behavior. The importance of clearing up such categorization is not to imply any form of hierarchy, but merely to formulate a context for the presentation, ordering and grouping of the presented and discussed works.

Other forms of application are explicitly affirmed as art practices, such as the case of Yoichi Ochiai's experiments with ultrasonic levitation (Ochiai, Hoshi, & Rekimoto, 2014), who presents himself as a media artist (“Yoichi Ochiai Youtube Channel,” n.d.). The use of PLA may also be seen in installations such as Misawa's “Reverence in
Ravine” (Misawa, 2011), or “Guilt”, by Gary Hill (Hill, 2006; Morgan, 2007; Somers-Davis, n.d.), and reported in sound art and music by artists such as Miha Ciglar, head of IRZU – Institute for Sonic Arts Research, Ljubljana, Slovenia, and creator of several devices and works using PLA (Ciglar, 2010a, 2010b). Other artists that have been using such technology relate to DXARTS - Seattle Arts and Technology, such as Michael McCrea, with Acoustic Scan (McCrea & Rice, 2014), and Juan Pampin, who presented in 2007, with other colleagues, works that used PLA technology as ultrasonic waveguides, as an acoustic mirror and as wearable sound (Pampin, Kollin, & Kang, 2007). Furthermore, Pampin has used PLA technology in musical pieces such as the 2014 “Respiración Artificial”, for bandoneon, string quartet, and electronics using PLA, as he mentions in an interview:

“The piece is about breathing cycles. The bandoneon has a big bellow and is able to hold a note for a very long time. The timing of the inhale and exhale of the instrument was used to define the time structure of the piece. The beginning of my piece is all in the very upper register (above 1000 Hertz, around the C above treble clef). When you hear up there, you hear in a different way. Your ear is not able to resolve what is happening with pitch, the notes tend to shimmer, it builds sensation. This piece is all sensorial it’s not theoretical. Its more neurological if you want. In terms of the electronics, I am using a 3D audio system and ultrasonic speakers that we developed in DXARTS. These speakers can produce highly localized beams of sound – akin to spotlights – which can move around the audience and bounce off the architecture of the room” (Myklebust, Karpan, & Pampin, 2014)

Despite the motivations to interfere with space in peculiar ways, which can be read in many of the mentioned articles and websites, the use of PLA in artistic practices hasn't been studied as something particular, possibly because: it's too recent; such creations operate at individual levels or, even when within institutions, they appear to occur locally; or simply because there may be no particular feature deemed worthy of distinction by musicologists, art historians, anthropologists or other scientists in the field of social sciences. There are many other applications of PLA being developed at this very moment. The ones presented here represent only short research on the topic and are not expected to cover the full extent of the use of such technology.

Independently of using PLA technology or not, the idea of directing sound in precise ways or, one could say, the idea of working with space as a parameter in sound creation, has been a very important concept in electroacoustic music. Curtis Roads has referred to superdirectional sound beams and their developments, focusing on audio technology and on electroacoustic music (Roads, 2004, 2015). Among other technologies, the author emphasizes the specificity of PLA technology, explaining the involved principles of acoustic heterodyning, first observed by Helmholtz:

When two sound sources are positioned relatively closely together and are of sufficiently high amplitude, two new tones appear: one lower than either of the two original ones and a second one that is higher than the original two. The two new combination tones correspond to the sum and the difference of the two original ones. For example, if one were to emit two ultrasonic frequencies, 90 kHz and 91 kHz, into the air with sufficient energy, one would produce the sum (181 kHz) and the difference (1 kHz), the latter of which is in the range of human hearing. Helmholtz argued that the phenomenon had to result from a non linearity of air molecules, which begin to behave nonlinearly (to heterodyne or intermodulate) at high amplitudes (Roads, 2015, pp. 273–274).

The author continues, detailing that the main difference between regular loudspeakers and loudspeakers that use acoustical heterodyning (PLA) is that the latter project energy in a collimated sound beam. He makes an analogy to the beam of light from a flashlight and gives the example that one can direct the ultrasonic emitter toward a wall so that a listener in the reflected beam perceives the sound as coming from that spot. He mentions, however, that “at the time of this writing, there has been little experimentation with such loudspeakers in the context of electronic music” (Roads, 2015, p. 274).

Parametric Loudspeakers in my music

In 2010-2011, I composed the piece "A Anamnese das Constantes Ocultas", commissioned by and dedicated to Grupo de Música Contemporânea de Lisboa (GMCL). The piece was conceived for nine players - soprano voice, flute, clarinet, percussion, harp, piano, violin, viola, violoncello - with conductor and electronics: six regular loudspeakers, one directional PLA loudspeaker and amplified hi-hat, using a click track for the conductor (figure 2).

Figure 2 – Schema for the disposition of loudspeakers and instruments for the performance of “A Anamnese das Constantes Ocultas”.

The players are to be set on stage and the electronics diffused through the six conventional loudspeakers, distributed around the audience. The PLA requires an operator to play it. The score has specific instructions demonstrating at each moment where to point (what kind of surfaces to point at, or “swipe” the complete audience
or just parts of the audience). One extra musician is required to operate the electronics, in order to control the amplitude of the fixed media electronics (both for the regular loudspeakers and the PLA), the hi-hat amplification and the players' amplification (when necessary).

The experimentation and development of the piece were only possible through the dedication of GMCL and of engineer Joel Paulo, who developed a parametric loudspeakers array for this piece. At the beginning of the composition I had only heard about such technology, but had never tested it.

Figure 3 – GMCL playing “A Anamnese das Constantes Ocultas”; Salão Nobre of Escola de Música do Conservatório Nacional (Lisbon); 26th May 2012; Musicians: Susana Teixeira (voice), Cândido Fernandes (piano), João Pereira Coutinho (flute), José Machado (violin), Luís Gomes (clarinet), Ricardo Mateus (viola), Fátima Pinto (percussion), Jorge Sá Machado (cello), Ana Castanhito (harp); conductor: Pedro Neves. Photo: Cristina Costa.

Figure 4 – Rehearsals in the same concert. PLA operator: Joana Guerra; electronics: Jaime Reis.

In this piece, the electronics have three fundamental grounds:

1) generate large architectural spaces through the hi-hat amplification, using very close miking (less than 1 cm, using a condenser microphone) of the hi-hat, combined with timbre transformations and spatialization of the signal through the six regular loudspeakers and the PLA; with such close miking, there are significant changes in the hi-hat timbre, in order to create the idea of playing a huge non pitched gong that should sound as if it were in a big pyramid; different areas of the spectra are distributed in space using both the regular loudspeakers (generally using low and mid frequencies, whose range usually changes gradually) and the PLA (dedicated to higher frequencies and distributed in the room onto reflective surfaces such as walls, ceiling and floor); resonators were also applied to such timbres that have common pitches with the instrumental textures;

2) new dimensions in instrumental spatialization, using the PLA as an extension of instrumental melodic lines, besides punctual diffusion in the regular loudspeakers; the combined use of a timbre and pitch in acoustic instruments, regular loudspeakers and the PLA generates very peculiar perceptions of location and source identification;

3) a semantic approach to unveil hidden messages that are sung live and in the electronics (mainly in the PLA); the poem to be sung is polysemic and its different meanings are suggested in the prosody, mainly differentiated in this piece by rhythm; the use of the so-called hidden messages appears as a reinforcement of the intended meaning, punctually completely revealed by the singers in spoken text; due to the high degree of directivity, such passages should be pointed directly at the audience, making it possible that only specific parts of the audience will listen to those exact passages; more than the usual problem of a member of the audience staying outside of the sweet spot and not being able to listen to the spatialization in the same way (as often occurs in acousmatic music), here the purpose is to make each performance unique and somehow personalized, in the sense that the PLA operator may direct sound to just one person or a group of people (what I call direct operations); this is different from the reflective operations of the PLA (in both figures 5 and 6), constituted by the moments when the PLA is pointed at a surface and the sounds are diffused in the room.

The use of the PLA was integrated from the beginning in the piece's structure and it isn't possible to play the piece without such technology.

Another piece that requires PLA is the 2015 acousmatic work “Jeux de l'Espace”, for eight regular loudspeakers, equidistant around the audience (such as in a regular octophonic system), and one directional PLA loudspeaker (to be operated during performance either in the center of the octophony or in front of the audience).
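The spectral distribution described for the hi-hat treatment (low and mid frequencies to the regular loudspeakers, highs to the PLA) can be sketched as a simple crossover. The code below is only an illustrative numpy sketch using a brick-wall FFT split; it is not the actual patch used in the pieces, and the 2 kHz crossover point is an arbitrary choice for the example.

```python
import numpy as np

def split_bands(signal, fs, crossover_hz):
    """Split a mono signal into a low/mid feed (regular loudspeakers)
    and a high feed (PLA) with a brick-wall FFT crossover."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    low = np.where(freqs < crossover_hz, spectrum, 0)
    high = spectrum - low                       # complementary high band
    return (np.fft.irfft(low, len(signal)),     # feed for the octophony
            np.fft.irfft(high, len(signal)))    # feed for the PLA

fs = 48_000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 5000 * t)

to_octophony, to_pla = split_bands(sig, fs, crossover_hz=2000)

# The two feeds are complementary: together they reconstruct the input
print(np.allclose(to_octophony + to_pla, sig))   # True
```

In practice smoother crossover filters would be preferred to avoid ringing, and the PLA feed would additionally be modulated onto the ultrasonic carrier by the device's own electronics.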
Figure 5 – Premiere of the piece “Jeux de l'Espace” in Festival Monaco Électroacoustique, 30th May 2015; playing the PLA in the center of Théâtre des Variétés, using Michel Pascal’s (on the left) and Gaël Navard’s Acousmonium du CNRR de Nice. Photo: (“Jaime Reis - Personal Website,” n.d.).

Although the PLA movements also have to be precise for each moment of the composition (requiring adaptations to the performance's architectural space), there were other principles involved in this composition. It was inspired by space as a musical parameter and as the cosmos, integrating sounds derived from processes of sonification from NASA and ESA. The intention is to create an imaginary of a cosmic momentum where space is experienced in a tridimensional octophonic sound system with an additional spatial dimension of sound created by the PLA.

In this piece, the main principles of working space as a musical parameter are:

1) working on the limits of perception of spatial movements, for example, varying the speed of rotations, based on my own perception of what is heard as a rotation or, if too fast, as a texture of points whose movements in space cannot be perceived in their directionality;

2) creating spatial movements that are similar, meaning identifiable as being connected, like identical paths, paths in opposite directions, or symmetric ones; the sounds used change in envelope, timbre, rhythm and pitch in order to make such paths more or less identifiable, as in a gradual scale of levels of identification that makes such paths clearer in some situations than in others;

3) composing moments of hybrid spatialization, using the octophonic system and the PLA in indistinguishable ways where the fusion between the PLA sound and the regular loudspeakers' sound doesn't allow a precise perception of the sound source; this is usually achieved by using reflective operations of the PLA simultaneously with the use of the octophonic system as an extension of the PLA (or the PLA as an extension of the octophony), connecting both in timbre and gestures;

4) composing moments of independent spatialization of both systems, such as PLA solos, which can be arranged in different ways with different degrees of elucidating the listener about the ongoing spatial processes: playing the PLA as a soloist (as if it were an instrument playing with an orchestra) with direct operations while operating the octophony in a way more detached from the PLA; or using PLA solos (without octophony) with mainly reflective operations and punctually direct operations.

Figure 6 – Performance of the piece “Jeux de l'Espace” in Santa Cruz airfield (Aeroclube de Torres Vedras); 25th June 2015; schema exemplifying reflective operations, in this case, pointing the PLA to the floor. Photo: (“Festival DME - Dias de Música Electroacústica Website,” n.d.)

Other elements about the construction of sounds, form and other compositional aspects could be discussed, but will be left for further discussions in the light of new research in this field.

Conclusions

By its name, the concept of a “4th dimension” could be expressed in the sound system 4DSound (Connell, To, & Oomen, n.d.; Hayes, 2012). The creators of this system decided to refer to the idea of a fourth dimensional sound not by using superdirectional sound beams, but by using omnidirectional loudspeakers, with experiments in different fields, one of the most significant being from one of its designers, Paul Oomen, in his opera “Nikola” (Oomen et al., 2013).

However, the title of this presentation wasn't taken from 4DSound, but from a reflection based on my experience with PLA technology. Answering whether the concept of a fourth dimension properly applies in (electronic) music isn't simple. In modern physics, space and time are unified in a four-dimensional Minkowski continuum called spacetime, whose metric treats the time dimension differently from the three spatial dimensions. Since the fourth dimension belongs to the spacetime continuum, and sound waves exist within the three-dimensional material space that such continuum contains, one could debate whether such a dimension could exist in sound at all, by questioning where and when it could be found.

Considering this, one question remains: is PLA a fourth dimension of space in electronic music? I would have to answer no, because I don't think the concept of a fourth dimension is applied to sound simply by using PLA technology. However, I do believe that such use indeed implies a new dimension in space and in our perception of it, making it a new parameter to consider while composing or working with sound. And, if not, one could ask why even consider such a concept as a main question. The answer to that is merely empirical: in the last five years in which I have worked with PLA technology and presented it
in my tours in Europe, America and Asia, this question would very often come from people in the audiences of concerts and conferences: is it like a fourth dimension of sound? So, it seemed a good question to reflect on.

The use of PLA is expanding in many fields. The novelty doesn't appear to be in the technology itself (since it has been around for decades), but in the way it is being used. The hows and whys for each creator or group of creators are yet to be intensively developed and studied.

Bibliography

Akar, A. O. (2007). Characteristics and Use of a Nonlinear End-Fired Array for Acoustics in Air. Naval Postgraduate School.

Blacking, J. (2000). How Musical is Man? (6th ed.). USA: University of Washington Press.

Bourdieu, P. (1998). Distinction & The Aristocracy of Culture. In J. Storey (Ed.), Cultural Theory and Popular Culture: A Reader (pp. 431–441). Athens: The University of Georgia Press.

BrownInnovations. (n.d.). Retrieved July 15, 2015, from http://www.browninnovations.com

Chowning, J. M. (2000). Digital sound synthesis, acoustics and perception: A rich intersection. In COST G-6 Conference on Digital Audio Effects (pp. 1–6). Verona.

Ciglar, M. (2010a). An ultrasound based instrument generating audible and tactile sound. In Conference on New Interfaces for Musical Expression (NIME) (pp. 19–22). Sydney.

Ciglar, M. (2010b). Tactile feedback based on acoustic pressure waves. In ICMC - International Computer Music Conference. New York.

Connell, J., To, F., & Oomen, P. (n.d.). 4DSOUND. Retrieved July 15, 2015, from http://4dsound.net

Croft, J. J., & Norris, J. O. (2003). Theory, History, and the Advancement of Parametric Loudspeakers: A Technology Overview (Rev. E). Hypersonic Sound - American Technology Corporation.

De Simone, S., Di Marcoberardino, L., Calicchia, P., & Marchal, J. (2012). Characterization of a parametric loudspeaker and its application in NDT. In S. F. d’Acoustique (Ed.), Acoustics 2012. Nantes, France. Retrieved from https://hal.archives-ouvertes.fr/hal-00810908

Festival DME - Dias de Música Electroacústica Website. (n.d.). Retrieved July 15, 2015, from http://www.festival-dme.org/

Gan, W.-S., Yang, J., & Kamakura, T. (2012). A review of parametric acoustic array in air. Applied Acoustics, 73(12), 1211–1219. doi:10.1016/j.apacoust.2012.04.001

Hayes, T. (2012, December 6). How The Fourth Dimension Of Sound Is Being Used For Live Concerts. FastCoLabs. Retrieved from http://www.fastcolabs.com/3023116/how-the-fourth-dimension-of-sound-is-being-used-for-live-concerts

Hill, G. (2006). Guilt - media installation. Retrieved July 15, 2015, from http://garyhill.com/left/work/media-installation

Holosonics. (n.d.). Retrieved July 15, 2015, from http://www.holosonics.com/

Hypersonic Sound. (n.d.). Retrieved July 15, 2015, from www.atcsd.com

Jaime Reis - Personal Website. (n.d.). Retrieved July 15, 2015, from http://www.jaimereis.pt

Mathews, M. V. (1963). The Digital Computer as a Musical Instrument. Science, 142(3592), 553–557. doi:10.1126/science.142.3592.553

McCrea, M., & Rice, T. (2014). Acoustic Scan. Retrieved from https://dxarts.washington.edu/creative-work/acoustic-scan

Milsap, J. (2003). Phased array sound system. Google Patents. Retrieved from http://www.google.com/patents/US20030185404

Misawa, D. (2011). installation: “Reverence in Ravine.” Retrieved from http://www.misawadaichi.net/?page_id=1393

Mizoguchi, H., Tamai, Y., Shinoda, K., Kagami, S., & Nagashima, K. (2004). Visually steerable sound beam forming system based on face tracking and speaker array. In IPCR - Conference on Pattern Recognition.

Morgan, R. C. (2007). Gary Hill. Retrieved July 15, 2015, from http://www.brooklynrail.org/2007/03/artseen/gary-hill

Myklebust, S., Karpan, R., & Pampin, J. (2014). Musical Experimentation Happens on Campus with the JACK. Retrieved July 15, 2015, from http://uwworldseriescommunityconnections.blogspot.pt/2014/03/musical-experimentation-happens-on.html

Nakashima, Y., Yoshimura, T., Naka, N., & Ohya, T. (2006). Prototype of Mobile Super Directional Loudspeaker. NTT DoCoMo Technical Journal, 8(1), 25–32.

Norris, W. (2004). Hypersonic sound and other inventions. USA: TEDTalk. Retrieved from http://www.ted.com/talks/woody_norris_invents_amazing_things

Ochiai, Y., Hoshi, T., & Rekimoto, J. (2014). Three-dimensional Mid-air Acoustic Manipulation by Ultrasonic Phased Arrays. PLoS ONE, 9(5). doi:10.1371/journal.pone.0097590

Oomen, P., Minailo, S., Lada, K., Gogh, R. van, Sonostruct~, One/One, … E@RPORT. (2013). Documentary: Nikola Technopera. Retrieved from http://www.4d-opera.com/

Pampin, J., Kollin, J. S., & Kang, E. (2007). Applications of Ultrasonic Sound Beams in Performance and Sound Art. In International Computer Music Conference (ICMC) (pp. 492–495). Copenhagen.

Paulo, J. V. C. P. (2012). New techniques for estimation of acoustical parameters. Universidade Técnica de Lisboa.

Pokorny, F., & Graf, F. (2014). Akustische Vermessung parametrischer Lautsprecherarrays im Kontext der Transauraltechnik. In 40. Jahrestagung der Deutschen Gesellschaft für Akustik. Oldenburg.

Pompei, F. J. (1999). The Use of Airborne Ultrasonics for Generating Audible Sound Beams. J. Audio Eng. Soc, 47(9), 726–731. Retrieved from http://www.aes.org/e-lib/browse.cfm?elib=12092

Pompei, F. J. (2011). Parametric audio system. Google Patents. Retrieved from https://www.google.com/patents/US8027488

Pompei, F. J. (2013). Ultrasonic transducer for parametric array. Google Patents. Retrieved from https://www.google.com/patents/US8369546
Roads, C. (1996). The Computer Music Tutorial. Massachusetts: The MIT Press.
Roads, C. (2004). Microsound. Cambridge, MA: MIT Press.
Roads, C. (2015). Composing Electronic Music: A New
Aesthetic. New York: Oxford University Press.
Rossing, T. (Ed.). (2014). Springer Handbook of Acoustics
(2nd ed.). New York: Springer-Verlag. doi:10.1007/978-1-
4939-0755-7
Shi, C., & Gan, W.-S. (2010). Development of a parametric loudspeaker: A novel directional sound generation technology. IEEE Potentials, November/December, 20–24.
Somers-Davis, L. M. (n.d.). Postmodern Narrative in
Contemporary Installation Art. Retrieved July 15, 2015, from
http://www.nyartsmagazine.com/?p=4179
SoundLazer. (n.d.). Retrieved July 15, 2015, from
http://www.soundlazer.com/
Ultrasonic-audio. (n.d.). Retrieved July 15, 2015, from
http://www.ultrasonic-audio.com/
UPQ-2P : Narrow Coverage Loudspeaker. (2008). Meyer
Sound Laboratories. Retrieved July 15, 2015, from
http://www.meyersound.com/pdf/products/ultraseries/upq-
2p_ds_b.pdf
Westervelt, P. J. (1963). Parametric Acoustic Array. The
Journal of the Acoustical Society of America, 35(4), 535–537.
Yoichi Ochiai Youtube Channel. (n.d.). Retrieved July 15,
2015, from
https://www.youtube.com/user/KurotakaOchiai/about
Yoneyama, M., Fujimot, J., Kawamo, Y., & Sasabe, S. (1983).
The audio spotlight: An application of nonlinear interaction of
sound waves to a new type of loudspeaker design. J Acoust
Soc Am, 73(5), 1532–1536.
4.4 Spatial Hearing and Sound Perception in Musical Composition

Joan Riera Robusté, INET-MD, Portugal

Abstract

This paper explores the possibilities of spatial hearing in relation to sound perception, and presents three acousmatic compositions based on a musical aesthetic that emphasizes this relation. An important characteristic of these compositions is the exclusive use of sine waves and other time-invariant sound signals. Even though these types of sound signals present no variations in time, it is possible to perceive pitch, loudness, and tone color variations when they move in space, due to acoustic processes involved in spatial hearing. To emphasize the perception of such variations, this research proposes to divide a tone into multiple sound units and spread them in space using several loudspeakers arranged around the listener. In addition to the perception of sound attribute variations, it is also possible to create rhythm and texture variations that depend on how the sound units are arranged in space. This strategy helps to overcome to a certain extent the so-called "sound surrogacy" (Smalley, 1997, p.110) implicit in acousmatic music, as it is possible to establish cause-effect relations between sound movement and the perception of sound attribute, rhythm, and texture variations. Another important consequence of using sound fragmentation together with sound spatialization is the possibility to produce diffuse sound fields independently of the levels of reverberation of the room, and to create sound spaces with a certain spatial depth without using artificial sound delay or reverberation.

Keywords: Musical composition, Electronic music, Sound spatialization, Spatial hearing, Sound perception.

Introduction

Even though sound spatialization has an important role in the inner and outer form of the pieces, it is normally used to reflect or emphasize certain aspects that already exist in the composition, or to solve problems that appeared with new musical aesthetics: to achieve musical clarity when using polytempi, polytonalities and the collage of different recorded sounds, or to solve the problem of musical motionlessness in serial or stochastic organization of notes. Moreover, formal elements such as melodies, rhythms, sound objects, gestures, textures, timbres, etc., are usually designed a priori, and sound spatialization and sound movement are used either to provide transparency to the musical discourse or to achieve dynamism by molding sound with a specific shape and directionality.

To increase the importance of sound spatialization in music composition, it is necessary to use it as part of the formal structure of sound itself, and not only as part of the formal structure of music. Even though space is not a sound parameter, it obviously has an influence on how we perceive sound. To what extent space alters the perception of sound attributes such as pitch and loudness, and how these alterations can be used to articulate a musical discourse, is discussed in the following chapters.

Musical space, Doppler effect and sound movement

The following list shows the most common acoustic situations where sound perception is clearly influenced by space:

1. Musical space (architectonic space)
2. Sound movement
3. Sound distance

Musical space is defined as “the sound consequences formed through the acoustic characteristics of the space, which occur basically during the performance of a musical work in an architectonic space” (Nauck, 1997, p.25). Each architectonic space molds sounds and music differently depending on the levels of sound reverberation, absorption, and resonance, which produce a temporal and spatial deformation of the musical shapes and spatial forms proposed by the composer.

Sound movement refers here to the Doppler effect, which defines the pitch, amplitude and timbre variations produced by time-varying filtering, phase shifts, and other distortions that occur when a moving sound signal approaches and leaves us. These alterations, which saturate our everyday life, are difficult to reproduce electronically. Among the few examples of the use of the Doppler effect in music composition are Stockhausen's Rotationsmühle and Chowning's Doppler shift.

Sound distance also has important consequences for sound perception. Within 15 meters of the listener, the consequences of distance for the perception of timbre and loudness are very similar to those achieved by manually altering the amplitude of the sound (dynamics). Beyond this distance, particular sound attribute variations occur that depend on the characteristics of the environment, which determine the delay time and the amount of reflections and reverberation. At the same time, the air paths determine the pressure level of the direct sound and of the early and late reflections. For instance, as distance increases, higher frequencies are attenuated more than lower ones. These variations are not possible to achieve by just varying the amplitude level of the sound itself.

Even though architectonic space, sound movement and distance have a clear influence on sound perception, it is difficult to use them to create a musical discourse, at least in the way intended here. Certainly, to create contrasting and dynamic variations of one sound using the above-mentioned strategies, it would be necessary to move the sound through different architectonic or musical spaces, at high speeds, or across large distances. This may not be impossible, but it would at least be very difficult to achieve.

This research proposes to create constant variations in sound perception on the one hand, and to articulate sound shapes and sound directionalities on the other, through two strategies:

1. the psychophysics involved in spatial hearing, and
2. sound-space density variations.

Spatial hearing and sound localization

Studies on the psychophysics of human sound localization undertaken by researchers such as Batteau, Blauert, Teranishi, Shaw or von Békésy explain why a given sound signal is perceived with pitch, loudness and tone color variations depending on the position of the sound source in relation to the external ear. The pinna and the head are by far the most important organs in transforming the sound attributes of sound events. Several tests show that the pinna functions as a linear filter whose transfer function depends on the direction and distance of the sound source. Blauert states, “The pinna codes spatial attributes of the sound field into temporal and spectral attributes […] by distorting incident sound signals linearly and differently depending on their position” (Blauert, 1997, p.63). Reflection, shadowing, dispersion, diffraction, interference, and resonance are some of the physical phenomena that occur when sound reaches the pinna, and all have important consequences in relation to sound perception.

Figure 1 shows how the external ear functions as a linear system, as the distortions that a sound signal undergoes at the external ear can be measured according to the system's transfer functions.

Figure 1. Interaural transfer functions for several directions of sound incidence in the horizontal plane (Blauert, 1997).

To experience empirically how important these variations are to sound perception, my research includes several experiments. These experiments, which are part of my doctoral research, show that a sine wave radiated successively through eight loudspeakers around the listener is perceived with pitch, loudness, and tone color variations.

The experiments also show that the perception of such variations is so subtle that they tend to be masked when using sounds with a more complex spectrum, or whose frequency, timbre, and amplitude change over time. For this reason, both the experiments and the compositions presented here use sound signals whose energy or power spectra are constant over time, such as sine waves, square waves and triangle waves.

The next step of my research consisted of finding a way to emphasize the variations that occur together with sound localization. This research proposes the following strategies:

1) Division of a tone in small sound units

First, each tone is not continuous but is constituted by the sum of successive small sound units called "notes" (see Figure 2).

Figure 2. Division of a continuous tone into several units called notes.

These units or notes have exactly the same frequency and duration, to avoid the perception of melodic contours and rhythms. The experiments show, concurring with Boulez's statements, that attention focuses first on pitch and duration variations, followed by rhythm and amplitude changes (Boulez, 1963, p.37). When the sound itself undergoes these variations, the sound attribute variations associated with sound localization become much more difficult to discriminate.

2) Organization of the notes in “8-note groups”

Second, these notes are organized in groups of eight units, referred to as "8-note groups". The number of notes in each group coincides with the number of loudspeakers, so eight consecutive notes can each be radiated by a different loudspeaker before one loudspeaker is repeated. The loudspeakers are arranged around the listener in the approximate shape of a regular octagon (see Figure 3). As is explained later, the numbers also correspond to the audio tracks in the Pro Tools session.

Figure 3. Spatial arrangement of the loudspeakers.
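The two steps above — slicing a steady tone into identical notes and packing them into 8-note groups — can be sketched in a few lines of code. This is an illustrative reconstruction, not the author's actual workflow (which was realized in Pro Tools); the sample rate, frequency, and note duration below are arbitrary values.

```python
import math

def tone_as_notes(freq_hz, note_ms, n_notes, sr=44100):
    """Render a sine tone as a list of equal-duration 'notes'.

    Every note shares the same frequency and duration, so no
    melodic contour or rhythm arises from the division itself.
    """
    samples_per_note = int(sr * note_ms / 1000)
    notes = []
    for i in range(n_notes):
        start = i * samples_per_note  # keep the phase continuous across notes
        note = [math.sin(2 * math.pi * freq_hz * (start + j) / sr)
                for j in range(samples_per_note)]
        notes.append(note)
    return notes

def group_in_eights(notes):
    """Pack successive notes into 8-note groups (one note per loudspeaker)."""
    return [notes[i:i + 8] for i in range(0, len(notes), 8)]

notes = tone_as_notes(freq_hz=440.0, note_ms=100, n_notes=24)
groups = group_in_eights(notes)  # 24 notes -> three 8-note groups
```

Each 8-note group can then be distributed over the octagonal loudspeaker setup, one note per track.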

3) Organizing the 8-note groups in space using a different number of loudspeakers: sound-space density

The third sound-space strategy consists of organizing the notes in the 8-note groups in space using a different number of loudspeakers, from one to eight. Depending on the number of loudspeakers used, it is possible to create different “sound-space densities”. To understand this concept it is necessary to metaphorically consider sound as having mass and occupying a space. Where density is "the quantity of things or mass in a given area or space" (The Oxford Dictionary, 2001, p.324), here the 8-note groups represent a mass of eight units that occupy a different area or space according to the number of loudspeakers that radiate them (see Figure 4).

Figure 4. Graphic representation of sound-space densities 8/1 (a) and 8/8 (b).

The maximal sound-space density occurs when all the 8-note groups are radiated by one loudspeaker, so there is no space between the notes (see Figure 4 (a)). On the contrary, the minimal sound-space density occurs when the 8-note groups are radiated through the eight loudspeakers, as the notes are radiated in space using the maximal distance between them (see Figure 4 (b)).

The experiments show that moving sound through different sound-space densities produces the perception of important texture variations, as it is perceived with a “rougher” or “smoother” sound quality. At the same time, the movement of a texture through different sound-space densities implies a change of spatial position, which creates an internal rhythm and gives a particular shape and sound directionality to each texture.

Molding sounds in space using relative and absolute position variations

To organize the 8-note groups so they undergo constant sound location and sound-space density variations in such a way that they can be used in musical composition, this research proposes to follow two steps, which I call:

1. the "relative position"
2. the "absolute position"

As we will see, the relative and absolute positions represent a way of organizing the movement of the notes in the 8-note groups in space, creating different sound trajectories and molding them into different spatial shapes and sound directionalities.

1) The relative position

The relative position assigns a different loudspeaker to each note in the 8-note groups. The relative position always uses eight loudspeakers, or sound-space density 8/8. As the possibilities to organize them are almost infinite, the notes are organized following specific relative position rows (see Figure 5). These rows are used in both the experiments and the compositions:

Figure 5. (O), (R), (I), (RI) relative position rows.

Figure 6 shows the organization of the notes in 8-note groups in each track as they appear in a Pro Tools session, following the Original (O) relative position rows (see Figure 5).

Figure 6. Graphic illustration showing different arrangements of the notes in the 8-note groups in relation to the audio tracks and loudspeakers. Aud stands for “audio track”.

Each row organizes the notes differently in relation to the audio tracks, which are at the same time assigned to one loudspeaker. The numbers associated with each audio track (Aud 1, 2, 3 etc.) correspond to the loudspeaker numbers as shown in Figure 3. In Figure 6, for instance, audio 1 corresponds to loudspeaker 1, audio 2 to loudspeaker 2, etc. In this way, the first 8-note group arranged following the (O) relative position row 1357-2468 assigns the first note to loudspeaker 1, the second note to loudspeaker 3, the third note to loudspeaker 5, etc.

The experiments show that each relative position is perceived differently, creating particular sound trajectories. For instance, Figure 7 shows the sound trajectory created by the first relative position (O-1) 1357-2468:

Figure 7. Front-rear discontinuous movement described by the (O) relative position row n.1.
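A relative position row such as 1357-2468 can be read as the loudspeaker order for the eight successive notes of a group. The sketch below is a hypothetical encoding of that notation; the left/right split follows the octagonal setup described in the text, where tracks 1, 3, 5 and 7 sound in the left aural hemisphere and 2, 4, 6 and 8 in the right.

```python
def parse_row(row):
    """Turn a relative position row such as '1357-2468' into the
    loudspeaker order for the eight successive notes of a group."""
    return [int(d) for d in row if d.isdigit()]

def hemispheres(order):
    """Left/right aural hemisphere per note: odd-numbered speakers
    (1, 3, 5, 7) sit on the left of the octagon, even ones on the right."""
    return ['L' if s % 2 == 1 else 'R' for s in order]

order = parse_row("1357-2468")  # (O) row n.1 -> [1, 3, 5, 7, 2, 4, 6, 8]
sides = hemispheres(order)      # first four notes left, last four right
```

Read this way, row O-1 produces exactly the front-rear movement in each hemisphere that Figure 7 depicts.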

Following the octagonal organization of the loudspeakers in space shown in Figure 3, the notes in the 8-note group are organized so that tracks 1, 3, 5, and 7 sound in the left aural hemisphere, while 2, 4, 6, and 8 sound in the right. Therefore, in the example above the four loudspeakers in the left hemisphere radiate the notes in the 8-note groups first, followed by the four loudspeakers at the right. At the same time they describe two front-rear sound trajectories, i.e. the four sound sources at the left hemisphere describe a front-rear discontinuous movement, as do the four sound sources at the right.

2) The absolute position

The absolute position organizes the 8-note groups using less than eight loudspeakers. The absolute position is also organized in rows like the ones shown in Figure 8.

Figure 8. Example of twelve absolute position rows used in the experiments.

As can be seen, some loudspeakers are repeated in the same absolute position, so a particular sound-space density is achieved. The resulting sound-space densities are specified next to each absolute position. At the same time, the same sound-space density is repeated several times using different loudspeakers or sound locations. For instance, rows 3, 5, 9, and 11 use sound-space density 8/2 radiated from loudspeakers 8-6, 1-3, 3-5, and 1-2 respectively (see Figure 9).

Figure 9. Spatial arrangement of the same sound-space density 8/2 radiated through different loudspeakers.

In these cases, although the 8-note groups are perceived with the same texture, using different sound locations prevents an identical auditory experience:

• first, each one of them is perceived from a different position,
• second, the perception of the same 8-note groups with the same sound-space density from different angles of incidence to the ears distinguishes them with a particular pitch, loudness, and tone color, and
• third, in case the same sound-space density is repeated consecutively, the space confers a spatial rhythm.

The temporal and spatial organization of the notes in the 8-note groups

The relative position assigns a different loudspeaker to each note in the 8-note groups. As each note successively appears after the other, the loudspeakers also sound one after the other. Therefore, the relative position assigns a position in space to the notes in the 8-note groups and at the same time a chronological order for the successive positions, creating, as previously mentioned, different sound trajectories.

Similarly, the absolute position organizes the notes in the 8-note groups using less than eight loudspeakers. Besides determining which loudspeakers are used, it is also necessary to determine which loudspeaker sounds first. For instance, if we use sound-space density 8/2 and loudspeakers 3 and 4, the notes in the 8-note groups could be organized so that the first four notes sound at loudspeaker 3 and the last four at loudspeaker 4, or alternating both loudspeakers, etc.

The chronological order for the successive note positions in space is here achieved by combining the relative and absolute positions. To describe how the relative and absolute positions are combined it is necessary to explain the methodology used.

Using Pro Tools to organize the notes in space and time

Both the temporal and spatial organization of the 8-note groups is done through Pro Tools. The notes are first organized on the eight tracks following the relative position rows. For all the experiments using relative positions only, the audio tracks are organized successively, so audio 1 is radiated through loudspeaker 1, audio 2 through loudspeaker 2, etc., as shown in Figure 10 (see also Figure 6).

Figure 10. Organization of the tracks, outputs, and loudspeakers in Pro Tools.

The absolute position rows change the loudspeaker of each track so that less than eight loudspeakers are used, while keeping the same arrangement of the notes indicated by the relative position. Figures 11 and 12 show the organization of the notes in the 8-note groups in each track following the absolute position 1234-5678 and the relative position 1357-2468 (see Figure 5, relative position O-1).
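One way to model how the two rows combine — my interpretation of the method, though it is consistent with the worked examples given later for absolute positions 6666-8888 and 1133-6688 — is that the relative position fixes the order in which the eight tracks sound, while the absolute position re-routes each track to a loudspeaker:

```python
def note_speakers(relative_row, absolute_row):
    """Loudspeaker heard for each of the eight successive notes.

    relative_row: track order in which the notes appear,
                  e.g. [1, 3, 5, 7, 2, 4, 6, 8] for row 1357-2468
    absolute_row: loudspeaker assigned to each of tracks 1..8
    """
    return [absolute_row[track - 1] for track in relative_row]

rel = [1, 3, 5, 7, 2, 4, 6, 8]                    # relative position 1357-2468
a = note_speakers(rel, [1, 2, 3, 4, 5, 6, 7, 8])  # identity routing, density 8/8
b = note_speakers(rel, [6, 6, 6, 6, 8, 8, 8, 8])  # two speakers only (density 8/2)
c = note_speakers(rel, [1, 1, 3, 3, 6, 6, 8, 8])  # four speakers (density 8/4)
```

With this reading, `b` yields note pairs alternating between loudspeakers 6 and 8, and `c` yields a 1-3-6-8 trajectory traversed twice — matching the trajectories the text describes for those rows.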

Figure 11. First example showing the relation between relative and absolute positions: relative position 1357-2468, absolute position 1234-5678, sound-space density 8/8.

Figure 12. Graphic illustration showing the organization of the notes in the 8-note groups in relation to the tracks as it appears in a Pro Tools session.

Figure 12 should be considered as having two coordinates: the vertical axis representing space, i.e. audio tracks assigned to particular loudspeakers, and the horizontal axis representing time, i.e. successive notes in the 8-note groups. When the notes in the 8-note groups are displaced following a specific relative position, the vertical axis organizes the notes in the 8-note groups in space; each note is assigned to a specific loudspeaker, following the order specified by the absolute position. At the same time, the horizontal axis organizes each loudspeaker in time, so loudspeaker 1 sounds first, followed by loudspeaker 3, etc.

Changing the absolute position reorganizes the temporal and spatial organization of the same relative position. In the following example (see Figure 13), the previous absolute position 1234-5678 changes to the absolute position 6666-8888. The original sound-space density 8/8 becomes 8/2, so the spatial extension of the 8-note group is reduced dramatically. Consequently, the perception of the same relative position is very different compared to the previous example, as only two loudspeakers are used (6 and 8), and both are situated at the right-rear of the rear aural hemisphere.

Figure 13. Second example showing the relation between relative and absolute positions.

The spatial and temporal organization of the notes in the 8-note group is as follows: first, two consecutive notes appear at loudspeaker 6, then two notes at loudspeaker 8, followed again by two notes at loudspeaker 6, and two notes at loudspeaker 8 (see Figure 14).

Figure 14. Graph of the temporal and spatial organization of the notes in the 8-note groups using relative position 1357-2468 and absolute position 6666-8888.

Figure 15 below shows how changing the absolute position to 1133-6688 while keeping the same relative position 1357-2468 produces a new spatial and temporal organization of the notes. In this example the right and left aural hemispheres are used again, although the spatial trajectory of the notes in the 8-note group is different compared to the first example. The spatial and temporal organization of the notes in the 8-note group is as follows: the first two notes appear at loudspeakers 1 and 3, then two notes at loudspeakers 6 and 8. This sound trajectory is repeated again for the next four notes (see Figure 16).

Figure 15. Third example showing the relation between relative and absolute positions.

Figure 16. Graph of the temporal and spatial organization of the notes in the 8-note groups using absolute position 1133-6688 and relative position 1357-2468.

Finally, the perception of sound attribute, rhythm, texture, and sound location variations can also be achieved by changing the relative position while keeping the same absolute position. However, this option is less efficient. Indeed, the experiments show that when playing the same (O) and (R) relative-position rows with the same absolute position, the 8-note groups are perceived almost identically. In spite of this, the relative position does provide

enough variations to the successive 8-note groups that they are not perceived as exactly the same.

Hence, the perception of sound attribute (i.e. pitch, loudness, and tone color), rhythm, and texture variations associated with the movement of sounds through different sound locations and sound-space densities can be achieved by

1. keeping the same relative position while going through different absolute positions,
2. changing the relative position while keeping the same absolute position, or
3. changing both the relative and the absolute positions.

The experiments show conclusively that the use of simultaneous relative and absolute position variations accentuates the perception of sound attribute, rhythm, and texture variations between successive 8-note groups. However, the perception of such variations also depends on the sound signal, frequency and note duration of the notes in the 8-note groups, as well as on their “relative” and “absolute” distances. Relative distance refers to the distance between the notes in the 8-note groups, while absolute distance refers to the distance between the 8-note groups. To investigate to what extent sound location and sound-space density variations can be used to articulate a musical discourse, this research involved several experiments that use different types of sound materials.

Sound materials used to experiment with sound location and sound-space density variations

As already mentioned, the 8-note groups represent the most elemental formal organization of the sound material presented here. Each 8-note group is defined by a particular type of sound signal, frequency, and note duration, as well as by its relative and absolute distances. The 8-note groups are used to create different types of sound materials, which can be divided into two main groups:

1) textural, including sustained tones, continuums and textures, and
2) gestural, including attacks and short sound objects.

Sustained tones

Sustained tones use successive 8-note groups with the same frequency and note duration. Both relative and absolute distances, i.e. between notes and between 8-note groups, are always 0 ms.

Figure 17. Pro Tools image of a sustained tone.

The use of long note durations permits them to be perceived with a clear and definite pitch. In addition, the notes in the 8-note groups are always radiated using eight loudspeakers, so they undergo no sound-space density variations. As a consequence, it is not possible to perceive rhythm and texture variations. Sustained tones represent the most static sound material, as all parameters including sound-space density remain invariable, and only relative position variations occur (see Figure 17).

Textures and continuums

Textures and continuums use shorter note durations compared to sustained tones, and the distances between the notes are larger than 0 ms. The sound quality of textures and continuums is more or less pitched depending on the duration and distance of the notes in the 8-note groups.

Continuums represent the most homogeneous texture possible, as in general they maintain the same sound signal, frequency, note duration, and relative and absolute distances, while varying only the sound-space density. Textures are more dynamic than continuums, as they present variable relative and absolute distances between the notes and 8-note groups. In addition, textures present two contrasting sound qualities, which are achieved by using notes with or without fade-in and fade-out envelopes. Using different distances between the notes and two sound qualities implies that the sound itself undergoes variations in time, meaning that sound attribute variations are less evident, although they are fully experienceable using continuums.

Figure 18. Graphic example of a polyphonic texture.

Using different textures or continuums simultaneously creates polyphonic textures and continuums, as shown in Figure 18. In this example, each texture is represented by a different color. As can be seen, they move through different sound-space densities, as specified at the left margin.

Sound objects, attacks and gestures

Sound objects use several textures with the same frequency but different sound qualities, i.e. with different note durations and relative and absolute distances. To assess the importance of sound location and sound-space density variations to sound perception, the tests present the same sound object several times, varying the duration of each individual texture as well as its position and sound-space density (see Figure 19).
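The material types described in this section differ mainly in a handful of parameters. The record below is a hypothetical encoding of that parameter space; the concrete values are invented for illustration and are not taken from the pieces.

```python
from dataclasses import dataclass

@dataclass
class Material:
    """Parameters that characterize an 8-note-group sound material."""
    signal: str          # 'sine', 'square' or 'triangle'
    freq_hz: float       # frequency of every note in the group
    note_ms: float       # note duration
    rel_dist_ms: float   # relative distance: between notes in a group
    abs_dist_ms: float   # absolute distance: between successive groups
    fades: bool          # fade-in/fade-out envelopes on each note

# Sustained tone: long notes, both distances 0 ms -> clear, definite pitch.
sustained = Material('sine', 220.0, 500.0, 0.0, 0.0, False)

# Continuum: shorter notes with fixed non-zero distances; only the
# sound-space density is varied.
continuum = Material('sine', 220.0, 80.0, 30.0, 30.0, False)

# Texture: variable distances and optional fades give two sound qualities.
texture = Material('sine', 220.0, 80.0, 55.0, 120.0, True)
```

Encoding the materials this way makes the distinctions of the text explicit: only the distance and envelope fields separate a sustained tone from a continuum or a texture.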

Figure 19. Graphic example of three sound objects using four textures with the same frequency (159 Hz), but different durations, positions and sound-space densities.

The attacks use notes with the shortest note durations, from 1 ms to 46 ms. Using such short note durations implies that the perception of sound attribute variations is nonexistent, as the notes are perceived as tone bursts. Only the position of the notes is perceptible. To better differentiate each individual note, instead of 8-note groups with the same frequency, the attacks use 5-note groups with different frequencies. Using different frequencies in each 5-note group also permits each attack to be differentiated, and accentuates the perception of its spatial positions.

Explanation of three acousmatic compositions: Topos, Polyphonic Continuum, and Musical Situation 1

The sound materials that proved to be most efficient in relation to the perception of sound attribute, rhythm and texture variations were used to create three compositions: Topos, Polyphonic Continuum and Musical Situation 1, each one using specific sound materials. For instance, Topos uses gestural sound materials such as attacks, sound objects, gestures and short textures, while Polyphonic Continuum and Musical Situation 1 use textural sound materials such as continuums and sustained tones respectively.

These compositions show that the most successful sound discourses using 8-note groups together with sound location and sound-space density variations are those created using textural sound materials. The main reason is that, as already mentioned, the more variations in time, the less perceptible are the variations produced by varying sound location and sound-space density. Therefore, the piece Topos is the least successful of the three precisely because it uses

1. different types of sound materials, such as attacks, gestures, sound objects and textures,
2. three types of sound signals, which vary constantly and rapidly,
3. two differentiated sound qualities, i.e. notes in the 8-note groups with and without fade-in and fade-out envelopes, and
4. very short gestures and sound objects, changing permanently and rapidly, which implies that the musical discourse is based on constant variations in sound signals, frequencies, note durations and relative and absolute distances.

In Topos, the sound material is too dynamic, masking to a great extent the perception of the sound attribute, rhythm and texture variations occurring as a consequence of sound spatialization and sound-space density variations. However, they are indeed perceived and contribute to create a more dynamic and interesting musical discourse.

Polyphonic Continuum and Musical Situation 1 are based on long continuums and sustained tones respectively. These sound materials present few or no variations in time, and are long enough to permit the perception of sound attribute, rhythm and texture variations related to sound localization and sound-space density variations.

In the particular case of Polyphonic Continuum, in addition to the perception of sound attribute variations, the listener is able to identify each continuum and follow its movement in space through different sound-space densities, which permits the perception of cause-effect relations between the movement of sound and the perception of sound attribute, rhythm and texture variations. The perception of such cause-effect relations is perhaps, as is explained later, the most important characteristic of the compositions presented here.

The importance of using sound location and sound-space density variations applied to 8-note groups: “Source Bonding”

The use of 8-note groups together with sound location and sound-space density variations establishes a “source bonding” or cause-effect relation between the movement of sound and the perception of sound attribute and texture variations.

The term “source bonding” is used by Smalley to define “the natural tendency to relate sounds to supposed sources and causes, and to relate sounds to each other because they appear to have shared or associated origins” (Smalley, 1997, p.110). Smalley states that source bonding disappears with electronic music due to the artificiality implicit in electronically generated sounds, which prevents their being related to recognizable physical or natural gestures. This is not the case with instrumental music or musique concrète. In instrumental music, for instance, source bonding is produced by the physical interaction of the performer with the instrument, as the spectromorphological design of the sound is indicative of how the instrument is excited by the performer. Musique concrète uses sounds that are recorded from the real world, so it is almost impossible not to recognize them and relate them to their origins.

Smalley considers the possibility of generating sounds detached from their sound sources and physical gestures, which he calls “gestural surrogacy”, as one of the main achievements of electronic music. The “reduced listening” proposed by Pierre Schaeffer in his book Traité des Objets Musicaux (Schaeffer, 1966, p.270-2) inspires this approach. Schaeffer attempted to focus the attention on the properties of the sound itself, detached from its extrinsic or referential properties. This type of listening is extremely difficult to achieve, due to listeners' natural tendency to relate any sound to a source and a cause. However, even though Smalley highlights the advantages of gestural surrogacy in electroacoustic music, he also considers that “music that doesn’t take some account of the cultural embedding of gesture will appear to most listeners a very cold, difficult, even sterile music” (Smalley, 1997, p.112).

In my opinion, gestural surrogacy is one of the weak points in some electronic music precisely because the

natural tendency to establish cause-effect relations is not fulfilled. The interest of listening to a piece of music is proportional to the expectations that the musical discourse is able to generate in the listener. The fulfilment or frustration of what is expected produces different emotional reactions. In this sense, the possibility of relating a particular source or cause to a particular sound makes it possible to create a dialog between what is expected and what is perceived as a consequence of a particular action, and the ensuing reaction can be analyzed and judged against a previous expectation.

In this case, we consider that the perception of cause-effect relations between sound movement and sound attribute, rhythm, and texture variations contradicts the “gestural surrogacy” attributed to acousmatic music. Even though the sound signals used in the compositions presented in this thesis are electronically generated and thus cannot be related to any original source or cause, the relation between sound movement and the perception of sound attribute, rhythm, and texture variations gives sound a certain “natural” behavior and level of physicality similar to gestures, as they result from the physical movement of sound in space and the psychophysics of human sound localization. In this sense, such cause-effect relations allow electronic sounds to overcome their artificiality to a certain degree.

The importance of amplitude variations using sustained tones and continuums

However, the perception of cause-effect variations is not enough to sustain long musical discourses. The composition Musical Situation 1, for instance, uses sustained tones only, radiated through eight loudspeakers. The only variations that occur are relative position variations. As the sound undergoes no sound-space density variations, rhythm and texture contrasts are not perceived, reducing cause-effect relations to sound attribute variations. In order to invigorate the musical discourse, it was necessary to incorporate amplitude variations. The experiments and compositions show that amplitude variations, in contrast to frequency and timbre variations, do not mask the cause-effect relations between sound location and sound movement and the perception of sound attribute, rhythm and texture variations, especially when the curve of the amplitude envelope is too mild and gradual to create any attack or decay that could be perceived as a short gesture. The piece Musical Situation 1 shows how amplitude variations bring endless loudness and tone color variations to the same sound signals and sound materials, which, together with sound movement, permit the articulation of what I consider an interesting musical discourse.

The perception of time flow

Another important consequence of using textural sound material is that the perception of time flow is somehow dissolved, which does not occur when using gestural musical structures. As pointed out by D. Smalley, gestures “are governed by a sense of forward motion, of linearity, of narrativity”, which is provoked by the energy of an external impulse (Smalley, 1997, p. 112). In textures, the primary sound impulse is inexistent, and the energy is dilated in time. Thus the attention is focused on the internal activity instead of the forward motion. In this case the internal activity of both textures and continuums consists in the perception of the cause-effect relations between the movement of sounds in space and the perception of sound attribute, rhythm, and texture variations. Time is perceived as a function of the changes that occur in space, and the musical discourse is perceived as a dialog between sound and its movement in space.

Other spatial functions used in the compositions

The compositions presented in this paper include other spatial functions as well, giving a more pronounced spatial character to the pieces.

Sound transparency

One important spatial function is to achieve sound transparency and musical clarity. Together with the separation of the musical events into areas, as described below, this thesis introduced the “principle of no overlapping”, by which simultaneous attacks, gestures, sound objects, continuums or sustained tones cannot be radiated by the same loudspeaker (see Figure 20).

Figure 20. Graphic illustration of the principle of no overlapping.

Spreading different sound materials through different loudspeakers also permits the perception of an internal rhythm, which in this case is not created a priori, but is a consequence of the spatial movement of sound through different sound positions.

Formal functions

Each composition uses different spatial arrangements of the sound sources, so different sound spaces are achieved. The sound space represents “the acoustic perception of the geometric spaces via the composition, whenever the musical form, structure or Gestalt is spatialized” (Nauck, 1997, p. 23).

Using different sound spatializations differentiates sections of the same piece that use the same sound material. In Polyphonic Continuum, for instance, the continuums are spread in space using four aural hemispheres A, B, C, D, as shown in Figure 21.
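The “principle of no overlapping” introduced above, that no loudspeaker may radiate two simultaneous events, can be read as a simple scheduling constraint. The following Python sketch is purely illustrative and not part of the compositions’ actual software: the representation of events as hypothetical (onset, offset) time intervals and the greedy assignment are assumptions made here for the sake of the example.

```python
# Illustrative sketch (not the author's software): events are hypothetical
# (onset, offset) time intervals; loudspeakers are numbered 0-7.

def assign_loudspeakers(events, n_speakers=8):
    """Greedily assign a loudspeaker to each event so that events
    sounding at the same time are never radiated by the same one."""
    assignment = []
    for i, (onset, offset) in enumerate(events):
        # loudspeakers already occupied by earlier events overlapping this one
        busy = {assignment[j] for j, (o, f) in enumerate(events[:i])
                if o < offset and onset < f}
        free = [s for s in range(n_speakers) if s not in busy]
        if not free:
            raise ValueError("more simultaneous events than loudspeakers")
        assignment.append(free[0])
    return assignment

# three overlapping continuums and one later sound object
events = [(0.0, 10.0), (2.0, 6.0), (4.0, 12.0), (11.0, 15.0)]
print(assign_loudspeakers(events))  # → [0, 1, 2, 0]
```

The choice of the lowest free index is arbitrary; the point is only that an overlap test, rather than a fixed channel routing, decides where each event is radiated, so simultaneous material never shares a loudspeaker.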
Figure 21. Aural hemispheres used in Polyphonic Continuum.

The four areas are sufficiently separated in space to permit the differentiation of simultaneous continuums. This spatial separation permits:

1. sound clarity and transparency, especially when using simultaneous sound objects, textures and continuums, as well as

2. dialogs between sound materials and sound locations, and

3. the perception of sound attribute and texture variations.

Polyphonic Continuum presents 10 independent sections. The spatial arrangement is used to differentiate them, as the last five sections present the same sound material as the first five but invert its sound spatialization (see Figure 22).

Figure 22. Graphic showing the spatial inversion used in Polyphonic Continuum.

At the same time, some continuums also change their spatial extension, which occurs when changing from the two loudspeakers used for one aural hemisphere to eight loudspeakers, thus occupying the whole extension determined by the loudspeakers. In this case, as the same loudspeakers are used to radiate different continuums, the principle of no overlapping is not achieved, although the contrasting registers used between the continuums help to differentiate them.

By using these two strategies, it is possible to present the same material and avoid the perception of exact repetition, as each time it is perceived with new and renewed sound attribute, rhythm, and texture variations.

The perception of exceptional “diffuse” sound fields

Normally, diffuse sound fields occur when it is not possible to locate the position of the sound sources because they are masked by reflections, as in large reverberant halls. However, this thesis shows that diffuse sound fields can be achieved independently of the dimensions and acoustic characteristics of the room. In Musical Situation 1, for instance, the use of short sound units radiated in space through multiple loudspeakers spread around the listener creates diffuse sound fields in rooms with few reflections. These exceptional sound fields may result from the superposition of multiple primary sounds radiated by the loudspeakers, the early and late reflections, and the amount of uncorrelated signals generated by the reflections and the multiple sound sources. We could say that the short sound units spread in space through several loudspeakers situated around the room function both as primary sounds and as reflections, since the different positions of the sound sources imply that each sound signal presents level and time differences depending on the angle of incidence to the ear, which is exactly what occurs with reflected sounds.

Conclusions

This research introduces a new function of space in musical composition based on the psychophysics of spatial hearing in order to manipulate the perception of sound attributes such as pitch, loudness, and tone color. This new function uses spatialization as a formal element of sound itself, emphasizing the perception of such sound attribute variations to the point that they can be incorporated as a fundamental element of a musical discourse, explored in the three compositions presented here.

Two important techniques or methods involving the formal organization of sound are employed: sound division and sound-space density. First, the sound signals are divided into small units, or notes, organized in 8-note groups, which are radiated by eight loudspeakers arranged in the approximate shape of a regular octagon around the listener. When sound reaches the ears from different directions, the transfer functions occurring at the external ear differentiate them with subtle pressure level, phase, and time variations, which are perceived as pitch, loudness, and tone color variations. In addition to these sound attribute variations, when successive 8-note groups are organized in space following different sound-space
densities, i.e. using a different number of loudspeakers, they are perceived as well with rhythm and texture variations.

However, there is much left to explore; the importance of this approach to spatialization in musical composition must be further evaluated. Further projects in this research will aim to determine whether the strategy presented in this thesis is relevant to a wide variety of compositions and musical discourses beyond the ones presented here, and to discern whether this investigation has reached an ending point or not. For this reason, it is necessary to experiment with other sound signals and sound materials, and to find more formal relations between sound and space that can enrich the perception of sound and music.

Bibliography

Batteau, D. W. (1967). The role of the pinna in human localization. London: Proc. Roy. Soc.

Blauert, J. (1997). Spatial Hearing: The Psychophysics of Human Sound Localization (revised edition). Cambridge: MIT Press.

Boulez, P. (1963). Penser la musique aujourd'hui. Paris: Éditions Gonthier.

Nauck, G. (1997). Musik im Raum - Raum in der Musik: Ein Beitrag zur Geschichte der Seriellen Musik. Stuttgart: Franz Steiner Verlag.

The Oxford Dictionary, Thesaurus, and Wordpower Guide (2001). New York: Oxford University Press.

Schaeffer, P. (1966). Traité des objets musicaux. Paris: Éditions du Seuil.

Shaw, E. A. G., & Teranishi, R. (1968). Sound pressure generated in an external-ear replica and real human ears by a nearby sound source. J. Acoust. Soc. Amer. 44.

Shinn-Cunningham, B. G. (2003). Spatial hearing advantages in everyday environments. In: Proceedings of ONR Workshop on Attention, Perception, and Modelling for Complex Displays, Troy, New York: cns.bu.edu/shinn/pages/spatial.html (accessed December 2012).

Smalley, D. (1997). Spectromorphology: Explaining Sound-Shapes. Organised Sound, Vol. 2(2). Cambridge University Press.

Smalley, D. (2007). Space-Form and the Acousmatic Image. Organised Sound, Vol. 12(1). Cambridge University Press.

Von Békésy, G. (1938). Über die Entstehung der Entfernungsempfindung beim Hören (On the origin of the sensation of distance in hearing). Akust. Z. 3.

Von Békésy, G. (1971). Auditory backward inhibition in concert halls. Science 171.
4.5 Why Yet Another Sound Spatialisation Tool?


Rui Penha - INESC TEC and Faculty of Engineering, University of Porto

Abstract

This paper presents a perspective on the importance of interface design for musical software, based on my experience developing spatium, a set of free and open source software tools for sound spatialisation. It also addresses the relevance of the composers’ perspective on the development of musical software.

Keywords: Musical software, Interface, Spatialisation, Design

Introduction

It is now three years since the launch of spatium (Penha & Oliveira, 2013), my set of free and open source software tools for sound spatialisation. Since then, I have received a sustained influx of feedback from users all around the world and I have presented it at several conferences and festivals, discussing both its merits and its flaws. The most recurrent criticism I have received [1] is: why did I choose to develop another sound spatialisation tool? Why do I think spatium is relevant when compared to the myriad of similar tools that have matured in the past decades? In this paper, I will address these questions and try to make the case for the importance of the musical software tools developed by composers.

I must start by acknowledging that these criticisms do not come as a surprise: I am perfectly aware that there are many trusted sound spatialisation tools out there. I have, as a matter of fact, tested and analysed the vast majority of them before developing spatium. I also recognise that some of them are far more sophisticated and technically capable than the original version of spatium, with some of the best of those being also free and open source. In fact, the working name of spatium during its development was YASST - Yet Another Sound Spatialisation Tool. Still, the main goal of spatium, since the very beginning of its conception, was to foster the development of a new vocabulary for sound spatialisation in electroacoustic music composition by enabling an experimental approach to digital sound spatialisation. The main emphasis was thus geared towards the development of interfaces that composers can use to experiment in real time [2] with a diverse set of approaches to sound spatialisation.

Composers were amongst the first artists to embrace the potential of digital technology, long before the current trend of digital art, in great part thanks to the tools and ideas developed by people such as Max Mathews, John Chowning, Jean-Claude Risset or Miller Puckette, amongst many others. Some composers devoted a significant part of their careers to the development of new digital tools for musical composition, both for their own use and to share with other composers. These idiosyncratic tools exist alongside the commercial ones, which are usually developed by teams of experienced computer programmers. A tool developed by a sole composer in his or her free time is naturally more prone to erratic behaviour than a tool developed by a group of professionals (even though there are some notable exceptions to this rule). It is also more likely to have its development stalled as soon as the composer’s interests have shifted towards another direction, abandoning the tool to a path of obsolescence that can even lead to the loss of relevant cultural artefacts (Pennycook, 2008). So why bother? Why do so many composers feel the urge to bypass commercial software and spend their time developing their own tools?

What does interface design have to do with musical composition?

The relation between the development of new instruments and the development of new music for those instruments is obvious and well documented throughout the history of western music. Some of the musical instruments we use today were developed with the help of composers, who would continuously push the boundaries of the technique made possible by a specific invention. We can identify two main factors that influence the development of a specific compositional vocabulary for a given musical instrument: its timbre and its user interface, i.e., the elements that the player manipulates in order to produce and control the sonic output. While the timbre is clearly the most important feature from the listeners’ perspective, it seems easy to make the case for the prevalence of the user interface over the timbre in the development of new vocabulary for any given instrument.

It is well known that, in the history of electroacoustic music, new sonic devices (such as, e.g., the modular synthesiser), inventive ways of manipulating recording media (such as, e.g., tape splicing), and even technical deficiencies (such as, e.g., scratched vinyl records) have brought to life a myriad of new musical possibilities. Some of the interfaces that are used to manipulate these new tools arose as consequences of mechanical constraints (as previously happened with musical instruments) and the pioneers of their new musical roles were forced to develop particular techniques to be able to play them as instruments. Nonetheless, and as the original focus for the development of most studio equipment was the technician and not the musician, the design of their user interfaces often resorts to the solution of enabling fine control over every parameter with, e.g., a button or a given type of potentiometer. As a consequence, even when this equipment transcends the condition of a technical appliance and evolves into a real musical instrument, the user interface paradigm is usually maintained in order to benefit the users that have already developed a working knowledge of its use, thus bypassing the opportunity for a higher-level approach to user interface design that would facilitate real-time expressive performance by a single operator. The solution of one control for each function, generally well regarded as a user interface strategy, has rendered many of these tools difficult or even impossible to control by a single operator, which led to the development of supplementary devices capable of
sequencing, recording and playing the automation of parameters [3].

One of the main advances brought by digital media is the decoupling between the interface and the signal processing: anything can be mapped to anything else. Yet many of the interface metaphors that we have grown to depend on while creating music with computers were originally conceived for other, often older, realms. Consequently, they reflect their original purpose’s strengths and, more importantly, its drawbacks [4]. Despite the fact that we can nowadays identify a trend towards new approaches to interfaces for musical expression, to which the effort of the NIME community [5] is particularly relevant, countless user interfaces of professional music software still rely on the one-control-for-each-function strategy and on exploiting the resemblance to the analogue equipment they replace (such as, e.g., the analogue mixing board metaphor that is ubiquitous in digital audio workstations). While it is true that professional musicians can capitalise on their acquired vocabulary and musicianship when approaching computer music by using interfaces based on, e.g., piano keyboards, scores or mixing boards, beginners can face a double challenge imposed by them: not only do they have to find ways to explore the possibilities of the software, they will also have to overcome the potential difficulties of approaching an analogy to an interface that they don’t fully control to begin with [6].

At the onset of computer music, the fact that most (if not all) processes were not rendered in real time established the paradigm of a highly iterative workflow:

a) the composer thinks of an idea;

b) the composer notates that idea in a highly abstract form (i.e., the algorithm);

c) the computer renders the sonic result;

d) the composer listens to the result and refines the algorithm.

The whole process can then be restarted from stage b) onwards until he or she considers the results close enough to the original idea, or different but nevertheless satisfactory. This workflow, like the non-real-time sequencing of automation parameters, is not entirely different from the process of composing traditional written music, which can help to explain why composers were so quick to embrace computer technology: we just need to replace the algorithm with the score at stage b) and the computer with the musicians at stage c). We can nonetheless find at least two fundamental differences between this iterative process in computer music and in traditional written music.

The first, and by far the most important, is that the musicians are human and thus very different from the computer: on one hand, they imprint a personal temperament on the interpretation of the score and, on the other hand, they are usually less willing to engage in as many iterations of the workflow as the composer would wish for. The second, more relevant to the context of this article, is that in the written music tradition the composer is dealing with instruments that he usually knows from first-hand experience and that have a long-standing tradition that he has previously heard and studied. Most western composers were trained as performers at some point in their careers and they have studied and worked alongside other performers. Some have even played in chamber groups or orchestras for part of their lives. Consequently, they know the traditional instruments very well and they are familiar with their idiomatic vocabulary. The limitations of these instruments are also very well documented and whenever a composer decides to push the boundaries, he or she usually does so by integrating those experiments in an otherwise traditional framework. In most cases, he or she even has direct access to a professional performer of the instrument (often the person who originally commissioned the work) to experiment with some ideas at various stages of the compositional process. In computer music, on the contrary, it is not uncommon for composers to experiment inside what are, even if solely for them, completely uncharted territories. Some of those experiments do fail miserably. Some others are particularly successful and help to establish paradigms for those who then want to follow a similar path.

Interface design for digital sound spatialisation

John Chowning’s Turenas (1999) is one of those very successful examples and the software that was developed for the sound spatialisation of that piece (Chowning, 1971) remains a cornerstone of all the spatialisation software that followed. Despite its very innovative gestural control over the spatialisation trajectories, one of its limitations was the fact that it did not render the results in real time, due to limitations of the computer processing power available at the time. F. Richard Moore’s “General Model for Spatial Processing of Sounds” (Moore, 1983) had to wait almost two decades for a version that would enable real-time control over spatialisation (Yadegari, Moore, Castle, Burr & Apel, 2002). The user experience with this early spatialisation software was thus an example of the aforementioned iterative refinement workflow, a process that inspires a more analytical, geometrical approach to spatialisation. This iterative process does not necessarily serve the best interests of the composer, as we will explore later, and can even contribute to an unfavourable mismatch between the composers’ intentions and the listeners’ cognition (Begault, 1986). While non-real-time processes were to be expected by the pioneers of computer music, it is striking how the vast majority of composers still use the non-real-time automation of panning in audio sequencers, along with its inherent iterative process and geometrical approach to spatialisation, as their main spatialisation interface (Peters, Marentakis & McAdams, 2011).

We can safely presume that when a composer starts to work with moving sound sources, he or she has previously listened to several real examples of moving sound sources. He or she is less likely, however, to have had hands-on, real-time experience with the intentional movement of sound sources inside a given acoustic space. The spatial cognition is, therefore, there to drive the spatial imagination of the composers - as Francis Dhomont writes, ”the space, too, belongs to memory" (Vol d’arondes, 2003) - but the intuitive control over sound spatialisation is not. Whilst composers often describe the movement they want to imprint on sound sources using dynamic terms - such as, e.g., "It is as if the sound-object is thrown out from its point of origin on an elastic thread whose tension slows down its motion and then causes it to
accelerate back towards the source." (Wishart, 1996) - or even poetic intentions - such as, e.g., "Through this deep, blemishless blue, the flight of swallows: a strident, constantly changing feeding dance." (Vol d’arondes, 2003) - they often need to resort to geometrical descriptions of these ideas in order to be able to implement them using common sound spatialisation software. That incoherence becomes particularly incisive when we become aware of the fact that a significant part of composers are unsatisfied with their current tool for sound spatialisation, acquainted with some of the alternatives, and looking forward to changing their current spatialisation workflow (Peters, Marentakis & McAdams, 2011).

The making of spatium

As previously mentioned, the main goal for spatium was to enable an experimental, real-time approach to digital sound spatialisation. It was decided early on to build spatium as a set of tools integrated into a modular architecture, as a stratified approach had been previously identified as a fruitful solution for sound spatialisation, adapting to various compositional needs and enabling different combinations of rendering algorithms and controlling interfaces (Peters, Lossius, Schacher, Baltazar, Bascou & Place, 2009). A modular architecture greatly facilitates the integration of a tool into different compositional workflows, as the composer is free to choose amongst the modules the ones he or she is most interested in working with. The existence of a well-documented communication protocol between elements also makes it easier for the end user to develop and integrate his or her own modules for more idiosyncratic goals. The fact that we can render the spatialisation instructions with several different engines, including ones developed purposely for specific concert venues, maximises the portability of the music into different environments and contributes to its longevity. The ongoing trend of stratified approaches for sound spatialisation software, however, has its main focus on the possibility of controlling “different spatial rendering algorithms from one common interface” (Peters, Lossius, Schacher, Baltazar, Bascou & Place, 2009). I believe that, from a composer’s perspective, the most interesting capability of this modular approach is actually the other way around, as each spatialisation interface has its own interaction vocabulary and enforces a specific approach, even if solely by making the composer’s vision cumbersome to implement.

spatium currently has ten different spatialisation interfaces. Some of them are devoted to gestural spatialisation - i.e., using physical interaction to spatialise sound, evoking tools such as Schaeffer’s pupitre d’espace (Poullin, 1999) or Stockhausen’s Rotationstisch (Maconie, 1990) - and to kinematic spatialisation - i.e., using geometrical interfaces to control the automation of the sound’s position in space, as is the case with most user interfaces for sound spatialisation. The composer is free to use any interface he or she recognises as ideal for each and every sound that is to be moved in space. The main contribution of spatium is, however, the introduction of dynamic interfaces for sound spatialisation.

Dynamic spatialisation interfaces

Dynamic spatialisation interfaces introduce a higher-level control over the sound spatialisation, as the composer is no longer exerting direct control over the position or the trajectory of a sound at any given time. Instead, he or she is in control of the parameters of forces in a physical simulation [7]. This enables a new kind of real-time spatialisation, since even when the user is not directly controlling the interface, the sound spatialisation is being generated with an outcome that the composer can intuitively predict, based solely on his vast experience with real-world physics and previous contacts with the software.

The modelling of forces, such as gravity, for sound spatialisation started naturally as an exploration of the dynamic metaphors that I have heard and read countless times from other composers when describing the movement of sound sources they sought for their pieces. The first experiments with these dynamic interfaces revealed a playful and inspiring alternative to the regular spatialisation interfaces, while at the same time hinting at the possibility that their use could in fact enhance our perception of sonic movement, velocity and acceleration. This hypothesis was further supported by the current research about representational momentum - i.e., the displacement in the direction of motion of the perceived final position of a moving object - and other related phenomena that happen with both visual and acoustic stimuli. Whilst the source of these phenomena is still unclear, an established approach explains them “by cognitive factors dealing with principles of internalized dynamics. Experiences of the physical world might have become incorporated into our mental representation of the world” (Getzmann, 2005, p. 229). This suggests that our cognitive perception of motion in space is not only related to the localization of sounds via their acoustic stimuli, but also to a comparison with internalized mental representations of physical phenomena that could explain their movements. My piece pendulum [2012] is a study on dynamic spatialisation and was integrally developed using spatium. The dynamic interfaces were useful not only for spatialisation, but also for the development of the video component of the piece and for the composition of the musical gestures, as I described previously in (Penha, 2013).

Conclusion

Programming is quickly becoming a universal skill amongst composers, being more or less present in virtually every composition degree. Whilst professional music software already presents a modularity that exceeds what is commonly found in other types of software (e.g., via the common use of plugins from different origins inside a DAW), I believe that we will witness an increase in the number of open doors that composers will search for as a way to customise their workflow for electroacoustic music composition. Tools such as, e.g., Max for Live [8] or libpd [9] will increase the number of possibilities for composers to integrate custom elements within software developed by professionals. The current division between composers that use mostly tools that they develop themselves and composers that prefer commercial tools will therefore give way to an
ambivalent approach, with electroacoustic music composers expecting all of their tools to be highly versatile.

By opening the door for the easy mapping of any expressive interface to any parameter of any signal processing software, we can greatly increase the potential for the development of a new expressive vocabulary for electroacoustic music. Composers can therefore greatly benefit from software that is developed with these new needs in mind: with simplified and well-documented communication protocols; giving easy access to control parameters via OSC [10] and clearing the way for complex mappings that exceed simple MIDI control with standard interfaces; enabling the seamless mix and match of different interfaces within the same work session, optimising the workflow and fostering the emergence of creative fusions. After all, playfulness and swift expressive pleasure can be as inspiring in musical software as they have always been in traditional acoustic instruments.

Notes

[1] Often from people with a more technical background, i.e., non-practicing musicians or composers.

[2] Throughout this paper, real-time processing refers to the digital processing of signals with negligible latency.

[3] If we compare the consoles of the largest pipe organs in the world to the big modular synthesisers of the last decades, we can clearly see the difference between designing a very complex instrument that is fully controllable by one skilled player on his or her own and designing a very capable instrument that exposes all of its parameters simultaneously and is thus practically unusable in its full potential, in real time, by a single musician without resorting to some sort of parameter automation.

[4] The ubiquitous piano roll interface is perhaps one of the most striking examples of this idea: what was born out of the mechanical constraints of 19th century piano technology (which, as most musical instruments and interfaces, gave birth to its own idiomatic musical vocabulary, such as, e.g., the music of Conlon Nancarrow) is nowadays one of the most widely used interfaces for computer music composition!

[5] http://www.nime.org

[6] As an example, one can think of the piano keyboard, which is probably the most common kind of MIDI controller for computer music and almost omnipresent as an interface on computer music software. Should it be natural to expect everyone to immediately relate to an interface that not only makes, e.g., microtonal exploration hard, but also presents some equivalent musical intervals in a different and unnatural (i.e., with no discernible acoustical relation) way?

[7] Or, in the case of the spatium.flocking interface, in control over the behaviour of autonomous agents, in an example of sound spatialisation with particle systems (Kim-Boyle, 2005; Davis & Karamanlis, 2007).

[8] https://www.ableton.com/en/live/max-for-live/

[9] http://libpd.cc

[10] Open Sound Control: http://opensoundcontrol.org/introduction-osc

Bibliography

Begault, D. (1986). “Spatial Manipulation and Computers: a Tutorial for Composers”. Ex Tempore, vol. 4 (no. 1), p.56-88.

Chowning, J. (1971). “The Simulation of Moving Sound Sources”. Journal of the Audio Engineering Society, vol. 19 (no. 1), p.2-6.

Davis, T., & Karamanlis, O. (2007). “Gestural control of sonic swarms: composing with grouped sound objects”. Proceedings of the Sound and Music Computing Conference 2007, Lefkada, Greece.

Getzmann, S. (2005). “Representational momentum in spatial hearing does not depend on eye movements”. Experimental Brain Research, vol. 165 (no. 2), p.229-238.

Kim-Boyle, D. (2005). “Sound spatialization with particle systems”. Proceedings of the 8th International Conference on Digital Audio Effects, Madrid, Spain.

Maconie, R. (1990). The Works of Karlheinz Stockhausen (2nd ed.). Oxford: Clarendon Press.

Moore, R. (1983). “A General Model for Spatial Processing of Sounds”. Computer Music Journal, vol. 7 (no. 3), p.6-15.

Penha, R. (2013). “Composing from spatial gestures: the making of pendulum”. Sonic Ideas, vol. 6 (no. 11), p.39-49.

Penha, R., & Oliveira, J.P. (2013). “Spatium, tools for sound spatialization”. Proceedings of the Sound and Music Computing Conference 2013, Stockholm, Sweden.

Pennycook, B. (2008). “Who will turn the knobs when I die?”. Organised Sound, vol. 13 (no. 3), p.199-208.

Peters, N., Lossius, T., Schacher, J., Baltazar, P., Bascou, C., & Place, T. (2009). “A stratified approach for sound spatialization”. Proceedings of the Sound and Music Computing Conference 2009, Porto, Portugal.

Peters, N., Marentakis, G., & McAdams, S. (2011). “Current technologies and compositional practices for spatialization: a qualitative and quantitative analysis”. Computer Music Journal, vol. 35 (no. 1), p.10-27.

Poullin, J. (1999). “L’apport des techniques d’enregistrement dans la fabrication de matières et de formes musicales nouvelles: applications à la musique concrète”. Ars Sonora, vol. 9, p.31-45.

Turenas (1999). Chowning, J. Mainz: Wergo. CD.

Vol d’Arondes (2003). Dhomont, F. Montréal: empreintes DIGITALes. CD.

Wishart, T. (1996). On Sonic Art. London: Routledge.

Yadegari, S., Moore, R., Castle, H., Burr, A., & Apel, T. (2002). “Real-Time Implementation of a General Model for Spatial Processing of Sounds”. Proceedings of the International Computer Music Conference 2002, Gothenburg, Sweden.

4.6 Introducing the Zirkonium MK2 System for Spatial Composition


David Wagner, Ludger Brümmer, Götz Dipper and Jochen Arne Otto
ZKM | Institute for Music and Acoustics, Karlsruhe

Abstract

The Institute for Music and Acoustics is a production and research facility of the ZKM | Center for Art and Media Karlsruhe. It is well known for the “Klangdom”, a multi-loudspeaker facility for spatial sound diffusion with the aim to provide artists and composers with new possibilities. In this paper we present the overall revised and extended software solution for controlling the Klangdom, the Zirkonium MK2. Its origins in the previous version are briefly outlined and the advances are thoroughly described. Due to a very flexible client-server architecture, a hybrid spatial rendering engine and a very gestural trajectory editor it is already a useful toolkit for the institute’s guest composers.

Keywords: Spatial Audio, Klangdom, VBAP, Application

Introduction

In his utopian text “New Atlantis”, published in 1627, Francis Bacon describes the prototype of a research institute and museum which comprises a department occupied with the processing and display of sound: “We have also sound-houses, where we practise and demonstrate all sounds, and their generation. [...] We represent small sounds as great and deep; likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. [...] We have certain helps which set to the ear do further the hearing greatly. We have also divers strange and artificial echos [...]. We have also means to convey sounds in trunks and pipes, in strange lines and distances.” (Vickers, 2008)

However, it was not before the 1950s that the technical means for the creative use of virtual sonic space in music were actively developed, with Pierre Henry and Jacques Poullin’s “Potentiomètre d’Espace”, the Poème Electronique of Le Corbusier, Edgard Varèse and Iannis Xenakis, Jordan Belson and Henry Jacobs’ Vortex concerts from 1957-1959, and Stockhausen’s premiere of Gesang der Jünglinge in 1956, which in a 1958 speech he conceived of as the attempt to include sound direction and movement as a new dimension of musical experience (Stockhausen, 1988). In this historically prominent speech, Stockhausen likewise expressed the vision of spherically shaped music venues specifically designed for the presentation of spatial music, equipped all around with loudspeakers for an immersive presentation of sound. This conception was actually put into practice about ten years later as part of the German pavilion at the Expo ’70 in Osaka, Japan. The “Kugelauditorium” spherical concert hall featured 50 loudspeakers together with equipment for the spatial positioning and movement of sound sources: a tape recorder carrying a control signal, and the so-called “rotary mill”, a device for the live routing of a source signal along up to ten channels by the turning of a crank.

The immersive approach of the Kugelauditorium remained relevant for the further development of performance venues for spatial music, when in the early 2000s Ludger Brümmer and his research group at the Institute for Music and Acoustics (IMA) at the ZKM | Center for Art and Media conceived and implemented their Klangdom (Ramakrishnan, 2006), extending the IMA performance and production space within the prominent ZKM Cube. This dome-shaped setup of 43 loudspeakers and four subwoofers enables the direct positioning and movement of virtual sound sources by way of its dedicated software Zirkonium, making the Klangdom an advanced concert and production instrument for spatial music.

Moreover, the Klangdom and Zirkonium have been designed following an open strategy, bearing a number of practical issues in mind. Particular emphasis has been put upon basic functionality to enable both adaptability and extendability, but also to facilitate the creative process for users that aren’t at the same time programmers. In particular, a fusion of the levels of sound and spatial composition should be scaffolded.

Figure 1. The Klangdom in the ZKM Cube concert hall, with 43 loudspeakers and 4 subwoofers (© ZKM | Zentrum für Kunst und Medientechnologie Karlsruhe, Photo: Fabry)

This general strategy has been further developed in the new, revised and extended version of Zirkonium, Zirkonium MK2, which is the subject of the remainder of this paper. For a recent overview of the Klangdom, see also (Brümmer, 2014).

2. ZIRKONIUM

For controlling the ZKM Klangdom the IMA has been developing the free software Zirkonium since 2004. Its central aim is to simplify the use of space as a compositional parameter. Therefore positions and movements of sounds can be created and arranged in an event-based timeline or remote-controlled using Open Sound Control (OSC). It is designed as a standalone application for Apple OS X and handles multichannel sound files or live audio. For the spatial rendering of virtual sound sources it uses Vector Base Amplitude Panning (VBAP) within a user-defined loudspeaker setup, or binaural filtering for headphones. When working with real speakers it is moreover possible
to modify the size of a sound source by using a technique called Sound Surface Panning (Ramakrishnan, 2006). To avoid comb-filter effects a source can optionally be snapped to the nearest speaker. Zirkonium has been a reliable tool in a wide variety of productions and performances. It has been constantly enhanced with different developers involved, which to a certain extent results in a patch-like source package.

2.1 Zirkonium MK2

In 2012 the IMA started reengineering the system, taking into account the experience of the staff and guest composers. The result is a more stringent modular client-server based toolkit which comes along as a bundle of several applications and plugins, an extensive text and video documentation, as well as a steadily growing collection of example patches and code snippets for creating custom control units in programming environments like Max or SuperCollider.

2.1.1 Motivation

The general necessity of a flexible system in composing spatial music is thoroughly discussed in Penha (Penha, 2013) and Peters (Peters, 2009). The following text highlights certain aspects that coincide with experiences made with the original Zirkonium, serving as a background for a description of new modules and features.

The variety of compositional and technical needs, especially in the area of computer music, requires a system which can be easily modified by a programmer or the composer himself. This can be achieved by a more immediate access to the logical components of the software, which in turn encourages the use of creative performance techniques including mobile devices, controllers and sensors.

These either permanent or project-based extensions of the software can cause a huge effort when one is forced to break into someone else’s code and understand the internal infrastructure. The client-server paradigm provides a remedy for this by enabling new ZKM or third party developers to write extending modules in their familiar programming languages and development environments, which access the core functionality of the remaining system by networking communication. This was already considered in the original Zirkonium by means of the possibility to receive and send OSC messages for a fully externalized control of the spatialization. In Zirkonium MK2 this rather distributed approach is pursued even more rigorously, which results in an isolated entity: the Spatialization Server (section 4). It is a Max-based standalone application which does all the audio related tasks like the handling of live inputs / soundfiles and the spatial rendering according to a desired speaker setup.

When working with Max, which to a certain extent is a modular framework itself, it is much easier to re-use already existing software and libraries since its community is built on sharing. This also encouraged the development of a hybrid spatial rendering engine by use of modified versions of current VBAP and Higher Order Ambisonics (HOA) implementations. Simultaneous combinations of these techniques can be chosen according to their aesthetic properties. Furthermore it is optimized for the integration of several third party clients like the ZirkOSC plugin (Normandeau, 2013) or the Spatium interfaces (Penha, 2013). New project-based extensions can easily be linked with the system due to its distributed and open architecture. Another positive side effect is the simplified maintainability. The audio engine and spatial rendering system in the original Zirkonium is implemented with a version of the Apple Core Audio library that is becoming more and more outdated. To remain compatible with contemporary OS X updates it would be necessary to constantly revise a set of rather low-level functions. By using Max the audio functionality can be easily accessed with a few clicks, while the respective bindings to the operating system and hardware updates are maintained by the creators of Max.

The original Zirkonium breakpoint editor pursues a relatively rudimentary approach for the positioning and movement of virtual sound sources: in an event-based timeline one can create spherical rotations with a fixed speed. Zirkonium MK2 contains a graphical Trajectory Editor (section 5) which uses quadratic Bézier splines for the definition of movements around the speaker setup, but also for a modification of speed and acceleration along time. Furthermore it is capable of recording live panning instructions in the underlying representation. This has been a strongly demanded feature by many guest composers since it is a very intuitive and natural way of describing movements, which can also be exported as compositional patterns. By maintaining the event-based data structure from the classic Zirkonium, old pieces can easily be imported by applying a resampling of the text-based rotational figures as Bézier splines, which can be extended or modified just like newly created paths.

2.1.2 Architecture

Figure 2 shows the elementary architecture of the system and the possible combinations of the individual components, which are represented by rectangular boxes. Each box is subdivided into two sections: the In and Out section. The different entities that are listed in the Out section of one component can be sent to another component if they are contained in the respective In section. Since the specific combinations strongly depend on the composer’s intention and workflow, the following text gives a more general overview of how the components are usually applied.

The Spatialization Server (section 4) receives live audio streams delivered by another software like a DAW by using Jack or Soundflower, or an external source through the hardware audio interface or the network by services like netJack and the streaming audio project [1]. Furthermore it can be instructed to play back certain sound files by a custom controller client or the Trajectory Editor (section 5). The real-time spatialization is done with respect to a specified loudspeaker setup (section 3) while the Server’s output goes directly to the audio interface. The instructions of how to position and move certain sound sources can also be transmitted from an external client, from the featured Trajectory Editor, or forwarded and exchanged between them to complement each other’s functionality (section 8). A time stamp that keeps track of the ongoing piece can be exchanged between all clients to fire up certain spatial events or update positions according to a timeline.
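The OSC exchange between these components can be sketched with a minimal, dependency-free client. The address pattern /zirkonium/source/1/position used below is a hypothetical placeholder, not the actual Zirkonium OSC namespace; the byte layout, however, follows the OSC 1.0 specification (a NUL-padded address string, a NUL-padded type tag string, then big-endian arguments):

```python
import math
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad a byte string with NULs to the next 4-byte boundary (OSC 1.0).
    Strings that already fill a boundary still get four NULs, so every
    string stays NUL-terminated."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message carrying only float32 arguments."""
    msg = osc_pad(address.encode("ascii"))                    # padded address
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))   # type tag string
    for a in args:
        msg += struct.pack(">f", a)                           # big-endian float32
    return msg

# Hypothetical client: move source 1 to azimuth pi/2, elevation pi/4 (radians).
packet = osc_message("/zirkonium/source/1/position", math.pi / 2, math.pi / 4)

# Sending the packet over UDP to a (hypothetical) server port:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 50000))
sock.close()
```

Any environment that can emit such UDP packets — Max, SuperCollider, a mobile controller app — can act as a client in the architecture above, which is precisely what makes the client-server approach attractive.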

Figure 2. Possible combinations in a Zirkonium MK2 Setup

3. Speaker Setup

The Speaker Setup application emerged from the original Zirkonium, where it existed in a subwindow for explicit internal use. It enables the user to create and maintain different loudspeaker configurations corresponding to the studios where one is planning to work. At this point the advantage of an object-oriented encoding based on spherical coordinates becomes clear, which is employed both in VBAP and Ambisonics. The spatial score, which implies sounds, their positions and movements, is decoded in regard to a desired speaker setup, which can be exchanged with one click. Thus the composer can present his piece in a different studio without having to prepare a new mixdown. Certain aspects like the individual dimensioning and room acoustics of a studio can make a piece sound different from the initial composing space (see [9] for a more detailed consideration) and should always be kept in mind when intending to relocate the presentation.

Speaker Setup (Figure 3) provides both a three-dimensional interface for a free-handed positioning of speakers as well as numerical tables to register either spherical or rectangular coordinates. By default it contains several sets of loudspeaker coordinates for basic layouts like a quad or octophonic setup and for the studios in the ZKM including the Klangdom. These presets can either be modified or a new one can be started from scratch. The tool also maintains output patching presets which are associated with one or several speaker setups. Once a setup has been successfully created it can be exported to XML to be re-used in the previously mentioned Spatialization Server or the Trajectory Editor. Such a setup can also be created with a regular text editor since the XML-Scheme is also part of the documentation.

Figure 3. User Interface of the Speaker Setup application

4. Spatialization Server

The Spatialization Server is a standalone application based on the Max runtime that can be considered the core of the MK2 system. It is responsible for every realtime-audio related task such as soundfile playback or recording, handling live audio streams and applying the spatial rendering of virtual sound sources. It is designed to be remote controlled by means of the MIDI, OSC or TCP/IP protocols and provides an adjustable three-dimensional visualization of the ongoing piece.

4.1 Everything at a glance

The front end to the Spatialization Server (see Figure 4) enables the user to organize the entire infrastructure of the respective piece in one window. A tweakable 3D-visualization in a secondary window can be used to monitor the moving sound sources in relation to the used speaker setup. The main window is subdivided into three sections providing level meters for all incoming and outgoing audio signals, concluding with a collection of buttons and fields which provide access to every configurable feature of the Spatialization Server.

Figure 4. Main window of the Spatialization Server

4.2 Generic Presets

The User Interface design of the Server, including the overall color scheme and the types of the respective buttons, sliders and level meters, is inspired by the very appealing set of spatium renderers, which are thoroughly introduced in (Penha, 2013) and can be accessed in the open source domain [2]. These renderers consist of different OSC- or MIDI-controllable standalone applications for Ambisonics rendering or Amplitude-Panning of live inputs, sound field recordings or stereo sound files respectively. The spatium·ambi renderer for instance provides configurable soundfile- and Ambisonics-related settings, a fixed amount of 16 live inputs and different pre-declared exchangeable loudspeaker setups.

To extend this limitation of pre-defined in- and outputs, the Spatialization Server makes use of the JavaScript capabilities of Max, which are provided by the js object. Thus the amount of inputs can be freely chosen without any logical limit. Depending on the power capacity of the user’s computer this maximum supported number is limited physically and can vary in each scenario. The amount of outputs is determined by the created speaker setup and thus is also assigned dynamically. The same counts for direct outs. This in- and output configuration as well as the spatial rendering assignment can be saved into a preset which might refer to one or several pieces.

4.3 Hybrid Spatial Rendering

One of the key features of the Server is the possibility to simultaneously process incoming sound sources with different spatial rendering algorithms. Each channel of the Input section can be spatialized with VBAP according to (Pulkki, 1997) with the Sound Surface Panning extension described in (Ramakrishnan, 2009), with a mixed order Ambisonics proposed in (Penha, 2013), or sent to a dedicated direct out right next to the Output section. Furthermore a channel can be declared as one of the four B-format channels coming from a soundfield microphone recording. The idea behind this mixed approach is to enable the composer to use the desired rendering algorithm and its acoustic properties as a stylistic device. Concrete aesthetic qualities of spatial technologies are described by Marije A.J. Baalman in (Baalman, 2010), where she applies Ambisonics for giving the recording a rather “spatial and diffuse impression” or Wave Field Synthesis (WFS) for projecting sound inside the listening area and thus creating “a very intimate effect”.

Advances have also been made in the sound quality of the binaural filtering for headphone use. It applies FFT-based fast convolution with impulse responses from the CIPIC HRTF database [3] as proposed in (Andersen, 2014).

4.3.1 Performance

Pieces for the Klangdom typically comprise about 16-32 sound sources. Of course this number strongly depends on the composer’s intention, workflow and base material. For our eight-core Mac Pro model from early 2008 a number of 50 sound sources is a realistic upper limit for VBAP / Ambisonics rendering.

Before actually working with the system composers tend to be worried about the performance limitation of the spatial rendering, although it rarely remains an issue further on. Experience has shown that an increased amount of different sound sources can not only complicate the composer’s work but also make it harder for the listener to perceive and distinguish individual sound sources and spatial gestures.

The binaural filtering is more demanding in this regard, with a maximum of about 10-12 simultaneous sources. For efficiently deploying the available CPU power an arbitrary number of input channels can be excluded from the binaural processing.

4.3.2 Scalability

The system is not primarily designed to pursue a distributed processing approach. However, due to its network oriented communication and synchronization the performance limitations can be exceeded by deploying the software on separate computers. The outgoing audio signals which represent the real-time result of the spatial rendering merely have to be joined before going to the speakers. This can be achieved on the hardware side by means of an appropriate mixing interface or on the software side by applying networked audio streaming.

5. Trajectory Editor

The Trajectory Editor is a document-based application client which provides a very intuitive and gestural user interface for the spatial distribution of sound along time. It can be used either for the live recording or for the construction of automations as cubic Bézier splines with a set of graphical tools. It provides a very powerful remote access to the Spatialization Server and can be used in numerous scenarios (also see section 8).

5.1 Terminology

The underlying data structure and the corresponding terminology were mostly inherited from the original Zirkonium. Virtual sound sources are considered as IDs, which can be gathered in Groups. Both can be the target of spatial Events, which are arranged in a timeline and determine movements or modifications of a parameter called Span, which describes the extension of a sound source along the surface of the underlying speaker setup.

Keeping this data structure simplifies the import of original Zirkonium pieces, which can be extended with the new set of tools or partially re-used in a new context.

5.2 The Loudspeaker Canvas

Figure 5 shows the user interface of a Trajectory Editor document. On the right side the previously mentioned data entities and their relationships are configured in text-based table entries, while on the left side the user mainly operates on an interactive canvas while creating a spatial score. This canvas is a two-dimensional plane with a desired loudspeaker configuration as its background. It shows the speakers from above, which enables the user to access every point of the semispherical surface of the dome. This view can be altered between a spherical and a planar representation: the first one implies the spherical distortion of a camera positioned directly in front of the uppermost speaker, while the second one shows the speakers as if they were flattened into one plane, which keeps more realistic proportions of the distances between the speakers.
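The two canvas views described above can be read as two different top-down projections of the dome. A minimal sketch of this idea, assuming an orthographic mapping for the spherical view and an azimuthal-equidistant mapping for the planar view — an illustration of the general approach, not the Editor’s actual projection code:

```python
import math

def spherical_view(azimuth_deg: float, elevation_deg: float) -> tuple:
    """Orthographic top-down view: like a camera above the uppermost speaker.
    Distances near the zenith are compressed (the 'spherical distortion')."""
    r = math.cos(math.radians(elevation_deg))   # radius shrinks towards the top
    a = math.radians(azimuth_deg)
    return (r * math.sin(a), r * math.cos(a))

def planar_view(azimuth_deg: float, elevation_deg: float) -> tuple:
    """Azimuthal-equidistant view: the dome surface 'unrolled' onto a plane.
    Arc distance from the zenith is preserved, so speaker spacing looks
    more realistic."""
    r = (90.0 - elevation_deg) / 90.0           # proportional to arc length
    a = math.radians(azimuth_deg)
    return (r * math.sin(a), r * math.cos(a))

# A speaker at the horizon (elevation 0°) lands on the unit circle in both
# views; halfway up (45°) it sits at radius cos(45°) ≈ 0.71 in the spherical
# view but at 0.5 in the planar view, i.e. the planar view spreads the upper
# speakers further apart, as described above.
```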
Figure 5. User Interface of a Trajectory Editor document

5.3 Global vs. Local Event Context

The canvas view is subdivided into three different tabs, which also set the lower timing representation into an individual context. The Initial tab is a more global overview of the entire piece: each ID and group can be monitored and highlighted while the individual splines of the currently processing events are hidden. It is also used to define the initial position for each ID or to freely play with a certain target. If this tab is active the lower timing view is scaled to the length of the entire piece and can be used to observe the progress, which is represented by a running cursor, or to step into a certain point in time.

The Trajectory as well as the Span tab are both associated with a selected Event. They display the corresponding splines and the target as shown in Figure 6. The lower timing view switches to a local timing mode: it displays the progress and acceleration within the current event along the time axis.

Figure 6. A Trajectory Editor document in the Event-based context

5.4 Create and modify Automations

For each time-based automation the Trajectory Editor uses cubic Bézier splines, which can easily be created and modified with the toolbar on the lower right side of the document. These tools are inspired by contemporary software solutions for scalable vector graphics like Adobe Illustrator or Inkscape. The resulting graphical lines can freely be modified by accessing the individual spline elements or globally transformed by operations like shifting, x/y-scaling and rotation of all elements. By default each spline has a linear timing, which means a constant speed determined by the duration of the event and its own length. This timing can be modified in the timing view right below the canvas. The x-axis of this view represents the duration of the event while the y-axis represents the amount of the spline which has been passed. Each automation can also be looped by an arbitrary percentage referring to the overall path. A further loop mode specifies the direction of the loop and the way its acceleration curve is considered. Hereby the acceleration can be applied for each loop segment or once across the entire looped event. Loops are a very easy and effective way of giving the piece a subtle vividness without the necessity of programming many individual movements. Besides constructing or painting trajectories it is also possible to record incoming live data into an underlying spline representation. The resulting trajectory, including its acceleration curve, can also be modified by the graphical tools described above. Splines can be ex- and imported as Templates via a given XML interface.

5.5 External Synchronization

The Trajectory Editor has its own internal master timing, which is controlled by the transport section in the lower right corner of the User Interface. This timing however is only relevant if the Trajectory Editor is used to spatialize the playback of audio source files. If it is used for live audio streams it is possible to synchronize its timing to MIDI Time Code (MTC) and OSC-based timing messages.

6. Plugins and Controllers

A very convenient and more and more established way of recording spatial movements in a direct relation to sounds and an underlying waveform representation is by the use of a DAW and the respective plugin interface (Penha, 2013; Normandeau, 2013; Kronlachner, 2013; Melchior, 2011). This way the variation of spatial coordinates, either in spherical or rectangular representation, can easily be recorded and played as an internal automation curve. Hereby each dimension of the coordinate (X, Y and Z vs. Azimuth, Elevation and Radius) gets its own parameter in the plugin and therefore its own curve corresponding to the desired track’s timeline, while the resolution of these curves and their type of interpolation strongly depend on the individual DAW. The advantage of this approach is that the spatial movements can be arranged just like the equivalent sound snippets and become part of the typical workflow in a DAW-based composition. A disadvantage lies in the fact that most DAWs and their respective plugin SDKs don’t provide a time-independent access to the automation data, which can only be modified with the internal tools curve- and parameter-wise. Another disadvantage refers to the archiving of the piece – each piece and its possibilities of further modification and access to the spatial score is tied to the specific DAW it was created in. Using the Trajectory Editor with external MTC synchronization provides a workaround for keeping the typical DAW workflow for the soundwise composition while having a fully DAW-independent dataset for the spatialization. To remain flexible, the open source Audio Unit plugin introduced in Normandeau (Normandeau, 2013) is included in the Zirkonium MK2 bundle or can be downloaded under the given url. Other contents are example patches for Max, SuperCollider, the Lemur and TouchOSC as well as an Audio Unit / VST Plugin that hands the DAW’s current timestamp to the Server if it doesn’t support MTC sending.

7. Excursion: optimizing sounds for spatial perception

A critical point in the field of spatial audio can be found in the complex interaction between sound qualities and spatial appearance in terms of ambiguity, localisation and distance. Our perceptive system is used to working with indifferent cues in the way that it creates an ambiguous impression. We are used to accepting these ambiguities in real life, but composers seem, when working with spatial audio, to be obsessed with the fact that every sound should have a clear position in space. Especially when using low frequency signals it is important to learn that the spatialization technique in use cannot deliver a more distinguished outcome than we are able to perceive. Extending this observation it becomes clear that other qualities of sounds might influence their spatial perception. A sound without a percussive timbre envelope will always spread in space, as opposed to sounds containing one or more high frequency impulses. The latter will clearly be perceived as point sources. The impression of distance is influenced by the amount and loudness of higher partials. Sounds with loud high frequency partials seem to be closer to the listener than sounds with fewer high frequency partials. Listeners and composers seem to accept these facts when sounds with a real reference are used. But using artificial sound information, these perceptive strategies of the ear and brain interact unintentionally with the different timbres and create an interpretation of a spatial scenery sometimes contradicting the initial intention of the composer. As a conclusion, composers should know whether to blame the spatial reproduction system for those artefacts or the regularities of how our perceptual system localizes sounds differently

setup. As well as the IDs for the tape part, the live inputs had representations in the Trajectory Editor and were included in the previously programmed spatial score. Hedmann and Sidén did the same thing with microphone inputs which were capturing sound from a piano and several metal objects. For an easier handling of the different sessions during the concert the Pro Tools tracks were consolidated and deployed as multichannel soundfiles which were played by the Spatialization Server. The pieces consisted of 12 up to 44 individual sound sources. Norelius used MK2 in her residency in a different way. A Max patch is generating live audio which is handed to the Spatialization Server by Soundflower. A part of the spatialization is created from pre-recorded data of movement sensors while the other part is created in the Trajectory Editor. The timing is also generated in the patch and sent to the Spatialization Server by OSC, which synchronizes the Trajectory Editor with the progress of the patch. Norelius’ piece will be presented at a concert in June 2014.

9. Conclusions

The MK2 system emerged from a rather conceptual state towards a set of tools that are actually used to play concerts on a regular basis in the ZKM Kubus. Since the software is free to use, many composers take it along after their residency at the IMA and apply it in their private studios or institutions. Upcoming developments will focus on improving the current spatial rendering techniques and also incorporate an external Wave Field Synthesis rendering module. Thus the composers will be enabled to combine the rather surface-oriented VBAP and Ambisonics spatialization techniques with the aesthetics of sounds that are physically departing from the surface of the dome. One major challenge will be the revision of the compositional tools in regard to a distance encoding that also conforms with the actual acoustical domain. In context of the ”European Art-Science-Technology Network” project new strategies to generate spatial
based on their physical characteristics. gestures with the help of physical modelling systems like
”Genesis” or ”Mimesis” (Cadoz, 2003) will be implemented
8. Examples in the near future. Other efforts will lie in building synergies
with other non-commercial parties in the field of spatial
The field of application for MK2 reaches from tape music audio. By establishing compatibilities to environments with
to interactive live performances. Within this context both a different focus like Matthias Kronlachner’s Ambisonics
the audio as well as the spatialization can either be Plugin Suite (Kronlachner, 2013) and its non- realtime
prepared files or generatively created data streams. This capabilities a different workflow and set of aes- thetics can
leads to a variety of combinations which are benefitting be proposed to the guest composer without loosing the
from the flexibility of the system. advantages of the flexibility in the Zirkonium MK2. This will
In context of the 50 year anniversary of the Elektron- also be encouraged by taking into account established
musikstudion in Stockholm five Swedish musicians and interchange formats for the description of spatial audio
composers Helene Hedsund,Lars Åkerlund, Lise-Lotte scenes like the SpatDif described in (Miyama, 2013).
Norelius, Jens Hedmann and Eva Sidén played a concert
in the ZKM Kubus in March 2014 which was also part of a Notes
live broadcast for a Swedish radio station. The pieces [1] “A streaming audio system from the Sonic Arts R&D Group
were prepared and played with the MK2 system, each in a at CaliT2 UCSD.” [Online]. Available:
different context. Hedsund pursued the pure tapemachine https://code.google.com/p/streaming-audio
approach and worked on a Pro Tools session sending the
au- dio streams to the Spatialization Server with [2] “Spatium, Tools for Sound Spatialization.” [Online].
Soundflower. The entire spatialization was done in the Available: http://spatium.ruipenha.pt
Trajectory Editor which was synchronized to Pro Tools by [3] “The CIPIC HRTF database.” [Online]. Available:
MTC. Åkerlund worked in a similar way with the difference http://interface.cipic.ucdavis.edu/sound/hrtf.html
that he also included a live electronics performance in the
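The distance cue discussed in section 7 (louder high partials read as "closer") can be sketched numerically. The fragment below is purely illustrative: the rolloff exponents and the split between "low" and "high" partials are assumptions chosen for the demonstration, not values used by Zirkonium MK2.

```python
def partial_amplitudes(n_partials, rolloff):
    """Amplitude of partial k falls off as 1 / k**rolloff."""
    return [1.0 / (k ** rolloff) for k in range(1, n_partials + 1)]

def high_partial_ratio(amps, split=8):
    """Share of the signal energy carried by partials above `split`,
    a rough proxy for the 'closeness' cue discussed in the text."""
    energy = [a * a for a in amps]
    return sum(energy[split:]) / sum(energy)

# A 'close' sound: weak spectral rolloff, strong high partials.
close = partial_amplitudes(32, rolloff=0.5)
# A 'distant' sound: steep rolloff, high partials almost absent.
distant = partial_amplitudes(32, rolloff=2.0)

# The brighter spectrum keeps a much larger share of its energy in
# the high partials and will therefore tend to be heard as nearer.
assert high_partial_ratio(close) > high_partial_ratio(distant)
```

In practice such a measure would only be a starting point; as the text stresses, the perceived position ultimately depends on the interaction of the whole timbre with the reproduction system.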
eaw2015 a tecnologia ao serviço da criação musical 136
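Several of the workflows above rely on plain OSC messages sent over UDP, e.g. the Norelius patch driving the Spatialization Server. As a sketch of what such a control message looks like on the wire, the fragment below hand-encodes a minimal OSC packet in pure Python. The address `/pan/az`, the source ID and the azimuth/zenith values are illustrative placeholders rather than the documented Zirkonium MK2 namespace; a real setup would normally use an OSC client library or a Max/SuperCollider object instead.

```python
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message with int32/float32 arguments (big-endian)."""
    typetags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)
        else:
            raise TypeError("only int and float arguments are supported")
    return osc_string(address) + osc_string(typetags) + payload

# Hypothetical packet: set azimuth/zenith of sound source 3.
packet = osc_message("/pan/az", 3, 0.25, 0.1)

# Delivery is a single UDP datagram, e.g.:
# sock.sendto(packet, (server_host, server_port))  # placeholders
```

The padding rules matter: every string (including one whose length is already a multiple of four) gets at least one null terminator, so the whole packet length stays a multiple of four bytes.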
Bibliography

B. Vickers, Ed., Francis Bacon: The Major Works. Oxford University Press, 2008.

K. Stockhausen, "Musik im Raum," in Texte zur elektronischen und instrumentalen Musik, Bd. 1: Aufsätze 1952–1962 zur Theorie des Komponierens, D. Schnebel, Ed. Köln: DuMont Buchverlag, 1988, pp. 152–175.

C. Ramakrishnan, J. Goßmann, L. Brümmer, and B. Sturm, "The ZKM Klangdom," Proceedings of the 2006 Conference on New Interfaces for Musical Expression, pp. 140–143, 2006.

L. Brümmer, G. Dipper, D. Wagner, H. Stenschke, and J. A. Otto, "New Developments for Spatial Music in the Context of the ZKM Klangdom: A Review of Technologies and Recent Productions," Divergence Press, vol. 3, 2014.

R. Penha and J. Oliveira, "Spatium, Tools for Sound Spatialization," SMC, 2013.

N. Peters, T. Lossius, J. Schacher, P. Baltazar, C. Bascou, and T. Place, "A Stratified Approach for Sound Spatialization," Proceedings of the 6th Sound and Music Computing Conference, pp. 219–224, July 2009.

R. Normandeau, "ZirkOSC. Audio Unit plug-in to control the Zirkonium," 2013. [Online]. Available: http://code.google.com/p/zirkosc/

"A streaming audio system from the Sonic Arts R&D Group at CaliT2 UCSD." [Online]. Available: https://code.google.com/p/streaming-audio

G. S. Kendall and A. Cabrera, "Why things don't work: what you need to know about spatial audio," ICMC, 2011.

"Spatium, Tools for Sound Spatialization." [Online]. Available: http://spatium.ruipenha.pt

V. Pulkki, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning," Journal of the Audio Engineering Society, vol. 45, pp. 456–466, 1997.

C. Ramakrishnan, "Zirkonium: Noninvasive Software for Sound Spatialisation," Organised Sound, vol. 14, pp. 269–276, 2009.

M. A. Baalman, "Spatial Composition Techniques and Sound Spatialization Technologies," Organised Sound, vol. 15, no. 3, pp. 209–218, 2010.

"The CIPIC HRTF database." [Online]. Available: http://interface.cipic.ucdavis.edu/sound/hrtf.html

J. H. Andersen, "FFT-based binaural panner," 2014. [Online]. Available: http://cycling74.com/toolbox/fft-based-binaural-panner/

M. Kronlachner, "Ambisonics plug-in suite for production and performance usage," Linux Audio Conference, pp. 49–54, 2013.

F. Melchior, U. Michaelis, and R. Steffens, "Spatial Mastering – a new concept for spatial sound design in object-based audio scenes," ICMC, 2011.

C. Cadoz, A. Luciani, J.-L. Florens, and N. Castagné, "ACROE – ICA: artistic creation and computer interactive multisensory simulation force feedback gesture transducers," NIME '03 Proceedings of the 2003 Conference on New Interfaces for Musical Expression, pp. 235–246, 2003.

C. Miyama, J. C. Schacher, and N. Peters, "Spatdif Library – Implementing the Spatial Sound Descriptor Interchange Format," Journal of the Japanese Society for Sonic Arts, vol. 5, no. 3, pp. 1–5, 2013.
V. Interpretação/performance |
Interpretation/performance
5.1 Desiring Machines – A Decentred Approach to Interactive Composition
Daniel Thorpe
University of Adelaide, Australia
Abstract

As a concept in the 21st century, "Networking" has broad technological and social implications. My research investigates this cultural force by exploring the Network paradigm as a framework through which we can reimagine the relationships between performers and the composer as the basis for composition. It explores how a political ontology of networks shifts notions of control and production through mediation with noise, with particular focus on a critical queer utopian reading of the politics of technology. This is explored further through the case study of a large ensemble work and the software framework developed in its creation: Mercury Vapour

human control less immediately simple are, somewhat ironically, reflective of the level of automation and reactivity inherently valuable in network structures. How to move towards a compositional practice where these values are core, a practice in which performers take part in crafting performances rather than simply executing a static rendering of one possible outcome of the composer's intentions? This is perhaps the driving question that has developed my interactive compositional practice. This paper and Mercury Vapour Seismology explore a compositional practice that uses network design as an approach to decentring interactions in performance, drawing primarily on the work of Galloway & Thacker,