
Human-Computer Interaction Models in Musical Composition

by Alexis Perepelycia alexisperepelycia@gmail.com Student Number: 221290

Under direction of : Prof. Horacio Vaggione

Master 2 Research Musicology, Creation, Music and Society - Option : Music Musical Informatics, Composition and Research

Université Paris 8 Vincennes-Saint-Denis

Academic Year: 2005/2006

To renew art or to revolutionize science means to create new content and concepts, and not just new forms.

Werner Heisenberg, Physik und Erkenntnis, Gesammelte Werke, vol. 3, Munich, 1985.

Index

Section I
Human-Computer Interaction Models in Musical Composition

Forewords
1_Introduction
1.1_The (Intuitive) Need of Interaction?
1.2_The Principle of Human-Computer Interaction (HCI)
1.3_Aims of Human-Computer Interaction (HCI)
1.4_HCI Design Methodologies
2_Aesthetic Considerations of Interactive Systems (I.S.)
3_Brief History of the Interactive Musical Performance System (IMPS)
3.1_Non-standard Models of IMPSs (a few examples)
4_Music as Interactive Art (or the real intention to interact)
4.1_Beyond Interaction = Automation
4.2_Multimodal Interactive Systems (Towards Integrated Interactivity)
4.3_Interaction in the Field of New Media Art
4.4_Do We Need Interaction in Computer Music Performance?
5_Computer Music
5.1_Performativity of Computer-based Interactive Systems
5.2_Performance of Computer-based Music
5.3_Computers as Instruments
5.4_The Computer as a Meta-Instrument
5.5_Laptop Music
5.6_Other Mobile Devices (that enhance interactivity)
6_Types of Interaction
6.1_Palpable (Haptic) vs. Touch-less Interaction
6.2_Gestural Interfaces
6.2.1_New Devices and Hybrid Ones
6.2.2_Hyper, Meta, Cyber
6.2.3_The Meaning of the Musical Gesture
6.3_Non-Gestural Interaction
6.4_Single- and Multi-user Interactive Systems
6.5_... and It Goes Through the Internet (network and web-based interaction)
6.6_How Interactive Is a System?
7_Programming & Coding (Everything comes from the source!)
7.1_Object-Oriented Programming
7.2_Live Coding Practice
8_The (Concert) Space as an Interactive Element
8.1_Some Considerations on Sound Diffusion
8.2_The Importance of the Physical Space
8.2.1_Musical Implications of the Sound Space

Section II
Description of the creative process of the piece un puntotodos los puntos

1_Overview
2_Aims
3_General Description
4_Overall Musical Considerations
5_Programming
5.1_Conception of the Program
5.2_Description of the Different Sections
5.3_Real-Time DSP Features
5.4_Creation of Sound Objects (S.O.)
5.5_Sound Spatialization (Relationships between the inner and the outer space)
5.5.1_FFT Analysis for Sound Spatialization (Section IV)
6_GUI Design
7_Performance of the Program and Future Work

Section III
Conclusions
1_Conclusions

Section IV
Acknowledgements
1_Acknowledgements

Section V
Resources
1_References
2_Bibliography
3_Websites


NOTE: All textual quotations from papers, websites, magazines and books present in this text that were originally in languages other than English (i.e., Spanish, Italian, French, Portuguese) were translated by the author. To the best of my understanding, I have been faithful to the original texts. However, if any mistake is found, I apologize to the author(s) and to the readers, and I invite readers to contact me so that any translation mistake can be corrected.

Section I

Human-Computer Interaction Models in Musical Composition

Forewords

With the rise of live electronic music, and of electro-acoustic music including acoustic instruments in live performance, the correlation between gesture and sonic representation has become vague.

The paradigm of achieving almost any imaginable sound by hitting a key on a laptop keyboard, clicking a mouse button or turning the knobs and sliders of a MIDI controller seems to have pushed performers' expressiveness aside. They seem to spend most of their time trying to remember which actions they have assigned to their computers and what each button, knob and fader of their MIDI controller does, rather than focusing on the actual performance and the music.

Therefore, integration between the instrument and the performer has become an issue that needs to be solved if we want computers and electronic devices to enhance the gestures made by performers.

A proper translation of the performer's gestures into adequate musical representations would enrich the actual performance by providing the performer with reliable musical feedback on his/her gestures. […] (Perepelycia 2005b)

1_Introduction:
[…] which is produced by cultures is the result of interactions between living systems, as well as between living systems and their specific environment […]
1

How many times have you heard the question: does technology drive music, or is it the other way around?

It is impossible to consider progress in the field of music without taking into account the scientific and technological improvements that have driven musical advances, regardless of the year, decade or century.

We would simply like to point to the considerable widening of the palette of musical possibilities (parameters) implemented (or considered) by composers and performers since the public distribution and use of electricity (following the efforts of Thomas Alva Edison, builder of the first commercial electrical energy distribution network, who made electricity available to us all).

There are, however, two episodes born of this scientific and technological empowerment that we would like to mention, because they changed forever the way music is produced and perceived. The first is the computer revolution, which also affected musicians; two clear examples are the use of computers for music composition by Lejaren Hiller in 1956 and computer sound synthesis by Max Mathews with his own program, Music I, in 1957. The second is the birth of the digital era and its undeniable social implications, which directly affected music: since the 80s, musicians have been dealing with computer programming languages and code, giving birth to so-called computer music.

1 Humberto Maturana, Kognition, in Der Diskurs des Radikalen Konstruktivismus, Siegfried J. Schmidt (ed.), Frankfurt/Main, 1987, pp. 89-118.

These two events have without doubt changed the musical phenomenon forever, in all its components, from production and performance to ubiquity and perception. However, they have different implications for our research. While the computer revolution of the 50s can be argued to have changed the way composers approach the search for new sounds, employing new technologies for their production, the digital era belongs more to the social impact of technology, since it made the new means available not only to researchers attached to institutions but also to anyone who could afford a personal computer (PC).

Computers enabled composers to control, through different techniques (programming languages or compositional environments), musical parameters (or their equivalents in a computer music language) originally belonging to physically palpable traditional instruments.

Brazilian researcher Fernando Iazzetta summarizes the above in the following statement: Technique and technology are two cultural aspects that have been deeply involved with music, not only in relation to its production, but also in relation to the development of its theory and to the establishment of its cultural role. Since the beginning of the twentieth century the relation between music and technology became more intense due to a series of reasons, among them, the increasing knowledge about sound physics and sound cognition; the access to low cost electricity; and the use of electronic and digital technology to artificially generate and manipulate sounds. (Iazzetta 2000)

In addition, the musical parameters of a computer-based system used for musical performance can be programmed to establish different relations or interactions (understood as action and reaction between performers/musicians and the computer), mainly when performing in a concert situation. Such a system would be understood as an Interactive Music System.


Italian composer-researcher Agostino Di Scipio formulated that typical interactive music systems can be viewed as dedicated computational tools capable of reacting in some way upon changes they detect in their external conditions, namely in the initial input and the run-time control data. (Di Scipio 2003) Moreover, the interaction should concern not only those two elements but also the performer, the computer and the physical space where the performance is being held, thus making the system place-specific. We found similarities between our concept and a statement by American philosopher Jerrold Levinson, who highlighted that the concrete dependency on the performance medium is a reminder that music itself is impermanent: all music, and particularly computer music, only exists in the moment. (Levinson 1997)
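Di Scipio's formulation can be illustrated with a minimal sketch. The class and names below are purely hypothetical, not taken from any real IMPS: the point is only that the system extracts a feature from its external conditions (here, the input level) and adapts its own run-time state in response, so that its behaviour depends on what it "hears".

```python
# Minimal sketch of an interactive music system in Di Scipio's sense:
# a process that reacts to changes it detects in its external conditions.
# All names here are illustrative, not from any real IMPS.

class ReactiveSystem:
    def __init__(self, initial_gain=0.5):
        self.gain = initial_gain          # run-time state the system adapts

    def detect(self, input_block):
        """Extract a simple feature from the external conditions (RMS level)."""
        if not input_block:
            return 0.0
        return (sum(x * x for x in input_block) / len(input_block)) ** 0.5

    def react(self, input_block):
        """Adapt internal state to the detected change, then produce output."""
        level = self.detect(input_block)
        # Louder input -> lower processing gain (a simple compensating reaction)
        self.gain = max(0.1, 1.0 - level)
        return [x * self.gain for x in input_block]

system = ReactiveSystem()
system.react([0.1, -0.1, 0.1, -0.1])   # quiet input
gain_after_quiet = system.gain
system.react([0.9, -0.9, 0.9, -0.9])   # loud input
gain_after_loud = system.gain
# The system's reaction differs depending on the conditions it detects:
assert gain_after_loud < gain_after_quiet
```

A real IMPS would of course run this loop block-by-block on an audio stream; the sketch only shows the detect-and-react structure the definition describes.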

In the first part of our research we have covered areas related to interactive systems and to musical performance. Regarding programming for interactive systems in musical performance, we have been influenced by a definition by Ali Taylan Cemgil and Ben Kröse, who stated that computer programs that "listen" to the actions of a performer and generate responses in real time are referred to in computer music as Interactive Music Performance Systems (IMPS). (Taylan Cemgil and Kröse, 2003)

We have also considered some aesthetic issues regarding interactive systems, and analyzed different areas of computer music, live performance, computer programming languages and New Media Art tendencies.

In the second part we intend neither to cover existing Interactive Music Performance Systems (IMPSs) nor to present a novel system, but to propose a different (and, for the writer, more interesting) way of conceiving interaction between musicians and computer. For this purpose we have created a system that analyzes (and understands) the behavior of the musical information (provided by musicians) in real time and, based on the information retrieved, automatically diffuses the sound signal through a multi-speaker system in order to translate the musical gesture (provided by the instrumentalists). That translation is intended to fit the concert space properly (physically and acoustically), diffusing the sound signal (spatialization) so as to achieve the best acoustical balance between the specific incoming signal and the resulting outgoing sound.
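The underlying idea of analysis-driven spatialization can be sketched as follows. This is not the actual program described in Section II: the feature choice (RMS level), the equal-power-style weighting and all names are assumptions of this illustration. An analysis stage reduces the incoming signal to a single feature, and that feature steers the distribution of the signal over a ring of speakers.

```python
import math

# Illustrative sketch of analysis-driven spatialization: an incoming block is
# reduced to one feature (here its RMS level) and that feature steers the
# distribution of the signal over a ring of speakers. This is NOT the program
# described in Section II; the feature choice and panning law are assumptions.

def rms(block):
    return math.sqrt(sum(x * x for x in block) / len(block))

def speaker_gains(feature, n_speakers=4):
    """Map a feature in [0, 1] to normalized gains over a ring of speakers.

    The feature is read as an angle around the speaker ring, so a changing
    input sweeps the sound through the physical space automatically.
    """
    angle = feature * 2.0 * math.pi
    gains = []
    for k in range(n_speakers):
        speaker_angle = k * 2.0 * math.pi / n_speakers
        # Cosine-shaped window around each speaker position (equal-power-like)
        g = max(0.0, math.cos((angle - speaker_angle) / 2.0))
        gains.append(g)
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]

# One block of a 440 Hz sine at 44.1 kHz drives the speaker distribution:
block = [0.5 * math.sin(2 * math.pi * 440 * t / 44100) for t in range(64)]
gains = speaker_gains(rms(block))
assert len(gains) == 4
assert abs(sum(g * g for g in gains) - 1.0) < 1e-9   # constant total power
```

In a real-time setting this computation would run per analysis frame, so the diffusion follows the evolution of the incoming musical gesture rather than a pre-composed trajectory.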

By these means, we refer to our system of interaction as an integration circle (cycle), since we not only contemplate the relationship between performer and computer, but are also concerned with the specific concert space, which directly affects the musical result.

In order to carry out this research we approached it from different viewpoints within the framework of what we define as the Music Cycle. This gave us a wider scope, providing the possibility of cross-fading the related disciplines involved in each stage of the cycle and aiming at a holistic result.

Graphic 1 shows the way each of the three elements of our cycle relates to the others.

Graphic 1, The Music Cycle

However, there is one subject that, for our project in particular, binds those three disciplines together. This fourth element is research. By research we mean that every single stage of the circle can be a research subject, one that will no doubt relate to another element of the cycle. Furthermore, research sets up the framework that brings the other elements together onto one explorative path.

Graphic 2, The Fourth Viewpoint as Framework for the Music Cycle

These four different axes place us in four very different situations, in which we usually find ourselves, and from which the results obtained would be different if not opposite. These four viewpoints are the researcher's, the composer's, the performer's and the listener's. For instance, as researchers, we feel that better tools lead to better music, but also, and more importantly, that new tools allow new music. As composers, we may find the audience enthusiastic or apathetic, but its presence provides real-time feedback with which to test our results. As performers, music provides us with the ability to speak a universal language. The last element is to place ourselves as audience members: when attending a performance nothing matters but the moment of being in the musical dimension for as long as the music lasts, experiencing the musical phenomenon with the whole body. Each of these facets within ourselves is connected with the others. However, each focus is an attempt to reach a greater understanding of the musical experience through a different viewpoint.

1.1_The (Intuitive) Need of Interaction


[…] reality is an interactive construction in which the observer and what is observed are two interdependent sub-aspects.
1

We feel that implementing a system that allows performers to interact with it in a logically consistent as well as intuitive way and, most importantly, that allows us to modify it at will in order to achieve more control during the performance, would be a partial solution to the performativity issues we found. […] We should implement a system that enhances gestural live performance […] not just the implementation of sensor and actuator technology, but also a link between the real (physical) world and the virtual (computer) environment. […] (Perepelycia, 2005a)

Human beings always, if unconsciously, expect feedback from the actions they produce in daily life. Every action we make is not only supposed but also expected to have a direct effect on the object we are affecting, which in turn is supposed to respond to that stimulus by generating a reaction, which will in turn call for another reaction, and so on.

As artists, we have similar expectations within our musical environment. For example, when we perform live we have the natural instinct of expecting a reaction to a certain musical phenomenon we have performed, namely an action. Moreover, in a concert situation where we are part of a group performing together, the need for interaction with the other performers is present from the very first sound (note). And again, we will be waiting not just for the audience's reaction but for our fellow musicians' reactions.

We would like to point out that even before the introduction of computers into music, musicians and performers were interested in controlling electric instruments in order not only to achieve the amount of control they had over acoustic instruments, but also to experiment with the unseen possibilities of these technologies, always seeking to expand the possibilities, and thus the limits, of the musical language. However, we feel there is still a gap in both areas which needs to be filled: the lack of feedback between performers and computers.

1 Heinz von Foerster, On Self-Organizing Systems and their Environment, 1960. Cited in: Claudia Giannetti, Aesthetics and Communicative Context.

1.2_The Principle of Human-Computer Interaction (HCI)


[…] from studying human beings as brains, the focus moved to the study of human beings as subjects having a body interacting with the environment. […]
1

Human-Computer Interaction (HCI) can be defined as the study of the different types of communication (interrelation) between people (users; in music, performers) and computers.

It directly links computer science with many other fields of study and research, making it an interdisciplinary area that aims at the synergy of all of these fields.

Interaction between users and computers is based (from the user's point of view) on the implementation of UIs (user interfaces), GUIs (graphical user interfaces, in the case of software), or simply any type of interface, which might include both hardware (different computer peripherals, pervasive computing systems, etc.) and software.

1 Antonio Camurri et al., Expressive gesture and multimodal interactive systems, InfoMus Lab, Laboratorio di Informatica Musicale, DIST, University of Genova, Viale Causa 13, I-16145 Genova, Italy, 2004.

15

1.3_Aims of Human-Computer Interaction (HCI)

The basic goal of Human-Computer Interaction (HCI) is to improve the interaction between users and computers by making computers more user-friendly and responsive to users' needs.

A relevant area of research in HCI concerns the different methodologies and processes for designing interfaces, the processes by which they are implemented, and the related theories of interaction. New interaction techniques, with their descriptive and predictive models, are also main areas of research, as is the development of new, dedicated software for experimental (dedicated, specific) pieces of hardware that would cross the pragmatic and paradigmatic barriers of interaction and interactivity. The concept of HCI rose in popularity on the strength of the notion that human needs should be considered first and are more important than the machine's. Accordingly, in recent years the field of HCI has developed an even more pronounced focus on understanding human beings as actors within socio-technical systems. This increase in popularity raised a series of concerns about the way HCI should be employed and implemented, and also about the way such systems should be designed. Design methodologies in HCI aim to create user interfaces that are easy to understand, operate efficiently and, at the same time, are useful, meaning that they should bring us new solutions instead of just new ways of solving old problems.

A clear example of a still unclear area is software development. Most pieces of software said to be intuitive to use base their operation on a GUI (graphical user interface) with a WIMP (Windows, Icons, Menus, Pointing) design. Unfortunately, GUIs are often poorly designed, which makes the whole piece of software not really functional. In such cases the terms intuitive and natural become very vague. Therefore, we can state that creating and implementing such interfaces is definitely a context-dependent issue.

1.4_HCI Design methodologies


The endeavor to optimize the human-machine interaction process and the response times involved led to an enhancement of the visualization and sensorial perception of computer-processed information. 1

A number of diverse methodologies outlining techniques for human-computer interaction design have emerged since the rise of HCI in the 1980s, driven mainly by the huge growth in microprocessor power and the consequent popularization of personal computers.

Early methodologies employed in the design of HCI systems treated users' cognitive processes as predictable and quantifiable, and encouraged design practitioners to look to cognitive science results in areas such as memory and attention when designing user interfaces.

Models conceived nowadays, on the other hand, tend to apply the same recursive, loop-like approach from their very conception. Generally they focus on constant feedback and conversation between users, designers and engineers, and push for technical systems to be wrapped around the types of experiences users want to have, rather than wrapping the user experience around a completed system. A clear example of this is the project The Hands by Michel Waisvisz, which is still a work in progress (like most systems conceived under the above theory), even though Waisvisz has been performing with them for more than 20 years now.

1 Claudia Giannetti, Aesthetics and Communicative Context, http://www.medienkunstnetz.de/themes/aesthetics_of_the_digital/aesthetics_and_communicative %20Context/1/ , p. 8.

17

Another modern design philosophy is UCD (user-centred design). UCD is rooted in the idea that users must take centre stage in the design of any computer system. Users, designers and technical practitioners work together to articulate the wants, needs and limitations of the user, and to create a system that addresses these elements. Often, UCD projects are informed by ethnographic studies of the environments in which users will be interacting with the system. This philosophy of design is based on the needs of the user, leaving aside what are considered secondary issues in the design process, e.g., aesthetics.

Lastly, there is another current, dating from the 1990s, called CU (contextual usability). CU seeks to privilege neither users nor technology within a use or usage process. As such it links usability, ergonomics and user-experience design to ideas emerging from the social studies of science and technology, such as actor-networks and socio-technical constituencies, thereby seeking to locate motivations, instances and circumstances of use against social, cognitive and cultural influences.

2_Aesthetic Considerations of Interactive Systems (I.S.)


In most cases the aesthetic models are able to include all the arts.
1

Throughout the twentieth century there was a questioning of traditional artist/audience boundaries.

In the 1960s and 70s the interactive art movement flourished all over the globe, in art forms including visual art, theater, dance, music, poetry and architecture. For example, happenings created free-form installation/theater events in which the audience was often absorbed into participation in ongoing events.

1 Claudia Giannetti, Endo-Aesthetics, www.medienkunstnetz.de/themes/aesthetics_of_the_digital/endo_aesthetic, p. 1

18

In recent years interactive art has not been a major movement, although the advent of contemporary interactive technology is reviving interest in these traditions, perhaps because it has increased the repertoire of possible actions and thus the chances of fruitful randomness. In this sense Stephen Wilson pointed out that many contemporary high-tech artists are more focused on the design of systems for creation than on one particular outcome. (Wilson 1986)

The experience of interactive artists is useful to those outside of art because of their analysis of the relationship between culture and media, their sensitivity to the relationship between media and audience, and their attention to the aesthetics of interactivity.

However, we should bear in mind that I.S. do not attach themselves to a specific path, nor follow a certain aesthetic tendency or current. On the contrary, they can fit any aesthetic without modifying it. This is mainly because I.S. (at least computer-based ones) rest mostly on programming in computer languages. And I.S., like programs, have hierarchies and structures, but they cannot be framed within an aesthetic, at least not an artistic aesthetic from an analytical point of view, simply because such considerations play no part in their conception: aesthetic considerations are linked to art, while programming is linked to science.

19

3_Brief History of the Interactive Systems in Musical Performance


Media art, in its diverse forms ranging from audiovisual installations to interactive systems, from hypermedia to artificial reality, from the net to cyberspace, reinforces the idea of interdisciplinarity, which reaches much further than the aforementioned considerations about the relationship of art and technology.
1

Music is surely the oldest of the electronic arts: the term electric music can be traced as far back as the 19th century, by which time there were several electric instruments that generated tones by means of simple electromechanical circuits. Perhaps the best example is the electric organ built by American entrepreneur Thaddeus Cahill in 1897. It was the size of a train and was intended to transmit music over wires directly into people's homes.

In 1920, Russian inventor Leon Theremin demonstrated for the first time an instrument that could be played without any direct physical contact. The Theremin is a touch-less instrument conceived to be played gesturally by moving the hands near two antennae, one controlling the pitch and the other the volume. One of the most interesting concepts behind that particular instrument is the idea of gestural performance without a tactile interface.

Analogue (modular, voltage-controlled) synthesizers, which appeared in the mid-60s (Robert Moog, Don Buchla, Paolo Ketoff), provided another alternative to keyboard-based live performance. Musicians using those synthesizers began to interact with the instrument during performances (i.e., adjusting panel controls, using more unusual devices such as joysticks or ribbon controllers, etc.), extending the possibilities employed until then.

1 Claudia Giannetti, Aesthetic Paradigms of Media Art, www.medienkunstnetz.de/themes/aesthetics_of_the_digital/aesthetic_paradigms, p. 1

20

In addition, more complex interactive systems started being developed, since modular synthesizers were flexible enough not only to allow control of every parameter of the synthesizer (which gave composers and performers the possibility of detailed sound manipulation) but also to control several parameters at once (which made it possible to manipulate multiple elements in real time).

Musical examples that included these techniques are the pieces by American composer Morton Subotnick whose names begin with the word touch. These actions can (loosely) be compared to the interactive performances produced on today's IMPSs.

During the 80s, the fast development of microprocessors made the human-computer relationship more common and more fluent.

Computers then became available not only to big companies and academic studios but also to people in general. Personal computers (PCs) became powerful enough to process sound with really complex algorithms in real time, giving many programmers and musicians the possibility of exploring interactive music with just their computers, without having to be part of big institutions. (Winkler, 2001)

In our time, systems have become capable of highly complex algorithmic computation with real-time response, making them perfectly suitable for gesture detection and translation. Moreover, with the great number of hybrid instruments (meta-, hyper-, cyber-) being employed in musical performances, the distinction between interactive and traditional performance techniques can be difficult to establish in a hybrid system, since there are still no mapping standards in the gestural performance of computer music.

21

3.1_Non standard models of IMPSs (just a few examples)

There is a growing number of instruments not based on traditional models that it would be instructive to look at in detail. The performance group Sensorband provides a good illustration of some new approaches to interactive music performance. Its members are Atau Tanaka, Edwin van der Heide and Zbigniew Karkowski. Each of them performs on instruments with sensor-based interfaces, implementing various types of sensor technology to communicate with the computers that form the basis of the instruments.

Below I have included an excerpt from an interview with Sensorband conducted in 1998 by Dutch researcher-creator Bert Bongers:
Sensorband, as the name suggests, is an ensemble of musicians who use sensor-based gestural controllers to produce computer music. Gestural interfaces (ultrasound, infrared, and bioelectric sensors) become musical instruments. The trio consists of Edwin van der Heide, Zbigniew Karkowski, and Atau Tanaka, each a soloist on his instrument for over five years. Edwin plays the MIDI Conductor, a pair of machines worn on his hands. The MIDI Conductor uses ultrasound signals to measure his hands' relative distance, along with mercury tilt sensors to measure their rotational orientation. Zbigniew activates his instrument by moving his arms in the space around him. This motion cuts through invisible infrared beams mounted on a scaffolding structure. Atau plays the BioMuse, a system that tracks neural signals, translating electrical signals from the body into digital data. To quote the group's World Wide Web page (http://zeep.com/sensorband), the result is "a powerful musical force of intense percussive rhythms, deep pulsing drones, and wailing melodic fragments." As a developer of new electronic musical instruments, I have been involved in some of Sensorband's projects, and have seen the group perform a number of times, starting in 1994 at the Sonic Acts Festival in Paradiso, Amsterdam. At these concerts, the audience becomes involved in the compelling energy of the performance, the relationship between physical gesture and sound, and the musical communication between the three performers. A Sensorband concert is impressive in its display of instrumental virtuosity, and proves that the three musicians have been playing together for some time. Although they had met each other individually on several occasions (including the International Computer Music Conference in Cologne in 1988), the first time the three met as a group was in October 1993, at the Son Image festival in Berlin. While in Berlin, Edwin had the idea to form a trio. Sensorband's first performance was in December of 1993, at Voyages Virtuels, a virtual reality exhibit organized by Les Virtualistes in Paris.
1

Referring to Sensorband, Karkowski also said that they wanted to make a pure, primitive and direct connection with their audience by playing music with their bodies.

German digital artist-composer Rainer Linz mentioned that the impression of watching Sensorband is a strange one, since the performers' gestures are unlike any used to play a traditional instrument, yet they clearly create the music one hears. (Linz, 1997)

With some non-standard IMPSs, the issue of divergence between gesture and sonic result seems quite distant. Yet many interactive systems seem to pursue interaction for its own sake, putting aside the correlation between the gesture and the resulting sound. In recent years there has been general concern, and substantial work, in the field of mapping techniques for gestural translation within the framework of musical performance systems. (Mishra and Hahn 1995), (Rovan et al. 1997), (Camurri et al. 2000a, 2000b), (Wanderley et al. 2001), (Chadabe 2002), (Young and Lexer 2003), (Sedes et al. 2003). However, we feel there is still a lot to be done to come close to a standardization of mapping techniques. We mention standardization because it would lay the foundations for evolution in terms of appropriate mapping tools.
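The mapping problem can be made concrete with a small sketch, assuming hypothetical parameter names not taken from any of the cited systems. A central distinction in the mapping literature is between one-to-one mappings, where a single gesture dimension drives a single synthesis parameter, and one-to-many (divergent) mappings, where one gesture dimension fans out to several coupled parameters, closer to the way an acoustic instrument couples loudness, brightness and timbre.

```python
# Sketch of two gestural mapping strategies: one-to-one versus
# one-to-many ("divergent") mapping of a normalized gesture value.
# Parameter names are illustrative, not from any cited system.

def one_to_one(gesture):
    """A single gesture dimension controls a single synthesis parameter."""
    return {"cutoff_hz": 200.0 + gesture * 4800.0}

def one_to_many(gesture):
    """One gesture dimension fans out to several coupled parameters,
    the way an acoustic instrument couples loudness, brightness, etc."""
    return {
        "cutoff_hz": 200.0 + gesture * 4800.0,   # brighter as gesture grows
        "amplitude": 0.2 + gesture * 0.8,        # louder as gesture grows
        "vibrato_hz": 4.0 + gesture * 3.0,       # faster vibrato at extremes
    }

# The same gesture value (e.g. a normalized sensor reading of 0.5) produces
# a richer, more instrument-like response under the divergent mapping:
g = 0.5
simple = one_to_one(g)
rich = one_to_many(g)
assert simple["cutoff_hz"] == rich["cutoff_hz"] == 2600.0
assert len(rich) > len(simple)
```

Standardization in the sense argued above would amount to agreed conventions for such mapping layers, so that a gesture vocabulary developed for one instrument could be transferred to another.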

1 Bert Bongers, Computer Music Journal, 22:1, pp. 13-24, Spring 1998, Massachusetts Institute of Technology.


4_ Music = Interactive Art (what and where is the intention of interaction?)


In the performer's mind and ears lies the only source of unforeseen developments, of dynamical behavior.
1

During the last century the main focus of psychological research was on questions of learning and teaching: researchers tried to understand how humans learn, remember and use information. Although some of these traditions, such as behaviorism 2, have stressed traditional notions of teaching and learning such as drill-and-practice and pre-structured presentation, other traditions suggested the value of learner-centered or inquiry approaches. Several different theoretical traditions thus offer foundations for the importance of interactive media and interactive art.

Therefore, if we understand music as a human language produced by human expression, we are assuming that the human element is present (or, in the case of an acousmatic experience, that the human element was present at the time the recording or the sound processing was done).

This leads us to the conclusion that the interactive element (pressing a button, plucking a string, blowing a reed) is empirically present in every practical musical situation and always has been. Care should be taken in every case, since different degrees of interactivity are present in each musical situation.

1 Agostino Di Scipio, Sound is the Interface, Proceedings of the Colloquium on Musical Informatics, Firenze, 8-10 May 2003.
2 Behaviorism is an approach to psychology based on the proposition that behavior can be researched scientifically without recourse to inner mental states. It is a form of materialism, denying any independent significance for the mind. One of the assumptions of many behaviorists is that free will is illusory, and that all behavior is determined by a combination of forces comprising genetic factors and the environment, either through association or reinforcement. (URL 1)


To shed some light on this aspect I would like to refer to a paragraph by Rainer Linz from his writing Interactive Musical Performances. Linz claims that the term interactive, when applied to music performance, can be problematic. This is because, in a broad sense, music has always been an interactive art. We could describe any musical performance as a process of real-time control over a complex system (instrument), through gesture and based on feedback between performer and machine. Even with a more recent perspective, and with hindsight to advances in computer technology, the distinctions between interactive and traditional music practice can sometimes be difficult to define. […] (Linz 1997)

Regarding Interactive Musical Performance Systems (IMPSs), we can trace their origins to the beginning of the 20th century. However, we are concerned with the type of interactivity implemented (conceived) with the introduction of computers into music; consequently, we could state that after the inclusion of computers into music (after Max Mathews's incursions into computer music in the mid 1950s), interactive systems have been employed more and more in this field of art.

As we explained earlier, we feel that IMPSs were employed even more widely after the start of the digital era in the 1980s. In that sense, thanks to the development of programming languages, the level of interaction within a piece has greatly increased.

Interactive systems are conceived in a way similar to how a musical composition is made: the designer makes decisions during their creation. Those decisions are mainly, but not exclusively, related to the type of interaction the system will support or be based on, and they will affect the actions of the performer(s), who will be expected (asked) to act in a certain way in order to make the system react.

Historically, electronic and interactive art music have occurred almost contemporaneously, and both have made a major impact on today's instrument designs.


It is clear that interactive music performance is part of a musical tradition that has been present for centuries in both Western and Eastern cultures.

Regarding IMPSs, in a typical live situation the computer will usually detect a certain type of information (data) and will react to it according to the programming (decisions) made previously. The interaction is established by this two-way flux of data: the musician acts and the computer reacts; the computer's reaction then becomes an action to which the musician reacts, and so on.

Graphic 3: Basic HCI data flux (computer system with no learning capabilities)

Interaction is described in this graphic in the way most systems are designed today. Computers will perceive any action made by the performer and will act accordingly, but they will never act first. They always wait for our action in order to react, and only then does the loop-like cycle begin.

Argentinean composer-researcher Horacio Vaggione pointed out that action and perception lie at the heart of musical processes, as these musical processes are created by successive operations of concretization having as a tuning tool (as a principle of reality) an action/perception feedback loop. (Vaggione 2001)


In this statement, Vaggione seems to unify the idea of perception and cognition into perception, enclosing the whole meaning of sensitivity into one stage.

When considering the interrelations between the different phases of the cycle (perception-cognition-action), a still-blurred field is that of setting the rules on which those phases (elements/sections) base their activity and thus interact with each other when the system is in use. We believe this is also a mapping subject, one which responds to aesthetic considerations.

Agostino Di Scipio argued that interaction is not usually referred to the mechanisms implemented within the computer: the array of available generative and/or transformative methods usually consists of separate functions, i.e. they are conceived independent of one another, and function accordingly. He continued by stating that interaction (either between system and external conditions, or between any two processes within the computer) is rarely understood and implemented for what it is in real living systems (either human or not): a byproduct of implemented lower-level interdependencies among system components. (Di Scipio 2003)

Considering these statements by Di Scipio, we should urge the emergence of real interdependence between performers and systems (computers), which would involve AI for intelligent behaviors. Otherwise, we will always be responsible for throwing the first stone, always beginning the action ourselves, without the possibility of being surprised by the machine.

However, we should bear in mind that full autonomy obviates any possibility of interactivity between humans and computers, making the term HCI disappear; so we would like to stress that, regardless of the degree of interactivity present in a system, the human factor remains decisive during the performance.


To conclude, we would like to refer to a reflection by Italian digital artist Claudia Giannetti in her writing on Endo-Aesthetics. Giannetti explained that the interactive system is insofar always potential, and does not exist in actively autonomous form, since it is dependent on the action of the observer or environment, be this action visual, acoustic, tactile, gestural or motoric, be it energetic (as in the case of brainwaves), or physical (as in the case of respiration and movement). (URL 2)

4.1_Beyond Interaction = Automation?


Machines have the power and potential to make expressive music on their own.1

Considering a fully autonomous system would be, regarding the topic of this research, not just useless but contradictory. As we have already seen, autopoietic systems are 100% autonomous and accept no interactivity; any external intervention will make them collapse, since it would run counter to their autopoiesis.2

Chilean researcher Humberto Maturana stated that for any particular circumstance of distinction of a living system, conservation of living (conservation of autopoiesis and of adaptation) constitutes adequate action in those circumstances, and, hence, knowledge: living systems are cognitive systems, and to live is to know. (Maturana 1988)

Autonomous systems like those described by Maturana, or those including AI at their basis, might be able to produce the first action (reaction being understood as an action viewed from a different perspective). They will, however, close themselves off, becoming autopoietic and thus generating only states of autopoiesis.

1 Tristan Jehan, Creating Music by Listening, PhD Thesis, Massachusetts Institute of Technology, 2005.
2 Autopoiesis - from the Greek auto (self) and poiein (shaping) - means self-shaping.

In such cases Maturana thinks that the most important consequence of an autopoietic organization consists in the fact that everything occurring within the system is subjected to autopoiesis; otherwise the living system would collapse, because changes in the state of the organism and of the nervous system as well as of the medium act reciprocally, and so give rise to continuous autopoiesis. That means that living systems are determined by their structure (structure-specified), and that autopoiesis represents their constitutive attribute. The expansion of the cognitive processes (action and interaction) by the nervous system enables non-physical interactions between organisms in simple relationships and therefore communication. (Maturana 1996)

In this particular case, any attempt to modify such systems by interacting with them will fail. An intelligent system, by contrast, would provide the ideal environment in which to act freely or to wait for an unexpected action from the theoretical reactor, which in this case would be both actor and reactor, taking the same position as the performers.

The following graphic compares different aspects of the HCI systems of J. C. R. Licklider1 and Douglas Engelbart2. It was extracted from the paper Affording Virtuosity: HCI in the Lifeworld of Collaboration by Linda T. Kaastra and Brian Fisher:

1 J. C. R. Licklider, Man-Computer Symbiosis, IRE Transactions on Human Factors in Electronics, vol. HFE-1, pp. 4-11, 1960.
2 Douglas Engelbart, Augmenting Human Intellect: A Conceptual Framework, AFOSR-3233 Summary Report, 1962.


                          Engelbart                                Licklider

System Goal               Augment human abilities                  Supplement or replace human
                                                                   function with AI

Development               Evolution of human concepts              System takes over tasks,
                          supported by technology                  increasing its role as it
                                                                   becomes more intelligent

Integration with          Human analysis of human/human/           Humans freed of mundane tasks
Organizations             machine systems leads to                 are able to accomplish more
                          increasingly effective decision          creative tasks
                          making (Network improvement
                          community)

End state                 Unknown, to be defined by                Constrained by system
                          augmented humans                         capabilities

Graphic 4 (Table comparing the expectations of Engelbart and Licklider)

Brazilian researcher Eduardo Reck Miranda takes into consideration AI applications to music systems when he describes the Musical Brain. Reck Miranda said that from a number of plausible definitions for music, the one that frequently stands out in musicological research is the notion that music is an intellectual activity; that is, the ability to recognize patterns and imagine them modified by actions. We understand that this ability is the essence of the human mind: it requires sophisticated memory mechanisms, involving both conscious manipulation of concepts and subconscious access to millions of networked neurological bonds. In this case, it is assumed that emotional reactions to music arise from some sort of intellectual activity. (Reck Miranda 2000)

After considering Reck Miranda's thought, and a few others regarding the field of emotional and intellectual perception of the arts, we conceived a concatenated term. Artists are expected (or at least supposed) to make art. An artist creates, and this implies, in every case, making use of sensibility. Therefore, if artists make use only of their intellects, the artistic notion is set aside, and the work risks being considered not art but a group of artistic ideas put together. Artists should instead make use of what we define (contrary to what most psychologists think) as Intellectual Instinct. This concept, which seems a de facto contradiction, is based on the principle of natural instinct dealing, at the same time, with the rational side of the mind. In order for this principle to exist, artists should be capable of applying rationalism within their instinct (again, another contradiction). They should be able to see the paths taken by their creative impulses in order to understand and cohere them. By doing this, artists would have a certain management over their creation at the exact moment the artistic idea is evolving, though far before it is exteriorized. In this way they could correct, on the fly, parameters related to creativity: parameters that might have been decided during the theoretical process but that no longer fit the idealized result being produced. Intellectual Instinct is thus directly related to the artist's feeling(s) during the creative process. (Perepelycia 2004)

In an interview with journalist Amelia Barili in the early 1980s, Argentinean writer Jorge Luis Borges mentioned the idea of the intellectual instinct. He said that the intellectual instinct is the one that makes us search while knowing that we are never going to find the answer. (URL 3)

We like to think that perhaps music is this: searching for the meaning of ourselves, knowing that we will spend our whole lives searching and in the end will not find it anyway.

However, we keep trying!


4.2_Multimodal Interactive Systems (Towards an Integrated Interactivity)


The so caused strong interaction of different sensations makes our subjective sound impressions and assessments crucially dependent on such cross-modal influences. Human auditory impressions are essentially influenced by the multiplicity of additional sensory information.1

Italian researcher Antonio Camurri described Multimodal Interactive Systems as follows. Multimodal interactive systems employ information coming from several channels to build an application designed with a very special focus on the user, and in which interaction with the user is the main aspect and the main way through which the objectives of the application are reached. (Camurri et al. 2004)

Implementation of Multimodal Interactive Systems (MMISs) would be advantageous since they are conceived with elements from both the scientific field and the artistic and humanities fields. They are strongly rooted in the study of human-human interrelationships (scientific and technological research empowered by psychological, social and artistic models and theories) in order to establish a later relationship in HCI which would emulate the previous model.

After many years of research on cognitive aspects, we are now beginning to focus our research on emotional processes and social interaction.

Multimodal interactive systems are able to process high-level expressive information making data exchange more effective. In other words, once extracted the high-level information from the incoming user's gestures, the system should be able to produce a response containing information suitable with respect to the context and as much high-level as the user's inputs. In order to perform this task, the multimodal interactive system should be endowed with strategies (sometimes called mapping strategies) allowing the selection of a suitable response. Such strategies are very critical, since they are often responsible of the success (and of the failure) of a multimodal interactive system. (Camurri et al. 2004)

1 Joachim Scheuren et al., Practical Aspects of Product Sound Quality Design, Proceedings of Les Journées du Design Sonore, October 13-15 2004, Paris, p. 2.
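As a rough illustration of the kind of mapping strategy Camurri et al. describe, the following Python sketch reduces low-level gesture features to a high-level expressive label and then selects a correspondingly high-level response. The feature names, thresholds, labels and responses are all invented for illustration, not taken from any actual EyesWeb-style system.

```python
# Hypothetical sketch of a high-level mapping strategy: classify the
# expressive content of a gesture, then pick a response at the same
# "level" of description. All thresholds and labels are invented.

def classify_gesture(energy, smoothness):
    """Reduce low-level features (0..1) to a high-level expressive label."""
    if energy > 0.7:
        return "agitated"
    if smoothness > 0.7:
        return "fluid"
    return "neutral"

RESPONSES = {            # high-level label -> high-level system response
    "agitated": "dense, bright texture",
    "fluid": "slow spectral glide",
    "neutral": "sparse background",
}

def select_response(energy, smoothness):
    """The 'mapping strategy': choose a suitable response for the context."""
    return RESPONSES[classify_gesture(energy, smoothness)]

print(select_response(energy=0.9, smoothness=0.2))  # -> dense, bright texture
print(select_response(energy=0.3, smoothness=0.8))  # -> slow spectral glide
```

The critical design decision, as the quotation notes, lies in the classification boundaries and the response table: a poor choice at either stage makes the whole system feel unresponsive or arbitrary.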

Regarding music specifically (whether or not it relates to this research), we feel that the performance phenomenon is a multimodal one in terms of how it is perceived and how it is produced. We refer to the idea that one sensory modality always has the ability to modulate and/or influence another, making perception multisensorial. We would then describe the relationship between a performer and an instrument as a multimodal relationship.

Portuguese composer-researcher Pedro Rebelo wrote that the relationship between a performer and an instrument is defined as a multimodal participatory space, rather than one of control; it is defined as a sensory space that is navigated by constant reference, by constantly acting on feedback from an immediate environment. (Rebelo 2004)

4.3_Interaction in the field of New Media Art


The digital computer has penetrated different strands of creativity to the extent that it has fundamentally changed the way we are exposed to media objects such as music, film, visual art […]1

The origins of new media art can be traced to the moving photographic inventions of the late 19th century, such as the Zoetrope (1834), the Praxinoscope (1877) and Eadweard Muybridge's Zoopraxiscope (1879). In the mid-20th century, during the 1960s, a divergence from the history of cinema came with the video art experiments of Nam June Paik and the multimedia performances of Fluxus artists. More recently, the term new media has become closely associated with the term Digital Art, and has converged with the history and theory of computer-based practices.

1 Pedro Rebelo, Performing Space, Architecture, School of Arts, Culture and Environments, The University of Edinburgh, 2003.

New media art can be placed within the field of contemporary art practice incorporating media technologies, produced by a very diverse group of artists, scientists, poets, musicians and theorists since the art-and-technology movement of the 1960s, including video installation and, more recently, net-art. The subject of new media art begins with a pre-history that includes the long history of immersion, Wagner's concept of the 'total art work' (Gesamtkunstwerk), and Marcel Duchamp's dictum (considered by us also as a motto) that the viewer completes the work of art. This art current examines the concept of interactivity and its origins in the avant-garde traditions of the beginning of the twentieth century, as a reaction to the widening gap between the mass media and the art audience.

We can also trace the basis of new media in electronic and digital art, both starting points for understanding how new media blurs the hierarchies separating art forms and the conventional distinctions between artwork and viewer. New media is also concerned with the implications of the performative nature of digital art, the science-art crossover, collaborative creation, global audiences and the politics of virtual aesthetic experience in the age of the Internet.

Furthermore, media art, in its diverse forms ranging from audiovisual installations to interactive systems, from hypermedia to artificial reality, from the net to cyberspace, reinforces the idea of interdisciplinarity, which reaches much further than the aforementioned considerations about the relationship of art and technology. In the context of interdisciplinarity, the intermeshing of art, technologies and science refers to the process that brings about convergence, interference, appropriation, overlapping and interpenetration; a process successively leading to the generation of referential networks and reciprocal, non-hierarchic influences.


4.4_Do we need Interaction in Computer Music Performance?


It is not what you see and what you hear, it is what you want to see and hear.1

Before naming some trends in interaction in computer music performance, I would like to refer to some ontological considerations about music and traditional music performance by citing Michel Waisvisz's words:

[…] Music in a pure conceptual format is only understandable by the ones who know the concepts. Music that contains the physical expression of a performer is recognizable by a larger group through that expressive mediation of the performer. […] (URL 4)

The stages that make up current systems of interaction between human and computer could be described as follows:

i. Humans perform an Action through their Effectors (muscle action, speech, breath, etc.).
ii. This Action is perceived by the Computer's Senses (input peripherals such as keyboards, webcams, joysticks, sensor technology, etc.).
iii. The Computer analyzes the perceived information and, according to its acquired knowledge (programming, AI), decides which action to produce in response to the stimulus.
iv. The Computer reacts through its Actuators and sends Feedback (actually a new data signal) to the Human.
v. Humans perceive this Feedback through their Senses and translate it into information according to their acquired knowledge (perception-cognition).
vi. Humans then react to this data through their Effectors.

This system represents a loop-like data stream between Humans and Computers: a constant action-and-reaction flux of information (Graphic 3, Section 3).
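The six stages above can be sketched as a minimal Python loop. All names are hypothetical placeholders; a real system would read sensors and drive synthesis in real time, while here a fixed sequence of performer actions stands in for steps v-vi.

```python
# Minimal sketch of the action/reaction loop described in steps i-vi.
# All function and parameter names are hypothetical, not a real API.

def analyze(data, knowledge):
    """Step iii: map a perceived input to a response using acquired knowledge."""
    return knowledge.get(data, "silence")

def interaction_loop(actions, knowledge):
    """Run the loop for a fixed sequence of performer actions."""
    feedback_log = []
    for action in actions:        # step i: the performer acts
        perceived = action        # step ii: an input peripheral senses it
        reaction = analyze(perceived, knowledge)  # step iii
        feedback_log.append(reaction)             # step iv: actuator output
        # steps v-vi: the performer hears the feedback and chooses the
        # next action; here the sequence is fixed for simplicity.
    return feedback_log

knowledge = {"pluck": "granular burst", "blow": "filtered noise"}
print(interaction_loop(["pluck", "blow", "tap"], knowledge))
# -> ['granular burst', 'filtered noise', 'silence']
```

Note that, exactly as argued above, this system never acts first: with an empty `actions` sequence the loop produces nothing at all.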

1 Alexis Perepelycia, It is not what you see and what you hear, it is what you want to see and hear: Some considerations on musical perception after Stockhausen, Zappa, The Beatles and MTV (in progress).


Joel Chadabe observed that the behavior of electronic instruments can be understood as either deterministic or indeterministic. He said that deterministic instruments may offer more powerful controls, but performers will produce a gesture expecting a predictable effect. Indeterministic instruments, on the other hand, will call for an interactive role of improvising relative to an unpredictable output.

When Chadabe referred to interactive instruments he meant 'mutually influential'. The performer influences the instrument and the instrument influences the performer. The unique advantage of such interactive instruments is that they foster 'interactive creativity'. (Chadabe 2002)
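Chadabe's deterministic/indeterministic distinction can be illustrated with a toy Python sketch (entirely hypothetical, not Chadabe's own design): the deterministic instrument returns the same output for the same gesture, while the indeterministic one adds its own unpredictable contribution, to which the performer must respond.

```python
import random

def deterministic_instrument(gesture_value):
    """Same gesture always yields the same pitch: a predictable effect."""
    return 220.0 * (1 + gesture_value)  # the gesture scales a base pitch

def indeterministic_instrument(gesture_value, rng=random.Random(0)):
    """The gesture only biases the result; the instrument adds its own
    unpredictable contribution, inviting the performer to improvise."""
    return 220.0 * (1 + gesture_value) * rng.uniform(0.8, 1.25)

print(deterministic_instrument(0.5))    # -> 330.0, every time
print(indeterministic_instrument(0.5))  # varies around 330 Hz
```

In Chadabe's terms, the second function is "mutually influential" in embryo: its output feeds back into the performer's next decision rather than merely confirming it.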

It seems to us that Chadabe refers to the loop-like interactive system (described in chapter 3) in a way comparable to how Vaggione1 and Di Scipio2 have referred to it. This conception of how interactive systems are made gives rise to certain evaluations of how those systems should be designed, bearing in mind their needs and aims.

Even though our background consists mainly of experiences with traditional instruments, which are based on deterministic principles, we have found ourselves gradually moving into an area involving indeterministic principles, which has become part of our research topics.

Chadabe points out that the design of a traditional instrument is fundamentally different from the design of an interactive instrument. A traditional instrument is structured as a single cause and effect, articulated as a synchronous linear path through a hierarchy of controls from a performer operating an input device to the multiple variables of a sound generator. An interactive instrument, on the other hand, is structured as a network of many causes and effects at various levels of importance, with a performer's input as only one of the causes of the instrument's output in sound. (Chadabe 2002)

1 Horacio Vaggione, Some Ontological Remarks about Music Composition Processes, Computer Music Journal, 25:1, pp. 54-61, Spring 2001.
2 Agostino Di Scipio, Sound is the Interface, Proceedings of the Colloquium on Musical Informatics, Firenze, 8-10 May 2003.

We feel a need to implement IMPSs in our work since, as we mentioned before (chapter 1), we consider them more and more necessary as an act of emergence: the need for a fast, fluid and reliable means of communication [interactivity (action and reaction)] between performers and computers in order to take live computer music to a higher level of performativity.

Regarding the emergence act, Pedro Rebelo formulated the following statement: The emergence of behavioral traces in the experience of an art-work urges the artist/designer to engage in a re-configuration that transforms an object-oriented art into a performative environment. (Rebelo 2003)

In any case, the term emergence can be taken as indicating a conscious utilization of the changing boundaries between the subject (listener, interpreter) and the maker (artist, composer), in which the former interacts with what the latter has made, such that the work can be said to emerge in its use, rather than having been designed in its entirety by the artist and then presented. This too might be regarded as a principle enhanced by the mechanisms (technological and social) associated with digital technologies. (Paraphrase of Simon Waters (2000). Beyond the Acousmatic: Hybrid tendencies in electroacoustic music, in Simon Emmerson, ed. Music, Electronic Media and Culture. Aldershot: Ashgate.) (URL 5)


5_ Computer Music
Evidently, using computers (the most general symbolic processors that have ever existed) drives music activity to an expansion of its formal categories.1

In 1981 IBM released its first personal computer. This drastically changed not only our daily lives but also the way most musicians approach music. Those who were used to analog synthesizers moved from voltage to bits and code, achieving musical results faster than ever before. In general, we could say that computers are nowadays almost everywhere and are used for almost every task, from simple paperwork to highly complicated calculations and scientific processes.

Regarding the music field, we agree with Miller Puckette's words in the introduction to his book on the theory and techniques of electronic music, where he rejects the term Computer Music, since most electronic music nowadays is made with computers. He proposed that this type of music should really be called Electronic music using a computer. (Puckette 2006)

We consider these words a starting point for redefining what is nowadays considered computer music and how this definition influences our work. Moreover, we would like to cross-fade this idea with Barry Truax's on the complexity of today's music. Truax argues that computer music, however it may be defined today, has continued the Western, i.e. European, tradition of music as an abstract art form, i.e. an art form where the materials are designed and structured mainly through their internal relationships. In this model, sounds are related only to each other, what I have elsewhere termed inner complexity. (Truax 1994a)

1 Horacio Vaggione, Some Ontological Remarks about Music Composition Processes, Computer Music Journal, 25:1, pp. 54-61, Spring 2001.


Thus, we think that computer-based IMPSs are the key to pursuing an outer complexity that may or may not relate to that inner complexity, but which should undoubtedly help composers and performers achieve a clearer relationship between the Object and the Subject in EA music. That relationship may remain unclear, although we feel that science producing new technology, and the cognitive sciences working on the perception and understanding side, are providing the proper tools to bring the two poles closer to each other.

In her writing on Endo-Aesthetics, Claudia Giannetti wrote: as a system, art is closer to science than ever before. (URL 2)

5.1_Performativity of Computer-based Interactive Systems


[…] The computer allowed us to reinvent the traditional categories of musical expression. […]1

The term Performativity could be understood as the number of different actions (elements of the final structure) that can be performed on a certain instrument or system in order to obtain, each time, different results. In other words, Performativity could be understood as how flexible a system is in terms of creative possibilities.

When referring to IMPSs, Performativity could be described as the amount of interactivity between performers and computers within a system, regardless of the types of interaction. If we think of the system of action-reaction described in chapter 4 (Music as Interactive Art), we could think of Performativity as the number of different Actions (made by the performer) that will cause different Reactions (by the computer system).

1 Hugues Genevois, Geste et Pensée musicale : de l'outil à l'instrument, Les Nouveaux Gestes de la Musique, Éditions Parenthèses, 1999, p. 35.


However, the different types of interaction (not merely their amount) will definitely affect the Performativity of the system. Consequently, we can state that the performativity of an IMPS results from the combination of the types of interaction plus the amount of interactivity each of those interaction parameters offers, yielding a greatly flexible system. Flexibility, then, is the keyword for Performativity with IMPSs.

We could make a comparison with Chadabe's remarks on designing traditional and interactive instruments: it is all a matter of flexibility. The more variables you have to perform with, the wider the sound spectrum available for performance and thus the richer the sound result. (Chadabe 2002)

5.2_Performance of Computer-based Music


Composers use computers not only as number-crunching devices, but also as interactive partners to perform operations where the output depends on actual performance.1

Computer-based interactive music systems started being employed during the late 1960s, initially involving computer-controlled analog synthesizers in concerts or installations. This current could be cited as one of the first successful attempts at HCI in musical performance, since it proposed a new way of interacting with a musical instrument while exploring its sonic possibilities.

We would like to mention that perhaps one of the most significant achievements of this current was the possibility for non-musicians (non-instrumentalists) to control a musical device providing commands for more than just pitch and amplitude control. The will to have devices for controlling parameters other than the already mentioned pitch, loudness and time began in the Musique Concrète era, with researchers' efforts to achieve control over other sound dimensions, such as spatialization parameters.

1 Horacio Vaggione, Some Ontological Remarks about Music Composition Processes, Computer Music Journal, 25:1, pp. 54-61, Spring 2001.

In this area it was originally Pierre Schaeffer who initiated the search, developing the device called Pupitre d'Espace1 (Spatial Desk): a control device that required physical movement from the performer in order to diffuse sound signals within the concert space.

During the following decade (the 1970s) the use of real-time algorithmic composition spread with the work of composers and performers such as David Behrman, Joel Chadabe, Salvatore Martirano, Gordon Mumma and Laurie Spiegel (cited by Mumma2 and Bernardini3), but its greatest impulse came during the mid 1980s with MIDI standardization and, slightly later, with the advent of dataflow graphical programming languages such as Max (originally created by Miller Puckette4 to interface with and control IRCAM's 4X), which made the design and implementation of custom interactive systems simpler than ever before. (Winkler 1998)

Whilst many interaction peripherals may form part of the computer performer's interface, the typical performance mode consists of a single user interacting via mouse with a GUI-based program, at a gestural rate divorced from the rate of output events, so that causes are uncorrelated with effects. This interaction method offers little in terms of musical gesture; therefore we will not study this particular case of interaction in computer music performance in depth.

1 Joel Chadabe, Electric Sound, Prentice Hall, 1997, p. 31.
2 Mumma, G. 1975. Live-electronic music. In J. Appleton and R. Perera, eds., The Development of Electronic Music. Englewood Cliffs: Prentice Hall, pp. 286-335.
3 Bernardini, N. 1986. Live electronics. In R. Doati and A. Vidolin, eds., Nuova Atlantide. Venice: la Biennale di Venezia, pp. 61-77.
4 Puckette, M. 1988. The Patcher. Proceedings of the 1988 International Computer Music Conference. International Computer Music Association, pp. 420-429.


5.3_Computers as Instruments
There are no theoretical limitations to the performance of the computer as a source of musical sounds, in contrast to the performance of ordinary instruments. At present, the range of computer music is limited principally by cost and by our knowledge of psychoacoustics.1

Nowadays many performers, programmers and composers (including the author) take advantage of the possibilities provided by computers, either when composing in a studio situation (for assistance, e.g. algorithmic composition, or for sound processing, e.g. sound synthesis) or in a live situation.

Nevertheless, it seems that those performers (often composers as well) who employ a computer almost exclusively in live situations are those belonging to aesthetic currents considered to lie outside the more purist electroacoustic field, such as experimental electronic music and the so-called laptop music.

On the other hand, it seems that composers and performers belonging to more traditional (usually academic) currents of electronic and/or electroacoustic music who implement computers in live concerts tend to use them for real-time sound processing (also referred to as live electronics) of acoustic instruments. In other words, computers are used here as just a small part of the process. Only a small number of performers who might be framed within that aesthetic current employ solely a computer during their performances, either as a sound source (synthesis, playback of pre-recorded material) or as a DSP unit (a variety of processes).

1 Mathews, M., "The Digital Computer as a Musical Instrument". Science, November 1963. Cited in Chadabe, J., Electric Sound, p. 110.


5.4_The Computer as a Meta-Instrument


A crucial feature in the application of digital technology to sonic art is the development of sophisticated hardware and software tools to permit human physiological-intellectual performance behavior to be transmitted as imposed morphology to the sound-models existing within the computer.¹

We would not consider a computer to be an instrument (at least not a musical one), although it has the potential to become one if we make the right decisions when programming, or use the right software. We also think that the instrument a computer might become would be a meta-instrument, since it would offer new, different or updated possibilities compared to a previously existing instrument; if not, we would simply have employed an already existing one. As a meta-instrument, or a newly conceived one, it has to provide us with as many performance possibilities (parameters) as possible in order to explore its potential. However, it should be user-friendly enough to make its use in a concert situation as easy as possible, allowing us to focus exclusively on the musical phenomenon.

Trevor Wishart affirmed that a computer can change our entire perspective on the way we do things, because it is not a machine designed for a specific task but a tool which may be fashioned to fulfill any task which we can clearly specify; we can therefore regard it as a meta-machine. In particular, it offers the possibility of being a universal sonic instrument, a device with which we can model and produce any conceivable sound-object or organization of sounds. One of the main principles for considering the computer a meta-instrument, as Wishart implies with his phrase "existing within the computer", is the fact that computers are virtually unlimited in terms of programming flexibility to create new music, new interactive systems, and new ways of creating new music. The only limitations are ours, and we deal with them daily in order to keep moving forward and finding new ways to take advantage of computer systems. (Wishart 1996)

1 Trevor Wishart, On Sonic Art. Harwood Academic Publishers, 1983/1996.


5.5_Laptop Music
The transition to laptop-based performance created a rift between the performer and the audience, as there was almost no stage presence for an onlooker to latch on to. Furthermore, the performers lost much of the real-time expressive power of traditional analog instruments.¹

The mass production of personal computers (PCs) since the 1990s, and thus the lowering of costs, provided musicians with the possibility of including computers as tools (at that time, mainly in studio environments). Moreover, portable computers (notebooks, laptops, tabletops, PDAs) allowed performers to take to the concert stage a tool that until then could only be employed in the studio. That fact produced a big impact on the way music was presented to the public: it became possible to bring a laptop to a concert almost anywhere and to create in real time (here & now) what until a few years earlier was only possible with a workstation.

As for the label "laptop music": although it is nowadays present in many electronic music concerts and festivals, it is not a genre but a characteristic of contemporary performance practice in electronic music.

Pioneers in performing with just a laptop include Carl Stone, who started using the Max programming environment to perform live in 1991. Another case is that of artists belonging to the then up-and-coming Japanese noise scene, who also began to use laptops in their performances and improvisations in the early 1990s; we should mention artists like Yuji Takahashi, Mamoru Fujieda, Yuko Nexus, Nobuyasu Sakonda and Masayuki Akamatsu. The Austrian collective Farmers Manual is also often referred to as the first laptop ensemble, since they started performing live with their laptops in 1996.

1 Patten, J., Recht, B. & Ishii, H. (2002). "Audiopad: A Tag-based Interface for Musical Performance". In Proceedings of the 2002 International Conference on New Interfaces for Musical Expression (NIME-02), Dublin, pp. 11-16.


Also to be mentioned is the case of DJs and live electronic acts who, since laptops became reliable enough for musical performance, play live and make music throughout an entire set (perhaps a few hours) with just a laptop. We can refer to artists such as Merzbow, Vladislav Delay, Carsten Nicolai, AGF, Zbigniew Karkowski, Matmos or Oval, to give a few examples.

5.6_Other Mobile Devices (that enhance interactivity)

The need to compress audio files to match the streaming possibilities of the Internet gave birth to the MP3 audio codec. After a few years, MP3s not only remained on the web and on computers: taking advantage of the small size of the files, companies started developing portable devices to be loaded with audio in MP3 format. This phenomenon provided musicians with new portable devices (MP3 players, palmtops, laptops, iPods, mobile phones, etc.), making music even more portable and taking the concept of ubiquity to a new level.

We also have to recognize, as already happened with computers, that portable devices are becoming more powerful and the types of applications they can support are becoming really sophisticated. Increased processing power, memory and the addition of multiple forms of motion and location sensing bring into the hand a device capable not only of supporting more demanding applications such as video capture and editing, but also of supporting entirely new forms of interaction.

During the Summer School in Sound and Music Computing 2006 (S2S2), held at the Universitat Pompeu Fabra in Barcelona, researchers Günter Geiger and Martin Kaltenbrunner from the Music Technology Group at UPF showed an adapted version of an iPod mini, modified to run previously programmed Pure Data patches. They claimed that the version of the iPod they were using has a dual processor providing enough CPU power to easily run synthesis algorithms, as well as decent audio quality from its headphone output. One of the weakest points of this project, however, is the limited user interface the iPod provides if we intend to use it for highly interactive purposes rather than for listening to MP3s. Still, with good programming skills, the power of extremely portable devices offering considerable computing power (and perhaps a Bluetooth connection providing Internet access through a mobile phone) may be just the beginning of a new current of IMPSs.

6 _Types of Interaction
We seek to frame an extended expressiveness towards interactive systems through the concept of Aesthetic Interaction that can be obtained when the human body, intellect and all the senses are used in relation to interactive systems.¹

Although we are mainly concerned with IMPSs, in this chapter we will show some examples of different types of ISs related not just to the performing arts, and briefly define a taxonomy of the techniques most relevant to our research.

As we have already seen, in the electronic music field different types of interaction are almost always present. However, while the most common interaction in the electronic arts is that between the performer and his or her instrument (namely, the system), interactivity is not limited to music. We find examples of interactivity in other artistic fields implementing computer systems, such as painting and architecture: e.g., a painter drawing with an optical pen on a tablet PC, or an architect designing and/or drawing with a computer program, i.e. CAD (Computer-Aided Design) software.

1 Marianne Graves Petersen et al., "Aesthetic Interaction: A Pragmatist's Aesthetics of Interactive Systems". DIS2004, August 1-4, 2004, Cambridge, Massachusetts, USA.


However, we would like to present a simple taxonomy related to our research topics, conceived after some considerations on different aspects. Regarding the way ISs are accessed by their users, we perceived a differentiation between systems offering palpable interfaces and systems offering touch-less interfaces. Regarding our research topics, we covered systems which favor gestural music performance, with special attention to systems favoring Musical Gestures, as well as interactive systems which support non-gestural interaction. We have also covered site-specific and network systems, the interactive implications of different systems, the amount of interactivity offered by different systems, single- and multi-user systems, etc.

6.1_Palpable (Haptic) vs. Touch-less Interaction


We create the world by perceiving it.¹

In every culture, traditional musical instruments are based on haptic actions which provide not only aural feedback (acoustical representations) but also palpable feedback at the precise moment the action is performed.

For instance, guitar players move the left hand (for right-handed players) across the fretboard to change the pitch of the sound. If they move one finger before the previous sound stops, they can achieve a legato. By moving the right hand between the bridge and the fretboard, the player can achieve different timbres, from more aggressive to mellower sounds. For a shorter sound, the player simply releases the finger pressure and the sound stops. All these simple actions differ drastically from one another and yield different results, mainly because of the tactile contact with the instrument. So we could state that tactual perception allows us to react to and modify different parameters of sound on acoustic instruments. (Perepelycia 2005a)

1 Humberto Maturana, "Kognition", in Der Diskurs des Radikalen Konstruktivismus, Siegfried J. Schmidt (ed.), Frankfurt/Main, 1987, pp. 89-118.

On haptic interaction, Irish researcher Sile O'Modhrain formulated the following statement: "The coincidence of connectedness, awareness and richly multimodal input and output capabilities brings into the hand a device capable of supporting an entirely new class of haptic or touch-based interactions, where gestures can be captured and reactions to these gestures conveyed as haptic feedback directly into the hand." (O'Modhrain 2005)

In short, the combined feedback from the tactile and haptic (proprioceptive¹) systems provides a myriad of cues that help us move and act in the world: the tactile sense, mediated by receptors in the skin, relies on movement between the skin and an object's surface in order for any sensation to be perceived; without movement, objects disappear from our tactile view. (Mine et al. 1997)

In computer music performance, tactual haptic feedback is almost nonexistent. The lack of physical feedback other than sound (perceived by the ears or, in the event of great sound pressure, also by the body) tends to make any attempt at implementing a physical interface feel artificial. This is mainly because, in most cases, there is no direct relationship between the physical gesture captured by the interface and the sound result produced by the computer. This dissociation of events tends to confuse the audience, since the sonic result does not match the visual one, generating a perceptual dichotomy between the visual and auditory channels. On the other hand, trained musicians performing computer music tend to favor gestural actions, trying, perhaps, to emulate or mimic their instrumental skills or, in any case, looking for gestures that emphasize the computational musical processes and thus the overall musical result.

1 Proprioception (from Latin proprius, meaning "one's own", plus perception) is the sense of the position of parts of the body relative to other neighboring parts. It is the sense that indicates whether the body is moving with the required effort, as well as where the various parts of the body are located in relation to each other. (URL 6)

There are, though, a number of different possibilities regarding physical interfaces. Among instrument-like controllers, the most widely implemented are keyboard-type controllers (either with MIDI or, more recently, USB connections). MIDI guitars, violins, drums and wind instruments such as trumpets and saxophones are also employed, though less frequently.

However, the majority of the interfaces used to physically control a computer favor gestures not related to traditional musical instruments. It is frequent to see performers moving faders and tweaking knobs of a MIDI mixer or a similar device, which represents the most used interface nowadays. Pedal-boards are also commonly implemented as a way of sending information to the computer, as are switches and buttons, though these offer limited options in terms of data transfer and physical actions.

Regarding touch-less interaction, several types of sensor-based hybrid controllers provide performers with tools that translate physical actions into data without physical contact. (See also: Bongers 2000, Paradiso et al. 2000, Bowers and Archer 2005)
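Whatever the sensor, the core software task is the same: the raw reading must be mapped onto a musically meaningful range before it can drive a sound process. A minimal sketch in Python (the sensor range and the cutoff target are hypothetical values chosen for illustration, not taken from any system described here) might look like this:

```python
def scale(value: float, in_lo: float, in_hi: float,
          out_lo: float, out_hi: float) -> float:
    """Linearly map a raw sensor reading onto a control-parameter range,
    clamping the reading to its expected bounds first."""
    value = max(in_lo, min(in_hi, value))
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# e.g. an ultrasonic distance sensor (assumed 5-80 cm working range)
# driving a filter cutoff between 200 Hz and 8000 Hz:
cutoff = scale(42.5, 5.0, 80.0, 200.0, 8000.0)  # -> 4100.0 Hz
```

In practice such mappings are often made nonlinear (e.g. exponential for frequency) so that equal physical displacements produce perceptually comparable musical changes.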

An interesting project to mention, though not strictly musical, is EyesWeb by Antonio Camurri: a camera-based motion detection and analysis system built on specifically designed software, which has been implemented in live computer performances several times. Italian composer Luciano Berio implemented the system for his opera Cronaca del Luogo at the opening of the Salzburg Festival in 1999. For that occasion one of the main characters, interpreted by David Moss, had several sensors attached to his costume, and a few strategically placed cameras served to provide him with contact and contactless real-time control over the sound processes applied to his own voice by the computer. (Camurri 2004)

6.2_Gestural Interfaces
[...] Also, from the very moment in which the notion of the musical instrument undergoes major changes, we question the modes of existence and appropriation of the nature of the musical gesture. [...]¹

Regarding the freedom of combination available during the design process, either of the interface or of the processes it will control, Bert Bongers pointed out a few remarkable issues. He mentioned that the question of how to deal with the total freedom of being able to design an instrument based on human (in)capabilities, instead of on a traditional instrument form, has not been answered yet. A suggested approach is to look at the most sensitive effectors of the human body, such as the hands and the lips, which also have the finest control, and to look at the gestures and manipulations that humans use in traditional instruments or in other means of conveying information. (Bongers 2000)

6.2.1_New Devices and Hybrids ones

Given the wide array of available input devices, the choice of suitable controllers is a fundamental consideration, as is the subsequent interpretation of control gestures. In this respect (gestural control) it seems, not only to me, that the newest truly tactual/gestural musical instrument to have achieved a certain amount of success among musicians is the turntable. Turntables have been used as musical instruments almost since their creation: Emile Berliner created the Gramophone in 1894, and some 25 years later musicians started to experiment with its musical capabilities.

1 Hugues Genevois, "Geste et Pensée musicale : de l'outil à l'instrument", Les Nouveaux Gestes de la Musique, Éditions Parenthèses, 1999, p. 35.

The history of using turntables for making new music dates back to Paul Hindemith, Ernst Toch, Percy Grainger, Edgard Varèse and Darius Milhaud in the twenties, but the first important attempt was John Cage's Imaginary Landscape No. 1 from 1939, in which his means of manipulation was adjusting the rotation speed while playing monophonic tones (RCA test tones). The next step, musique concrète, introduced the father of vinyl manipulation, Pierre Schaeffer. He claimed to have found the music of the future and foresaw an ensemble of turntables: because the record player is able to produce any desired sound, the turntable represented the ultimate musician. Perhaps Schaeffer's best-known piece for turntable is Étude aux Chemins de Fer from 1948. (Khazam, 1997)

Turntables, however, came to be used not just to play back vinyl records but also as physical instruments. This, by accident, gave birth to turntablism, described as the act of playing a turntable, which was born inside hip-hop culture (of which it has now become an essential part) in the Bronx, New York, during the mid 70s.

From a different viewpoint, we would like to refer to a previous experience of our own. In that project we worked with a pair of data gloves and a MIDI-controlled acoustic piano. The key point of the project was to create a system which allows the performer to interact gesturally with an instrument. Our experience with the system suggests that sensor technology can be a very powerful tool to control an acoustic instrument remotely, by translating the performer's gestures into musical information through computer programming. For the piece Libertad(es) Controlada(s) we provided the performer with a reliable wearable interface which allowed him/her to interact with a piano without physical contact, enhancing the principle of gestural integration between performer and instrument. (Perepelycia 2005b)


One of the first and perhaps most representative examples of a glove implemented in musical performance is Laetitia Sonami's Lady's Glove. The current version (No. 4) was built by Bert Bongers in 1994. The Lady's Glove is fitted with a variety of sensors to enhance control: five micro-switches, four Hall-effect transducers, a pressure pad, resistive strips on each finger, two ultrasonic receivers, a mercury switch on the top of the hand, and an accelerometer which measures the speed of motion of the hand.

Sonami mentioned that the intention in building such a glove was to allow movement without spatial reference (there is no need to position oneself in front of, or in the sight of, another sensor), and to allow multiple, parallel controls. Through gestures, the performance aspect of computer music becomes alive; sounds are "embodied", creating a new, seductive approach. (URL 7)

We cannot omit Michel Waisvisz's The Hands. This device, developed at STEIM in Amsterdam in 1984, is based on a pair of hand-mounted interfaces controlled by different hand and finger actions. They are fully equipped with sensors to measure physical actions, as well as events related specifically to the interrelation between the two interfaces, such as their proximity to one another and the distance of each hand from the floor.

There are several commercially available data gloves that not only include sensors but also implement actuators, like the CyberForce or CyberTouch by Immersion Co. These interfaces provide users with tactual feedback (e.g. pulses, sustained vibrations or even force feedback), which could definitely enhance the feel of performing different tasks while interacting with the system; most importantly, the system can react in a different manner to the different actions the performer makes.


6.2.2_Hyper, Meta, Cyber

All the examples given above are not based on previous musical instruments; in other words, they were born as novel interfaces. Nevertheless, there are several examples of acoustically modified instruments, some of which respond to the name of hyper-instruments. This type of instrument was created by Tod Machover for a project called VALIS. Machover wrote: "The technology we developed for this opera project [VALIS] came to be called hyperinstruments. By focusing on allowing a few instrumentalists to create complex sounds, we continually improved the technology and reduced the number of actual musical instruments. The resulting opera orchestra consisted of two performers: one keyboard player and one percussionist. They controlled all the music for the opera, and almost all of it is live. [To build effective hyperinstruments] we need the power of smart computers following the gestures and intentions of fine performers." (URL 8)

This family of instruments was conceived with the aim of augmenting or emphasizing the performer's capabilities. It is clear, then, that for performers to reach the instrument's maximum potential, two conditions would be necessary: first, to play a traditional instrument, and second, to have mastered its execution.

There is another family of hybrid instruments which might be included in a similar (if not the same) category of the taxonomy, and which receives the name of meta-instruments. Meta-instruments, unlike hyper-instruments, are based on the concept of modifying traditional instruments in order to achieve different musical results, at least different from those for which the instrument was originally conceived. They represent a pseudo-synthetic product conceived through the concatenation of two elements. On one side, they are based on already existing traditional instruments, which gives traditional performers the possibility of continuing to use their already acquired techniques. On the other hand, these instruments offer a whole world of new possibilities, especially those including electronic/digital modifications that make them computer-compatible in terms of direct connectivity with a computer-based system. These instruments double as acoustic instruments while also working as interfaces to control, for instance, a laptop running Max/MSP, which might include a sampler for the acoustic signal of the instrument, a synthesizer, sound spatialization, the control of external devices via the OSC protocol, etc.
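As an illustration of the kind of connectivity just mentioned, the OSC 1.0 specification defines a simple binary format: a NUL-terminated, 4-byte-padded address string, a type-tag string, and big-endian arguments. A minimal encoder in Python is sketched below (the address "/meta/trumpet/pressure" is a made-up example, not an address used by any instrument discussed here):

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message carrying one 32-bit float argument."""
    def pad(s: bytes) -> bytes:
        # OSC strings are NUL-terminated and padded to a 4-byte boundary
        return s + b"\x00" * (4 - len(s) % 4)

    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

packet = osc_message("/meta/trumpet/pressure", 0.73)
# The packet could then be sent over UDP to a patch listening in
# Max/MSP or Pure Data, e.g.:
#   import socket
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#       packet, ("127.0.0.1", 9000))
```

In real setups one would normally use a ready-made OSC library rather than hand-encoding packets, but the sketch shows how little machinery the protocol actually requires.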

It is clear that instruments belonging to this category, placed midway between traditional lutherie (considering mechanical modifications and adaptations) and what is called virtual lutherie (built from virtual instruments such as synthesizers, physical models and so on), with the extra value of the instrument itself being the control interface, extend the possibilities not only of the instrument itself but also of its sound, thus increasing the musical potential. We believe this category provides artists with new tools to design new types of interaction and thus redefine the scope and applicability of IMPSs.

We will, then, refer to cases including electronic/digital modifications, since those are the closest to our research.

A good example of this category is Jonathan Impett's Meta-Trumpet, dating from 1993. Impett's Meta-Trumpet has several sensors to measure different frequently made physical actions such as air pressure, valve pressure, orientation of the bell, etc. This data is transformed and sent to a computer running Max/MSP; processes are then controlled by the same actions that produce the acoustic signal. (See also: Impett 1994)

Other similar examples, including different techniques and/or technologies to produce the "meta" part of the instrument, are Nic Collins's Trombone-Propelled Electronics v. 3.0 and the performers of the UEA Electronic Orchestra (integrated by diverse artists such as Jonathan Impett, Nic Collins, Cesar Villavicencio, John Bowers, Phil Archer and Nick Melia). They implement not just Impett's trumpet and Collins's trombone, but also a custom-made meta-bass recorder, a meta-flute and several other modified acoustic and electric instruments.

US-based researcher Insook Choi, referring to musical gesture implemented with new technologies, mentioned that "the configuration of a performing art with new performance media demands research criterion for applying human motions and gestures. It has been a challenge for an artist living in rapidly changing industrial society to identify the relevance of existing research, and to identify the relevance of goals suitable for performing art with new technology." (Choi 2000)

6.2.3_The meaning of the Musical Gesture


[...] we believe that to study the role and functionality of gesture in interactive music systems, one should consider it in a broader sense in order to be able to take some music peculiarities into account. For this reason we take gesture as any type of "bodily movement [...] made consciously or unconsciously to communicate either with one's self or with another" (Hayes 1966; Nöth 1990). This would include not only emblematic hand gestures, but also any other bodily movement capable of conveying meaning. This would also include actions of touching, grasping or manipulating physical objects in order to control music or sound parameters.¹

Musical gestures have been evolving for as long as we have knowledge of them, both in terms of expressiveness and in terms of technique. There have been great instrument inventions throughout musical history; some were based on already existing ways of gesturally interacting with an object or a non-musical instrument, while others postulated new approaches, new techniques and therefore new gestures. Those new inclusions were really important in musical history not just because they represented an evolution, bringing new possibilities probably nonexistent until then, but also because they represented a challenge to the musicians who were exposed to those new instruments, new designs, modifications, etc. In other words, they had to accept new forms: not to break with their cultural heritage and traditions, but at least to deviate a little from them. This gave place to an evolving culture, one with strong roots, yet open to new forms.

1 Fernando Iazzetta, "Meaning in Musical Gesture". Reprinted from Trends in Gestural Control of Music, M. M. Wanderley and M. Battier, eds., Ircam - Centre Pompidou, 2000, p. 261.

Each new step represented (and, with today's inventions, still represents) innovation not just in the musical and musicological fields but also in the scientific (from which most developments came), technological and historical ones.

To illustrate the relationship between gesture and music, we would like to make a comparison with the relations established between gesture and sound in Walter Benjamin's theory of narrative art. In his 1935 essay "Problems of the Sociology of Language"¹, Benjamin wrote that gesture comes before sound. Originally he claimed that the elements used in language are based on mimic-gestural elements; though, after meditating on some thoughts by Mallarmé on dance, he slightly modified his definition, claiming then that the roots of spoken expression and dance expression belong to one and the same mimetic faculty.

This concept applies as an example of non-musical gesture and might be placed within the field of language-processing studies in the cognitive sciences. The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences.

However, tracing a parallel with our topic, every theory or technique used to study musical gesture demands a mapping strategy to divide the whole into small portions, in order to build a taxonomy of events to be studied independently. Bert Bongers described such a taxonomy applied to sensor-based movement detection for interactive systems. Bongers said that movements can be measured without contact (for instance with a camera) or with a mechanical linkage (with a potentiometer, for instance). A complex, multiple degree-of-freedom¹ movement (as most movements are!) is often decomposed and limited by mechanical constructions.

1 Walter Benjamin, "Problèmes de Sociologie du Langage", Oeuvres III, Paris, Gallimard (folio essais), 2000, trans. Maurice de Gandillac, revised by Pierre Rusch. Cited in Anne Boissière, "La Part Gestuelle du Sonore : expression parlée, expression dansée. Main et narration chez Walter Benjamin", Université de Lille 3, 2004.

A gesture starts with human muscle action, and is then distinguished into isometric action (no movement, just the pressure of pushing against something) or movement (when there is displacement). In the latter case, the movement can be sensed through mechanical contact, or through free-moving contactless measurement.
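Bongers' distinctions can be read as a small decision tree. The following sketch (the function name, its inputs and the category labels are our own illustration, not part of Bongers' text) classifies a sensed action accordingly:

```python
def classify_gesture(displacement: float, pressure: float,
                     mechanically_linked: bool) -> str:
    """Classify a sensed action following Bongers' distinctions:
    isometric (pressure without movement) vs. movement, the latter
    sensed either through mechanical contact or contactlessly."""
    if displacement == 0.0:
        return "isometric" if pressure > 0.0 else "no gesture"
    return ("movement (mechanical contact)" if mechanically_linked
            else "movement (contactless)")

# pushing against a force-sensing resistor, no displacement:
print(classify_gesture(0.0, 0.8, True))   # isometric
# waving a hand in front of a camera or ultrasonic sensor:
print(classify_gesture(0.3, 0.0, False))  # movement (contactless)
```

Such a classification is, of course, a simplification: most real gestures combine isometric and movement components over time, which is exactly why Bongers notes that multi-degree-of-freedom movements are usually decomposed.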

Coming back to Benjamin's concepts, after some consideration we suggest that, regarding the Musical Gesture, that specific relationship gives place to a specific type of language: the gestural language. This specific language is a two-way communication channel through which both elements affect each other. We also refer to the musical gesture as the musical representation of the physical gesture needed to produce a certain sound or, more specifically, a certain music (a note, phrase, section or piece). We can say that the complex performer-instrument interrelation places the instrument as an extension of the performer's body. That specific gesture is rooted in archaic principles dating from the very first steps in communication between human beings, in the first traces of language. It is not just a physical gesture: it is a communicative gesture. It is a code; it has a meaning. It is not only understood and translated by the instrument, it is also understood by the audience, who carefully watch the performer make a succession of gestures conforming a dialogue: a musical discourse with its own choreography. That choreography has the aim of transforming physical (bio-mechanical) energy into acoustical energy, which will shape first the sounds and then the music.

1 Degrees of freedom (DOFs) describe the position and orientation of an object in space: three degrees of freedom mark the position on the x, y and z axes of three-dimensional space, and three describe the rotation around these axes.

French researcher Claude Cadoz wrote that the instrumental gesture, employed by musicians to sound an acoustic instrument, is composed of a chain of categorical elements: namely, the instrument itself, the musician, the way the action that sounds the instrument is produced, the way the energy producing this action is transformed into acoustic energy, and the resulting sound. (Cadoz 1999)

Cadoz also characterizes the instrumental gesture as communicational (and thus by essence semiotic: it is used to express something) and epistemological (as many of our daily gestures employ unconscious knowledge acquired through our senses), but above all as ergotic, since musicians produce their own physical energy, which is then used to produce the Musical Gesture.

Lawrence Barsalou suggested that gesture mediates all modes of communication. (Barsalou 1999) In this way the mnemonic image of a musical passage can be closely related to the kinaesthetic¹ image required for its reproduction. In a listening context, it is for this reason that the experience of a particular musical image can trigger an association with the kind of kinaesthetic image involved in its generation. Recent theories of music and gesture describe the possibility of kinaesthetic imagery playing a mediatory role in relationships between both poles. (Battey 1998)

Kinesthesia is a term that is often used interchangeably with proprioception, though researchers differentiate the kinesthetic sense from proprioception by excluding the sense of equilibrium or balance from kinesthesia. Kinesthesia is a key component in muscle memory and hand-eye coordination and training can improve this sense. (URL 9)


On another note, we would like to mention that every musical instrument is specifically related to a certain culture and to a certain technology employed to develop it, according to the cultural background and social implications of the specific place where it was conceived; it therefore suggests its own types of interaction and engagement. For instance, a string instrument from the Middle East will be played in a different way if taken to Asia, where people are used to playing other string instruments differently, employing different types of gestures, and will also have different expectations in terms of acoustic energy and thus of sound results. In this sense we feel that the social implications of new interfaces that call for new gestures would be perceived differently by different cultures.

6.3_Non-Gestural Interaction
[…] non-physical interactions distinguish human beings from organisms that lack a nervous system and in which interactions are purely physical in nature (as in the case of a plant, for example, where the reception of a photon triggers photosynthesis). Communication as interaction is a component of the system, and as a cognitive process does not refer to an autonomous external reality, but is a process of behavioral coordination between the observers through structural coupling. […]1

Although this type of interactivity is not our aim, we consider that we should mention it, as it seems in many ways the most closely related to the Cognitive Sciences; these seem to provide us with the most reliable (or at least most promising) tools for research in the field of music perception, as well as methods to study the perception of Electroacoustic music, which is our concern.

An example of a Non-Gestural (touch-less) Interactive System is the so-called Brain Orchestra developed at the University of Toronto.
1 Humberto Maturana, Kognition, in Der Diskurs des Radikalen Konstruktivismus, Siegfried J. Schmidt (ed.), Frankfurt-on-Main, 1987, p. 114. See Humberto Maturana and Francisco Varela, Autopoiesis and Cognition: The Realization of the Living, Boston, 1980.


On the 31st of March 2005 the project Regenerative Music was presented. Regenerative Music, developed by James Fung at the University of Toronto, explores new physiological interfaces for musical instruments. The computer, instead of taking only active cues from the musician, reads physiological signals (heart beat, respiration, brain waves, etc.). These signals are used to alter the behavior of the instrument; for instance, filter settings can be applied to the sound, to which the musician responds by changing the way she/he plays. The music will in turn generate an emotional response on the part of the musician/performer, and this response will be detected by the computer, which then modifies the behavior of the instrument further. The musician and the instrument can thus both be regarded as an "instrument" playing off of each other.

The concept was extended during DECONcert 1, where 48 people's electroencephalogram (EEG) signals were hooked up to collectively affect the audio environment. The EEG sensors detected the electrical activity produced in the brains of the participants, and the signals were used to alter a computationally controlled soundscape. The resulting soundscape triggered a response from the participants; the collective response of the group was sensed by the computer, which then altered the music based upon this response, and so forth. (URL 10)
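One iteration of such a biofeedback cycle can be sketched in a few lines. This is our own schematic illustration, not the actual DECONcert implementation: the function names, the arousal scale and the smoothing scheme are assumptions made for clarity.

```python
def average_arousal(eeg_channels):
    """Collapse many participants' EEG activity estimates into one value.

    eeg_channels: per-participant activity estimates, each in [0, 1].
    """
    return sum(eeg_channels) / len(eeg_channels)

def update_soundscape(cutoff_hz, arousal, smoothing=0.9):
    """Map collective arousal to a filter cutoff, smoothed to avoid jumps."""
    target = 200.0 + arousal * 4000.0   # calmer group -> darker sound
    return smoothing * cutoff_hz + (1 - smoothing) * target

# One step of the loop: the group relaxes, the soundscape darkens,
# which in turn influences the group on the next iteration.
cutoff = 2000.0
cutoff = update_soundscape(cutoff, average_arousal([0.2, 0.3, 0.25]))
```

In a real system this step would run continuously, with the audience's collective response closing the loop.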

One remarkable aspect of this system is that it is not necessary to be familiar with any particular (i.e., instrumental) technique. It is therefore accessible to everybody in terms of performativity: it is a novel channel for producing music, and there are no standards for how to produce it other than simply producing brain activity. On the other hand, this technique presents a great lack of control as an interface for an IMPS. Consequently, we are not focused on interfaces implementing this type of technique, since they have not proved reliable enough for interesting musical performance.


A curious example, difficult to place in this chapter because of its duality, is the Bio-Muse. The Bio-Muse (developed by BioControl Systems and adapted by Benjamin Knapp for Atau Tanaka) was introduced in 1992. It is a bioelectric signal controller which allows users to control computer functions directly from muscle, eye movement, or brainwave signals, bypassing entirely the standard input hardware such as a keyboard or mouse. It receives data from four main sources of electrical activity in the human body: muscles (EMG signals), eye movements (EOG signals), the heart (EKG signals), and brain waves (EEG signals). These signals are acquired using standard non-invasive trans-dermal electrodes. The device as used by Tanaka implements EMG technology to capture the bio-impulses received by a sensor attached to the performer's forearm. (Perepelycia 2005a)

We decided to include the example of the Bio-Muse within the Non-Gestural Interfaces chapter since we perceive that its principle is not directly related to Gesture itself but to a collateral action detached from a gesture. We could say that, though a physical action is needed in order to make the system work, the information retrieved by the system consists only of bioelectric signals, regardless of the physical action (namely, the Gesture) which produced them.

6.4_Single- and multi-users IMPSs


It is by designing and constructing electronic communication channels among players that performers can take an active role in determining and influencing not only their own musical output, but also their peers'.1

Since their origin, IMPSs have been implemented in both solo and multi-user settings. While performing music has typically been a collective event, traditional musical instruments have mostly been designed for individual use (even if some, such as
1 Weinberg, G., Interconnected Musical Networks: Towards a Theoretical Framework (2005), Computer Music Journal, submitted. Cited in Sergi Jordà Puig, Digital Lutherie: Crafting musical computers for new musics performance and improvisation, PhD Thesis, Universitat Pompeu Fabra, March 2005.


the piano or the drum kit, can easily be used collectively). This restriction can now be freely avoided when designing new interfaces, which leads to a new generation of distributed instruments, with a plethora of possible models following this paradigm (statistical control, equally-allowed or role-playing performers, etc.). Implementations of musical computer networks date back to the late 1970s, with performances by groups like the League of Automatic Music Composers or the Hub (Bischoff, Gold and Horton, 1978).

Also, art collectives such as the Austrian collective Farmer's Manual (often vaunted as the first true laptop ensemble, since they started performing in 1996), the UEA Electronic Orchestra (Jonathan Impett, Nic Collins, Cesar Villavicencio, John Bowers, Phil Archer and Nick Melia) or B.L.I.S.S. (the Belfast Legion of Improvised Sights and Sounds) are exploring and proposing new ways of collective multi-user interaction. They push the threshold not only as regards the sound result (comparable to that of an ensemble or an orchestra, in contrast with a single instrument), but also by redefining the ways a computer might behave: as a Meta-Instrument within an ensemble, coordinating others' actions, processing others' sounds, sharing sounds or algorithms with other performers, sonifying or visualizing others' sounds or images via its own algorithm, etc.

This type of IMPS is clearly based on the traditional performance concept, where musicians interact with each other but there is little to no interaction with the audience.

There are some novel IMPS designs which seek different ways of creating music in a group context while proposing new approaches both to performativity and to the performance situation. One of these examples is the Audiopad, developed at the MIT Media Lab. Audiopad is an interface for musical performance that aims to combine the modularity of knob-based controllers with the expressive character of multidimensional tracking interfaces.


The performer's manipulations of physical pucks on a tabletop control a real-time synthesis process. The pucks are embedded with LC tags that the system tracks in two dimensions with a series of specially shaped antennae. The system projects graphical information on and around the pucks to give the performer sophisticated control over the synthesis process. (Patten et al 2002)

A similar interface, also employing the table-top concept, is the reacTable, developed by a research team led by Sergi Jordà within the MTG group at the Pompeu Fabra University. The reacTable* aims to create a state-of-the-art interactive music instrument, which should be collaborative (off and on-line), intuitive (zero manual, zero instructions), sonically challenging and interesting, learnable and masterable, suitable for complete novices (in installations), suitable for advanced electronic musicians (in concerts) and totally controllable (no random, no hidden presets). The reacTable* should use no mouse, no keyboard, no cables, no wearables. It should allow a flexible number of users, and these should be able to enter or leave the instrument-installation without previous announcement. (Jordà 2005)

From a different perspective, we believe that the field of multi-user IMPSs is best represented by the concept of interactive sound installations which respond to the public's actions (movements and gestures), leading us to another important point: that of the skills and know-how of the performer(s). In other words, an installation replaces the concept of the skilled performer with an interactive/explorative phenomenon in which the users/performers (the public) discover the interactive features of the system in an intuitive way. Perhaps the two last-mentioned examples are the closest to this concept, although they limit potential users in terms of the available space in which to interact, since the interface is site-specific (namely, a few square meters).

Regarding the idea of the performance space being integrated and explored by its users/performers in the form of a large-scale Installation, which would solve the


limitations of the above-mentioned examples, we found a project developed in the mid-90s at MIT called the Brain Opera.

The Brain Opera, conceived and directed by Tod Machover, was designed and constructed by a highly interdisciplinary team at the MIT Media Laboratory during an intensive effort from the fall of 1995 through the summer of 1996. A major artistic goal of this project was to integrate diverse, often unconnected control inputs and sound sources from the different Lobby participants into a coherent artistic experience that is "more than the sum of its parts", inspired by the way our minds congeal fragmented experiences into rational thought. (Machover 1996) The Brain Opera is by no means a fixed or purely experimental installation; it had to operate in many real-world environments (having already appeared at 7 international venues) and function with people of all sizes, ages, cultures, and experience levels. As a result, the interface technologies were chosen for their intuitiveness, overall robustness and lack of sensitivity to changing background conditions, noise, and clutter. This tended to rule out computation-intensive approaches such as computer vision (e.g., Wren et al. 1997), which, although improving in performance, would be unable to function adequately in the very dense and dynamic Brain Opera environment. (Paradiso 1998) We found this project effective to a certain extent when considering our aims, since its interface invites multi-user exploration and interaction. However, we feel it lacks control over the sound processes, as it does not provide users with a great amount of performative control, which might lead to similar musical results across performances.

Other commercial examples not related to IMPSs, such as gaming environments, tend to enhance single- and multi-user interaction by providing tools such as visual and sonic immersion, which greatly increase the level of engagement with an interactive system.


6.5_... and it goes through the Internet (network and web-based interaction)

The formation of international projects in the 1970s was a crucial stimulus for art in conjunction with telecommunication as well as for the notion of ubiquity. The Brazilian Waldemar Cordeiro, a pioneer of Computer Art, in 1971 identified the inadequacy of communications media as a form of information transmission and the inefficiency of information as language, thought, and action as being the causes of the crisis of contemporary art.

Cordeiro asserted in his manifesto Arteônica that art whose main emphasis lies on the material object restricts audience access to the work and therefore meets the cultural standards of modern society neither qualitatively nor quantitatively. (Cordeiro 1972)

Cordeiro's deliberations in regard to global networking and free telecommunication, which enabled audience access to a work of art, anticipated the notions of ubiquity, participation and net art.

We would like to mention an example of a web-based system for live interactive music designed by the American composer-programmer John Paul Young.

In order to achieve successful interactive communication between the system and its potential users (performers), Young suggested that the following conditions be fulfilled:
- Bidirectional communication between each interactor
- Independent and persistent existence
- Consistent, perceptible rules governing interaction and feedback
- Aspects of emergent behavior incorporating, but not limited to, direct manipulation
- Potential for coordinated collaboration with other interactors without requiring external channels of communication
- Evolution: the shared environment changes as a result of the sum total of interactions


These conditions describe many aspects of our perception of physical reality, but need not be implemented as a literal reflection thereof, with all the complexity that would imply. Taking as a point of departure the observation that music is deeply meaningful though fundamentally abstract, the features above can potentially be incorporated into a virtual environment arising from the same conceptual basis as our relationship to music. (Young 2003)

6.6_How Interactive is a System?

Most Interactive Systems as we know them are only partially interactive in terms of unpredictability. By this we mean that most ISs will react in the expected (usual) way, since most of their creators intend to provide users with relatively predictable reactions in order to engage them with the system or, in other words, to engage the system with them.

In this sense there is a wide range of possible ways an IS may react. A better description of how interactive an IS can be was given by Douglas Engelbart when he coined the term Intelligence Amplifier1 in reference to the computer. This means that if we push ourselves and make demands of the system, it will answer in accordance with our action, provided the system is accurately prepared.

Agostino Di Scipio formulated an interesting reflection on the behavior of ISs:


[…] As an example, the sudden occurrence of, say, too large a mass of notes or sound grains or other musical atom units would not automatically induce a decrease in amplitude (a perceptually correlated dimension of density): such an adjustment remains a chance left to the performer. No interdependency among processes is usually implemented within the interactive system. […]2

1 Douglas Engelbart, Augmenting Human Intellect: A Conceptual Framework, AFOSR-3233 Summary Report, 1962.
2 Agostino Di Scipio, Sound is the Interface, Proceedings of the Colloquium on Musical Informatics, Firenze, 8-10 May 2003.


Hence, composers who implement this type of system are directly responsible for the results obtained when performers interact with it, and for how the system will be perceived by users/performers.

In other words, the success or failure of the task is typically defined a priori, thus enabling researchers to establish how quickly and efficiently users are able to achieve their task-defined goals. In this model the computer acts as a tool whose evolution is mediated by increased intelligence and has as its goal a "partnership" with the user in the sense of J. C. R. Licklider's Man-Computer Symbiosis1. (Kaastra & Fisher 2006)

7_Programming & Coding (Everything comes from the source!)


Programs that execute strictly in time order are said to obey causality because there is never a need for knowledge of the future. This concept is especially important when modeling music perception processes and interactive systems that have to respond to some signal that is yet unknown; for example, a machine for automatic accompaniment of live performance.2

The human brain is born as an empty container which needs to be filled in order to acquire knowledge. In the case of computers, their "brain" is to be filled with code in order to teach them to perform actions and, more importantly, to make them learn and understand from those actions.

From the 1950s, computer scientists began to theorize about the possibility of computers that think. After great efforts in psychology and cross-disciplinary studies, Artificial Intelligence (AI) was born, and with it a new standpoint from which we
1 J. C. R. Licklider, Man-Computer Symbiosis, IRE Transactions on Human Factors in Electronics, vol. HFE-1, pp. 4-11, 1960. Cited in: Kaastra, Linda T. and Fisher, Brian, Affording Virtuosity: HCI in the Lifeworld of Collaboration, CHI 2006, April 22-27, 2006, Montreal, Canada, p. 1.
2 Dannenberg, R. B., Desain, P., & Honing, H.: Programming language design for music. In de Poli, G., Picialli, A., Pope, S. T. & Roads, C. (eds.): Musical Signal Processing. Lisse: Swets & Zeitlinger, 1997.


began to think about the possibility of machines that learn. Learning machines are capable of simulating human behavior by means of Neural Networks1 (NNs). We will focus on these two topics in the next chapter; before moving forward, however, we first need to define Intelligence.

Regarding intelligence, we would like to refer to Jean Piaget's definition:


"Intelligence is assimilation to the extent that it incorporates all the given data of experience within its framework [] There can be no doubt either, that mental life is also accommodation to the environment. Assimilation can never be pure because by incorporating new elements into its earlier schemata the intelligence constantly modifies the latter in order to adjust them to new elements"2

Therefore, we could say that a system that enables a machine to increase its knowledge through a learning process and improve its skills would be an intelligent one. Many, if not most, of these intelligent models employ computational modeling to simulate human brain behavior by implementing arrays of Neural Networks (NNs).

In recent decades, computational modeling has become a well-established research method in many fields, including Music Cognition. There are two clearly definable approaches. The first aims to model musical knowledge departing from music theory, using methodical formalization in order to contribute to the understanding of the theories employed, the predictions they make, and the scope of both. The second approach aims to construct theories of music cognition. Here, the objective is to understand music perception and music performance by formalizing the mental processes involved in listening to and in
1 An Artificial Neural Network (ANN), also called a Simulated Neural Network (SNN) or commonly just Neural Network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network. In more practical terms, neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data. (URL 11)
2 Jean Piaget, The psychology of intelligence. New York: Routledge, 1963, pp. 6-7.


performing music. The two approaches have different aims and can be seen as complementary. (Honing, 2006)

7.1_Object-Oriented Programming

Object-Oriented Programming (OOP) is the name certain programming languages (e.g., Common Lisp, Java) receive when they have certain methodological properties related to organizational structure and taxonomy within the code.

For instance, consider an object inside a program as a citizen in a society. Each one has a different role that the others do not have, but each needs the others to perform their roles in order for the whole to succeed.

Therefore, when using object-oriented programming, the overall program is made up of many different self-contained components (objects), each of which has a specific role in the program and all of which can talk to each other in predefined ways. OOP environments are well suited to simulating brain behaviors, since each object inside the code can represent a Neuron; Neural Networks (NNs) can thus be created and interrelated in order to simulate human brain behavior.

Thus, if we decide to conceive any kind of Interactive System implementing Artificial Intelligence, and hence Neural Networks, an OOP environment is a natural choice for our programming.

A designed network consists of a series of additions and multiplications along with a transfer function. A neural network is made up of layers, each of which contains several neurons. Each neuron's operation can be considered as a vector operation. (Cont et al. 2000)
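These "additions and multiplications along with a transfer function" can be written out directly, and each neuron can be one of the self-contained objects just described. The following is our own minimal sketch, not code from any of the cited systems:

```python
import math

class Neuron:
    """One unit: a weighted sum of inputs passed through a transfer function."""
    def __init__(self, weights, bias=0.0):
        self.weights = weights
        self.bias = bias

    def activate(self, inputs):
        # vector operation: dot product of weights and inputs, plus bias
        s = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        # sigmoid transfer function squashes the sum into (0, 1)
        return 1.0 / (1.0 + math.exp(-s))

class Layer:
    """A layer is simply several neurons sharing the same input vector."""
    def __init__(self, neurons):
        self.neurons = neurons

    def forward(self, inputs):
        return [n.activate(inputs) for n in self.neurons]

layer = Layer([Neuron([0.5, -0.5]), Neuron([1.0, 1.0], bias=-1.0)])
out = layer.forward([1.0, 1.0])   # two outputs, each in (0, 1)
```

Stacking several such layers, with the outputs of one feeding the inputs of the next, yields the networks discussed above.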


We should mention that Neural Networks, a form of Connectionist model, utilize a sub-symbolic representation of neural activity based around abstracted neuron components.

The training session of a neural network does not require an expertise to realize the network. An empiric approach with trials and errors would eventually make the network converge to the desired behavior. However, a clever choice of network architecture and parameters would save a lot of time in realizing the network. (Cont et al. 2000)

Projects dealing with this type of programming benefit from the fact that NNs provide flexible and adaptive algorithms. In other words, programs employing NNs are able to compare different instances and then, based on previous events, learn the actions needed to improve their functioning.

In other chapters we have mentioned the problem of mapping when referring to the translation of gesture into sound, or its interpretation by computer systems. Programs including NNs provide a flexible tool for performing this gesture translation through mapping strategies.

In research carried out at La Kitchen, Paris (Cont et al. 2000), a Neural Network simulation was implemented to perform gestural mapping within the Pure Data environment.

Connectionism is an approach in the fields of artificial intelligence, cognitive science, neuroscience, psychology and philosophy of mind. Connectionism models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. There are many different forms of connectionism, but the most common forms utilize neural network models. The central connectionist principle is that mental phenomena can be described by interconnected networks of simple units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses. Another model might make each unit in the network a word, and each connection an indication of semantic similarity. (URL 12)


They have compared some traditional mapping methods with their implemented system and mentioned the following advantages:
- The user can work directly with the desired correspondence between gesture and produced results, rather than with the complex mechanism of the mapping algorithm.
- The empirical approach of neural networks can evade the complexity of formalizing the problem.
- The system can perform well even in the presence of non-linearity and noise in the input.
- The ability to map unseen input patterns.
- Cheap computation compared to other methods of complex mapping.
- Neural networks do not require expertise to train and maintain.
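The "empirical approach with trials and errors" can be illustrated with the simplest possible case: a single linear unit trained by error correction to map a two-dimensional gesture feature (say, hand position) to one synthesis parameter. This is our own toy sketch under those assumptions, not the La Kitchen implementation:

```python
# Training pairs: (gesture features) -> desired synthesis parameter.
examples = [((0.0, 0.0), 0.0), ((1.0, 0.0), 0.5),
            ((0.0, 1.0), 0.5), ((1.0, 1.0), 1.0)]

w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(2000):               # repeated exposure to the examples
    for (x1, x2), target in examples:
        predicted = w[0] * x1 + w[1] * x2 + bias
        error = target - predicted  # trial and error: correct the weights
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

# After training, an unseen gesture (0.5, 0.5) is mapped by generalization:
param = w[0] * 0.5 + w[1] * 0.5 + bias
```

The network was never shown the input (0.5, 0.5), yet it produces a sensible in-between parameter value, which is exactly the "mapping of unseen input patterns" claimed above.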

There is a major sub-field of Artificial Intelligence called Machine Learning (ML). Machine learning is inspired by neurophysiology and also employs the neural network (connectionist) model.

Eduardo Reck Miranda defined Machine Learning's aim as providing mechanisms so that the desired computation may be achieved simply by repeatedly exposing the system to examples of the desired behavior. As a result of learning, the system records the "behavior" in a network of single processors (metaphorically called "neurons"). (Reck Miranda 2000)

Reck Miranda also mentioned that perhaps the most popular current debate in ML, and in AI in general, is between the sub-symbolic and the symbolic approaches.


7.2_Live Coding Practice


The degree of challenge and flexibility in programming music software can be characterized along various continua.1

As people saw the potential of programming, they tried to achieve the feeling of real live improvisation with the computer, emulating properties of instrumental improvisation. Traditionally, most computer music programs tended toward the old write/compile/run model, which evolved when computers were much less powerful. Some programs gradually integrated real-time controllers and gesturing (for example, MIDI-driven software synthesis and parameter control). Later, programs like Ableton Live and Propellerhead's Reason appeared, offering sequencing, triggering and processing controls with quite straightforward GUIs, but they do not support the algorithmic exploration and customization potential of graphic programming environments like Max/MSP or Pure Data.

Until recently, however, the musician/composer rarely had the capability of real-time modification of the program code itself. This legacy distinction was somewhat erased by programs such as SuperCollider (McCartney 2002) and ChucK (Wang and Cook 2003). These music-oriented programming environments gave performer-programmers the possibility to experiment with a new aesthetic current in live computer music called Live Coding.

Live coding (Ward et al. 2004, Collins et al. 2003, Collins 2003) was born out of the possibility of programming on stage with interpreted languages. Live coding is the activity of writing (parts of) a program while it runs. It thus deeply connects algorithmic causality with the perceived outcome, and by deconstructing the temporal dichotomy of tool and product it allows code to be brought into
1 Alan Blackwell and Nick Collins, The Programming Language as a Musical Instrument, in P. Romero, J. Good, E. Acosta Chaparro & S. Bryant (eds.), Proc. PPIG 17, pp. 120-130, 17th Workshop of the Psychology of Programming Interest Group, Sussex University, June 2005.


play as an artistic process. The nature of such running generative algorithms is that they are modified in real time; compilation and execution that are as fast as possible assist the immediacy with which this control is applied. Whilst one might alter the data set, it is the modification of the instructions and control flow of the processes themselves that contributes the most exciting action of the medium.
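The idea of modifying the instructions of a running process can be mimicked in miniature: a scheduler keeps calling a function while that function is rebound, so the change takes effect mid-performance. The sketch below is our own schematic illustration in Python; real live-coding environments such as SuperCollider or ChucK do this with interpreted code and precise audio timing.

```python
# The "instrument": a process the scheduler calls once per beat,
# returning a MIDI-style note number.
def pattern(beat):
    return 60 + (beat % 4)          # a rising four-note figure

events = []
for beat in range(8):               # the scheduler keeps running...
    events.append(pattern(beat))
    if beat == 3:
        # Live coding: redefine the running process on the fly.
        def pattern(beat):
            return 72 - (beat % 4)  # now a falling figure, an octave up
```

The first four beats play the original figure; from beat 4 onward the redefined one sounds, without the loop ever stopping.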

One of the first languages to be used for this purpose was FORTH. In fact, the first known live coding performance is that of Ron Kuivila in 1985 at the STEIM music research institute in Amsterdam. Kuivila performed on a desktop computer for about half an hour using FORTH, until the system crashed. The Hub, notable as the first computer network band, were also active in the late 80s, often programming new processes during performance, though this was not made an explicit part of the act.

Live coding is increasingly demonstrated as the act of programming under real-time constraints, for an audience, as an artistic entertainment. Any software art may be programmed, but the program should have demonstrable consequences within the performance venue. A performance that makes successive stages of the code construction observable, for example through the use of an interpreted programming language, may be the preferred dramatic option. Text interfaces and compiler quirks are part of the fun, but a level of virtuosity in dealing with the problems of code itself, as well as depth of algorithmic beauty, may provide a more connoisseurial angle on the display.

A second wave of live coding began around the year 2000 with laptop performers following Julian Rohrhuber's experiments with SuperCollider, including his Just in Time Library for performative code, and the live shows of the custom-software laptop duo Slub, who followed a mantra of 'project your screens' to engage audiences with their novel command-line based music programs. Recent years have seen further expansion of live coding activity, and the formation of an international body to support live coders, TOPLAP (Ward et al. 2004).


Perhaps the prototype live coders are improvising mathematicians, changing their proofs in a public lecture after a sudden revelation, or working privately at the speed of thought, with guided trial and error, through possible intuitive routes into a tricky problem. There are always limits to the abstract representations a human mind can track and whose outcomes it can predict. In some cases trial and error becomes necessary, and cause and effect are too convoluted to follow. As a thought experiment, imagine changing on-the-fly a meta-program that was acting as an automated algorithm changer. This is really just a change to many things at once. Iterate that process, piling programs acting on programs. Psychological research suggests that more than eight levels of abstraction are beyond the ability of humans to track. Live coding allows the exploration of abstract algorithm spaces as an intellectual improvisation. As an intellectual activity it may be collaborative.

Coding and theorizing may be a social act. If there is an audience, revealing, provoking and challenging them with the bare-bones mathematics can hopefully make them follow along or even take part in the expedition. These issues are in some ways independent of the computer, when it is the appreciation and exploration of algorithm that matters.

8_The (Concert) Space as an Interactive Element


Inhabitants are full participants, users, performers of space.
1

We believe that the physical space itself, though explored in studio situations or in the diffusion of Electroacoustic (Acousmatic) music, is not often considered when establishing interactivity within a Live Computer Music framework. That space (the concert space) becomes, during a performance, an environment that directly affects the sounds produced by any sound-emitting object (instrument or not). This will change the effect each wave of emitted sound
1

Henri Lefebvre The Production of Space, Oxford: Blackwell Publishers, 2001.

74

produces over the audience and also over the performer(s) responsible(s) of the sound source.

Therefore, we consider that establishing a proper relationship between the performer(s), the music and the concert space helps obtain the maximum interactivity between elements, taking advantage of the ways the concert space modifies sound.

Graphic 5 shows the proposed flux of information exchange between the elements of an Interactive Musical Performance System when the concert space is taken into account.

Graphic 5, Data Flux between the elements of an IMPS and the Concert Space

Regarding our proposal for integrating the physical space into IMPSs, composer Dominique M. Richard mentioned that computer music technology helps develop a renewed relation with sound fields. For instance, it allows the introduction of spatial direction as a compositional device through the manipulation of loudspeakers. Stereophonic effects and the illusion of movement are among the simplest of such devices. But beyond these methods, which may sometimes appear almost as a caricature, the technology of sound diffusion helps displace the listener-centered perspective of music apprehension and again subverts the composer/listener dichotomy by offering a multiplicity of readings in a single performance. In these cases, the projection of music becomes the establishment of a sound field to negotiate rather than the catering to the sweet spot at the middle of the concert hall. The audience members become co-creators of their experience through interaction with the sound field, which they modulate by moving and changing position. (Richard 2000)

This is mainly why we believe in the notion of a symbiotic integration between IMPSs and their surrounding space. We started by considering Installations as the ideal of integration. However, in order to include our previously described IMPSs, we need to consider a Concert-Installation framework as the means to achieve our goal, since it would integrate the already described Music Cycle (Graphic 1) with its containing space, which would double not only as another element but also as the structure of the new system.

8.1_Some Considerations on Sound Diffusion


It may well be the case that the computer acts in future as a virtual listener and assists with the diffusion.
(Adrian Moore, Sound Diffusion and Performance: New Methods, New Music. Music Department, University of Sheffield. URL 13)

Denis Smalley has defined many of the spatial characteristics that concern composers working with electroacoustic materials. He describes five categories that define space: spectral space, time as space, resonance, spatial articulation in composition, and the transference of composed spatial articulation into the listening environment. Regarding the last of these, Smalley stated that it is "a question of adapting gesture and texture so that multi-level focus is possible for as many listeners as possible", and that "in a medium which relies on the observation and discrimination of qualitative differences, where spectral criteria are so much the product of sound quality, the final act becomes the most crucial of all". (Smalley 1986)

In our research we are deeply concerned with the integration of interactive systems, at least ours, with their surrounding space: the physical place where they are employed or implemented. This is mainly due to our past work in the Acousmatic field. In Acousmatic music, which continued the tradition of Musique Concrète, composition and performance are inextricably linked; sound diffusion therefore becomes a continuation of the compositional process.

However, our present concern focuses on the live performance phenomenon, which does not include any pre-recorded material as the diffusion of Acousmatic music does. We share the concept of sound diffusion as a continuation of the compositional process, here in real time. Our intention is focused on a system which diffuses sound material produced in real time in order to set up a relation between the gesturally controlled acoustic element and the acoustic space. In that sense, we find that musical-expressive gestures represent the channel for inter-relating the music with its surrounding space.

In order to explain some other meanings of Sound Diffusion, we would like to refer to a statement by the British musician Jonathan Arnold on Sound Diffusion and Musical Expression.

In Arnold's words, musical expression forms a very important, albeit natural, part of any musical performance and is something that cannot as yet be completely replicated or recreated by a computer in a performance aspect. […] human perception forms an important part in the process of analysis and diffusion that does not use any computer assistance. […] The concept of diffusion as a performance aspect for the delivery of electroacoustic music is an important one. Diffusion controls the spatial distribution of music played through an array of loudspeakers at a concert venue. […] The act of diffusion significantly enhances the effect of listening to a piece of electroacoustic music. One of the most important aspects of diffusion is how the diffuser responds to the music. Various aspects of the audio content tend to provide cues to carry out a specific panning gesture. The diffusionist reacts to onsets of audio segments, and generally articulates the diffusion patterns in response to the amount of activity within the music. It is the content and sonic gestures contained in the music that a system aims at analyzing and translating through to the diffusion system. (Arnold 2005)

In that sense we find it appropriate to establish a relationship between the spatial factor of a performance and the music being played, since each would empower the musical result as a continuation of the other. However, in order to achieve better results, this task should be done algorithmically by the computer, basing its decisions on sound analysis.

8.2_ The importance of the physical space


Our notions of performance are intrinsically connected to inhabited space, as our daily performative actions are greatly modulated by the spaces we inhabit.
(Pedro Rebelo, Performing Space, Architecture, School of Arts, Culture and Environments, The University of Edinburgh, 2003)

When conceiving a musical piece, not considering the actual conditions of the place where the music will be performed means losing important information (features) from the structure of the piece, mainly because the physical space will directly affect the sonic outcome and thus the music.

In both Acoustic and Electroacoustic music (EAm), the physical space is a key element in the acoustic result of the performance. This is especially true in EA music, where complex compositional concepts related to sound objects, the morphology of sound, musical semiotics and the spectro-morphology of sound are frequently implemented; a proper placement of the speaker setup within the space will make a great difference in the sonic response of the room to the music and vice versa. Loudness control will modify the musical result in ways similar to the spatial distribution of the signal.

If not considered, this would play against the musical result; properly considered, it enhances the musical performance. For instance, we can further develop the behavior of the inner space of an Acousmatic piece by making an appropriate translation or adaptation into the physical space (the concert space) where it is to be presented.

In a live-electronics situation, that is, a situation including live electronics as well as acoustic instrument(s), the proper use of sound positioning, directly or indirectly related to the performers' location within the space, can be stressed by implementing a surround system of speakers, as well as by making the needed modifications or adaptations to the concert space, mainly regarding the placement of the stage (performers) and audience, in order to produce the sensation of Sound Immersion.

8.2.1_Musical implications of the sound space


Computer music, often structured through formalized models, is rich in possibilities for the invention of such abstract spaces.
(Dominique M. Richard, Holzwege on Mount Fuji: a doctrine of no-aesthetics for computer and electroacoustic music. Organised Sound 5(3): 127-133. Cambridge University Press, 2000)

The space metaphor extends to abstract spaces as well, though the placement of sound around a performing space is not new. The need to carve space physically has remained symbolically and musically important, from cantoris/decani antiphonal choral music and double-chorus works such as Bach's St. Matthew Passion, through the Italian Renaissance concept of cori spezzati (broken choirs) in Willaert and Gabrieli, and from Baroque organ works to Stockhausen's Gruppen: composers have long delighted in working with space and the performance situation.

The arrival of computer-controlled sound diffusion techniques has expanded these possibilities, making topological listening spaces grow in complexity. Xenakis' Metastaseis (Xenakis 1992), a homomorph of the Philips Pavilion at the Brussels World's Fair in 1958 (a collaboration between Le Corbusier, Varèse and Xenakis), as well as his various Polytopes (1975) and La Légende d'Eer (for the inauguration of the Georges Pompidou Centre), or Boulez's Répons (1981), together with works by Luigi Nono and Luciano Berio, are some examples of this approach to constructing an acoustic space.

We would also like to refer to a concept by the French composer-researcher Anne Sedes. Sedes has worked on interactive musical pieces as well as installations, establishing different ways of translating the types of energy employed during a performance, i.e., instrumental, sonic, visual and scenographic energy, to name a few. In order to achieve those translations, Sedes employed different types of instrumental and gestural interfaces. Sedes claims that in this way the sonic space becomes conscious of its surrounding environment: the sonic space, l'espace sonore, becomes the sensible space, l'espace sensible. (Sedes et al. 2003) (Sedes 2003)

Our goal is not just to learn what that space looks like but also to understand the implications it has for our performance, in order to find significant ways to explore it and make it work to our purpose. Obviously, the main difficulty is to define the possible relationships between sounds and space. Depending on the purpose, we can translate this information into a musical dimension and combine sounds and space by their musical and physical (acoustical) significance.


Section II

Description of the creative process of the piece un punto…todos los puntos


I do believe that a composer of electroacoustic music must take the full responsibility for creating a complete musical structure where sound and event-structures are mutually dependent and in-exchangeable.
(Åke Parmerud, Tendencies in Electroacoustic Composition, Swedish Royal Academy of Music, early 1990s)


1_Overview
Musical systems are suggestive in nature and beget new systems.
(Ralph Towner, Improvisation and Performance Techniques for Classical and Acoustic Guitar, p. 82)

The interactive piece un punto…todos los puntos (one dot…all the dots) is based on the concept of extending time perception through continuum/discontinuum variations (combinations or interruptions). The piece is therefore conceived as a whole, with no pauses or gaps, although small fermatas are employed within the musical discourse to produce small discontinuities in the continuum.

Besides the obvious interaction between the musicians and their instruments, and among the musicians themselves, the computer plays an essential role throughout the piece by integrating the sounds produced by the instruments with the concert space. To achieve this, sound Spatialization through a quadraphonic system is employed in order to obtain a wider sound coverage. The main interactive principle between the instruments (including the computer, here considered a meta-instrument) and the sound Spatialization within the space relies on sound analysis of the incoming signal from the instruments, which determines the properties of the notes played and, mainly, the amount (or type) of musical activity related to the physical point where it is produced. This leads to an automatic Spatialization by the computer, which finds the most balanced possibility for sound placement in order to represent the continuum as an external (physical) phenomenon outside the score or musical notes.

The name of the piece grew out of some thoughts on the short story El Aleph (The Aleph) by the Argentine writer Jorge Luis Borges. The Aleph is described as a cosmic point of 2-3 cm diameter, located in the basement of a house, through which the whole Universe can be seen at a single glance; as Borges writes, the whole Universe can be seen simultaneously.


One possible explanation, at least one that we like, is to understand it as an infinite succession of aligned dots forming what we know as a line.

Therefore, the very first point would be a reflection of the very last and vice versa, actually including all the points in between, producing an infinite succession of the (in reality nonexistent) end or beginning point, and resulting in a great amount of concentrated and sustained energy.

2_Aims
The function of the vocabulary is not necessarily to increase the amount of verbiage to be used, but to extend the range of choices available from your expressive palette.
(Ralph Towner, Improvisation and Performance Techniques for Classical and Acoustic Guitar, p. 6)

In Part I we covered a few topics on IMPSs and presented our ideas in that area. In this second part, we focus on the conception of a new interactive model employed in a musical performance.

So far we have mainly described HCI systems with little to no consideration of acoustic principles, and even less of instruments. The system presented in this second part of our research, however, makes use of an ensemble of acoustic instruments (Soprano Saxophone, Bass Clarinet and a set of Percussion instruments) to produce interaction with a computer system. We consider the ensemble an instrument in itself: the acoustic and sound properties of its several components provide an enhanced result, that is, the sum of X elements yields a new, richer and more complex sound. Moreover, we believe that combining acoustic instrument(s) with a computer system mixes two extensively wide palettes. The sonic result will definitely be broad and, depending on the DSP effects applied, perhaps a hybrid one. This new hybrid offers us different ways of interaction with the system. We could establish relationships not just between the performer and the computer, or between the performer (musician) and the instrument, but also between the instrument (considering its organologic features) and the computer. An even richer result can be obtained by combining the three elements and establishing different relationships (ways of interaction) among them, thus obtaining a more interesting musical discourse (an exchange of information) between each element that constitutes the music (the whole).

Furthermore, with the rise of live-electronics music, composers have usually become performers as well. A strength of the system proposed in this project is therefore to enhance the performance skills of the composer and the composing skills of the performer by creating a cross-fade between the two roles. By placing the computer performer within the concert space where the other musicians are placed, we aim to create an interdisciplinary ensemble.

3_General Description
The ensemble becomes an esthetic provocation: beauty as a refusal of habit.
(Helmut Lachenmann, on Pression; quoted by Nicolas Bullot, www.interdisciplines.org/artcog/papers/5, 2003)

Any musical performance involves the generation of sound material by different means. Taking advantage of this acoustic principle, we have created an IMPS that enhances the exchange of information between the musicians, the computer and the concert space.

In our system we use sound itself as the controller for interaction: we analyze the incoming signal from the acoustic instruments and convert that acoustic signal into data to which the program reacts.


We have chosen the audio signal as our control source since it carries information directly from the performer's natural musical expression. Pitch tracking and, to a lesser extent, amplitude following are both unsatisfactory on their own, reflecting a reductive approach that simplifies the musical input to the note-orientated language of MIDI. The technical limitations of MIDI, for instance its fixed bit depth, have been frequently noted, for instance by David Wessel, who also proposed that audio signals may provide a means of stream control with vastly improved flexibility, resolution and responsiveness. Adapting this approach, we suggest that signal-based control parameters may be derived from the live performance by means of spectral analysis.
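As a minimal sketch (not the thesis patch) of what such signal-based control parameters might look like, the following derives two values from a single audio frame: RMS amplitude and spectral centroid, the kind of continuous descriptors that go beyond MIDI's note-orientated representation.

```python
import numpy as np

def control_params(frame, sample_rate=44100):
    """Return (rms, centroid_hz) for one audio frame."""
    windowed = frame * np.hanning(len(frame))
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return rms, centroid

# A 440 Hz test tone: the centroid should land in the broad vicinity
# of 440 Hz (spectral leakage biases it somewhat upward).
t = np.arange(1024) / 44100
rms, centroid = control_params(np.sin(2 * np.pi * 440 * t))
```

In a real patch these values would be computed per analysis frame and mapped onto synthesis or spatialization parameters.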

The signal is used as a form of wireless communication that carries information about itself. There is a continuum which evolves from traditional instrumental sound (via extended techniques of production) to live effects processing and, ultimately, to sample triggering. In this extreme case there is no longer any necessary musical relation between the triggering event (performance activity) and the sonic outcome, although the possibility remains of shaping the characteristics of pre-recorded samples during live playback. Connections can also be established, in performance, between the audio from sample playback and control parameters that engender further computer responses.

We would also like to demonstrate that computer-based IMPSs are becoming more and more necessary, perhaps as an act of emergence. Since there is a super-abundance of computer-produced music, and 90% of it shows no contemplation of the relationship between composers and their (potential) instrument, the computer, we feel a compulsory need for a fast, fluid and reliable way of communication, interactivity (action and reaction), between performers and computers, among performers themselves, between computers and acoustic instruments, and between performers and the sound space.


4_Overall Musical Considerations


Where language ends, music begins.
(Zbigniew Karkowski, The Method is Science, The Aim is Religion, Amsterdam, March 1992)

We have used a linear musical contour (form) as the overall hierarchical principle of the music. Most transformations, whether temporal, rhythmic, melodic, spectral or timbral, are therefore achieved as slow processes. In addition, several temporal articulations (fermatas and pauses) are employed to produce small divisions within the musical discourse.

There are, however, a few occasions where sudden changes are introduced in order to produce fast mood shifts, contrasting with the continuum.

The point of departure is the spectral analysis of the lowest note of the Bass Clarinet, a D2. From that spectrum the music evolves slowly over almost three minutes, transforming (compressing and expanding) the spectrum until a big spectral change is introduced in the first bar of Section 6. At that point the harmonic spectrum of D2 evolves (is transformed) into an inharmonic spectrum (that of a Suspended Cymbal, analyzed with an FFT analyzer) by introducing small deviations to the original spectrum.
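The harmonic-to-inharmonic trajectory described above can be sketched numerically. We assume here that "small deviations" means detuning each harmonic partial, modelled below by an exponential stretch factor; the stretch value and the interpolation scheme are our choices, not values stated in the text.

```python
D2 = 73.42  # Hz, the lowest note of the bass clarinet, as in the text

def harmonic_spectrum(f0, n_partials=10):
    # exact integer multiples of the fundamental
    return [f0 * k for k in range(1, n_partials + 1)]

def inharmonic_spectrum(f0, n_partials=10, stretch=1.01):
    # partial k is pushed progressively sharp by stretch**k
    return [f0 * k * stretch ** k for k in range(1, n_partials + 1)]

def interpolate(a, b, amount):
    """amount = 0 gives the harmonic set, 1 the inharmonic one."""
    return [x + amount * (y - x) for x, y in zip(a, b)]

harm = harmonic_spectrum(D2)
inharm = inharmonic_spectrum(D2)
halfway = interpolate(harm, inharm, 0.5)
```

Sweeping `amount` from 0 to 1 over the course of a section would reproduce, in principle, the slow spectral evolution the piece calls for.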


5_Programming

5.1_Conception of the Program

Aim(s) of the Project

Development and implementation of a program, specifically a patch built in the Max/MSP programming environment. The program enhances interaction between acoustic instruments (soprano saxophone, bass clarinet and percussion), a computer (played by a performer) and the space1.

5.2_Description of the Different Sections

Overall Description

The program has several instances (sections), each of which performs a different task. Some are independent from the others (Section I), while all the other sections are interconnected in order to facilitate the performance.

Section I is dedicated to real-time processing of the incoming signal from the instruments. Section II captures (samples) sounds from the incoming signal of the acoustic instruments and creates sound objects (small portions of musical phrases produced by the instruments, stored in the computer to be played back later). The sound objects are then fed into Section III, which is dedicated to sound Spatialization (sound diffusion throughout the concert space), aided by a computer algorithm which, based on the pitch (FFT) and amplitude (loudness) analysis performed in Section IV, decides where to place the different sound objects (through the speakers) in order to achieve

The word space is understood here as the concert space, the physical space where the performance takes place.


the best acoustic balance of the overall sound (sound objects plus direct signal produced by acoustic instruments).

5.3_Real Time DSP features

Section I

The first section of the patch is dedicated exclusively to real-time signal processing (DSP). It provides the performer with several types of signal processing in order to enrich the musical result by enlarging the musical spectra. The following algorithms are included in the final version of the program: Pitch-shifter (modifies the original signal by adding a second frequency to it, producing a ring-modulator-like effect), Convolution (between the unprocessed incoming signal and the pitch-shifted signal), Resynthesis (of the pitch-shifted signal convoluted with pink noise, to achieve a new, richer sound colour with more partials), and Quadraphonic Delay (with independent delay time and feedback per channel).
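The "ring-modulator-like effect" of adding a second frequency can be illustrated with plain ring modulation: multiplying the input by a sine carrier produces the sum and difference frequencies of input and carrier. This is a generic numpy sketch of the effect category, not the Max/MSP implementation used in the patch.

```python
import numpy as np

SR = 44100

def ring_mod(signal, carrier_hz, sr=SR):
    # multiply the input by a sine carrier:
    # sin(a)*sin(b) = 0.5*[cos(a-b) - cos(a+b)]
    t = np.arange(len(signal)) / sr
    return signal * np.sin(2 * np.pi * carrier_hz * t)

# A 440 Hz input ring-modulated at 100 Hz puts energy at 340 Hz and
# 540 Hz, while the original 440 Hz component disappears.
t = np.arange(4096) / SR
out = ring_mod(np.sin(2 * np.pi * 440 * t), 100.0)
spectrum = np.abs(np.fft.rfft(out * np.hanning(len(out))))
freqs = np.fft.rfftfreq(len(out), 1 / SR)
peaks = freqs[np.argsort(spectrum)[-2:]]  # the two strongest bins
```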

The frequencies of the pitch-shifted signal and of the resynthesized sound employ FFT analysis to match the original frequency for an accurate Resynthesis result. However, it is possible to set the Pitch-shifter and the Resynthesis to different frequencies to obtain different melodic/harmonic combinations.


5.4_Creation of Sound Objects1 (S.O.)

Section II

This section can be considered a multi-track sampler where sound objects are stored. It has 4 (four) buffers (storing up to four sound files at a time) that can be continually reset to zero so that new sounds can be stored.

Any of the three acoustic instruments could be the sound source for storing a sound in a buffer.

A control interface was created with controls for playing back the sound objects, plus a volume control for each player.

The aim of this section is not just to feed those objects into Section III (Spatialization) but also to provide the user with an easy-to-use tool for modifying the sonic spectrum by capturing sounds produced by the instruments at a certain point in time and playing them back later. This principle is used throughout the whole performance to reinforce the main idea of the music this program was made for: the correlation of temporal continuum/discontinuum.
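The four-buffer record/playback/reset behaviour described above can be modelled schematically in plain Python (the real section uses MSP buffers; class and method names here are our invention).

```python
class SoundObjectStore:
    """Schematic model of the four-slot sound-object sampler."""

    def __init__(self, slots=4):
        self.buffers = [None] * slots

    def record(self, slot, samples):
        # capture a phrase from an instrument into a slot
        self.buffers[slot] = list(samples)

    def play(self, slot, gain=1.0):
        # play back a stored sound object, scaled by a gain
        if self.buffers[slot] is None:
            return []
        return [s * gain for s in self.buffers[slot]]

    def reset(self, slot):
        # reset the slot to zero so a new sound can be stored
        self.buffers[slot] = None

store = SoundObjectStore()
store.record(0, [0.1, -0.2, 0.3])
phrase = store.play(0, gain=0.5)
store.reset(0)  # the slot is free again for a new sound
```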

We employ the concept of the sound object as defined by Pierre Schaeffer: sound objects are produced by the correlative act of perception that constitutes them, reduced listening. The sound object is thus constituted in the act of perceiving a specific sound percept in an Acousmatic listening situation. In addition, we found an interesting take on Sound Object and Sound Event in Åke Parmerud's early-1990s text Tendencies in Electroacoustic Composition: […] A "soundobject" could mean anything from a single soundevent to a large complex of sounds […] A "soundevent" simply implies a sound of some sort that occurs on a timeline. […]


5.5_Sound Spatialization (Relationships between the inner and the outer space)

Section III of the Program

This project explores the possibilities of musical interaction between several performers (four musicians with their instruments: soprano sax, bass clarinet, percussion and computer) and the concert space where the performance takes place.

In addition, we have established a few rules. The computer is employed as an organizational element (it stores sound elements provided by the other instruments and spatializes them in the concert space). Interaction between the performers and the space is achieved through the computer, while the musicians provide the sonic material.

In order to realize our proposal, we created a section within the program dedicated to the sound diffusion (Spatialization) of the sound objects created in the previous section. It has a graphical user interface (GUI) that provides the user with a group of faders (controlling the Spatialization parameters) and a representational 2D graphic of the space where the performance takes place. In that graphic, the relative position of each sound object being spatialized (up to eight) is simulated in order to provide reliable visual feedback of the sounding events. The graphic of the concert space was made with the LCD object, and the sound objects are represented by circles, each of a different color.

The parameters provided in the GUI are: number of objects (1 ~ 4); incidence area over the concert space (how much energy is fed to each speaker, represented by the size of the circles, in %); and inertia (represented in %, a simulation of the physical phenomenon, employed to affect the amount of time it takes a sound object to trace a certain trajectory).
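One plausible way to model the inertia parameter is exponential smoothing of the object's position toward its target, so that 100% inertia means the object barely moves per control tick. The mapping from a percentage to a smoothing factor is our assumption; the thesis only names the parameter.

```python
def step_position(current, target, inertia_pct):
    """Advance one control tick from current toward target (both 2D)."""
    a = 1.0 - inertia_pct / 100.0   # inertia 0% -> jump, 100% -> frozen
    return (current[0] + a * (target[0] - current[0]),
            current[1] + a * (target[1] - current[1]))

pos = (0.0, 0.0)
for _ in range(3):
    pos = step_position(pos, (1.0, 1.0), inertia_pct=50)
# after three ticks at 50% inertia the object has covered 87.5% of the way
```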


Ideally, an array of 8 or more speakers would provide the best Spatialization setting, since the aim of sound diffusion, at least in this project, is to simulate sound immersion: to surround the listeners with sound, making them experience the feeling of being in the middle of the sound source.

However, due to technical issues, a quadraphonic setting was implemented, facilitating programming while partially achieving the desired results.

Regarding the programming of the Spatialization section, several external objects were tested in order to achieve the best possible result with low CPU usage (vbap~ [by Ville Pulkki, 1999-2003; Windows port by Olaf Matthes]; Stereo-4Pan, GranSpat2.0-4 and SpectralSpat [by Christopher Keyes]), as well as a self-made patch. Most of those objects proved quite effective in terms of sound Spatialization, but some were quite demanding in terms of CPU performance. I therefore chose the object ambipan~ (by R. Mignot and B. Courribet, CICM, Université Paris 8, MSH Paris Nord, ACI Jeunes Chercheurs "Espaces Sonores"), which diffuses sound in 2D (x and y axes) by Ambisonics (providing an excellent result when diffusing mono signals) and gives a reliable result with low CPU consumption.
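The general idea behind 2D Ambisonic panning can be sketched with a textbook first-order encode/decode pair: a mono source at angle theta is encoded into W/X/Y components and decoded to four speakers. This is the standard formulation, not the internals of ambipan~.

```python
import math

def encode(sample, theta):
    # first-order B-format encoding of a mono sample at angle theta
    w = sample * (1 / math.sqrt(2))
    x = sample * math.cos(theta)
    y = sample * math.sin(theta)
    return w, x, y

def decode(w, x, y, speaker_angles):
    # basic decoder: project the sound field onto each speaker direction
    return [0.5 * (w * math.sqrt(2) + x * math.cos(a) + y * math.sin(a))
            for a in speaker_angles]

# square quad layout: front-left, rear-left, rear-right, front-right
QUAD = [math.radians(a) for a in (45, 135, 225, 315)]

# a source placed straight toward the 45-degree speaker
feeds = decode(*encode(1.0, math.radians(45)), QUAD)
```

For a source aimed at one speaker, that speaker receives full level, the opposite speaker receives nothing, and the two adjacent speakers receive partial level, which is the expected behaviour of this simple decoder.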

The (relative) incidence area is represented by an inverse relationship between the loudness and the reverberation amount of the sound source, in order to produce in the listener a realistic effect of the acoustic phenomena of distance and closeness.
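The inverse loudness/reverberation relation can be sketched as: direct level falls with distance while the reverberant share grows. The exact curves below (1/d for the direct sound, linear for the reverb share) are our assumption; the text only states that the relationship is inverse.

```python
def distance_cues(distance, max_distance=10.0):
    """Return (direct_gain, reverb_mix) for a simulated source distance."""
    d = min(max(distance, 0.1), max_distance)  # clamp to a sane range
    direct = 1.0 / d                  # closer -> louder direct sound
    reverb = d / max_distance         # farther -> more reverberation
    return direct, reverb

near = distance_cues(1.0)    # (1.0, 0.1): loud and dry
far = distance_cues(10.0)    # (0.1, 1.0): quiet and reverberant
```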

Considerable effort was made to find a reliable-enough reverberation algorithm that could be instantiated several times (one for each S.O. player, up to eight) while still keeping CPU consumption low (an ever-present topic in DSP processing and in music-oriented computer programming). Again, several externals were tested: Freeverb~ and Monoverb~ (by Olaf Matthes), Gigaverb~ (by Juhana Sadeharju and Olaf Matthes), Tap.verb~ (by Timothy Place) and the NewRev algorithm by Richard Dudas (ported to Max/MSP by Christopher Keyes). All of these algorithms had similar CPU performance, which proved a little too demanding for a live situation if we consider the possibility of up to eight instances. The last algorithm was the friendliest in terms of CPU usage, on the order of 3 to 4% per instance, for a total of around 30% of CPU for the reverberation stage alone, one third of the overall DSP employed in the patch, which was too much for a live implementation. (Even using just one algorithm inside the Poly~ object gave similar results.) Finally, a self-made algorithm was employed, implementing formulas and following indications found in F. Richard Moore's book Elements of Computer Music (pages 380~387). The reverberation obtained from that algorithm proved to have enough quality to be included in the program, as well as being reliable enough (in terms of CPU usage) when used in several instances at once (up to eight, one for each sound object), consuming about 1% of CPU per instance.
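The pages of Moore cited above cover the classic Schroeder-style reverberator (parallel comb filters feeding series allpass filters). The following is our reconstruction of that textbook design, not the author's exact patch; the delay lengths and gains are illustrative values.

```python
def comb(signal, delay, feedback):
    # feedback comb filter: y[n] = x[n] + g * y[n - d]
    out = [0.0] * len(signal)
    for n in range(len(signal)):
        out[n] = signal[n] + (feedback * out[n - delay] if n >= delay else 0.0)
    return out

def allpass(signal, delay, gain):
    # Schroeder allpass: y[n] = -g*x[n] + x[n-d] + g*y[n-d]
    out = [0.0] * len(signal)
    for n in range(len(signal)):
        delayed_in = signal[n - delay] if n >= delay else 0.0
        delayed_out = out[n - delay] if n >= delay else 0.0
        out[n] = -gain * signal[n] + delayed_in + gain * delayed_out
    return out

def schroeder(signal):
    # four parallel combs (mutually prime delays), mixed, then two allpasses
    combs = [comb(signal, d, 0.7) for d in (1051, 1123, 1289, 1307)]
    mixed = [sum(s) / 4.0 for s in zip(*combs)]
    for d, g in ((347, 0.7), (113, 0.7)):
        mixed = allpass(mixed, d, g)
    return mixed

impulse = [1.0] + [0.0] * 4999
tail = schroeder(impulse)  # a decaying, progressively denser response
```

The cheapness of this structure (a handful of delay lines and multiplies per sample) is what makes running many instances feasible, consistent with the low per-instance cost reported above.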

1 = computer 2 = percussion 3 = bass clarinet 4 = soprano sax

In the graphic above, the idea of a concert stage is replaced by an integration of the concert space with the audience and the performer(s), in order to enhance the relation between sound position and performer location. The instruments are placed in the centre of the space, surrounded by eight small groups of seats for the audience.


The speaker setup consists of 8 speakers, each connected to an individual channel of the audio interface. Alternatively, a four-speaker setup could be used to implement a quadraphonic system; however, an octophonic system is preferred.

Ideal Sound Trajectories for Sound Objects in the piece un punto…todos los puntos

A = audience S = speakers

(Concert) Space description for the performance of un punto…todos los puntos


Description of the Incidence Area and Position of the Sound Objects when being Spatialized

5.5.1_FFT 1 Analysis for Sound Spatialization (Section IV)

The last section is actually the most relevant of the overall program, since it is the one which establishes the type of interaction and thus defines the main characteristics of the system. It is dedicated to signal analysis and is divided into two stages: amplitude and pitch analysis. Amplitude analysis is done with the native Max/MSP object peakamp~, whilst pitch is analyzed with the external object pitch~ by Tristan Jehan (which is based on the fiddle~ object by Miller Puckette). That last object decomposes the incoming signal using an FFT algorithm.

The analysis is made in one section of the program and, after the data is output, another algorithm is employed to spatialize the sampled sounds being played back. That second algorithm, after a few considerations, decides where to place the different sound objects (the samples being played back) among the speakers, in order to achieve the best acoustic balance of the overall sound (sound objects plus the direct signal produced by the acoustic instruments). Some relationships derived from the amplitude analysis are combined with those from the FFT analysis in order to address both properties of the sound at once.

1

FFT analysis metaphorically splits the incoming audio into several separate channels, each representing the weight and phase of a frequency bin in the analysis (as idealized sine-wave components). This information can then be used to resynthesize (an estimate of) the input signal; if the time/frequency information is distorted before resynthesis, frequency-domain transforms can be performed, such as spectral filtering, spectral gating and spectral delays.

The FFT windowing used the following values: buffer size, 2048 samples; hop size, 1024 samples; FFT size, 1024 samples; window type, Blackman.
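What these values mean in practice can be illustrated with a simple framing loop: successive analysis frames are taken every hop samples and multiplied by a Blackman window before the FFT. How pitch~ relates its 2048-sample buffer to the 1024-point FFT internally is not shown here; this sketch simply performs 1024-point Blackman analyses at a hop of 1024.

```python
import numpy as np

def stft_frames(signal, fft_size=1024, hop=1024):
    """Magnitude spectra of successive Blackman-windowed frames."""
    window = np.blackman(fft_size)
    frames = []
    for start in range(0, len(signal) - fft_size + 1, hop):
        frame = signal[start:start + fft_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

sr = 44100
t = np.arange(sr) / sr                  # one second of test signal
mags = stft_frames(np.sin(2 * np.pi * 220 * t))
peak_bin = int(mags[0].argmax())
peak_hz = peak_bin * sr / 1024          # close to 220 Hz in every frame
```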

A combination of the data streams coming from both analyses is employed to control the Spatialization of the sounding sound objects: amplitude analysis controls motion on the y axis, while pitch analysis controls motion on the x axis (in the 2D graphic).
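The mapping just stated can be sketched end to end: normalized amplitude drives y, normalized pitch drives x, and (x, y) is turned into four speaker gains by equal-power panning. The normalization ranges and the panning law are our assumptions, not values from the thesis.

```python
import math

def analysis_to_xy(amplitude, pitch_hz, amp_max=1.0,
                   pitch_range=(50.0, 2000.0)):
    """Map (amplitude, pitch) analysis data to a unit-square position."""
    lo, hi = pitch_range
    x = min(max((pitch_hz - lo) / (hi - lo), 0.0), 1.0)  # pitch -> x
    y = min(max(amplitude / amp_max, 0.0), 1.0)          # amplitude -> y
    return x, y

def quad_gains(x, y):
    """Equal-power gains for speakers FL, FR, RL, RR on a unit square."""
    left, right = math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)
    rear, front = math.cos(y * math.pi / 2), math.sin(y * math.pi / 2)
    return (left * front, right * front, left * rear, right * rear)

# mid pitch and mid amplitude place the object at the centre of the square
x, y = analysis_to_xy(amplitude=0.5, pitch_hz=1025.0)
gains = quad_gains(x, y)
```

With equal-power panning the sum of squared gains stays constant, so moving an object does not change its perceived overall loudness.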

6_GUI Design

GUI (Graphical User Interface) [linking all sections]

The GUI was designed to be easy to use and reliable enough for a concert situation. It was also conceived as a simple interface connecting all the previous sections to an external device (control surface, MIDI controller, etc.) that allows the performer to input information to the computer and thus interact with the features the instrument provides.


However, after testing the program in a concert situation (without a control surface), it proved to have a rather complex GUI that is ineffective in terms of performativity. This illustrates one of the pragmatic problems of computer-based music: programs tend to offer more parameters than the user can actually handle, at least with a mouse and in real time. Complex interfaces of this kind, which aim to be used as instruments, become confusing and eventually obsolete.

After this first experience with the program, a new GUI with fewer parameters and clearly defined areas (one for each function) is being developed.

Screenshot of the program's GUI

Previous versions of the patch included a joystick connected to a USB port, interfaced through the hi object inside Max/MSP (an earlier version of this section implemented an object called Joystick, by Dan Schoenblum, but it was later replaced by the hi object in order to make the patch compatible with both Mac and PC). However, the joystick was eventually discarded, since it did not fulfil the expectations in terms of performativity as a control surface.

7_Performance of the Program and Future Work

Each of the sections used in the final patch was first duly tested independently, proving to be effective enough.

A great improvement would be to lower the CPU consumption of Section IV, since its implementation makes the program slightly unstable. Unfortunately, it cannot be discarded, as it is the core of the whole program: both the sound spatialization and the real-time FX are set to work under the command of the data provided by the analysis stage.

In terms of musical features, the overall program seems satisfactory enough for a concert situation. To name a few features: it has several DSP effects programmed to give the performer a wide variety of timbre/color modification and transformation possibilities; an eight-track sampler with the possibility of changing up to four sound sources; a three-channel panel for controlling sound-object parameters; an analysis stage with detailed access to the retrieved data; and a real-time FX section with full control over its parameters.

In terms of performativity, the programmed instrument offers great potential, providing a wide palette of parameters (as any instrument should). Unfortunately, although the program is quite easy to use, its graphical interface becomes somewhat complex in a live situation and tends to confuse the user with all its built-in controllers.


There are, however, three main areas still to improve. One is the (already mentioned) CPU consumption of the analysis stage, presumably caused by the external object pitch~, although no firm conclusion has been reached yet, even after testing different externals for FFT analysis. The second issue is the patching, or programming, itself: after a few months of programming and modifying (improving) the same patch, a great deal of untidiness becomes evident when the patch is switched into edit mode. The bpatcher (windowed sub-patcher) object could have been used, but the idea was always to keep the program as self-contained as possible, with the obvious exception of the externals employed.

Finally, the urgent development and implementation of a new GUI, with fewer parameters and clearly defined areas for the different processes, will definitely improve the use of the instrument (program) in a concert situation, empowering the musical performance.

A new version of the program is being developed and will include an acoustical impulse-response measurement in order to adapt the DSP features to the physical space already mentioned. We strongly believe that this new feature will definitely improve the translation of the sound processes made to and by the instruments into the physical space, thus further reinforcing the concept of interaction through integration.


Section III

Conclusions


1_Conclusions

After having been exposed to several IMPSs, analyzed their functioning and performativity, and performed with our own, we are convinced that the implementation of interactivity in live computer music is a must in order to obtain the maximum performance from the system employed. However, establishing the rules for the interactive elements, that is, deciding the answers to the whys and the hows, is not an easy task. Hence, a deep study of the aim and scope of the project is needed beforehand.

Regarding gestural performance, we are conscious that finding the proper translation of physical-musical gestures is not an easy task, though we are convinced that new mapping strategies will help performers find the proper way to fill the current gap.

Concerning the notion of a symbiotic integration of our systems with the physical space, we have realized that gradually moving into the field of installations, mainly concert-installations, would take us in the right direction. This type of artistic expression aims to blur the borders of the composer-performer-listener cycle, placing all three together within the frame of the physical space where the installation is held and thus making the three elements exist within the fourth.

We have also found three related yet different research fields which are currently being widely explored and which suggest different ways of approaching interactivity and its development. First, we keep promoting IMPSs in the traditional concert situation, while pursuing new ways of integrating the concert space and the audience (as mentioned above), the musical gesture, performers and computers. The second, perhaps somewhat newer, field concerns portable devices that allow interaction. This is a growing area empowered by digital technologies, and we see in it a huge potential: it represents the perfect chance to think of interaction regardless of the physical location, relating instead directly to the device, the performer (user) and the virtual environment. The last, probably the newest, field is the Internet. This entirely virtual environment would allow performers to find new forms of multi-player interaction and thus redefine the parameters of performance: spaceless concerts in which the audience would be performers as well, achieving a full integration of the music cycle described in Graphics 1 and 2, in a way different from the one mentioned above regarding installations.

Regarding the action-reaction loop, we are concerned with the proper translation of performance gestures into sound. In this field we found that the problem of mapping is essential for computer music performance, mainly because the rapid growth of technology challenges programmers as well as performers. Thus, new and intelligent mapping algorithms and controllers need to be designed. For that purpose, we feel that implementing neural-network simulations for the creation of adaptive mapping tools would provide an efficient answer.

Finally, we would like to remark that we are convinced that current tendencies in interactive musical systems are starting to redefine what we will consider music in the near future, or at least to define how music will be conceived, performed and perceived in the coming years.


Section IV

Acknowledgements


1_Acknowledgements

This work would not have been possible without the wise guidance and contributions of Prof. Horacio Vaggione.

I would also like to thank Prof. Anne Sedes and Prof. José Manuel López López for their advice and understanding throughout the whole year.

Thanks also to the people who, in one way or another, helped me during this quite strange year. In Italy (Padua): Damián, Valeria, Diego, Paola, Diego (2), Daniela, Dimitri, Sabrina and Flavio. In France (Paris): Abril, Inés, Marc, Carlos, Pedro and Pedro (2). In Spain (Barcelona): María, Juan Pablo, Lucas, Luciana and Gerónimo. In England (London): Sebastián and Natacha. In Argentina (San Nicolás, Rosario and Buenos Aires): Santiago, Agustín, Guillermo, Marcelo, Federico, Daniel, Leandro, Santiago (2), Franco, Guillermo (2), Hernán, Guillermo (3), Juan Pablo, Martín, Jeremías and so many others that naming them all would take a whole chapter.

Special thanks to Luciana Porini, for her spiritual guidance.

Thanks also to Cecilia González for her patience and support throughout this period of research.

Finally, this work is specially dedicated to my family (Leonid, Maggie and Nadia) for their unconditional support, always and everywhere.

To all of them, my gratitude.


Section V

Resources


1_References

Arnold, Jonathan, "Analysis and Automatic Diffusion of Electroacoustic Music", MA Thesis, Sonic Arts Research Centre, Queen's University Belfast, UK, September 2005.
Barsalou, L.W. (1999), "Perceptual symbol systems", Behavioral and Brain Sciences, 22, 577-609.
Battey, Bret (1998), "An Investigation into the Relationship between Language, Gesture and Music", http://staff.washington.edu/bbattey/Ideas/lang-gestmus.html
Bongers, Bert, "Physical Interfaces in the Electronic Arts: Interaction Theory and Interfacing Techniques for Real-Time Performance", in Trends in Gestural Control of Music, IRCAM (Institut de Recherche et Coordination Acoustique/Musique), France, ISBN 2-84426-039-x, April 2000.
Bowers, John and Archer, Phil, "Not Hyper, Not Meta, Not Cyber but Infra-Instruments", Proceedings of NIME 05, Vancouver, BC, Canada, 2005, pp. 5-10.
Cadoz, Claude, "Musique, Geste et Technologie", in Les Nouveaux Gestes de la Musique, Éditions Parenthèses, 1999, pp. 47-92.
Camurri, A., Hashimoto, S., Ricchetti, M., Trocca, R., Suzuki, K. and Volpe, G. (2000a), "EyesWeb: Toward Gesture and Affect Recognition in Interactive Dance and Music Systems", Computer Music Journal, 24:1, pp. 57-69, MIT Press, Spring 2000.
Camurri, A., Coletta, P., Peri, M., Ricchetti, M., Ricci, A., Trocca, R. and Volpe, G. (2000b), "A real-time platform for interactive dance and music systems", 2000.
Camurri, A., "Informatica musicale: spazio, espressività e corporeità in interazione", 2004, pp. 297-315.
Camurri, Antonio; Mazzarino, Barbara; Menocci, Stefania; Rocca, Elisa; Vallone, Ilaria; Volpe, Gualtiero, "Expressive gesture and multimodal interactive systems", InfoMus Lab, Laboratorio di Informatica Musicale, DIST, University of Genova, Viale Causa 13, I-16145 Genova, Italy, 2004.
Chadabe, Joel, "The structural implications of interactive creativity", Interactive Computer Music Systems, A.S.A. Conference in Pittsburgh, PA, 2002.
Choi, Insook, "Gestural Primitives and the context for computational processing in an interactive performance system", in Trends in Gestural Control of Music, Marcelo Wanderley and Marc Battier (eds.), CD-ROM, IRCAM, 2000, pp. 139-172.


Collins, N. (2003), "Generative music and laptop performance", Contemporary Music Review, 22(4), 67-79.
Collins, N.; McLean, A.; Rohrhuber, J. and Ward, A. (2003), "Live coding techniques for laptop performance", Organised Sound, 8(3), 321-330.
Cont, Arshia; Coduys, Thierry; Henry, Cyrille, "Real-Time Gesture Mapping in PD Environment using Neural Networks", La Kitchen, Paris, France, 2004.
Cordeiro, Waldemar (ed.), Arteônica: o uso criativo de meios eletrônicos nas artes, São Paulo, 1972, p. 34.
Di Scipio, Agostino, "Sound is the Interface", Proceedings of the Colloquium on Musical Informatics, Firenze, 8-10 May 2003.
Honing, Henkjan, "Computational Modeling of Music Cognition: A Case Study in Model Selection", Music Perception, 23(5), pp. 365-376, 2006.
Kaastra, Linda T. and Fisher, Brian, "Affording Virtuosity: HCI in the Lifeworld of Collaboration", CHI 2006, April 22-27, 2006, Montreal, Canada, p. 1.
Iazzetta, Fernando, "Meaning in Musical Gesture", reprint from Trends in Gestural Control of Music, M.M. Wanderley and M. Battier (eds.), IRCAM - Centre Pompidou, 2000, pp. 259-268.
Impett, J., "A Meta-Trumpet(er)", Proceedings of the International Computer Music Conference, Aarhus, Denmark, 1994.
Jordà Puig, Sergi, "Digital Lutherie: Crafting musical computers for new musics' performance and improvisation", PhD Thesis, Departament de Tecnologia, Universitat Pompeu Fabra, 2005.
Khazam, The Wire Magazine #160, 1997, p. 38.
Levinson, Jerrold, Music in the Moment, Ithaca, NY: Cornell University Press, 1997. Cited in Richard, Dominique M., "Holzwege on Mount Fuji: a doctrine of no-aesthetics for computer and electroacoustic music", Organised Sound 5(3): 127-133, Cambridge University Press, 2000.
Linz, Rainer, "Interactive Music Performance", in The Interactive Art, catalogue of the 3rd international festival of computer arts in Maribor, Slovenia, 1997; 2003 NMA Publications and Rainer Linz.
Maturana, Humberto, ¿La realidad: objetiva o construida? II, Barcelona, 1996, p. 214. Cf. Humberto Maturana and Francisco J. Varela, The Tree of Knowledge: The Biological Roots of Human Understanding, Boston, 1987.


Maturana, Humberto, "Ontology of Observing: The biological foundations of self consciousness and the physical domain of existence", Chapter 8, i) The Answer, Conference Workbook: Texts in Cybernetics, Felton, CA, October 1988, p. 28.
McCartney, J. (2002), "Rethinking the computer music language: SuperCollider", Computer Music Journal, 26(4), 61-68.
McNeill, David (1992), Hand and Mind: What Gestures Reveal About Thought, Chicago: University of Chicago Press.
Mine, M.; Brooks, F.; Séquin, C., "Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction", Proc. of SIGGRAPH '97, 1997. Cited in O'Modhrain, Sile, "Frames of Reference for Portable Device Interaction", Sonic Arts Research Centre, Queen's University Belfast, UK, 2005.
Mishra, S. and Hahn, J., "Mapping motion to sound and music in computer animation and VE", invited paper, Proceedings of Pacific Graphics, 1995.
O'Modhrain, Sile, "Frames of Reference for Portable Device Interaction", Sonic Arts Research Centre, Queen's University Belfast, UK, 2005.
Paradiso, J., "The Brain Opera Technology: New Instruments and Gestural Sensors for Musical Interaction and Performance", v. 2.0, 1998.
Paradiso, J. et al., "New Sensor and Music Systems for Large Interactive Surfaces", Proceedings of the ICMC, 2000, pp. 01-04.
Perepelycia, Alexis, "On Art Creation and its Perception", Rosario, Argentina, March 2004.
Perepelycia, Alexis, "From Robotics & Biomechanics to Musical Applications (New ideas in the compositional, performing environment and beyond)" (2005a), Sonic Arts Research Centre, Queen's University Belfast, UK, January 2005.
Perepelycia, Alexis, "Libertad(es) Controlada(s)", MA Thesis (2005b), Sonic Arts Research Centre, Queen's University Belfast, UK, September 2005, p. 2.
Puckette, Miller, Theory and Techniques of Electronic Music, draft, March 3, 2006, p. 1.
Rebelo, Pedro, "Performing Space", Architecture, School of Arts, Culture and Environments, The University of Edinburgh, 2003.
Rebelo, Pedro, "Haptic Sensation and Instrumental Transgression", Sonic Arts Research Centre, Queen's University Belfast, UK, 2004.


Reck Miranda, Eduardo, "Regarding Music, Machines, Intelligence and the Brain: An Introduction to Music and AI", in Readings in Music and Artificial Intelligence, edited by Eduardo Reck Miranda, Sony Computer Science Laboratory, Paris, France, Harwood Academic Publishers, 2000.
Reck Miranda, Eduardo, "Machine Learning and Sound Design", Sony CSL Paris, 2000.
Richard, Dominique M., "Holzwege on Mount Fuji: a doctrine of no-aesthetics for computer and electroacoustic music", Organised Sound 5(3): 127-133, Cambridge University Press, 2000.
Rovan, Joseph; Wanderley, Marcelo; Dubnov, Shlomo and Depalle, Philippe (1997), "Instrumental Gestural Mapping Strategies as Expressivity Determinants in Computer Music Performance", in Proceedings of the Kansei - The Technology of Emotion Workshop, Genova, Italy, October 1997. Available at:
http://www.ircam.fr/equipes/analyse-synthese/wanderley/Gestes/Externe/Mapp/kansei_final.html

Sedes, Anne, "Espaces sonores, espaces sensibles", ACI Jeunes Chercheurs, Espaces Sonores (Actes de Recherche), Éditions Musicales Transatlantiques, 2003, pp. 105-114.
Sedes, Anne; Courribet, Benoît; Thibault, Jean-Baptiste; Verfaille, Vincent, "Projection Visuelle de l'Espace Sonore, Vers la Notion de Transduction : Une Approche Interactive en Temps Réel", ACI Jeunes Chercheurs, Espaces Sonores (Actes de Recherche), Éditions Musicales Transatlantiques, 2003, pp. 125-143.
Smalley, Denis, "Spectro-morphology and structuring processes", in Simon Emmerson (ed.), The Language of Electroacoustic Music, London: Macmillan, 1986, pp. 90-92.
Taylan Cemgil, Ali and Kröse, Ben, "Probabilistic Machine Listening for Interactive Music Performance Systems", Intelligent Autonomous Systems, University of Amsterdam, October 2003.
Truax, Barry (1994a), "The inner and outer complexity of music", Perspectives of New Music 32(1): 176-93, 1994.
Vaggione, Horacio, "Some Ontological Remarks about Music Composition Processes", Computer Music Journal, 25:1, pp. 54-61, Spring 2001.
Wanderley, Marcelo M.; Serra, Marie-Hélène; Battier, Marc; Rodet, Xavier, "Gestural Control at IRCAM", 2001.
Wang, G. and Cook, P. (2003), "ChucK: A concurrent, on-the-fly audio programming language", in Proceedings of the International Computer Music Conference, Singapore.

Ward, A.; Rohrhuber, J.; Olofsson, F.; McLean, A.; Griffiths, D.; Collins, N. and Alexander, A. (2004), "Live Algorithm Programming and a Temporary Organisation for its Promotion", Proceedings of the README Software Art Conference, Aarhus, Denmark.
Wilson, Stephen, Using Computers to Create Art, Prentice Hall: Englewood Cliffs, NJ, 1986.
Winkler, Todd, Composing Interactive Music, MIT Press, 2001, p. 12.
Winkler, Todd, Composing Interactive Music: Techniques and Ideas Using Max, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1998.
Wren, C.R. et al., "Perceptive Spaces for Performance and Entertainment: Untethered Interaction using Computer Vision and Audition", Applied Artificial Intelligence, 11(4), 267-284, 1997.
Young, John Paul, "Live Interactive Music", MA Thesis, Peabody Conservatory of Music, The Peabody Institute of The Johns Hopkins University, Baltimore, Maryland, 2003.
Young, Michael and Lexer, Sebastian, "FFT Analysis as a Creative Tool in Live Performance", Proc. of the 6th Int. Conference on Digital Audio Effects (DAFx-03), London, UK, September 8-11, 2003.


2_Bibliography

BONGERS, Bert, "Physical Interfaces in the Electronic Arts: Interaction Theory and Interfacing Techniques for Real-Time Performance", in Trends in Gestural Control of Music, IRCAM (Institut de Recherche et Coordination Acoustique/Musique), France, ISBN 2-84426-039-x, April 2000.
GENEVOIS, Hugues and DE VIVO, Raphaël, Les Nouveaux Gestes de la Musique, Éditions Parenthèses, Marseille, France, 1999, 193 p.
IRCAM, Composition et environnements informatiques, Les Cahiers de l'IRCAM, Recherche et Musique, IRCAM (Institut de Recherche et Coordination Acoustique/Musique), France, 1992.
MOORE, F. Richard, Elements of Computer Music, Englewood Cliffs, NJ: Prentice Hall, 1990.
MURAIL, Tristan, Modèles & Artifices, Presses Universitaires de Strasbourg, France, ISBN 2-86820-218-7, 2004, 223 p.
ROADS, Curtis, L'audionumérique, Dunod, Paris, 1998, ISBN 2-10-004136-3, French translation of The Computer Music Tutorial, MIT Press, Cambridge, Massachusetts, 1996, 679 p.
ROADS, Curtis, Microsound, MIT Press, Cambridge, Massachusetts, 2001, 209 p.
SEDES, Anne et al., Espaces Sonores: Actes de Recherche, Éditions Musicales Transatlantiques, Paris, France, 2003, 144 p.
SOSA, Rogelio, La Musique électroacoustique en direct et son application dans un programme développé en Max/MSP, THA 15483, Université Paris 8, Vincennes Saint-Denis, Paris, France, 2003, 95 p.
WINKLER, Todd, Composing Interactive Music, MIT Press, Massachusetts, 2001.
WINKLER, Todd, Composing Interactive Music: Techniques and Ideas Using Max, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1998, 350 p.
XENAKIS, Iannis, Formalized Music: Thought and Mathematics in Composition, rev. ed., Harmonologia Series No. 6, Pendragon Press, Stuyvesant, New York, 1992, 387 p.


3_Websites

URL 1: Wikipedia, http://en.wikipedia.org/wiki/Behaviorism (accessed 22/08/06)
URL 2: Giannetti, Claudia, "Endo-Aesthetics", www.medienkunstnetz.de/themes/aesthetics_of_the_digital/endo_aesthetic, p. 13 (accessed 18/07/06)
URL 3: Southern Cross Review, http://southerncrossreview.org/borges-barili_home.htm (accessed 28/08/06)
URL 4: Michel Waisvisz, http://www.crackle.org/SubjectRoundTable.htm (accessed 18/07/06)
URL 5: EARS: Electroacoustic Resource Site, http://www.ears.dmu.ac.uk/rubrique.php3?id_rubrique=231 (accessed 21/08/06)
URL 6: Wikipedia, http://en.wikipedia.org/wiki/Proprioception (accessed 22/08/06)
URL 7: Laetitia Sonami, http://www.sonami.net/lady_glove2.htm (accessed 28/08/06)
URL 8: Machover, T., "Classic Hyperinstruments", http://brainop.media.mit.edu/Archie/Hyperinstruments/classichyper.html (accessed 10/08/06)
URL 9: Wikipedia, http://en.wikipedia.org/wiki/Kinesthesia (accessed 22/08/06)


URL 10: The Brain Orchestra, www.we-make-money-not-art.com/articles/ (accessed 20/07/06)
URL 11: Wikipedia, http://en.wikipedia.org/wiki/Neural_network (accessed 22/08/06)
URL 12: Wikipedia, http://en.wikipedia.org/wiki/Connectionism (accessed 22/08/06)
URL 13: ARiADA, Number 2, February 2002, www.ariada.uea.ac.uk (accessed 07/06/06)

