
Stuck in the Sound

Creation of a Sonic Environment to Engage Kids in Body-Movement Games


Alexis Perepelycia
Student No. 10848045
MA in Sonic Arts
Sonic Arts Research Centre, Queen's University Belfast
205MUS719
Musical Interfaces for Creative Environments (MICE)

Abstract
Since it is difficult to hold kids' attention with stationary games, and most of them much prefer running around and playing physical games, this project expands the basic principle of the game Stuck in the Mud, in which kids try to avoid being frozen by their opponents; once frozen, they cannot move until a teammate unfreezes them. This game obviously involves a lot of motion, and I have taken advantage of that to create an engaging environment that reacts to body movement.

Introduction
As the chosen game requires a considerable amount of motion, it was essential to choose devices that track motion as accurately as possible and to develop a system reliable enough to let the kids play their game in a normal way. Relying on cameras for the tracking was inevitable: they deliver very stable data, as the technology is highly developed nowadays. The space for the game itself was also a significant factor, since the kids need considerable room to play properly and freely; the Sonic Lab at SARC proved to be an appropriate space for the performance. Another key point was selecting software flexible enough to receive and handle information from a camera and to react by producing and modifying sounds or music. The obvious answer was Max/MSP/Jitter: the Max part of the software handles the algorithms and maths, MSP processes audio in real time, and Jitter deals with the video side of the game. Additionally, choosing the right sounds and music to engage the kids was not an easy task, since it is really difficult to predict kids' reactions to music or sounds, and different kids react in different ways to different music. Finally, a section of the programming was dedicated to how many sounds would be played, how those sounds would be triggered, and their spatialization within the actual space where the game was played.

Development of Max/MSP/Jitter Patches


From the beginning I worked with a prototype system including the cameras, so that I could choose the right objects inside Max for handling video information. I used the Tap.Tools object library, version 2.0, which provided very stable video tracking without being heavy on CPU consumption; with video tracking and sound processing running together, CPU load becomes an issue. At first I tracked colour rather than motion, using the tap.jit.colortrack object, since my original idea was to divide the kids into two groups and track them separately. After a lot of work with this idea, however, I found colour tracking too hard to sustain: the signal the cameras delivered was extremely unstable, and the slightest change in lighting conditions abruptly invalidated the value and scaling settings inside the patches. I therefore switched to motion tracking. For that purpose I used tap.jit.source, which connects an input device to Jitter. I then split the signal into four sources with the jit.scissors object, creating 2 rows and 2 columns. Each signal coming from that object was analysed to determine the amount of motion taking place in that area, plus the relative position of the kids in the space along the X and Y axes. The outgoing motion signals were then summed together, and the resulting signal determined how many of the sample players were active at a time. That is, if the scale runs from 1 to 10 in relation to the overall loudness level and there are 4 audio players reacting to the amount of motion, a summed motion signal of 5 means that half of the players are active, so two sound samples are heard from the loudspeakers. Furthermore, the X and Y coordinates tracked by the cameras indicate which speaker or speakers, closest to where the motion is produced, reproduce the signals being played.
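The quadrant split and the motion-to-players mapping described above can be sketched in Python. The frame-differencing, the function names and the normalisation are my illustrative assumptions; they are not taken from the original Max patch, which does this with Jitter objects.

```python
# Sketch (assumptions, not the original patch): frames are 2D
# grayscale arrays; motion per quadrant is the mean absolute
# frame difference; names are illustrative only.
import numpy as np

NUM_PLAYERS = 4          # audio sample players in the patch
MOTION_SCALE_MAX = 10    # overall motion scale (1..10)

def quadrant_motion(prev: np.ndarray, curr: np.ndarray) -> list[float]:
    """Split the frame into 2 rows x 2 columns (like jit.scissors)
    and return the amount of motion in each quadrant."""
    h, w = curr.shape
    diffs = np.abs(curr.astype(float) - prev.astype(float))
    return [diffs[r:r + h // 2, c:c + w // 2].mean()
            for r in (0, h // 2) for c in (0, w // 2)]

def active_players(motions: list[float], max_motion: float) -> int:
    """Sum the per-quadrant motion, map it onto the 1..10 scale,
    and decide how many of the sample players sound."""
    total = sum(motions)
    scaled = min(MOTION_SCALE_MAX, MOTION_SCALE_MAX * total / max_motion)
    return round(NUM_PLAYERS * scaled / MOTION_SCALE_MAX)

# As in the text: a summed motion of 5 on a 1..10 scale with
# 4 players means half of them (2) are heard.
```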

How the system works

Suppose some amount of motion occurs in a certain area (Area 1). The camera farthest from that spot, which is the one actually pointing towards it, detects the motion, and according to the amount of motion the patch determines how loud or quiet the playback should be. The sounds are then reproduced, at that loudness, only through the speakers close to Area 1. Since each player is assigned to a certain speaker, in principle just Speakers 1 and 8 will reproduce sounds 1 and 8; if the loudness level increases, Speakers 2 and 7 may proportionally reproduce sounds 2 and 7, and so on. Under this principle, if a group of ten kids is playing at the same time, with five in Area 1, three in Area 2 and two in Area 3, and supposing all of them move in the same way at the same speed, the cameras should perceive this and keep the motion-to-loudness relationship explained above. The share of the overall loudness should therefore be 50% between Speakers 1 and 8, 30% between Speakers 2 and 3, and 20% between Speakers 4 and 5 (Graphic 1).

Graphic 1: Motion-to-loudness scale for the eight sample players (Players 1-8), and the X/Y layout of Speakers 1-8 around Areas 1-3.

Setting the Cameras

Two Apple iSight FireWire cameras were used in the final version of the project. They allowed the required space to be covered adequately; as explained previously, a big space is needed so the kids can play their game properly. The cameras were intended to hang from two ceiling panels of the Sonic Lab at SARC, pointing down. However, since the iSight cameras have no zoom, the distance or the width they cover cannot be adjusted. One possible solution would have been to raise the cameras up to the top of the Sonic Lab ceiling, but that raised a new issue: holding the cameras from the top of the ceiling would have required a pair of longer-than-standard FireWire cables. The final solution was to mount the cameras on the sides of the ceiling panels, angled slightly towards each other so that together they cover the needed area.

Graphic 2: Final camera setup inside the Sonic Lab, with Cameras 1 and 2 mounted across ceiling Panels 1-3.
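The coverage problem can be illustrated with simple geometry. The mounting height and field-of-view figures below are illustrative assumptions, not measurements of the Sonic Lab or the iSight's actual optics.

```python
import math

# A fixed-lens camera at height h, pointing straight down, covers a
# floor strip of width 2 * h * tan(fov / 2); with no zoom, the only
# ways to widen coverage are to raise or tilt the camera.
def floor_coverage(height_m: float, fov_deg: float) -> float:
    return 2 * height_m * math.tan(math.radians(fov_deg) / 2)

# e.g. an assumed 60-degree field of view at 4 m covers about 4.6 m;
# doubling the height doubles the covered width.
```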

The Music and The Sounds

Perhaps the most difficult task of this project was selecting appropriate music and sounds to engage the kids while they were actually playing the game. Although there are some standard notions of what kids might like or what might get their attention, it is almost impossible to predict their real reaction to different types of music and sounds. To cover different musical areas and sonic possibilities, I prepared four sound banks loaded with eight sounds each. The first was the most diverse, ranging from Chinese theatre music and tango to pop and rock beats. The second bank contained soundtracks, since kids of this age are likely to be familiar with films; I chose music from the Lord of the Rings saga, The Matrix, and others. The third included ringtones that are quite popular now, and these were the sounds the kids found most engaging, funny and attractive. Finally, I wanted to include an educational element in the game, so I added several samples of music by SARC composers as well as some classic electroacoustic composers, to show the kids what type of music is made at SARC. Curiously, the kids related this music to sci-fi films, so they found it engaging to make different types of movements, or even to dance.
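The four banks can be represented as a simple lookup structure. Only the bank themes come from the text above; the file names here are placeholders, not the actual samples used.

```python
# Four sound banks of eight samples each (file names hypothetical).
SOUND_BANKS = {
    "mixed":       [f"mixed_{i}.wav" for i in range(1, 9)],       # Chinese theatre, tango, pop, rock
    "soundtracks": [f"soundtrack_{i}.wav" for i in range(1, 9)],  # Lord of the Rings, The Matrix, ...
    "ringtones":   [f"ringtone_{i}.wav" for i in range(1, 9)],
    "sarc":        [f"sarc_{i}.wav" for i in range(1, 9)],        # SARC / electroacoustic composers
}
```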

Testing the Prototype


On Friday, May 6th, a group of kids from a school visited and served as testers for the work-in-progress projects. The experience worked really well for me, since I had not yet had the chance to test the system with a proper number of kids, and it helped me decide the direction I wanted to take with my game. Originally the idea was to ask them to play the game called Stuck in the Mud, but after the test I realized this was not the best option: the kids seemed to enjoy changing their own rules all the time, and since children from different schools were about to come to the SenSonic event during the festival, I found that kids need to feel comfortable with each other in order to play that kind of game freely.

Improving the System

After the test, some fixes were needed to improve the game and the patch's functioning. As shown in Graphic 2, the final camera setup in the Sonic Lab produces an obvious overlap in the central area of the tracked space. This was inevitable because of the angle the cameras had to be set to. To avoid increased sensitivity in this specific area, a non-linear scale was applied to the signal coming from those regions of the cameras. Another big issue was the sensitivity of the cameras. Since the game is intended to be played in groups, calibration was really difficult: if the group is too small, the overall sonic effect is not as effective as with a group of around eight kids. This is mainly due to the speed at which the cameras track motion, send the data to the players and adjust the loudness level before the speakers finally play back the sound. The bigger the moving mass that generates a reaction in the patch, the faster the reaction. Simple number scaling solved this reasonably well, though the bigger issue when processing video and audio in real time is CPU speed.
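The non-linear rescaling for the overlap region can be sketched as a power curve applied only to motion values read from the area both cameras see. The exponent and the 1-to-10 range are my assumptions, not values from the original patch.

```python
def rescale_overlap(motion: float, max_motion: float = 10.0,
                    exponent: float = 2.0) -> float:
    """Compress motion read from the region both cameras see, so
    that detecting the same movement twice does not inflate the
    output; an exponent > 1 leaves the extremes untouched and
    attenuates values in between."""
    x = min(motion, max_motion) / max_motion   # normalise to 0..1
    return max_motion * x ** exponent

print(rescale_overlap(5.0))   # 2.5: mid-range motion is damped
print(rescale_overlap(10.0))  # 10.0: full-scale motion unchanged
```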

Conclusions and Future Work

Regarding the response and acceptance by the kids, the project worked really well, even better than I expected. They seemed to really enjoy music being generated purely by their body movements and gestures, and they liked moving freely within a certain space and exploring the sound possibilities of the game, even though this sometimes turned into kids running almost uncontrolled around the space. That was a little risky, since kids have no notion of the potential risk of injury when running fast, so an alternative game was proposed that let the kids explore the sound through gentle gestures. However, a better solution is needed to improve the engagement of all the kids, since some of them did not like the alternative game. Technically speaking, a more flexible scaling system for motion and loudness levels is needed, one that rescales the values according to the number of kids playing at a time, since this would improve their interaction with the sounds.
