
ABSTRACT

While special effects designers have been chasing the fantasy of truly lifelike digital characters for 30 years, this might be the
decade that we reach this momentous goal. One of the secret ingredients that will get us there
may be motion capture (MoCap), defined as a technology that allows us to record human motion
with sensors and to digitally map that motion onto computer-generated creatures. MoCap has been
both hyped and criticized since people started experimenting with computerized motion-
recording devices in the 1970s. Whether reviled by animation purists as a shortcut or exalted as
the solution to the shortcomings of traditional film and animation, MoCap's controversial
applications are on the brink of transforming contemporary entertainment.

1. Introduction

Motion capture, motion tracking, or mocap are terms used to describe the process of recording
movement and translating that movement onto a digital model. It is used in military,
entertainment, sports, medical applications and for validation of computer vision and robotics. In
filmmaking it refers to recording actions of human actors, and using that information to animate
digital character models in 2D or 3D computer animation. When it includes face, fingers and
captures subtle expressions, it is often referred to as performance capture. In motion capture
sessions, movements of one or more actors are sampled many times per second, although with
most techniques (recent developments from Weta use images for 2D motion capture and project
into 3D) motion capture records only the movements of the actor, not his/her visual appearance.
This animation data is mapped to a 3D model so that the model performs the same actions as the
actor. This is comparable to the older technique of rotoscoping, as in the 1978 animated film The
Lord of the Rings, where the visual appearance of an actor's motion was filmed and the film then
used as a guide for the frame-by-frame motion of a hand-drawn animated character.

2. History of motion capture


The use of motion capture to animate characters on computers is relatively recent: it started in
the 1970s and is only now beginning to spread. Motion capture is the recording of the movements
of the human body (or any other movement) for immediate analysis or later playback. The
captured information can be as simple as the position of the body in space or as complex as the
deformations of the face and muscles. The captured motion can be exported to various formats
such as BVH, BIP and FBX, which can be used to animate 3D characters in packages such as
3ds Max, Maya, Poser, iClone and Blender. Motion capture for animation is the superposition of
human movement onto a virtual counterpart. The mapping can be direct, such as a human arm
driving the motion of a virtual arm, or indirect, such as a human hand driving a more abstract
parameter like lighting or color. The idea of copying human motion is of course not new. To
obtain the most convincing human movement in "Snow White", the Disney studios traced their
animation over film of live actors. This method, called "rotoscoping", has been used successfully
ever since. In the 1970s, when it became possible to animate characters by computer, animators
adapted traditional techniques, including rotoscoping.
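
To illustrate how such exported data might be read in practice, here is a minimal, hedged sketch in Python; the file name walk.bvh and its contents are assumptions rather than data from this paper. It parses the MOTION block of a BVH file to obtain the frame count, frame time and per-frame channel values.

```python
# Minimal sketch: read the MOTION block of a BVH file (hypothetical path).
# Assumes a well-formed file; real BVH tools also parse the joint HIERARCHY.

def read_bvh_motion(path):
    with open(path) as f:
        lines = [line.strip() for line in f]
    start = lines.index("MOTION")                      # MOTION section follows the skeleton
    n_frames = int(lines[start + 1].split()[-1])       # "Frames: 120"
    frame_time = float(lines[start + 2].split()[-1])   # "Frame Time: 0.0333"
    frames = [[float(v) for v in line.split()]         # one row of channel values per frame
              for line in lines[start + 3 : start + 3 + n_frames]]
    return n_frames, frame_time, frames

if __name__ == "__main__":
    n, dt, frames = read_bvh_motion("walk.bvh")        # hypothetical file name
    print(f"{n} frames at {1.0 / dt:.1f} fps, {len(frames[0])} channels per frame")
```
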
Today, capture technologies are varied, and we can classify them into three broad
categories:

 Mechanical motion capture
 Optical motion capture
 Magnetic motion capture

Although these techniques are effective, they still have drawbacks (weight, cost, etc.). There is
nonetheless little doubt that motion capture will become one of the basic tools of animation.

3. The different types of motion capture


3.1 Mechanical Motion capture

This technique of motion capture is achieved through the use of an exoskeleton. Each joint is
connected to an angular encoder. The value of each encoder's movement (rotation, etc.) is
recorded by a computer that, by knowing the relative positions of the encoders (and therefore of
the joints), can rebuild the movements on screen using software. An offset is applied to each
encoder, because it is very difficult to match its position exactly with that of the real joint
(especially in the case of human movements).
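
As a rough illustration of this principle (not the workings of any particular commercial exoskeleton), the following Python sketch applies assumed calibration offsets to raw encoder angles and rebuilds the 2D positions of a three-segment arm by forward kinematics. The encoder names, offsets and segment lengths are illustrative assumptions.

```python
import math

# Hypothetical raw encoder readings (degrees) and calibration offsets per joint.
raw_angles = {"shoulder": 42.0, "elbow": 95.5, "wrist": 10.0}
offsets    = {"shoulder": -2.5, "elbow": 1.8, "wrist": -0.7}
segment_lengths = [0.30, 0.25, 0.08]  # upper arm, forearm, hand (metres)

def reconstruct_chain(raw, offs, lengths):
    """Rebuild 2D joint positions from offset-corrected encoder angles."""
    x = y = 0.0
    heading = 0.0
    positions = [(x, y)]
    for (joint, angle), length in zip(raw.items(), lengths):
        heading += math.radians(angle + offs[joint])  # accumulate relative rotations
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

for name, pos in zip(["shoulder", "elbow", "wrist", "fingertip"],
                     reconstruct_chain(raw_angles, offsets, segment_lengths)):
    print(f"{name:10s} at ({pos[0]:.3f}, {pos[1]:.3f}) m")
```

The offsets play exactly the role described above: they compensate for the mismatch between where the encoders sit and where the real joint axes are.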

3.1.1 Advantages and Disadvantages
This technique offers high precision and has the advantage of not being influenced by external
factors (such as the quality or the number of cameras in optical mocap).
But capture is limited by the mechanical constraints of the encoders and the exoskeleton itself.
It should be noted that exoskeletons generally use wired connections between the encoders and
the computer. It is much more difficult to move while wearing a fairly heavy exoskeleton
attached to a large number of wires than with simple small reflective spheres: freedom of
movement is rather limited.
The accuracy of the reproduced movement depends on the placement of the encoders and on the
modeling of the skeleton; the exoskeleton must be adjusted to each performer's morphology. A
major disadvantage comes from the encoders themselves: although they are very precise relative
to one another, they cannot locate the captured subject in absolute space. Optical positioning
methods must then be used to place the animation within a scene. Finally, each subject to be
animated needs its own exoskeleton, and it is quite complicated to measure the interaction of
several exoskeletons, so a scene involving several people is very difficult to capture.
3.2 The magnetic motion capture
Magnetic motion capture is done by introducing an electromagnetic field in which the sensors,
which are electrical coils, are placed. The sensors are located in a reference frame with three
axes x, y and z. Their position is determined by picking up, through an antenna, the disturbance
each coil creates in the field; its orientation can then be deduced as well.

3.2.1 Advantages and disadvantages
The advantage of this method is that the captured data are accurate and, apart from the position
calculation itself, little further processing is required. However, any metal object disturbs the
magnetic field and distorts the data.

3.3 Optical Motion Capture

Optical capture is based on filming with several synchronized cameras; combining the (x, y)
coordinates of the same object seen from different angles allows the (x, y, z) coordinates to be
deduced. This method involves complex problems such as optical parallax and distortion of the
lenses used, so the signal undergoes many interpolations. However, correct calibration of these
parameters allows high accuracy in the collected data.

The operating principle is similar to radar: the cameras emit radiation (usually red and/or
infrared), which is reflected by the markers (whose surface is made of a highly reflective
material) and returned to the same cameras. The cameras are sensitive to a single wavelength and
see the markers as white spots in the video (or grayscale spots for the latest cameras). Cross-
referencing the information from each camera (therefore a minimum of two cameras) makes it
possible to determine the position of the marker in virtual space.

4. Methods and Systems

Motion tracking or motion capture started as a photogrammetric analysis tool in biomechanics
research in the 1970s and 1980s, and expanded into education, training, sports and recently
computer animation for television, cinema and video games as the technology matured. A
performer wears markers near each joint to identify the motion by the positions or angles
between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of
any of these, are tracked, optimally at least two times the frequency rate of the desired motion, to
submillimeter positions.

4.1 Optical systems

Optical systems utilize data captured from image sensors to triangulate the 3D position of a
subject between one or more cameras calibrated to provide overlapping projections. Data
acquisition is traditionally implemented using special markers attached to an actor; however,
more recent systems are able to generate accurate data by tracking surface features identified
dynamically for each particular subject. Tracking a large number of performers or expanding the
capture area is accomplished by the addition of more cameras. These systems produce data with
3 degrees of freedom for each marker, and rotational information must be inferred from the
relative orientation of three or more markers; for instance shoulder, elbow and wrist markers
providing the angle of the elbow.
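
For example, the following sketch (with hypothetical marker coordinates) shows how rotational information such as the elbow angle can be inferred from the shoulder, elbow and wrist marker positions:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by markers a-b-c, e.g. shoulder-elbow-wrist."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical 3D marker positions (metres) from one captured frame.
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.25, 1.15, 0.05), (0.45, 1.30, 0.10)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
```
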

4.1.1 Passive markers

A dancer wearing a suit used in an optical motion capture system

Several markers are placed at specific points on an actor's face during facial optical motion capture

Passive optical systems use markers coated with a retroreflective material to reflect light back that
is generated near the camera's lens. The camera's threshold can be adjusted so that only the bright
reflective markers are sampled, ignoring skin and fabric.

The centroid of the marker is estimated as a position within the 2 dimensional image that is
captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding
the centroid of the Gaussian.
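
As a simple illustration, the sketch below computes an intensity-weighted centroid of a small grayscale patch (a toy 5x5 blob, not real camera data); production systems may fit a full Gaussian instead, but the sub-pixel idea is the same.

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid (row, col) of a grayscale blob, in pixel units."""
    patch = np.asarray(patch, float)
    total = patch.sum()
    rows, cols = np.indices(patch.shape)
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Toy 5x5 patch: a bright marker slightly off-centre toward the lower right.
blob = np.array([[ 0,  2,   5,  2, 0],
                 [ 2, 20,  60, 22, 2],
                 [ 5, 60, 255, 70, 6],
                 [ 2, 25,  80, 30, 3],
                 [ 0,  3,   8,  3, 0]])
print("centroid (row, col):", subpixel_centroid(blob))
```
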

An object with markers attached at known positions is used to calibrate the cameras and obtain
their positions, and the lens distortion of each camera is measured. Provided that two calibrated
cameras see a marker, a 3 dimensional fix can be obtained. Typically a system will consist of
around 6 to 24 cameras. Systems of over three hundred cameras exist to try to reduce marker
swap. Extra cameras are required for full coverage around the capture subject and multiple
subjects.
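
The sketch below illustrates how a 3D fix can be computed once two cameras are calibrated, using standard linear (DLT) triangulation with assumed projection matrices; real systems additionally correct for lens distortion.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel observations.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)        # null space of A gives the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean 3D point

# Assumed toy calibration: two cameras 1 m apart, both looking down the z axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

point = np.array([0.2, 0.1, 3.0, 1.0])           # ground-truth marker (homogeneous)
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print("recovered 3D position:", triangulate(P1, P2, x1, x2))
```
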

Vendors have constraint software to reduce problems from marker swapping since all markers
appear identical. Unlike active marker systems and magnetic systems, passive systems do not
require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls are
attached with reflective tape, which needs to be replaced periodically. The markers are usually
attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a
full body spandex/lycra suit designed specifically for motion capture. This type of system can
capture large numbers of markers at frame rates as high as 2000fps. The frame rate for a given
system is often balanced between resolution and speed: a 4-megapixel system normally runs at
370 hertz, but can reduce the resolution to .3 megapixels and then run at 2000 hertz. Typical
systems are $100,000 for 4-megapixel 360-hertz systems, and $50,000 for .3-megapixel 120-
hertz systems.

4.1.2 Active marker

Active optical systems triangulate positions by illuminating one LED at a time very quickly or
multiple LEDs with software to identify them by their relative positions, somewhat akin to
celestial navigation. Rather than reflecting back light that is generated externally, the markers
themselves are powered to emit their own light. Since the inverse-square law provides only one
quarter of the power at twice the distance, this can increase the distances and volume for capture.

Episodes of the TV series Stargate SG-1 were produced using an active optical system for the
VFX. The actor had to walk around props that would make motion capture difficult for other,
non-active optical systems.

ILM used active markers in Van Helsing to allow capture of the Harpies on very large sets. The
power to each marker can be provided sequentially in phase with the capture system providing a
unique identification of each marker for a given capture frame at a cost to the resultant frame
rate. The ability to identify each marker in this manner is useful in realtime applications. The
alternative method of identifying markers is to do it algorithmically requiring extra processing of
the data.

4.1.3 Time modulated active marker

A high-resolution active marker system with 3,600 × 3,600 resolution at 480 hertz providing real time
submillimeter positions.

Active marker systems can further be refined by strobing one marker on at a time, or tracking
multiple markers over time and modulating the amplitude or pulse width to provide marker ID.
12 megapixel spatial resolution modulated systems show more subtle movements than 4
megapixel optical systems by having both higher spatial and temporal resolution. Directors can
see the actor's performance in real time, and watch the results on the mocap-driven CG character.
The unique marker IDs reduce the turnaround, by eliminating marker swapping and providing
much cleaner data than other technologies. LEDs with onboard processing and a radio
synchronization allow motion capture outdoors in direct sunlight, while capturing at 480 frames
per second due to a high speed electronic shutter. Computer processing of modulated IDs allows
less hand cleanup or filtered results for lower operational costs. This higher accuracy and
resolution requires more processing than passive technologies, but the additional processing is
done at the camera to improve resolution via a subpixel or centroid processing, providing both
high resolution and high speed. These motion capture systems are typically under $50,000 for an
eight camera, 12 megapixel spatial resolution 480 hertz system with one actor.

IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With
an ID per marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.

4.1.4 Semi-passive imperceptible marker

One can reverse the traditional approach based on high speed cameras. Systems such as Prakash
use inexpensive multi-LED high speed projectors. The specially built multi-LED IR projectors
optically encode the space. Instead of retro-reflective or active light emitting diode (LED)
markers, the system uses photosensitive marker tags to decode the optical signals. By attaching
tags with photo sensors to scene points, the tags can compute not only their own locations, but
also their own orientation, incident illumination, and reflectance.

These tracking tags work in natural lighting conditions and can be imperceptibly embedded
in attire or other objects. The system supports an unlimited number of tags in a scene, with each
tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates a
high speed camera and the corresponding high-speed image stream, it requires significantly
lower data bandwidth. The tags also provide incident illumination data which can be used to
match scene lighting when inserting synthetic elements. The technique appears ideal for on-set
motion capture or real-time broadcasting of virtual sets but has yet to be proven.
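
The general idea of optically encoding space can be sketched with time-multiplexed Gray-code stripe patterns. The following is an illustrative simplification, not the actual Prakash implementation: the photosensor on a tag records an on/off sequence across the projected frames and decodes it into a stripe index, i.e. a coordinate along one projector axis.

```python
# Sketch of the general idea of optically encoded space (not the Prakash design):
# a projector shows a sequence of binary Gray-code stripe patterns, and a
# photosensor tag decodes the on/off sequence it observed into its stripe index.

def gray_to_index(bits):
    """Convert a reflected Gray-code bit sequence (MSB first) to an integer index."""
    value = bits[0]
    index = value
    for b in bits[1:]:
        value ^= b                     # binary digit = previous binary digit XOR gray digit
        index = (index << 1) | value
    return index

# Hypothetical readings at one tag over 8 projected frames (1 = lit, 0 = dark).
observed = [1, 0, 1, 1, 0, 0, 1, 0]
print("tag lies in projector stripe", gray_to_index(observed), "of", 2 ** len(observed))
```
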

4.1.5 Markerless

Emerging techniques and research in computer vision are leading to the rapid development of the
markerless approach to motion capture. Markerless systems such as those developed at Stanford,
University of Maryland, MIT, and Max Planck Institute, do not require subjects to wear special
equipment for tracking. Special computer algorithms are designed to allow the system to analyze
multiple streams of optical input and identify human forms, breaking them down into constituent
parts for tracking. Applications of this technology extend deeply into popular imagination about
the future of computing technology. Several commercial solutions for markerless motion capture
have also been introduced. Products currently under development include Microsoft's Kinect
system for PC and console systems.

4.2 Non-optical systems

4.2.1 Inertial systems

Inertial Motion Capture technology is based on miniature inertial sensors, biomechanical models
and sensor fusion algorithms. The motion data of the inertial sensors (inertial guidance system) is
often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial
systems use gyroscopes to measure rotational rates. These rotations are translated to a skeleton in
the software. Much like optical markers, the more gyros the more natural the data. No external
cameras, emitters or markers are needed for relative motions. Inertial mocap systems capture the
full six degrees of freedom body motion of a human in real-time. Benefits of using Inertial
systems include: no solving, portability, and large capture areas. Disadvantages include lower
positional accuracy and positional drift which can compound over time.
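
A small illustration of why this drift compounds: integrating a rate-gyroscope signal that contains even a tiny constant bias (the numbers below are assumptions) produces an angle error that grows steadily with time.

```python
# Illustrative only: integrate a gyroscope rate signal with a small constant
# bias and watch the orientation error grow over time.
dt = 1.0 / 120.0          # 120 Hz sample rate (assumed)
bias_dps = 0.05           # 0.05 deg/s constant gyro bias (assumed)
true_rate_dps = 0.0       # the sensor is actually at rest

angle = 0.0
for step in range(120 * 60):                 # one minute of data
    measured = true_rate_dps + bias_dps      # what the gyro reports
    angle += measured * dt                   # naive integration
print(f"orientation error after 60 s: {angle:.1f} degrees")
```
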

These systems are similar to the Wii controllers but are more sensitive and have greater
resolution and update rates. They can accurately measure the direction to the ground to within a
degree. The popularity of inertial systems is rising amongst independent game developers,
mainly because of the quick and easy setup resulting in a fast pipeline. A range of suits are now
available from various manufacturers and base prices range from $25,000 to $80,000 USD.

4.2.2 Mechanical motion

Mechanical motion capture systems directly track body joint angles and are often referred to as
exo-skeleton motion capture systems, due to the way the sensors are attached to the body. A
performer attaches the skeletal-like structure to their body and as they move so do the articulated
mechanical parts, measuring the performer’s relative motion. Mechanical motion capture
systems are real-time, relatively low-cost, free-of-occlusion, and wireless (untethered) systems
that have unlimited capture volume. Typically, they are rigid structures of jointed, straight metal
or plastic rods linked together with potentiometers that articulate at the joints of the body. These
suits tend to be in the $25,000 to $75,000 range plus an external absolute positioning system.

4.2.3 Magnetic systems

Magnetic systems calculate position and orientation by the relative magnetic flux of three
orthogonal coils on both the transmitter and each receiver. The relative intensity of the voltage or
current of the three coils allows these systems to calculate both range and orientation by
meticulously mapping the tracking volume. The sensor output is 6DOF, which provides useful
results obtained with two-thirds the number of markers required in optical systems; one on upper
arm and one on lower arm for elbow position and angle. The markers are not occluded by
nonmetallic objects but are susceptible to magnetic and electrical interference from metal objects
in the environment, like rebar (steel reinforcing bars in concrete) or wiring, which affect the
magnetic field, and electrical sources such as monitors, lights, cables and computers. The sensor
response is nonlinear, especially toward edges of the capture area. The wiring from the sensors
tends to preclude extreme performance movements. The capture volumes for magnetic systems
are dramatically smaller than they are for optical systems. With the magnetic systems, there is a
distinction between “AC” and “DC” systems: one uses square pulses, the other uses sine wave
pulse.

5. HUMAN MOCAP

The science of human motion analysis is fascinating because of its highly interdisciplinary nature
and wide range of applications. Histories of science usually begin with the ancient Greeks, who
first left a record of human inquiry concerning the nature of the world in relationship to our
powers of perception. Aristotle (384-322 B.C.) might be considered the first biomechanician. He
wrote the book called ’De Motu Animalium’ - On the Movement of Animals. He not only saw
animals’ bodies as mechanical systems, but pursued such questions as the physiological
difference between imagining performing an action and actually doing it.

Nearly two thousand years later, in his famous anatomic drawings, Leonardo da Vinci (1452-
1519) sought to describe the mechanics of standing, walking up and down hill, rising from a
sitting position, and jumping. Galileo (1564-1642) followed a hundred years later with some of
the earliest attempts to mathematically analyze physiologic function. Building on the work of
Galilei, Borelli (1608-1679) figured out the forces required for equilibrium in various joints of
the human body well before Newton published the laws of motion. He also determined the
position of the human center of gravity, calculated and measured inspired and expired air
volumes, and showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
The early work of these pioneers of biomechanics was followed up by Newton (1642-1727),
Bernoulli (1700-1782), Euler (1707-1783), Poiseuille (1797-1869), Young (1773-1829), and
others of equal fame. Muybridge (1830-1904) was the first photographer to dissect human and
animal motion (see figure at heading 'human motion analysis'). This technique was first used
scientifically by Marey (1830-1904), who correlated ground reaction forces with movement and
pioneered modern motion analysis. In the 20th century, many researchers and (biomedical)
engineers contributed to an increasing knowledge of human kinematics and kinetics. This paper
will give a short overview of the technologies used in these fields.

5.1 Human motion analysis


Many different disciplines use motion analysis systems to capture movement and posture of the
human body. Basic scientists seek a better understanding of the mechanisms that are used to
translate muscular contractions about articulating joints into functional accomplishment, e.g.
walking.  Increasingly, researchers endeavor to better appreciate the relationship between the
human motor control system and
gait dynamics.

In the realm of clinical gait analysis, medical professionals apply an evolving knowledge base to
interpret the walking patterns of impaired ambulators, to plan treatment protocols, e.g. orthotic
prescription and surgical intervention, and to determine the extent to which an individual's gait
pattern has been affected by an already diagnosed disorder. With respect to sports, athletes and
their coaches use motion analysis techniques in a
ceaseless quest for improvements in performance while avoiding injury. The use of motion
capture for computer character animation or virtual reality (VR) applications is relatively new.
The information captured can be as general as the position of the body in space or as complex as
the deformations of the face and muscle masses. The mapping can be direct, such as human arm
motion controlling a character’s arm motion, or indirect, such as human hand and finger patterns
controlling a character’s skin color or emotional state. The idea of copying human motion for
animated characters is, of course, not new. To get convincing motion for the human characters in
Snow White, Disney studios traced animation over film footage of live actors playing out the
scenes. This method, called rotoscoping, has been successfully used for human characters. In the
late '70s, when it began to be feasible to animate characters by computer, animators adapted
traditional techniques, including rotoscoping.

Generally, motion analysis data collection protocols, measurement precision, and data reduction
models have been developed to meet the requirements for their specific settings. For example,
sport assessments generally require higher data acquisition rates because of increased velocities
compared to normal walking. In VR applications, real-time tracking is essential for a realistic
experience of the user, so the time lag should be kept to a minimum. Years of technological
development have resulted in many systems, which can be categorized as mechanical, optical,
magnetic, acoustic and inertial trackers. The human body is often considered as a system of rigid
links connected by joints. Human body parts are not actually rigid structures, but they are
customarily treated as such during studies of human motion. 

Mechanical trackers utilize rigid or flexible goniometers which are worn by the user.
Goniometers within the skeleton linkages have a general correspondence to the joints of the user.
These angle measuring devices provide joint angle data to kinematic algorithms which are used
to determine body posture. Attachment of the body-based linkages as well as the positioning of
the goniometers present several problems. The soft tissue of the body allows the position of the
linkages relative to the body to change as motion occurs. Even without these changes, alignment
of the goniometer with body joints is difficult. This is especially true for multiple degree of
freedom (DOF) joints, like the shoulder. Due to variations in anthropometric measurements,
body-based systems must be recalibrated for each user.

Optical sensing encompasses a large and varying collection of technologies. Image-based
systems determine position by using multiple cameras to track predetermined points (markers)
on the subject’s body segments, aligned with specific bony landmarks. Position is estimated
through the use of multiple 2D images of the working volume. Stereometric techniques correlate
common tracking points on the tracked objects in each image and use this information along with
knowledge concerning the relationship between each of the images and camera parameters to
calculate position. The markers can either be passive (reflective) or active (light emitting).
Reflective systems use infrared (IR) LED’s mounted around the camera lens, along with IR pass
filters placed over the camera lens and measure the light reflected from the markers. Optical
systems based on pulsed-LED’s measure the infrared light emitted by the LED’s placed on the
body segments. Also camera tracking of natural objects without the aid of markers is possible,
but in general less accurate. It is largely based on computer vision techniques of pattern
recognition and often requires high computational resources. Structured light systems use lasers
or beamed light to create a plane of light that is swept across the image. They are more
appropriate for mapping applications than dynamic tracking of human body motion. Optical
systems suffer from occlusion (line of sight) problems whenever a required light path is blocked.
Interference from other light sources or reflections may also be a problem which can result in so-
called ghost markers.

Magnetic motion capture systems utilize sensors placed on the body to measure the low-
frequency magnetic fields generated by a transmitter source. The transmitter source is
constructed of three perpendicular coils that emit a magnetic field when a current is applied. The
current is sent to these coils in a sequence that creates three mutually perpendicular fields during
each measurement cycle. The 3D sensors measure the strength of those fields which is
proportional to the distance of each coil from the field emitter assembly. The sensors and source
are connected to a processor that calculates position and orientation of each sensor based on its
nine measured field values. Magnetic systems do not suffer from line of sight problems because
the human body is transparent for the used magnetic fields. However, the shortcomings of
magnetic tracking systems are directly related to the physical characteristics of magnetic fields.
Magnetic fields decrease in power rapidly as the distance from the generating source increases
and so they can easily be disturbed by (ferro) magnetic materials within the measurement
volume. Acoustic tracking systems use ultrasonic pulses and can determine position through
either time-of-flight of the pulses and triangulation or phase coherence. Both outside-in and
inside-out implementations are possible, which means the transmitter can either be placed on a
body segment or fixed in the measurement volume. The physics of sound limit the accuracy,
update rate and range of acoustic tracking systems. A clear line of sight must be maintained and
tracking can be disturbed by reflections of the sound.

Inertial sensors use the property of bodies to maintain constant translational and rotational
velocity, unless disturbed by forces or torques, respectively. The vestibular system, located in the
inner ear, is a biological 3D inertial sensor. It can sense angular motion as well as linear
acceleration of the head. The vestibular system is important for maintaining balance and
stabilization of the eyes relative to the environment. Practical inertial tracking is made possible
by advances in miniaturized and micromachined sensor technologies,
particularly in silicon accelerometers and rate sensors.  A rate
gyroscope measures angular velocity, and if integrated over time
provides the change in angle with respect to an initially known angle.


An accelerometer measures accelerations, including gravitational acceleration g. If the angle of


the sensor with respect to the vertical is known, the gravity component can be removed and by
numerical integration, velocity and position can be determined. Noise and bias errors associated
with small and inexpensive sensors make it impractical to track orientation and position for long
time periods if no compensation is applied. By combining the signals from the inertial sensors
with aiding/complementary sensors and using knowledge about their signal characteristics, drift
and other errors can be minimized.
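
A deliberately simplified single-axis sketch of this process, under the assumption that the sensor tilt is known and constant: the gravity component is subtracted from the accelerometer signal and the remainder is integrated twice; a small assumed bias shows why the position estimate drifts without compensation.

```python
import math

# One-axis strapdown sketch: remove gravity using the known sensor tilt,
# then integrate acceleration twice to obtain velocity and position.
# A small accelerometer bias is included to illustrate integration drift.
G = 9.81
dt = 0.01                      # 100 Hz (assumed)
tilt = math.radians(10.0)      # known sensor tilt from vertical (assumed)
bias = 0.02                    # m/s^2 accelerometer bias (assumed)

velocity = position = 0.0
for step in range(int(5.0 / dt)):               # 5 seconds, sensor actually at rest
    measured = G * math.cos(tilt) + bias        # specific force reported along the axis
    linear_acc = measured - G * math.cos(tilt)  # subtract the gravity component
    velocity += linear_acc * dt                 # first integration
    position += velocity * dt                   # second integration
print(f"position error after 5 s: {position:.3f} m")
```
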

5.2 Ambulatory tracking


Commercial optical systems such as Vicon (reflective markers) or Optotrak (active markers) are
often considered as a 'standard' in human motion analysis. Although these systems provide
accurate position information, there are some important limitations. The most important factors
are the high costs, occlusion problems and limited measurement volume. The use of a specialized
laboratory with fixed equipment impedes many applications, like monitoring of daily life
activities, control of prosthetics or assessment of workload in ergonomic studies. In the past few
years, the health care system trend toward early discharge to monitor and train patients in their
own environment. This has promoted a large development of non-invasive portable and wearable
systems. Inertial sensors have been successfully applied for such clinical measurements outside
the lab. Moreover, it has opened many possibilities to capture motion data for athletes or
animation purposes without the need for a studio.

The orientation obtained by integrating present-day micromachined gyroscopes typically shows an
error that grows on the order of degrees per minute. For accurate and drift-free orientation estimation, Xsens
has developed an algorithm to combine the signals from 3D gyroscopes, accelerometers and
magnetometers. Accelerometers are used to determine the direction of the local vertical by
sensing acceleration due to gravity. Magnetic sensors provide stability in the horizontal plane by
sensing the direction of the earth magnetic field like a compass. Data from these complementary
sensors are used to eliminate drift by continuous correction of the orientation obtained by angular
rate sensor data. This combination is also known as an attitude and heading reference system
(AHRS).
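
A much-simplified single-axis complementary filter conveys the idea (an illustrative sketch, not the Xsens algorithm): the gyroscope is integrated for short-term accuracy while the inclination implied by the accelerometer's gravity measurement slowly pulls the estimate back, removing the drift.

```python
import math

def complementary_filter(gyro_rates, accel_xz, dt=0.01, alpha=0.98):
    """Single-axis tilt estimate: integrate the gyro, correct with accelerometer tilt.

    gyro_rates: angular rates in rad/s; accel_xz: (ax, az) samples in m/s^2.
    alpha close to 1 trusts the gyro short-term and the accelerometer long-term.
    """
    angle = 0.0
    estimates = []
    for rate, (ax, az) in zip(gyro_rates, accel_xz):
        accel_angle = math.atan2(ax, az)            # tilt implied by gravity direction
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Toy data: sensor held still at a 15 degree tilt, gyro has a 0.01 rad/s bias.
true_tilt = math.radians(15.0)
n = 1000
gyro = [0.01] * n                                   # biased gyro, no real rotation
accel = [(9.81 * math.sin(true_tilt), 9.81 * math.cos(true_tilt))] * n
print(f"final estimate: {math.degrees(complementary_filter(gyro, accel)[-1]):.2f} degrees")
```
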

For human motion tracking, the inertial motion trackers are placed on each body segment to be
tracked. The inertial motion trackers give absolute orientation estimates which are also used to
calculate the 3D linear accelerations in world coordinates which in turn give translation estimates
of the body segments.

Since the rotation from sensor to body segment and its position with respect to the axes of
rotation are initially unknown, a calibration procedure is necessary. An advanced articulated
body model constrains the movements of segments with respect to each other and eliminates any
integration drift.

5.3 Inertial sensors


A single axis accelerometer consists of a mass, suspended by a spring in a housing. Springs
(within their linear region) are governed by a physical principle known as Hooke’s law. Hooke’s
law states that a spring will exhibit a restoring force which is proportional to the amount it has
been expanded or compressed. Specifically, F = kx, where k is the constant of proportionality
between displacement x and force F. The other important physical principle is Newton's second
law of motion, which states that a mass m undergoing an acceleration a experiences a force of
magnitude F = ma. This force causes the mass to either compress or expand the spring under the
constraint F = ma = kx. Hence an acceleration a will cause the
mass to be displaced by x = ma/k or, if we observe a displacement of x, we know the mass has
undergone an acceleration of a = kx/m. In this way, the problem of measuring acceleration has
been turned into one of measuring the displacement of a mass connected to a spring. In order to
measure multiple axes of acceleration, this system needs to be duplicated along each of the
required axes. Gyroscopes are instruments that are used to measure angular motion. There are
two broad categories: (1) mechanical gyroscopes and (2) optical gyroscopes. Within both of
these categories, there are many different types available.
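
A tiny numerical illustration of the spring-mass relation described above, with assumed values for the proof mass m and spring constant k: a measured displacement x directly yields the acceleration a = kx/m.

```python
# Spring-mass accelerometer relation F = ma = kx, with illustrative values.
m = 0.001          # proof mass in kg (assumed)
k = 40.0           # spring constant in N/m (assumed)

x_measured = 2.5e-4            # observed displacement of the mass in metres
a = k * x_measured / m         # acceleration that caused it
print(f"displacement {x_measured*1e3:.2f} mm  ->  acceleration {a:.2f} m/s^2")

a_applied = 9.81               # conversely: displacement under 1 g
print(f"1 g would displace the mass by {m * a_applied / k * 1e3:.3f} mm")
```
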

The first mechanical gyroscope was built by Foucault in 1852, as a gimbaled wheel that stayed
fixed in space due to angular momentum while the platform rotated around it. Mechanical
gyroscopes operate on the basis of conservation of angular momentum by sensing the change in
direction of an angular momentum. According to Newton’s second law, the angular momentum
of a body will remain unchanged unless it is acted upon by a torque. The fundamental equation
describing the behavior of the gyroscope is:

τ = dL/dt = d(Iω)/dt = Iα

where the vectors τ and L are the torque on the gyroscope and its angular momentum,
respectively, the scalar I is its moment of inertia, the vector ω is its angular velocity, and
the vector α is its angular acceleration.

Gimbaled and laser gyroscopes are not suitable for human motion analysis due to their large size
and high costs. Over the last few years, micro-electromechanical systems (MEMS) inertial
sensors have become more available. Vibrating mass gyroscopes are small, inexpensive and have
low power requirements, making them ideal for human movement analysis. A vibrating element
(vibrating resonator), when rotated, is subjected to the Coriolis effect that causes secondary
vibration orthogonal to the original vibrating direction. By sensing the secondary vibration, the
rate of turn can be detected. The Coriolis force is given by:

F_c = -2 m (ω × v)

where m is the mass, v the momentary velocity of the mass relative to the moving object to which
it is attached, and ω the angular velocity of that object. Various micromachined geometries are
available, many of which use the piezoelectric effect for vibration excitation and detection.

5.4 Sensor fusion

The traditional application area of inertial sensors is navigation as well as guidance and
stabilization of military systems. Position, velocity and attitude are obtained using accurate, but
large gyroscopes and accelerometers, in combination with other measurement devices such as
GPS, radar or a baro altimeter. Generally, signals from these devices are fused using a Kalman
filter to obtain quantities of interest (see figure below). The Kalman filter is useful for combining
data from several different indirect and noisy measurements. It weights the sources of
information appropriately with knowledge about the signal characteristics based on their models
to make the best use of all the data from each of the sensors. There is no perfect sensor; each
type has its strong and weak points. The idea behind sensor fusion is that characteristics of one
type of sensor are used to overcome the limitations of another sensor. For example, magnetic
sensors are used as a reference to prevent the gyroscope integration drift about the vertical axis in
the orientation estimates. However, iron and other magnetic materials will disturb the local
magnetic field and as a consequence, the orientation estimate. The spatial and temporal features
of magnetic disturbances will be different from those related to gyroscope drift errors.

Figure: Complementary Kalman filter structure for position and orientation estimates combining inertial and
aiding measurements. The signals from the IMU (a − g and ω) provide the input for the INS. By double integration
of the accelerations, the position is estimated at a high frequency. At a lower frequency, the aiding system
provides position estimates. The difference between the inertial and aiding estimates is delivered to the Kalman
filter. Based on the system model, the Kalman filter estimates the propagation of the errors. The outputs of the filter
are fed back to correct the position, velocity, acceleration and orientation estimates.
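
The structure in the figure can be conveyed by a deliberately simplified one-dimensional sketch (not a full error-state INS filter): high-rate inertial dead reckoning drifts because of an accelerometer bias, and a low-rate, noisy aiding position measurement corrects it through a scalar Kalman-style update. All numbers are illustrative assumptions.

```python
import random

# Simplified 1D illustration of inertial dead reckoning corrected by a
# low-rate aiding position sensor via a scalar Kalman-style update.
random.seed(0)
dt = 0.01                      # 100 Hz inertial rate (assumed)
accel_bias = 0.05              # m/s^2, unknown to the filter
R = 0.5 ** 2                   # aiding measurement noise variance (0.5 m std, assumed)
q = 0.02                       # process noise added to the position variance per step

pos = vel = 0.0                # filter state (the true motion is zero)
P = 1.0                        # position error variance
for step in range(1, 1001):    # 10 seconds
    vel += accel_bias * dt     # dead reckoning with a biased accelerometer
    pos += vel * dt
    P += q
    if step % 100 == 0:        # 1 Hz aiding fix (e.g. optical or GPS position)
        z = 0.0 + random.gauss(0.0, 0.5)
        K = P / (P + R)        # Kalman gain
        pos += K * (z - pos)   # correct the drifted position
        vel *= (1 - K)         # crude feedback of the correction into velocity
        P *= (1 - K)
print(f"position error after 10 s with aiding: {abs(pos):.2f} m")
```
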

Using this a priori knowledge, the effects of both drift and disturbances can be minimized. The
inertial sensors of the inertial navigation system (INS) can be mounted on vehicles in such a way
that they stay leveled and pointed in a fixed direction. This system relies on a set of gimbals and
sensors attached on three axes to monitor the angles at all times. Another type of INS is the
strapdown system, which eliminates the use of gimbals and is suitable for human motion
analysis. In this case, the gyros and accelerometers are mounted directly to the structure of the
vehicle or strapped on the body segment. The measurements are made in reference to the local
axes of roll, pitch, and heading (or yaw). The clinical reference system provides anatomically
meaningful definitions of main segmental movements (e.g. flexion-extension, abduction-
adduction or supination-pronation).
 

6. Advantages

Motion capture offers several advantages over traditional computer animation of a 3D model:

 More rapid, even real-time, results can be obtained. In entertainment applications this can
reduce the costs of keyframe-based animation; the Hand Over technique is one example.

 The amount of work does not vary with the complexity or length of the performance to
the same degree as when using traditional techniques. This allows many tests to be done
with different styles or deliveries.

 Complex movement and realistic physical interactions such as secondary motions, weight
and exchange of forces can be easily recreated in a physically accurate manner.

 The amount of animation data that can be produced within a given time is extremely
large when compared to traditional animation techniques. This contributes to both cost
effectiveness and meeting production deadlines.

 Potential for free software and third party solutions reducing its costs

7. Disadvantages

 Specific hardware and special programs are required to obtain and process the data.

 The cost of the software, equipment and personnel required can potentially be prohibitive
for small productions.

 The capture system may have specific requirements for the space it is operated in,
depending on camera field of view or magnetic distortion.

 When problems occur it is easier to reshoot the scene rather than trying to manipulate the
data. Only a few systems allow real time viewing of the data to decide if the take needs to
be redone.

 The initial results are limited to what can be performed within the capture volume
without extra editing of the data.

 Movement that does not follow the laws of physics generally cannot be captured.

 Traditional animation techniques, such as added emphasis on anticipation and follow
through, secondary motion or manipulating the shape of the character, as with squash and
stretch animation techniques, must be added later.

 If the computer model has different proportions from the capture subject, artifacts may
occur. For example, if a cartoon character has large, over-sized hands, these may intersect
the character's body if the human performer is not careful with their physical motion.

8. Applications

 Video games often use motion capture to animate athletes, martial artists, and other in-
game characters. This has been done since the Atari Jaguar CD-based game Highlander:
The Last of the MacLeods, released in 1995.
 Movies use motion capture for CG effects, in some cases replacing traditional cel
animation, and for completely computer-generated creatures, such as Jar Jar Binks,
Gollum, The Mummy, King Kong, and the Na'vi from the film Avatar.
 Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion
capture, although many character animators also worked on the film.

 In producing entire feature films with computer animation, the industry is currently split
between studios that use motion capture, and studios that do not. Out of the three
nominees for the 2006 Academy Award for Best Animated Feature, two of the nominees
(Monster House and the winner Happy Feet) used motion capture, and only
Disney·Pixar's Cars was animated without motion capture. In the ending credits of Pixar's
film Ratatouille, a stamp appears labelling the film as "100% Pure Animation -- No
Motion Capture!"
 Motion capture has begun to be used extensively to produce films which attempt to
simulate or approximate the look of live-action cinema, with nearly photorealistic digital
character models. The Polar Express used motion capture to allow Tom Hanks to perform
as several distinct digital characters (in which he also provided the voices). The 2007
adaptation of the saga Beowulf animated digital characters whose appearances were
based in part on the actors who provided their motions and voices. James Cameron's
Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney
Company has announced that it will distribute Robert Zemeckis's A Christmas Carol and
Tim Burton's Alice in Wonderland using this technique. Disney has also acquired
Zemeckis' ImageMovers Digital that produces motion capture films.
 Television series produced entirely with motion capture animation include Laflaque in
Canada, Sprookjesboom and Cafe de Wereld in The Netherlands, and Headcases in the
UK.
 Virtual Reality and Augmented Reality allow users to interact with digital content in real-
time. This can be useful for training simulations, visual perception tests, or performing
virtual walk-throughs in a 3D environment. Motion capture technology is frequently used
in digital puppetry systems to drive computer generated characters in real-time.
 Gait analysis is the major application of motion capture in clinical medicine. Techniques
allow clinicians to evaluate human motion across several biometric factors, often while
streaming this information live into analytical software.
 During the filming of James Cameron's Avatar, all of the scenes involving this process
were directed in real time using a screen which converted the actor wearing the motion-
capture suit into what the character would look like in the movie, making it easier for
Cameron to direct the movie as it would be seen by the viewer. This method allowed
Cameron to view the scenes from many more views and angles than would be possible
with a pre-rendered animation. He was so proud of his pioneering methods that he even
invited Steven Spielberg and George Lucas on set to view him in action.

9. Conclusion

Although motion capture requires some technical means, it is now possible to do it yourself at
home at a reasonable cost and make your own short film.
Motion capture is a major step forward in the field of cinema because it allows the image to be
reworked more easily: it is simpler to modify a captured performance than a classically filmed
scene, although this remains expensive. It is also a major asset in medicine; for example, it can
be used to measure the benefit of an operation by recording the movement of the patient before
and after the operation (such as when fitting a prosthesis), or perhaps, in the future, simply
during a routine medical examination.

10. REFERENCES

• Motion Capture - http://en.wikipedia.org/wiki/Motion_capture

• A Brief History of Motion Capture for Computer Character Animation - http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion_capture/history1.htm

• Computer Graphics World - http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications%3A%3AArticle&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=A8B4004315A84A5089255A2E366E2E78

• Motion Capture - What is it? - http://www.metamotion.com/motion-capture/motion-capture.htm

• ACCAD Motion Capture Lab - Home, information and documentation - http://accad.osu.edu/research/mocap/mocap_home.htm
