tracking.js | Documentation
tracking.js
This documentation will introduce you to most of the key concepts in working with
tracking.js. Don't worry if you don't understand everything. Each of the concepts
presented here is described in detail in the source code of the examples.
(Retrieved 11/21/2014 from http://trackingjs.com/docs.html#web-components)
A minimal page loads the tracking.js build, adds a camera preview and plugs a color tracker into it (the script path below assumes the default build layout):
<!doctype html>
<html>
<head>
  <script src="tracking.js/build/tracking-min.js"></script>
</head>
<body>
  <video id="myVideo" width="400" height="300" preload autoplay loop muted></video>
  <script>
    var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);
    colors.on('track', function(event) {
      event.data.forEach(function(rect) {
        console.log(rect.x, rect.y, rect.height, rect.width, rect.color);
      });
    });
    tracking.track('#myVideo', colors);
  </script>
</body>
</html>
This example will request access to your camera and track the magenta, cyan and yellow colors that appear in front of it. Look around, grab any object that matches one of those colors and watch your browser's console: it should display the coordinates of all objects found.
Trackers
In order to understand how the tracker API works, you first need to instantiate the constructor, passing the targets you want to detect. Note that tracking.Tracker is an abstract class, used here only to illustrate the API.
var myTracker = new tracking.Tracker('target');
Once you have the tracker instance, you need to know when something happens, so you listen for track events:
myTracker.on('track', function(event) {
if (event.data.length === 0) {
// No targets were detected in this frame.
} else {
event.data.forEach(function(data) {
// Plots the detected targets here.
});
}
});
Now that the tracker instance is listening for track events, you are ready to start tracking by invoking the track implementation, myTracker.track(pixels, width, height). This method handles all the internal logic that processes the pixels and extracts the targets from them.
But don't worry, you don't need to read the <canvas>, <img> or <video> pixels manually; tracking.js provides a utility that handles that for you:
var trackerTask = tracking.track('#myVideo', myTracker);
It's also possible to plug the tracker instance into other elements. When tracking a <canvas> or <img>, the utility tracking.track('#image', myTracker) invokes myTracker.track(pixels, width, height) only once, with all the required arguments (the array of pixels, the width and the height) filled in automatically. With a <video> node it works a little differently: the internal track implementation is executed for each video frame.
If you want full control over the tracking task you plugged in in the previous example, keep reading this section. Let's assume you need to stop the tracking of a long-running video:
trackerTask.stop(); // Stops the tracking
trackerTask.run(); // Runs it again anytime
That was an abstract overview of the tracker API. Now let's dig into practical usages of some of the available trackers.
Color Tracker
Colors are everywhere, in every single object. Being able to use colored objects to control your browser through the camera is very appealing, so tracking.js implements a basic color tracking algorithm that achieves real-time frame rates through a simple and intuitive API. Color offers several significant advantages over geometric cues: computational simplicity, plus robustness under partial occlusion and under illumination, rotation, scale and resolution changes.
In order to use a color tracker, you need to instantiate the constructor passing the
colors to detect:
var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow']);
Once you have the color tracker instance, you need to know when something happens, so you listen for track events:
colors.on('track', function(event) {
if (event.data.length === 0) {
// No colors were detected in this frame.
} else {
event.data.forEach(function(rect) {
// rect.x, rect.y, rect.height, rect.width, rect.color
});
}
});
Now that the tracker instance is listening for track events, you are ready to start tracking:
tracking.track('#myVideo', colors);
How do I register my own color? Out of the box, the tracking.js color tracker provides three default colors: magenta, cyan and yellow. In addition to those, you can register any custom color you want to track. It's very simple; let's assume the color you want to track is green. In the RGB color space, green is some value close to (r, g, b) = (0, 255, 0), where (r, g, b) stands for red, green and blue, respectively. Once you know the color to track in the RGB color space, register it using tracking.ColorTracker.registerColor:
tracking.ColorTracker.registerColor('green', function(r, g, b) {
if (r < 50 && g > 200 && b < 50) {
return true;
}
return false;
});
Note that the custom color function returns true whenever the g value is close to 255. To make sure other colors that could fit the green RGB pattern are excluded, it also checks that the r and b values are below 50, hence close to (r, g, b) = (0, 255, 0).
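The function passed to registerColor is an ordinary predicate over (r, g, b), so you can write and test it in isolation before registering it (isGreen below is an illustrative name, not a tracking.js API):

```javascript
// The same predicate body used with tracking.ColorTracker.registerColor,
// written as a standalone function so it can be exercised directly.
function isGreen(r, g, b) {
  return r < 50 && g > 200 && b < 50;
}

console.log(isGreen(0, 255, 0));   // true: pure green
console.log(isGreen(255, 0, 255)); // false: magenta
console.log(isGreen(40, 220, 30)); // true: close enough to green
```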
API docs
Object Tracker
Having rapid object detection as part of the library enables interesting examples for web applications, such as detecting faces, mouths, eyes and any other training data that may be added to the library later.
In addition to the tracking.js core script, there are training classifiers that teach the tracking.js core how to recognize the object you want to track. Make sure to include only the ones you need; each has an average size of ~60 KB:
<script src="tracking.js/build/data/face.js"></script>
<script src="tracking.js/build/data/eye.js"></script>
<script src="tracking.js/build/data/mouth.js"></script>
In order to use the object tracker, you need to instantiate the constructor, passing the classifier data to detect:
var objects = new tracking.ObjectTracker(['face', 'eye', 'mouth']);
Once you have the object tracker instance, you need to know when something happens, so you listen for track events:
objects.on('track', function(event) {
if (event.data.length === 0) {
// No objects were detected in this frame.
} else {
event.data.forEach(function(rect) {
// rect.x, rect.y, rect.height, rect.width
});
}
});
Now that the tracker instance is listening for track events, you are ready to start tracking:
tracking.track('#myVideo', objects);
API docs
Custom Tracker
It's easy to create your own tracker whenever you need one.
Let's say, for example, that you need to build an application that finds shadows in images. The built-in trackers don't support this use case, so you'll need to implement the algorithm yourself.
Don't walk away yet though! You have the option of building your feature on top of
tracking.js and, if you do so, you'll be able to take advantage of all the abstractions
it provides, like accessing the camera and getting the pixel matrix through the
canvas on every frame.
It's simple! First, you just need to create a constructor for your new tracker (let's call
it MyTracker) and have it inherit from tracking.Tracker:
var MyTracker = function() {
  MyTracker.base(this, 'constructor');
}
tracking.inherits(MyTracker, tracking.Tracker);
Then, you need to implement the track method for your tracker. It receives the pixel matrix for the current image (or video frame) and should hold the actual tracking algorithm. When the tracking is done, the code should call the emit method to send the results through the track event:
MyTracker.prototype.track = function(pixels, width, height) {
  // Your code here.
  this.emit('track', {
    data: [
      // The targets found go here.
    ]
  });
};
That's it! You can now use your tracker the same way the other trackers are used. First, create an instance of it:
var myTracker = new MyTracker();
API docs
Utilities
For a better understanding of the library architecture, the implementation is divided
in several utilities, it also includes several computer vision algorithms to help you
implement your custom solutions. To develop computer vision applications using
only raw JavaScript APIs could be too verbose and complex, e.g. capturing users'
camera and reading its array of pixels.
The big amount of steps required for a simple task makes web developers life hard
when the goal is to achieve complex implementations. Some level of encapsulation
is needed in order to simplify development. The proposed library provides
encapsulation for common tasks on the web platform.
API docs
Brief
Brief also provides a method to match the features described in descriptors1 and descriptors2:
var matches = tracking.Brief.reciprocalMatch(corners1, descriptors1, corners2, descriptors2);
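The idea behind reciprocal matching can be sketched independently of tracking.js: a pair is kept only when each descriptor is the other's nearest neighbor. A toy version over 1-D descriptors, with absolute difference standing in for the real descriptor distance (nearest and reciprocalMatch here are illustrative helpers, not the library's internals):

```javascript
// Index of the element of `set` closest to value `d`.
function nearest(d, set) {
  var best = 0;
  for (var i = 1; i < set.length; i++) {
    if (Math.abs(set[i] - d) < Math.abs(set[best] - d)) best = i;
  }
  return best;
}

// Keep pair (i, j) only if j is i's nearest neighbor AND vice versa.
function reciprocalMatch(set1, set2) {
  var matches = [];
  set1.forEach(function(d, i) {
    var j = nearest(d, set2);
    if (nearest(set2[j], set1) === i) matches.push([i, j]);
  });
  return matches;
}

console.log(reciprocalMatch([10, 50], [52, 11])); // [[0, 1], [1, 0]]
```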
API docs
Convolution
Convolution filters are very useful generic filters for image processing. The basic idea is that you take the weighted sum of a rectangle of pixels from the source image and use that as the output value. Convolution filters can be used for blurring, sharpening, embossing, edge detection and a whole bunch of other things.
In order to horizontally convolve image pixels you can do:
tracking.Image.horizontalConvolve(pixels, width, height, weightsVector, opaque);
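The weighted-sum idea is easy to see in one dimension. A simplified sketch of what a horizontal pass does per row (edge pixels are clamped; no normalization — convolveRow is an illustrative helper, not the tracking.js implementation):

```javascript
// 1-D convolution of a row of gray values with a weights vector.
function convolveRow(row, weights) {
  var half = Math.floor(weights.length / 2);
  return row.map(function(_, i) {
    var sum = 0;
    for (var k = 0; k < weights.length; k++) {
      // Clamp out-of-range neighbors to the row edges.
      var j = Math.min(row.length - 1, Math.max(0, i + k - half));
      sum += row[j] * weights[k];
    }
    return sum;
  });
}

// With all-ones weights, each output is the sum of a 3-pixel window:
console.log(convolveRow([0, 0, 3, 0, 0], [1, 1, 1])); // [0, 3, 3, 3, 0]
```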
API docs
Gray Scale
Converts a color from a colorspace based on an RGB color model to a grayscale
representation of its luminance. The coefficients represent the measured intensity
perception of typical trichromat humans, in particular, human vision is most
sensitive to green and least sensitive to blue.
To convert the image's pixels to grayscale:
tracking.Image.grayscale(pixels, width, height, fillRGBA);
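As a sketch of what such a conversion does per pixel, here is a luminance function using the widely used Rec. 601 luma coefficients (shown for illustration; they weight green highest and blue lowest, matching the sensitivity described above):

```javascript
// Per-pixel luminance from RGB (Rec. 601 coefficients, illustrative).
function luminance(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Pure green reads brighter than pure blue of the same intensity.
console.log(luminance(0, 255, 0) > luminance(0, 0, 255)); // true
```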
API docs
Image Blur
A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function, typically to reduce image noise and detail.
API docs
Integral Image
A summed area table is a data structure and algorithm for quickly and efficiently
generating the sum of values in a rectangular subset of a grid. In the image
processing domain, it is also known as an integral image.
To compute the integral image of an image's pixels using tracking.js you can do:
tracking.Image.computeIntegralImage(
pixels, width, height, opt_integralImage, opt_integralImageSquare,
opt_tiltedIntegralImage, opt_integralImageSobel);
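The summed area table itself is simple to build: each entry holds the sum of all values above and to the left of it, computed in one pass. A minimal sketch over a 2-D grid (not the tracking.js internals, which work on flat pixel arrays):

```javascript
// sat[y][x] = sum of grid values in the rectangle from (0,0) to (x,y).
function summedAreaTable(grid) {
  var h = grid.length, w = grid[0].length;
  var sat = [];
  for (var y = 0; y < h; y++) {
    sat.push([]);
    for (var x = 0; x < w; x++) {
      sat[y][x] = grid[y][x]
        + (x > 0 ? sat[y][x - 1] : 0)            // sum to the left
        + (y > 0 ? sat[y - 1][x] : 0)            // sum above
        - (x > 0 && y > 0 ? sat[y - 1][x - 1] : 0); // overlap counted twice
    }
  }
  return sat;
}

var sat = summedAreaTable([[1, 2], [3, 4]]);
console.log(sat[1][1]); // 10: sum of all four values
```

Once the table exists, the sum of any rectangular subset falls out of four lookups, which is what makes detectors like Viola-Jones fast.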
API docs
Sobel
Computes the vertical and horizontal gradients of the image and combines them to find edges. The implementation here first grayscales the image, then takes the horizontal and vertical gradients, and finally combines the gradient images to make up the final image.
To compute the edges of an image's pixels using tracking.js you can do:
tracking.Image.sobel(pixels, width, height);
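The combination step described above amounts to taking, for each pixel, the magnitude of its (horizontal, vertical) gradient pair — a sketch:

```javascript
// Edge strength per pixel from the two Sobel responses gx and gy.
function gradientMagnitude(gx, gy) {
  return Math.sqrt(gx * gx + gy * gy);
}

console.log(gradientMagnitude(3, 4)); // 5: a 3-4-5 gradient pair
console.log(gradientMagnitude(0, 0)); // 0: flat region, no edge
```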
API docs
Viola-Jones
The Viola-Jones object detection framework is the first object detection framework to provide competitive object detection rates in real time. This technique is used inside the tracking.ObjectTracker implementation.
To use Viola-Jones to detect objects in an image's pixels using tracking.js you can do:
tracking.ViolaJones.detect(pixels, width, height, initialScale, scaleFactor, stepSize, edgesDensity, classifier);
API docs
Web Components
Many of the existing computer vision frameworks are not available on the web; in addition, they are too complex to learn and use. The main goal of tracking.js is to provide those complex techniques in a simple and intuitive way on the web. We believe computer vision is important to improving people's lives, and bringing it to the web will make that future a reality a lot faster.
We also believe that Web Components are the future of encapsulation on the web,
therefore tracking.js library features are available for you as custom elements on
the tracking-elements repository.
Can you imagine tagging your friend's face in a picture with one line of HTML? Or tracking a user's face with the same API? This section will show you how. It requires Bower, a front-end package manager. Once you have Bower installed, install tracking-elements:
$ bower install tracking-elements --save
After installing tracking-elements, a few custom elements become available. They extend the native <canvas>, <img> and <video> elements with tracking functionality.
Color Element
As a first step in using the tracking.js web components, you need to learn how to extend a native DOM element with tracking functionality using the is="" attribute. The tracking target is set through the target="" attribute and accepts different values depending on the tracker you are using, e.g. colors or objects.
<img is="image-color-tracking" target="magenta cyan yellow" />
<canvas is="canvas-color-tracking" target="magenta cyan yellow"></canvas>
<video is="video-color-tracking" target="magenta cyan yellow"></video>
Elements extending <video> can request the user's camera through the attribute camera="true". Note that when this attribute is passed, the browser asks the user for permission to share their camera. The custom elements expose events and methods from Tracker; for more information, go to the API docs. The next section covers how to tag friends' faces in a picture using the ObjectTracker.
API docs
Object Element
Let's create an example that takes an image of your friends' faces and marks each of them with a rectangle. In this step, you'll create an example file under the examples/ folder of the directory where you unzipped the project. Go to that directory and create a file called tracking_element.html in your favorite editor. The starting file looks like this:
<!DOCTYPE html>
<html>
<head>
<!-- Importing Web Component's Polyfill -->
<script src="bower/platform/platform.js"></script>
<!-- Importing Custom Elements -->
<link rel="import" href="../src/image-object-tracking.html">
</head>
<body>
</body>
</html>
The next step will teach you how to plot rectangles on your friends' faces. You can listen for track events directly on your DOM element, e.g. img.addEventListener('track', doSomething). The event fires when all faces are found in the image. The event payload (event.detail.data) is an array of objects containing the coordinates of all the faces. Just pass each of them to the helper function plotRectangle to plot the face:
<!DOCTYPE html>
<html>
<head>
<!-- Importing Web Component's Polyfill -->
<script src="bower/platform/platform.js"></script>
<!-- Importing Custom Elements -->
<link rel="import" href="../src/image-object-tracking.html">
</head>
<body>
<!-- Using Custom Elements -->
<img is="image-object-tracking" target="face" src="assets/faces.png" />
<script>
var img = document.querySelector('img');
// Fires when faces are found on the image.
img.addEventListener('track', function(event) {
event.detail.data.forEach(function(rect) {
plotRectangle(img, rect);
});
});
function plotRectangle(el, rect) {
var div = document.createElement('div');
div.style.position = 'absolute';
div.style.border = '2px solid ' + (rect.color || 'magenta');
div.style.width = rect.width + 'px';
div.style.height = rect.height + 'px';
div.style.left = el.offsetLeft + rect.x + 'px';
div.style.top = el.offsetTop + rect.y + 'px';
document.body.appendChild(div);
return div;
}
</script>
</body>
</html>