Software Developer Journal
The Best Of
Dear Readers!
Our Best of SDJ issue is finally released and it is free to download! We worked hard and we hope that it shows. This issue is dedicated mostly to web development. We tried to compare as many frameworks as we could. Our jQuery section starts with jQuery is Awesome. So, Use it Less! by Willian Carvalho. The author is convinced that jQuery is a great tool, but that it is sometimes used in the wrong way. This article is a voice in the discussion about the proper use of jQuery.
Then you'll find jQuery Promises by Ryan O'Neill. This article is related to the previous one. The author shows in a simple way how you can manage all that you want with jQuery without complications.
Davide Marzioni shows a simple trick with web2py, and Tomomichi Onishi presents a tutorial for creating a simple Hyakunin-Issyu application using Sinatra and Heroku. Manfred Jehle explains, from a theoretical standpoint, how you can start developing better applications.
Aimar Rodriguez covers Django in the article entitled Developing your own GIS application with GeoDjango and Leaflet.js.
Also look closely at the other articles. You need to read the article on AexolGL, and I think you will find the new 3D graphics engine full of new tools. This issue contains really interesting content and we are happy to publish it for you!
We're hoping you'll enjoy our work.
Ewa & the SDJ Team
Copyright 2015 Hakin9 Media Sp. z o.o. SK
Table of Contents
AexolGL New 3D Graphics Engine ...............................................................................6
jQuery is Awesome. So, Use it Less!..................................................................................9
by Willian Carvalho
jQuery Promises..................................................................................................................11
by Ryan O'Neill
CreateJS In Brief................................................................................................................52
by David Roberts
We wanted to create a tool for small/medium-sized developer studios, indie developers, that would let them design
3D projects on any platform they want.
AexolGL PRO is a tool for creating games and applications natively in C++/Python for the following platforms: iOS, Android, Windows, Mac and Linux. AexolGL WEB is used to create games and applications for web browsers (Mozilla Firefox, Safari, Chrome) without the need for plugins, as well as simple webview apps and games for iOS and Android.
AexolGL WEB is a perfect tool for creating visualizations. 3D technology is a modern form of presentation that works perfectly for visualizing interiors, buildings and product models (e.g. cars and electronic devices). AexolGL takes website product presentation to a whole new level.
AexolGL team
Ready-for-instantiation animated sprite object from a JSON file (C++)
The engine has the most popular optimization algorithms available, although not as advanced as Umbra's.
A simple way of creating objects with assigned materials, shaders, geometry and transformation matrices. In AexolGL the object is ready for display after only 30 lines of code (C++)
We give the developer the ability to define controls on keyboard, joystick, mouse and touchscreen. It is also possible to define a virtual joystick on the touchscreen. However, how the application reacts to individual signals is entirely up to its creator. By default, signals from the mouse and one-finger touches are treated the same, but they can easily be assigned to different actions.
It might not make a big difference when iterating over a small number of elements, but for larger lists the performance would be compromised, because deep inside, jQuery is using plain old Javascript with an anonymous function call for every step of the loop.
Besides, the number of lines of code required to build a loop is not very different between jQuery and pure Javascript. Let's see:
Listing 1. jQuery sample
var arr = ['a', 'b', 'c'];
$(arr).each(function(index, data) {
  console.log(index, data);
});
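The pure Javascript counterpart of the loop above is not shown in this extract; a minimal sketch of the same iteration with a plain for loop (the seen array is added here only so the result is inspectable):

```javascript
// Plain Javascript equivalent of the jQuery $.each() loop above
var arr = ['a', 'b', 'c'];
var seen = []; // collects what was logged, purely for inspection
for (var i = 0; i < arr.length; i++) {
  console.log(i, arr[i]);
  seen.push(i + ':' + arr[i]);
}
```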
As you can see, there is no big difference between them, but keep in mind that pure JS is faster, for sure.
There are other cases where we should be using pure Javascript instead of jQuery. Unfortunately, there is no magic way to decide which, other than analyzing each situation.
It is very hard to guess which one is faster, so you will want to measure before deciding which one to use.
A simple way to compare your code with jQuery (or any other code) is by using JSPerf (http://jsperf.com/).
With this online tool, it's possible to create any number of test cases and run them against each other to see
how many operations are done per second.
JSPerf is also able to store the results for every test in every browser and version. This is good, because some
tests might be quite similar, but one particular browser might have a very different result than the others,
which can help you on your decision.
In conclusion, when you are writing a web application, take a five-second break, think about what you are about to use jQuery for, and try to use it properly.
We all know how brilliant it is and how much it has helped us over the past years, but it's important to always keep in mind why and what it was really built for. Use less jQuery for the best!
Willian is a Senior Javascript Engineer at TOTVS. He has worked specifically with Javascript for years and wants to discuss jQuery.
jQuery Promises
by Ryan O'Neill
jQuery has made working with asynchronous Javascript incredibly easy. However, callback functions still quickly get out of control, leading to code that is both hard to read and hard to debug. Using the promises pattern via the jQuery Deferred object is a great way to keep your code clean and maintainable.
This article will give an overview of jQuery's implementation of the promise pattern, how it can be used to write clean asynchronous jQuery, and an example implementation.
The author has been working with the jQuery library for over five years. He is currently a front-end engineer
with Twitter designing and building single page apps using jQuery and other libraries.
For context, this article assumes that the reader has some general Javascript and jQuery experience and is
familiar with the asynchronous nature of the language.
Since jQuery's initial release in 2006, it has grown from a simple utility library into the de facto standard for writing Javascript in the browser. jQuery solved many issues, such as cross-browser incompatibilities and shaky DOM querying, and also introduced features like the $.ajax() function, which made it easier than ever for developers to build dynamic pages and applications without the need for full page reloads.
Listing 2. Multiple AJAX requests with nested callbacks
$.ajax({
  type: 'GET',
  url: '/user',
  success: function (user) {
    $('#user-name').val(user.name);
    $.ajax({
      type: 'POST',
      url: '/user/login',
      data: { userId: user.id },
      success: function (loginResult) {
        alert(loginResult);
      },
      error: function (err) {
        // Error handling
      }
    });
  },
  error: function (err) {
    // Error handling
  }
});
Even in this basic example, adding a single nested request and some trivial error handling makes the code much more difficult to read (partly due to switching to the longer form $.ajax() for error handling; note that post() and get() are wrappers for ajax()). In practice, callback and error handling functions are typically much longer, and three or four nested requests are commonly necessary. This is especially true if the application uses more than one web service to function. At this point the code becomes effectively unreadable.
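The promise-based listing that the following discussion refers to is not present in this extract. A sketch of what such a chain looks like, using the promise returned by $.ajax() (the variable name userLoginRequest follows the surrounding discussion; the rest is illustrative, and this runs in a browser with jQuery loaded):

```javascript
// Sketch: the nested-callback example rewritten with chained jQuery promises
var userRequest = $.ajax({ type: 'GET', url: '/user' });

userRequest.then(function (user) {
  $('#user-name').val(user.name);
  // Return the next request so its result is piped into done()
  var userLoginRequest = $.ajax({
    type: 'POST',
    url: '/user/login',
    data: { userId: user.id }
  });
  return userLoginRequest;
}).done(function (loginResult) {
  alert(loginResult);
}).fail(function (err) {
  // One hoisted, generic error handler for either request
});
```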
The above code accomplishes the same set of tasks as the code in Listing 2. Through the use of jQuery promises we are able to chain the asynchronous requests together rather than relying on messy nested callbacks, resulting in a script that is both easier to read and more maintainable. A few things to note:
done() and then() appear to be similar. A key difference is that then() will pipe the result of the callback(s) into the next piece of the chain.
Note that in the then() block we need to return the result of the second AJAX request rather than calling it standalone. This is so that the resulting $.Deferred from the userLoginRequest gets passed into the done() function, allowing us to make use of its result.
We also hoisted a generic error handler for an even cleaner solution.
What is $.Deferred?
Under the covers, $.Deferred is a stateful callback register that follows the convention of Promises/A (http://wiki.commonjs.org/wiki/Promises/A). The promise has three possible states: pending, resolved, and rejected.
Every $.Deferred object starts in the pending state and can move into either the resolved state via the
resolve() function or the rejected state via the reject() function. In practice, these two methods are called
internally by the library being used. For instance, the $.ajax() function handles resolving and rejecting the
promise. In fact, the resolve() and reject() functions will not even be available to us. This is because the object returned from $.ajax() is actually a $.Deferred promise, which exposes only the functions to attach callbacks and hides the functions that control the promise state. You can take advantage of this, too, if you are writing code that returns a promise that other code will subscribe to.
When a promise is rejected, all callbacks registered to the promise via the fail() function will execute.
Similarly, when a promise is resolved the callbacks registered via the then() or the done() method will be
called. If we need a block of code to run when the promise completes, regardless of whether it is rejected or resolved, we can attach those callbacks using the always() function. This is analogous to a finally block in a try/catch and is generally used for running clean-up code.
Listing 4. Using always()
$.get('/user', function (user) {
  $('#user-name').val(user.name);
}).always(function () {
  alert('AJAX request complete'); // This will always be called
});
Keeping a clean, maintainable, and readable code base requires active effort and diligence from all
developers involved. Promises are not a magic bullet and code can still get out of control when using
this pattern. When used correctly, promises can offer a large improvement in flow control relative to the
traditional callback pattern.
Ryan O'Neill was born in Washington D.C. in 1986. Since then he has taken residence in Miami, Atlanta,
Chicago and San Francisco. He has worked with web technologies for the better part of a decade and is
currently a senior front-end engineer with Twitter (you can follow him @rynonl).
In this article I want to share with you some tips and tricks that can be useful when programming with the web2py framework.
The only drawback of using web2py in an IDE (Aptana included) is that the IDE doesn't understand the context (the gluon module), and therefore autocompletion doesn't work. To solve this issue you can use a trick: add the following code to your models and controllers:
Listing 1. A trick
if False:
    from gluon import *
    request = current.request
    response = current.response
    session = current.session
    cache = current.cache
    T = current.T
This code isn't executed, but it does force the IDE to parse it and understand where the objects in the global
namespace come from.
Listing 5. Change the style of all inputs
for input_field in form.elements('input'):
    input_field['_style'] = (input_field['_style'] or '') + ';width: 200px;'
The init function requires the two fields to validate. Parameter a is always the value of the self-referenced field, while b is the other. Optionally, another validation function can be passed in the validator parameter.
The validation functions return a tuple where the first value is the formatted value (no formatting is done in this case) and the second value is an error message (None if the value is correct).
In our case it looks like this.
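The LinkedFieldValidator class itself is not included in this extract; a minimal sketch consistent with the description above (the original implementation may differ):

```python
class LinkedFieldValidator(object):
    """Validate a field that is linked to another field.

    Sketch reconstructed from the description: a is the value of the field
    this validator is attached to, b is the value of the linked field, and
    validator is an optional inner validation function.
    """

    def __init__(self, a, b, validator=None):
        self.a = a
        self.b = b
        self.validator = validator

    def __call__(self, value):
        # If this field is empty but the linked field is filled, accept it
        if not self.a and self.b:
            return (value, None)
        # Otherwise delegate to the inner validator, if one was given
        if self.validator:
            return self.validator(value)
        # web2py convention: (formatted value, error message or None)
        return (value, None)
```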
Listing 7. A sample code
db.website.weblink.requires = LinkedFieldValidator(request.vars.weblink,
                                                   request.vars.ipaddress,
                                                   IS_URL(mode='generic', allowed_schemes=['ftp', 'http', 'https']))
db.website.ipaddress.requires = LinkedFieldValidator(request.vars.document_file,
                                                     request.vars.weblink,
                                                     IS_IPV4())
I'm Davide Marzioni and I have worked since 2011 as a software developer for a small company in Italy, mainly focused on research and development in the automation and electronics fields. I use web2py in many projects because it easily brings your application to a web environment.
Figure 1. Image of Karuta Hyakunin-Issyu based card game (photo credit: aurelio.asiain via photopin cc)
Overview
In this tutorial, well see how to create a web application using Sinatra, the light-weight Ruby framework,
and how to deploy it on Heroku, the web application hosting service.
We create a simple web application as a sample, using Hyakunin-Issyu, the beautiful anthology of Japanese
ancient poems, as a theme.
This app has only two pages: one shows the list of all the poems, and the other shows the detail of each poem.
(Don't worry if you have never heard of Hyakunin-Issyu. You'll find a quick guide at the end of this introduction.)
Ruby
Gem management using Bundler
Git
Haml
About Hyakunin-Issyu
Hyakunin-Issyu, or the one hundred poems by one hundred poets, is an anthology of one hundred tanka, a
Japanese poem of thirty-one syllables, selected by a famous poet in the medieval period.
http://en.wikipedia.org/wiki/Ogura_Hyakunin_Isshu Wiki page of Hyakunin Isshu
Tanka is made of thirty-one syllables: five-seven-five for the first half of the poem and seven-seven for the last half.
As it can't contain very much information in such a limited number of words, it's very important to feel the aftertaste of the poem.
Composing a poem with very selected words, describing the delicate feelings and the beautiful scenery of
nature, is a very Zen-like way and this is the culture we Japanese should be proud of.
We often play the Hyakunin-Issyu based card game called Karuta in the New Years holidays in Japan.
The basic idea of the Karuta game is to be able to quickly determine which card out of an array of cards is
required and then to grab the card before it is grabbed by an opponent.
Chihayafuru, the karuta-themed comic, became a big hit in Japan, and now this traditional culture has become popular again.
Please take a look at this comic if you are interested.
http://www.youtube.com/watch?v=rxebYxY9NXE opening video for Chihayafuru anime
Okay, I think thats enough for the intro.
Now its time to start the tutorial.
Using Sinatra
The first half of this tutorial is to create a simple application with Sinatra.
To start with the smallest possible project, all you need is two files.
Listing 1. The construction of the project files
|-sample
|-main.rb
|-Gemfile
Listing 2. The minimum implementation of main.rb
#main.rb
require 'sinatra'

get '/' do
  'hello world.'
end
Next, make a Gemfile for gem management. For now you only need the Sinatra gem.
Listing 3. List gems on Gemfile
#Gemfile
source 'https://rubygems.org'
ruby '2.0.0'
gem 'sinatra'
From the terminal, run bundle install to install gems to the project.
The project settings are almost done!
Move to the project root and run ruby main.rb from the Terminal.
The application will run on port 4567 (this may be different on your machine, so be sure to check the output in the Terminal).
Open your browser and access localhost:4567.
If successful, you should see the words hello world displayed there.
Adding more pages
Okay, now we're going to add some more pages to this app (it's just too simple, otherwise!).
Edit main.rb to do this:
Listing 4. Adding more pages to main.rb
#main.rb
...
get '/poem' do
  'this is another page!'
end
Well done! Now we have another page with the route /poem.
Restart the project by running ruby main.rb and access localhost:4567/poem in your browser.
You should now see this is another page! displayed there.
Auto reloading Sinatra
It can get tiresome to restart the process every time you've changed something in the code.
To make things easier, lets introduce auto-reloading into our app.
Listing 5. Add sinatra-contrib to Gemfile
#Gemfile
...
gem 'sinatra-contrib'
That's all we need. Try restarting main.rb again (it's the last time, I promise!), then access localhost:4567
in the browser.
Next, change the hello world message in main.rb and refresh the page. If all goes well, you'll now see the message changed without having to restart.
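One step (the Listing 6 referenced by the numbering) is missing from this extract: with sinatra-contrib in the Gemfile, the reloader still has to be required in main.rb, presumably like this:

```ruby
#main.rb
require 'sinatra'
require 'sinatra/reloader' if development?  # re-reads changed files on each request
```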
Accept parameters
One last thing for this section is to accept URLs with parameters, like /poem/13, so that the page contents
update based on this new value.
Listing 7. Accept parameters in main.rb
#main.rb
get '/poem/:id' do
  "this page shows the detail of poem-#{params[:id]}"
end
Add :id to the get part, and use that param with params[:id].
Now try accessing localhost:4567/poem/13. The content should have changed.
Listing 8. Add HyakuninIssyu gem to Gemfile
#Gemfile
gem ...
gem 'HyakuninIssyu'
Index page
Okay, well now finish the index page using this gem.
This page shows the list of all the poems. Use the poems method of the gem:
Listing 11. List all the poems in index page
#main.rb
get '/' do
  data = HyakuninIssyu.new
  @poems = data.poems
end
It'll be messy if you write the whole HTML document in main.rb, so we will divide the code and use separate view files.
Listing 12. The construction of the project after adding view files
|-sample
|-...
|-views
|-index.haml
|-poem.haml
And now create the index.haml file to show the list of poems.
Listing 14. Index.haml
-# views/index.haml
%h1 INDEX
- @poems.each do |poem|
  - unless poem.nil?
    %p
      #{poem.kanji}
      %small #{poem.en}
Listing 15. Declare the use of haml file
#main.rb
get '/' do
  ...
  haml :index
end
As we enabled the parameter handling already, we use it to get poem data from the gem.
Listing 16. Developing poem detail page
#main.rb
...
get '/poem/:id' do
  id = params[:id].to_i # treat the parameter as an integer
  data = HyakuninIssyu.new
  @poem = data.poem(id)
  @poet = data.poet(id)
  haml :poem
end
We set the poem data to @poem and @poet, and declared that we use views/poem.haml as a view file.
The poem.haml file should look like this:
Listing 17. The content of poem.haml
-# views/poem.haml
%h1 POEM
%div
  %h2 Poem Info
  %p #{@poem.kanji}
  %small #{@poem.en}
%div
  %h2 Poet Info
  %p #{@poet.name.ja}
  %small #{@poet.name.en}
Access localhost:4567/poem/13 in the browser, perhaps with a different poem number, and check that the poem data is shown correctly.
Finish the development
To finish the development of this app, well link these two pages.
Listing 18. Add a link to index.haml
-# views/index.haml
%h1 INDEX
- @poems.each do |poem|
  %p
    %a(href="/poem/#{poem.id}") #{poem.kanji}
    %small #{poem.en}
Okay, we've now finished developing this very simple Sinatra web application.
It shows the list of all the poems of Hyakunin-Issyu, and you can see the detail of each poem.
Now lets try to deploy this to Heroku.
Heroku Deployment
The last half of this tutorial is deploying the Sinatra application to Heroku.
Before continuing, please sign up and create your account on Heroku.
https://id.heroku.com/signup Heroku Sign Up
You'll also need the Heroku Toolbelt to use the heroku command.
Please download this from the link below:
https://toolbelt.heroku.com/ Heroku Toolbelt
Okay, now lets get started.
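The creation command itself is missing from this extract; presumably it is the standard Heroku flow (APP-NAME is your choice):

```shell
# Log in once, then create an empty app from inside the project directory
heroku login
heroku create APP-NAME
```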
That's all. The empty app is created on Heroku and it's added to your git remote repository.
(You can check this by running the git remote command.)
Create config.ru file as shown below:
Listing 21. Create a config.ru file
#config.ru
require 'bundler'
Bundler.require
require './main' # requiring main.rb
run Sinatra::Application
If you're not familiar with git, check the Git Book or other tutorials.
http://git-scm.com/book Git Book
Now were ready for deployment!
Deploy to Heroku
Deploying to Heroku is extremely easy. Just run the following command:
Listing 23. Deploy command to Heroku
git push heroku master
That's it. After successfully building your app on Heroku, run heroku open or access APP-NAME.herokuapp.com to see your app.
Is your app working well? If you find some errors, please run heroku logs to see what's wrong.
Okay, thats the end of the tutorial.
The final version of the code is in my GitHub repository.
If your code doesn't work, please check there and compare it with yours.
And more...
This tutorial covers only the very basics of Sinatra and Heroku, to keep it simple.
If you find them interesting, please go further and get to know them better.
The following topics would be your next challenges:
Sinatra
Solution
To solve the flickering screen, use AJAX: it lets you replace any identifiable part of the web page without reloading the whole page. In other words, you need a real single-page application. Good AJAX libraries support features such as changing input element types, for example turning a text input into a drop-down box depending on the entered value, by identifying it on the server side and providing additional content back to the page.
With AJAX, you can develop user-friendly applications that feel like desktop applications.
Developers corner
Web application architecture covers not only the server side but also the client side. For an efficient client application it is not necessary to load all the JavaScript code up front in the single page. Such designs frequently result in slow, inefficient web applications with too much overhead that suffer from lost flexibility and maintainability. Keep an eye on the client-side HTML code structure and on loading and disposing of partial JavaScript code.
Solution
Use pure HTML on the client side! The reward for your efforts: approximately 80% compatibility with
common browsers.
Developers corner
Avoid using hacks to get a nonstandard or incorrectly implemented browser element running! When the browser is fixed, your hack will mostly produce side effects, so that you have to remove the code that previously worked around the bug. Use code that runs in all browsers at development time and you will be on the safe side!
No Menu
A lot of web applications are not designed with the elements commonly used in desktop applications. The look and feel of desktop applications is given by standard user interfaces like a menu bar with all the commands needed to handle the application. Web applications are frequently designed to marketing-page standards, which, as described above, are not the right approach for web applications.
Solution
Consider application processes, keeping the focus on making your web application function like common desktop applications. Use a common menu element to make all options available in the menu bar, and enable the icons and descriptions only when the function is available in the current context. Use common icons and domain-specific or general (common) naming for the menu items.
Developers corner
Use a state machine to handle all the combinations of menu item states and the availability of menu functions. Show and open events alone are not enough, because menu parts can also be set inactive or hidden depending on the current content state, just as in a common desktop menu.
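One way to sketch such a state machine for menu items (the item properties and context flags here are illustrative, not from the article):

```javascript
// Each menu item is in one of three states; the state is derived from the
// current application context rather than toggled ad hoc.
var MenuItemState = { ENABLED: 'enabled', INACTIVE: 'inactive', HIDDEN: 'hidden' };

function menuStateFor(item, context) {
  // Rules: hide items the user may not see at all, grey out items that
  // merely do not apply to the current content state.
  if (!context.loggedIn && item.requiresLogin) return MenuItemState.HIDDEN;
  if (!context.hasSelection && item.requiresSelection) return MenuItemState.INACTIVE;
  return MenuItemState.ENABLED;
}
```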
Ribbon
The ribbon user interface element is not often used in web applications. But this element provides fast access to many of the web application's functions and brings its handling closer to common Microsoft desktop applications.
Solution
If it makes sense, add a ribbon to your web application to make more useful functions accessible to users.
Don't hide functions in the depths of menu structures!
Developers corner
Use a state machine to handle all the combinations of ribbon item states and the availability of ribbon functions. The same additional states that apply to menu items, like hidden or inactive, apply to ribbon items too.
Solution
Keep the last actions in the background to implement the Undo and Redo functionality. In practice it is not so simple, but try to add it in your next update or new web application.
Developers corner
You must check each Redo operation to ensure it still makes sense at the current position. Undo is not a big problem, because each stored action refers directly to the content part to which it applies.
Wizard functionality
In some desktop applications a wizard helps users step by step through entering and editing data. The wizard makes it easier for users to enter structured data into the application. Such functionality is also used in online survey tools, and in many web applications a wizard would make it easier for the user to enter the data. Another option is to allow the user to switch between dialog- and wizard-based data editing.
Solution
Provide a wizard for the dialog-based data editing and allow switching between the two views.
Developers corner
Build the edit dialog from web parts and show or hide them to switch between common dialog content and wizard content.
Push function
In some desktop applications you are notified of other users' activities when they use the same data or file. The common workaround is notifying a user that the data they are viewing is already being edited by another user. Another method is presenting a read-only view until the editing has been completed.
This functionality needs information about what you are currently viewing and what other users are doing.
Web applications can also provide this functionality, but I have not seen many web applications, other than my own, implement it.
Solution
It is possible to implement a notification to other users handling the same data, but keep in mind, just as with desktop applications: use this functionality only when circumstances call for it.
Developers corner
Use a simple JavaScript timer to ask the server who is using the same data you are currently using, and hold notifications ready for the other user. With a second timer, ask the server for those notifications. Even without web sockets you can deliver content as if it were pushed from the server. At the moment there is no web socket implementation available that works on every browser and operating system.
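As a sketch of the timer-based polling described above (function and parameter names are illustrative; in a real application fetchFn would be an AJAX call to the server):

```javascript
// One polling cycle: fetch the server's answer, hand it to a callback.
function pollOnce(fetchFn, onUpdate) {
  var data = fetchFn(); // e.g. "who else is viewing this record?"
  if (data) onUpdate(data);
}

// Repeat the cycle on a timer; returns the timer id so it can be cleared.
// A second, independent timer can poll for pending notifications the same way.
function startPolling(fetchFn, onUpdate, intervalMs) {
  return setInterval(function () {
    pollOnce(fetchFn, onUpdate);
  }, intervalMs);
}
```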
Local devices
In desktop applications it is usually not a problem to add local devices, or other devices on the network, to the functionality of the solution. In web applications the use of local devices is often the reason why a web application is considered impractical to develop, printing excepted. But that assumption is not true! With a little more effort it is possible to use most local devices in a web application.
Solution
Make local devices accessible to web applications by wrapping them in a local service with a web service interface. With this trick it is possible to access local devices through the server and, by proxy, the external application.
Developers corner
If you build such services, the easier way is to use REST services.
Solution
Adding such a function to the web application does not take much effort. You have to capture the URL-based actions (GET, POST) and store them in a first-in-last-out (FILO) queue. The dump is quite easy to implement: select the HTML part, copy the outer HTML and send it through the server to the support team.
Developers corner
Use the standard jQuery functions to capture the HTML dump. The FILO queue needs a little more JavaScript effort, but not too much.
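A sketch of the action log described above. A first-in-last-out queue is simply a stack: the most recent action is read first, which is what a support team wants. Names and the size limit are illustrative:

```javascript
// Record the last user actions in a bounded stack (first in, last out)
var actionLog = [];
var MAX_ACTIONS = 50;

function recordAction(method, url) {
  actionLog.push({ method: method, url: url, at: Date.now() });
  if (actionLog.length > MAX_ACTIONS) actionLog.shift(); // drop the oldest
}

function lastActions() {
  // Most recent first, ready to send to the support team with the HTML dump
  return actionLog.slice().reverse();
}
```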
Device detection
Device detection has little significance in desktop applications because they mostly run in similar environments, or are designed to run on different operating systems with standard desktop screens. Exemplary web applications, by contrast, support a wide range of devices such as tablets and phones along with desktop machines. By detecting the device, you are able to deliver device-dependent content to the target. Common web application solutions support the usual boring actions such as zooming and moving screen content, but frequently suffer from issues such as clicking the wrong link because your fingers are too thick. Mobile users benefit from device-specific content that is designed for fingers.
Most web applications detect devices at the CSS level, but deliver the full content every time and merely hide some parts or enable alternate designs for smartphones and tablets. Such a solution is ineffective because it wastes bandwidth by forcing the device to download extra, unnecessary content. Only delivering the effectively needed content is an efficient solution.
Solution
Detect the device in your web application and deliver only device-specific code.
Developers corner
Don't be satisfied with common CSS solutions; go a step further by detecting the device and delivering content tailored to it. The smartphone screen must be visible at 100% scale at all times! Overall, a smartphone-ready design will involve some effort.
Common URL
In the context of device detection it is mandatory that you have the same URL and sub-URLs for all devices. The common URL is needed to store page links in the cloud or in a common link collection. Only with a common URL base will the web application be handy for users working with different device types.
Developers corner
Don't think about URL switches; they don't solve the problem of URLs stored in link collections!
Final statement
Web applications are not dead! In today's multi-device environment, web applications are entering a new prime of life. Most web applications are developed on low budgets, yet they are used like desktop applications developed at much higher ones. Closing this gap and adding the missing functions described above can lift web applications to a higher level. Clearly it usually takes some effort to reach this higher state, but in the end you get a solution ready to use on several devices.
Another fact
Web applications developed more than thirteen years ago still run without problems in current browsers, without any update of the user interface. How many desktop applications can claim such a lifetime, given all the operating system changes of the past?
The discussion about apps reminds me of the late 90s, when the battleground was between operating systems. At the moment we have the same kind of solution as we had back then with Java: a crutch that does not really work perfectly for app development.
For the foreseeable future, the web application provides a common base for all operating systems and devices: the browser.
Manfred is CEO and Chief Architect of several products and customer projects and has more than 17
years of experience in web applications and more than 28 years in general information technology.
Contact: jehle@cetris.ch
You will learn to develop a simple geographic application using Django: how to set up a geospatial database using PostgreSQL and PostGIS, how to represent and manipulate the data stored in this database with Django models and the GeoDjango extensions for these models, and how to present it to the user using the HTML5 map library Leaflet.js.
To fully understand this article, some knowledge of the basics of the Django web framework is recommended, as well as knowledge of the Python programming language, even though neither is required. It is also advisable to have some knowledge of the JavaScript programming language.
A Geographic Information System or GIS is a computer system that enables users to work with spatial
data. Even if this concept was invented around the 60s, it has only gained relevance in recent years, with powerful applications like Google Maps or OpenStreetMap. The proliferation of these kinds of applications has been so huge that now even the smallest local transport company uses these technologies. We have all kinds of projects, from social networks based on routes, like Wikiloc, to projects which attempt to bring a spatial dimension to the Semantic Web, like LinkedGeoData or GeoSPARQL.
One of the biggest benefits that the developer community has gotten from this phenomenon is the
appearance of diverse tools and frameworks for spatial data manipulation, and this is where GeoDjango comes
into play. Django is an open source web development framework written in Python; it has a huge community
and a wide range of tools for developers. Many of these tools come included in the contrib package of
the framework, where we can find the geographic web framework GeoDjango.
What this package offers to the web developers is the following:
The Model API, to store, query and manipulate the geographic data stored in the database using the
typical Django models,
The Database API, to manipulate different spatial database back ends,
The Forms API, which provides some specialized forms and widgets to display and edit the data on a map,
The GeoQuerySet API, which allows using the QuerySet API for spatial lookups,
The GEOS API, a wrapper for the open source GEOS geometry engine, written in C++,
The GDAL (Geospatial Data Abstraction Library) API,
The measurement objects, which allow convenient representation of distance and area measure units.
Apart from the aforementioned, several utility functions and commands are included in the package, as well as a
specialized administration site.
We will be developing a very simple GIS application, which allows users to upload routes and to visualize them
on a map. We have already seen that we can store and manipulate all this data with GeoDjango; however, we
still need some way to present this data adequately to the users of the web page. Fortunately, there are several
choices for this purpose, though we will usually find two main alternatives: OpenLayers and Leaflet.
Both are JavaScript libraries which allow us to create a dynamic map on a web page. Which library to choose is
up to each developer; I personally prefer Leaflet.js for its ease of use and learning. However, OpenLayers is a
more mature project and promises several improvements in its third version, which is yet to come.
With these two tools we can easily create a GIS web application of any kind. However, when developing
one we will have several concerns not related to the available technologies, for example: where
can we get our data from? One approach, followed by many, is to let our users generate the data;
however, this is not always suitable for our application. It is also quite common to use external information
sources, like available web services. Even though we are not going to explore the possibilities that these web
services offer, here is a list of some of them with a few of the functions they provide.
Nominatim, a tool to search OSM (OpenStreetMap) data. It allows address lookup and reverse
geocoding, among other functions. A guide to this search engine is published at
http://wiki.openstreetmap.org/wiki/Nominatim,
The OSM API. OpenStreetMap offers an XML API which allows uploading data to and downloading data from their
database. You can find more about it at the following address: http://wiki.openstreetmap.org/wiki/API_v0.6,
LinkedGeoData. For those desiring to implement a semantic spatial web application, LinkedGeoData
offers an API and has developed an ontology. It even has a SPARQL endpoint. More
information at http://linkedgeodata.org/OnlineAccess,
Google Maps API web services. Google Maps has its own API (and even a library for map visualization).
However, it imposes several limitations, so it is rarely used for more advanced GIS applications. More information
on the Google developers page: https://developers.google.com/maps/documentation/webservices.
From now on I will be assuming that PostgreSQL, Django and Python 2.7 are installed. I am working
with an Arch Linux distribution, so some installation steps may vary. Also, I will not be explaining all the
basics of the Django framework; some aspects like the settings file and the urls.py file will be omitted. If
you don't know the framework, I encourage you to look up the Django documentation, which explains
everything very nicely. You can find it at the following address: https://docs.djangoproject.com/en/1.5/.
Installing PostGIS will differ depending on the OS you are using. In my case I can obtain it from the
official repositories of my Linux distribution; however, PostGIS offers binary installers for Windows,
OS X and Linux, plus instructions for downloading and compiling the source code on the following page:
http://postgis.net/install.
First, we will create a user for our spatial database, and then a database in which we will later load
the PostGIS spatial types. We will also need to install the pl/pgSQL language on the database, since the
extension needs it. Then, we will load the PostGIS spatial types from the directory in which they reside (in
my case /usr/share/postgresql/contrib/postgis-2.1/). A common next step is to make this database a template,
so that we can create new spatial databases without repeating all these steps.
Listing 1. Setting up the spatial database (the commands themselves were lost in this copy; this is a
reconstruction of the steps described above, so names and paths may differ from the author's)
$ su simplegisuser
Password:
$ createdb simplegisdb
$ createlang plpgsql simplegisdb
$ psql -d simplegisdb -f /usr/share/postgresql/contrib/postgis-2.1/postgis.sql
$ psql -d simplegisdb -f /usr/share/postgresql/contrib/postgis-2.1/spatial_ref_sys.sql
$ psql -d postgres
> UPDATE pg_database SET datistemplate = 'true' WHERE datname = 'simplegisdb';
Platform-specific instructions can be found on the PostGIS homepage and on the GeoDjango documentation
page: https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#installation.
After all the installations are done we can finally get to creating our project. First we will create a Django
project. The first thing to do is to access the settings.py file in order to add django.contrib.gis to the installed
apps. We will also need to edit the database connection settings to match the database we created in
the previous section. The modified parts of the settings.py file should look similar to this:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'simplegisdb',
        'USER': 'simplegisuser',
    }
}

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.gis',
)
Once all the setup steps are done, we can finally start coding our application.
The models
One essential part of most Django applications is the models, and this case is no different. Since we want to
store routes in our web page, we will first create a route model in a models.py file, following the convention.
If you are familiar with this framework, you should know that the first thing to do is to import the models;
however, since we are storing spatial data we won't use the conventional models module, but the one defined in
GeoDjango. For this, we will import the models from django.contrib.gis.db.
Apart from this little change, we can define our models as usual with the advantage that we now have some
additional fields related to spatial data. Taking advantage of this feature, we will declare the model for our
routes, which will contain the following fields:
A name for the route (Django CharField),
The date on which it was uploaded (Django DateField),
The geometric representation of the route (GeoDjango MultiLineStringField).
Here we start seeing the tools that this package offers us. In our route model, we have declared a
MultiLineString field, which corresponds to one of the geometry objects specified in the OpenGIS Simple
Feature specification. Simply put, a MultiLineString is formed by a list of LineStrings, each of which represents
a sequence of points or coordinates. You can find more about the models API on the Django documentation
page: https://docs.djangoproject.com/en/dev/ref/contrib/gis/model-api/.
The models.py file should look similar to this:
from django.contrib.gis.db import models

class Route(models.Model):
    name = models.CharField(max_length=255)
    creation_date = models.DateField()
    representation = models.MultiLineStringField(dim=3)
    objects = models.GeoManager()
The reason for dim (dimension) to be 3 is to allow the field to store altitude. This attribute specifies
the dimension of the geometric field, which defaults to 2. All geometric fields are composed
of points, which must have at least two dimensions (latitude and longitude), but can be extended with a
third (altitude). The choice of dimensions depends on the application to build and on the sources
of information, and since GPX files allow recording altitude, we will include all three dimensions.
Of course, it is possible to work around this and represent this geometric object without the use of this
package. We could have defined our own Point model in which we store coordinates as floats, then
defined a LineString model, and so on; however, this would require extra work and, more importantly,
we wouldn't have access to all the utilities that the GEOS API offers.
Once the model is defined we can finally synchronize the models with the database, using the following
command: python2 manage.py syncdb.
The views
Django views are functions that take a web request and return a web response. For this simple example,
we will define a single view which will always return an HTML response. The document it returns will
contain the list of all the uploaded routes and a form which will allow our users to upload files.
As usual, we will create a forms.py file in which the form will be defined. This form will contain two
fields, the first one for the name of the route and the second one for the file. We will also perform two
validations: checking whether the name already exists, and checking whether the uploaded file is a GPX
(though at this point we can only check that by looking at the extension of the file).
Listing 2. Forms file (the class declaration and field definitions were lost in this copy and have been
reconstructed from the description above; the Route import path depends on your app layout)

from django import forms
from models import Route

class GPXUploadForm(forms.Form):
    name = forms.CharField(max_length=255)
    file = forms.FileField()

    def clean_name(self):
        name = self.cleaned_data['name']
        if Route.objects.filter(name=name).count():
            raise forms.ValidationError('That name is not available')
        return name

    def clean_file(self):
        f = self.cleaned_data['file']
        extension = f.name.split('.')[-1]
        if extension not in ['gpx']:
            raise forms.ValidationError('Format not supported')
        return f
Next we will create a view which will handle the uploading of files and return the HTML file containing
the map and the form. Before that, however, we should take care of parsing the documents that will be
uploaded to our page. The GPX files we will be parsing follow a structure similar to the following:
<gpx>
  <trk>
    <trkseg>
      <trkpt lat="XXX" lon="XXX">
        <ele>XXX</ele>
        <time></time>
      </trkpt>
    </trkseg>
  </trk>
</gpx>
For this we will create a file called utils.py and define a function for parsing the file. This function will create
a new LineString for every trkseg found, which will contain all the Points identified in the trkpt tags. When
the trk tag ends, all these LineStrings will be used to create the MultiLineString which will be stored in the
database. There are many ways to do this, so I won't go into the details of the implementation; you can
find the utils.py file in the repository anyway. Just one note: I have used the iterative parser from the lxml
Python package to parse the file. This is because GPX files can be quite large (for testing purposes I used a
file with 33,000 lines), so the iterative parser may improve speed and avoid some recursion problems.
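As a rough illustration of the parsing strategy just described (this is not the author's utils.py, which uses lxml, also receives the route name, and builds GEOS LineString objects), a minimal iterative parser can be sketched with the standard library's xml.etree.ElementTree, whose iterparse has an interface similar to lxml's. Here each segment is returned as a plain list of (lon, lat, ele) tuples:

```python
import io
import xml.etree.ElementTree as ET

def parse_gpx_segments(f):
    """Iteratively parse a GPX file-like object, returning one list of
    (lon, lat, ele) tuples per <trkseg>. In the real utils.py these
    tuples would become GEOS Points/LineStrings, and the segments of
    a <trk> would be combined into a single MultiLineString."""
    segments = []
    current = None
    for event, elem in ET.iterparse(f, events=('start', 'end')):
        tag = elem.tag.rsplit('}', 1)[-1]  # strip a namespace, if any
        if event == 'start' and tag == 'trkseg':
            current = []
        elif event == 'end' and tag == 'trkpt' and current is not None:
            ele = elem.findtext('ele')  # works for un-namespaced files
            current.append((float(elem.get('lon')),
                            float(elem.get('lat')),
                            float(ele) if ele else 0.0))
            elem.clear()  # free memory; the point of iterative parsing
        elif event == 'end' and tag == 'trkseg':
            segments.append(current)
            current = None
    return segments
```

Clearing each trkpt element after reading it is what keeps memory usage flat on large files.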
In the view, we will just check whether the method of the request is POST or GET. In the first case, it means
that the user has submitted the form, so we will check whether it is valid and then parse and store it. In
both cases we will retrieve the list of routes and embed it in the HTML file, so the views.py file should
look more or less like the following example.
Listing 3. views.py

# The import lines were lost in this copy; they presumably looked
# something like this (exact module paths depend on your project layout):
from django.core.context_processors import csrf
from django.shortcuts import render_to_response
from models import Route
from forms import GPXUploadForm
from utils import parse_gpx

def route(request):
    if request.method == 'POST':
        form = GPXUploadForm(request.POST, request.FILES)
        if form.is_valid():
            data = form.cleaned_data
            f, name = data['file'], data['name']
            route = parse_gpx(f=f, name=name)
    else:
        form = GPXUploadForm()
    routes = Route.objects.all()
    dict = {'form': form, 'routes': routes}
    dict.update(csrf(request))
    return render_to_response('routepage.html', dict)
And with this last step we have a very simple application working. Of course, we have to configure the
settings file to point to the right templates and static files directories, but I will leave that out of the article.
A guide can be found at https://docs.djangoproject.com/en/dev/ref/settings/.
At this point, however, we have not used all the power of the GeoDjango package, and we haven't developed
any kind of map to show the routes to the users. In the next section we will see some functions of the GEOS
API, and we will get into the development of the frontend later.
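Listing 8 further down calls a route.length() method whose definition does not appear in this copy; in GeoDjango it would presumably just delegate to the GEOS geometry's length property. As a rough stand-alone illustration (hypothetical code, assuming planar coordinates) of what that quantity is:

```python
import math

def linestring_length(points):
    """Planar length of a line string: the sum of straight-line distances
    between consecutive points. The GEOS `length` property of a geometry
    computes this same quantity, in the geometry's coordinate units."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def multilinestring_length(segments):
    # A MultiLineString's length is the sum of its LineStrings' lengths
    return sum(linestring_length(seg) for seg in segments)
```

In the actual model the method would likely be a one-liner returning self.representation.length.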
We can, of course, perform more complex operations; for example, we will implement a function that, given
a route, tells us which other route is nearest. For this we will use the distance function, which returns the
distance between the closest points of two geometries. We will define the method nearest in the Route model.
Listing 5. nearest method
def nearest(self):
    minDist = sys.maxint  # requires `import sys` at the top of models.py
    rt = self
    for route in Route.objects.exclude(pk=self.pk):
        dist = self.representation.distance(route.representation)
        if dist < minDist:
            minDist = dist
            rt = route
    return rt
Finally, we will define another method to get the GeoJSON representation of a route. GeoJSON is a format
defined to encode simple geographic features in JavaScript Object Notation, which is supported by the JS
mapping framework we are using.
Listing 6. geoJSON
def geoJSON(self):
    return self.representation.json
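For reference, the value this method produces is a GeoJSON MultiLineString. A small self-contained example (the coordinates are made up) of what such an object looks like, with each position carrying three values to match the dim=3 field declared in the model:

```python
import json

# A minimal GeoJSON MultiLineString with invented coordinates; each
# position is (lon, lat, ele), matching the three-dimensional field.
geojson = {
    "type": "MultiLineString",
    "coordinates": [
        [[-2.0, 43.3, 100.0], [-2.1, 43.4, 120.0]],  # first LineString
        [[-2.1, 43.4, 120.0], [-2.2, 43.5, 90.0]],   # second LineString
    ],
}
text = json.dumps(geojson)
```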
With this we have seen some of the simplest applications of the GEOS API. However, we have only
scratched the tip of the iceberg; there is much more it can offer us, so I encourage anyone to explore this library
and discover the powerful applications that can easily be developed with it. A complete guide to GeoDjango
and all of its features can be found on the Django documentation pages:
https://docs.djangoproject.com/en/dev/ref/contrib/gis/.
Listing 7. Sample HTML code
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Simple GIS App</title>
  <link rel="shortcut icon" href="/favicon.ico" />
  <link rel="stylesheet" href="/static/css/style.css" />
  <script src="/static/js/jquery.min.js"></script>
  <!-- Leaflet CSS and JS files -->
  <link rel="stylesheet" href="/static/leaflet/leaflet.css" />
  <link rel="stylesheet" href="/static/leaflet/leaflet.ie.css" />
  <script src="/static/leaflet/leaflet.js"></script>
</head>
<body>
  <div id="text">
    <form id="form" method="post" enctype="multipart/form-data">{% csrf_token %}
      <legend><h2>Upload GPX file</h2></legend>
      {{ form.as_p }}
      <input type="submit" value="Submit" />
    </form>
    <div id="list">
      <h2>Routes</h2>
      <ul>
      {% for route in routes %}
        <li id="{{ route.pk }}" class="route-link">{{ route.name }}</li>
      {% endfor %}
      </ul>
    </div>
    <div id="data"></div>
  </div>
  <div id="map"></div>
  <script src="/static/js/map.js"></script>
</body>
</html>
The body of the HTML file can be divided into four pieces. The first is the form which will allow the users
to upload the files. The second is a container for the list of routes in the database. The third is an empty
container, which will be filled via AJAX with some data about the route the user is visualizing. The fourth
container is initially empty, but will contain the map once the page is loaded.
In order to use Leaflet.js, we have to download some JavaScript and CSS files which have to be
included in the document. These files can be downloaded from the Leaflet homepage: http://leafletjs.com/
download.html. Once they are downloaded, we only have to include them in the static files directory and
load them as regular JavaScript and CSS files. However, we have to be careful with two details. First,
our map script uses jQuery (Leaflet itself does not depend on it), so we have to download it
(from http://jquery.com/download/) and include it in the document before our scripts. Second, we will
create a script to initialize the map, which has to be executed strictly after the container for the map is
loaded; for this we can simply include the script in the body of the document, below the map container.
As mentioned, we will load the details of each route via AJAX, so we will need to create another view
which will return a JSON object containing the details of the route. We could also return an XML document;
however, since we have to embed a GeoJSON object in it and we will parse it in JavaScript, it seems more
adequate to use JSON.
Listing 8. Our new view
def routeJSON(request, pk):
    # get() raises Route.DoesNotExist rather than returning None,
    # so we catch the exception instead of testing for None
    try:
        route = Route.objects.get(pk=pk)
    except Route.DoesNotExist:
        return HttpResponse('', content_type='application/json')
    rt = {'name': route.name, 'dist': route.length(),
          'nearest': route.nearest().name}
    rt['geojson'] = json.loads(route.geoJSON())
    return HttpResponse(json.dumps(rt),
                        content_type='application/json')
Note that we load the GeoJSON string into a Python object before dumping it again. This seems redundant,
but it is necessary: if we dumped the already-serialized JSON string directly, its quotes would be re-escaped
and the client would receive one big quoted string instead of a nested object.
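A tiny self-contained illustration of the problem (the variable names are made up):

```python
import json

geojson = '{"type": "LineString", "coordinates": [[0, 0], [1, 1]]}'

# Dumping the already-serialized string re-escapes it: the client would
# receive one big quoted string instead of a nested object...
wrong = json.dumps({'geojson': geojson})

# ...while loading it first embeds it as a proper nested object.
right = json.dumps({'geojson': json.loads(geojson)})
```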
Once all this is ready, we can move on to creating our map. We will create a file called map.js in the static files
directory, which will contain the script initializing the map and the functions that allow the asynchronous
loading of the routes. First we will take care of creating the map; the code needed is the following.
Listing 9. The sample code
var route;
var map = L.map('map');
var osmLayer = L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png');
map.addLayer(osmLayer);
map.fitWorld();
First, we declare a variable called route, which will later contain the route the user is currently viewing.
Next, we call the map() function from the Leaflet library, which receives an identifier and creates a map in
the container with that id; we store it in a variable so that we can manipulate it later.
Leaflet works mainly with layers; markers, lines, tiles, etc. are all layers, which can be added to and removed
from the map. In order to actually see something, we have to include at least one tile layer, which is in
charge of rendering the map. There are several free tile providers, but for this example we will be using the
one provided by OpenStreetMap, though we could add several tile layers at the same time and allow the user
to switch among them at will.
Note
You can find a script which creates shortcuts for several popular tile providers at the following URL:
https://gist.github.com/mourner/1804938.
After we have created all the tile layers we wish to use, we just have to add them to the map, with the
addLayer() function of the map or the addTo() function of the layers. Finally, it is recommended to set the
viewport of the map to something, since it will show nothing if it has no viewport. An easy way to do this
while developing is the fitWorld() function of the map.
Finally, we have to add an event to each element in the list so that when the user clicks on it, the route is
loaded and its details are displayed.
Listing 10. A route
$('.route-link').click(function(){
    var id = $(this).attr('id');
    $.getJSON('/ajax/' + id + '/route', function(data) {
        // Remove the previous route and add the new one
        if (route != null) {
            map.removeLayer(route);
        }
        route = L.geoJson(data.geojson);
        route.addTo(map);
        map.fitBounds(route.getBounds());
    });
});
When the user clicks on one of the links, an AJAX call is made to a URL which returns the details of the
route. The first thing to do is to remove the route which is currently being displayed; otherwise, we could end
up with a mess of lines on the map. Then, we just create a layer from the GeoJSON object, add it to the map,
and set the viewport of the map to the bounds of the route.
Here, we have transparently created a geometric object and added it to the map; however, Leaflet also
provides some classes to represent polygons, line strings and other geometric objects in a way similar to
GeoDjango (but much more primitive). Though I won't explain all the functions in the library, I encourage
anyone interested to explore the Leaflet API (at http://leafletjs.com/reference.html), which gives a
comprehensive guide to using and extending Leaflet.
Finally, we just have to add the rest of the data downloaded to the document and it is finished.
The results
After all these steps we should have obtained something like this (the CSS file is provided in the repository):
On the left we can see all the information panels we created, and on the right we can visualize the
map, with a blue line representing the route we are viewing. We have only used a tiny part of the two
frameworks involved, which have many more functions than the ones explored here.
GeoDjango allows us to manipulate and store spatial data in a very transparent way, but the real power of this
library comes from the operations it enables. The capacity to make spatial queries and to manipulate
spatial data allows us to create very rich GIS applications without having to worry about complex algorithms
and different spatial reference systems.
Leaflet, on the other hand, is a lightweight library, but at the same time a very complete one. It comes with a
set of built-in GUI elements, for example, a panel which allows switching among different tile layers with a
click. It also has some spatial utility functions and classes similar to the ones in GeoDjango, though it's more
focused on visualization.
Also, in a similar fashion to Django, Leaflet has a very active community, which develops different plug-ins
for the framework. This, combined with its comprehensive API and the framework's ease of use, makes
the task of plug-in development really easy; it's quite common to see developers extend this framework to
serve their purposes in a single project.
The application we have developed can be considered a GIS, though a really simple one; as anyone can see,
there are also really huge GIS projects like Google Maps. If you are interested in the subject, I would
recommend checking out some of the following projects:
GeoSPARQL, an extension of the SPARQL protocol which adds support for spatial queries; it currently
has very few implementations. More information can be found at the following URL:
http://www.opengeospatial.org/standards/geosparql,
The GEOS library, which is the core of GeoDjango. It is completely open source and is hosted at
http://trac.osgeo.org/geos/,
Osmdroid, a library which allows the use of OpenStreetMap in native Android applications. It is a good
way to work around the restrictions of the Google Maps API. The project is hosted at
https://code.google.com/p/osmdroid/,
Wikipedia has a nice list of GIS data sources on the following page:
http://en.wikipedia.org/wiki/List_of_GIS_data_sources.
There are many more projects and papers, of course, but as you can see there is a big proliferation in the
world of GIS. Every day new projects appear, be they libraries, mobile apps, web apps, data sources or
anything else.
Aimar Rodriguez is a Computer Science student in the last year of his bachelor's degree. He is currently
working in the MORElab research group, in areas related to the Semantic Web and the Internet of Things,
working with technologies like GeoSPARQL and GeoDjango.
Solving Metrics Within Distributed Processes
by Dotan Nahum
If you're building a Web backend or a processing backend nowadays, chances are you're
building a distributed system. In addition, you're probably deploying to the cloud, and
using paradigms such as SOA and REST. Today, these methods are very common, and I have
watched them evolve from a best-kept secret 10 years ago, into best practices, and then into
a common, trivial practice today. This article will show you how to tackle the problem of
handling metrics around complex architectures.
You'll learn how to use Ruby to build a performance-first processing server, using technologies such as Redis,
Statsd, Graphite and ZeroMQ. More importantly, you'll learn about the whys of each of those components in
the context of this problem. Lastly, I hope you'll be inspired enough to either use the solutions suggested in the
text, or build your own tailor-made solution using the building blocks that are outlined.
You should have a basic to intermediate understanding of Ruby, service architectures such as SOA, and
concepts within the HTTP protocol such as REST.
Evolved Complexity
Something that evolved along with building distributed systems is complexity; breaking up a system into many
components will almost always introduce additional overhead, and there's one thing that, in my opinion, isn't
keeping up with how common distributed systems have become amongst developers: monitoring such
complexity. Or specifically, monitoring distributed processes.
Ruby makes building distributed systems dead easy. With Sinatra, for example, due to its simplistic
programming model and ease of use, you can build a RESTful architecture spanning across servers and
processes very easily, without focusing on much of the typical cruft and overhead that usually appear
when building and deploying new services. By lowering the price you pay for deploying new services and
maintaining them, Ruby makes building distributed systems fun.
Distributed Processes
You're building a product which has many services that span different machines at the backend. These
services coordinate to implement business processes.
How could you track them?
In general, how can you provide visibility for:
A series of processing stages that are arranged in succession,
Performing a specific business function over a data stream (i.e. a transaction),
Spanning across several machines?
Note: I use the terms transaction, workflow and pipeline interchangeably to mean the same thing: a
series of actions bound together logically, leading to a final result under the same business process.
Process Tracking
A business process might span several machines and services. As in the physical world, stages such as
planning, provisioning, packing and shipping apply in many other domains as well.
Tracking In Practice
So how can you track these at the infrastructure level?
How would you get better visibility into an entire multi-stage process which may start at machine A and
service X, and end a few machines and services later at machine B and service Z? How would you also
measure and be able to reason about the overall performance of such a process across all of the players in
your architecture and at each step of the way? You need to be able to correlate.
Internal Tracking
You may have bumped into this before. Take a look at manufacturing in real life: an item gets a ticket
slapped onto it when it is first pronounced an actual entity in the factory. This ticket is then used to record
each person who handled the item, and the time and station it was handled at.
Looking back at a distributed system implementing such a pipeline: if the data handed from process to
process is such that you can tack on additional properties easily, that is, it will be persisted after each
step and persisting it doesn't cost much, then you may be in luck. In such a scenario it is common
to include tracking metadata within the object, and just stamp it with relevant trace information (such as
time, location, handler) per process checkpoint or stage, within the lifetime of that object and the length of
the processing pipeline.
If you dig deeper into this sort of solution though, you'll find a couple of pitfalls, once you realize
that this is a proper distributed system performing the single goal of tracking. First, since you're
tracking time, time must be synchronized on all machines; this may seem easy at first glance, but becomes
harder when measuring with sub-second accuracy. Second, failure points: the additional moving parts in the
process make the probability of failure grow higher.
External Tracking
You may also have been aware of workflows in factories, or even physical shops, where operators enter an
item ID, their signature, and a time stamp onto a thin terminal in order to indicate they have processed the
item at their station. The system will then log all those details into an external tracking data store.
Roundtrip
The problem I was facing is that I didn't want to introduce infrastructural changes in order to use a system like
Zipkin, but I still wanted the ability to take any process spanning any number of services and point it
to some kind of tracking and tracing endpoint to report progress to. This way, I get the benefits of tracking
my business process with as little overhead as possible.
This service needed to have good performance, so that it wouldn't hinder the progress of the workflow. It
needed to be exact, so that no tracking data was lost (i.e. no UDP). It needed to be maintainable and fun to
work with. Since I wanted to achieve all those goals, and yet didn't want to prematurely optimize, I used
Ruby with an HTTP endpoint for ease of use, and offered an additional ZeroMQ endpoint for the more
performance-heavy scenarios.
I called this service Roundtrip and open sourced it; it requires only Ruby and Redis installed on your machine.
Next up, we'll investigate how its API behaves, what makes it performant, and how you can get inspired by it
to build a similar custom solution.
Using Roundtrip
Here's how you use Roundtrip with its default HTTP endpoint:
Listing 1. Roundtrip API Usage
# create a new business process trip
# a trip is a synonym for a workflow, or transaction.
$ curl -XPOST -d'route=invoicing' http://rtrip.dev/trips
{"id":"cf1999e8bfbd37963b1f92c527a8748e","route":"invoicing","started_at":"2012-11-30T18:23:23.814014+02:00"}
# now add a checkpoint as many as you like.
# a checkpoint is a step within the transaction.
$ curl -XPATCH -d'checkpoint=generated.pdf' http://rtrip.dev/trips/cf199...a8748
{"ok":true}
# now end the process.
$ curl -XDELETE http://rtrip.dev/trips/cf1999...a8748e
{"id":"cf1999e8bfbd37963b1f92c527a8748e","route":"invoicing","started_at":"2012-11-30T18:54:20.098477+02:00","checkpoints":[["generated.pdf","2012-11-30T19:08:26.138140+02:00"],["emailed.customer","2012-11-30T19:12:41.332270+02:00"]]}
A given distributed system may generate a ton of business workflows and transactions over many or few
machines, and the point is that a transaction or a workflow starts at a certain machine, passes through one or
more, and then ends up at some other (or the same) machine.
We need a way to keep track of when a transaction starts and when it ends. A bonus would be the ability to
track stages in the transaction that happen before it ends; let's call those checkpoints.
That is, basically, what Roundtrip is. Roundtrip will store the tracking data about your currently running
transactions: start, end, and any number of checkpoints, and will provide metrics as a bonus.
When a transaction ends, it is removed from Roundtrip; this allows Roundtrip to be bounded in storage size
and to keep good performance.
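The life cycle just described can be sketched in a few lines of Python (a simplification for illustration only, not Roundtrip's actual Ruby/Redis implementation; the class and method names are made up):

```python
import time
import uuid

class TripTracker:
    """Minimal in-memory sketch of the trip life cycle: start a trip,
    stamp checkpoints, and drop all state when the trip ends, so that
    storage stays bounded by the number of *running* trips."""
    def __init__(self):
        self.trips = {}

    def start(self, route):
        trip_id = uuid.uuid4().hex
        self.trips[trip_id] = {'route': route,
                               'started_at': time.time(),
                               'checkpoints': []}
        return trip_id

    def checkpoint(self, trip_id, name):
        self.trips[trip_id]['checkpoints'].append((name, time.time()))

    def end(self, trip_id):
        # return the final record while removing it, keeping memory bounded
        return self.trips.pop(trip_id)
```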
Listing 2. Adding a trip
@conn.set(trip_key(trip.id), Marshal.dump(trip))
@conn.zadd(route_key(trip.route), trip.started_at.to_i, trip.id)
Adding a trip is just setting the trip's data in a key/value pair and, more importantly, adding the ID of the
trip to a Redis ZSET. A ZSET is a sorted set in Redis, and in our case we'll have time as the sorting
component. This will allow us to trim out data as a torrent of processes hits the server constantly, and to
keep the data size bounded at all times.
Listing 2. Adding a checkpoint to a trip
time = Time.now
# Redis: ZADD key score member
@conn.zadd(checkpoints_key(trip.id), time.to_f, at)
Again we're using the awesome ZSET. Essentially, each trip holds a set, in our case a sorted set, of the
checkpoints within it. A checkpoint is just a stage within any business process.
Listing 3. Removing a trip
@conn.del(trip_key(trip.id))
@conn.del(checkpoints_key(trip.id))
@conn.zrem(route_key(trip.route), trip.id)
Clearing off data from Redis is important. Although Redis has a built-in EXPIRE command which allows
you to expire data automatically, it's often not enough, because an entity will often be composed of several
disconnected Redis data structures, like in our case, and there's currently no way to declare a dependency
between keys.
That is basically the meat of Roundtrip. It's a bunch of Ruby code glued on top of Redis, and that's why it's
so fast (although the store component is pluggable: you can replace Redis with anything conforming to the
store protocol within Roundtrip).
Next up we'll see how Roundtrip integrates internal monitoring into itself using StatsD, and how it climbs
even further up the performance tree using ZeroMQ.
Listing 4. Statsd integration into Roundtrip
require 'statsd'

class Roundtrip::Metrics::Statsd
  def initialize(opts = { :statsd => { :host => 'localhost', :port => 8125 } })
    @statsd = opts[:statsd][:connection] ||
              ::Statsd.new(opts[:statsd][:host], opts[:statsd][:port])
  end

  def time(route, event, time)
    @statsd.timing("#{route}.#{event}", time)
  end
end
I often recommend wrapping infrastructural concerns such as metrics, logging, and configuration in something
that is easy to swap. Here, we've wrapped Statsd in an abstract Metrics module, so that in the future I could
use TSDB, Cassandra, Redis, ZooKeeper, or anything else that provides good, scalable, and atomic counters,
should I become unsatisfied with how Statsd/Graphite works out for me. I also chose the standard
'statsd' Ruby gem, as I've verified it is thread-safe and I use it widely in my open-source and day-job work.
Throughout the Roundtrip code, with the help of this module, 'time' calls are scattered. These are
responsible for timing various operations within the internals of Roundtrip, so that I can later monitor and
review its operation in production.
Listing 5. Usage of the Metrics module
@metrics.time(trip.route, at, msec(res[1] - trip.started_at))
This is simple enough to develop, and lightweight enough to include in your code, that it is worth radiating
out more metrics rather than none at all; if in doubt, just add metrics as you see fit. Later you can
always either remove them if they appear to be useless from a business-value point of view, or sample
them (i.e. make only one of every 100 calls generate a metrics call, or any other ratio that makes sense). In
either case, the traffic generated by these calls in the case of Statsd is UDP: it's asynchronous, low-overhead,
and nothing critically bad happens if the receiving server is down.
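The sampling idea mentioned above takes only a few lines of Ruby. A sketch, not part of Roundtrip; the class name is hypothetical, and a deterministic counter is used so the behaviour is easy to verify (production code might use rand instead):

```ruby
# Forward roughly 1-in-N metric calls to a backend; the rest are dropped cheaply.
# SampledMetrics is a hypothetical wrapper, not part of Roundtrip.
class SampledMetrics
  def initialize(backend, ratio)
    @backend = backend
    @ratio = ratio    # e.g. 100 => emit one call out of every 100
    @count = 0
  end

  def time(route, event, msec)
    @count += 1
    return unless (@count % @ratio).zero?
    @backend.time(route, event, msec)
  end
end
```

Because it exposes the same `time` signature as the Metrics module, it can wrap the real backend transparently, e.g. `@metrics = SampledMetrics.new(statsd_metrics, 100)`.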
Listing 6. A ZeroMQ server
#
# quick protocol desc:
#
# S metric.name.foo i-have-an-id-optional
#
# U metric.name.foo checkpoint.name
#
# E metric.name.foo
#
# All replies are serialized JSON
#
ACTIONS = { 'S' => :start, 'U' => :checkpoint, 'E' => :end }

def listen!(port)
  context = ZMQ::Context.new(1)
  puts "Roundtrip listening with a zeromq socket on port #{port}..."
  socket = context.socket(ZMQ::REP)
  socket.bind("tcp://*:#{port}")
  while true do
    select(socket)
  end
end

def select(socket)
  request = ''
  rc = socket.recv_string(request)
  unless request && request.length > 2
    socket.send_string({ :error => "bad protocol: [#{request}]" }.to_json)
    return
  end
  action, params = ACTIONS[request[0]], request[1..-1].strip.split(/\s+/)
  begin
    resp = @core.send(action, *params)
    socket.send_string(resp.to_json)
  rescue
    puts "error: #{$!}" if @debug
    socket.send_string({ :error => $! }.to_json)
  end
end
Within the comments is the description of the protocol. This is a very simple line protocol where every
line represents a transactional unit of data. This makes parsing very simple and the data very compact, and
the server can leverage the fact that it is relatively dumb: do less, and get better performance. Responses
are transmitted out in JSON form, because most clients are smart and want a more meaningful format and
description coming out of the server.
All in all, using the ZeroMQ endpoint yielded a major jump in throughput.
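The parsing step inside `select` can be exercised on its own, without any sockets. A self-contained sketch of the same logic; `parse_request` is a hypothetical helper name:

```ruby
require 'json'

ACTIONS = { 'S' => :start, 'U' => :checkpoint, 'E' => :end }

# Parse one line of the S/U/E protocol into [action, params],
# mirroring what the ZeroMQ handler does before dispatching to @core.
def parse_request(request)
  return [:error, "bad protocol: [#{request}]"] unless request && request.length > 2
  action = ACTIONS[request[0]]
  return [:error, "bad protocol: [#{request}]"] unless action
  [action, request[1..-1].strip.split(/\s+/)]
end
```

For example, `parse_request('U invoicing generated.pdf')` yields `[:checkpoint, ['invoicing', 'generated.pdf']]`; the single leading letter is the action, and everything after it is whitespace-split into parameters.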
Closing Up
We've seen a problem that's currently in its infancy: monitoring distributed systems and seemingly
disconnected cross-server, cross-farm processing workflows. We've laid out a couple of solutions, and seen
how it's solved in the real world. I've also walked with you through the path of how I came to solve this kind
of problem, and the context behind every step of the way while building the solution in Ruby, using relatively
cutting-edge backing technologies such as Redis, Statsd, Graphite, and ZeroMQ.
Hopefully, you can not only solve this problem for your own infrastructure, but also take the tips and contexts
I've laid out and use them in other scenarios. Of course, you're also welcome to clone Roundtrip itself, use it
within your products, and hopefully contribute anything you see fit, as it's open-sourced on GitHub:
http://github.com/jondot/roundtrip.
Dotan Nahum is a Senior Architect at Conduit. Aside from building polyglot, mission-critical, large-scale
solutions (millions of users, thousands of requests/sec) as part of his day job, he is also an avid open-source
contributor, technical writer, and an aspiring entrepreneur. You'll find his blog at http://blog.paracode.com,
his Twitter at http://twitter.com/jondot, and his contributions on GitHub at http://github.com/jondot.
CreateJS In Brief
by David Roberts
Over the past several months, I've been making games and animations with a Javascript
library called CreateJS. The library contains a series of four components to assist with
developing for HTML5: one for graphics (via the <canvas> element, https://developer.mozilla.org/
en-US/docs/HTML/Canvas), one for tweening values, one for sound (using <audio>,
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/audio, WebAudio, https://dvcs.
w3.org/hg/audio/raw-file/tip/webaudio/specification.html, or Flash), and one for preloading.
This article introduces the graphics component, EaselJS, as it is the most interesting and the
easiest to misuse. A basic working knowledge of HTML5 is required for this article.
Layers
When we start a project, it is natural to build the scene by adding different objects to the stage, in order
from back to front. This holds up fairly well, provided we only ever want to add new objects in front of
everything. In the following example, a cloud scuds past our actor.
Listing 1. A single cloud
<HTML>
<head>
<title>Example 1-0: Clouds</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="300" height="150"></canvas>
<script>
"use strict"
//Create a new stage, from the createjs library, to put our images in.
var stage = new createjs.Stage("output");
//We'll add some scenery to the stage, then add an actor.
addLand(stage);
addCloud(stage, 0, 40);
addActor(stage, 99, 38);
//CreateJS comes with a Ticker object which fires each frame. We'll make it so that we repaint the stage each time it's called, via our stage's update function.
createjs.Ticker.addEventListener("tick", function paintStage() {
  stage.update(); });
function addLand(stage) {
  var land = new createjs.Bitmap("images/land.png");
  stage.addChild(land); //The background image includes the blue sky.
}
function addCloud(stage, x, y) {
  var cloud = new createjs.Bitmap("images/cloud.png");
  cloud.x = x, cloud.y = y;
  stage.addChild(cloud);
  //We'll move the cloud behind the player because it looks good.
  createjs.Ticker.addEventListener("tick", function moveCloud() {
    cloud.x += 2;
  });
}
function addActor(stage, x, y) {
  //All the images in this scene have been drawn from the Open Pixel Platformer. See http://www.pixeljoint.com/forum/forum_topics.asp?FID=23 for more details.
  var actor = new createjs.Bitmap("images/male native.png");
  actor.x = x, actor.y = y;
  stage.addChild(actor);
}
</script>
</body>
</HTML>
Here, we've created a stage, added some objects to it, and set it to continually redraw itself to reflect the
changing position of the cloud. Looking at the output, however, that one cloud seems awful lonely. We'll add
in a little timer to give him some friends. Add the following code at around line 22 of the script.
Now we have more clouds, but they're going in front of our character. To fix this, we'll create several
containers. A container holds other objects, like a stage does. If we add our clouds to a container
behind our actor, the container will keep them behind our actor where they belong. Replace the calls to
addLand, addCloud, and addActor, starting on line 14, with Listing 3:
This is how you implement z-layers in EaselJS, although it doesn't seem to be explicitly stated in the
documentation.
This is quite a nice approach to z-layers. First, it is quite scalable. Because we have named layers, we can
easily add more layers between them and move existing layers around. Second, our layer orders are now
defined in one place. This means that when we need to rearrange them (and we will, if we're working
on a large project) we won't have to go hunting for a hundred different constants in our files. Lastly, we
can apply almost any effect to a container that we can also apply to an object. For example, if we had our
background in a separate container, we could easily add parallax scrolling just by moving the container.
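The ordering guarantee the containers give us can be seen without a canvas at all. A minimal sketch that models a display list as a tree of plain objects (Node-runnable JavaScript, not the CreateJS API; all names here are illustrative):

```javascript
// Model a stage as a tree of containers; children are traversed in order,
// so anything in an earlier container always renders behind later ones.
function makeContainer(name) {
  return { name: name, children: [] };
}

function addChild(parent, child) {
  parent.children.push(child);
  return child;
}

// Flatten the tree depth-first: the resulting order of leaves is the draw order.
function drawOrder(node, out) {
  out = out || [];
  if (node.children.length === 0) {
    out.push(node.name); // leaves are the drawable objects
  }
  node.children.forEach(function (child) { drawOrder(child, out); });
  return out;
}

var stage = makeContainer("stage");
var scenery = addChild(stage, makeContainer("scenery"));
var actors = addChild(stage, makeContainer("actors"));
addChild(actors, makeContainer("player"));
// A cloud added later still draws behind the player,
// because its container comes earlier in the stage's child list.
addChild(scenery, makeContainer("cloud"));
```

This is the whole trick: insertion time no longer matters, only which container an object lives in.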
Supplementary Listing 4. The final product
<HTML>
<head>
<title>Example 1-2: Clouds</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="300" height="150"></canvas>
<script>
"use strict"
//Create a new stage, from the createjs library, to put our images in.
var stage = new createjs.Stage("output");
//First, we'll add some containers to keep our scenery organized.
var backgroundContainer, sceneryContainer, actorContainer;
stage.addChild(backgroundContainer = new createjs.Container());
stage.addChild(sceneryContainer = new createjs.Container());
stage.addChild(actorContainer = new createjs.Container());
//We'll add some scenery to the stage, then add an actor.
addLand(backgroundContainer);
addCloud(sceneryContainer, 0, 40);
addActor(actorContainer, 99, 38);
//We'll add in another cloud, at a random height, every 2 seconds.
window.setInterval(function addClouds() {
  addCloud(sceneryContainer, 0, Math.random()*60);
}, 2000);
//CreateJS comes with a Ticker object which fires each frame. We'll make it so that we repaint the stage each time it's called, via our stage's update function.
createjs.Ticker.addEventListener("tick", function paintStage() {
  stage.update(); });
function addLand(stage) {
  var land = new createjs.Bitmap("images/land.png");
  stage.addChild(land); //The background image includes the blue sky.
}
function addCloud(stage, x, y) {
  var cloud = new createjs.Bitmap("images/cloud.png");
  cloud.x = x, cloud.y = y;
  stage.addChild(cloud);
  //We'll move the cloud behind the player because it looks good.
  createjs.Ticker.addEventListener("tick", function moveCloud() {
    cloud.x += 2;
  });
}
function addActor(stage, x, y) {
  //All the images in this scene have been drawn from the Open Pixel Platformer. See http://www.pixeljoint.com/forum/forum_topics.asp?FID=23 for more details.
  var actor = new createjs.Bitmap("images/male native.png");
  actor.x = x, actor.y = y;
  stage.addChild(actor);
}
</script>
</body>
</HTML>
Performance
In a large game of minesweeper (such as http://mienfield.com), we can have a few thousand tiles on the
screen at once. A simple, direct implementation will happily use up all our available processing power in
CreateJS, though.
Listing 5. A hard-to-compute version of a Minesweeper field
<HTML>
<head>
<title>Example 2-0: Minesweeper</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="320" height="320"></canvas>
<script>
"use strict"
var stage = new createjs.Stage("output");
//Set up some z-layers, as in example 1.
var tileContainer, uiContainer;
stage.addChild(tileContainer = new createjs.Container());
stage.addChild(uiContainer = new createjs.Container());
//Add 1600 tiles in a square. This should load one of our processors a little, and we can observe it with our task manager. You can open one up in Chrome by pressing shift-esc.
var tiles = [];
for (var x = 0; x < 40; x++) {
  tiles.push([]);
  for (var y = 0; y < 40; y++) {
    tiles[x][y] = addTile(tileContainer, x, y);
  }
}
//When we click on a tile, we should make it respond. We'll use the question mark in place of an actual game of minesweeper.
stage.addEventListener("mousedown", function revealTile(event) {
  var x = Math.floor(event.stageX/8); //StageX is the pixel of the stage we clicked on.
  var y = Math.floor(event.stageY/8); //8 is how wide our tiles are.
  tiles[x][y].image.src = "images/question mark tile.png";
});
//Add two blue bars to the stage to track the mouse.
var horizontalBlueBar = addGridTool(uiContainer, -90);
var verticalBlueBar = addGridTool(uiContainer, 0);
//We'll make them track our mouse cursor. How quickly they do so will also give us a good feel for our framerate.
stage.addEventListener("stagemousemove", function updateGridTool(event) {
  horizontalBlueBar.y = event.stageY;
  verticalBlueBar.x = event.stageX;
});
//When we redraw the stage, we should make the blue bars flicker a bit for effect.
createjs.Ticker.addEventListener("tick", function paintStage() {
  horizontalBlueBar.alpha = 0.3 + Math.random()/3;
  verticalBlueBar.alpha = 0.3 + Math.random()/3;
  stage.update();
});
function addTile(stage, x, y) {
  var tile = new createjs.Bitmap("images/blank tile.png");
  tile.x = x*8,
  tile.y = y*8;
  //Our tile is 16 pixels wide, but we'll scale them down for this example.
  tile.scaleX = 0.5, tile.scaleY = 0.5; //We need to draw lots of objects to produce a measurable stress on a modern computer.
  stage.addChild(tile);
  return tile;
}
function addGridTool(stage, rotation) {
  var gridTool = new createjs.Bitmap("images/bar gradient.png");
  gridTool.regX = 4; //Offset the bar a bit in the narrow dimension, so our mouse will be over the middle of it.
  gridTool.scaleY = 320; //Make the bar as long as the gamefield.
  gridTool.rotation = rotation;
  stage.addChild(gridTool);
  return gridTool;
}
</script>
</body>
</HTML>
On my computer, this version takes over half of the processing power of the page to run. (To open the task
list, you can press shift-esc in Chrome, https://www.google.com/intl/en/chrome/browser/, or Chromium,
http://www.chromium.org/.)
Why does this version use so much processing power? It turns out that CreateJS does not implement the
dirty-rectangle optimization (http://c2.com/cgi/wiki?DirtyRectangles) when it redraws the scene. This is
because it is prohibitively expensive to calculate the bounding box for some of the elements the library can
draw, such as vector graphics and text. http://blog.createjs.com/width-height-in-easeljs/ explains the trouble
in more detail; it's quite an interesting problem. For our purposes, this means that each time we call
stage.update() the backing canvas is cleared and every single object on the stage has to be drawn again. All
1600 of them. To fix this, we'll cache() our background to a new canvas and call updateCache() when we
need to refresh the tiles.
Listing 6. Optimized tile drawing
//Set up some z-layers, as in example 1.
var tileContainer, uiContainer;
stage.addChild(tileContainer = new createjs.Container());
stage.addChild(uiContainer = new createjs.Container());
tileContainer.cache(0,0,320,320);
//Add 1600 tiles in a square. This should load one of our processors a little, and we can observe it with our task manager. You can open one up in Chrome by pressing shift-esc.
var tiles = [];
for (var x = 0; x < 40; x++) {
  tiles.push([]);
  for (var y = 0; y < 40; y++) {
    tiles[x][y] = addTile(tileContainer, x, y);
  }
}
tiles[39][39].image.onload = function() {tileContainer.updateCache()}; //When the last tile's image has loaded, we need to refresh the cache. Otherwise, we'll just draw a blank canvas.
//When we click on a tile, we should make it respond. We'll use the question mark in place of an actual game of minesweeper.
stage.addEventListener("mousedown", function revealTile(event) {
  var x = Math.floor(event.stageX/8); //StageX is the pixel of the stage we clicked on. (The formula gives us the index of our tile.)
  var y = Math.floor(event.stageY/8); //8 is how wide our tiles are.
  tiles[x][y].image.src = "images/question mark tile.png";
  tiles[x][y].image.onload = function() {tileContainer.updateCache()}; //Update the cache when our new image has been drawn.
});
You can paste these new functions in over top of their old versions, or you may refer to Supplementary
Listing 7 for the complete file.
Internally, CreateJS is now drawing everything to another canvas, and then drawing that canvas to our stage
when we call stage.update(). (We can obtain a reference to this internal canvas via tileContainer.cacheCanvas
if we want to.) This cached mode results in a great performance gain, and Chrome now reports only a few
percent of its cycles used on the minesweeper mockup page.
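The size of the win can be modelled with a toy renderer that just counts draw calls per frame (plain JavaScript, not the CreateJS internals; the function name and numbers are purely illustrative):

```javascript
// Compare per-frame draw operations with and without a cached container.
// Without caching, every tile is redrawn every frame; with caching, the
// tiles are drawn once into an offscreen buffer and each frame only draws
// that one buffer (plus a full cache refresh on frames where a tile changed).
function drawsPerFrame(tileCount, cached, tileChangedThisFrame) {
  if (!cached) {
    return tileCount;                                   // clear + redraw everything
  }
  var refresh = tileChangedThisFrame ? tileCount : 0;   // the updateCache() cost
  return 1 + refresh;                                   // blit the cached canvas
}

var uncached = drawsPerFrame(1600, false, false);  // 1600 draws, every single frame
var idle = drawsPerFrame(1600, true, false);       // 1 draw on idle frames
var clickFrame = drawsPerFrame(1600, true, true);  // 1601 only on the frame a tile changes
```

The expensive full redraw still happens, but only on the rare frames where a tile actually changes, which is why the cached version idles at a few percent CPU.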
Supplementary Listing 7. The cached Minesweeper field
<HTML>
<head>
<title>Example 2-1: Minesweeper</title>
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
</head>
<body>
<canvas id="output" width="320" height="320"></canvas>
<script>
"use strict"
var stage = new createjs.Stage("output");
//Set up some z-layers, as in example 1.
var tileContainer, uiContainer;
stage.addChild(tileContainer = new createjs.Container());
stage.addChild(uiContainer = new createjs.Container());
tileContainer.cache(0,0,320,320);
//Add 1600 tiles in a square. This should load one of our processors a little, and we can observe it with our task manager. You can open one up in Chrome by pressing shift-esc.
var tiles = [];
for (var x = 0; x < 40; x++) {
  tiles.push([]);
  for (var y = 0; y < 40; y++) {
    tiles[x][y] = addTile(tileContainer, x, y);
  }
}
tiles[39][39].image.onload = function() {tileContainer.updateCache()}; //When the last tile's image has loaded, we need to refresh the cache. Otherwise, we'll just draw a blank canvas.
//When we click on a tile, we should make it respond. We'll use the question mark in place of an actual game of minesweeper.
stage.addEventListener("mousedown", function revealTile(event) {
  var x = Math.floor(event.stageX/8); //StageX is the pixel of the stage we clicked on. (The formula gives us the index of our tile.)
  var y = Math.floor(event.stageY/8); //8 is how wide our tiles are.
  tiles[x][y].image.src = "images/question mark tile.png";
  tiles[x][y].image.onload = function() {tileContainer.updateCache()}; //Update the cache when our new image has been drawn.
});
//Add two blue bars to the stage to track the mouse.
var horizontalBlueBar = addGridTool(uiContainer, -90);
var verticalBlueBar = addGridTool(uiContainer, 0);
//We'll make them track our mouse cursor. How quickly they do so will also give us a good feel for our framerate.
stage.addEventListener("stagemousemove", function updateGridTool(event) {
  horizontalBlueBar.y = event.stageY;
  verticalBlueBar.x = event.stageX;
});
//When we redraw the stage, we should make the blue bars flicker a bit for effect.
createjs.Ticker.addEventListener("tick", function paintStage() {
  horizontalBlueBar.alpha = 0.3 + Math.random()/3;
  verticalBlueBar.alpha = 0.3 + Math.random()/3;
  stage.update();
});
function addTile(stage, x, y) {
  var tile = new createjs.Bitmap("images/blank tile.png");
  tile.x = x*8,
  tile.y = y*8;
  //Our tile is 16 pixels wide, but we'll scale them down for this example.
  tile.scaleX = 0.5, tile.scaleY = 0.5; //We need to draw lots of objects to produce a measurable stress on a modern computer.
  stage.addChild(tile);
  return tile;
}
function addGridTool(stage, rotation) {
  var gridTool = new createjs.Bitmap("images/bar gradient.png");
  gridTool.regX = 4; //Offset the bar a bit in the narrow dimension, so our mouse will be over the middle of it.
  gridTool.scaleY = 320; //Make the bar as long as the gamefield.
  gridTool.rotation = rotation;
  stage.addChild(gridTool);
  return gridTool;
}
</script>
</body>
</HTML>
Resizing
When a canvas has its width or height properties set, it is also cleared. Without intervention, this will cause
our stage to occasionally render a blank frame to screen: the graphics are drawn by EaselJS; the canvas is
resized and cleared; and then the blank canvas is rendered to screen by the browser. To fix this, we'll just call
stage.update() after the canvas has been resized. Listing 8 has this call commented out on line 60, so you can
see the difference.
Listing 8. A resizable canvas
<HTML>
<head>
<title>Example 3-1: Resizing</title>
<meta charset="utf-8">
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
<style>
div {
  background-color: black;
  overflow: hidden; /*Make the div resizable.*/
  resize: both;
  width: 275px;
  height: 200px;
  position: relative; /*Make #instructions positionable in the corner.*/
}
#output {
  pointer-events: none; /*This would cover up our resizing handle otherwise.*/
  width: 100%;
  height: 100%;
}
#instructions {
  pointer-events: none;
  color: white;
  font-family: Arial;
  font-size: 10px;
  position: absolute; /*Position the instructions in the corner with the drag handle.*/
  margin: 0px;
  bottom: 5px;
  right: 5px;
}
</style>
</head>
<body>
<div id="container">
<p id="instructions">Drag me!
</p>
<canvas id="output" width="275" height="200"></canvas>
</div>
<script>
"use strict";
var stage = new createjs.Stage("output");
var circle = new createjs.Shape();
circle.graphics //Draw a circle with a line through it.
  .beginFill("white")
  .drawCircle(0,0,50)
  .beginStroke("black")
  .moveTo(-50,0)
  .lineTo(+50,0)
  .endStroke();
circle.x = 100;
circle.y = 100;
stage.addChild(circle);
//Watch for our parent container getting resized. (There is no native event for this.)
CreateJS encourages separation of logic and rendering, so we can simply tell it to draw the stage twice a
frame. This is also useful on mobile devices, where the user can rotate the phone. It is not nice to have the
entire screen flicker if you've got a full-screen canvas displayed. The solution really is simple, but it had
eluded me for a long time.
Side note: There is no onresize event for HTML elements, even ones marked as resizable in CSS! In the
solution here, I have sacrificed some speed and correctness for simplicity.
<HTML>
<head>
<title>Example 4-0: DOM Interface</title>
<meta charset="utf-8">
<script src="http://code.createjs.com/createjs-2013.05.14.min.js"></script>
<style>
#speech-bubble { /*A grey speech-bubble.*/
  background-color: lightgrey;
  position: absolute; /*Make bubble repositionable.*/
  display: inline-block;
  border-radius: 7.5px;
  padding: 7.5px;
  margin-left: 0px; /*These will get set from the Javascript.*/
  margin-top: 0px;
  border: 1px solid darkgrey;
}
#speech-bubble:after { /*Give the speech bubble a triangular point.*/
  content: '';
  position: absolute;
  left: 50%; /*Center the triangle.*/
  margin-left: -15px; /*Give the triangle a negative margin of half the triangle width, so the triangle is centered.*/
  bottom: -15px; /*Make a triangle 15px high and offset it downwards by that much.*/
  border-width: 15px 15px 0;
  border-style: solid;
  border-color: lightgrey transparent;
}
canvas {
  outline: 1px solid black;
  cursor: move;
}
</style>
</head>
<body>
<div id="speech-bubble">
What's your name?<br>
<input id="name" onkeypress="if(event.which === 13) getName()"></input> <button onclick="getName()">OK</button>
</div>
<canvas id="output" width="300" height="200"></canvas>
<script>
"use strict";
var stage = new createjs.Stage("output");
createjs.Ticker.addEventListener("tick", function paintStage() {
  stage.update(); });
//Create a new player. He's draggable with the mouse.
var playerSpriteSheet = new createjs.SpriteSheet({
  images: ["images/frogatto.png"], //A simplified edition of Frogatto, from Frogatto & Friends. Used with permission.
  frames: [
    //[x, y, width, height, imageIndex, regX, regY]
    [124,18,32,32,0,16,31], //Idle animation.
    [159,18,32,32,0,16,31],
    [194,18,32,32,0,16,31],
  ],
  animations: { //Refer to http://www.createjs.com/Docs/EaselJS/classes/SpriteSheet.html for documentation.
    idle: { //We will use an idle animation for this example, to give it some life.
      frames:[0,1,2,1],
      frequency: 6,
      next: "idle",
    },
  }
});
var player = new createjs.BitmapAnimation(playerSpriteSheet);
player.gotoAndPlay("idle");
player.x = 150, player.y = 150;
stage.addChild(player);
player.onPress = function(event) {
  var offset = { //Capture the offset of the mouse click relative to the player.
    x: event.target.x - event.stageX,
    y: event.target.y - event.stageY,
  };
  event.onMouseMove = function(event) { //During this click, when we move the mouse, update the player position and the speech bubble position.
    event.target.x = event.stageX + offset.x;
    event.target.y = event.stageY + offset.y;
    repositionSpeechBubble(event.target);
    stage.update(); //Update the stage early to synch with user input better. This does make the player animation play faster, however.
  }
}
//Position the speech bubble HTML element above the player.
var speechBubble = document.getElementById("speech-bubble");
function calculateSpeechBubbleOffset() { //We don't have access (that I know about) from CSS to calculate half our width as a margin value. This is essentially a regX/regY value for the DOM speech bubble, which makes later positioning easier and faster.
  speechBubble.style.marginLeft = -speechBubble.offsetWidth/2+"px";
  speechBubble.style.marginTop = -speechBubble.offsetHeight+"px";
}
calculateSpeechBubbleOffset();
function repositionSpeechBubble(object) {
  object = player.localToGlobal(8,-40); //The offset of the speech bubble point from our regX/regY point.
  speechBubble.style.left = object.x + "px",
  speechBubble.style.top = object.y + "px";
}
repositionSpeechBubble(player);
function getName() {
  var name = document.getElementById("name").value;
  document.getElementById("speech-bubble").innerHTML = "Hello, "+name+".";
  calculateSpeechBubbleOffset();
}
</script>
</body>
</HTML>
We can drag Frogatto around, and the text box moves with him.
Drawing a text input box would be time-consuming in CreateJS, since we'd have to figure out how to
draw a box, text, a cursor, text selection, and an OK button, and then how to position it all. Since we have the
power and depth of HTML available to us, we should use it where we can!
This also helps with separation of duty in code. We can style our text boxes without having to figure out
the program first, and we don't have to parse all the style details of our text boxes when we're figuring out
how the program positions them over Frogatto.
In closing, I would recommend using CreateJS if you want to draw animations on a web page. As with
the majority of Javascript libraries, it's most useful in conjunction with the rest of HTML5. CreateJS is a
powerful abstraction, although it can introduce significant overhead if misused.
Be familiar with Joomla as a CMS; the more thoroughly one knows Joomla, the more one will appreciate
this article. An appreciation of object-oriented architecture, the Model-View-Controller pattern, and CMS
features also helps.
In the 90s, application development centered on the desktop and local networks. As we
have become an Internet-connected society, the expectations for applications are now mostly web-centered.
The challenge is to remain focused on building the functionality that a business needs while integrating
it with Internet technologies (such as AJAX, session cookies, web forms, ecommerce, etc.) as well as
web concerns (such as security and cross-browser consistency). A framework-based CMS, which contains
reusable code for managing web-related issues, provides a smart platform for developing web apps.
Unlike most other open source CMSs, Joomla is architected the way a software engineer expects:
object-oriented, structured around design patterns like Model-View-Controller, with a library-based
framework for reusing important functionality, and a design that expects developers to extend it. As a
software engineer who built desktop and systems applications, I find Joomla meets my technology demands
for building custom web applications. I leverage Joomla for the web details so I can focus on coding the
business part of the application. Arguably, Joomla is more than a CMS for building websites: it serves well
as a framework for building web applications.
- CMS features like the WYSIWYG editor, search, CAPTCHA, SEF URL routing, categories and tagging, administrative backup and security tools, etc.
- A library of reusable classes. Being object-oriented, Joomla's CMS functionalities can be reused at the class level: input fields and validation, database connections, toolbars, CAPTCHA integration, session management, pagination of lists, etc. The library also includes a framework of non-CMS functionality for things like email messaging, manipulating images, or connecting to the APIs of social media like Facebook, LinkedIn, and Google.
- Event-based extensions. Joomla calls them plugins. They are fired upon standard and custom events, invoking code that can be aware of the current user, session, application being run, etc. Common uses include changes to a page's content, logging information, and even overriding PHP classes.
- An admin panel. Manage data records with regard to content, ordering, publishing state, and creation/deletion. Set application options.
- Website integration. More often than not, a web application needs to be accessible through a website, display information on site pages, and integrate with the website's users and data.
- Reusable extensions. Developers list their installable extensions through a directory of several thousand items, from simple plugins to full-featured applications. These can be reused and leveraged by custom applications.
Community driven, Joomla is constantly evolving with security updates and new features or enhancements.
auto-personalize with information for the given consumer and store. The emails are sent through Mandrill, a
branch of MailChimp, which handles whitelisting and reporting.
This application leverages user management of consumers and retail representatives, a popular extension
for managing a directory of stores, another extension for bridging to Mandrill, access control for managing
dashboard data, the JCE editor for composing email content and media management, standard article creation
for each campaign's landing page, and the component's admin panel for composing, testing, and launching
emails. Segmentation of consumers to one of over a thousand affiliate stores is maintained by the Joomla
extensions, so list segmentation was best handled by the site-integrated web app, which has access to these
ever-changing records, instead of constantly synchronizing segmentation with MailChimp.
Reusing Extensions
Often I can find a Joomla component that already implements most of what I need. In those cases, I will start
with that extension and tailor it with the customized code I need. For example, a training company needed a
way to list their classes (schedule, location, description) and a way to register and pay online. Starting with
an event registration system, I used the language feature to change terms, coded the client's unique business
rules, and wrote a payment plugin that interfaced with their accounting package and payment processor.
I was able to reuse the code providing functionality like calendaring, popup Google maps of locations,
various types of display modules, and the shopping cart. Of course, this approach means forking from an
existing extension, so you can no longer look to the extension developer for updates, but it does allow you to
reuse a lot of functionality instead of re-inventing it.
Resources
To build nontrivial web applications with Joomla, a programmer needs to understand its architecture and how it
works at the code level. Whether you are new to Joomla or a seasoned professional, if you intend to work with its
PHP and XML code, you ought to read Joomla Programming (Dexter & Landry, Addison-Wesley Publishing),
a book written by two key developers who helped to architect this CMS. It is essential reading, as it
thoroughly explains how Joomla works at the code level and provides extensive coding examples.
A second recommended resource is Learning Joomla 3 Extension Development (Tim Plummer, Packt
Publishing). This book is a to-the-point guide to building custom Joomla extensions, though arguably the first
book provides a more thorough explanation of what is happening in the code.
options are listed in this directory: http://extensions.joomla.org/extensions/tools/webbased-tools. Tools like
these build upon the Joomla framework, allowing you to focus almost entirely on just the business logic your
application needs.
Because the component is built upon its data, the first thing to do is sketch out the data fields and how they
will be organized within tables. In this example, the primary table will list the details of a puppy or kitten:
breed, sex, color, date-of-birth, image, price, and an optional description. To distinguish puppies from
kittens, I will create a category for each, and each pet record will set its category to one of the two.
To illustrate the use of related tables, I will create a table of breeds (name, description) that the pet
record will reference.
more you understand Joomla, its library, its MVC structure, and object-oriented PHP programming, the more
sophisticated you can be in adding custom features. This article can cover only a few examples, but they
should demonstrate typical techniques for customizing a Joomla application.
Customizing the layout
The most common need is to tailor the display of data. Our base component provides some bland layouts of
all the data. To see the layout, create a new menu item of type Petstore -> Pets, then view that page on the
front-end. In Joomla the front-end layouts are found under the directory
/components/<com_component_name>/views/<view_name>/tmpl/<layout_name>.php
The pre-built code holds a list of all items this page should display (accommodating for pagination). It
iterates through the list displaying each item in an HTML table. Following this pattern we can code the
layout by rewriting the foreach loop to look like this.
Listing 1. An example
<table class="pet-list">
<?php foreach ($this->items as $item) : ?>
  <tr>
    <td><?php echo $item->id; ?></td>
    <td><?php echo $item->category; ?></td>
    <td><?php echo $item->breed; ?></td>
    <td><?php echo $item->color; ?></td>
    <td><?php echo $item->image; ?></td>
    <td><?php echo '$' . $item->price; ?></td>
  </tr>
<?php endforeach; ?>
</table>
This generates an HTML table listing all the data, but the table is unformatted. Joomla provides library
functions to add CSS to the header, either as an embedded declaration or as a link to a CSS file. To include a
file, add this code within a PHP section:
Listing 2. A sample code
JFactory::getDocument()->addStyleSheet(JURI::base() . 'components/com_petstore/assets/pets.css');
Of course, make sure you create an assets directory under the component and create this CSS file within it.
A recommended practice is to put all component styling in a file like this, then add the line of code in each
view.html.php file of each view directory.
Add conditional features at runtime
When the server should run some logic based upon a scenario known only at runtime, develop and install a
plugin. Plugins are fired upon certain events, such as during certain stages of building a web page in
response to a browser request. Plugin development is out of scope for this article, but the function shown
here illustrates the power and versatility of a plugin. Here, we create a system plugin that checks whether
the current process involves this component and, if it does, invokes the line of code that adds the style
sheet to the header.
Listing 3. A sample code
// called within a system plugin
public function onAfterRoute() {
    $app = JFactory::getApplication();
    if ($app->isSite() && $app->input->get('option') == 'com_petstore') {
        // conditionally add this style sheet
        JFactory::getDocument()->addStyleSheet(JURI::base() . 'components/com_petstore/assets/pets.css');
    }
}
The fields category and breed return the id for those entries, but we want to display the text. The
model for this table contains the SQL that returns the values we get for each item. What we want is a JOIN
in the SQL that lets us fetch the related values. An investigation of the model found at
/components/com_petstore/models/pets.php
reveals that our component builder did just that. And if it hadn't, we could always add the SQL ourselves. So
to get the category name instead of its id value, we simply call $item->category_title, and we do likewise to
get the name of the breed.
But we can go a step further. Let's say we want to incorporate the description of the breed within our list of
pets. We can simply replicate the line of code that gets the name of the breed and change the copied line to
get the description instead. Once that field is added to the returned $item object, our layout file can add the
description text to the HTML as a tooltip or lightbox.
This function (the model's getListQuery()) is an important one. It is here that we manipulate the SQL to
filter the items returned from the database, to order the results, to declare which fields will be returned, and
to enforce access control.
Access control
Let's assume that the shop does not want the public to see pets until they have been on the site for X days.
However, for a small fee a user can subscribe to get the complete and most current listing. We would install a
subscription extension that allows the public to register, pay the subscription online, and then be added to the
subscriber user group. Through Joomla's ACL we create an access level for subscribers. All that with no coding.
Now we return to the model class. We will use Joomla code to determine whether the site visitor is a subscribed user
$isSubscriber = in_array(6, $user->getAuthorisedViewLevels());
and if so, we show the whole list. If not, we have the model add an SQL condition to filter out all records
that are newer than X days.
Adding configuration settings
Of course, it is better not to hard-code the number of days, nor the id of the access level used
for subscriptions. We want the store owner to be able to set values like these in a configuration screen.
Configuration values are easily added to a component through a file named config.xml, found in the
component's base directory (on the administrator side). For our component that is
/administrator/components/com_petstore/config.xml
Examine this file that our component builder generated, or look at the config.xml file of other components,
and you will quickly see the XML pattern for adding fields. Here's the code I would use for X-days.
Listing 4. A sample code
<field name="xdays" type="list" default="" label="X-days"
       description="days to defer public viewing">
  <option value="1">1</option>
  <option value="2">2</option>
  <option value="3">3</option>
</field>
The component's admin screen provides an Options button for reaching the configuration screen, and the
code within the model can reference configuration values this way
$xdays = JComponentHelper::getParams('com_petstore')->get('xdays');
Library functions
Most of the functionality in Joomla is rooted in the classes of its library. A savvy app builder will leverage
these. As an example, we will add a feature that automatically resizes uploaded photos to pixel
dimensions suited for website use. The Joomla class JImage provides the needed functionality, as the following
code demonstrates:
Listing 5. A code demonstration
jimport('joomla.image.image');
$jimg = new JImage($item->image);
if ($jimg->isLoaded() && ($jimg->getWidth() > $maxWidth || $jimg->getHeight() > $maxHeight)) {
    $jimg->cropResize($maxWidth, $maxHeight, false);
    $jimg->toFile($item->image);
}
Here again, it would be nice to set the $maxWidth and $maxHeight values within the component's configuration
settings.
Going deeper
Real business needs typically call for customization that runs deeper. For example, the application
could allow subscribers to sign up for daily email digests of newly added puppies and kittens, and the user
could be allowed to select the types and breeds to monitor. Maybe the store owner wants to track how long
each pet is listed before it is sold, so reports can be run to show the average time each breed or price range
remains unsold. Starting with the Joomla platform and the base component we generated, an experienced
developer should be able to deliver such a web app.
As stated earlier, Joomla application development does require the developer to understand the code and
architecture of Joomla. The better one understands it, the more sophisticated applications one can develop
and deliver. As one can see, the Joomla platform provides the reusable functionality for most web needs,
freeing you to focus your coding effort on the custom functionality that the business needs.
Drupal as a framework
Drupal basic content structure
Overview of what Drupal is able to do
Why Drupal?
Many people know content management systems (CMS) only as web applications used to generate
blogs, news columns and many kinds of content-oriented sites. The most widely used CMSs on the Web are
WordPress, Joomla and Drupal (gratefully, all of them open source). So, how do you know which of them you
should choose for your project?
According to users, WordPress is easy and fast to use but not very extensible. Drupal is like a big control
panel that people need to learn how to use. Joomla fits somewhere in the middle. As it seems,
Drupal is often seen as the most difficult CMS to use. So why do so many people use it? What makes Drupal
a unique CMS?
Drupal isn't just considered a CMS, but a web development framework and platform. That's why
some organizations and corporations use Drupal as their information technology solution:
Nokia Research
The White House
MIT Division of Student Life
There are more examples of people using Drupal on drupal.org https://drupal.org/case-studies.
Drupal is scalable. You can develop sites that integrate with other web services like social networks, CRMs,
mobile applications, etc. You can even use Drupal not as a website but as a web service that serves other
applications, such as mobile applications, desktop applications or other Drupal sites.
Drupal is secure. Within Drupal.org, a community is dedicated to finding and solving security issues with
Drupal. Besides, Drupal maintains high standards of security procedures for system administration. The
code is all developed on Drupal.org, although some of it has been supported by third-party applications. As
the code is open source, there is little chance of malware getting into the Drupal core. All these points make
Drupal one of the safest content management platforms.
have fields for users and allow them to register or not (maybe your site doesn't need new users, only
admins, or your site is in beta, so you prefer to invite your guests).
As mentioned, Drupal can be extended by installing or creating modules. Modules are divided into three types:
Core modules: these come bundled with the Drupal installation.
Contrib modules: installed from Drupal.org. Notably, almost none of these modules is hosted by
third-party organizations; they are hosted by Drupal.org and its contributors, so there is little risk of finding
malware in any of them. All contributed modules are shared under the GNU GPL license, the same as Drupal itself.
Custom modules: modules built by the website developer.
Drupal's architecture allows modules to be installed without modifying the core; in theory it shouldn't even be
necessary to modify contributed modules, only to create or alter things in your custom modules.
A basic structure for a module is:
mymodule (folder in sites/all/modules/custom)
mymodule.info (basic information: name, description, version, etc.)
mymodule.module (the code itself, it may call to other files inside or outside this module)
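A minimal sketch of what the mymodule.info file might contain, following the Drupal 7 .info format (the module name and description here are placeholders):

```ini
; sites/all/modules/custom/mymodule/mymodule.info
name = My Module
description = A custom example module.
core = 7.x
```

Drupal reads this file to list the module on the admin Modules page; the .module file next to it holds the actual PHP code.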
Themes
As with almost every CMS, Drupal allows you to install themes. Themes work the same way as modules: there are
core, contributed and custom themes. The difference between themes and modules is that some themes
are not made to be used as a front-end theme but as a base theme, so a front-end developer can build a
subtheme on top. One of the most used and well-developed base themes is Omega. This theme is prepared to work
with some of the best theming practices and tools, like grids (960gs, Blueprint), Sass, responsive design,
HTML5, CSS3, media queries, etc.
Hooks
A hook is something like a programming event: when a node is saved, when the site initializes, when
defining new pages for the site, when an entity is loaded, after a form is built, etc. Basically, it's a way to
alter some pre-defined behaviour of an activity. Let's see an example for when a form is built.
Listing 1. An example
function mymodule_form($form, &$form_state) {
  $form = array();
  $form['text'] = array(
    '#type' => 'textfield',
    '#title' => t('Foo'),
    '#required' => TRUE,
  );
  return $form;
}
That's a normal definition of a form (the interesting part is that now you know how to create forms in Drupal
in PHP code, although it still needs a submit action). To see that form, it needs to be called by the
drupal_get_form function, and that function needs to be called by a hook which defines pages (hook_menu).
If that form is defined in my module, there is no problem modifying it, because I have access to this code.
But what if I need to modify something in the node creation form? This is how a hook works: a hook
is a way to extend another functionality, and hooks are not used by themselves but implemented. The hook we are
going to implement as an example is hook_form_alter. To implement a hook you create a function with this
name: mymodule_form_alter. Notice that we replaced the word hook with the name of our module. That's the
way our module implements a hook. Now, it's important to know that hooks have parameters that must be
the same in all hook implementations. For example, hook_form_alter has these parameters: (&$form, &$form_
state, $form_id). So mymodule_form_alter must also use these parameters.
Now our hook implementation looks like this
Listing 2. A hook implementation
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  // Code goes here
  if ($form_id == 'mymodule_form') {
    $form['text']['#title'] = 'Bar';
  }
}
This code will be executed after mymodule_form is built by drupal_get_form. Notice that it is enough to define the
name of the function based on the name of the hook and use the correct parameters, and Drupal will do the rest.
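This naming-convention dispatch can be illustrated with a small framework-free sketch, written here in JavaScript for brevity (the module and function names are hypothetical, and this is not Drupal's actual implementation): to fire a hook, the core looks for a function named after the hook in each enabled module and calls every one it finds.

```javascript
// Hypothetical sketch of hook dispatch by naming convention.
// Each module object may or may not implement a given hook.
var modules = {
  mymodule: {
    form_alter: function (form, formState, formId) {
      if (formId === 'mymodule_form') {
        form.text.title = 'Bar'; // alter another module's form
      }
    }
  },
  othermodule: {} // implements no hooks
};

// "Core" fires a hook: call <module>.<hook> in every module that defines it.
function invokeAll(hook /*, args */) {
  var args = Array.prototype.slice.call(arguments, 1);
  Object.keys(modules).forEach(function (name) {
    var impl = modules[name][hook];
    if (typeof impl === 'function') {
      impl.apply(null, args); // objects are passed by reference, so hooks can mutate them
    }
  });
}

var form = { text: { title: 'Foo' } };
invokeAll('form_alter', form, {}, 'mymodule_form');
console.log(form.text.title);
```

The key point mirrors the article: the implementing module never registers anything; defining a correctly named function is enough for the dispatcher to find it.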
Another very important example of a hook is hook_menu; this hook is used to define URLs and their callbacks
Listing 3. The hook_menu
function mymodule_menu() {
  $items['my-form-url'] = array(
    'title' => 'My form',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('mymodule_form'),
    'access callback' => TRUE,
  );
  return $items;
}
This function implementing hook_menu will create a URL called my-form-url, so each time we go to http://
mysite.com/my-form-url, the page will show us whatever the function in 'page callback' returns. In this case it
will return a form, because we called the Drupal function drupal_get_form and passed it an argument,
mymodule_form. 'access callback' is used to define who can see this page; in this case, everyone can.
Drupal API
Drupal core provides an API that allows developers to build on and extend core and contributed modules, but as you
can see, not everything in Drupal can be based on hooks; an API also has functions, constants, even classes.
Drupal facilitates the tasks of creating forms, pages, entity types, bundles, entity instances, etc.
Specifically these are the APIs in the core:
DBTNG (Database data manipulation)
Entity API (can be extended by the Entity API contributed module)
Field API (creates field types by PHP to use them on different entities)
Form API
Image API (based on php-gd extension to manipulate images)
Node API
Theme API
Drupal distros
Have you ever heard about a Linux distro? Basically, it's a Linux operating system distributed with different
programs and functions. Drupal is so scalable that it's possible to apply the same analogy to Drupal. A
Drupal distribution is a Drupal installation configured with installed modules, themes and pre-defined
settings called installation profiles. All of these work together to solve an entire use case.
An example of a Drupal distro is Commerce Kickstart. This one brings up a Drupal copy focused on building
an eCommerce site based on the Drupal Commerce module suite. It brings lots of things ready to use,
like administration of products, orders, users, taxes, payment options, etc.
Another example is RedHen CRM. This distro is created to offer a fast way to build a CRM based on Drupal.
It can manage contacts, organizations, relationships, etc. This CRM is ready to connect with other enterprise
CRMs like Salesforce.
But the coolest part is not necessarily the use case a Drupal distro may achieve, but the flexibility and
scalability with which Drupal lets you build upon and extend the functionality of one of these distros even
further. You can install other modules or themes as if it were a normal Drupal installation, or make your own
modules to achieve more specialized functions. For example, you may need to create a new shipping option
that is valid only in the country the website is being developed for, or you may need RedHen CRM to be
able to serve data to a mobile application. Of course, you can do all of that.
Summary
Drupal provides all the tools that a project development team needs to build, program, theme, maintain and
extend a Web project. Drupal boosts quality and saves time, with fewer man-hours and lower costs for the
project, because the team does not need to worry about building a database structure, developing a safe
product, or making the application scalable; Drupal has already done that for you.
The author has been working as a web developer and web designer in a small company for the last two
years. He has used various tools for web development, like JavaScript, PHP, MySQL, WordPress and Drupal.
He has made some contributions to the Fedora design team and helped to solve issues, and he has also
created patches on Drupal.org.
The readers of this article will gain a strong working knowledge of how to leverage the tools built around the AngularJS
framework, resulting in development efficiency, standards met and best practices followed. We will achieve these goals by learning how to scaffold the structure of our application using the Yeoman command line utility, rapidly prototype the user interface with Bootstrap from Twitter, employ the AngularJS Sublime Text package, and debug with the Batarang Chrome Developer
Tools extension. All of the aforementioned tools are developed, documented, maintained, supported and open sourced by the AngularJS community for developers everywhere to build the best Angular applications possible.
To get the most out of this article, developers should have a working knowledge of the fundamental Web technology stack, including but not limited to:
JavaScript
Google Chrome/Developer Tools
MV* architecture
Shell/Bash
Node.js/NPM
The AngularJS community has been facilitating a positive developer experience from day one. Angular
achieves this by setting and following proven standards that ease a lot of the pain points felt when
developing in previously popular frameworks, while distilling the best of them into a single lightweight
JavaScript file for our convenience. A key attribute retained from past frameworks is an emphasis on
tooling that empowers developers to build the best application possible in the shortest amount of time.
Take for example the Ruby on Rails framework. Rails' focus on a standard naming convention and file/
folder architecture allowed a robust command line utility to ship with the framework, providing
developers a means to quickly scaffold the structure of an application, add 3rd-party gems and therefore
develop more efficiently. AngularJS has followed this pattern by adopting a similar standard in the MVC, or
Model View Controller, architecture (although it is technically a Model-View View-Model pattern; I prefer
Addy Osmani's term MV*). Angular's charm doesn't stop there: the community has built tools that foster
a positive developer experience throughout the development lifecycle, including generators, test suites, UI
libraries, editor integration and debugging tools.
When building an application with a new framework for the first time, it is important that the developer
experiences quick wins with minimal effort. AngularJS achieves this by integrating with the Yeoman
command line utility as a generator. Yeoman ties together a number of highly useful tools, including
Grunt, Twitter's Bower package manager for open source GitHub repositories, LiveReload integration
with your Web browser of choice and a powerful build script. Grunt gives us access to a fast Node.js
server that works in conjunction with LiveReload. This pair will allow us to rapidly develop our Angular
app locally. We'll then employ Bower to install the Angular-UI Twitter Bootstrap prototyping library to
build the interface with ease, all the while using the AngularJS Sublime Text plugin to make editing our
app painless and debugging with the Batarang Chrome DevTools extension to ensure our app behaves as
intended. Let the development commence.
Yeoman will ask you a series of questions, such as "Would you like to include Twitter Bootstrap? (Y/n)";
type Y for all.
Notice that your default Web browser has opened (if not previously open) with a new tab pointed at
localhost:9000. The default content of the page will display a list of libraries added to our
application by Yeoman and the Angular generator. At this point we can begin speeding up the process of
boilerplating by leveraging the Angular sub-generators to scaffold views, controllers, routes, services, etc.
Yo Angular
The following sub-generator commands can all be run to scaffold a new portion of your AngularJS
application. This is immensely powerful because all of the grunt work is done for you. Take for example the
command yo angular:route tasks. It:
Creates a new controller file in app/scripts/controllers/ named tasks.js
Creates a new test file in test/spec/controllers/ named tasks.js
Creates a new view file in app/views/ named tasks.html
And lastly adds a new route to /tasks in the existing app.js found in /app/scripts/
The Angular sub-generator commands can be scoped granularly, allowing us to create any one of the
MVC components individually. Some of the more common commands are listed below. For a full list,
navigate to yeoman.io.
yo angular:view <NAME>
yo angular:controller <NAME>
yo angular:route <NAME>
yo angular:service <NAME>
yo angular:provider <NAME>
yo angular:factory <NAME>
AngularUI Bootstrap
The AngularJS community has built a component library for Angular called AngularUI Bootstrap, based
upon Twitter's Bootstrap front-end UI framework. This is a very quick way to build your interface upon
two well-supported and well-documented Open Source projects. Because Angular supports the bleeding-edge
HTML5 Web Components specification, all of the individual UI elements are implemented as Web
Components. We can install AngularUI by running bower install angular-ui.
A relatively simple example of using the AngularUI library of Web Components is a Bootstrap alert
Listing 1. HTML in app/views/tasks.html
<div ng-controller="AlertDemoCtrl">
  <alert ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">{{alert.msg}}</alert>
  <button class="btn" ng-click="addAlert()">Add Alert</button>
</div>
$scope.closeAlert = function(index) {
$scope.alerts.splice(index, 1);
};
Cmd/Ctrl + Shift + P in Sublime opens the Command Palette.
panel, to inspect the models attached to a given element's scope. The extension is easy to install from either
the Chrome Web Store or the project's GitHub repository, and inspection can be enabled by:
Opening the Chrome Developer Tools
Navigating to the AngularJS panel
Selecting the Enable checkbox on the far right tab
Your active Chrome tab will then be reloaded automatically and the AngularJS panel will begin populating
with inspection data.
Summary
AngularJS is proving to be a valuable member of the Web stack for many reasons, tooling being only one.
Through these tools, developers are able to build their applications faster, with greater ease and with more
robust features, without the framework getting in their way. For these reasons the Angular community has
continued to grow at an accelerated rate since its inception three years ago. To conclude, we have learned
how to use the Yeoman command line utility to scaffold our MV* application, prototype our views with the
AngularUI library, write code faster with the AngularJS Sublime Text package and debug with the AngularJS
Batarang Chrome Developer Tools extension. These tools are constantly being refined by the Angular
community to evolve in parallel with the framework, and will therefore continue to improve our development
experience.
Zachariah Moreno is a 22-year-old Web developer from Sacramento, California, who enjoys contributing
to and working with Open Source projects of the Web flavor. He can usually be found on Google+
posting and discussing design, developer tools, workflow, technology, photography, golf and his English
Bulldog, Gladstone.
Figure 1. Model, View, Controller in the AngularJS World
As shown above, the Model is just data. It represents the truth of your application, of what the user sees. But
it is up to the Controller and the View to decide what part of the model gets displayed to the user, and how.
Instead of manually changing parts of the view or grabbing the content of the form, your prime concern with
respect to any user action should consist of one of the following:
grab the current state of the model and send it to the server,
update the model based on the server response,
modify the model to change how the UI looks.
Between these three actions, the majority of your use cases will be covered.
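As a rough, framework-free sketch (all function names are hypothetical, and the server is mocked), the three model-centric actions above could look like this:

```javascript
// The model is plain data; the view (not shown) simply renders from it.
var model = { emails: [] };

// 1. Grab the current state of the model to send to the server.
function payloadForServer() {
  return JSON.stringify(model.emails);
}

// 2. Update the model based on a (mock) server response.
function applyServerResponse(json) {
  model.emails = JSON.parse(json);
}

// 3. Modify the model to change how the UI looks.
function markAllRead() {
  model.emails.forEach(function (email) { email.unread = false; });
}

applyServerResponse('[{"id":1,"unread":true},{"id":2,"unread":false}]');
markAllRead();
console.log(payloadForServer());
```

Notice that nothing here touches the DOM; in AngularJS the view would re-render itself from the model after each step.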
Let us now take an example of how this concept plays out in the real world with a few common use cases:
Listing 1. An array of email JSON objects
[
  {id: 1, subject: "Hi, I'm the ...", ...},
  {id: 2, subject: "Hi, I'm the ...", ...},
  {id: 3, subject: "Hi, I'm the ...", ...},
  {id: 4, subject: "Hi, I'm the ...", ...},
  {id: 5, subject: "Hi, I'm the ...", ...}
]
Listing 1 shows a simple array of JSON objects, each of which has an id, a subject, a timestamp, and a
boolean which signifies whether the mail is unread or not.
Now, the traditional jQuery way of highlighting these unread emails would be as shown in Listing 2.
Listing 2. Highlighting unread emails using jQuery
for (var i = 0; i < emails.length; i++) {
  if (emails[i].unread === true) {
    $('#email-' + emails[i].id).addClass('unread-mail');
  }
}
In Listing 2, we loop over all the emails, and when we find an unread email, we add the CSS class unread-mail
to the HTML. This is an imperative way of doing it, and is what most people are used to. Hence, when they
switch to AngularJS, this is the type of code that often shows up in controllers. What developers should instead be
thinking is: how can I declaratively define this, so that the view decides what to do based on the model?
Listing 3 shows how the code might look in AngularJS (a purely HTML solution):
Listing 3. AngularJS template solution to highlight unread emails
<li ng-repeat="email in emails" ng-class="{'unread-mail': email.unread}">
  <!-- Display email subject and timestamp here -->
</li>
Immediately, two things should stand out from Listing 3. First, we have completely, in a declarative manner,
defined what our UI is going to look like. In a jQuery world, this would have involved looping over the
emails, adding a template and inserting it into the DOM. In AngularJS, the magic of data-binding
takes care of all of this. Second, we have also declaratively indicated which emails need to be highlighted
because they are unread, by using the ng-class directive. ng-class basically tells AngularJS to add the
unread-mail class when email.unread is true, and to remove it otherwise.
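The {class: condition} map that ng-class accepts can be sketched framework-free (this mirrors ng-class semantics for illustration only; it is not Angular's actual implementation, and classesFor is a hypothetical name):

```javascript
// Given a map of class names to boolean conditions, return the class
// names whose condition is truthy, just as ng-class would apply them.
function classesFor(map) {
  return Object.keys(map).filter(function (name) { return map[name]; });
}

console.log(classesFor({'unread-mail': true}));  // one entry: 'unread-mail'
console.log(classesFor({'unread-mail': false})); // empty: class is removed
```

Whenever email.unread changes, Angular re-evaluates the map and the class list follows the model automatically.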
Tabs
Let's talk about another common case where a jQuery approach is not what we want. Let's say we have two
tabs, and based on which tab is selected, we want to highlight the tab as well as change the content. So let us
first take a look at the HTML backing this tab structure, as shown in Listing 4.
Listing 4. HTML for showing Tabs
<ul class="tabs">
  <li class="tab1 selected">Tab 1</li>
  <li class="tab2">Tab 2</li>
</ul>
<div class="tab1 content">Content for Tab 1 here</div>
<div class="tab2 content">Content for Tab 2 here</div>
An unordered list holds our tabs at the top, and the divs hold the contents. Now in jQuery, we would have to
do the following every time someone clicks on Tab 1 or Tab 2:
add the selected class to the tab,
remove the selected class from the other tabs,
hide all the tab contents,
show only the selected tab's contents.
Yikes! That is a lot of work. Now, how can we leverage AngularJS's model to do this instead?
Listing 5. AngularJS approach to having Tabs
<ul class="tabs">
  <li class="tab1"
      ng-class="{selected: isSelected('tab1')}"
      ng-click="selectTab('tab1')">Tab 1</li>
  <li class="tab2"
      ng-class="{selected: isSelected('tab2')}"
      ng-click="selectTab('tab2')">Tab 2</li>
</ul>
<div class="tab1 content" ng-show="isSelected('tab1')">Content for Tab 1 here</div>
<div class="tab2 content" ng-show="isSelected('tab2')">Content for Tab 2 here</div>
Listing 5 shows how we can use ng-class here again, similar to before, by setting a class selected based on
a function call. We'll take a look at the function in a second, but basically, it will return true or false based on
whether the currently selected tab is the one specified in the argument. Now, we reuse the same isSelected
function to conditionally show and hide the contents of the tab as well. What do the isSelected and selectTab
functions look like? Something as simple as the code in Listing 6.
Listing 6. AngularJS functions to support the Tab app
var currentTab = 'tab1';
$scope.selectTab = function(tab) {
  currentTab = tab;
};
$scope.isSelected = function(tab) {
  return tab === currentTab;
};
Again, we have, in a declarative manner, specified what the UI is going to show, and how it is going to display
and style certain elements. There is no need to dig through multiple JavaScript files looking for where an
element ID is being used to manipulate the DOM.
In AngularJS, we modify the Model, and let AngularJS do the heavy lifting.
Now, let's say we get these fields from the server as JSON when the page loads. And, when the user hits
submit, we might have to do some other validation and then finally send the data across the wire. Listing 8
shows how these two functions might look.
Listing 8. jQuery way of handling forms
function setFormValues(userDetails) {
  // userDetails is JSON from the server
  $('#nameField').val(userDetails.name);
  $('#emailField').val(userDetails.email);
}

function getFormValues() {
  var userDetails = {};
  userDetails.name = $('#nameField').val();
  userDetails.email = $('#emailField').val();
  // Do some other work with it and then send it
}
Now consider if we had radio buttons, or check boxes. You would have to loop through each one to grab its
value and figure out the final state of the model. That is extra code you shouldn't have to write.
Now let us take a look at how we can leverage the two-way binding in AngularJS to accomplish the same in
Listing 9.
Listing 9. AngularJS Form example
<form id="my-form">
  <input type="text" id="nameField" ng-model="user.name">
  <input type="email" id="emailField" ng-model="user.email">
  <button>Submit</button>
</form>
Furthermore, anytime we need access to the contents of the form, we can simply refer to the $scope.user
variable and use it as we need to. No need to reach out into the DOM, manipulate state or anything else.
AngularJS handles all this for you. Want to send the form contents to the server as part of the registration
flow? Just send $scope.user, which has the most up to date value of the form!
Let us take a look at Listing 10 which demonstrates how such a normal flow would work in jQuery, vs
Listing 11 which shows the same flow in AngularJS. Both fetch some data from the server, get the data to
display in the UI and then let the user edit it and save it.
Listing 10.1. HTML code for using jQuery to handle a form-based application
<form name="myForm">
  <input type="text" id="nameTxt">
  <input type="email" id="emailTxt">
  <button class="updateButton">Update</button>
</form>
You can immediately see that in the AngularJS code, we don't have to write any code to transfer the data
from the UI to the code and back from the code to the UI. We leverage AngularJS's data-binding and thus
significantly reduce the amount of code we write (and thus the possibility of errors as well!).
Now consider adding other options on a case-by-case basis. You might want to know when the user selects a
date, or you might want to set a different date format instead of the default MM/DD/YYYY. While you can
do all of this normally, there are a few pain-points:

This is not declarative. Someone would look at the HTML and never realize that the input field magically
becomes a datepicker at some point. You would have to dig through the code to find out who is doing what.

Someone who has no experience with jQuery UI would have to sift through the API docs to figure out how
to do common things.

Anyone looking to reuse the component would end up copy-pasting this code, rewriting callback functions
and instantiations wherever they needed it.
Now consider the alternative, where jQuery UI datepicker is wrapped as a reusable component, exposing just
the most commonly used APIs in a declarative manner. For example, in an ideal world, I would want to do
something like:
<input type="text" jqui-datepicker ng-model="startDate" date-format="dd/mm/yyyy" select="dateSelected(date)">
Anyone looking at the HTML can immediately understand that the value of the datepicker is available in the
model variable called startDate, and that when the date is selected, a function called dateSelected is called.
Listing 12 demonstrates how we would write such a directive. This directive needs to take care of two
things:
Getting the data from the jQuery UI datepicker, and informing AngularJS about its change
Telling jQuery UI about any changes that happen inside of AngularJS
Also note the use of scope.$apply(). This is to let AngularJS know that the model has changed outside of
AngularJS's control, and that it needs to update all the views to reflect the change. In this case, it is the user
selecting a date in the jQuery UI datepicker widget.
Listing 12. A Simple jQuery UI Datepicker Directive
angular.module('fundoo.directives', [])
.directive('jquiDatepicker', function() {
  return {
    // Enforce the AngularJS default of restricting the directive to
    // attributes only
    restrict: 'A',
    // Always use along with an ng-model
    require: 'ngModel',
    scope: {
      // This method needs to be passed in to the directive
      // from the view controller
      select: '&'
      // Bind the select function we refer to the right scope
    },
    link: function(scope, element, attrs, ngModelCtrl) {
      var optionsObj = {};
      // Use the user-provided date format, or the default
      optionsObj.dateFormat = attrs.dateFormat || 'mm/dd/yy';
      optionsObj.onSelect = function(dateTxt, picker) {
        scope.$apply(function() {
          // Update the AngularJS model on jQuery UI Datepicker date selection
          ngModelCtrl.$setViewValue(dateTxt);
          if (scope.select) {
            scope.select({date: dateTxt});
          }
        });
      };
      // Attach the jQuery UI datepicker widget with these options
      element.datepicker(optionsObj);
    }
  };
});
This is one type of directive, where we care about input from the user. On the other hand, sometimes, we
might want to just get data into our widget and display it.
For example, if we wanted a custom component that we use to display a photo along with its comments,
likes and other metadata in a grid, we could create a component that we end up using as follows:
<div my-photoview photo-meta="photoObj"></div>
The JS code for the myPhotoview widget would look something like Listing 13.
Listing 13. A photo display widget
angular.module('fundoo.directives').directive('myPhotoview', function() {
  return {
    restrict: 'A',
    scope: {
      photoMeta: '='
    },
    template: '<div class="photo-widget">' +
              '<img ng-src="{{photoMeta.url}}"/>' +
              '<span class="caption">{{photoMeta.caption}}</span>' +
              '</div>',
    link: function($scope, $element, $attrs) {
      // More specific rendering logic, watches, etc. can go here
    }
  };
});
Here, photoObj is a JavaScript object that contains the URL of the photo, the caption, the comments
information and the number of likes. The directive can encapsulate all the logic of how this is rendered,
plus additional functionality like liking the photo, commenting on it, etc. It might even decide
to conditionally include other templates, or use jQuery to manipulate certain parts of its template.
The interesting things to note here are:
The naming convention: when we declare our directive in the JS code, we define it as myPhotoview.
But when we use it in the HTML, we need to write it as my-photoview. The camelCase from the JS gets
translated to dash-separated words in the HTML. This is true for the directive as well as all the attributes
defined on it.
The scope definition: the scope defines something called photoMeta, with its value as '='. This means
that when the directive is used, we can pass any JavaScript object to it using the attribute photo-meta
in the HTML, and the value of that object will be available within the directive as $scope.photoMeta.
In the case of Listing 13, we can access the contents of photoObj as $scope.photoMeta and display it.
Data-binding: the best part about defining photoMeta in the scope as '=' is that it tells AngularJS
that the object needs to be kept up to date inside the directive. That is, if photoObj changes in the parent
controller, then the latest value must be made available to the directive. No extra code needed!
Link function: The link function is the place to put additional logic. For example, while the caption and the
image itself would change automatically if photoObj ended up changing in Listing 13, if we wanted to do
some additional data manipulation or DOM manipulation, the link function is where that code would go.
The main take-away from both these examples is to encapsulate all this DOM-modifying behavior within
directives.
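The camelCase-to-dash naming rule mentioned above is mechanical, and a small sketch makes it concrete. This is a simplified illustration, not AngularJS's actual normalization code (the real compiler also strips x- and data- prefixes):

```javascript
// Convert a JS directive/attribute name to its HTML form, and back.
// Simplified sketch of AngularJS's naming normalization.
function camelToDash(name) {
  // Each uppercase letter becomes '-' plus its lowercase form
  return name.replace(/[A-Z]/g, function (letter) {
    return '-' + letter.toLowerCase();
  });
}

function dashToCamel(name) {
  // Each '-x' pair becomes the uppercase letter 'X'
  return name.replace(/-([a-z])/g, function (match, letter) {
    return letter.toUpperCase();
  });
}

camelToDash('myPhotoview'); // 'my-photoview'
dashToCamel('photo-meta');  // 'photoMeta'
```

This is why the directive registered as myPhotoview is written as my-photoview in the markup, and why the attribute photo-meta surfaces as photoMeta on the scope.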
Developing an offline model that uses LocalStorage
Storing state of views to remember what to display when the user switches views
How does this work? Let us take a simple AngularJS service that is defined in Listing 14.
Listing 14. AngularJS Service as an App Store
angular.module('MyApp').factory('AppStore', function() {
  return {
    value: 200,
    doSomething: function() {}
  };
});
Now any directive, controller or service that asks for AppStore will get the same instance of the AppStore
service. That means if one controller sets AppStore.value to 250, then the second controller will see the same
value there as well.
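The reason every consumer sees the same value is that the injector caches the factory's first result and hands out that one instance thereafter. The following is a toy sketch of that caching, for illustration only — not AngularJS's actual injector code:

```javascript
// Toy injector: the first request runs the factory, every later
// request returns the cached singleton.
var registry = {}; // name -> factory function
var cache = {};    // name -> singleton instance

function factory(name, factoryFn) {
  registry[name] = factoryFn;
}

function get(name) {
  if (!(name in cache)) {
    cache[name] = registry[name](); // created exactly once
  }
  return cache[name];
}

factory('AppStore', function () {
  return { value: 200, doSomething: function () {} };
});

var fromControllerA = get('AppStore');
var fromControllerB = get('AppStore');
fromControllerA.value = 250;
// fromControllerB.value is also 250: both names point at one object
```

Because both controllers hold references to the same object, a write through one is immediately visible through the other — which is exactly what makes services a natural place for shared state.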
You might ask at this point: what, then, is the difference between a Service and a Factory? The simplest way
to think of each one is as follows:
Factory: A factory is a function that is responsible for creating a value or an object. The advantage of a
factory is that it can ask for other dependencies, and use them when creating the value. The AngularJS
factory just invokes the function passed to it, and returns the result.
Service: An AngularJS service is a special case of a factory for when we are more object-oriented, and thus
want AngularJS to invoke the new operator on the function and pass us the result.
Let us take a look at Listing 15 which demonstrates how these are different:
Listing 15. AngularJS Service and Factory
// Ask for the $http service as a dependency
function TestService($http) {
  this._$http = $http;
}
TestService.prototype.fetchData = function() {
  return this._$http.get('/my/url');
};

angular.module('TestApp', [])
  .service('TestService', TestService)
  .factory('TestFactory', function($http) {
    return {
      fetchData: function() {
        return $http.get('/my/url');
      }
    };
  })
  .controller('TestCtrl', function(TestFactory, TestService) {
    TestFactory.fetchData();
    TestService.fetchData();
  });
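The mechanical difference can be shown without the framework. In this sketch (a simplification; Angular's real injector also handles dependency resolution and caching), a factory's function is called and its return value becomes the injectable, while a service's function is invoked as a constructor with new:

```javascript
// A factory's function is *called*; its return value is the instance.
function instantiateFactory(factoryFn) {
  return factoryFn();
}

// A service's function is invoked with `new`; `this` becomes the instance.
function instantiateService(Constructor) {
  return new Constructor();
}

function TestService() {
  this.kind = 'service';
}
TestService.prototype.fetchData = function () { return 'data'; };

var serviceInstance = instantiateService(TestService);
var factoryInstance = instantiateFactory(function () {
  return { kind: 'factory', fetchData: function () { return 'data'; } };
});
// serviceInstance instanceof TestService -> true
// factoryInstance is just the object literal the factory returned
```

In practice this means a service keeps its prototype chain (useful for OO-style code), while a factory gives you full control over exactly what object is produced.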
Dependency Injection
AngularJS heavily relies on Dependency Injection, and you should too.
Dependency Injection is a concept that says users of a service or a dependency should declare and ask for
their dependencies, instead of trying to instantiate them themselves when needed. This has a few advantages:
Dependencies are explicit: anyone can look at the declaration of a controller or service and figure out
what is needed to make it work.

Testability: in tests, we can easily swap out a heavy service (something that talks to the server, say) with
a mock and test just the functionality we care about. This is covered in more depth in the next section.
What this means for you is that you should try and leverage AngularJS' dependency injection system and let
Angular do the heavy lifting whenever possible. Let AngularJS figure out how to get you a fully configured
service (which in turn might depend on 5 other things).
And remember, Dependency Injection is everywhere in AngularJS.
Need access to a service from a directive? Add the dependency and you can use it.
Need to access one service from another? Dependency Injection!
Need a constant value in the Controller? You can ask AngularJS for it.
And this makes our testing life way easier.
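The declare-and-ask idea can be captured in a few lines. This toy injector (an illustration, not Angular's implementation; the recipe names are invented for the example) resolves a whole dependency graph before handing back a fully configured instance:

```javascript
// Toy dependency injector: each recipe names its dependencies, and
// resolve() builds them all before building the thing you asked for.
var recipes = {};

function register(name, deps, factoryFn) {
  recipes[name] = { deps: deps, factoryFn: factoryFn };
}

function resolve(name) {
  var recipe = recipes[name];
  // Recursively resolve every declared dependency first
  var args = recipe.deps.map(function (dep) { return resolve(dep); });
  return recipe.factoryFn.apply(null, args);
}

register('config', [], function () {
  return { baseUrl: '/api' };
});
register('httpClient', ['config'], function (config) {
  return { get: function (path) { return 'GET ' + config.baseUrl + path; } };
});
register('userService', ['httpClient'], function (http) {
  return { load: function () { return http.get('/users'); } };
});

resolve('userService').load(); // 'GET /api/users'
```

Note that userService never constructs an httpClient itself; it merely declares the need, and the injector supplies a fully configured one — which is also what makes swapping in a mock for testing trivial.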
If you didn't catch the bug there, don't worry about it. Neither did we, for quite some time. So what was the
attempt there in the first place? We were trying to fetch some data from the server, and then keep polling the
server every 10 seconds to see if it was up to date. This was of course part of a larger codebase, and I have
stripped away everything that is not relevant.

Now, we didn't have any unit tests for this, so obviously, we were relying on manual QA and pure luck to
ensure it worked. And when it didn't, finding this needle in the haystack was not fun.
There are two problems here, both of which have to do with the use of setInterval. First, setInterval is not
AngularJS-aware, so we need to manually tell AngularJS to update its views by calling $scope.$apply(). But
the second is more insidious. Instead of calling setTimeout (or better, the Angular version, $timeout), we
are calling setInterval. The number of calls made to /my/url over time is shown in Figure 2.
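The difference is easy to see with a small simulation on a fake clock (illustrative only; the 10-tick interval and 25-tick server delay are made-up numbers). setInterval blindly fires another request every interval even while earlier requests are still in flight, whereas a setTimeout-style chain schedules the next poll only after each response arrives:

```javascript
// Simulate polling a slow server (responses take 25 ticks) with a
// 10-tick poll interval, under two strategies.
function simulate(strategy, totalTime) {
  var started = 0;
  var events = [{ time: 0, type: 'poll' }];
  function schedule(time, type) { events.push({ time: time, type: type }); }

  for (var t = 0; t <= totalTime; t++) {
    var due = events.filter(function (e) { return e.time === t; });
    due.forEach(function (e) {
      if (e.type === 'poll') {
        started++;
        schedule(t + 25, 'response');   // slow server: answers 25 ticks later
        if (strategy === 'interval') {
          schedule(t + 10, 'poll');     // blind repeat, response or not
        }
      } else if (e.type === 'response' && strategy === 'chained') {
        schedule(t + 10, 'poll');       // next poll only after the answer
      }
    });
  }
  return started;
}

simulate('interval', 100); // 11 requests, many overlapping in flight
simulate('chained', 100);  // 3 requests, never more than one in flight
```

With setInterval, requests pile up faster than the server can answer them; the chained approach self-throttles, which is the behavior you almost always want for polling.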
describe('MainCtrl', function() {
  beforeEach(module('MyApp'));

  var timeout, $scope, ctrl;
  beforeEach(inject(function($rootScope, $controller, $timeout) {
    $scope = $rootScope.$new();
    timeout = $timeout;
    // Create the controller and trigger the constructor.
    // AngularJS will automatically figure out how to inject most of
    // the dependencies other than $scope
    ctrl = $controller('MainCtrl', {
      $scope: $scope
    });
  }));
});
If we had had these kinds of unit tests right from day 1, we could have saved multiple man-days tracking
down this bug. And writing these kinds of unit tests in AngularJS, once you have the harness in place, is
extremely straightforward. AngularJS even gives you inbuilt mocks for XHR requests, timers and the like.
Other than catching these one-off bugs, why write these tests? You should write your unit tests so that:

They do the job of your compiler: any typo or syntax error is caught immediately rather than waiting for
your browser to tell you.

They act as a specification for your code: the tests define what the expected behavior is, what the side
effects should be, what requests should be made, etc.

They prevent regressions and bugs: they stop some other developer from unknowingly changing expected
behavior and side effects.
AngularJS's dependency injection system allows you to manipulate the tests exactly how you want, and to
get the code into the state you care about before triggering any functions you want.
YearOfMoo has a great article (http://www.yearofmoo.com/2013/01/full-spectrum-testing-with-angularjs-and-karma.html)
on the other kinds of testing you can do with AngularJS, including End to End scenario
tests, where you open up the browser and reliably test behavior without the flakiness that is inherent in End to
End tests. But at the end of the day, just ingrain the habit of writing your unit tests early and often. You will
thank yourself for it later.
Instead, what seems to work better is to organize your files inside the JS folder by module or functionality.
What do I mean by that?
Consider a simple Client Server application, with some 3rd party components like jQuery UI wrapped as
directives. The traditional recommended structure would have been something like Figure 3 below.
What instead works better, and is more extensible and reusable is grouping by functionality. In this case, let
us first create a module for jQueryUI directives
angular.module('MyApp.directives.jqui', []);
This would have both the Datepicker and Accordion directives. If I now wanted to reuse these directives in
another project, I could just pluck these files along with the module, add a dependency on
MyApp.directives.jqui, and start working away.
Similarly, if tomorrow I decide to switch from jQuery UI to, say, Twitter Bootstrap, I just change my
dependency to MyApp.directives.bootstrap, and as long as I name the directives the same way and keep their
APIs consistent, I can seamlessly switch between dependencies. That is the power of Directives and Modules.
Similarly, the entire XHR service layer could be included in one module, say MyApp.services.xhr. This gives
us the flexibility of reusing the same service layer across multiple apps, or say between a mobile version of
the app and the desktop version. Each functional component (Search, Checkout) could each be a separate
module (MyApp.services.search, MyApp.services.checkout), which allows you to plug and play different
modules for various apps. This sort of structure really pays big dividends in a large company, where
code reusability, maintainability and division of responsibility are needed. Your final app structure might look
something like Figure 4 below.
Figure 4. A more modular AngularJS app structure
A much more nested structure, but easier to manage, maintain and modify. Your final App module would just
pull in all its needed dependencies:
angular.module('MyApp', ['MyApp.directives.jqui',
                         'MyApp.services.xhr',
                         'MyApp.services.search',
                         'MyApp.services.checkout',
                         'MyApp.controllers.search',
                         'MyApp.controllers.checkout']);
In Summary
We covered a whole bunch of slightly unrelated stuff in the span of a few pages. But internalizing these short
tidbits of information goes a long way towards having a smooth, productive AngularJS experience. Try and
let AngularJS do the heavy-lifting, minimize your work, and remember, the aim in AngularJS is to write the
least amount of code to do the most work, while having fun!
Shyam Seshadri was present at the birth of AngularJS, and has co-authored a book on it for O'Reilly.
An ex-Googler, he now splits his time between consulting on exciting web and mobile
projects and developing his own applications as part of Fundoo Solutions (http://www.befundoo.com).
What To Expect
You should have some experience with JavaScript, HTML, and Object Oriented Design. Familiarity with
Design Patterns (Dependency Injection in particular) would also be useful and knowledge of the basics
of AngularJS is highly recommended. If you are looking to build a library of reusable components that
will then be composed into a single page app or just want to learn more about AngularJS, then please
keep reading. The core concepts covered will be: using directives and controllers to compose modular
components; isolate scope and its relevance to reusability; using dynamic dependency injection and services
to share data between scopes.
In the Beginning
We will be building a set of two tables which will shuttle data from the first table to the second one. Table
one will be populated with items from a mock endpoint and when a user clicks an item in the first table, it
will be displayed in the second.
AngularJS provides a powerful feature to extend native DOM functionality. At its core, a directive is a
function that executes when the AngularJS HTML compiler reaches it in the DOM. They can be passed
controllers to provide logic to drive specific features, and even templates to set innerHTML. It is important
to note that, by default, directives do not create a new scope; they share scope with their parent object.
However, this can be overridden by creating an isolate scope using the syntax scope: {}, which creates
a brand new scope object that does not inherit prototypically from its parent. As such, this is useful for
encapsulating functionality into a DOM element that does not depend on any of its parent scopes. The
simplest way to pass a string into an isolate scope is to use the @ operator, which will bind a scope variable
to a string passed in as an attribute on the DOM node. For a sample, see Listings 1a through 1c, which
demonstrate the above by way of creating a wrapper directive to house all further components.
Listing 1a. index.html
<body ng-app="myApp">
  <div two-grid-shuttle source-service="mockService"></div>
</body>
Listing 1b. twoGridShuttle.js directive
angular.module('myApp').directive('twoGridShuttle', function () {
  return {
    scope: { sourceService: '@' },
    controller: 'twoGridShuttleController',
    templateUrl: 'two_grid_shuttle.html'
  };
});
to the value in the directive's isolate scope will propagate to the parent scope it was bound from, and vice
versa. We also access our two methods in the parent scope by passing a function reference into the isolate
scope by way of the & operator. Lastly, the directive makes use of a link function that will bind the scope to
the DOM after it runs.

For a sample, see Listings 3a and 3b:
Listing 3a. shuttleGrid.js directive
angular.module('myApp').directive('shuttleGrid', function () {
  return {
    scope: { sourceModel: '@', sourceService: '=', clickFunction: '&' },
    templateUrl: 'shuttle_grid.html'
  };
});
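The two-way ('=') propagation described above is ultimately driven by dirty checking: the framework records the last seen value of each binding and fires a listener when it changes. The following is a deliberately tiny sketch of that idea for illustration — a simplification, not AngularJS's actual $digest implementation:

```javascript
// Toy dirty-checking loop behind an '=' style binding.
var initWatchVal = {}; // unique sentinel so the listener fires at least once

function Scope() {
  this.watchers = [];
}
Scope.prototype.$watch = function (getter, listener) {
  this.watchers.push({ getter: getter, listener: listener, last: initWatchVal });
};
Scope.prototype.$digest = function () {
  this.watchers.forEach(function (w) {
    var current = w.getter();
    if (current !== w.last) {
      w.listener(current, w.last); // value changed: notify
      w.last = current;
    }
  });
};

// Mirror a parent-scope object into a directive, like scope: { photoMeta: '=' }
var parentScope = { photoObj: { url: 'a.jpg' } };
var directiveScope = {};
var scope = new Scope();
scope.$watch(
  function () { return parentScope.photoObj; },
  function (value) { directiveScope.photoMeta = value; }
);

scope.$digest();
parentScope.photoObj = { url: 'b.jpg' };
scope.$digest();
// directiveScope.photoMeta now points at the new object, with no
// extra code in the directive itself
```

This is also why scope.$apply() matters when a change happens outside the framework (as in the datepicker directive earlier): it is what triggers the digest pass that notices the new value.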
Communicating Is Hard
The final piece of the puzzle is to cement our components with a service. It is useful to note that services
are singletons and are therefore a self-evident vehicle for sharing values between different scopes. While they
can be used to query a backend for JSON or to access an AngularJS resource, both of these applications
are outside the scope of this article. Instead we will use our service to define a mock data structure. When
implementing a component based on this template, it is imperative to have an architectural discussion about
the design of the specific data held by the service (and by extension returned by any server side APIs), as all
components will have to make assumptions about this. In our case, we will posit the existence of a models
object populated with data. For an example see Listing 4:
Listing 4. mockService.js service:
angular.module('myApp').service('mockService', function () {
  var service;
  service = { models: {} };
  service.models.sourceItems = ['foo', 'bar', 'bazz'];
  service.models.selectedItems = [];
  return service;
});
On the web
A fully working implementation of the above code can be found here: http://bit.ly/1fddMGY.
such a way as to easily allow bugs to be tracked down at the component level. Finally, it is worth noting that
JavaScript minification renames function parameters, and care needs to be taken to either use the square
bracket Dependency Injection syntax as demonstrated in our example, or apply $inject to avoid mysterious
production only bugs.
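The reason the bracket syntax survives minification is that the dependency names live in strings, which minifiers leave alone, while bare parameter names get renamed. This toy invoker (an illustration, not Angular's injector; the $http stub here is invented) shows the mechanism:

```javascript
// Toy version of annotated injection: ['$http', function (h) { ... }]
// The strings name the dependencies; the last element is the function.
var services = {
  $http: { get: function (url) { return 'GET ' + url; } } // stub service
};

function invoke(annotated) {
  var deps = annotated.slice(0, -1);          // the string names
  var fn = annotated[annotated.length - 1];   // the function itself
  return fn.apply(null, deps.map(function (name) { return services[name]; }));
}

// Even if a minifier renames the parameter ($http -> a), the string
// '$http' still tells the injector what to pass in.
var result = invoke(['$http', function (a) {
  return a.get('/my/url');
}]);
// result is 'GET /my/url'
```

Without the annotation, an injector that inferred dependencies from parameter names would see the minified name a and have no idea what to supply — hence the mysterious production-only failures.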
Abraham Polishchuk graduated with a B.S. in Computer Science from the University of Edinburgh.
Previously he was a Chef guy and a Test Automation Engineer. He enjoys travel, rock music, martial
arts, and programming with new technologies. His latest hobby is Haskell and Yesod. Feel free to contact
Abraham at apolishc@gmail.com.
Elliot Shiu is a DevBootcamp graduate who in a previous life was a Network Engineer. You can find him
mentoring aspiring programmers or looking for fresh powder at Mammoth Mountain. He is passionate
about using elegant technologies to solve practical problems. Drop him a line at elliot@sandbochs.com
or read his blog at http://www.sandbochs.com.
Currently, they are colleagues at goBalto Inc., working on the full stack: AngularJS, Ruby on
Rails, and PostgreSQL.
Understanding the WebDriver APIs will provide the solution to the tasks mentioned above.
Features of WebDriver
Launch a browser using the respective driver. WebDriver supports browsers like Firefox, IE, Safari and
Chrome. For Firefox, it has native built-in support; for other browsers, WebDriver needs to know the
path of the browser's driver executable.
Listing 1. Code sample
// For the Firefox browser
WebDriver driver = new FirefoxDriver();
For IE, download the Internet Explorer driver executable and point WebDriver at its path.
Launch the AUT (Application Under Test). The get method of the driver object takes a valid
URL as a parameter and opens it in the current browser window.
driver.get("<url>");
WebDriver provides a lot of locator strategies; the By class has a list of static methods for locating web
elements:
By.className
By.cssSelector
By.id
By.linkText
By.name
By.partialLinkText
By.tagName
By.xpath
Passing one of these to driver.findElement returns a WebElement object.
Listing 2. Code sample
Handling complicated elements: when a web application uses more than normal HTML tags, there may
be complicated elements such as dropdowns, iframes and tables.
Handle dropdowns: the WebDriver API provides the Select class to handle a dropdown list and its options;
a WebElement can be converted into a Select object to fetch the options.
Listing 3. Code sample
HTML code:
<select id="city">
  <option value="Op1">Chennai</option>
  <option value="Op2">Hyderabad</option>
  <option value="Op3">Bangalore</option>
</select>
WebElement selectElement = driver.findElement(By.id("city"));
Select selectObject = new Select(selectElement);
Handle iframes: an inline frame is used to embed another document within the current HTML document.
The iframe is effectively a webpage within the webpage, and every iframe on the page has its own DOM.
Listing 4. Code sample
<iframe name="frame1" id="Frame1">
</iframe>
To access DOM elements inside it, the driver's control needs to switch to this frame:
driver.switchTo().frame("frame1");
Switching to a frame can be done in different ways:
frame(index)
frame(name or id of the frame)
frame(WebElement frameElement)
driver.switchTo().defaultContent();
// This returns the driver's control to the main document
WebDriver interactions with WebElements. You need different types of interactions with different types of
WebElements, like a textbox, button, link, checkbox or dropdown.
webElement.sendKeys("text");         // type text into a textbox
webElement.click();                  // click a button, link or checkbox
webElement.clear();                  // clear the contents of a textbox
webElement.submit();                 // submit the enclosing form
selectObject.getOptions();           // get all options of a dropdown
selectObject.selectByValue(value);   // select the option with this value
selectObject.deselectByValue(value); // deselect the option with this value
Verifying WebElement state: a WebElement must be visible for the driver to interact with it, and it must be
enabled to be clicked or typed into. To query these states:
Listing 5. Code sample
WebElement element = driver.findElement(By.cssSelector("#Name")); // the username text box
element.isDisplayed();
element.isEnabled();
Identifying attributes and properties of WebElements. For a chosen element, we can verify other properties
in the DOM by providing the attribute name:
userName.getAttribute("name"); // "name" is the name of the attribute
Navigating between browser windows. In a web application, a piece of functionality can open in a new
window, or navigate to the next page; the driver object provides facilities to navigate back and forth
and to switch to a new window.
// To open a URL
driver.navigate().to("https://www.google.co.in/");
//Refresh the Current Page
driver.navigate().refresh();
//move back from current window
driver.navigate().back();
//step forward from current window
driver.navigate().forward();
Web pages may load slowly for many reasons: network speed, many Ajax calls, many images, and so on.
Until an element is loaded, WebDriver cannot interact with it. The WebDriver API has wait commands
built in.
// Implicit wait: the driver waits up to this long for each element
// to appear before throwing an error
driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
// Wait for the page to load completely before throwing an error
driver.manage().timeouts().pageLoadTimeout(30, TimeUnit.SECONDS);
// Explicit wait: tell the driver to wait until the element is visible
new WebDriverWait(driver, 60)
  .until(new ExpectedCondition<Boolean>() {
    @Override
    public Boolean apply(WebDriver d) {
      return theElement.isDisplayed();
    }
  });
Action builder. If you want to perform complicated actions like a double click, or drag an element from one
place to another, you need to use the Actions class:
Actions action = new Actions(ffDriver);
// To double-click
action.doubleClick(element);
action.build().perform();
// To drag from one position to another
Actions dragAction = new Actions(ffDriver);
dragAction.dragAndDrop(dragThis, dropHere);
dragAction.build().perform();
Now, a user can extend the framework with TestNG for a good reporting structure as well as for test
execution control, such as configuring which test cases run through test groups and data-driving test cases
with a DataProvider. Build and integration tools like Maven or Ant make the test framework easy to
maintain. The combination of WebDriver, TestNG and Maven supports an effective, easily maintained test
framework in a hybrid way.
Happy Testing
Veena Devi, 32, has a strong background in software development, testing and test automation spanning
over 9 years. She is a trainer and consultant for web application automation testing, and a testing consultant
for TinyNews, a startup company. She is part of Quality Learning, a place for all software testing training.
What are the different locators that can be used to identify an element?
How do you get a handle on the element you need in order to write your Selenium test?
What should your strategy be for choosing a locator when there are many options?
What you should know
className(String className) helps find elements based on the value of the class attribute.
A class attribute may have more than one value, and in that case either value can be used. Refer to the pic
below to see how the class attribute is specified in HTML, and its usage.
id(String id) helps find elements based on the value of the id attribute. Refer to the pic below to see how
the id attribute (with value origin_autocomplete) is specified in HTML, and its usage.
linkText(String linkText) helps to find elements based on the value of the link text. Generally this is used
when you don't find an id or className. Refer to the screenshot below for its usage.
driver.findElement(By.linkText("My Trips")).click();
name(String name) helps to find elements based on the value of the name attribute.
If you refer to Picture 3 above, you will notice that one of the attributes of the input field is name, with
the value origin.
driver.findElement(By.name("origin")).sendKeys("Bangalore");
partialLinkText(String linkText) helps to find elements whose link text contains the
given link text. In Picture 5 below, there is a link on the website with the text Tell us what you think; we can very
well use partialLinkText for such kinds of links. The implementation is shown below.
xpath(String xPathExpression) helps to find elements based on an XPath expression. XPath stands for XML
Path Language and basically provides a way of traversing to the element through the hierarchical structure
of an XML document. There are a couple of browser add-ons that can be used to get the XPath of an
element, some of them being:
Firebug (https://addons.mozilla.org/en/firefox/addon/firebug)
XPather (https://addons.mozilla.org/en-US/firefox/addon/xpather)
If you use any of the above tools to find the XPath of the element highlighted in Picture 3, you will find it
is as below:
XPath = //*[@id='origin_autocomplete']
cssSelector(String selector) helps to find elements based on the CSS patterns specified. We will not
divert ourselves into the details of what CSS is, how to construct a cssSelector, etc.
However, we will tell you an easy way to figure out the selector. If you are using Firefox as the browser,
install the Firebug and FirePath add-ons.
Once these add-ons are installed, select the element you want to use and right-click on it to select Inspect
element with Firebug. On the highlighted element in the HTML tree in the Firebug window, right-click to
select Copy CSS Path.
Once you get the CSS path, the above test can be expressed using cssSelector.
driver.findElement(By.cssSelector("input#origin_autocomplete.autocomplete"))
  .sendKeys("Bangalore");
To summarize what we discussed just now, there are different ways to identify an element. And each
identifier has its own pros and cons.
id or className are the simplest and easiest locators to use. An advantage with them is that they increase
the readability of your test code. They are also better than other locators in terms of test performance.
However, if you are using a lot of ids, your test code can become clumsy. One suggestion here would be
to keep them in a separate file, where you can give them more meaningful names if they are not properly
named in the page source (for example, the Google search textbox on the home page has the value q for its
id attribute).
linkText or partialLinkText is mostly used with links and is limited to them. They are easy to use. However,
they are a little problematic to maintain because link texts change often.
XPath is simple to use but makes your test code look ugly. XPath should generally be used when the object
has neither an id nor a className. When we run a test that uses XPath, the browser runs its XPath processor
to check whether it can find any matching object, which impacts test performance.
One important thing we tend to forget while using XPath is to account for the order of elements, so it
should ideally be used to locate an object relative to some other object.
Nishant is a Computer Science Engineer by education and has close to 8 years of experience in the Test
Automation and Management domain, spanning different companies and multiple projects. He
has worked extensively with test automation tools like Selenium, WatiN, QTP and LoadRunner, and
is currently working as a Lead QA Consultant with ThoughtWorks Technologies. He maintains his own
website, www.nishantverma.com, and actively writes articles on testing techniques, test automation, agile
testing and tool comparison. His hobbies are writing blogs, listening to music and reading books.