Ansible
Easy guide for beginners
By Nathan Hull
Copyright © 2016 by Nathan Hull
All Rights Reserved
Table of Contents
Introduction
Chapter 1- A Brief Overview of Ansible
Chapter 2- Installation
Chapter 3- Running Commands
Chapter 4- Handlers
Chapter 5- Roles
Chapter 6- Templates
Chapter 7- Tasks
Chapter 8- Templates
Chapter 9- Developer Information
Conclusion
Disclaimer
While all attempts have been made to verify the information provided in this book,
the author does not assume any responsibility for errors, omissions, or contrary
interpretations of the subject matter contained within. The information provided in
this book is for educational and entertainment purposes only. The reader is responsible for
his or her own actions, and the author does not accept responsibility for any liabilities
or damages, real or perceived, resulting from the use of this information.
The trademarks used in this book are used without consent, and their publication is
without permission or backing by the trademark owners. All trademarks and brands
within this book are used for clarification purposes only and belong to their
respective owners, who are not affiliated with this document.
Introduction
Ansible is a very useful IT automation tool. It makes many tasks easy, even for
beginners, which is why it is worth learning. In this guide, the tool is discussed in
detail. Enjoy reading!
Chapter 2- Installation
Of course, the first step should be the installation of Ansible. This will make it possible
for us to run tasks from the machine on which Ansible is installed. There is usually
a central server tasked with the responsibility of running Ansible commands, but
there is nothing special about the server on which Ansible is installed. Note that
Ansible is agentless, and it can be run from any machine, even a laptop.
For Ubuntu 14.04 users, Ansible can be installed from the easy-to-remember repository
ppa:ansible/ansible. The following commands can be used:
sudo apt-add-repository -y ppa:ansible/ansible
sudo apt-get update
sudo apt-get install -y ansible
Management of servers
Ansible uses an inventory file to define the servers which it will manage. Once the
installation has been completed, this file can be found at /etc/ansible/hosts.
It is recommended that you move the default file aside so that you can reference it
later. This is shown in the command given below:
sudo mv /etc/ansible/hosts /etc/ansible/hosts.orig
After that, one can create their own inventory file right from scratch. Once the example
inventory file has been moved, one can create a new file /etc/ansible/hosts, and then
perform a definition of the servers which are to be managed. In our case, we want to
define two servers under the label web:
[web]
192.168.22.11
192.168.22.12
That is enough at this point. If we need to, we can also define multiple groups, ranges
of hosts, and reusable variables.
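For example, a sketch of an inventory with an extra group, a host range, and a group variable (the db host and the http_port variable are hypothetical additions):

```ini
[web]
# a range: expands to 192.168.22.11 and 192.168.22.12
192.168.22.[11:12]

[db]
db1.example.com

[web:vars]
http_port=80
```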
To test that Ansible can connect to the servers we have defined, we can run an ad-hoc
ping. When testing locally from the vagrant machine, the following command can be used:
ansible all -m ping -s -k -u vagrant
The parts of this command are as follows:
all - runs against all of the servers which have been defined in the inventory file.
-m ping - uses the ping module, which simply checks connectivity and returns
the result.
-s - runs the commands using sudo.
-k - asks for the SSH password rather than using key-based authentication.
-u vagrant - logs into the servers as the user vagrant.
Modules
Ansible makes use of modules for the purpose of accomplishing a number of its tasks.
These modules can be used for the accomplishment of tasks such as installation of
software, copying files, using templates, and for some more other tasks.
Modules define the way in which we use Ansible, as they make use of the available
context so as to determine the type of actions which need to be done so as to accomplish
our tasks.
If modules were not available, then we would have to run arbitrary shell commands,
as shown below:
ansible all -s -m shell -a 'apt-get install nginx'
In this case, the command apt-get install nginx will be executed with sudo by use of the
shell module. The flag -a is used for passing arguments to a module.
However, this is not very powerful. Although it is convenient to be able to run a command
on all of our servers, we have only accomplished what could be done with any bash
script.
If we use a more appropriate module instead, the command can be executed with an
assurance of the result. Modules in Ansible give us idempotence, meaning that the
same command can be run over and over again and the result will not be affected.
When we are installing software on Ubuntu/Debian servers, the module apt will run the
same command, but it will ensure idempotence. This is shown in the code given below:
ansible all -s -m apt -a 'pkg=nginx state=installed update_cache=true'
127.0.0.1 | success >> {
changed: false
}
With the above command, the apt module will update the repository cache and then
install Nginx if it has not already been installed. The result of the task was
changed: false. This is an indication that there was no change, as Nginx had already
been installed. The command can be executed over and over again without worrying
that the result will change in any way. The following are the various parts of the
command:
all will run on all the defined hosts from our inventory file.
-s will run using sudo.
-m apt will make use of the apt module.
-a pkg=nginx state=installed update_cache=true- this is where the arguments of the
apt module are provided. These include the state of the package, the end result
which we desire, and whether there is a need to update the repository cache or not
to do it.
Playbook
With playbooks, one can run multiple tasks and then perform some functionality which is
more advanced which one will not be in a position to perform by use of ad-hoc
commands. Let us demonstrate how the above task can be moved into a playbook. Create
the file nginx.yml with the following code:
- hosts: local
  tasks:
    - name: Install Nginx
      apt: pkg=nginx state=installed update_cache=true
This playbook does the same thing as the ad-hoc command did; however, I specify a
local group of servers rather than using all of the available servers. The
ansible-playbook command is used to run it, as shown below:
shown below:
$ ansible-playbook -s nginx.yml
PLAY [local]
******************************************************************
GATHERING FACTS
***************************************************************
ok: [127.0.0.1]
TASK: [Install Nginx]
*********************************************************
ok: [127.0.0.1]
PLAY RECAP
********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
The -s option tells Ansible to use sudo; after it, you pass your playbook file.
When this runs, you get very useful feedback, including the tasks which Ansible runs
and their results. In our case, every task came back OK, but nothing was changed:
Nginx had already been installed on the system.
To run the playbook locally within the vagrant installation, the command
ansible-playbook -s -k -u vagrant nginx.yml can be used.
Chapter 4- Handlers
A handler is much the same as a task, but it only runs once it has been called
(notified) by another task. It can be thought of as part of an event system; it acts
when an event it has been listening for is triggered.
Handlers are useful for secondary actions which are required after a task has been
executed, such as starting a new service after an installation, or reloading a service
after a configuration change has happened. This is shown in the code given below:
- hosts: local
  tasks:
    - name: Install Nginx
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - Start Nginx
  handlers:
    - name: Start Nginx
      service: name=nginx state=started
The notify directive has been added to the installation task. It will notify any handler
with the name Start Nginx once the task has run; the handler named Start Nginx is
then triggered.
In this handler, we have used the service module, which can be used to start, stop,
restart, and reload system services. In our case, we have simply told it to start the
Nginx service.
Let us run the playbook again:
$ ansible-playbook -s nginx.yml
PLAY [local]
******************************************************************
GATHERING FACTS
***************************************************************
ok: [127.0.0.1]
TASK: [Install Nginx]
*********************************************************
ok: [127.0.0.1]
NOTIFIED: [nginx | Start Nginx]
***********************************************
ok: [127.0.0.1]
PLAY RECAP
********************************************************************
127.0.0.1 : ok=2 changed=0 unreachable=0 failed=0
With the above, we obtain similar output, but this time the handler has been run.
Note that notifiers are only called when a task reports a change. Since Nginx had
already been installed, the installation task reported no change, and so the notifier
was not called.
Note that playbooks can be used for running multiple tasks.
More Tasks
Consider the playbook given below. A few more tasks have been added to it, and some
other functionality is explored. Here is the playbook:
- hosts: local
  vars:
    - docroot: /var/www/myservers.com/public
  tasks:
    - name: Add Nginx Repository
      apt_repository: repo=ppa:nginx/stable state=present
      register: ppastable
    - name: Install Nginx
      apt: pkg=nginx state=installed update_cache=true
      when: ppastable|success
      register: nginxinstalled
      notify:
        - Start Nginx
    - name: Create Web Root
      when: nginxinstalled|success
      file: dest={{ docroot }} mode=775 state=directory owner=www-data group=www-data
      notify:
        - Reload Nginx
  handlers:
    - name: Start Nginx
      service: name=nginx state=started
    - name: Reload Nginx
      service: name=nginx state=reloaded
We have the tasks given below:
Add Nginx Repository adds the Nginx stable PPA, so that we get the latest stable
version of Nginx, by use of the apt_repository module.
Install Nginx installs Nginx by use of the apt module.
Create Web Root creates a web root directory.
We have also used two new directives, namely register and when. Together they tell
Ansible to run a task only when something else has happened: register saves the result
of a task into a variable, and when makes a task conditional on that result.
We can use the usual command so as to run the playbook. This is shown below:
ansible-playbook -s nginx.yml
# Or, just as I did on my Vagrant machine
ansible-playbook -s -k -u vagrant nginx.yml
Next, we will explore Ansible further, along with other functionality associated with
it.
Chapter 5- Roles
Roles are very good when we need to organize multiple related tasks and to
encapsulate the data which is necessary for the accomplishment of those tasks. An
example of this is the installation of Nginx, which may involve adding a package
repository, installing the package, and setting up any necessary configuration.
The configuration will usually need extra data such as variables, dynamic templates,
files, and others.
The directory structure for roles is as shown below:
rolename
- files
- handlers
- meta
- templates
- tasks
- vars
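The skeleton can be created by hand, as in the sketch below (the ansible-galaxy init command generates a similar layout automatically):

```shell
# Create the standard directory skeleton for a role named "nginx"
mkdir -p nginx/files nginx/handlers nginx/meta nginx/templates nginx/tasks nginx/vars
```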
In each of these directories, Ansible will search for and read any Yaml file with the name main.yml.
Files
First, within the files directory, we can add the files which need to be copied to our
servers. For Nginx, this can be done as shown below:
nginx
- files
- - h5bp
The above configuration files can then be added to the server by use of the copy module.
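A sketch of such a copy task (the destination path under /etc/nginx is an assumption):

```yaml
- name: Add H5BP Configuration
  # src is resolved relative to the role's files directory
  copy: src=h5bp dest=/etc/nginx/h5bp owner=root group=root
```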
Handlers
Inside the handlers directory we can put all of the handlers which were previously
contained in the playbook nginx.yml, in the file handlers/main.yml.
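Based on the handlers used earlier in the playbook, handlers/main.yml would look like the following:

```yaml
- name: Start Nginx
  service: name=nginx state=started

- name: Reload Nginx
  service: name=nginx state=reloaded
```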
Meta
The main.yml file of the meta directory contains the role's meta data, including its
dependencies. If this role depended on another role, that could be defined here. For
example, in my case, the Nginx role depends on an SSL role, which is tasked with
installing SSL certificates. This is shown in the code given below:
dependencies:
- { role: ssl }
If I call the Nginx role, it will first attempt to run the SSL role. The file can also be
omitted, or the role can be defined without any dependencies. This is shown below:
dependencies: []
Here we have not defined any dependencies; the brackets have been left empty, which
means that our role has no dependencies.
Chapter 6- Templates
Template files can contain template variables, which are interpreted by the Jinja2
template engine of Python. These files should end with a .j2 extension, although any
name can be used. Unlike the other directories, Ansible will not look for a main.yml
file in the templates directory.
The example given below shows a virtual host configuration for Nginx. The file uses
variables which will be defined later in the file vars/main.yml. The virtual host file for
Nginx can be found at templates/serversforhackers.com.conf.j2. This is shown below:
server {
    # Enforce the use of HTTPS
    listen 80 default_server;
    server_name *.{{ domain }};
    return 301 https://{{ domain }}$request_uri;
}
server {
    listen 443 default_server ssl;
    root /var/www/{{ domain }}/public;
    location ~ ^/(fpmstatus|fpmping)$ {
        access_log off;
        allow 127.0.0.1;
        deny all;
        include fastcgi_params; # fastcgi.conf for version 1.6.1+
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
The above Nginx configuration is fairly standard for a PHP app. We have used three
variables in this case:
domain
ssl_crt
ssl_key
These variables will be defined in the section for variables.
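For example, the ssl_crt and ssl_key variables would appear in the HTTPS server block roughly as follows (a sketch; the exact placement is an assumption):

```nginx
ssl_certificate {{ ssl_crt }};
ssl_certificate_key {{ ssl_key }};
```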
Variables
Before exploring tasks, it is good to look at variables. The vars directory contains the
file main.yml, which holds the list of variables we are going to use. This gives us a
convenient place in which to change any role-wide settings. The file vars/main.yml
should be as shown below:
domain: serversforhackers.com
ssl_key: /etc/ssl/sfh/sfh.key
ssl_crt: /etc/ssl/sfh/sfh.crt
These are the three variables which can be used elsewhere in the role. We saw them
being used in the template above, and they will also be used in the tasks defined next.
Chapter 7- Tasks
All of the above can be put together to get a series of tasks in the file tasks/main.yml.
The listing is long; its final task ends by notifying the Reload Nginx handler:
  notify:
    - Reload Nginx
The file defines a complete installation of Nginx. The tasks, in the order in which they
are written, accomplish the following:
1. Add the nginx/stable repository.
2. Install and launch Nginx, and register a successful installation so that the
remaining tasks can be triggered.
3. Add the configuration for H5BP.
4. Disable the default virtual host by removing the symlink to the default file from
the sites-enabled directory.
5. Copy the virtual host template serversforhackers.com.conf.j2 into the Nginx
configuration.
6. Enable the virtual host configuration by symlinking it into the sites-enabled
directory.
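A sketch of what tasks/main.yml could look like, following the six steps above (it mirrors the earlier playbook examples; details such as the H5BP destination path are assumptions):

```yaml
- name: Add Nginx Repository
  apt_repository: repo=ppa:nginx/stable state=present
  register: ppastable

- name: Install Nginx
  apt: pkg=nginx state=installed update_cache=true
  when: ppastable|success
  register: nginxinstalled
  notify:
    - Start Nginx

- name: Add H5BP Config
  when: nginxinstalled|success
  copy: src=h5bp dest=/etc/nginx/h5bp owner=root group=root

- name: Disable Default Virtual Host
  when: nginxinstalled|success
  file: dest=/etc/nginx/sites-enabled/default state=absent

- name: Add Virtual Host Config
  when: nginxinstalled|success
  template: src=serversforhackers.com.conf.j2 dest=/etc/nginx/sites-available/{{ domain }}.conf

- name: Enable Virtual Host Config
  when: nginxinstalled|success
  file: src=/etc/nginx/sites-available/{{ domain }}.conf dest=/etc/nginx/sites-enabled/{{ domain }}.conf state=link
  notify:
    - Reload Nginx
```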
Let us now see how to create a master yaml file which defines the roles to use and the
hosts on which to run them. The following should be the file server.yml:
- hosts: all
  roles:
    - nginx
The roles can then be run as shown below:
ansible-playbook -s server.yml
# Or as I usually do with the Vagrant VM:
ansible-playbook -s -k -u vagrant server.yml
When we run the Nginx role, we get the following output:
PLAY [all]
********************************************************************
GATHERING FACTS
***************************************************************
ok: [127.0.0.1]
NOTIFIED: [nginx | Start Nginx]
***********************************************
ok: [127.0.0.1]
NOTIFIED: [nginx | Reload Nginx]
**********************************************
changed: [127.0.0.1]
PLAY RECAP
********************************************************************
127.0.0.1 : ok=8 changed=7 unreachable=0 failed=0
In the above case, all of our components have been put together into a coherent role,
and Nginx has been installed and configured.
Facts
Before running any tasks, Ansible first gathers information about the system which is
being provisioned. This information is referred to as facts: a wide array of details
about the system, such as the available IPv4 and IPv6 networks, the number of CPU
cores, the Linux distribution, mounted disks, and much more. The facts for a host can
be inspected with the setup module, for example by running ansible all -m setup.
Facts are very useful when dealing with task or template configuration. An example
is Nginx, which is commonly set to use as many worker processes as there are CPU
cores. Knowing this, the template for the configuration file nginx.conf can be set up
as shown below:
user www-data www-data;
worker_processes {{ ansible_processor_cores }};
pid /var/run/nginx.pid;
# And the other configurations
For a server with multiple CPU sockets, the total core count can be computed as follows:
user www-data www-data;
worker_processes {{ ansible_processor_cores * ansible_processor_count }};
pid /var/run/nginx.pid;
# And the other configurations
In Ansible, all fact names start with ansible_, and facts can be used globally anywhere
variables can be used: in tasks, variable files, and even templates.
Consider the NodeJS example given below:
  file:
    path=/etc/apt/sources.list.d/repo-ubuntu-node_js-{{ ansible_distribution_release }}.list
    state=absent
  when: distrosupported|success

- name: Add Nodesource Keys
  apt_key:
    url=https://deb.nodesource.com/gpgkey/nodesource.gpg.key
    state=present

- name: Add Nodesource Apt Sources List Deb
  apt_repository:
    repo=deb https://deb.nodesource.com/node {{ ansible_distribution_release }} main
    state=present
  when: distrosupported|success

- name: Add Nodesource Apt Sources List Deb Src
  apt_repository:
    repo=deb-src https://deb.nodesource.com/node {{ ansible_distribution_release }} main
    state=present
  when: distrosupported|success
Vault
When we have sensitive data, we often need to keep it in Ansible files, templates, or
variable files, and it is hard to avoid this. Ansible Vault provides a solution to this.
With Vault, we are able to encrypt yaml variable files. It does not, however, encrypt
files and templates.
Whenever one creates an encrypted file, they are asked to provide a password, which
is needed to edit the file later, and when calling playbooks or roles that use it.
Consider the example given below, which shows how a new variable file can be created in
Ansible:
ansible-vault create vars/main.yml
Vault Password:
Once you have entered the password, the file will be opened in your default editor,
which in most cases is Vim.
The editor to use is defined by the environment variable EDITOR. This is usually
set to Vim, but if you don't like using it, you can change it and all will be okay. This
is shown below:
export EDITOR=nano
Example
I normally use Vault whenever I am creating new users. In a user role, one can set a
variable file with the passwords for users and a public key which will be added to
the authorized_keys file of each user. Public SSH keys are safe on their own and can
be viewed by anyone: a public key is useless without its paired private key, so it only
grants access to the holder of that private key. The private key, of course, has not
been put in our role.
Below is an example file which we can create and then encrypt using Vault. It is
shown here while still in plain text:
admin_password:
$6$lpQ1DqjZQ25gq9YW$mYMAmGhFpPVVv0JBCUFaDovu8u5JqvWi.Ih
deploy_password:
$6$edOqVumZrYW9$d5zj1Ok/T80DrncjixhjQDpXlffACDfNx2DHnC
common_public_key: ssh-rsa ALongSSHPublicKeyHere
Note that the passwords for the users have been hashed; you can research how to
generate these hashes in the Ansible documentation. The user module needs the
hashed value in order to set a user's password. The tasks should look as shown below:
# The whois package makes the mkpasswd
# command available on Ubuntu
    user=admin
    key={{ common_public_key }}
    state=present

- name: Create Deploy User
  user:
    name=deploy
    password={{ deploy_password }}
    groups=www-data
    append=yes
    shell=/bin/bash

- name: Add Deployer Authorized Key
  authorized_key:
    user=deploy
    key={{ common_public_key }}
    state=present
These tasks make use of the user module to create new users, passing in the
passwords which have been set in the variable file. They also use the module
authorized_key to add the SSH public key as an authorized SSH key for each user
on the server. Variables are used in the usual way within the task file. However, in
order for the role to be run, Ansible must be told to ask for the vault password. The
playbook is as follows:
- hosts: all
  sudo: yes
  roles:
    - user
For the playbook to be run, Ansible has to be told to ask for the vault password, since
we are running a role containing an encrypted file. This is shown below:
ansible-playbook --ask-vault-pass provision.yml
Chapter 8- Templates
We need to copy our virtual host configuration to the server, and a template can be
used to accomplish this. A plain file could have been used for that purpose, with the
DocumentRoot hard-coded; the copying process is the same for templates, but a
template has much more flexibility.
Ansible is written in Python, and the template engine it uses is the Python Jinja2
library. You do not have to worry, as the syntax is very similar to that of other
modern template languages.
To get started, create a new directory with the name templates at the root of your
project. Then create a file with the name virtual-hosts.conf.j2 in the new directory.
It is customary to name a template after the file it generates, keeping the file's
extension, and then append .j2 at the end.
At this point I can SSH into the provisioned vagrant box and grab the contents of the
already existing virtual host. The contents of the file can then be put into
templates/virtual-hosts.conf.j2. This is shown below:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
The line:
DocumentRoot /var/www/html
Should be replaced with:
DocumentRoot {{ document_root }}
This is how a variable is echoed in Jinja2. In our case, the variable is called
document_root.
Because of the way Apache works, we should add a new config section for the
directory /vagrant; failure to do this will give you 403 Forbidden errors. This is why
you should add the following code below the DocumentRoot:
<Directory {{ document_root }}>
Require all granted
</Directory>
All together, the code should be as shown below:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot {{ document_root }}
<Directory {{ document_root }}>
Require all granted
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
You can then save the file, open tasks/apache.yml, and add a new task for copying
the template to the server. This is shown below:
- name: Copy across new virtual host
template: src=virtual-hosts.conf.j2 dest=/etc/apache2/sites-available/vagrant.conf
The templates directory doesn't have to be included in the src value, since Ansible
will assume the template is located there; if it fails to find it there, it will look in the
root directory before giving up. The whole tasks/apache.yml file at this point should
be as follows:
- name: install apache
  apt: name=apache2 state=present
- name: Copy across new virtual host
  template: src=virtual-hosts.conf.j2 dest=/etc/apache2/sites-available/vagrant.conf
Splitting the tasks into their own file makes them easier to read, and the behavior is
not affected in any way. To copy the template across, we also need to tell Ansible the
value to use for document_root when the template is processed. There are a number
of ways to do this, but the simplest is to add a vars key to the file playbook.yml, and
that is what we are going to do. Update the file playbook.yml to be as follows:
- hosts: all
  sudo: true
  vars:
    document_root: /vagrant
  tasks:
    - name: update apt cache
      apt: update_cache=yes
    - include: tasks/apache.yml
    - name: include mysql
      include: tasks/mysql.yml
    - include: tasks/php.yml
The vars: key can be used in any position, but it is recommended that it come
somewhere before tasks: in the playbook. We also want to remove the default virtual
host and enable the new vagrant one, so the whole tasks/apache.yml file becomes:
- name: install apache
  apt: name=apache2 state=present

- name: Copy across the new virtual host
  template:
    src=virtual-hosts.conf.j2
    dest=/etc/apache2/sites-available/vagrant.conf

- name: Remove default virtual host
  file:
    path=/etc/apache2/sites-enabled/000-default.conf
    state=absent

- name: Enable new vagrant virtual host
  file:
    src=/etc/apache2/sites-available/vagrant.conf
    dest=/etc/apache2/sites-enabled/vagrant.conf
    state=link
Go back to your terminal and run the command vagrant provision; the normal
Ansible output will be shown, including the new tasks. Once done, SSH into the
vagrant box and run ls /etc/apache2/sites-enabled. You should observe that
000-default.conf is now missing, and that vagrant.conf is in its place.
Finally, Apache needs to be reloaded so that the new virtual host takes effect. To do
this, we can make use of the service module and add another task. This is shown
below:
- name: reload apache
service: name=apache2 state=reloaded
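A sketch of a refinement using the handler pattern from Chapter 4, so that Apache is only reloaded when the virtual host file actually changes (this assumes the reload task is moved into a handlers section):

```yaml
- name: Copy across new virtual host
  template:
    src=virtual-hosts.conf.j2
    dest=/etc/apache2/sites-available/vagrant.conf
  notify:
    - reload apache
```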
You can then run the command vagrant provision again. Next, add a new file to the
root of your project, give it the name info.php, and add the single line <?php
phpinfo(); to it. Launch your browser and open the URL
http://localhost:8080/info.php. If you observe the familiar phpinfo() output, then
you know that everything is running well.
Python API
Note that although this API is accessible, it is not intended to be used directly; it is
made available to support the command line tools for Ansible.
Ansible 2.0
In the 2.0 version of the Python API, things get a bit more involved, but in the end
we have classes which are more discrete and easier to read. This is shown in the code
given below:
#!/usr/bin/python2
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager

# options that would normally come from the command line
Options = namedtuple('Options', ['connection', 'module_path', 'forks', 'become',
                                 'become_method', 'become_user', 'check'])
options = Options(connection='local', module_path='', forks=100, become=None,
                  become_method=None, become_user=None, check=False)
passwords = dict()

# set up the objects needed to run a play
loader = DataLoader()
variable_manager = VariableManager()
inventory = Inventory(loader=loader, variable_manager=variable_manager,
                      host_list='localhost')
variable_manager.set_inventory(inventory)

# define the play as a data structure
play_source = dict(
    name="Ansible Play",
    hosts='localhost',
    gather_facts='no',
    tasks=[
        dict(action=dict(module='shell', args='ls'), register='shell_out')
    ]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)

# actually run it
tqm = None
try:
    tqm = TaskQueueManager(
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        options=options,
        passwords=passwords,
        stdout_callback='default',
    )
    result = tqm.run(play)
finally:
    if tqm is not None:
        tqm.cleanup()
{
    "contacted": {
        "web2.sample.com": 1
    }
}
Note that a module can return any kind of JSON data it wants, meaning that we
can use Ansible for the development of very useful and powerful applications.
Here is a more detailed example using the older 1.x API. In this example, we want to
print out uptime information for all of our hosts:
#!/usr/bin/python
import ansible.runner
import sys

# constructing the ansible runner and then executing it on all hosts
results = ansible.runner.Runner(
    pattern='*', forks=10,
    module_name='command', module_args='/usr/bin/uptime',
).run()

if results is None:
    print "No hosts found"
    sys.exit(1)
Script Conventions
When the external node script is called with the single argument --list, the
script should return a JSON dictionary/hash of all the groups which are to be managed.
The value for each group should be either a dictionary/hash containing a list of
hosts/IPs, potential group variables, and potential child groups, or simply a list of
hosts.
This is shown below:
{
    "databases": {
        "hosts": ["host1.sample.com", "host2.sample.com"],
        "vars": {
            "a": true
        }
    },
    "webservers": ["host2.sample.com", "host3.sample.com"],
    "atlanta": {
        "hosts": ["host1.sample.com", "host4.sample.com", "host5.example.com"],
        "vars": {
            "b": false
        },
        "children": ["marietta", "5points"]
    },
    "marietta": ["host6.sample.com"],
    "5points": ["host7.sample.com"]
}
When the script is called with --host <hostname> as the argument, it should
return a hash/dictionary of variables, or an empty JSON hash/dictionary, which is made
available to playbooks and templates. Returning variables is optional; if the script
does not need to return any, it can return the empty hash/dictionary. This is
shown below:
{
    "favcolor": "red",
    "ntpserver": "wolf.sample.com",
    "monitoring": "pack.sample.com"
}
To avoid having the script called once for every host, the variables for all hosts can be returned at once under a top-level _meta key. The data to be added to the top-level JSON dictionary should be as shown below:
{
    # results of the inventory script should be added here
    "_meta": {
        "hostvars": {
            "moocow.sample.com": {"asdf": 1234},
            "llama.sample.com": {"asdf": 5678}
        }
    }
}
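The conventions above can be sketched as a small Python inventory script (the group names, hosts, and variables here are hypothetical):

```python
#!/usr/bin/env python
# Minimal sketch of a dynamic inventory script implementing the
# --list and --host conventions described above.
import json
import sys

INVENTORY = {
    "webservers": {
        "hosts": ["host2.sample.com", "host3.sample.com"],
        "vars": {"a": True},
    },
    # _meta avoids one --host call per host
    "_meta": {
        "hostvars": {
            "host2.sample.com": {"ntpserver": "wolf.sample.com"},
            "host3.sample.com": {},
        }
    },
}

def inventory_json(argv):
    if len(argv) > 1 and argv[1] == '--list':
        return json.dumps(INVENTORY)
    if len(argv) > 2 and argv[1] == '--host':
        # variables for a single host, or an empty hash
        return json.dumps(INVENTORY["_meta"]["hostvars"].get(argv[2], {}))
    return json.dumps({})

if __name__ == '__main__':
    print(inventory_json(sys.argv))
```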
Developing Modules
Modules can be written in any programming language of your choice. Their path has
to be specified by the ANSIBLE_LIBRARY environment variable or by the command
line option --module-path. In our case, we will use Python. Create a file and add the
following code into it:
#!/usr/bin/python
import datetime
import json
date = str(datetime.datetime.now())
print json.dumps({
    "time": date
})
Save the file with the name timetest.py. We next want to develop this module further
so that it can also set the system time.
Testing Modules
Ansible's source checkout contains a very useful test script, which can be set up as
shown below:
git clone git://github.com/ansible/ansible.git --recursive
source ansible/hacking/env-setup
chmod +x ansible/hacking/test-module
The following command can be used for running the script which we have just written:
ansible/hacking/test-module -m ./timetest.py
After that, you should observe output which looks as shown below:
{"time": "2016-03-07 22:13:48.539183"}
Reading Input
We now need to modify our module so that it allows us to set the current time. The
module will check whether a key=value pair in the form time=<string> has been
passed to it.
Below is how the time would be set:
time time="March 15 22:10"
If we do not set any time parameter, the time will be left as it is, and the module will
simply return the current time. Here is the start of the code for the module:
#!/usr/bin/python

# import some python modules that we'll use. These should all
# be available in Python's core
import os
import datetime
import json
import shlex
import sys
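A minimal sketch of the rest of the module, assuming the key=value arguments arrive in a file whose path is passed as the first argument (this sketch only reports the requested change rather than actually calling date -s to set the clock):

```python
#!/usr/bin/python
import datetime
import json
import shlex
import sys

def run_module(argv):
    # the first argument is the path of a file with key=value pairs
    args = {}
    if len(argv) > 1:
        with open(argv[1]) as f:
            for item in shlex.split(f.read()):
                if '=' in item:
                    key, value = item.split('=', 1)
                    args[key] = value

    if 'time' in args:
        # a real module would set the system clock here
        return {'changed': True, 'time': args['time']}
    return {'changed': False, 'time': str(datetime.datetime.now())}

if __name__ == '__main__':
    print(json.dumps(run_module(sys.argv)))
```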
The module can then be run through the test script as shown below:
ansible/hacking/test-module -m ./timetest.py -a 'time="March 07 11:23"'
The command should then give us output similar to the one given below:
{"changed": true, "time": "2016-03-07 11:23:00.000307"}
Facts returned by a module are available to the statements called after that module in
the playbook. It is recommended that you build such a module, give it a name like
site_facts, and call it at the top of any playbook, although the selection of core facts
in Ansible can also be improved.
Conclusion
We have come to the conclusion of this guide. Ansible is an IT automation tool which
can be used for deploying software applications and configuring server machines.
The tool relies on SSH (Secure Shell) to establish connections to these servers, which
makes Ansible easy to use, even for beginners.
The tool is open-source and manages machines over SSH; PowerShell can also be used
for this purpose. Modules work via JSON and can be written in any programming
language, while YAML is used to express reusable descriptions of systems. Ansible
distinguishes two types of machines, namely nodes and the controlling machine. The
controlling machine describes the location of the nodes through its inventory. Ansible
uses an agentless architecture, in contrast to tools such as Puppet, Chef, and
CFEngine: the nodes do not need to run any daemon in the background for Ansible to
connect to them.