
CHAPTER-1

INTRODUCTION

Nature presents us with complex mechanisms and processes, and many scientific theories have been proposed to explain particular parts of it. Among the most successful is Darwin's theory of evolution. Its most important contribution was predicting the need for a biological means of passing information between generations. That prediction ultimately led to the discovery of the DNA molecule and, within half a century, to the mapping of the human genome as well as the genomes of other animals.
In the other direction, the ideas of evolution have given computer scientists new ways to program: the notion of genetic algorithms. Computer science is about finding solutions to problems, and that is exactly what nature does over time, adapting animal species via natural selection so that they survive better in their changing environments. The idea is to encode a problem into "digital DNA" and evolve a solution.

1.1 MOTIVATION

It was in the 1950s and 1960s that several independent researchers began studying the idea that evolution could be used as an optimization tool for engineering problems. The idea behind it all was to evolve solutions to problems by natural means, based on survival of the fittest. Evolution strategies were introduced in the mid-1960s by Rechenberg as a method for optimizing real-valued parameters of hardware devices. Owens, Fogel and Walsh developed evolutionary programming, a technique in which candidate solutions to problems or tasks were represented as finite-state machines, evolved by randomly mutating their state-transition diagrams and then selecting the fittest. Together with genetic algorithms, these three areas form the backbone of evolutionary computation.






Genetic algorithms were invented by Holland in the 1960s and developed further in the 1970s. The method was defined as a way to move from one population of chromosomes to another by utilizing natural selection and the operators of crossover, mutation, and inversion. In recent years there has been widespread interaction among researchers from various evolution-based fields, and as a result the boundaries that define and separate genetic algorithms, evolution strategies, and evolutionary programming have begun to blur. Often the term "genetic algorithm" is used to describe something very different from what was originally defined.
There are many tasks for which we know fast (polynomial-time) algorithms. There are also some problems that cannot be solved algorithmically at all, and for some problems it has been proved that they are not solvable in polynomial time.
But there are many important tasks for which it is very difficult to find a solution, yet once we have one, it is easy to check. This observation led to the class of NP-complete problems [1]. NP stands for nondeterministic polynomial: it is possible to "guess" a solution (by some nondeterministic algorithm) and then verify it, both in polynomial time. If we had a machine that could guess, we would be able to find a solution in reasonable time. For simplicity, the study of NP-complete problems is restricted to decision problems, where the answer is yes or no. Because there are tasks with more complicated outputs, a class of problems called NP-hard has been introduced; this class is not as limited as the class of NP-complete problems.

1.2 LITERATURE SURVEY

Several bodies of research deal directly with the placement of nodes in network design. From the early 1990s until a few years ago, a large body of research was devoted to the Base Station (BS) location problem for cellular phone networks. There the problem was to find the optimal location of base stations (transmitters) in order to cover subscribers satisfactorily. Although this problem differs in many respects from the sensor network planning problem (notably because in a WSN the sensors also need to communicate with each other, i.e. connectivity is required), it is insightful to review the methods used. These range from Dynamic Programming to Genetic Algorithms and Tabu Search. Virtually every type of optimization technique was tested on this problem, many dealing with multiple objectives (though often blended into a single objective function, except for work using Pareto optimality) while using non-trivial communication models that take the terrain into account.
The BS location problem is part of the larger area of Facility Location in Operations Research. Here a set of demand points must be covered by a set of facilities (which corresponds in WSN to covering an area with a set of sensors). The goal is to locate these facilities so as to optimize a certain objective (e.g. minimize the total distance from demand points to their closest facility). A classic example close to the WSN problem is the Maximal Covering Location Problem (MCLP), where as many demand points as possible must be covered with p sensors of fixed radius. It is also referred to as a location-allocation problem, since each demand point must be assigned to a certain sensor. Again, the main difference from WSN is that the nodes are not required to be connected. Another problem of interest is the Facility Location-Network Design problem, where facility positions must be determined (just as in MCLP) and the network connecting these facilities must also be optimized. In WSN design, however, it is impossible to decouple sensor placement and network design, since the location of the sensors determines the network topology.
The past three years have seen a rising interest in sensor network planning, focusing mostly on optimizing the location of the sensors in order to maximize their collective coverage (a problem almost identical to the BS location problem). Several techniques have been used, but the research on BS location is never mentioned. Chakrabarty used Integer Programming, while Bulusu, Dhillon and Howard devised different greedy heuristic rules to incrementally deploy the sensors. Zou adapted Virtual Force methods (often used for the deployment of robots) to sensor deployment. As mentioned before, current work on WSN mainly focuses on maximizing sensing coverage, with little or no attention given to the communication requirement between sensors. Meguerdichian assumes that the communication radius of the sensors will be much larger than the sensing radius, so that connectivity will arise naturally.

But this assumption is unrealistic for two reasons. First, there exist sensors whose sensing range is of the same order as, or larger than, the communication range (e.g. seismic sensors), so that maximizing coverage without regard for the communication range will result in a disconnected network. Second, if the areas to be covered are disjoint, the network will be partitioned. In addition, in our WSN model the sensors must be connected not only to each other but also to the HECN. Therefore the communication connectivity requirement cannot be trivialized, and both aspects of the sensors (sensing and communication) must be taken into account in network planning. Moreover, existing work considers only a single objective (almost always coverage), whereas other considerations are of vital practical importance in the choice of the network layout (lifetime, robustness to node failure, etc.). Current work on WSN does not deal with multiple objectives and pays little attention to the communication connectivity requirement essential for data relay. This work attempts to start addressing these gaps.



































CHAPTER-2

EXISTING SYSTEM


2.1 INTRODUCTION

A genetic algorithm (GA) is a search technique used in computing to find exact or
approximate solutions to optimization and search problems. Genetic algorithms are
categorized as global search heuristics. Genetic algorithms are a particular class of
evolutionary algorithms (EA) that use techniques inspired by evolutionary biology such
as inheritance, mutation, selection, and crossover. GA has a number of features:

A genetic algorithm is a population-based search method.

GA uses recombination to mix information from candidate solutions into new ones.

GA is stochastic.

2.2 EXPLANATION

Genetic Algorithms are nondeterministic, stochastic search/optimization methods that utilize the principles of evolution and natural selection to solve a problem within a complex solution space. They are essentially computer-based problem-solving systems that use computational models of known evolutionary mechanisms as key elements in their design and implementation. They belong to a wider family of algorithms, the Evolutionary Algorithms (EAs). The major classes of EAs are:

GENETIC ALGORITHMS,

EVOLUTIONARY PROGRAMMING,

EVOLUTION STRATEGIES,

CLASSIFIER SYSTEMS,

GENETIC PROGRAMMING.






They all share a common conceptual base: simulating the evolution of individual structures through selection, mutation, and reproduction. The methods depend on the performance of the individual structures as defined by an environment.
Genetic Algorithms are heuristic, which means they estimate a solution. We will not necessarily know whether we have the exact solution, but that may be a minor concern; in fact, most real-life problems are like that, and we estimate a solution rather than calculating it exactly.
GAs work within a complex solution space: they can be used wherever optimization is needed, that is, where the space of candidate solutions is large and complex but the best one must be found. For example, GAs can be used to find good moves in chess, or to attack mathematical and financial problems, among many other areas.




























Figure 2.1: GA representation
















Algorithm:

begin
    INITIALIZE population with random candidate solutions;
    EVALUATE each candidate;
    repeat
        SELECT parents;
        RECOMBINE pairs of parents;
        MUTATE the resulting children;
        EVALUATE children;
        SELECT individuals for the next generation
    until TERMINATION-CONDITION is satisfied
end
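A minimal Python sketch of this loop, using an illustrative toy objective (maximize the number of 1-bits in a binary chromosome); the operators, parameter values, and elitist replacement are one possible set of choices, not prescribed by the text:

```python
import random

def evolve(length=20, pop_size=30, generations=100,
           crossover_rate=0.9, mutation_rate=0.02):
    # INITIALIZE population with random candidate solutions
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = lambda c: sum(c)  # EVALUATE: count of 1-bits (toy objective)

    for _ in range(generations):
        # SELECT parents (binary tournament, an illustrative choice)
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        # RECOMBINE pairs of parents
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if random.random() < crossover_rate:
                cut = random.randint(1, length - 1)      # one-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            children += [c1, c2]
        # MUTATE the resulting children (independent bit flips)
        for c in children:
            for i in range(length):
                if random.random() < mutation_rate:
                    c[i] ^= 1
        # SELECT individuals for the next generation (keep the fittest)
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
        if fitness(pop[0]) == length:                    # TERMINATION-CONDITION
            break
    return pop[0]

best = evolve()
```

Because replacement here keeps the best of parents and children, the best fitness never decreases from one generation to the next.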






























Figure 2.2: GA





2.3 COMPONENTS OF GENETIC ALGORITHMS

The most important components in a GA consist of:

Representation (definition of individuals)

Evaluation function (or fitness function)

Population

Parent selection mechanism

Variation operators (crossover and mutation)

Survivor selection mechanism (replacement)

2.3.1 Representation

Objects forming possible solutions within the original problem context are called phenotypes; their encodings, the individuals within the GA, are called genotypes. The representation step specifies the mapping from the phenotypes onto a set of genotypes.

Candidate solution, phenotype and individual are used to denote points of the space of possible solutions. This space is called the phenotype space.

Chromosome and individual can be used for points in the genotype space.

Elements of a chromosome are called genes. A value of a gene is called an allele.
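As a sketch of this phenotype-to-genotype mapping, suppose the phenotype is a real number in [0, 1) and the genotype is an 8-bit binary string; both the interval and the precision are illustrative choices, not from the text:

```python
BITS = 8  # genotype length (illustrative choice)

def encode(x):
    """Phenotype -> genotype: a real in [0, 1) becomes an 8-bit string."""
    n = int(x * (2 ** BITS))
    return format(n, f"0{BITS}b")

def decode(genotype):
    """Genotype -> phenotype: each character is a gene, its value an allele."""
    return int(genotype, 2) / (2 ** BITS)

g = encode(0.5)   # '10000000'
x = decode(g)     # 0.5
```

The GA then searches over genotypes (bit strings), while fitness is judged on the decoded phenotypes.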

2.3.2 Population

The role of the population is to hold possible solutions. A population is a multiset of genotypes. In almost all GA applications the population size is constant, not changing during the evolutionary search.

2.3.3 Variation Operators

The role of variation operators is to create new individuals from old ones. Variation operators implement the elementary search steps within the search space.









2.3.3.1 Mutation Operator

A unary variation operator is called mutation. It is applied to one genotype and delivers a modified mutant, its child or offspring. In general, mutation is supposed to cause a random, unbiased change. Mutation also has a theoretical role: it can guarantee that the search space is connected.

2.3.3.2 Crossover Operator

The crossover operator is the most important in GA. Crossover is a process of recombining bit strings via an exchange of segments between pairs of chromosomes. A binary variation operator is called recombination or crossover. Like mutation, crossover is a stochastic operator: the choice of which parts of each parent are combined, and how, depends on random drawings. The principle behind crossover is simple: by combining two individuals with different but desirable features, we can produce an offspring that combines both of those features. There are many kinds of crossover.
One-point Crossover: The procedure of one-point crossover is to randomly generate a number (less than or equal to the chromosome length) as the crossover position. Then keep the bits before that position unchanged and swap the bits after the crossover position between the two parents. Example: with the two parents selected above, we randomly generate the number 2 as the crossover position:

Parent1: 7 3 7 6 1 3

Parent2: 1 7 4 5 2 2

Then we get two children:

Child 1 : 7 3| 4 5 2 2

Child 2 : 1 7| 7 6 1 3










Two-point Crossover: The procedure of two-point crossover is similar to that of one-point crossover, except that two positions are selected and only the bits between them are swapped. This method preserves the first and last parts of a chromosome and swaps only the middle part. Example: with the two parents selected above, we randomly generate the numbers 2 and 4 as the crossover positions:

Parent1: 7 3 7 6 1 3

Parent2: 1 7 4 5 2 2

Then we get two children:

Child 1 : 7 3| 4 5| 1 3

Child 2 : 1 7| 7 6| 2 2
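A minimal Python sketch of the one-point and two-point procedures, using the parent chromosomes from the worked examples:

```python
import random

def one_point(p1, p2, point=None):
    # Pick a crossover position, keep the prefix, swap the suffix.
    point = point if point is not None else random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point(p1, p2, a=None, b=None):
    # Pick two positions and swap only the middle segment.
    if a is None or b is None:
        a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

p1 = [7, 3, 7, 6, 1, 3]
p2 = [1, 7, 4, 5, 2, 2]
c1, c2 = one_point(p1, p2, 2)   # ([7, 3, 4, 5, 2, 2], [1, 7, 7, 6, 1, 3])
d1, d2 = two_point(p1, p2, 2, 4)  # ([7, 3, 4, 5, 1, 3], [1, 7, 7, 6, 2, 2])
```

With the positions fixed to 2 (and 2, 4) the children reproduce the worked examples exactly.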

Uniform Crossover

The procedure of uniform crossover: each gene of the first parent has a 0.5 probability of swapping with the corresponding gene of the second parent. Example: for each position we randomly generate a number between 0 and 1, for example 0.2, 0.7, 0.9, 0.4, 0.6, 0.1. If the number generated for a given position is less than 0.5, then child 1 gets the gene from parent 1 and child 2 gets the gene from parent 2; otherwise, vice versa.

Parent1: 7 *3 *7 6 *1 3

Parent2: 1 *7 *4 5 *2 2

Then we get two children:

Child 1 : 7 7* 4* 6 2* 3

Child 2 : 1 3* 7* 5 1* 2
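The uniform procedure can be sketched the same way; here the per-gene random draws are pinned to the values used in the example so the output matches the worked children:

```python
import random

def uniform_crossover(p1, p2, draws=None):
    c1, c2 = [], []
    for i in range(len(p1)):
        r = draws[i] if draws else random.random()
        if r < 0.5:             # child 1 inherits from parent 1
            c1.append(p1[i]); c2.append(p2[i])
        else:                   # otherwise the genes are swapped
            c1.append(p2[i]); c2.append(p1[i])
    return c1, c2

p1 = [7, 3, 7, 6, 1, 3]
p2 = [1, 7, 4, 5, 2, 2]
children = uniform_crossover(p1, p2, draws=[0.2, 0.7, 0.9, 0.4, 0.6, 0.1])
# -> ([7, 7, 4, 6, 2, 3], [1, 3, 7, 5, 1, 2])
```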











2.3.4 Parent Selection Mechanism

The role of parent selection is to distinguish among individuals based on their quality, allowing the better individuals to become parents of the next generation.
Parent selection is probabilistic: high-quality individuals get a higher chance of becoming parents than low-quality ones. Nevertheless, low-quality individuals are usually given a small but positive chance, otherwise the search could become too greedy and get stuck in a local optimum. The chance of each parent being selected is in some way related to its fitness.

2.3.4.1 Fitness-based selection

The standard, original method for parent selection is roulette wheel selection, or fitness-based selection. In this kind of parent selection, each chromosome has a chance of selection that is directly proportional to its fitness. The effect of this depends on the range of fitness values in the current population. Example: if fitness values range from 5 to 10, then the fittest chromosome is twice as likely to be selected as a parent as the least fit.

If we apply fitness-based selection to the population given in example 3.1, we select the second chromosome 7 3 7 6 1 3 as our first parent and 1 7 4 5 2 2 as our second parent.

2.3.4.2 Rank-based selection

In the rank-based selection method, selection probabilities are based on a chromosome's relative rank or position in the population, rather than on absolute fitness.

2.3.4.3 Tournament-based selection

Two individuals are chosen at random from the population. A random number r is then chosen between 0 and 1. If r < k (where k is a parameter, for example 0.75), the fitter of the two individuals is selected to be a parent; otherwise the less fit individual is selected. The two are then returned to the original population and can be selected again.
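A sketch of this tournament rule, assuming chromosomes are lists of digits and using the sum of genes as an illustrative fitness function (the fitness function and sample population are not specified in the text):

```python
import random

def tournament_select(population, fitness, k=0.75):
    # Choose two individuals at random from the population.
    a, b = random.sample(population, 2)
    fitter, weaker = (a, b) if fitness(a) >= fitness(b) else (b, a)
    # Draw r in [0, 1): with probability k return the fitter individual,
    # otherwise return the less fit one.
    return fitter if random.random() < k else weaker

pop = [[7, 3, 7, 6, 1, 3], [1, 7, 4, 5, 2, 2], [2, 2, 2, 2, 2, 2]]
parent = tournament_select(pop, fitness=sum)
```

Since the population is sampled without removal each call, the same individual can be selected repeatedly, as the text describes.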







2.3.5 Survivor Selection Mechanism

The role of survivor selection is to distinguish among individuals based on their quality. In GA the population size is (almost always) constant, so a choice has to be made as to which individuals will be allowed into the next generation. This decision is based on their fitness values, favoring those of higher quality.
As opposed to parent selection, which is stochastic, survivor selection is often deterministic, for instance ranking the unified multiset of parents and offspring and selecting the top segment (fitness-biased), or selecting only from the offspring (age-biased).

2.3.5.1 Termination Condition

Note that GA is stochastic and there is usually no guarantee of reaching an optimum.

Commonly used termination conditions are the following:

1. the maximally allowed CPU time elapses;

2. the total number of fitness evaluations reaches a given limit;

3. for a given period of time, the fitness improvement remains under a threshold value;

4. the population diversity drops under a given threshold.

Note: Premature convergence is the well-known effect of losing population diversity too quickly and getting trapped in a local optimum.
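The four conditions above can be combined into a single predicate checked once per generation; the threshold values and parameter names below are illustrative assumptions, not values from the text:

```python
import time

def should_terminate(start_time, evaluations, best_history, diversity,
                     max_seconds=60.0, max_evaluations=10_000,
                     stall_generations=25, min_improvement=1e-6,
                     min_diversity=0.01):
    # 1. the maximally allowed time has elapsed
    if time.time() - start_time > max_seconds:
        return True
    # 2. the total number of fitness evaluations reached its limit
    if evaluations >= max_evaluations:
        return True
    # 3. fitness improvement stayed under a threshold for a while
    #    (best_history holds the best fitness of each generation)
    if (len(best_history) > stall_generations and
            best_history[-1] - best_history[-1 - stall_generations] < min_improvement):
        return True
    # 4. population diversity dropped under a threshold
    return diversity < min_diversity
```

How diversity is measured (e.g. average pairwise Hamming distance) is left open here, as it is in the text.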


















2.4 WORKING OF GA

Before examining the working of the GA, let us define some biological terms related to it.

Chromosome: A set of genes. A chromosome contains the solution in the form of genes.

Gene: A part of a chromosome. A gene contains a part of the solution. E.g. 16743 is a chromosome and 1, 6, 7, 4 and 3 are its genes.

Individual: Same as chromosome.

Population: The number of individuals present, all with the same chromosome length.

Fitness: The value assigned to an individual, based on how far or close the individual is from the solution. The greater the fitness value, the better the solution it contains.

Fitness function: A function which assigns a fitness value to an individual. It is problem specific.

Breeding: Taking two fit individuals and intermingling their chromosomes to create two new individuals.

Mutation: Changing a random gene in an individual.

Selection: Selecting individuals for creating the next generation.

A genetic algorithm applies the rules of evolution to the individuals. Each individual in the GA population represents a possible solution to the problem. The algorithm selects fit individuals according to the fitness function, then combines these individuals into new individuals. Applying this method repeatedly, the population will hopefully evolve good solutions.

Specifically, the elements of a GA are:

1. Selection (according to some measure of fitness),

2. Cross-over (a method of reproduction, "mating" the individuals into new
individuals), and

3. Mutation (adding a bit of random noise to the off-spring, changing their "genes").








As we can see, Darwin's principles have been a major inspiration for GAs. A GA can be performed through the following cycle of stages:

i) creation of a "population" of strings,

ii) evaluation of each string,

iii) selection of the best strings, and

iv) genetic manipulation to create a new population of strings.

This flowchart illustrates the basic steps in a GA:



























Figure 2.3: Steps in a GA

Now let us look at how each of these steps is carried out.


Each cycle of a genetic algorithm produces a new generation of possible solutions for a given problem. In the first phase, an initial population of representatives of potential solutions is created to initiate the process.












1. The elements of the population are encoded into bit-strings, called chromosomes. Although chromosomes can be encoded in many ways, binary encoding is the most common. In binary encoding, every chromosome is a string of bits, 0 or 1.

Chromosome A 101100101100101011100101

Chromosome B 111111100000110000011111

2. The performance of the strings, often called fitness, is then evaluated with the help of some function representing the constraints of the problem. A fitness function is a particular type of objective function that quantifies the optimality of a solution (that is, a chromosome) in a genetic algorithm, so that the chromosome may be ranked against all the other chromosomes. Depending on the fitness of the chromosomes, they are selected for the subsequent genetic manipulation process.

3. The selection process is mainly responsible for ensuring survival of the best-fit individuals. Here individual genomes are chosen from the population for later breeding (recombination or crossover).

A generic selection procedure may be implemented as follows:

The fitness function is evaluated for each individual, providing fitness values, which are then normalized. Normalization means dividing the fitness value of each individual by the sum of all fitness values, so that the sum of all resulting fitness values equals 1. This can be done and represented through the roulette wheel selection method.


Roulette wheel selection method: Imagine a roulette wheel on which all the chromosomes in the population are placed, each chromosome occupying a slice whose size is proportional to its fitness, as in the following picture.





















Figure 2.4

The population is sorted by descending fitness values.

Accumulated normalized fitness values are computed (the accumulated fitness value of an individual is the sum of its own fitness value plus the fitness values of all the previous individuals). The accumulated fitness of the last individual should of course be 1 (otherwise something went wrong in the normalization step!).

A random number R between 0 and 1 is chosen.

The selected individual is the first one whose accumulated normalized value is greater than R. This step is repeated until the required number of chromosomes has been selected.

The selected chromosomes (individuals) are called parents.
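The normalization, accumulation, and spin described above can be sketched as follows (the three-individual population and its fitness values are illustrative):

```python
import random

def roulette_select(population, fitnesses):
    # Normalize: each fitness divided by the total, so the sum is 1.
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]
    # Accumulate: running sums; the last entry should equal 1.
    accumulated = []
    running = 0.0
    for p in probs:
        running += p
        accumulated.append(running)
    # Spin: the selected individual is the first one whose accumulated
    # normalized fitness exceeds a random R in [0, 1).
    r = random.random()
    for individual, acc in zip(population, accumulated):
        if acc > r:
            return individual
    return population[-1]  # guard against floating-point rounding

pop = ["A", "B", "C"]
parent = roulette_select(pop, [5.0, 3.0, 2.0])  # "A" is chosen about half the time
```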

4. After selection of the population strings, the genetic manipulation process, consisting of two steps, is carried out. In the first step, the crossover operation recombines the bits (genes) of each pair of selected strings (chromosomes). The second step is termed mutation, in which the bits at one or more randomly selected positions of the chromosomes are altered.

CROSS-OVER

Cross-over is the method for combining the selected individuals into new individuals. Remember that the individuals are simply strings of values. The cross-over splits up the "parent" individuals and recombines them. Here is an example of how two "parents" cross over to make two "children".









The simplest way to do this is to choose a crossover point at random, copy everything before this point from the first parent, and copy everything after the crossover point from the second parent.

Parent 1: 11011 | 00100110110   ->   Child 1: 11011 | 11000011110
Parent 2: 11011 | 11000011110   ->   Child 2: 11011 | 00100110110

This is illustrated in the figure below.














Figure 2.5: crossover

MUTATION

Mutation is used to maintain genetic diversity from one generation of a population of chromosomes to the next. It is analogous to biological mutation. The classic example of a mutation operator involves a probability that an arbitrary bit in a genetic sequence will be changed from its original state. A common method of implementing the mutation operator involves generating a random variable for each bit in a sequence; this random variable tells whether or not that particular bit will be modified.
The purpose of mutation in GAs is to preserve and introduce diversity. Mutation should allow the algorithm to avoid local minima by preventing the population of chromosomes from becoming too similar to each other, which would slow or even stop evolution. This reasoning also explains why most GA systems do not rely on fitness alone in generating the next population, but use a random (or semi-random) selection weighted toward the fitter individuals. Mutation is illustrated in the figure below.
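The per-bit scheme just described (a random draw for each bit deciding whether it flips) can be sketched as follows; the mutation rate is an illustrative value:

```python
import random

def mutate(chromosome, rate=0.01):
    # Flip each bit independently with probability `rate`
    # (one random draw per bit, as described in the text).
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

original = [1, 1, 0, 1, 1, 0, 0, 1]
mutant = mutate(original, rate=0.1)  # most bits unchanged, a few flipped
```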





















Figure 2.6: Alteration of 5th bit.

2.5 ADVANTAGES AND DISADVANTAGES OF GAS

GA has a number of advantages, some important ones being:

Parallelism. GA works with multiple offspring in parallel, making it ideal for large problems where evaluating all possible solutions serially would be too time-consuming, if not impossible.

It can quickly scan a vast solution set. The inductive nature of the GA means that it does not have to know any rules of the problem; it works by its own internal rules. This is very useful for complex or loosely defined problems.

GAs are also easy to implement. Once you have a GA framework, you just have to write a new chromosome (just one object) to solve another problem. With the same encoding, you only change the fitness function. On the other hand, choosing the encoding and the fitness function can be difficult.

Disadvantages

Certain optimisation problems (so-called variant problems) cannot be solved by means of genetic algorithms. This occurs due to poorly designed fitness functions, which generate bad chromosome blocks despite the fact that only good chromosome blocks cross over.

There is no absolute assurance that a genetic algorithm will find a global optimum. This happens quite often when the population has a large number of subjects.

Like other artificial intelligence techniques, the genetic algorithm cannot guarantee constant optimisation response times. Moreover, the difference between the shortest and the longest optimisation response time is much larger than with conventional gradient methods. This unfortunate property limits the use of genetic algorithms in real-time applications.



















































CHAPTER-3

PROPOSED SYSTEM


3.1 INTRODUCTION

A wireless sensor network (WSN) consists of a large number of devices that use sensors to monitor physical or environmental conditions such as temperature, pressure, or motion. These devices are known as sensor nodes, and there may be hundreds to thousands of them. The sensor nodes communicate with each other and can organize themselves after deployment in the particular sensing area to be measured; that is, all the sensor nodes have self-organizing capabilities. Together with routers and gateways, these sensor nodes make up the wireless sensor network. Every sensor node consists of a processing unit with one or more microcontrollers, different types of memory, a radio frequency transceiver, a power unit (for example, batteries), and a number of sensors to sense the field. Based on node parameters there are two types of wireless sensor networks: homogeneous and heterogeneous. There are three common types of resource heterogeneity in sensor nodes: computational heterogeneity, link heterogeneity, and energy heterogeneity. The most important is energy heterogeneity, because without it the computational and link heterogeneity drain energy faster, decreasing the lifetime of the network. Heterogeneous networks increase the lifetime of the network and provide reliable transmission of information.







































Figure 3.1. Wireless sensor network structure

Many protocols have been proposed for heterogeneous wireless sensor networks, such as SEP, EEHC, and ETLE. SEP, the Stable Election Protocol, improves the stable region of the clustering hierarchy process using the characteristic parameters of heterogeneity. SEP is among the first energy-efficient routing protocols designed for a heterogeneous network, in the sense that election probabilities are weighted by the initial energy of a node relative to that of the other nodes. It assumes a two-level heterogeneous WSN composed of two types of nodes according to their initial energy: nodes of the first type are called normal nodes, and nodes of the second type, with more energy at the beginning, are known as advanced nodes. SEP may extend the lifetime of the network, but it cannot be applied to multilevel heterogeneous WSNs. Its model contains the fraction of advanced nodes (m) and the additional energy factor between advanced and normal nodes (α). In order to prolong the stable region, SEP attempts to maintain the constraint of well-balanced energy consumption. Advanced nodes have to become cluster heads more often than the normal nodes, which is equivalent to a fairness constraint on energy consumption.









EEHC (Energy Efficient Heterogeneous Clustered scheme) targets three-level heterogeneous wireless sensor networks. It is a heterogeneity-aware protocol whose aim is to increase the lifetime and stability of the network in the presence of heterogeneous nodes. In the model, m is the fraction of the total number of nodes n, and m0 is the percentage of those m*n nodes that are equipped with β times more energy resources than a normal node; these are called super nodes. The remaining (1 - m0)*m*n nodes are equipped with α times more energy than the normal nodes and are known as advanced nodes, and the remaining n*(1 - m) are normal nodes. EEHC may extend the network lifetime and, unlike the LEACH protocol, is suitable for multilevel heterogeneous wireless sensor networks; it has extended the lifetime of the network by 10% compared with LEACH.

ETLE is the Efficient Three-Level Energy algorithm. All the sensor nodes in the network are randomly distributed and not mobile. A node clustering algorithm is used to form a cluster-based network in the WSN. The ETLE algorithm has a periodic round; each round is divided into four phases known as information revision, cluster head selection, cluster creation, and data communication. Each sensor node selects itself as a cluster head independently, considering the remaining energy of each node in each round. Some nodes are given an additional percentage of energy in order to form the energy heterogeneity in the network; here m is used to represent the percentage of such nodes, each having a given multiple of the normal node energy. Normally, in a cluster-based network, some nodes will be selected as cluster heads. A cluster head aggregates the sensing data of its cluster members and transmits it to the sink node, using single-hop data transmission. In ETLE the first node dies later than in EEHC, which makes the lifetime of the network longer than that of EEHC.










3.2 DESCRIPTION

Large numbers of protocols and algorithms have been proposed for wireless ad hoc networks. Sensor nodes are limited in power, computational capacity, and memory. Performing routing in a wireless sensor network under these limitations of low power, energy and storage is a major problem, and because of it the lifetime of the network decreases. To solve this problem, a genetic algorithm is proposed for routing, to enhance the lifetime of the network.

3.2.1 Description of the proposed algorithm

3.2.1.1 Deployment of sensor nodes

Because the network is a heterogeneous wireless sensor network, three types of sensor nodes are deployed: common energy nodes, more-energy nodes and most-energy nodes. These nodes are deployed in a 100 m x 100 m area. The sink node is placed at the centre location (50, 50).
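A sketch of this deployment step, assuming uniform random placement; the node counts and initial-energy values (in joules) are illustrative assumptions, not taken from the text:

```python
import random

AREA = 100.0            # 100 m x 100 m sensing field
SINK = (50.0, 50.0)     # sink node at the centre

def deploy(n_common=60, n_more=30, n_most=10,
           e_common=0.5, e_more=1.0, e_most=2.0):
    """Return a list of (x, y, initial_energy) sensor nodes."""
    nodes = []
    for count, energy in ((n_common, e_common),
                          (n_more, e_more),
                          (n_most, e_most)):
        for _ in range(count):
            nodes.append((random.uniform(0, AREA),
                          random.uniform(0, AREA),
                          energy))
    return nodes

network = deploy()      # 100 heterogeneous nodes, three energy levels
```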

3.2.1.2 Cluster formation

After the deployment of sensor nodes, the clusters are formed. There are different methods for cluster formation; in the proposed algorithm, clusters are formed using the K-gridding method, with K = 3.

3.2.1.3. Cluster head election

The cluster heads are elected using a Genetic Algorithm: the node with the minimum fitness value is elected as cluster head. In a Genetic Algorithm the fitness value is defined by a function that characterizes the particular problem, called the fitness function. The fitness function is an equation derived by analysing the problem, typically one whose exact mathematical solution is not feasible. On this basis the cluster head is elected in order to improve the network lifetime.
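An illustrative sketch of electing a cluster head with a genetic algorithm follows. The fitness function used here (distance to the sink divided by remaining energy, lower is better) is only an assumed example; the text does not give the actual equation.

```python
import random

SINK = (50.0, 50.0)  # sink at the center of the field

def fitness(node):
    """Assumed example fitness: lower is better."""
    dx, dy = node["x"] - SINK[0], node["y"] - SINK[1]
    return (dx * dx + dy * dy) ** 0.5 / max(node["energy"], 1e-9)

def elect_head(cluster, generations=30, pop_size=10, mutation=0.2):
    """Return the index of the elected cluster head."""
    # Initial population of candidate head indices, covering each node.
    pop = [i % len(cluster) for i in range(pop_size)]
    best = min(pop, key=lambda i: fitness(cluster[i]))
    for _ in range(generations):
        # Tournament selection: the lower-fitness candidate survives.
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            nxt.append(a if fitness(cluster[a]) < fitness(cluster[b]) else b)
        # Mutation: occasionally jump to a random node index.
        pop = [random.randrange(len(cluster)) if random.random() < mutation
               else c for c in nxt]
        # Keep track of the best candidate seen so far.
        cur = min(pop, key=lambda i: fitness(cluster[i]))
        if fitness(cluster[cur]) < fitness(cluster[best]):
            best = cur
    return best
```

The node with the minimum fitness value survives selection, matching the election rule described above.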











3.2.1.4. Data transmission

After the cluster head selection, all the nodes in the clusters transmit their data to their
respective cluster heads. The cluster heads further transmit the data to the sink node.

3.3 ENERGY MODEL

Let k be the number of bits of data to be transmitted. The amount of energy consumed in sending k bits over a distance d is calculated using the equations given below.

Etx-con = k * (Etx + Eamp * d^2),    when d < d0

This equation is called the free space model and is used only when the distance satisfies 0 <= d < d0.

Etx-con = k * (Etx + Eamp * d^4),    when d >= d0

This equation is known as the multipath fading model and is used when d >= d0.

Here Etx-con represents the energy consumed during transmission, Etx the energy required for the transmission of the data, and Eamp the energy required for amplification.












The amount of energy consumed during the reception of k bits of data, represented as Erx-con, is calculated as:

Erx-con = k * Erx
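The radio energy model above can be sketched in a few lines. Etx, Erx and Eamp follow the values given in the simulation chapter (Table 1); the crossover distance d0 is not stated in the text, so the value here is only an assumption. Note the text reuses Eamp in both equations, whereas LEACH-style models normally use separate free-space and multipath coefficients.

```python
E_TX = 50e-12       # J/bit, energy to transmit one bit (50 pJ/bit)
E_RX = 50e-12       # J/bit, energy to receive one bit (50 pJ/bit)
E_AMP = 0.0013e-12  # J/bit/m^4, amplifier energy
D0 = 87.0           # m, assumed free-space/multipath crossover distance

def tx_energy(k, d):
    """Energy consumed transmitting k bits over d metres (Etx-con)."""
    if d < D0:
        return k * (E_TX + E_AMP * d ** 2)  # free space model
    return k * (E_TX + E_AMP * d ** 4)      # multipath fading model

def rx_energy(k):
    """Energy consumed receiving k bits (Erx-con)."""
    return k * E_RX
```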

3.4. NETWORK MODEL

The network is modelled with three energy levels because it is a heterogeneous network. It consists of three types of nodes: common energy nodes, more energy nodes and most energy nodes.

Let n be the total number of nodes and let α denote the extra-energy factor. The most energy nodes are those which have α times more energy; they form a fraction m2 of the n nodes.

The more energy nodes are those which have α/2 times more energy; they form a fraction m1 of the n nodes.

The common energy nodes have the initial energy Ec; they form the remaining fraction (1 - (m2 + m1)) of the n nodes.

The total energy of the nodes is given by:

Total Energy = energy of common energy nodes + energy of more energy nodes + energy of most energy nodes

The energy of a common node is represented as Ec, the energy of a more energy node as Em1, and the energy of a most energy node as Em2.
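Under the EEHC-style reading that "α times more energy" means an initial energy of Ec(1 + α) (an assumption, since the text leaves the expressions implicit), the total energy can be computed as:

```python
# Total initial network energy, assuming Em1 = Ec*(1 + alpha/2) for the
# more energy nodes and Em2 = Ec*(1 + alpha) for the most energy nodes.
def total_energy(n, ec, m1, m2, alpha):
    common = n * (1 - (m1 + m2)) * ec        # common energy nodes
    more = n * m1 * ec * (1 + alpha / 2)     # more energy nodes
    most = n * m2 * ec * (1 + alpha)         # most energy nodes
    return common + more + most
```

For example, with n = 100, Ec = 0.5, m1 = m2 = 0.1 and α = 1 this gives 40 + 7.5 + 10 = 57.5 J.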



















CHAPTER-4

RESULTS AND ANALYSIS


This section presents the simulations used to analyse and evaluate the performance of the proposed algorithm. MATLAB is used for the simulation and the results. To verify the proposed algorithm, the results are compared with the ETLE (Efficient Three-Level Energy) algorithm.

4.1. SIMULATION SETUP

The wireless sensor network is simulated with 100 nodes, all deployed randomly in a sensing field of 100 m x 100 m. There are three different types of nodes: common energy nodes, more energy nodes and most energy nodes. Every node transmits k bits of data to its cluster head in each round. The simulation parameters used in the proposed algorithm are given in Table 1.

Description of parameter                         Symbol   Value
Number of nodes                                  n        100
Area                                             M x M    100 m x 100 m
Energy consumed by the amplifier to transmit     Eamp     0.0013 pJ/bit/m^4
Energy required for transmission of the signal   Etx      50 pJ/bit
Energy required for reception of the signal      Erx      50 pJ/bit
Data packet                                      k        4000 bits
Data aggregation energy                          Eda      5 pJ/bit/report

Table 1. Simulation parameter values





4.2. SIMULATIONS AND ANALYSIS

The 100 nodes are deployed in the sensing field of 100 m x 100 m area. The simulation results shown below are obtained with α = 1 and m2 = m1 = 0.1. The common nodes are represented by 'o' in red color, the more energy nodes by '+' in blue color, the most energy nodes by '*' in green color, and the sink node by a marker in yellow color.

















Figure 4.1 Deployment of sensor nodes

The following figure shows the cluster formation and the transmission of data between
the nodes.




















Figure 4.2 Snapshot of cluster formation and transmission







The following figure shows the network lifetime comparison between the proposed algorithm and the ETLE algorithm. Fig. 4.3 shows that the proposed algorithm performs better than the ETLE algorithm because the first node dies later in the proposed algorithm than in ETLE. The network lifetime is therefore shorter under the ETLE algorithm, while the proposed algorithm enhances it.



















Figure 4.3 The number of dead nodes in each round

A comparison of the round in which the first node dies is shown in Table 4.2. The proposed algorithm and the ETLE algorithm are compared in terms of network lifetime, on the basis of three different initial energies.


Algorithm   Initial energy   Round at which the first node dies
ETLE        0.5              609
Proposed    0.5              647
ETLE        0.25             1256
Proposed    0.25             1294
ETLE        1                2484
Proposed    1                2589

Table 4.2. Comparison of the round in which the first node dies







The above table shows that, at an initial energy of 0.25, the first node dies in the 1294th round in the proposed algorithm, while in the ETLE algorithm it dies in the 1256th round. The first node in the proposed algorithm therefore dies 38 rounds later, which increases the network lifetime compared with the ETLE algorithm.
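The relative gain implied by Table 4.2 can be checked with a few lines; at an initial energy of 0.25 the 38-round gain corresponds to roughly 3%, which matches the figure quoted in the conclusion.

```python
# Relative improvement in the first-node-death round of the proposed
# algorithm over ETLE, using the values from Table 4.2
# (initial energy -> (ETLE round, proposed round)).
rounds = {0.5: (609, 647), 0.25: (1256, 1294), 1.0: (2484, 2589)}

for energy, (etle, proposed) in rounds.items():
    gain = 100.0 * (proposed - etle) / etle
    print(f"initial energy {energy}: first node dies {proposed - etle} "
          f"rounds later (+{gain:.1f}%)")
```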
































































CHAPTER-5

CONCLUSION


Many protocols have been proposed for heterogeneous wireless sensor networks, such as SEP, EEHC and ETLE. In this paper we proposed an algorithm to extend the lifetime of the network. The nodes are deployed randomly and the cluster head is elected by the Genetic Algorithm on the basis of its fitness value. It is shown that the first node dies later in the proposed algorithm than in ETLE (Efficient Three-Level Energy), which increases the lifetime of the heterogeneous wireless sensor network by about 3% compared to ETLE.













































REFERENCES

[1] Smriti Joshi and Anant Kr. Jaywalk, "Energy-Efficient MAC Protocol for Wireless Sensor Networks - A Review", International Journal of Smart Sensors and Ad Hoc Networks (IJSSAN), ISSN 2248-9738, Volume 1, Issue 4, 2012.

[2] D. Kumar, T. C. Aseri and R. B. Patel, "EEHC: Energy efficient heterogeneous clustered scheme for wireless sensor networks", Computer Communications 32 (2009), pp. 662-667.

[3] G. Smaragdakis, I. Matta and A. Bestavros, "SEP: A Stable Election Protocol for Clustered Heterogeneous Wireless Sensor Networks", Proceedings of the 2nd International Workshop on Sensor and Actor Network Protocols and Applications (SANPA), Boston, U.S.A., 2004, pp. 1-11.

[4] N. Tuah, M. Ismail and K. Jumari, "Energy Efficient Algorithm for Heterogeneous Wireless Sensor Network", International Conference on Control System, Computing and Engineering, IEEE, 2011.

[5] Chien-Chih Liao and Chuan-Kang Ting, "Extending the Lifetime of Dynamic Wireless Sensor Networks by Genetic Algorithm", WCCI 2012 IEEE World Congress on Computational Intelligence, June 10-15, 2012, Brisbane, Australia.

[6] Sajid Hussain, Abdul Wasey Matin and Obidul Islam, "Genetic Algorithm for Hierarchical Wireless Sensor Networks", Journal of Networks, vol. 2, no. 5, September 2007.

[7] Navdeep Kaur and Deepika Sharma, "Genetic Algorithm for Optimizing the Routing in the Wireless Sensor Network", International Journal of Computer Applications (0975-8887), Volume 70, No. 28, May 2013.

[8] Navdeep Kaur, Department of Electronics and Communication, Lovely Professional University, Phagwara, India, "Genetic Algorithm for Optimizing the Routing in the Wireless Sensor Network", International Journal of Computer Applications (0975-8887), Volume 70, No. 28, May 2013.

[9] Wikipedia.com
