

ICAET - 2014

Sponsored By
(Registered Under Indian Trust Act, 1882)

Technical Program
19TH October, 2014
Hotel NRS Sakithyan, Chennai

Organized By


Copyright 2014 by IAETSD

All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, electronic,
mechanical, photocopying, recording, or otherwise, without the prior written
consent of the publisher.

ISBN: 978 - 1502893314


Proceedings preparation, editing and printing are sponsored by


The International Association of Engineering and Technology for Skill Development (IAETSD) is a professional, non-profit conference-organizing body devoted to promoting social, economic, and technical advancement by conducting international academic conferences in various engineering fields around the world. IAETSD organizes multidisciplinary conferences for academics and professionals in engineering, and was established to strengthen the skill development of students. IAETSD is a meeting place where engineering students can share their views and ideas, improve their technical knowledge, develop their skills, and present and discuss recent trends in advanced technologies, new educational environments, and innovative technology-learning ideas. The intention of IAETSD is to expand knowledge beyond boundaries by joining hands with students, researchers, academics, and industrialists to explore technical knowledge all over the world and to publish proceedings. IAETSD offers learning professionals opportunities to explore problems from many engineering disciplines and to discover and implement innovative solutions. IAETSD also aims to promote upcoming trends in engineering and technology.

The objective of ICAET is to present the latest research and results of scientists in all engineering disciplines. The conference provides opportunities for delegates from different areas to exchange new ideas and application experiences face to face, to establish business or research relations, and to find global partners for future collaboration. We hope that the conference results constitute a significant contribution to knowledge in these up-to-date scientific fields. The organizing committee is pleased to invite prospective authors to submit their original manuscripts to ICAET 2014.
All full-paper submissions will be peer reviewed and evaluated on originality, technical and research content and depth, correctness, relevance to the conference, contributions, and readability. The conference will be held every year to make it an ideal platform for people to share views and experiences in current trending technologies in the related areas.

Conference Advisory Committee:

Dr. P Paramasivam, NUS, Singapore
Dr. Ganapathy Kumar, Nanometrics, USA
Mr. Vikram Subramanian, Oracle Public Cloud
Dr. Michal Wozniak, Wroclaw University of Technology
Dr. Saqib Saeed, Bahria University
Mr. Elamurugan Vaiyapuri, tarkaSys, California
Mr. N M Bhaskar, Micron Asia, Singapore
Dr. Mohammed Yeasin, University of Memphis
Dr. Ahmed Zohaa, Brunel University
Kenneth Sundarraj, University of Malaysia
Dr. Heba Ahmed Hassan, Dhofar University
Dr. Mohammed Atiquzzaman, University of Oklahoma
Dr. Sattar Aboud, Middle East University
Dr. S Lakshmi, Oman University

Conference Chairs and Review committee:

Dr. Shanti Swaroop, Professor, IIT Madras
Dr. G Bhuvaneshwari, Professor, IIT Delhi
Dr. Krishna Vasudevan, Professor, IIT Madras
Dr. G V Uma, Professor, Anna University
Dr. S Muttan, Professor, Anna University
Dr. R P Kumudini Devi, Professor, Anna University
Dr. M Ramalingam, Director (IRS)
Dr. N K Ambujam, Director (CWR), Anna University
Dr. Bhaskaran, Professor, NIT Trichy
Dr. Pabitra Mohan Khilar, Associate Prof, NIT Rourkela
Dr. V Ramalingam, Professor
Dr. P Mallikka, Professor, NITTTR, Taramani
Dr. E S M Suresh, Professor, NITTTR, Chennai
Dr. Gomathi Nayagam, Director, CWET, Chennai
Prof. S Karthikeyan, VIT, Vellore
Dr. H C Nagaraj, Principal, NIMET, Bengaluru
Dr. K Sivakumar, Associate Director, CTS
Dr. Tarun Chandroyadulu, Research Associate, NAS



Location-Based Services Using Autonomous GPS
Performance Analysis of Discrete Cosine Transform based image compression
The World's Smallest Computer For Programmers and App Developers (Raspberry Pi)
Enhancing Vehicle to Vehicle Safety Message Transmission Using Randomized
An Efficient and Accurate Misbehavior Detection Scheme in Adversary
Load Stabilizing and Energy Conserving Routing Protocol for Wireless Sensor
Multi-View and Multi Band Face Recognition Survey
Dense Dielectric Patch Array Antenna - A New Kind of Low-Profile Antenna Element For 5G Cellular Networks
Advanced mobile signal jammer for GSM, CDMA and 3G Networks with prescheduled time duration using ARM7 TDMI processor based LPC2148
Removal of a Model Textile Dye from Effluent using fenugreek powder as an
Artifact Facet Ranking and Its Application: A Survey
Secure Data Sharing of Multi-Owner Groups in Cloud


Proceedings of International Conference on Advancements in Engineering and Technology


Location-Based Services Using Autonomous GPS

R. Jegadeeswari
M.E. Scholar, Department of Computer Science and Engineering,
Sree Sowdambika College of Engineering, Anna University,
Aruppukottai, Tamil Nadu, India.

S. Parameswaran
Assistant Professor, Department of Computer Science and Engineering,
Sree Sowdambika College of Engineering, Anna University,
Aruppukottai, Tamil Nadu, India.

Abstract - In the mobile era, people want to know about everything at any place and at any time. Location Based Services (LBS) can meet this need. Normally, LBS rely on signals from the mobile's service provider, which is inadequate when signal strength is poor or unavailable, and may therefore be inefficient or even useless in some circumstances. Most existing LBS also consume considerable battery power, take a long time to connect to the server, and incur cost. To overcome these problems, the proposed Android app uses Autonomous GPS to track the location of a user with the help of the device's location services. The goal of this app is to provide precise information about a person's current location in real time. The app is designed to find and tag services and people where they are located using Autonomous GPS, which obtains the user's location from the mobile device and is built into smartphones. It works faster and at lower cost than the existing system.
Keywords: Android operating system, LBS, Autonomous GPS, Google Map.

Android is an open-source mobile operating system (OS) based on the Linux kernel, developed by the Open Handset Alliance, led by Google together with other companies. The latest version of Android is 4.4 KitKat. Mobile applications can be built for Android and iOS.
An Android app is a software application running on the Android platform. Because the Android platform is built for mobile devices, a typical Android app is designed for a smartphone or a tablet PC running the Android OS. Android apps are available in the Google Play Store (formerly known as the Android Market), the Amazon Appstore, the 1Mobile Market, and various Android-app-focused sites.
Benefits of Android include:
Open source
Android scales to all devices
Time for a change
Third-party development is encouraged.
Android has the following limitations:
Not yet backed by any major corporation except HTC


Does not support a few applications, such as Firefox
Several restrictions exist in Bluetooth
1.2 LBS
A Location Based Service (LBS) is an information and entertainment service, available on mobile devices through the mobile network, that makes use of the geographical position of the mobile device. LBS can be used in a variety of contexts, such as health, work, and personal life; examples include parcel tracking and vehicle tracking services. LBS involve two foremost actions:
1. Finding the location of the user
2. Utilizing this information to offer a service.
Location Based Services can be classified into three categories:
A) Public Safety/Emergency Services
B) Consumer Services
Some examples of Location Based Services are:
Determining the nearest business or service, such as a bank or hotel
Receiving alerts, such as notification of a sale in a shopping mall or news of a traffic jam nearby
Receiving the location of a stolen phone.
To provide more useful, attractive, and engaging social networks, apps, and services, location components have been added to new, innovative projects. I have catalogued some applications of LBS as follows:
Information Services
Location Based Social Media
Mobile Location-Based Gaming
Augmented Reality
The remainder of this paper is organized as follows. The background and related work on Autonomous GPS and other techniques are discussed in Section II. The methodology of LBS components is presented in Section III. The Android application, working with GPS and the Google Maps API, is explained in Section IV, and the proposed system is evaluated in Section V. Section VI concludes this paper, and the future scope of this application is discussed in Section VII.
2.1 Autonomous GPS
The Global Positioning System (GPS) is a space-based satellite navigation system that includes 24 satellites placed into orbit by the U.S. Department of Defense. On Earth, GPS receivers take signal information from many satellites and use triangulation to determine the user's precise location. Cellular networks are used to fill in gaps in transmission signals.
Today, three different transmission modes are used in community corrections to deliver accurate, reliable signals to GPS tracking receivers and to overcome geographical or atmospheric challenges. These modes are:
Receiving signals via the GPS satellite network only (Autonomous GPS);
A combination of GPS satellite data with support from a cellular network (Assisted GPS);
A method that measures signals from nearby cellular towers and reports the time/distance readings back to the locating towers (Advanced Forward Link Trilateration) to determine an approximate location of the GPS receiver.
In the proposed system, we use Autonomous GPS to track the user's location from the mobile device. It offers location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.
On the ground, any GPS receiver contains a computer that "triangulates" its own position by receiving bearings from at least three satellites. The result is provided as a geographic position - longitude and latitude - to, for most receivers, within an accuracy of 10 to 100 meters. Software applications can then use those coordinates to give driving or walking directions.
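The triangulation idea above can be sketched in a few lines. This is a simplified 2-D illustration, not real GPS processing (which works in 3-D with clock-bias corrections); the anchor coordinates and distances are made up for the example:

```python
# Trilateration sketch: recover a 2-D position from distances to three
# known reference points. Subtracting the circle equations pairwise
# yields a linear system in (x, y), solved here with Cramer's rule.

def trilaterate(p1, d1, p2, d2, p3, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linearized equations: 2(xi - x1)x + 2(yi - y1)y = d1^2 - di^2 + ...
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # nonzero if anchors are not collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Receiver actually at (3, 4); distances measured to three anchors.
pos = trilaterate((0, 0), 5.0, (10, 0), 65 ** 0.5, (0, 10), 45 ** 0.5)
print(pos)  # approximately (3.0, 4.0)
```

With a fourth distance, the same subtraction trick resolves the receiver clock bias, which is why real GPS needs four satellites rather than three.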

Figure.2.1: GPS Accuracy and Precision

A. GPS Accuracy & Precision
One of the key advantages of GPS is its accuracy. Levels of GPS accuracy are extremely high, even for civilian-use GPS units. It is also worth defining the difference between accuracy and precision.
GPS accuracy: the degree of closeness of the indicated readings to the actual position.
GPS precision: the degree to which the readings can be repeated; the smaller the circle of uncertainty, the higher the precision.
The accuracy obtained with a GPS receiver varies according to the overall system used. While the accuracy level actually achieved depends on many factors, typical estimates of expected accuracy can be given for each system: GPS with S/A (Selective Availability) activated, GPS without S/A activated, and Differential GPS.

B. Levels of Accuracy
The following graph shows the accuracy levels of all currently available systems. The vertical axis is the expected accuracy or error level, shown in both centimeters and meters. The horizontal axis is the distance along the Earth's surface between the reference station and the remote user. If there is no reference station, the line is drawn all the way to 10,000 km, i.e., spanning the whole Earth.


The accuracy obtained in this way depends mainly on:
Quality of the reference receiver
Quality of the user's receiver
Update rate and data latency of the communications link
Distance from reference to user
Multipath at the reference receiver
Multipath at the user's site
Recently, smartphones (Android, BlackBerry and iPhone) have come equipped with A-GPS technology, which provides the spatial coordinates of the user's location by combining Global Positioning System (GPS) satellite data with support from the cellular network; it works indoors and outdoors, responds faster, and uses less battery power.
2.2 Assisted GPS
Assisted GPS, also known as A-GPS or AGPS, is a system that improves the startup performance, or time-to-first-fix (TTFF), of a GPS satellite-based positioning system. In A-GPS, the network operator deploys an A-GPS server. These servers download the orbital information from the satellites and store it in a database. An A-GPS-capable device can connect to these servers and download this information over mobile-network radio bearers such as GSM, CDMA, WCDMA, and LTE, or even over other wireless bearers such as Wi-Fi. Generally, the data rate of these bearers is high, so downloading the orbital information takes little time.


Figure.2.2: Architecture of A-GPS System

A-GPS deals with signal and wireless network problems by using assistance from other services. Such a technology in our smartphones can aid in different ways, such as tracking the current location, getting turn-by-turn direction instructions, route tracking, etc.
2.3 The Difference Between Autonomous GPS and Assisted GPS
The major difference between the two systems is speed and cost. A-GPS devices determine location coordinates faster because they have better connectivity with cell sites than directly with satellites, whereas GPS devices may take several minutes to determine their location because it takes longer to establish connectivity with four satellites.
A-GPS devices cost money to use on an ongoing basis because they consume mobile network resources, whereas GPS devices communicate directly with satellites for free: there is no cost of operation once the device is paid for.

Figure.2.3: Difference between Standard GPS and Assisted GPS

GPS is a real-time solution provider whereas A-GPS is not. Network usage is required every time we move out of the service area, and A-GPS is useful only for locating a particular place in a small area. There is no privacy in GPS and A-GPS, since the assistance server knows the location of the device. Communication over the wireless network is needed to process the GPS information, so this could be expensive.

Most Location Based Services need several components. I have suggested a model of 5+1 components of LBS: five technological and one human:
1. Positioning System - allows geographically localizing the mobile device both outdoors and indoors using satellite-based systems, Cell-ID, RFID, Bluetooth, WiMAX, and wireless LANs.
2. Communication Network - the wireless network that allows transfer of data between the user (through the mobile device) and the server (service provider). Nowadays it is in most cases the wireless internet (e.g. GPRS, 3G, and 4G).
3. Service and Application Provider - the LBS provider, including the software (e.g. GIS) and other distributed services and components that are used to resolve the query and provide the tailored response to the user.
4. Data and Content Provider - service providers will usually not store and maintain all the information that can be requested by users. Therefore, geographic base data and location information will usually be requested from the maintaining authority or from business and industry partners.






Figure.3.1: Components of LBS

5. Mobile Device - any portable device that has the capability to utilize the components of LBS stated above, for example mobile phones (including smartphones), tablets, palmtops, personal navigation devices, laptops, etc.
6. User - the operator of the mobile device: the person utilizing the potential of the modern mobile device and infrastructure in order to get value-added information or entertainment.

Two methodologies are used to implement LBS:
To process location data on a server and push the generated response to the clients.
To find location data for a mobile device-based application that can use it directly.
To discover the position of the mobile device, LBS must use positioning methods in real time. The accuracy of the methodology depends on the approach used. Locations can be represented in spatial terms or as text descriptions. A spatial location can be represented in the widely used latitude-longitude-altitude coordinate system. Latitude is defined as 0-90 degrees north or south of the equator, and longitude as 0-180 degrees east or west of the prime meridian, which passes through Greenwich, England. Altitude is represented in meters above sea level.
A text description is usually defined as a street location, including city and pin code.
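Given two latitude/longitude pairs in this coordinate system, the great-circle distance between them can be computed with the standard haversine formula. A small sketch (the city coordinates are illustrative round-offs):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Approximate distance from Chennai (13.08 N, 80.27 E)
# to Delhi (28.70 N, 77.10 E), in kilometers.
print(round(haversine_km(13.08, 80.27, 28.70, 77.10)))
```

An LBS back end typically uses exactly this kind of computation to rank "nearest business" results around the user's reported coordinates.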
The location of the device can be retrieved by:
i) Mobile Phone Service Provider Network - the current cell ID is used to locate the Base Transceiver Station (BTS) that the mobile phone is interacting with, and hence the location of that BTS. It is the most basic and cheapest method for this purpose. A GSM cell may be anywhere from 2 to 20 kilometers in diameter; other approaches used along with cell ID can achieve location granularity within 150 meters.
ii) Satellites - the Global Positioning System (GPS) uses 24 satellites orbiting the Earth. It finds the user's position by calculating differences in the arrival times of signals from different satellites. GPS is built into smartphones. Assisted GPS (A-GPS) is the newer technology for smartphones that integrates the mobile network with GPS to give a better accuracy of 5 to 10 meters. It fixes the position within seconds, has better coverage, consumes less battery power, and requires fewer satellites.

Figure.3.2: LBS Components and Service Process

Figure 3.2 shows the connections between these components and the process of an LBS service. First, the user sends a service request using the application running on the mobile device (Step 1). The service request, together with the user's current location information received from the positioning component (in this example, GPS data), is sent to the service server via the mobile communication network (Step 2). The service server queries the geographic database and other associated databases to gather the needed information (Steps 3, 4). At last, the requested information is sent back to the user's mobile phone via the mobile communication network.


This is the biggest and most important part of the entire application. It uses the Android SDK API to manage the GPS sensor, and the Google Maps API to show the map powered by Google Maps and to display markers for events on the map.
Knowing where the user is allows your application to be smarter and deliver better information to the user. When developing a location-aware application for Android, you can utilize GPS and Android's Network Location Provider to acquire the user location. Although GPS is most accurate, it only works outdoors, quickly consumes battery power, and doesn't return the location as quickly as users want. Android's Network Location Provider determines user location using cell tower and Wi-Fi signals, providing location information in a way that works indoors and outdoors, responds faster, and uses less battery power. To obtain the user location in your application, you can use both GPS and the Network Location Provider, or just one.
Challenges in Determining User Location
There are several reasons why a location reading (regardless of the source) can contain errors and be inaccurate. Some sources of error in the user location are:
Multitude of location sources - GPS, Cell-ID, and Wi-Fi can each provide a clue to the user's location. Determining which to use and trust is a matter of trade-offs in accuracy, speed, and battery efficiency.
User movement - because the user's location changes, you must account for movement by re-estimating the user location every so often.
Varying accuracy - location estimates coming from each location source are not consistent in their accuracy. A location obtained 10 seconds ago from one source might be more accurate than the newest location from another, or even the same, source.
Android also provides an API to access Google Maps, so with the help of Google Maps and the location APIs the application can show the required places to the user on the map.
On 10 May 2011, at the Google I/O developer conference in San Francisco, Google announced the opening up and general availability of the Google Places API. The Google Places API is a service that returns data about Places, defined within this web service as spatial locations or preferred points of interest, using HTTP requests. Place responses specify locations as latitude/longitude coordinates.
There are four fundamental Place services available with the Google Places API:
o Place Searches - returns an array of nearby Places based on a location defined by the user.
o Place Details - returns more specific data about a user-defined Place.
o Place Check-ins - check-ins are used to gauge a Place's popularity; frequent check-ins boost a Place's priority in the application's Place Search results.
o Place Reports - allows users to add new locations to the Place service, and to delete Places that the application has added.
The proposal of an integrated Android application is based on this location information.
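As a sketch of how such an HTTP request might be formed, the snippet below only builds a Place Search URL rather than calling the service. The endpoint and parameter names follow the public Places API web-service documentation of that period; the coordinates are illustrative and the API key is a placeholder:

```python
from urllib.parse import urlencode

# Build a Google Places "Nearby Search" request URL (no network call made).
BASE = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
    "location": "13.08,80.27",  # latitude,longitude of the search center
    "radius": 500,              # search radius in meters
    "types": "bank",            # category filter, e.g. nearest banks
    "key": "YOUR_API_KEY",      # placeholder credential
}
url = BASE + "?" + urlencode(params)
print(url)
```

Issuing an HTTP GET on this URL would return a JSON array of nearby Places, each carrying its own latitude/longitude pair as described above.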


In the proposed system, we track the location of a user with the help of the device's location services. This does not require the user to have a SIM card (as we are not tracking based on the signal). The user can easily tag his/her location with the help of Wi-Fi (if available). The user (admin) can then view the current location of a particular person with their posted comments and time; this information must be available on the Android mobile device and also in the user's personalized format.
Users are requested to log in to the app; this is only for security purposes. User details (user name, password) for two or more users are stored in the database. The user logs in to the app; if the user is unauthorized or the username and password are incorrect, an alert message is displayed. The app opens with a main page that has a text box (to type something) and a tag button. When the user taps the button, his current location is gathered with the help of his mobile device's location services. If the mobile's location services have not been enabled, the app shows an alert message saying "Please Enable Location Services". If enabled, his current location is stored in the database with his name. Users can log out of the application. Figure 5.1 shows this scenario.

Figure.5.1: User Scenario

When the admin wants to know the current location of a user, the admin logs in; his main page contains a map button. On tapping the button, Google Maps is generated with the help of the Google Maps API. The map contains the user's tags with his comments (if any). Figure 5.2 shows this scenario.







Figure.5.2: Admin Scenario

This application is mainly designed for the marketing field. For example, sales executives or sales representatives tag their location at each point, and this can be viewed by their sales manager if he wants to monitor his team members or sales executives.
The Android system provides the opportunity to determine the current geographical location and to build location-aware applications without focusing on the details of the underlying location technology. To obtain the user location, we can use:
GPS Provider
Network Location Provider (availability of cell tower and Wi-Fi access points)
or both of them. Every provider has its pros and cons and can be used depending on the circumstances of each situation. Location Services are used to get the current location by choosing the best provider in each situation.
In this paper, we developed an LBS that people can use with ease through an Android mobile. Once you learn about the location-related APIs and capabilities that the Android platform provides, you can use them to create the next great location-aware application. If all the information can be made available on a mobile device in a user-customized format, it helps users manage their valuable time effectively.
As of now, we can manage the identities of only a limited number of users for visualization of user location. This limitation will be overcome by deploying a separate server for gathering users' information; by launching this, we can break the limitation that occurred in the previous task. Our future research will therefore support any number of user identities, and this information will be grouped according to the nature of the role each user performs, achieved by using a filtering mechanism.

References
[1] Amit Kushwaha, Vineet Kushwaha, "Location Based Services using Android Mobile Operating System", IJAET, Vol. 1, Issue 1, pp. 14-20, Mar.
[2] Prof. Seema Vanjire, Unmesh Kanchan, Ganesh Shitole, Pradnyesh Patil, "Location Based Services on Smart Phone through the Android Application", IJARCCE, Vol. 3, Issue 1, Jan.
[3] Manav, "Implementation of Location Based Services in Android using GPS and Web services", IJCSI, Vol. 9, Issue 1, No 2, Jan. 2012.
[4] Bhupinder S. Mongia, Vijay K. Madisetti, Fellow, IEEE, "Reliable Real-Time Applications on Android OS", submitted for publication, June 18.
[5] [Online]: http://www.smashingmagazine.com/201
[6] [Online]: http://www.sencha.com/products/touch
[7] [Online]: file:///F:/gps/Autonomous Assisted GPS or AFLT What and Why-BI
[8] [Online]: file:///F:/gps/GPS Accuracy, Errors & Precision Radio-Electronics.Com.htm
[9] [Online]: file:///F:/gps/GPS Accuracy Levels.htm
[10] [Online]: http://geoawesomeness.com/knowledge





Performance Analysis of Discrete Cosine Transform based image compression

1,2 Research Scholars, 3 Assistant Professor-II,
SCSVMV University.
professorvijayece@gmail.com, maarchaname@gmail.com, drsarathy45@gmail.com
Image compression is one of the most important requirements in multimedia applications. Compression allows efficient utilization of channel bandwidth and storage. Typical access speeds for storage media are inversely proportional to capacity; through data compression, such tasks can be optimized. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. Here we develop some simple functions to compute the DCT and to compress images. Different images are compressed using the DCT, and the performance parameters are analyzed using MATLAB. Image compression is studied using the 2-D discrete cosine transform, with the original image transformed in different window sizes. The implementation of this work was successful in achieving significant PSNR values.
Keywords: Discrete Cosine Transform, Pixels, Bit Rate, Mean Square Error, Signal to Noise Ratio, PSNR

Image compression is very important for efficient transmission and storage of images. Demand for communication of multimedia data through the telecommunications network, and for accessing multimedia data through the Internet, is growing explosively. With the use of digital cameras, requirements for storage, manipulation, and transfer of digital images have also grown explosively. These image files can be very large and occupy a lot of memory: a grayscale image of 256 x 256 pixels has 65,536 elements to store, and a typical 640 x 480 color image has nearly a million. Downloading such files from the Internet can be a very time-consuming task. Image data comprise a significant portion of multimedia data, and they occupy the major portion of the communication bandwidth for multimedia communication. Therefore, the development of efficient techniques for image compression has become quite necessary. A common characteristic of most images is that neighboring pixels are highly correlated and therefore contain highly redundant information. The basic objective of image compression is to find an image representation in which the pixels are less correlated. The two fundamental principles used in image compression are redundancy and irrelevancy: redundancy reduction removes repetition from the signal source, while irrelevancy reduction omits pixel values that are not noticeable to the human eye. JPEG and JPEG 2000 are two important techniques used for image compression.
Work on international standards for image compression started in the late 1970s with the CCITT (currently ITU-T) need to standardize binary image compression algorithms for Group 3 facsimile communications. Since then, many other committees and standards have been formed to produce de jure standards (such as JPEG), while several commercially successful initiatives have effectively become de facto standards (such as GIF). Image compression standards bring about many benefits, such as: (1) easier exchange of image files between different devices and applications; (2) reuse of existing hardware and software for a wider array of products; (3) existence of benchmarks and reference data sets for new and alternative developments.

The need for image compression becomes apparent when the number of bits per image is computed from typical sampling rates and quantization methods. For example, the amount of storage required for given images is: (i) a low-resolution, TV-quality color video image of 512 x 512 pixels/color, 8 bits/pixel, and 3 colors consists of approximately 6 x 10^6 bits; (ii) a 24 x 36 mm negative photograph scanned at 12 x 10^-6 m (3000 x 2000 pixels/color, 8 bits/pixel, 3 colors) contains nearly 144 x 10^6 bits; (iii) a 14 x 17 inch radiograph scanned at 70 x 10^-6 m (5000 x 6000 pixels, 12 bits/pixel) contains nearly 360 x 10^6 bits. Thus storage of even a few images could cause a problem. As another example of the need for image compression, consider the transmission of the low-resolution 512 x 512 x 8 bits/pixel x 3-color video image over telephone lines: using a 9600 baud (bits/sec) modem, the transmission would take approximately 11 minutes for just a single image, which is unacceptable for most applications.
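These figures can be checked with a few lines of arithmetic (a sketch; the image dimensions are those quoted above):

```python
# Verify the storage and transmission figures quoted above.
tv_bits = 512 * 512 * 8 * 3          # TV-quality color frame
neg_bits = 3000 * 2000 * 8 * 3       # scanned 24 x 36 mm negative
xray_bits = 5000 * 6000 * 12         # scanned 14 x 17 inch radiograph

print(tv_bits)                        # → 6291456   (~6 x 10^6 bits)
print(neg_bits)                       # → 144000000 (144 x 10^6 bits)
print(xray_bits)                      # → 360000000 (360 x 10^6 bits)

# Transmission of the TV frame over a 9600 bit/s modem:
minutes = tv_bits / 9600 / 60
print(round(minutes, 1))              # → 10.9, i.e. roughly 11 minutes
```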

International Association of Engineering and Technology for Skill Development


Proceedings of International Conference on Advancements in Engineering and Technology

Figure 1 Block Diagram of Image Compression

Number of bits required to represent the
information in an image can be minimized by removing the
redundancy present in it. There are three types of
redundancy:
(i) Spatial redundancy, which is due to the correlation or
dependence between neighboring pixel values.
(ii) Spectral redundancy, which is due to the correlation
between different color planes or spectral bands.
(iii) Temporal redundancy, which is present because of the
correlation between different frames in a sequence of
images.
Image compression research aims to reduce the
number of bits required to represent an image by removing
the spatial and spectral redundancies as much as possible.
Data redundancy is a central issue in digital image
compression. If n1 and n2 denote the number of information
carrying units in the original and compressed image
respectively, then the compression ratio CR can be defined
as CR = n1/n2, and the relative data redundancy RD of the
original image can be defined as RD = 1 - 1/CR.
Three possibilities arise here:
(1) If n1 = n2, then CR = 1 and hence RD = 0, which implies
that the original image does not contain any redundancy
between the pixels.
(2) If n1 >> n2, then CR is large and hence RD approaches 1,
which implies a considerable amount of redundancy in the
original image.
(3) If n1 << n2, then CR approaches 0 and hence RD becomes
large and negative, which indicates that the compressed
image contains more data than the original image.
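The definitions of CR and RD can be captured in a few lines of Python (an illustrative sketch, not code from the paper):

```python
# Minimal sketch of the compression ratio and relative
# data redundancy formulas defined above.

def compression_ratio(n1, n2):
    """CR = n1/n2, where n1 and n2 are the bit counts of the
    original and compressed representations."""
    return n1 / n2

def relative_redundancy(n1, n2):
    """RD = 1 - 1/CR."""
    return 1.0 - 1.0 / compression_ratio(n1, n2)

# Case (1): n1 == n2 -> CR = 1, RD = 0 (no redundancy removed)
print(compression_ratio(1000, 1000), relative_redundancy(1000, 1000))  # 1.0 0.0
# Case (2): n1 >> n2 -> CR large, RD close to 1
print(compression_ratio(1000, 100), relative_redundancy(1000, 100))    # 10.0 0.9
# Case (3): n1 << n2 -> RD large and negative ("compressed" is bigger)
print(relative_redundancy(100, 1000))                                  # -9.0
```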
Types of compression
Lossless versus Lossy compression: In lossless
compression schemes, the reconstructed image, after
compression, is numerically identical to the original image.



However, lossless compression can only achieve a modest
amount of compression. Lossless compression is preferred
for archival purposes and often medical imaging, technical
drawings, clip art or comics. This is because lossy
compression methods, especially when used at low bit rates,
introduce compression artifacts. An image reconstructed
following lossy compression contains degradation relative
to the original. Often this is because the compression
scheme completely discards redundant information.
However, lossy schemes are capable of achieving much
higher compression. Lossy methods are especially suitable
for natural images such as photos in applications where
minor (sometimes imperceptible) loss of fidelity is
acceptable to achieve a substantial reduction in bit rate. The
lossy compression that produces imperceptible differences
can be called visually lossless.
Predictive versus Transform coding: In predictive
coding, information already sent or available is used to
predict future values, and the difference is coded. Since this
is done in the image or spatial domain, it is relatively simple
to implement and is readily adapted to local image
characteristics. Differential Pulse Code Modulation
(DPCM) is one particular example of predictive coding.
Transform coding, on the other hand, first transforms the
image from its spatial domain representation to a different
type of representation using some well-known transform
and then codes the transformed values (coefficients). This
method provides greater data compression compared to
predictive methods, although at the expense of greater
computational complexity.
The Discrete Cosine Transform (DCT) uses cosine
functions to transform a signal from its spatial
representation into the frequency domain. The DCT
represents an image as a sum of sinusoids of varying
magnitudes and frequencies.
The DCT has the property that, for a typical image, most of
the visually significant information about the image is
concentrated in just a few DCT coefficients. After the
computation of the DCT coefficients, they are normalized
according to a quantization table with different scales
provided by the JPEG standard, computed from psycho-
visual evidence. The selection of the quantization table
affects the entropy and compression ratio: larger
quantization values lower the quality of the reconstructed
image (higher mean square error) but give a better
compression ratio. In a lossy compression technique, during
a step called quantization, the less important frequencies
are discarded, and the most important frequencies that
remain are used to retrieve the image in the decompression
process. After quantization, the quantized coefficients are
rearranged in a zigzag order to be further compressed by an
efficient lossless coding algorithm.
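The pipeline just described (blockwise DCT, quantization against a table, zigzag reordering) can be sketched in Python. This is an illustrative, naive implementation using the standard JPEG luminance quantization table, not the authors' MATLAB code:

```python
import math

# Naive O(N^4) sketch of the JPEG-style pipeline described above:
# 8x8 DCT -> quantization -> zigzag scan. For clarity, not speed.

N = 8

def dct2(block):
    """Orthonormal 2D DCT-II of an NxN block."""
    out = [[0.0] * N for _ in range(N)]
    for k in range(N):
        for l in range(N):
            s = 0.0
            for m in range(N):
                for n in range(N):
                    s += (block[m][n]
                          * math.cos(math.pi * (2 * m + 1) * k / (2 * N))
                          * math.cos(math.pi * (2 * n + 1) * l / (2 * N)))
            ck = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            cl = math.sqrt(1.0 / N) if l == 0 else math.sqrt(2.0 / N)
            out[k][l] = ck * cl * s
    return out

# Standard JPEG luminance quantization table
Q = [
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
]

def quantize(coeffs):
    """Divide each coefficient by its table entry and round."""
    return [[round(coeffs[k][l] / Q[k][l]) for l in range(N)] for k in range(N)]

def zigzag(block):
    """Reorder an 8x8 block along anti-diagonals (JPEG zigzag scan)."""
    order = sorted(((k, l) for k in range(N) for l in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[k][l] for k, l in order]

# A flat (constant) block: all energy ends up in the DC coefficient,
# so the quantized zigzag stream is one value followed by zeros.
flat = [[128] * N for _ in range(N)]
stream = zigzag(quantize(dct2(flat)))
print(stream[0], all(v == 0 for v in stream[1:]))  # 64 True
```

Run-length and entropy coding of the zigzag stream (the lossless step) would follow in a full codec.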



Let us present here briefly the computation technique for
the DCT of an image. The DCT of a 2D image x(m,n) of
size N x N is defined as:

C(k,l) = Σ(m=0..N-1) Σ(n=0..N-1) 4 x(m,n) cos(π(2m+1)k / 2N) cos(π(2n+1)l / 2N),
0 ≤ k, l ≤ N-1

Let the low-low sub band xLL(m,n) of the image be
obtained as:

xLL(m,n) = (1/4){x(2m,2n) + x(2m+1,2n) + x(2m,2n+1) + x(2m+1,2n+1)},
0 ≤ m, n ≤ N/2 - 1


Figure 2 BUILDING- original image


Let CLL(k,l), 0 ≤ k, l ≤ N/2-1, be the 2D DCT of xLL(m,n).
Then the sub band approximation of the DCT of x(m,n) is
given by:

C(k,l) ≈ 4 cos(πk / 2N) cos(πl / 2N) CLL(k,l),  k, l = 0, 1, ..., N/2 - 1


It may be noted that, depending upon the definition of the
DCT, the sub band DCTs are multiplied by a factor (in this
case 4 cos(πk/2N) cos(πl/2N)). The definition of the inverse

Figure 3 BUILDING- Gray scale image

DCT (IDCT) should also be modified accordingly. The sub
band approximation may be further simplified as:

C(k,l) ≈ 4 CLL(k,l),  k, l = 0, 1, ..., N/2 - 1

We refer to this approximation as the low-pass truncated
approximation of the DCT. Interestingly, the multiplication
factor 4 appears due to the definition of DCT used in this
work. However, this factor does not have any effect on the
final results (PSNR values of downsized (halved) and then
upsized images). While halving an image, the DCT
coefficients for the N/2-point DCT are obtained by dividing
the N-point DCT coefficients.
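The sub band approximation above can be checked numerically. The sketch below (ours, not the authors' code) uses the unnormalized DCT definition with the factor 4 as in the text, builds the low-low sub band by 2x2 averaging, and compares C(k,l) against 4 cos(πk/2N) cos(πl/2N) CLL(k,l) for a smooth test image:

```python
import math

# Numerical check of the sub band approximation of the DCT,
# using the unnormalized definition with the factor 4 as above.

def dct2_unnormalized(x):
    """C(k,l) = sum_m sum_n 4 x(m,n) cos(pi(2m+1)k/2n) cos(pi(2n+1)l/2n)."""
    n = len(x)
    out = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for l in range(n):
            s = 0.0
            for m in range(n):
                for p in range(n):
                    s += (4 * x[m][p]
                          * math.cos(math.pi * (2 * m + 1) * k / (2 * n))
                          * math.cos(math.pi * (2 * p + 1) * l / (2 * n)))
            out[k][l] = s
    return out

N = 16
# A smooth test image: the (1,1) DCT basis function of size N.
x = [[math.cos(math.pi * (2 * m + 1) / (2 * N))
      * math.cos(math.pi * (2 * n + 1) / (2 * N)) for n in range(N)]
     for m in range(N)]

# Low-low sub band: 2x2 pixel averages, as in the xLL definition.
xLL = [[(x[2*m][2*n] + x[2*m+1][2*n] + x[2*m][2*n+1] + x[2*m+1][2*n+1]) / 4
        for n in range(N // 2)] for m in range(N // 2)]

C = dct2_unnormalized(x)
CLL = dct2_unnormalized(xLL)

k = l = 1
approx = (4 * math.cos(math.pi * k / (2 * N))
            * math.cos(math.pi * l / (2 * N)) * CLL[k][l])
rel_err = abs(C[k][l] - approx) / abs(C[k][l])
print(round(rel_err, 3))  # 0.019: the low-frequency coefficient survives
```

For this basis image the relative error is exactly 1 - cos⁴(π/2N), about 2% at N = 16, which is why the approximation works well for the visually significant low-frequency coefficients.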

Figure 4 BUILDING- DCT image

Experiments were carried out to study the
performance of three different images compressed at
different levels. In the context of JPEG compression, the
effect of quantization on the approximated coefficients
during image-halving or image-doubling should be observed
here. The PSNR values for different compression levels for
the Building, BMW car, and Peacock images are plotted in
the figures shown below. The performance in terms of bit
rate, mean square error, signal to noise ratio, and PSNR for
the images is also tabulated.
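The quality metrics tabulated here can be computed as follows (an illustrative Python sketch for 8-bit images; the sample pixel rows are made up, not data from the paper):

```python
import math

# Sketch of the MSE and PSNR metrics used in the performance table.

def mse(original, reconstructed):
    """Mean square error between two equal-length pixel sequences."""
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit data (peak = 255)."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

orig = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical original row
recon = [50, 56, 60, 66, 71, 60, 65, 72]  # hypothetical reconstruction
print(round(mse(orig, recon), 2))   # 1.25
print(round(psnr(orig, recon), 2))  # 47.16 (dB)
```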


Figure 5 BMW CAR- original image


Figure 6 BMW CAR- Gray scale image


Figure 10 PEACOCK - DCT image









Figure 7 BMW CAR - DCT image

Figure 11 Bit rate (bps) vs. PSNR (dB) for DCT based
image compression of Peacock, BMW car and Buildings
TABLE I - Performance Analysis


Figure 8 PEACOCK- original image



Figure 9 PEACOCK- Gray scale image














DCT based image compression is an efficient
technique for obtaining better quality images in
multimedia applications. The performance analysis of
three different images illustrates that the PSNR value
varies with bit rate, and also shows good mean square
error performance at various bit rates. The outputs
were obtained using MATLAB 8.
The future of this is that it can be implemented using

References
[1] A. M. Shams, W. Pan, and M. A. Bayoumi, "NEDA: A
low-power high-performance DCT architecture," IEEE
Trans. Signal Process., vol. 54, no. 3, pp. 955-964, Mar. 2006.
[2] M. R. M. Rizk and M. Ammar, "Low power small area
high performance 2D-DCT architecture," in Proc. Int.
Design Test Workshop, 2007, pp. 120-125.
[3] C. Peng, X. Cao, D. Yu, and X. Zhang, "A 250 MHz
distributed architecture of 2D 8x8 DCT," in Proc. Int.
Conf. ASIC, 2007, pp. 189-192.
[4] S. Kobayashi and K. Mita, Graduate School of
Engineering Science, Osaka University, "Rapid
prototyping of a JPEG encoder using the ASIP
development system PEAS-III."
[5] L. V. Agostini, I. S. Silva, and S. Bampi, "Pipelined fast
2D DCT architecture for JPEG image compression," in
14th Symposium on Integrated Circuits and Systems
Design, Pirenopolis, Brazil, 2001, pp. 226-231.
[6] Y.-L. Lee, J.-W. Yang, and J. M. Jou, "Design of a
distributed JPEG encoder on a scalable NoC platform,"
Department of Electrical Engineering, National Cheng
Kung University, Tainan, Taiwan, R.O.C., IEEE, 2008.
[7] P. Telagarapu et al., "Image compression using DCT and
wavelet transformations," International Journal of Signal
Processing, Image Processing and Pattern Recognition,
vol. 4, no. 3, 2011.
[8] V. Elamaran and A. Praveen, "Comparison of DCT and
wavelets in image coding," in Computer Communication
and Informatics (ICCCI), 2012 International Conference
on, IEEE, 2012.
[9] N. Thota and S. K. Devireddy, "Image compression
using discrete cosine transform," Georgian Electronic
Scientific Journal: Computer Science and
Telecommunications, no. 3.
[10] K. Saraswathy, D. Vaithiyanathan, and R. Seshasayanan,
"A DCT approximation with low complexity," in
Communications and Signal Processing (ICCSP), 2013
International Conference on, IEEE, 2013.



The World's Smallest Computer for Programmers and App Developers

(Raspberry Pi)
N.Abirami1, V.Goutham2

Assistant Professor, Computer Science & Engineering,

Sree Sastha Institute of Engineering and Technology, Chennai.


Student, B.E.(CSE) V-Semester, Computer Science & Engineering,

Sree Sastha Institute of Engineering and Technology, Chennai.

Abstract - The Raspberry Pi (RasPi) is an ultra-low-cost, single-board, credit-card sized Linux computer which was conceived
with the primary goal of teaching computer programming to
children. It was developed by the Raspberry Pi Foundation,
which is a UK registered charity. The foundation exists to
promote the study of computer science and related topics,
especially at school level, and to put the fun back into learning
computing. The device is expected to have many other
applications both in the developed and the developing world.
Raspberry-Pi is manufactured and sold in partnership with the
world-wide industrial distributors Premier Farnell/Element 14
and RS Components Company.
The Raspberry Pi has a Broadcom BCM2835 system on chip
which includes an ARM1176JZF- S 700 MHz processor, Video
Core IV GPU, and 256 megabytes of RAM. It does not include a
built-in hard disk or solid-state drive, but uses an SD card for
booting and long-term storage.
The Foundation provides Debian and Arch Linux ARM
distributions for download. Also planned are tools for supporting
Python as the main programming language, with support for
BBC BASIC, C and Perl.
The gadget looks rather odd next to sleek modern offerings such
as the iPad, and appears to have more in common with the
crystal radio sets of the 1950s. However, the machine is a fully-fledged computer and can be connected to a monitor, keyboard
and mouse, as well as speakers and printers.

Rob Dudley of The Raspberry Pi Foundation

designed this little board here, the Raspberry Pi, to
address a lost generation of computer programmers
and hardware engineers. So, this little board here is
low cost, it's easily accessible, it's very simple to
use. When you power it up you get a nice little
desktop environment, it includes all of the things
that you need to do to get started to learn
programming. There's lots of information out there
on the internet that you can take away and start
programming code in to make things happen.
The great thing about these boards as well is in
addition to software, you can play with hardware.
So these little general purpose pins here allow
access to the processor and you can hang off little
hardware projects that you build and you can
control via the code you are writing through the
software application. So, this is a great tool for kids
to learn how computers work at a grassroots level.
II.What is a Raspberry Pi?

Fig1 : Raspberry Pi Module



The Raspberry Pi is a credit-card sized computer

that plugs into a TV and a keyboard. It is a capable
little PC which can be used for many of the things
that your desktop PC does, like spreadsheets, word
processing and games. It also plays high-definition
video. The Raspberry Pi does not include leads, a
power supply or SD cards, but these can be
purchased separately. One can buy preloaded SD cards too. The
Raspberry Pi measures 85.60mm x 53.98mm x
17mm, with a little overlap for the SD card and


connectors which project over the edges. It weighs

around 45g. Overall real world performance is
something like a 300 MHz Pentium 2. The Raspberry
Pi cannot boot without an SD card. The Raspberry Pi
uses Linux kernel-based operating systems.
Raspbian, a Debian-based free operating system
optimized for the Raspberry Pi hardware, is the
current recommended system. The GPU hardware is
accessed via firmware image which is loaded into
the GPU at boot time from the SD-card. The
firmware image is known as the binary blob, while
the associated Linux drivers are closed source.
Application software uses calls to closed source run-time libraries, which in turn call an open source
driver inside the Linux kernel. The API of the
kernel driver is specific for these closed libraries.
Video applications use OpenMAX. There are a
number of operating systems running, ported, or in
the process of being ported to the Raspberry Pi,
such as AROS, Android 4.0, Arch Linux ARM,
Debian Squeeze, Firefox OS, etc.


ask the user for time information at boot time to get

access to time and date for file time and date
stamping. However, a real-time clock (such as the
DS1307) with battery backup can be added via the
I2C interface.
Hardware accelerated video (H.264) encoding
became available on 24 August 2012 when it
became known that the existing license also
covered encoding. Previously it was thought that
encoding would be added with the release of the
announced camera module. However, no stable
software support exists for hardware H.264
encoding. The new Raspberry Pi Model B's are
fitted with 512 MB instead of 256 MB RAM.

Fig4: Internals of Raspberry Pi

5v micro USB connector
There has been a lot of speculation about the
power supply design for the production Raspberry
Pi devices. The alpha boards use a pair of switch-mode power supplies to generate 5V and 3V3 rails
from a 6-20V input on a coaxial jack, and LDOs to
generate the low-current 2V5 and 1V8 rails for the
analog TV DAC and various I/O functions.

Fig 2: Raspberry Pi Board

III. What's the filling of a Raspberry Pi?
Initial sales were of the Model B, with Model A
following in early 2013. Model A has one USB port
and no Ethernet controller, and costs less than the
Model B with two USB ports and a 10/100 Ethernet
controller or the B+ with four USB ports.
Though the Model A does not have an 8P8C
(RJ45) Ethernet port, it can connect to a network by
using an external user-supplied USB Ethernet or
Wi-Fi adapter. On the model B the Ethernet port is
provided by a built-in USB Ethernet adapter. As is
typical of modern computers, generic USB
keyboards and mice are compatible with the
Raspberry Pi.
The Raspberry Pi does not come with a real-time
clock, so an OS must use a network time server, or


Fig 3: Power
1) RCA


An RCA connector, sometimes called a phono

connector or cinch connector, is a type of electrical
connector commonly used to carry audio and video
signals. The connectors are also sometimes casually
referred to as A/V jacks. The name "RCA" derives
from the Radio Corporation of America, which
introduced the design by the early 1940s for internal
connection of the pickup to the chassis in home
radio-phonograph consoles. It was originally a lowcost, simple design, intended only for mating and
disconnection when servicing the console.
Refinement came with later designs, although they
remained compatible.


2) HDMI
HDMI (High-Definition Multimedia Interface) is
a compact audio/video interface for transferring
uncompressed video data and compressed or
uncompressed digital audio data from an
HDMI-compliant source device, such as a display
controller, to a compatible computer monitor, video
projector, digital television, or digital audio device.
HDMI is a digital replacement for existing analog
video standards.

3) 3.5 mm
Personal computer sound cards use a 3.5 mm
phone connector as a mono microphone input, and
deliver a 5 V polarizing voltage on the ring to
power electret microphones. Compatibility between
different manufacturers is unreliable.

Fig5: Audio/Video

Universal Serial Bus (USB) is an industry
standard developed in the mid-1990s that defines
the cables, connectors and communications
protocols used in a bus for connection,
communication, and power supply between
computers and electronic devices. USB was
designed to standardize the connection of computer
peripherals (including keyboards, pointing devices,
digital cameras, printers, portable media players,
disk drives and network adapters) to personal
computers, both to communicate and to supply
electric power.

In computer networking, Fast Ethernet is a
collective term for a number of Ethernet standards
that carry traffic at the nominal rate of 100 Mbit/s,
against the original Ethernet speed of 10 Mbit/s. Of
the Fast Ethernet standards, 100BASE-TX is by far
the most common and is supported by the vast
majority of Ethernet hardware currently produced.
Fast Ethernet was introduced in 1995 and remained
the fastest version of Ethernet for three years before
being superseded by gigabit Ethernet.

General-purpose input/output (GPIO) is a
pin on an integrated circuit whose behavior,
including whether it is an input or output pin, can
be controlled by the user at run time. GPIO pins
have no special purpose defined, and go unused by
default. The idea is that sometimes the system
integrator building a full system that uses the chip
might find it useful to have a handful of additional
digital control lines, and having these available
from the chip can save the hassle of having to
arrange additional circuitry to provide them.


Fig6: Connectivity
1) SOC


A system on a chip or system on chip (SoC or

SOC) is an integrated circuit(IC) that integrates all
components of a computer or other electronic
system into a single chip. It may contain digital,
analog, mixed-signal, and often radio-frequency
functionsall on a single chip substrate. The
contrast with a microcontroller is one of degree.
Microcontrollers typically have under 100 kB of
RAM (often just a few kilobytes) and often really
are single-chip-systems, whereas the term SoC is
typically used for more powerful processors,
capable of running software such as the desktop
versions of Windows and Linux, which need
external memory chips (flash, RAM) to be useful,
and which are used with various external
peripherals.

4) CSI
The CAMIF, also called the Camera Interface
block, is the hardware block that interfaces with
different image sensor interfaces and provides a
standard output that can be used for subsequent
image processing. A typical camera interface would
support at least a parallel interface, although these
days many camera interfaces are beginning to
support the MIPI CSI interface. The Raspberry Pi
Foundation has released their Pi compatible camera
module that connects to the CSI. It is a 5 megapixel
camera with a fixed-focus lens.
camera with a fixed focused lens.
5) DSI
The display serial interface, or DSI as I will refer
to it from now on, is a high speed serial connector
located between the power connector and the GPIO
header on the Raspberry Pi. The purpose of the DSI
connector is to give the end user a quick and easy
way to connect an LCD panel to the Pi. In this case
the chip being interfaced with is the Broadcom
BCM2835, which is at the heart of the Raspberry Pi.

A wireless LAN controller is used in combination
with the Lightweight Access Point Protocol
(LWAPP) to manage light-weight access points in
large quantities by the network administrator or
network operations center. The wireless LAN
controller is part of the Data Plane within the Cisco
wireless model, and automatically handles the
configuration of wireless access points.

Fig7: Internals

The JTAG headers on the Raspberry Pi are
located near the audio jack. They are labeled P2 and
P3. JTAG stands for Joint Test Action Group.
Headers or pins with the JTAG label are mainly
used for debugging during the development of
embedded software and hardware. JTAG header P2
is connected to the Broadcom BCM2835. As you
may suspect from what I said about the DSI, it is all
closed source and there is virtually no way to use
this header. It is not a JTAG interface for the ARM
CPU like so many people always assume. JTAG
header P3 is connected to the LAN9512 LAN and
USB hub chip. It is only on the Model B Pi since
the Model A does not use the LAN9512. The
LAN9512 is a USB 2.0 bus and 10/100 Ethernet
controller.

Fig 8: Storage

Secure Digital (SD) is a non-volatile memory
card format for use in portable devices, such as
mobile phones and digital cameras.
IV. Uses Of Raspberry Pi


LibreOffice is a free and open source office suite,
developed by The Document Foundation. It was
forked from OpenOffice.org in 2010, which was an
open-sourced version of the earlier StarOffice. The
LibreOffice suite comprises programs to do word
processing, spreadsheets, slideshows, diagrams and
drawings, maintain databases, and compose
mathematical formulae.
LibreOffice uses the international ISO/IEC
standard OpenDocument file format as its native
format to save documents for all of its applications
(as do its OpenOffice.org cousins Apache
OpenOffice and NeoOffice).


Fig 9: LibreOffice

Scratch is a multimedia authoring tool that can be
used by students, scholars, teachers, and parents for
a range of educational and entertainment
constructivist purposes, from math and science
projects, including simulations and visualizations of
experiments, recording lectures with animated
presentations, to social sciences animated stories,
and interactive art and music. Simple games may be
made with it, as well. Viewing the existing projects
available on the Scratch website, or modifying and
testing any modification without saving it, requires
no online registration.

Python is a widely used general-purpose, high-level
programming language. Its design philosophy
emphasizes code readability, and its syntax allows
programmers to express concepts in fewer lines of
code than would be possible in languages such as C.
The language provides constructs intended to
enable clear programs on both a small and large
scale. Python supports multiple programming
paradigms, including object-oriented, imperative
and functional programming or procedural styles. It
features a dynamic type system and automatic
memory management, and has a large and
comprehensive standard library.

Fig 10: Python

A video game console is a device that outputs a
video signal to display a video game. The term
"video game console" is used to distinguish a
machine designed for consumers to use for playing
video games on a separate television, in contrast to
arcade machines, handheld game consoles, or home
computers.

Fig 11: Game console

Minecraft is a sandbox indie game originally
created by Swedish programmer Markus "Notch"
Persson and later developed and published by
Mojang. It was publicly released for the PC on May
17, 2009, as a developmental alpha version and,
after gradual updates, was published as a full


release version on November 18, 2011. A version

for Android was released a month earlier on
October 7, and an iOS version was released on
November 17, 2011. On May 9, 2012, the game
was released on Xbox 360 as an Xbox Live Arcade
game, as well as on the PlayStation 3 on December
17, 2013. Both console editions are being co-developed by 4J Studios. All versions of Minecraft
receive periodic updates.


Fig 12: Minecraft

Tor (previously an acronym for The Onion
Router) is free software for enabling online
anonymity and resisting censorship. The term
"onion routing" refers to application layers of
encryption, nested like the layers of an onion, used
to anonymize communication.
The NSA (National Security Agency) has a
technique that targets outdated Firefox browsers,
codenamed EgotisticalGiraffe, and targets Tor users
in general for close monitoring under its XKeyscore
program.

Fig13: Tor Router

A home theater PC (HTPC) or media center
computer is a convergence device that combines
some or all the capabilities of a personal computer
with a software application that supports video,
photo, audio playback, and sometimes video
recording functionality. An HTPC system typically
has a remote control, and the software interface
normally has a 10-foot user interface design so that
it can be comfortably viewed at typical television
viewing distances. An HTPC can be purchased
pre-configured with the required hardware and
software needed to add video programming or
music to the PC. Enthusiasts can also piece together
a system out of discrete components as part of a
software-based HTPC.

Fig 14: HTPC

4. Bartender
A robotic drink-dispensing rig is aiming to steal
your customers while pouring cocktail creations at
the push of a touchscreen button. Its creators call it
Operated through an iPad interface, the open
source, synthetic Al Swearengen holds up to 15
bottles of beverage plumbed into custom-designed,
Raspberry Pi-controlled pumps. It is capable of
mixing dozens of drinks, including black Russians,
Kahlua mudslides, or almost any other classy
beverage of your choosing.
A tiny Raspberry Pi serves as the brain, operating
up to 15 dispensers, which essentially suck booze
out of whatever bottles you've got handy, then mix







Fig 15: Bartender

The Raspberry Pi camera module can be used to
take high-definition video, as well as still
photographs. It is easy to use for beginners, but has
plenty to offer advanced users if you are looking to
expand your knowledge. There are lots of examples
online of people using it for time-lapse, slow-motion
and other video cleverness. You can also use the
libraries we bundle with the camera to create
effects. The camera module is very popular in
home security applications, and in wildlife camera
traps. You can also use it to take snapshots.

Fig 16: Camera

This simple project was to replace a radio clock
working from MSF with a clock based on NTP. It
just so happened that an older 10.2-inch LCD TV
became free, and I wanted to have a go at
programming the Raspberry Pi and learn just a little
(not too much) more about Linux.
The project involved setting up the Raspberry Pi
to display correctly on the TV (I converted a
Windows Testcard program to run on Linux to
check the display size), using the Free Pascal and
Lazarus IDE to compile a suitable wall-clock
program, learning how to auto-login on the
Raspberry Pi, and how to start a program
automatically on the desktop (i.e. using the GUI).

Fig 17: Clock


One of the coolest little developer boards out
there is the Raspberry Pi. That board can be used
for any project needing electronics for control that
you can dream up. A new robotics kit made for use
with the Raspberry Pi is getting ready to hit the
market. That robotics kit is called the PiBot. PiBot will
have a range of features that electronics fans will
appreciate including voice recognition, face
recognition, and live HD streaming from the PiBot
camera. The robot will be controllable from a
smartphone and tablet.
PiBot will also be able to follow lines, measure
distance, and use GPS. The company behind the
PiBot also plans to have workshops that will allow
people to come in and play with the robotics kit.
There are a few things we don't know at this time
such as when the PiBot will hit market and how
much it will cost.

Fig 18:PiBot



V.Advantages of the Pi
1. Power consumption - The Pi draws about
five to seven watts of electricity. This is about one
tenth of what a comparable full-size box can use.
Since servers are running constantly night and day,
the electrical savings can really add up.
2. No moving parts - The Pi uses an SD card
for storage, which is fast and has no moving parts.
There are also no fans or other such things to worry
about.
3. Small form factor - The Pi (with a case) can
be held in your hand. A comparable full-size box
cannot. This means the Pi can be integrated inside
of devices, too.
4. No noise - The Pi is completely silent.
5. Status lights - There are several status lights
on the Pi's motherboard. With a clear case you can
see NIC activity, disk I/O, power status, etc.
6. Expansion capabilities - There are numerous
devices available for the Pi, all at very affordable
prices. Everything from an I/O board (GPIO) to a
camera. The Pi has two USB ports, however by
hooking up a powered USB hub, more devices can
be added.
7. Built-in HDMI capable graphics - The
display port on the Pi is HDMI and can handle
resolutions up to 1920 x 1200, which is nice for
making the Pi into a video player box, for example.
There are some converters that can convert to VGA
for backwards compatibility.
8. Affordable - compared to other similar
alternatives, the Pi (revision B) offers the best specs
for the price, at least that I've found. It is one of the
few devices in its class that offers 512 MB of RAM.
9. Huge community support - The Pi has
phenomenal community support. Support can be
obtained quite easily for the hardware and/or
GNU/Linux software that runs on the Pi mainly in
user forums, depending on the GNU/Linux
distribution used.
10. Overclocking capability - The Pi can be
overclocked if there are performance problems with
the application used, but it is at the user's risk to do so.
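As a rough illustration of advantage 1 above, the annual electricity cost of a Pi versus a full-size box can be computed. The wattages follow the figures quoted above, but the electricity tariff here is an assumed number, not one from this article:

```python
# Back-of-the-envelope sketch of advantage 1 (power consumption).
# PI_WATTS follows the 5-7 W figure above; the desktop wattage and
# tariff are assumed values for illustration only.

PI_WATTS = 6          # midpoint of the 5-7 W quoted above
DESKTOP_WATTS = 60    # "about one tenth" of a Pi's draw, per the text
PRICE_PER_KWH = 0.15  # assumed electricity tariff

def annual_cost(watts, price_per_kwh=PRICE_PER_KWH):
    """Cost of running a device 24/7 for a year."""
    kwh = watts * 24 * 365 / 1000
    return kwh * price_per_kwh

print(round(annual_cost(PI_WATTS), 2))       # 7.88
print(round(annual_cost(DESKTOP_WATTS), 2))  # 78.84
```

For an always-on server, the order-of-magnitude difference is where "the electrical savings can really add up."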


VI.Drawbacks of the Pi
With all of the positive things about the Pi, there
are a couple of items that I feel are very minor:
1. ARM architecture - While ARM is a highly
efficient and low powered architecture, it is not x86
and therefore any binaries that are compiled to run
on x86 cannot run on the Pi. The good news is that
entire GNU/Linux distributions have been compiled
for the ARM architecture and new ones are
appearing all of the time.
2. RAM not upgradable - The main
components of the Pi are soldered to the
motherboard, including the RAM, which is 512 MB. This is not a problem, though, as GNU/Linux can easily run on this. I've found the Pi uses about 100
MB of RAM while running as a small server (this is
without running X11).
Today virtualisation is very popular so some may
say that the cost of spinning up a virtual machine is
less than running a Raspberry Pi. But calculate the power consumption of your hypervisor and weigh the differences to see which method in fact costs less overall. Sometimes, a physical box or
physical segmentation is needed, or avoiding high
costs of running a full hypervisor is a factor, and
this is where the Pi can step in.





International Association of Engineering and Technology for Skill Development


Proceedings of International Conference on Advancements in Engineering and Technology




Enhancing Vehicle to Vehicle Safety Message

Transmission Using Randomized Algorithm
P.Ashok Ram
M.E-Computer Science and Engineering
Sree Sowdambika College Of Engineering
Aruppukottai, Tamil Nadu, India

Abstract: Reliable local information transmission is the

primary concern for periodic safety broadcasting in
VANETs. We suggest a sublayer in the application layer of
the WAVE stack to increase the reliability of safety
applications. Our design uses retransmission of network
coded safety messages, which significantly improves the
overall reliability. It also handles the synchronized
collision problem defined in WAVE as well as congestion
problem and vehicle-to-vehicle channel loss. We
recommend a discrete phase type distribution to model the
time transitions of a node state. Based on this model, we
derived network coding. Simulated results based on our
analysis show that our method performs better than the
previous repetition-based algorithms.
Keywords: Ad-hoc networks, network architectures and
protocols, multiple access techniques, systems and services,
quality of service assurance.



VEHICULAR Ad-hoc NETworks (VANETs) have emerged as a potential framework for future active safety
systems. Wireless Access in Vehicular Environment (WAVE)
architecture is presented in the IEEE 1609.0 standard. IEEE
802.11p has been approved as an amendment to the IEEE 802.11 standard and specifies the MAC layer enhancements for vehicular environments. The notable changes are an increased maximum transmission power and a channel bandwidth reduced to 10 MHz, which provide more reliable
communication. The IEEE 1609.4 standard also defines some
enhancements to the MAC layer to support multi-channel
operation. Both IEEE 1609.4 and 802.11p are part of the
WAVE architecture.
Periodic broadcast and its related safety applications are one of the major driving forces for implementing VANETs [1]. In VANETs, a safety message is periodically generated (at a 10 Hz frequency) and broadcast to one-hop neighbor vehicles.
These periodic heartbeat messages are the building blocks
of many safety applications.


Mr. V. Ramachandran, M.Tech
Assistant Professor (Computer Science and Engineering)
Sree Sowdambika College Of Engineering
Aruppukottai, Tamil Nadu, India

IEEE 1609.4 explains how channel coordination is achieved for a WAVE device with a single radio [2]. The time is divided
into 100ms sync intervals. Each sync interval consists of a
Service CHannel (SCH) and a Control CHannel (CCH) (Fig.
1). Every 50ms the WAVE device switches to CCH (channel
178). The periodic transmission of Wave Short Messages
(WSM) takes place in the CCH intervals. SCH intervals are
used for IP packets. Since WAVE devices must be on the very same channel in order to be able to communicate, the channel switching for all nodes is synchronized. The synchronization mechanism complies with the IEEE 1609.4 standard.

Fig. 1. IEEE 1609.4 periodic channel switching.
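As a minimal illustration of the switching schedule just described (a sketch assuming perfectly synchronized clocks and ignoring the guard interval at each switch), the channel a radio occupies at any instant can be computed directly:

```python
# Sketch of the IEEE 1609.4 channel alternation described above: 100 ms sync
# intervals, the first 50 ms on the Control Channel (CCH, channel 178) and
# the second 50 ms on a Service Channel (SCH). Guard intervals are omitted
# for simplicity (an assumption of this sketch).

SYNC_INTERVAL_MS = 100
CCH_INTERVAL_MS = 50

def active_channel(t_ms: float) -> str:
    """Return which channel a synchronized WAVE radio uses at time t_ms."""
    offset = t_ms % SYNC_INTERVAL_MS
    return "CCH" if offset < CCH_INTERVAL_MS else "SCH"

if __name__ == "__main__":
    for t in (0, 49, 50, 99, 100, 175):
        print(t, active_channel(t))
```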
The periodic broadcast is shown to perform well for low
node densities (less than 10 nodes). In a dense network,
however, congestion becomes a serious problem. It can
generate an excessive number of collisions and result in
unacceptable reliability measures for safety applications.
In [3] and [4], the authors have suggested a congestion control mechanism based on the channel occupancy. The message rate is adjusted in conjunction with the transmission range to alleviate the congestion problem. Other than the
message rate control, adaptive control of the contention window size based on the node density is another way to avoid congestion. The congestion problem is also described in IEEE 1609.4, where many nodes have a newly generated WSM for transmission and all switch to the CCH at the same time (synchronized collision). In this scenario, a collision is expected if two nodes pick the same timeslot in the contention window. Naturally, for high node densities the collision probability increases.
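To make the synchronized-collision effect concrete, the following Monte Carlo sketch estimates the probability that at least two of n simultaneously switching nodes draw the same backoff slot. The contention window size CW = 16 and the trial count are illustrative assumptions, not values taken from the standard:

```python
# Monte Carlo sketch of the synchronized-collision scenario: n nodes switch
# to the CCH simultaneously and each draws a backoff slot uniformly from a
# contention window of cw slots; a collision occurs whenever two nodes draw
# the same slot. cw = 16 is an illustrative value.
import random

def collision_probability(n: int, cw: int = 16, trials: int = 20000) -> float:
    collisions = 0
    for _ in range(trials):
        slots = [random.randrange(cw) for _ in range(n)]
        if len(set(slots)) < n:      # at least two nodes share a slot
            collisions += 1
    return collisions / trials

if __name__ == "__main__":
    for n in (2, 5, 10, 16):
        print(n, round(collision_probability(n), 3))
```

As expected, the estimate rises quickly with n, matching the observation that high node densities make synchronized collisions likely.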
Aside from the congestion problem, the unreliable vehicle-to-vehicle channel and channel errors are other concerns for
safety applications. In the IEEE 802.11p broadcast mode,

International Association of Engineering and Technology for Skill Development


Proceedings of International Conference on Advancements in Engineering and Technology

there is no acknowledgement. Also, unlike the unicast
transmissions, there is no rebroadcasting of lost messages. In [5], several repetition-based access schemes are suggested to guarantee a low message loss probability. Although message
repetition improves the reliability, it can potentially aggravate
the congestion problem.
We have already studied the advantage of network coding in safety applications [6]. We showed, through simulations, that network coding can significantly improve successful message delivery. In this paper, we extend our previous results through
analysis, and suggest a more comprehensive design. We
propose a sub-layer in the application layer of the WAVE
architecture (Fig. 2). This sub layer handles the periodically produced messages in the application layer to improve the
overall reliability of the updated local neighborhood map and
cope with the congestion problem and channel loss. Solid
arrows in Fig. 2 represent the handshakings between the
sublayer and the rest of the networking stack. The dashed
arrow represents the timing feedback from the MAC layer
over the intermediate layers. Since our sub layer is at the
application layer it needs the timing information for
synchronization. For example, the synchronization is
necessary for the congestion control algorithm in order to
transmit in predefined subframes. We study how and when
repeating the message can be helpful to achieve more
reliability. In our scheme, each node can not only send its own WSM, but can also transmit a random linear combination of already received WSMs to cooperatively help deliver all messages.
The low-level design and the exact definition of the protocol are beyond the space and scope of this paper, which aims to deal with the problem systematically. The main purpose of this paper is the systematic study of random linear network coding in the periodic transmission of heartbeat
messages. Moreover, we introduce a novel method based on
the pseudo random number generator to minimize the network
coding overhead. This is especially critical for small safety message sizes. Repetitive safety message rebroadcasting is an efficient way to improve the reliability; on the other hand, congestion is a serious problem in vehicular networks. We also
show, through simulation, that in a congested network,
network coding can substantially minimize the safety message
loss probability. Finally, the network coding algorithm is
implemented in ns-2 which provides a more realistic
simulation framework.


The two main components of the suggested sub-layer in

this paper are safety message retransmission and the use of
network coding for safety message broadcasting. In the
following subsections, first we review the earlier work on safety message repetition, and then we discuss some of the previous works on the application of network coding in vehicular networks.



Fig. 2. Reliability sub-layer: interfaces with WAVE Short Message

Protocol (WSMP), sensors, and safety applications.

We also review the applications of network coding in gossip-based algorithms, which are closely related to the application in this paper.
A. Message Rebroadcasting:
Repetitive broadcast of safety messages in VANETs was first proposed in [7]. When a safety message is produced, it should be delivered to its neighbors within the message lifetime. The time is slotted and the message lifetime is assumed to be L time slots. In IEEE 802.11p, safety messages are only
transferred once during a time frame. This is due to the fact
that in the IEEE 802.11p broadcast mode, there is no retransmission or acknowledgement. In Synchronized Fixed
Repetition (SFR), w time slots are randomly chosen (out of L)
for rebroadcasting. In Synchronized Persistent Repetition
(SPR) at each time slot a message is transferred with
probability p. To limit the number of collisions, Positive
Orthogonal Codes (POC) are suggested [8]. The retransmission pattern of each node is assigned based on predetermined
binary codes. The 1s represent the transmitting time slots in
the frame. Any pairwise shifted version of two POC code
words has limited correlation. This limited correlation has
been shown to increase the message delivery ratio by limiting
the number of collisions.
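The limited-correlation property can be checked numerically. In the sketch below, C1 and C2 are hypothetical binary codewords (not actual POC codewords from [8]); the helper computes the largest number of coinciding 1s over all cyclic shifts of one codeword against the other:

```python
# Sketch of the correlation property behind Positive Orthogonal Codes: the
# number of coinciding 1s between one codeword and any cyclic shift of
# another is kept small, which limits collisions between retransmission
# patterns. C1 and C2 are hypothetical examples, not codewords from [8].

C1 = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # hypothetical codeword, weight 3
C2 = [1, 0, 1, 0, 0, 0, 0, 0, 1, 0]  # hypothetical codeword, weight 3

def cyclic_shift(word, s):
    return word[s:] + word[:s]

def max_cross_correlation(a, b):
    """Largest number of overlapping 1s over all cyclic shifts of b."""
    return max(sum(x & y for x, y in zip(a, cyclic_shift(b, s)))
               for s in range(len(b)))

if __name__ == "__main__":
    # stays strictly below the codeword weight, bounding collisions
    print(max_cross_correlation(C1, C2))
```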
B. Network coding:
In [9], the authors proved that random linear network coding can achieve the multicast capacity of a lossy wireless network. The work in [12] was not proposed for vehicular networks, but it is the closest to our work in terms of techniques. This well-known result cannot be extended to the application of network coding to safety message broadcasting in VANETs. In a typical vehicular network, the number of vehicles in a one-hop neighborhood does not grow asymptotically. In addition, the analysis only considers long-term throughput, but for delay-sensitive safety messages the short-term throughput, or the successful message reception within a small time window, is of interest. More specifically, we derive an upper bound for the message loss probability in the CCH interval (50 ms according to the IEEE 1609.4 standard), which cannot be inferred from that analysis.


1) Vehicular Networks: Most of the earlier work on the application of network coding in vehicular networks deals with content distribution from a Road Side Unit (RSU) to multiple On Board Units (OBUs), or considers only throughput performance [10]. To the best of our knowledge, there are only a few works on the application of network coding to safety message broadcasting. In [11], Symbol-Level Network Coding (SLNC) is used for
multimedia streaming from RSUs to vehicles. In [18], SLNC
is utilized for content distribution, in order to maximize the
download rate from the access points. SLNC is shown to
achieve excellent performance compared to packet level
network coding in unicast transmissions. This is effective for large packet sizes. For small safety messages, if each message is split into smaller symbols, the network coding overhead can be comparable to the symbol size, leading to network inefficiency. To address this drawback, our proposed network
coding algorithm does not need any message decomposition.
2) Gossip Algorithms: In the context of gossip algorithms, the problem of diffusing k messages in a large network of n nodes is considered. At each time slot, each node chooses a communication partner uniformly at random and only one message is transmitted, while in the broadcast scenario potentially more than one node can receive the message. In [12], the network is assumed to be large and asymptotic bounds are considered. In contrast, in a typical VANET topology n does not grow asymptotically.
In the gossip algorithms, at each time slot, each node can communicate with at most one neighbor, while a node can be contacted by multiple nodes. Our communication mechanism follows the opposite scenario: each node can potentially communicate with multiple nodes, but if a node is contacted by multiple nodes simultaneously, there will be a collision. In the PULL mechanism, a node i contacting node j leads to a transmission from node j to node i. If every node contacts node j, node j broadcasts a message to all nodes, which can be cast as a broadcast transmission. However, in a single-cell scenario, if more than two nodes transmit simultaneously, the transmissions collide on the wireless channel, while in the gossip communication model all transmissions are successful.
The information dissemination considered in gossip algorithms intends to deliver global information through local communication. However, in our problem, fast local information dissemination is achieved through local communication, and vehicles are not specifically interested in global information.
More recently, broadcast gossip algorithms have been proposed for sensor networks to compute the average of the initial node measurements over the network. It has been shown that broadcast gossip almost surely converges to a consensus. In safety broadcasting, however, nodes are interested in receiving the state information of all neighbor nodes, not in achieving consensus over the network.



This section describes the different components of the proposed sub layer in Fig. 2. The message is produced based on the sensor output and according to the format dictated by SAE J2735 (the Society of Automotive Engineers DSRC Message Set Dictionary standard).
essentially the network coding overhead. Instead of passing the message directly to the WAVE stack, our sublayer combines the received messages, attaches the network coding overhead (the sublayer header), and sends the result to the lower layer.
The proposed reliability sub layer consists of two parts.
The congestion control algorithm determines if a node should
be active in a given CCH interval. If the node is not active in a
CCH interval, the produced message will be dropped. If the
node is active, it probabilistically transmits at each time slot
during a CCH interval. We assume that the backoff parameters of IEEE 802.11p have been set to the minimum and that physical carrier sensing is disabled. This can be done
through the wireless card driver interface and ensures an
instant transmission when a message is sent to the lower layer.
The congestion control algorithm filters the produced
messages in order to reduce the number of active nodes in
each CCH interval. The Synchronized Persistent Coded
Repetition (SPCR) algorithm defines the main functionality of
the sub layer for transmission. At each time slot a random
linear combination of all the queued messages can be
transmitted by an active node.
The following subsections detail the functionality and
analysis of these components. First, we introduce the
congestion control algorithm that limits the number of active
nodes in a CCH interval. Then, the analysis for message
retransmission is presented. Finally, message coding based on random linear network coding is utilized to further increase the success probability.
A. Congestion Control:
When the number of vehicles in the cluster, N, is large, the message reception probability drops considerably. We first derive Ps(n) for IEEE 802.11p to see how it behaves as the number of vehicles in a cluster increases.
Theorem 1. The success probability for the IEEE 802.11p broadcast mode in a symmetric erasure channel, with n active nodes at the beginning of the CCH and contention window size CW, can be written as:

Here X is the number of idle timeslots in a subframe in which n packets have been successfully transmitted, and is
given by:

Here TCCH is the CCH interval, Th is the PLCP interval,


M is the message size, R is the channel rate, σ is the timeslot interval, Tg is the guard time, AIFS is the arbitration interframe spacing, and pe is the erasure probability.
Proof: To find the success probability, we should find the probability that all nodes pick distinct backoff counters (Ci) and that all transmissions are completed within the CCH period TCCH. Each successful transmission takes (Th + M/R + AIFS) seconds, so the maximum backoff counter must be less than min(X + 1, CW). The success probability can therefore be written as follows, for all i, j with 1 ≤ i, j ≤ n and i ≠ j:

The first term is the complement of the Birthday Paradox problem when we have n people and min(X + 1, CW) days in a year, and is equal to:

The second probability is that all Ci's are less than or equal to this bound.
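The birthday-paradox term can be evaluated numerically. The sketch below computes only that first factor, the probability that n nodes draw distinct backoff counters from D = min(X + 1, CW) equally likely values; it is not the full expression for Ps(n):

```python
# The first term in Theorem 1 is the complement of the Birthday Paradox with
# n "people" and D = min(X + 1, CW) "days": the probability that all n nodes
# draw distinct backoff counters from D equally likely values,
#     P(distinct) = prod_{k=0}^{n-1} (D - k) / D.
# This sketch evaluates that term only.

def prob_all_distinct(n: int, d: int) -> float:
    if n > d:
        return 0.0          # pigeonhole: a collision is certain
    p = 1.0
    for k in range(n):
        p *= (d - k) / d
    return p

if __name__ == "__main__":
    # e.g. 10 nodes drawing from a 64-value window
    print(round(prob_all_distinct(10, 64), 4))
```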

To see how the IEEE 802.11p broadcast mode performs when there is traffic congestion, we evaluated Ps(N) with the parameters in Table I and for various erasure probabilities. The message size is assumed to be 200 bytes. Fig. 3 shows how the probability of success declines as the number of nodes N increases. Even when there is no channel loss (pe = 0), the success probability degrades to less than 0.3 for 50 nodes. For higher erasure probabilities, the congestion problem is even more severe.

Fig. 3. Success probability vs. number of nodes: IEEE 802.11p broadcast


Ideally, we are interested in delivering all N messages with high probability, but for higher traffic densities this is unattainable. The main solution to the congestion problem is message rate control. The rate control algorithm drops some of the



messages to reduce congestion. This is the only way to manage excessive collisions and provide an overall acceptable reliability. Congestion control depends on the system requirements and constraints, and it can be implemented both in the application layer and in the MAC layer. Since IEEE 802.11p has been released and network cards based on this standard are already on the market, solutions that are not based on MAC modifications are more attractive. Several effective congestion control algorithms proposed for VANETs are implemented in the application layer.
Here, we adopt a simple rate control algorithm that randomly filters the load. At the beginning of every frame, each node randomly picks w sub frames and transmits only in those sub frames. Hence, the number of active nodes in a sub frame is a random number n ≤ N. Unlike the congestion
control algorithm in which the rate is controlled based on the
channel occupancy feedback, the feedback parameter in our
method is the number of neighbors N, which is derived based
on the most up-to-date neighborhood map.
Since MTM is a random variable, we use E(MTM) for performance comparisons. The w that maximizes E(MTM) is set based on N. There are C(Lf, w) ways to choose w sub frames for each of the N users. In order to have n active users in a given sub frame, N − n users must not choose that sub frame, and the n active nodes must choose their w − 1 other subframes out of the remaining Lf − 1 subframes. Therefore, the expected value of MTM is:

By maximizing E(MTM), the optimal value for w can be found.
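The rate-control mechanism can be checked by simulation. In the sketch below, each of N nodes picks w of the Lf subframes, so the number of active nodes in a fixed subframe follows a Binomial(N, w/Lf) distribution with mean N·w/Lf; the parameter values are illustrative assumptions:

```python
# Sketch of the rate-control mechanism above: each of N nodes picks w of the
# Lf subframes per frame and is active only in those. The number of active
# nodes in a fixed subframe then follows Binomial(N, w/Lf).
import random

def simulate_active_counts(N=50, Lf=10, w=2, frames=5000):
    """Count, per frame, how many nodes are active in subframe 0."""
    counts = []
    for _ in range(frames):
        active = sum(1 for _ in range(N)
                     if 0 in random.sample(range(Lf), w))
        counts.append(active)
    return counts

if __name__ == "__main__":
    counts = simulate_active_counts()
    mean = sum(counts) / len(counts)
    print("empirical mean:", round(mean, 2), " expected:", 50 * 2 / 10)
```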
B. Message Coding:
In this section, we propose a novel algorithm that uses
Random linear network coding in combination with message
rebroadcasting. Based on the introduced repetition-based
scheme in the earlier section, all nodes potentially have
multiple transmission opportunities in a subframe. The same
copy of the message is retransmitted to account for the
channel loss. However, a node can linearly combine the
already heard messages and transmit the coded message. In
this section, we consider the application of random linear
coding together with SPR. We call this algorithm SPCR
(Synchronized Persistent Coded Repetition).
The random linear coding algorithm is simple: each node enqueues all the received messages and, when it has a broadcasting opportunity based on its retransmission pattern, it broadcasts a random linear combination of all the already received messages in its queue, with coefficients in GF(q) (the Galois field of order q). At the end of the sub frame, if a node has n linearly independent coded vectors, it can decode all the original packets. Next, all nodes empty their queues and start a new transmission for the next time slot.
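A minimal sketch of this encode/decode cycle is given below. For simplicity it works over the prime field GF(257) instead of the GF(2^8) used later in the text (an assumption made only to keep the field arithmetic short); encode draws a random coefficient vector over all queued messages, and decode performs Gaussian elimination once enough packets have arrived:

```python
# Sketch of the SPCR coding step: each coded packet carries its coefficient
# vector plus the linear combination of the queued messages; a receiver
# decodes once it has n linearly independent packets. GF(257), a prime
# field, stands in for GF(2^8) purely for simplicity.
import random

P = 257  # prime field modulus (illustrative stand-in for GF(2^8))

def encode(messages, rng=random):
    """Random linear combination of all queued messages (lists mod P)."""
    mlen = len(messages[0])
    coeffs = [rng.randrange(P) for _ in messages]
    payload = [sum(c * m[i] for c, m in zip(coeffs, messages)) % P
               for i in range(mlen)]
    return coeffs, payload

def decode(packets, n):
    """Gaussian elimination mod P; returns the n originals or None."""
    rows = [list(c) + list(p) for c, p in packets]
    for col in range(n):
        piv = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if piv is None:
            return None                      # not yet full rank
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], -1, P)     # modular inverse of the pivot
        rows[col] = [(x * inv) % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P
                           for a, b in zip(rows[r], rows[col])]
    return [rows[i][n:] for i in range(n)]
```

With a handful of random packets from three nodes, decode returns all three original messages as soon as the received coefficient vectors span GF(257)^3, mirroring the rank condition stated above.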
To explain the overall structure of the reliability sub-layer,
here we give a summary of how the different components


work together. Each frame consists of Lf subframes (CCH intervals). The congestion control determines in which sub frames a node is active (w sub frames are selected at random). In the active subframes, nodes transmit a random linear combination of all the messages in their queues. Initially, each node has only its own message in the queue. As time goes by and nodes overhear other broadcasts, their subsequent broadcasts are no longer only their own messages, but random linear combinations of a subset depending on their queue contents. Algorithm 1 outlines the transmission algorithm of the sublayer. All nodes actively listen to the channel when they are not transmitting, and queue all the received packets. The queue is reset at the end of each CCH interval.


Let us consider a single cell of n active nodes (u1, u2, …, un).

The time is slotted and the nodes are synchronized. Each sub frame consists of L timeslots. At the beginning of each
subframe each active node produces a message that should be
received by all other nodes within the sub frame. The
produced message is retransmitted several times during a sub
frame. When a node has a transmission opportunity, it
transmits a random linear combination of all already received
packets in its queue. If mik represents the kth rebroadcast of ui's message, then mik can be expressed as a linear combination of all the original messages, mik = c1 m1 + c2 m2 + … + cn mn, in which cj is a random coefficient in GF(q) and mj is the original produced message of node uj. The coefficient vector ci = (c1, c2, …, cn) is a vector in GF(q)^n. Note that some of the cj's can be zero. If a node receives n linearly independent equations, it can decode all the original messages. The coefficient vectors for the original messages at the start of a subframe are the standard basis of GF(q)^n.
A. Coding Overhead:
In random linear network coding, the random coefficients
should be attached to the coded messages. In a congested
network with large n, the coding overhead can be comparable to the size of the message. For example, for GF(2^8), if there are 200 vehicles in a cluster, the maximum coding overhead will be 200 bytes, which is comparable to the message size of 200-500 bytes.
To reduce the overhead, the authors have proposed to attach
the seed of the random number generator. The seed specifies
the sequence of random coefficients in a coded message. New
coded messages can be produced from receiving coded
messages. In this case, the seeds of all the included coded
messages should be attached to the new coded message. This
can potentially result in excessive overhead. As a result, the introduced algorithm can only encode original messages. In our proposed algorithm, however, we need to encode all the received coded messages. In the following, we show how we can indeed find the seed corresponding to the new coded message, which considerably reduces the overhead.
Linear Feedback Shift Registers (LFSRs) are an efficient



way of implementing PRNGs (Pseudo-Random Number Generators) in a Galois field. An LFSR implementation based on a primitive polynomial of GF(2^m) has m stages and a period of 2^m − 1. The state of the shift register represents the binary coefficients of the polynomial corresponding to a member of the Galois field. Based on the starting seed (the LFSR state), the sequence of states of the LFSR is equivalent to a sequence of random numbers from the Galois field. The following lemma shows how the seed of the random coefficients of a random linear combination of coded messages relates to the seeds of the coded messages.
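The LFSR mechanism can be illustrated with a small register. The sketch below implements a 4-bit Fibonacci LFSR whose feedback taps realize a primitive degree-4 polynomial, so every nonzero seed traverses all 2^4 − 1 = 15 nonzero states; the register width and tap choice are illustrative assumptions:

```python
# Sketch of an LFSR over GF(2^m) for m = 4. The feedback is taken from the
# two most significant bits, a tap choice corresponding to a primitive
# degree-4 polynomial, so the register is maximal-length: every nonzero
# seed cycles through all 2**4 - 1 = 15 nonzero states, and the seed fully
# determines the coefficient sequence.

M = 4  # register width in bits

def lfsr_step(state: int) -> int:
    feedback = ((state >> 3) ^ (state >> 2)) & 1   # XOR of the top two bits
    return ((state << 1) | feedback) & 0xF

def period(seed: int) -> int:
    """Number of steps until the register returns to its seed state."""
    state, steps = lfsr_step(seed), 1
    while state != seed:
        state = lfsr_step(state)
        steps += 1
    return steps

if __name__ == "__main__":
    print("period from seed 1:", period(1))   # 2**M - 1 for a primitive polynomial
```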


In this section, we present realistic simulation results

based on the introduced Nakagami channel model in the
earlier section. The simulator is implemented in MATLAB based on the assumed channel and system model. Unlike the
earlier sections, it is assumed that the erasure probability
changes with distance. We assume 20 nodes are spaced
horizontally with a spacing of 25m in a 500m road segment.
Nodes are indexed from 1 to 20, in order, from left to right. A
Nakagami channel model with the same parameter as previous
section is utilized. All nodes broadcast with a channel rate of
12Mbps. The message size is 200 Bytes. The transmission
power is assumed to be 20dBm for each node. The simulation
results are averaged over 10000 runs. The loss probability (1 − Ps(n)) versus the node index can be seen in Fig. 4. It is observed that the SPCR loss probability of all nodes is almost the same. For SPR, the loss probability depends on the
location and is higher for all nodes. The location independent
performance of SPCR is due to the cooperative nature of
network coding. All nodes act as relays for their neighbors by
including the messages of farther nodes in their coded packets.
In SPR, even though message rebroadcasting can mitigate poor channel conditions, since every node only repeats its own message, the edge nodes are still not able to receive all
messages within a CCH interval. Although the derived loss probability upper bound in Theorem 2 is only valid for a symmetric erasure network, we have evaluated the bound for the maximum pe in the network.
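The distance-dependent erasure behavior can be sketched as follows: under a Nakagami-m channel, the received power is Gamma-distributed with mean set by a d^-alpha path-loss law, and a message is erased when the power falls below the reception threshold. The values of m, alpha, the transmit power, and the threshold below are illustrative assumptions, not the paper's exact simulation parameters:

```python
# Monte Carlo sketch of a distance-dependent erasure probability under a
# Nakagami-m channel: received power is Gamma(m, omega/m)-distributed with
# mean omega following a d**-alpha path loss. All parameter values are
# illustrative assumptions.
import random

def erasure_probability(d, m=3.0, alpha=2.0, p_tx=1.0, threshold=1e-5,
                        trials=20000, rng=random):
    omega = p_tx * d ** (-alpha)      # mean received power at distance d
    erased = 0
    for _ in range(trials):
        power = rng.gammavariate(m, omega / m)
        if power < threshold:         # below reception threshold -> erasure
            erased += 1
    return erased / trials

if __name__ == "__main__":
    for d in (50, 250, 500):
        print(d, "m:", round(erasure_probability(d), 3))
```

As in the simulations above, the erasure probability grows with distance, which is why edge nodes see worse channels than centrally located ones.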

Fig. 4. Loss probability for all nodes: n = 20.

To further evaluate the performance under a more realistic

network model, SPCR and SPR have been implemented in the
ns-2 simulator. Unlike the assumed channel model for our
analysis in ns-2 some of the collisions can be resolved due to
capture. The transmission power for every node is set to


760mw (Class D in the IEEE 802.11p standard) and the
transmitter and receiver antenna gain is 2. The message size is
200 Bytes. The radio frequency, reception and carrier
threshold have been set according to the IEEE 802.11p
standard. A 4-lane road segment of length 500m and 1km is
assumed. In each lane a vehicle is placed uniformly every d
meters. Four traffic densities of 10, 15, 20 and 25 nodes per
lane are assumed. All nodes are assumed to be active. The
lower node densities correspond to highways in which the
vehicle interspacing is larger. Higher traffic densities represent
the urban areas. The simulations are performed for 100s. The
average loss probability over all nodes and within the
simulation time is calculated. The simulation results for channel rates of 12Mbps and 27Mbps (the maximum rate of IEEE 802.11p) and for the 500m road segment topology can be seen in Fig. 13. It is observed that SPCR can significantly benefit from the rate increase. For example, for n = 100, in SPCR, increasing the rate from 12Mbps to 27Mbps drops the average loss probability by an order of magnitude, to less than 0.05, while for SPR the average loss probability remains at 1. This demonstrates the significant gain that can be achieved through network coding, especially in dense topologies. As we
mentioned in the earlier section, increasing the rate increases
the number of time slots in a sub-frame which provides more
opportunities for broadcasting the coded messages. Unlike the
SPR in which the additional transmission chance are not
necessarily helpful for all the receivers, in SPCR, most of the
receivers can advantage from the received coded message by
expanding their subspace of receiving coded vectors. The
simulation results for the 1km road segment topology can be
observed in Fig. 5. Due to longer transmission ranges, the channel quality is worse compared to the earlier topology.

Fig. 5. Expected loss probability vs. number of nodes.

It can be seen that the SPCR performance is robust to channel errors and does not change significantly compared to the earlier topology. However, the SPR performance suffers from the lower
channel quality. For example, for 40 nodes and a 27Mbps rate, the average loss probability increases from 0.08 to more than 0.56. This shows that network coding not only is effective in
dense topologies, but also is robust to channel errors.



Fig. 6. Expected loss probability vs. number of nodes



We have proposed a sublayer that optimizes the reliability

of periodic broadcasting in VANETs. The core of our design
is the random linear network coding which is used to provide
reliability for small safety messages with low overhead. We
have also studied how the message rebroadcasting can be used
when there is congestion. Numerical results based on our analysis confirm the superior performance of our method compared to earlier schemes. Our design can be implemented
in conjunction with the WAVE architecture and does not need
any modification to the WAVE communication stack.


[1] B. Xu, A. Ouksel, and O. Wolfson, "Opportunistic resource exchange in inter-vehicle ad-hoc networks," in Proc. 2004 IEEE International Conf. on Mobile Data Management, pp. 4-12.
[2] IEEE Standard for Wireless Access in Vehicular Environments (WAVE) - Multi-channel Operation, IEEE Std 1609.4-2010 (Revision of IEEE Std 1609.4-2006), pp. 1-89, 2011.
[3] Y. P. Fallah, C. Huang, R. Sengupta, and H. Krishnan, "Congestion control based on channel occupancy in vehicular broadcast networks," in Proc. 2010 IEEE Veh. Technol. Conf. Fall.
[4] Y. Fallah, C.-L. Huang, R. Sengupta, and H. Krishnan, "Analysis of information dissemination in vehicular ad-hoc networks with application to cooperative vehicle safety systems," IEEE Trans. Veh. Technol., vol. 60, no. 1, pp. 233-247, Jan. 2011.
[5] Q. Xu, T. Mak, J. Ko, and R. Sengupta, "Vehicle-to-vehicle safety messaging in DSRC," in Proc. 2004 ACM International Workshop on Veh. Ad Hoc Netw., pp. 19-28.
[6] B. Hassanabadi and S. Valaee, "Reliable network coded MAC in vehicular ad-hoc networks," in Proc. 2010 IEEE Veh. Technol. Conf.
[7] Q. Xu, T. Mak, J. Ko, and R. Sengupta, "Medium access control protocol design for vehicle-to-vehicle safety messages," IEEE Trans. Veh. Technol., vol. 56, no. 2, pp. 499-518, Mar. 2007.
[8] F. Farnoud, B. Hassanabadi, and S. Valaee, "Message broadcast using optical orthogonal codes in vehicular communication systems," in 2007 ICST QSHINE Workshop on Wireless Netw. and Intelligent Transportation Syst.
[9] D. S. Lun, M. Medard, R. Koetter, and M. Effros, "On coding for reliable communication over packet networks," Physical Commun., vol. 1, no. 1.
[10] J.-S. Park, U. Lee, S.-Y. Oh, M. Gerla, D. S. Lun, W. W. Ro, and J. Park, "Delay analysis of car-to-car reliable data delivery strategies based on data mulling with network coding," IEICE Trans., vol. 91-D, no. 10, pp. 2524-2527, 2008.
[11] Z. Yang, M. Li, and W. Lou, "CodePlay: live multimedia streaming in VANETs using symbol-level network coding," in Proc. 2010 IEEE Int. Netw. Protocols Conf., pp. 223-232.
[12] F. Ye, S. Roy, and H. Wang, "Efficient inter-vehicle data dissemination," in Proc. 2011 IEEE Veh. Technol. Conf. Fall.

International Association of Engineering and Technology for Skill Development


Proceedings of International Conference on Advancements in Engineering and Technology


An Efficient and Accurate Misbehavior Detection Scheme in Adversary Environment
Kanagarohini.V1, Ramya.K2

Student: Sree Sowdambika College of Engineering

Guide: Sree Sowdambika College of Engineering



Abstract: Misbehaviour detection is regarded as a great challenge in the adversary environment because of distinct network characteristics. Malicious and selfish behaviours pose a serious threat against routing in delay tolerant networks (DTNs). In order to address this,
in this paper, we propose iTrust, a probabilistic
misbehaviour detection scheme for efficient and accurate
misbehaviour detection in DTNs. Our iTrust scheme introduces a periodically available Trusted Authority (TA) to estimate a node's behavior based on the collected routing evidences. To further enhance the power of the proposed model, we associate the detection probability with a node's reputation for effective inspection.
Keywords: Delay Tolerant Networks, Trusted Authority



A Delay Tolerant Network is a communication network designed to tolerate long delays and outages. Current networking technology depends on a set of basic assumptions that do not hold in all environments. The first and most important assumption is that an end-to-end connection exists from the source to the destination. This assumption is easily violated due to mobility, power saving, and similar factors. Examples of such networks are sensor networks with scheduled infrequent connectivity and vehicular DTNs that publish local ads, traffic reports, and parking information [1]. Delay tolerant networking (DTN) is an attempt to extend the reach of networks; it promises to enable communication among such challenged networks.

Fig. 1. Delay Tolerant Networking Environment

Delay Tolerant Networks have unique characteristics such as the lack of a contemporaneous end-to-end path, short-range contact, high

ISBN NO : 978 - 1502893314

variation in network conditions, difficult-to-predict mobility patterns, and long feedback delay. Because of these unique characteristics, Delay Tolerant Networks (DTNs) adopt an approach known as the store-carry-and-forward strategy, in which bundles are sent over an existing link and buffered at the next hop until the next link in the path appears; routing is determined in an opportunistic fashion.
In DTNs a node could misbehave by refusing to forward
the packets, dropping the packets even when it has the
potential to forward (e.g., sufficient memory and meeting
opportunities) or modifying the packets to launch attacks.
These types of malicious behaviors are caused by rational or
malicious nodes, which try to maximize their own benefits.
Such malicious activities pose a serious threat to network performance and routing. Hence, a trust model is highly desirable for misbehavior detection and attack mitigation.

Routing misbehavior detection and mitigation has been

well studied in traditional mobile ad hoc networks. These
methodologies use neighborhood monitoring or destination
acknowledgement (ACK) to detect dropping of packets [2]. In
the mobile ad hoc networks (MANET) first complete route is
established from source to destination, before transmitting the
packet. But in a DTN the nodes are only intermittently connected, so route discovery is not possible; together with other unique characteristics such as dynamic topology, short-range contact, and long feedback delay, this makes neighborhood
monitoring unsuitable for DTNs. Although many routing algorithms [3, 4, 5, 6, 7] have been proposed for DTNs, most of them do not consider the nodes' willingness to forward the packet and implicitly assume that a node is willing to forward packets for all others. They may not work well, since some packets are forwarded to nodes unwilling to relay and will be dropped.
There are quite a few proposals for misbehaviour detection based on forwarding-history verification (e.g., multilayer formation [8]) or on encounter tickets [9], which incur high transmission overhead as well as high verification cost. Different from existing works, in which the Trusted Authority (TA) performs auditing by checking the contact history [10], which is costly and time consuming, our proposed system uses the nectar protocol for selecting the appropriate intermediate node so that the


inspection or auditing process can be simplified and the
packet dropping rate can be considerably reduced. To achieve
a tradeoff between detection cost and security, our Trust
model relies on the inspection game [11] from game theory.
This introduces a periodically available Trusted Authority
(TA) to judge the nodes based on collected routing evidences.
Our Trust model jointly considers the incentive and malicious
node detection scheme in the single framework along with the
effective nectar protocol for selecting the appropriate
intermediate node. The contributions of this paper can be
summarized as follows.
1. We propose a general misbehavior detection framework based on a series of newly introduced data forwarding evidences. The proposed evidence framework can not only detect various misbehaviors but is also compatible with various routing protocols.
2. Malicious node detection is carried out by the Trusted Authority (TA) based on the evidences generated by the nodes, which are selected by applying the nectar protocol.
3. Hence, the packet dropping rate can be considerably reduced and the performance of the network can be improved.



A. System Model
A Delay Tolerant Network consists of mobile devices owned by individual users. Each node i is assumed to have a unique ID Ni and a corresponding public/private key pair. We assume that each node must pay a deposit before it joins the network; the deposit is paid back after the node leaves if the node has shown no misbehavior. We also assume that a periodically available TA exists, so that it can take responsibility for misbehavior detection in the DTN. For a specific detection target Ni, the TA will request Ni's forwarding history in the global network. Therefore, each node will submit its collected forwarding history about Ni to the TA via two possible approaches.
In a pure peer-to-peer DTN, the forwarding history could be sent to some special network components (e.g., a roadside unit (RSU) in vehicular DTNs, or the judge nodes in [10]) via DTN transmission. In a hybrid DTN environment, the transmission between the TA and each node could also be performed in a direct transmission manner (e.g., WiMAX or cellular networks [14]). We argue that, because misbehavior detection is performed periodically, the message transmission can be performed in a batch model, which further reduces the transmission overhead.
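The deposit mechanism described above can be illustrated with a toy registry sketch; the class and method names are illustrative, not from the paper:

```python
class TrustedAuthority:
    """Toy registry: nodes pay a deposit on join, refunded on a clean leave."""

    def __init__(self, deposit):
        self.deposit = deposit
        self.escrow = {}         # node id -> held deposit
        self.misbehaved = set()  # nodes flagged by misbehavior detection

    def join(self, node_id):
        # Every node pays the deposit before entering the network.
        self.escrow[node_id] = self.deposit

    def flag(self, node_id):
        # Record a misbehavior verdict against the node.
        self.misbehaved.add(node_id)

    def leave(self, node_id):
        # Refund the deposit only if the node was never flagged.
        held = self.escrow.pop(node_id)
        return 0 if node_id in self.misbehaved else held
```

A node that leaves without being flagged recovers its full deposit; a flagged node forfeits it, which is the economic incentive the system model relies on.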
B. Routing Model
We adopt a single-copy routing mechanism, such as the First Contact routing protocol, and we assume the communication range of a mobile node is finite. Thus, a data sender outside the destination node's communication range can only transmit packetized data via a sequence of intermediate nodes in a multihop manner. Our misbehaving detection scheme can also be applied to delegation-based routing protocols or multicopy-based routing ones, such as MaxProp [18] and ProPHET [19]. We assume that the network is loosely synchronized (i.e., any two nodes are in the same time slot at any time).


Fig 2: Trust model architecture

HOTE - Hand Over Task Evidence
FD - Forwarding
FC - Forward Chronicle
CL - Contact Log
NI - Neighborhood Index
Ni, Nj, Nk - Intermediate Nodes
CC - Contact Counter
TOC - Time of Contact
C. Adversary Model
First of all, we assume that each node in the network is rational, and a rational node's goal is to maximize its own profit. In this work, we mainly consider two kinds of DTN nodes: selfish nodes and malicious nodes. Due to their selfish nature and the energy cost of forwarding, selfish nodes are not willing to forward bundles for others without sufficient reward. As an adversary, malicious nodes arbitrarily drop others' bundles (black hole or gray hole attack), which often takes place beyond others' observation in a sparse DTN, leading to serious performance degradation. Note that any of the selfish actions above can be further complicated by the collusion of two or more nodes.


The basic iTrust has two phases: the routing evidence generation phase and the auditing phase. In the evidence generation phase, the nodes generate contact and data forwarding evidence for each contact or data forwarding. In the subsequent auditing phase, the TA will



distinguish the normal nodes from the misbehaving nodes. As an example, consider a three-step data forwarding process: node X has packets to be delivered to node Z. If node X meets another node Y that could help forward the packets to Z, X will replicate and forward the packets to Y. Afterwards, Y will forward the packets to Z when Z arrives within the transmission range of Y. In this process, we define three kinds of data forwarding evidences that can be used to judge whether a node is malicious:

A. Hand Over Task Evidence

Hand Over Task evidences are used to record the number of routing tasks assigned from the upstream nodes to the target node Nj. We assume that the source node (Nsrc) has a message M to be forwarded to the destination (Ndst). For simplicity of presentation, consider that the message is stored at the intermediate node (Ni). When Nj comes within the transmission or radio range of Ni, Ni will determine by means of the nectar protocol whether to choose node Nj as the next intermediate node for forwarding message M toward the destination.

If node Nj is the chosen next node, the flag bit will be enabled (flag = 1) and the task evidence Ei->jtask is generated, to demonstrate that a new task has been assigned from node Ni to node Nj. Let Tts and TExp refer to the time stamp and the expiration time of the packets. We set Mi->jM = {M, Nsrc, flag, Ni, Nj, Ndst, Tts, TExp, Sigsrc}, where Sigsrc = SIGsrc(H(M, Nsrc, Ndst, TExp)) refers to the signature generated by the source node on message M. Node Ni generates the signature Sigi = SIGi{Mi->jM} to indicate that this forwarding task has been delegated to node Nj, while node Nj generates the signature Sigj = SIGj{Mi->jM} to show that Nj has accepted this task. Therefore, we obtain the hand over task evidence as follows:

Ei->jtask = {Mi->jM, Sigi, Sigj} (1)

B. Forwarding Chronicle evidence

When Nj meets the next intermediate node Nk, Nj will check whether Nk is the desirable next intermediate node in terms of a specific routing protocol. If so, Nj will forward the packets to Nk, who will generate a forwarding chronicle evidence to demonstrate that Nj has successfully finished the forwarding task. Nk will generate a signature Sigk = SIGk{H(Mj->kM)} to demonstrate the authenticity of the forwarding chronicle evidence. Therefore, the complete forwarding chronicle evidence is generated by Nk as follows:

Ej->kforward = {Mj->kM, Sigk} (2)

In the audit phase, the node being inspected will submit its forwarding chronicle evidence to the TA to demonstrate that it has tried its best to accomplish the routing tasks defined by the hand over task evidences.

C. Contact log evidence

Whenever two nodes meet, a new contact log is generated and the neighbourhood index is updated accordingly. Each node also maintains a contact counter, which keeps track of how often the nodes meet each other. When two nodes Nj and Nk meet, a new contact log Ej<->kcontact will be generated. Suppose that Mj<->k = {Nj, Nk, Tts}. Nj and Nk will generate their signatures Sigj = SIGj{H(Mj<->k)} and Sigk = SIGk{H(Mj<->k)}. Therefore, the contact log evidence is obtained as follows:

Ej<->kcontact = {Mj<->k, Sigj, Sigk} (3)

The contact log will be stored at both of the meeting nodes. In the audit phase, both nodes will submit their logs to the TA. Maintenance of the contact history can prevent the black hole or grey hole attack: a node chosen by the nectar protocol that has sufficient contacts with other users, but fails to forward the data, will be regarded as a malicious or selfish node.

Since the selection of the intermediate node is based on the nectar protocol, the packet dropping rate is reduced considerably. To further improve the network performance and avoid packet dropping, our trust model introduces the Trusted Authority (TA), which periodically launches an investigation request.

In the auditing phase, the Trusted Authority (TA) will send an investigation request about node Nj to the global network during a certain period [t1, t2]. Then, given N as the set of nodes in the network, each node in the DTN will submit its collected {Ei->jtask, Ej->kforward, Ej<->kcontact} to the TA. After collecting all of the evidences related to Nj, the TA obtains the set of task evidences Stask, the set of messages forwarded Sforward, and the set of contacted nodes Scontact. To check whether a suspected node Nj is malicious, the TA should check whether every message forwarding request has been honestly fulfilled by Nj.
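The three evidence types and their signatures can be illustrated with a minimal Python sketch. Real iTrust would use public-key signatures; here a keyed hash is a stand-in, and all function and field names are illustrative:

```python
import hashlib
import json

def sign(key: str, payload: dict) -> str:
    # Stand-in for a signature SIG{H(M)}: a keyed hash over the
    # canonical JSON form of the payload.
    blob = json.dumps(payload, sort_keys=True)
    return hashlib.sha256((key + blob).encode()).hexdigest()

def verify(key: str, payload: dict, sig: str) -> bool:
    return sign(key, payload) == sig

def hand_over_task_evidence(msg, src, ni, nj, dst, tts, texp, keys):
    # Eq. (1): E_i->j^task = {M_i->j, Sig_i, Sig_j}
    m = {"M": msg, "Nsrc": src, "flag": 1, "Ni": ni, "Nj": nj,
         "Ndst": dst, "Tts": tts, "TExp": texp}
    return {"M": m, "Sig_i": sign(keys[ni], m), "Sig_j": sign(keys[nj], m)}

def forwarding_chronicle_evidence(task_msg, nk, keys):
    # Eq. (2): E_j->k^forward = {M_j->k, Sig_k}
    return {"M": task_msg, "Sig_k": sign(keys[nk], task_msg)}

def contact_log_evidence(nj, nk, tts, keys):
    # Eq. (3): E_j<->k^contact = {M_j<->k, Sig_j, Sig_k}
    m = {"Nj": nj, "Nk": nk, "Tts": tts}
    return {"M": m, "Sig_j": sign(keys[nj], m), "Sig_k": sign(keys[nk], m)}
```

During the audit phase, the TA would call `verify` on each submitted evidence before trusting it, mirroring the signature checks described in the text.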


A. Reliable data forwarding with adequate users

A normal user will honestly follow the routing protocol by forwarding the messages to sufficient users. Therefore, given that a message m is in Stask, the data is forwarded in the presence of adequate users: the requested message has been forwarded to the next hop, the chosen next-hop nodes are desirable nodes according to a specific DTN routing protocol, and the number of forwarded copies satisfies the requirement defined by a multicopy forwarding routing protocol.

B. Reliable data forwarding with inadequate users

A normal user will also honestly perform the routing protocol but may fail to achieve the desirable results due to a lack of adequate users. Therefore, given that a message m is in Stask, the data cannot be forwarded in the presence of adequate users. There are two cases here. The first is that there is no contact during the period [Tts(m), t2]. The second is that only a limited number of contacts are available in this period, and the number of contacts is less than the number of copies required


by the routing protocol. In both cases, even though the DTN node honestly performs the routing protocol, it cannot fulfill the routing task due to a lack of sufficient contact opportunities. We still consider this kind of user an honest user.

C. Misbehaving data forwarding with/without adequate users

A misbehaving node will drop the packets or refuse to forward the data even when there are sufficient contacts. There are three cases here. The first is that the forwarder refuses to forward the data even when a forwarding opportunity is available. The second is that the forwarder has forwarded the data but failed to follow the routing protocol. The last is that the forwarder agrees to forward the data but fails to propagate the number of copies predefined by a multicopy routing protocol.




The TA judges whether node Nj (the suspected node) is malicious by triggering the malicious node detection algorithm, where node j is the suspected malicious node, Stask is the set of hand over task evidences, Sforward is the set of forward chronicles, R is the set of contacted nodes, Nk(m) is the set of next-hop nodes chosen for forwarding message m, C represents the punishment (loss of deposit), and w denotes the compensation (virtual currency or credit) paid by the TA.


In this algorithm, we introduce BasicDetection, which takes j, Stask, Sforward, [t1, t2], R, and D (the routing requirements of a specific routing protocol) as input, and outputs the detection result: 1 to indicate that the target node is a misbehaving node, or 0 to indicate that it is an honest node. To prevent malicious users from providing fake delegation/forwarding/contact evidences, the TA should check the authenticity of each evidence by verifying the corresponding signatures, which introduces high transmission and signature-verification overhead.
Algorithm 1. The basic misbehavior detection algorithm

procedure BASICDETECTION(j, Stask, Sforward, [t1, t2], R, D)
    for each m in Stask do
        if m is not in Sforward and R is not empty then
            return 1
        else if m is in Sforward and Nk(m) is not in R then
            return 1
        else if m is in Sforward and Nk(m) is in R and |Nk(m)| is less than D then
            return 1
        end if
    end for
    return 0
end procedure

Algorithm 2. The proposed probabilistic malicious node detection

initialize the number of nodes n
for i = 1 to n do
    generate a random number mi from 0 to 10^n - 1
    if mi / 10^n < pb then
        ask all the nodes (including node i) to provide evidence about node i
        if BasicDetection(i, Stask, Sforward, [t1, t2], R, D) returns 1 then
            give a punishment C to node i
        else
            pay node i the compensation w
        end if
    else
        pay node i the compensation w
    end if
end for

Algorithm 2 shows the details of the proposed probabilistic misbehavior detection scheme. For a particular node i, the TA launches an investigation with probability pb. If i passes the investigation by providing the corresponding evidences, the TA pays node i a compensation w; otherwise, i receives a punishment (loses its deposit).
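The two algorithms can be sketched in Python. This is a simplified interpretation: R is modeled as the set of contacted nodes, Nk(m) as the next hops recorded for message m, and the detection callback stands in for the full evidence audit; all names are illustrative:

```python
import random

def basic_detection(j, s_task, s_forward, contacted, next_hops, D):
    """Algorithm 1 sketch: return 1 if node j misbehaved, else 0."""
    for m in s_task:
        if m not in s_forward and contacted:
            return 1                        # had contacts but never forwarded m
        if m in s_forward:
            hops = next_hops.get(m, set())
            if not hops <= contacted:
                return 1                    # forwarded to undesirable next hops
            if len(hops) < D:
                return 1                    # too few copies propagated
    return 0

def probabilistic_audit(nodes, pb, detect, C, w):
    """Algorithm 2 sketch: inspect each node with probability pb.

    detect(i) plays the role of BasicDetection for node i.
    Returns each node's payoff: -C if caught, +w otherwise.
    """
    payoff = {}
    for i in nodes:
        if random.random() < pb and detect(i):
            payoff[i] = -C                  # caught misbehaving: punished
        else:
            payoff[i] = w                   # passed or not inspected: paid w
    return payoff
```

With pb = 1 every node is inspected and every detected misbehaver is punished; with pb = 0 the TA never inspects and simply pays w, which is why the equilibrium analysis below matters.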





There are two strategies available to the Trusted Authority and two to the nodes. The Trusted Authority can choose inspecting (I) or not inspecting (N). Each node also has two strategies: forwarding (F) and offending (O).


If the TA inspects with probability pb = (g + ε)/(w + C) in the Trust model, a rational node must choose the forwarding strategy, and the TA will get a higher profit than if it checks all the nodes in the same round.

This is a static game of complete information; though no dominating strategy exists in this game, there is a mixed Nash equilibrium point.

If the node chooses the offending strategy, its expected payoff is

u(O) = -C · (g + ε)/(w + C) + w · (1 - (g + ε)/(w + C)) = w - g - ε

If the node chooses the forwarding strategy, its expected payoff is

u(F) = pb · (w - g) + (1 - pb) · (w - g) = w - g

The latter is obviously larger than the former. Therefore, if the TA chooses the checking probability (g + ε)/(w + C), a rational node must choose the forwarding strategy. Furthermore, if the TA announces that it will inspect every node with probability pb = (g + ε)/(w + C), then its profit will be higher than if it checks all the nodes, for

v - w - ((g + ε)/(w + C)) · h > v - w - h

where the right-hand side of the inequality is the profit of the TA when it checks all the nodes, h being the inspection cost. Note that the probability that a malicious node remains undetected after k rounds is (1 - (g + ε)/(w + C))^k, which tends to 0 as k grows. Thus, it is almost impossible for a malicious node to remain undetected after a certain number of rounds.

We set up the experiment environment with the Opportunistic Network Environment (ONE) simulator, which is designed for evaluating DTN routing and application protocols. In our experiment, we adopt the First Contact routing protocol, which is a single-copy routing mechanism. We set the time interval T to about 3 hours as the default value, and we deploy 50, 80, and 100 nodes on the map, respectively. With each parameter setting, we conduct the experiment for 100 rounds.

We use the packet loss rate (PLR) to indicate the misbehavior level of a malicious node. In DTNs, when a node's buffer is full, a newly received bundle will be dropped by the node, and PLR denotes the ratio of the dropped bundles to the received bundles. But a malicious node could pretend to have no available buffer and thus drop the bundles it receives. Thus, PLR actually represents the misbehavior level of a node. For example, if a node's PLR is 1, it is totally a malicious node that launches a black hole attack. If a node's PLR is 0, we take it as a normal node. Further, if 0 < PLR < 1, the node could launch a gray hole attack by selectively dropping the packets. In our experiment, we use the detected rate of the malicious nodes to measure the effectiveness of iTrust, and we take all the nodes whose PLR is larger than 0 as the malicious ones. On the other hand, since a normal node may also be identified as malicious due to the depletion of its buffer, we need to measure the false alerts of iTrust and show that iTrust has little impact on the normal users who adhere to the security protocols. Thus, we use the misidentified rate to measure the false positive rate. Moreover, we evaluate the transmission overhead Costtransmission and the verification overhead Costverification in terms of the number of evidence transmissions and verifications for misbehavior detection. In the next sections, we evaluate the effectiveness of iTrust under different parameter settings.

A. The Impact of the Percentage of Malicious Nodes on iTrust

We use the malicious node rate (MNR) to denote the percentage of malicious nodes among all the nodes. In this experiment, we consider scenarios varying the MNR from 10 to 50 percent. PLR is set to 1, and the velocity of the 80 nodes varies from 10.5 to 11.5 m/s. The message generation time interval varies from 25 to 35 s, and the TTL of each message is 300 s. The experiment result is shown in Fig. 3. Fig. 3a shows that the three curves have similar trends, which indicates that iTrust achieves a stable performance with different MNRs. Even though the performance of iTrust under a high MNR is lower than that with a low MNR, the detected rate is still higher than 70 percent. Furthermore, the performance of iTrust does not increase much when the detection probability exceeds 20 percent, and it is already good enough when the detection probability is more than 10 percent. Thus, the malicious node rate has little effect on the detected rate of malicious nodes: iTrust is effective no matter how many malicious nodes there are. Further, a high malicious node rate helps reduce the misidentified rate, as shown in Fig. 3b, because the increase of malicious nodes reduces the proportion of normal nodes who may be misidentified.



Fig. 3. Experiment results with different MNRs.

Fig. 4. Experiment results with user numbers of 100, 80, 50.

B. The Evaluation of the Scalability of iTrust
First, we evaluate the scalability of iTrust, which is shown in Fig. 4. As we predict in (12), the number of nodes will affect the number of generated contact histories in a particular time interval, so we measure only the detected rate (or successful rate) and the misidentified rate (or false positive rate) in Fig. 4. Fig. 4a shows that when the detection probability p is
larger than 40 percent, iTrust could detect all the malicious
nodes, where the successful detection rate of malicious nodes
is pretty high. It implies that iTrust could assure the security
of the DTN in our experiment. Furthermore, the misidentified
rate of normal users is lower than 10 percent when user
number is large enough, as shown in Fig. 4b, which means
that iTrust has little impact on the performance of DTN users.
Therefore, iTrust achieves a good scalability.

C. The Impact of Various Packet Loss Rates on iTrust

In this section, we show that iTrust can also thwart the gray hole attack by evaluating its performance with different PLRs. In this experiment, we measure scenarios varying the PLR from 100 down to 80 percent. We set the MNR to 10 percent, and the speed of the 80 nodes varies from 10.5 to 11.5 m/s. The message generation interval varies from 25 to 35 s, and the TTL of each message is 300 s. As shown in Fig. 5, PLR has little effect on the performance of iTrust, which implies that iTrust is effective against both the black hole attack and the gray hole attack. The misidentified rate is not affected by PLR either; it is under 8 percent when the detection probability is under 10 percent. Thus, the variation of PLR does not affect the performance of iTrust.
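The detected rate and misidentified rate used throughout these experiments can be computed as in the following sketch; labeling a node malicious when its PLR exceeds 0 follows the text, and the function name is illustrative:

```python
def evaluate(plrs, flagged):
    """Detected rate and misidentified rate from per-node packet loss rates.

    plrs: node id -> PLR in [0, 1]; PLR > 0 marks a truly malicious node.
    flagged: set of node ids the detection scheme reported as malicious.
    """
    malicious = {n for n, p in plrs.items() if p > 0}
    normal = set(plrs) - malicious
    # Fraction of true malicious nodes that were caught.
    detected_rate = len(flagged & malicious) / len(malicious) if malicious else 1.0
    # Fraction of normal nodes wrongly flagged (false positives).
    misidentified_rate = len(flagged & normal) / len(normal) if normal else 0.0
    return detected_rate, misidentified_rate
```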

Fig. 5. Experiment results with different PLRs.



In this paper, we propose a Trust model that can effectively detect malicious nodes and ensure secure transmission of data. The selection of the neighbour node is based on the nectar protocol, by which the packet dropping rate is considerably reduced and the work of the Trusted Authority (TA) is simplified. We also reduce the detection overhead by introducing the Trusted Authority (TA), designed on the basis of the inspection game, which operates in a periodic fashion.
[1] R. Lu, X. Lin, H. Zhu, and X. Shen, "SPARK: A New VANET-Based Smart Parking Scheme for Large Parking Lots," Proc. IEEE INFOCOM '09, Apr. 2009.
[2] T. Hossmann, T. Spyropoulos, and F. Legendre, "Know Thy Neighbor: Towards Optimal Mapping of Contacts to Social Graphs for DTN Routing," Proc. IEEE INFOCOM '10, 2010.
[3] Q. Li, S. Zhu, and G. Cao, "Routing in Socially Selfish Delay-Tolerant Networks," Proc. IEEE INFOCOM '10, 2010.
[4] H. Zhu, X. Lin, R. Lu, Y. Fan, and X. Shen, "SMART: A Secure Multilayer Credit-Based Incentive Scheme for Delay-Tolerant Networks," IEEE Trans. Vehicular Technology, vol. 58, no. 8, pp. 828-836, 2009.
[5] H. Zhu, X. Lin, R. Lu, P.-H. Ho, and X. Shen, "SLAB: A Secure Localized Authentication and Billing Scheme for Wireless Mesh Networks," IEEE Trans. Wireless Comm., vol. 7, no. 10, pp. 3858-3868, Oct. 2008.
[6] Q. Li and G. Cao, "Mitigating Routing Misbehavior in Disruption Tolerant Networks," IEEE Trans. Information Forensics and Security, vol. 7, no. 2, pp. 664-675, Apr. 2012.
[7] S. Marti, T.J. Giuli, K. Lai, and M. Baker, "Mitigating Routing Misbehavior in Mobile Ad Hoc Networks," Proc. ACM MobiCom '00, 2000.
[8] R. Lu, X. Lin, H. Zhu, and X. Shen, "Pi: A Practical Incentive Protocol for Delay Tolerant Networks," IEEE Trans. Wireless Comm., vol. 9, no. 4, pp. 1483-1493, Apr. 2010.
[9] F. Li, A. Srinivasan, and J. Wu, "Thwarting Blackhole Attacks in Disruption-Tolerant Networks Using Encounter Tickets," Proc. IEEE INFOCOM '09, 2009.
[10] E. Ayday, H. Lee, and F. Fekri, "Trust Management and Adversary Detection for Delay-Tolerant Networks," Proc. Military Comm. Conf. (MILCOM '10), 2010.
[11] D. Fudenberg and J. Tirole, Game Theory. MIT Press, 1991.
[12] M. Raya, M.H. Manshaei, M. Felegyhazi, and J.-P. Hubaux, "Revocation Games in Ephemeral Networks," Proc. 15th ACM Conf. Computer and Comm. Security (CCS '08), 2008.
[13] S. Reidt, M. Srivatsa, and S. Balfe, "The Fable of the Bees: Incentivizing Robust Revocation Decision Making in Ad Hoc Networks," Proc. 16th ACM Conf. Computer and Comm. Security (CCS '09), 2009.
[14] B.B. Chen and M.C. Chan, "Mobicent: A Credit-Based Incentive System for Disruption-Tolerant Network," Proc. IEEE INFOCOM '10, 2010.



[15] S. Zhong, J. Chen, and Y.R. Yang, "Sprite: A Simple, Cheat-Proof, Credit-Based System for Mobile Ad-Hoc Networks," Proc. IEEE INFOCOM '03, 2003.
[16] J. Douceur, "The Sybil Attack," Revised Papers from the First Int'l Workshop on Peer-to-Peer Systems (IPTPS '01), 2001.
[17] R. Pradiptyo, "Does Punishment Matter? A Refinement of the Inspection Game," Rev. Law and Economics, vol. 3, no. 2, pp. 197-219, 2007.
[18] J. Burgess, B. Gallagher, D. Jensen, and B. Levine, "MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networks," Proc. IEEE INFOCOM '06, 2006.
[19] A. Lindgren and A. Doria, "Probabilistic Routing Protocol for Intermittently Connected Networks," IETF draft-lindgren-dtnrg-prophet-03, 2007.
[20] W. Gao and G. Cao, "User-Centric Data Dissemination in Disruption-Tolerant Networks," Proc. IEEE INFOCOM '11, 2011.
[21] A. Keranen, J. Ott, and T. Karkkainen, "The ONE Simulator for DTN Protocol Evaluation," Proc. Second Int'l Conf. Simulation Tools and Techniques (SIMUTools '09), 2009.




Dhivya.G1, Rajeswari.G2

Student: Sree Sowdambika College Of Engineering, Anna University

Guide: Sree Sowdambika College Of Engineering, Anna University
until the connection would be eventually established.DTN
ABSTRACT In many defense network,
introduced supply nodes where data are stored or
connections of wireless devices carried by soldiers may
replicated such that only authorized mobile nodes can
access the necessary information quickly and efficiently.
environmental factors, and mobility, especially when
Many military applications require increased protection of
they operate in hostile environments. Disruptionconfidential data including access control methods that
tolerant network (DTN) technologies are becoming
are cryptographically enforced. In many cases, it is
successful solutions that allow nodes to communicate
desirable to provide differentiated access services such
with each other in these extreme networking
that data access policies are defined over exploiter
environments.DTN networks introduced supply nodes
attributes or roles, which are managed by the key sway.
where data are stored or replicated such that only
Authorized mobile nodes can access the necessary information quickly and efficiently. In the proposed system, cipher-text policy attribute-based encryption (CP-ABE) provides a scalable method of encrypting data, such that the encryptor defines the attribute set that the decryptor needs to possess in order to decrypt the cipher text. This paper describes how the data are transmitted in a secure manner.

Keywords: authorized mobile nodes, attribute set

Mobile nodes in defense environs, such as a combat zone or a hostile area, are likely to suffer from intermittent network connectivity and frequent partitions. Disruption-tolerant network (DTN) technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other and access confidential information or commands reliably by exploiting external supply nodes. Typically, when there is no end-to-end connection between a source and a target pair, the data from the origin node may need to wait in the intermediate nodes for a substantial amount of time until the connection is eventually established.

For example, in a disruption-tolerant military network, a commander may store confidential information at a supply node, which should be accessed by the subscribers of Corps 1 who are participating in Region 2. In this case, it is a reasonable assumption that multiple key sway are likely to manage their own dynamic attributes for soldiers in their deployed regions or echelons, which could be frequently changed (e.g., the attribute representing the current location of moving soldiers). We refer to this DTN architecture, where multiple sway issue and manage their own attribute keys independently, as a decentralized DTN.

A disruption-tolerant network (DTN) is a network designed so that temporary or intermittent transmission troubles, flaws, and abnormalities have the least possible adverse impact. There are several aspects to the robust design of a DTN, including: 1. the use of fault-tolerant methods and technologies; 2. the quality of graceful degradation under adverse conditions or extreme traffic loads; 3. the ability to prevent or quickly recover from electronic attacks; and 4. the ability to function with minimal latency even when routes are ill-defined or unreliable.

Attribute-based encryption (ABE) [11]-[14] is a form of public-key encryption that allows exploiters to encrypt and decrypt messages based on exploiter attributes (e.g., the attribute representing the current location of moving soldiers) [4], [8], [9]. In a typical implementation, the length of the cipher text is proportional to the number of attributes associated with it, and the decryption time is proportional to the number of attributes used during decryption.

However, applying ABE to DTNs introduces several security and secrecy challenges. Since some exploiters may change their associated attributes at some point (for example, moving in their region), or some keys might be compromised, key revocation (or update) for each attribute is necessary in order to keep the system secure. However, this issue is even more difficult in ABE systems, since each trait is conceivably shared by multiple exploiters (henceforth, we refer to such a collection of exploiters as a trait group). This implies that revocation of any attribute or of any single exploiter in a trait group would affect the other exploiters in the group. For example, if an exploiter

ISBN NO : 978 - 1502893314

International Association of Engineering and Technology for Skill Development


Proceedings of International Conference on Advancements in Engineering and Technology

joins or leaves a trait group, the associated attribute key should be changed and redistributed to all the other members in the same group for backward or forward secrecy. This may result in a bottleneck during the rekeying procedure, or in security degradation due to the window of vulnerability if the previous attribute key is not updated immediately.

Another challenge is the key escrow problem. In CP-ABE, the key sway generates the private keys of exploiters by applying its master secret keys to the exploiters' associated sets of traits. The last challenge is the coordination of attributes issued by different sway. For example, suppose that the traits Role 1 and Region 1 are managed by sway A, and Role 2 and Region 2 are managed by sway B. Then, it is impossible to generate an access policy ((Role 1 OR Role 2) AND (Region 1 OR Region 2)) in the previous schemes, because the OR logic between attributes issued by different sway cannot be implemented. This is due to the fact that the different sway generate their own attribute keys using their own independent and individual master secret keys.


Military applications in the DTN arena are substantial, allowing the retrieval of critical information in mobile battlefield scenarios using only intermittently connected network communications. For these types of applications, the delay-tolerant protocol should transmit data segments across multiple-hop networks that consist of differing regional networks based on environmental network parameters (latency, loss, BER). This essentially implies that data from low-latency networks, for which TCP may be suitable, must also be forwarded across the long-haul interplanetary link. DTN achieves message reliability by employing custody transfer. The concept of custody transfer, where responsibility for some data segment (bundle or bundle fragment) migrates with the data segment as it progresses across a series of network hops, is a fundamental strategy by which reliable delivery is accomplished on a hop-by-hop basis instead of an end-to-end basis.
DTN is a set of protocols that act together to enable a standardized method of performing store-and-forward communications. DTN operates in two basic environments: low-propagation delay and high-propagation delay. In a low-propagation-delay environment, such as may occur in near-planetary or planetary surface environments, DTN bundle agents can utilize underlying Internet protocols that negotiate connectivity in real time. In high-propagation-delay environments such as deep space, DTN bundle agents must use other methods, such as some form of scheduling, to enable connectivity between the two agents. The convergence layer protocols provide the standard methods for transferring the bundles



over various communications paths. The bundle agent discovery protocols are the equivalent of dynamic routing protocols in IP networks. To date, the location of bundle agents (DTN agents) has been managed statically, analogous to static routing in Internet Protocol (IP) networks.
The security protocols for DTN are important for the bundle protocol. The stressed environment of the underlying networks over which the bundle protocol will operate makes it important that the DTN be protected from unauthorized use, and this stressed environment poses unique challenges for the mechanisms needed to secure the bundle protocol. DTNs are likely to be deployed in organizationally heterogeneous environments where one does not control the entire network infrastructure. Furthermore, DTNs may very likely be deployed in environments where a portion of the network might become compromised, posing the usual security challenges related to confidentiality, integrity, and availability.
Fault-tolerant systems are designed so that if a component fails or a network route becomes unusable, a backup component, procedure, or route can immediately take its place without loss of service. At the software level, an interface allows the administrator to continuously monitor network traffic at multiple points and locate problems immediately. In hardware, fault tolerance is achieved by component and subsystem redundancy.


There are two types of ABE, depending on whether the access policies are associated with private keys or with cipher texts. In a key-policy attribute-based encryption (KP-ABE) system, cipher texts are labelled by the transmitter with a set of descriptive attributes, while an exploiter's private key, issued by the trusted attribute sway, captures a policy (also called the access structure) that specifies which types of cipher texts the key can decrypt. KP-ABE schemes are suitable for structured organizations with rules about who may read particular documents. Typical applications of KP-ABE include secure forensic analysis and targeted broadcast. For example, in a secure forensic analysis system, audit log entries could be annotated with attributes such as the name of the exploiter, the date and time of the exploiter action, and the type of data modified or accessed by the exploiter action. A forensic analyst charged with some investigation would be issued a private key associated with a particular access structure. This private key would only open audit log records whose attributes satisfied the access policy associated with the key [4], [7], [15].
In a cipher text-policy attribute-based encryption
(CP-ABE) system, when a transmitter encrypts a
message, they specify a specific access policy in terms of
the access structure over attributes in the cipher text,





stating what kind of receivers will be able to decrypt the cipher text. Exploiters possess sets of attributes and obtain corresponding secret attribute keys from the attribute sway. Such an exploiter can decrypt a cipher text if his/her attributes satisfy the access policy associated with the cipher text. Thus, the CP-ABE mechanism is conceptually closer to traditional role-based access control methods.
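The contrast between the two variants can be sketched with plain predicates standing in for the cryptographic access structures. This is only an illustration of where the policy lives in each variant, not the encryption itself; all names here (`can_decrypt_kp`, `analyst_policy`, and so on) are our own.

```python
# Toy predicates standing in for the cryptographic access structures; this
# only illustrates WHERE the policy lives in each ABE variant.

def can_decrypt_kp(key_policy, ciphertext_attrs):
    # KP-ABE: the private KEY carries the policy; the CIPHERTEXT carries attributes.
    return key_policy(ciphertext_attrs)

def can_decrypt_cp(key_attrs, ciphertext_policy):
    # CP-ABE: the CIPHERTEXT carries the policy; the KEY carries attributes.
    return ciphertext_policy(key_attrs)

# KP-ABE forensic-log example from the text: the analyst's key opens only
# log entries whose attributes satisfy the key's access structure.
analyst_policy = lambda attrs: "user=alice" in attrs and "action=modify" in attrs
print(can_decrypt_kp(analyst_policy, {"user=alice", "action=modify", "date=2014-10-19"}))  # True

# CP-ABE example: a transmitter encrypts under (Battalion 1 AND Region 2).
cp_policy = lambda attrs: {"Battalion 1", "Region 2"} <= attrs
print(can_decrypt_cp({"Battalion 1", "Region 1"}, cp_policy))  # False
print(can_decrypt_cp({"Battalion 1", "Region 2"}, cp_policy))  # True
```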
1) Attribute Revocation: Bethencourt et al. [13] and Boldyreva et al. [16] first suggested key revocation mechanisms in CP-ABE and KP-ABE, respectively. Their solutions are to append to each attribute an expiration date (or time) and to distribute a new set of keys to valid exploiters after the expiration. The periodic attribute-revocable ABE schemes [8], [13], [16], [17] have two main problems.
The first problem is the security degradation in terms of backward and forward secrecy. It is a realistic scenario that exploiters such as soldiers may change their attributes frequently, e.g., their position or location when these are considered as attributes [4], [9]. Then, an exploiter who newly holds an attribute might be able to access the previous data encrypted before he obtained the attribute, until the data is re-encrypted with the newly updated attribute keys by periodic rekeying (backward secrecy). The other problem is scalability. The key sway periodically announces a key update material by unicast at each time slot so that all of the non-revoked exploiters can update their keys.
2) Key Escrow: Most of the existing ABE schemes are
constructed on the architecture where a single trusted
sway has the power to generate the whole private keys of
exploiters with its master secret information [11]. Thus,
the key escrow problem is inherent, in that the key sway can decrypt every cipher text addressed to exploiters in the system by generating their secret keys at any time.
3) Decentralized ABE: Huang et al. [9] and Roy et al. [4] proposed decentralized CP-ABE schemes in the multi-sway network environment. They achieved a combined access policy over the attributes issued by different sway by simply encrypting data multiple times. The main disadvantages of this approach are the efficiency and the expressiveness of the access policy.


In this section, we describe the DTN architecture
and define the security model.


Fig. 1. Architecture of data transmission in a defense network.




A. System Description and Assumptions

Fig. 1 shows the layout of the DTN. As shown in Fig. 1, the design consists of the following system entities:
1) Key Sway: These are key generation centers that generate the public/secret parameters for CP-ABE. The key sway consists of a central sway and multiple local sway. We assume that there are secure and reliable communication channels between the central sway and each local sway during the initial key setup and generation phase. Each local sway manages different attributes and issues corresponding attribute keys to exploiters, granting different access entitlements to individual exploiters based on the exploiters' attributes. The key sway are assumed to be righteous-but-peculiar; that is, they will honestly perform the assigned tasks, but they would like to learn as much information about the encrypted contents as possible.
2) Supply node: This is an entity that stores data from transmitters and provides corresponding access to exploiters. It may be mobile or static [4], [5]. As in the previous schemes, we also assume the supply node to be semi-trusted, that is, righteous-but-peculiar.
3) Transmitter: This is an entity that owns confidential messages or data (e.g., a commander) and wishes to store them in the external supply node for ease of sharing or for reliable delivery to exploiters in the extreme networking environs. A transmitter is responsible for defining an (attribute-based) access policy and enforcing it on its own data by encrypting the data under the policy before storing it at the supply node.
4) Exploiter: This is a mobile node that wants to access the data stored at the supply node (e.g., a soldier). If an exploiter possesses a set of attributes satisfying the access policy of the encrypted data defined by the transmitter, and is not revoked in any of the attributes, then he will be able to decrypt the cipher text and obtain the data.
Since the key sway is semi-trusted, it should be deterred from accessing the plaintext of the data in the supply node; meanwhile, it should still be able to issue secret keys to exploiters. In order to realize this somewhat contradictory requirement, the central sway and the local sway engage in an arithmetic 2PC protocol with master secret keys of their own, and issue independent key components to exploiters during the key issuing phase. The 2PC protocol prevents them from knowing each other's master secrets, so that none of them can generate the whole set of secret keys of exploiters individually. Thus, we make the assumption that the central sway does not collude with the local sway (otherwise, they could derive the secret keys of every exploiter by sharing their master secrets).
B. Security Requirements
1) Data confidentiality: Unauthorized exploiters who do
not have enough credentials satisfying the access policy
should be deterred from accessing the plain data in the
supply node. In addition, unauthorized access from the
supply node or key sway should also be prevented.
2) Collusion-resistance: If multiple exploiters collude,
they may be able to decrypt a cipher text by combining
their attributes even if each of the exploiters cannot
decrypt the cipher text alone. For example, suppose there
exists an exploiter with attributes {Battalion 1, Region 1} and another exploiter with attributes {Battalion 2, Region 2}. They may succeed in decrypting a cipher text encrypted under the access policy (Battalion 1 AND Region 2), even though neither of them can decrypt it individually. We do not want these colluders to be able to decrypt the secret information by combining their attributes. We also consider collusion attacks among curious local sway that attempt to derive exploiters' keys.
3) Backward and forward Secrecy: In the context of
ABE, backward secrecy means that any exploiter who
comes to hold an attribute (that satisfies the access policy)
should be prevented from accessing the plaintext of the
previous data exchanged before he holds the attribute. On
the other hand, forward secrecy means that any exploiter
who drops an attribute should be prevented from
accessing the plaintext of the subsequent data exchanged
after he drops the attribute, unless the other valid
attributes that he is holding satisfy the access policy.

Cryptographic Background
We first provide a formal definition of an access structure, recapitulating the definitions in [12] and [13]. Then, we briefly review the necessary facts about the bilinear map and its security assumption.
1) Access Structure: Let {P1, P2, ..., Pn} be a set of parties. A collection A ⊆ 2^{P1, P2, ..., Pn} is monotone if, for all B and C, B ∈ A and B ⊆ C imply C ∈ A. An access structure (respectively, monotone access structure) is a collection (respectively, monotone collection) A of nonempty subsets of {P1, P2, ..., Pn}. The sets in A are called the authorized sets, and the sets not in A are called the unauthorized sets.
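The monotonicity condition can be checked mechanically for small party sets. The following is our own illustrative sketch (the helper names are not part of the scheme): it enumerates the power set and verifies that every superset of an authorized set is itself authorized.

```python
from itertools import chain, combinations

def powerset(parties):
    """All subsets of `parties`, as frozensets."""
    s = sorted(parties)
    return (frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

def is_monotone(parties, authorized):
    """True iff B in authorized and B <= C imply C in authorized."""
    return all(c in authorized
               for b in authorized
               for c in powerset(parties) if b <= c)

parties = {"P1", "P2", "P3"}
# Authorized: {P1, P2} together with its only superset -> monotone
mono = {frozenset({"P1", "P2"}), frozenset({"P1", "P2", "P3"})}
# Dropping the superset {P1, P2, P3} breaks monotonicity
broken = {frozenset({"P1", "P2"})}

print(is_monotone(parties, mono))    # True
print(is_monotone(parties, broken))  # False
```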
2) Bilinear Pairings: Let G0 and G1 be multiplicative cyclic groups of prime order p, and let g be a generator of G0. A map e : G0 × G0 → G1 is said to be bilinear if e(u^a, v^b) = e(u, v)^(ab) for all u, v ∈ G0 and a, b ∈ Zp.
3) Bilinear Diffie-Hellman Assumption: Using the above notation, the Bilinear Diffie-Hellman (BDH) problem is to compute e(g, g)^(abc) ∈ G1 given a generator g of G0 and the elements g^a, g^b, g^c for a, b, c ∈ Zp*. An equivalent formulation of the BDH problem is to compute e(A, B)^c given a generator g of G0 and elements A, B, and g^c in G0.
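The bilinearity property can be illustrated numerically with a deliberately insecure toy "pairing": we take a tiny order-11 subgroup of Z_23* and recover exponents by brute-force discrete log, which is exactly the operation that must be infeasible in a real pairing group. The parameters and function names below are our own toy choices, not part of the scheme.

```python
# Toy numeric illustration of e(g^a, g^b) = e(g, g)^(ab). NOT a secure pairing.
p, q, g = 23, 11, 4          # 4 generates the order-11 subgroup of Z_23^*

def dlog(x):
    """Brute-force discrete log of x to base g (feasible only at toy sizes)."""
    return next(k for k in range(q) if pow(g, k, p) == x)

def e(x, y):
    """Toy symmetric 'pairing': e(g^a, g^b) = g^(a*b mod q)."""
    return pow(g, (dlog(x) * dlog(y)) % q, p)

a, b, c = 3, 7, 5
lhs = e(pow(g, a, p), pow(g, b, p))      # e(g^a, g^b)
rhs = pow(e(g, g), a * b, p)             # e(g, g)^(ab)
print(lhs == rhs)                        # True: bilinearity holds

# BDH flavor: from g^a, g^b, g^c one must compute e(g, g)^(abc); with the
# exponents known this is trivial, which is why dlog hardness is essential.
print(pow(lhs, c, p) == pow(e(g, g), (a * b * c) % q, p))  # True
```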


In this section, we provide a multi-sway CP-ABE scheme for secure data transmission in DTNs. Each local sway issues partial personalized and attribute key components to an exploiter by performing a secure 2PC protocol with the central sway. Each attribute key of an exploiter can be updated individually and immediately. Thus, the scalability and security of the proposed scheme are enhanced.
A. Access Tree
1) Description: Let T be a tree representing an access structure. Each non-leaf node of the tree represents a threshold gate. If num_x is the number of children of a node x and k_x is its threshold value, then 0 < k_x ≤ num_x. Each leaf node x of the tree is described by an attribute and a threshold value k_x = 1.
2) Satisfying an Access Tree: Let T_x be the subtree of T rooted at the node x. If a set of attributes γ satisfies the access tree T_x, we denote it as T_x(γ) = 1.
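The threshold-gate evaluation can be sketched as a small recursive predicate. The dictionary encoding and function name below are our own illustration of the satisfaction check T_x(γ), not the cryptographic scheme itself.

```python
# Toy evaluator for the access tree: each non-leaf node is a threshold gate
# {"k": threshold, "children": [...]}; each leaf {"attr": name} has k_x = 1.

def satisfies(node, attributes):
    """Return 1 if `attributes` satisfies the subtree rooted at `node`, else 0."""
    if "attr" in node:                                    # leaf: k_x = 1
        return 1 if node["attr"] in attributes else 0
    count = sum(satisfies(c, attributes) for c in node["children"])
    return 1 if count >= node["k"] else 0                 # threshold gate

# (Battalion 1 AND Region 2): a 2-of-2 gate over two attribute leaves
policy = {"k": 2, "children": [{"attr": "Battalion 1"}, {"attr": "Region 2"}]}
print(satisfies(policy, {"Battalion 1", "Region 2"}))     # 1
print(satisfies(policy, {"Battalion 1", "Region 1"}))     # 0

# ((Role 1 OR Role 2) AND (Region 1 OR Region 2)): OR as a 1-of-2 gate
policy2 = {"k": 2, "children": [
    {"k": 1, "children": [{"attr": "Role 1"}, {"attr": "Role 2"}]},
    {"k": 1, "children": [{"attr": "Region 1"}, {"attr": "Region 2"}]}]}
print(satisfies(policy2, {"Role 2", "Region 1"}))         # 1
```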
B. Scheme Construction
Let G0 be a bilinear group of prime order p, and let g be a generator of G0. Let e : G0 × G0 → G1 denote the bilinear map. A security parameter κ determines the size of the groups.
1) System Setup: At the initial system setup phase, the trusted initializer chooses a bilinear group G0 of prime order p with generator g according to the security parameter. It also chooses a hash function H : {0,1}* → G0 from a family of universal one-way hash functions. The public parameter param is given by (G0, g, H).
Central Key Sway: The CA chooses a random exponent β ∈R Zp*. Its master public/private key pair is given by (PK_c = h = g^β, MK_c = β).
Local Key Sway: Each Ai chooses a random exponent αi ∈R Zp*. Its master public/private key pair is given by (PK_i = e(g, g)^(αi), MK_i = αi).
2) Key Generation: In CP-ABE, exploiter secret key
components consist of a single personalized key and
multiple attribute keys. The personalized key is uniquely
determined for each exploiter to prevent collusion attack
among exploiters with different attributes. The proposed
key generation protocol is composed of the personal key
generation followed by the attribute key generation
protocols. It exploits an arithmetic secure 2PC protocol to eliminate the key escrow problem, such that none of the sway can determine the whole key components of exploiters individually.
During the key generation phase, the proposed scheme (specifically, its 2PC protocol) adds (3m + 1)·C0 bits to the key issuing overhead of the previous multi-sway ABE schemes in terms of communication cost, where m is the number of key sway the exploiter is associated with, and C0 is the bit size of an element in G0. However, it is important to note that the 2PC protocol is performed only once, during the initial key generation phase for each exploiter. Therefore, its cost is negligible compared to the communication overhead for encryption or key update, which may be performed much more frequently in the network.
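As a back-of-envelope check of this cost, reading it as (3m + 1)·C0 bits with C0 = 512 bits (the G0 element size at the 80-bit security level given in the implementation section), the one-time overhead stays small even for many sway. The helper name is our own.

```python
# One-time 2PC key-issuing overhead, read as (3m + 1) * C0 bits, where m is
# the number of key sway an exploiter is associated with and C0 is the bit
# size of a G0 element (512 bits at the 80-bit security level).

def two_pc_overhead_bits(m, c0_bits=512):
    return (3 * m + 1) * c0_bits

for m in (1, 10):
    bits = two_pc_overhead_bits(m)
    print(m, bits, round(bits / 8 / 1024, 2))  # m=10 -> 15872 bits (~1.94 KB)
```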
C. Revocation
We observed that it is impossible to revoke specific attribute keys of an exploiter without rekeying the exploiter's whole set of key components in the ABE key structure, since the whole key set of an exploiter is bound to the same random value in order to prevent any collusion attack. Therefore, revoking a single attribute in the system requires all exploiters who share that attribute to update all of their key components, even if their other attributes are still valid. This is very inefficient and may cause severe overhead in terms of computation and communication cost, especially in large-scale networks.
One promising way to immediately revoke an attribute of specific exploiters is to re-encrypt the cipher text with each attribute group key and selectively distribute the attribute group key to the authorized (non-revoked) exploiters who are qualified with the attribute. Before distributing the cipher text, the supply node receives a set of membership information for each attribute group G that appears in the access tree of the cipher text CT from the corresponding sway, and re-encrypts the cipher text. It also generates a header message in which each component contains the encrypted attribute group keys, which can be decrypted only by non-revoked attribute group members. This can be done by exploiting many previous stateful or stateless group key handler schemes. We adopt the complete subtree method, which requires each exploiter to store additional key encryption keys (KEKs). The size of the header message for each attribute group is bounded in terms of the number of all exploiters in the system and the number of exploiters in the attribute group.
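The complete subtree method named above can be sketched as follows. Exploiters sit at the leaves of a binary tree (heap-style indices with root = 1, our own encoding); each exploiter stores the KEKs on its root-to-leaf path, and the attribute group key is encrypted under a minimal set of subtree roots covering exactly the non-revoked leaves.

```python
# Sketch of the complete subtree (CS) cover computation for attribute group
# key distribution; indexing and names are illustrative, not from the paper.

def cover(node_id, leaves, revoked):
    """Return subtree roots (heap indices) covering the non-revoked leaves."""
    if all(u in revoked for u in leaves):
        return []                 # fully revoked subtree: send nothing for it
    if not any(u in revoked for u in leaves):
        return [node_id]          # clean subtree: one KEK covers all its leaves
    mid = len(leaves) // 2        # mixed subtree: recurse into both children
    return (cover(2 * node_id, leaves[:mid], revoked)
            + cover(2 * node_id + 1, leaves[mid:], revoked))

users = list(range(8))            # 8 exploiters in one attribute group
print(cover(1, users, {5}))       # [2, 12, 7]: 3 encryptions instead of 7
print(cover(1, users, set()))     # [1]: nobody revoked, one encryption suffices
```

Revoked exploiter 5 lies below every subtree root chosen for its sibling leaves, so none of its stored KEKs matches an entry in the header; every other exploiter finds exactly one.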
D. Key Update
When an exploiter comes to hold or drop an attribute, the corresponding key should be updated to prevent the exploiter from accessing the previous or subsequent encrypted data, for backward or forward secrecy, respectively. The key update procedure is launched when an exploiter who wants to hold or drop an attribute sends a join or leave request for the corresponding attribute group to the corresponding sway. On receipt of the membership change request for some attribute group, the sway notifies the supply node of the event. Without loss of generality, suppose there is a membership change in Gi.

In this section, we first analyze and compare the efficiency of the proposed scheme to the previous multi-sway CP-ABE schemes in theoretical aspects. Then, the efficiency of the proposed scheme is demonstrated in a network simulation in terms of communication cost. We also discuss its efficiency when implemented with specific parameters and compare these results to those obtained by the other schemes.
A. Efficiency
We compare the logical expressiveness of the access structure that can be defined under different disjoint sets of attributes (managed by different sway), the key escrow property, and the revocation granularity of each CP-ABE scheme. In the proposed scheme, the logic can be very expressive, as in a single-sway system such as BSW [13]: the access policy can be expressed with any monotone access structure under attributes of any chosen set of sway. The HV [9] and RC [4] schemes, in contrast, only allow the AND gate among the sets of attributes managed by different sway. Revocation can be done in an immediate way, as opposed to BSW. Therefore, attributes of exploiters can be revoked at any time, even before the expiration time that might be set to the attributes.
B. Simulation
In this simulation, we consider DTN applications over the Internet protected by attribute-based encryption. The simulation is carried out in Network Simulator 2 (NS2). The key components of NS2 include simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules used are nodes, links, simple link objects, packets, agents, and applications, together with three helper modules: timers, random number generators, and error models. NS2 also provides debugging, variable and packet tracing, and result compilation, and simulations are scripted in Tcl/OTcl, with AWK used for post-processing, alongside the object-oriented programming used extensively in NS2.





In this section, we prove the security of our scheme with regard to the security requirements described above.

Fig. 2. Number of exploiters in an attribute group.

Fig. 2 represents the number of current exploiters and
revoked exploiters in an attribute group during 100 h.

Fig. 3. Communication cost in the multi-sway CP-ABE schemes.

Fig. 3 shows the total communication cost that the transmitter or the supply node needs to send on a membership change in each multi-sway CP-ABE scheme. It includes the cipher text and the rekeying messages for non-revoked exploiters, measured in bits. In this simulation, the total number of exploiters in the network is 10,000, and the number of attributes in the system is 30. The number of key sway is 10, and the average number of attributes associated with an exploiter's key is
C. Implementation
Next, we analyze and measure the computation cost for encrypting (by a transmitter) and decrypting (by an exploiter) data. We used a Type-A curve in the pairing-based cryptography (PBC) library, providing groups in which a bilinear map e : G0 × G0 → G1 is defined. Although such curves provide good computational efficiency (especially for pairing computation), the same does not hold from the point of view of the space required to represent group elements. Indeed, each element of G0 needs 512 bits at an 80-bit security level and 1536 bits when 128 bits of security are required.


A. Collusion Resistance
In CP-ABE, the secret sharing must be embedded into the cipher text instead of into the private keys of exploiters. As in the previous ABE schemes, the private keys (SK) of exploiters are randomized with personalized random values selected by the CA, such that they cannot be combined in this scheme.
Another collusion attack scenario is collusion between revoked exploiters in order to obtain the valid attribute group keys for some attributes that they are not authorized to have (e.g., due to revocation). The attribute group key distribution protocol, which is the complete subtree method in the proposed scheme, is secure in terms of key indistinguishability. Thus, the colluding revoked exploiters can by no means obtain any valid attribute group keys for attributes that they are not authorized to hold.
B. Data Confidentiality
In our trust model, neither the multiple key sway nor the supply node is fully trusted, even though they are honest. Therefore, the plain data to be stored should be kept secret from them as well as from unauthorized exploiters. Data confidentiality of the stored data against unauthorized exploiters can be trivially guaranteed: if the set of attributes of an exploiter cannot satisfy the access tree in the cipher text, he cannot recover the desired value e(g, g)^(rs) during the decryption process, where r is a random value uniquely assigned to him.
Another attack on the stored data can be launched by the supply node and the key sway. Since they cannot be totally trusted, confidentiality of the stored data against them is another essential security criterion for secure data retrieval in DTNs. The local sway issue a set of attribute keys for the attributes they manage to an authenticated exploiter; these keys are blinded by secret information distributed to the exploiter by the CA. They also issue the exploiter a personalized secret key by performing the secure 2PC protocol with the CA. The key generation protocol prevents each party from obtaining the other's master secret key and from determining the secret key issued by the other. Therefore, they do not have enough information to determine the whole set of secret keys of an exploiter individually. Even though the supply node manages the attribute group keys, it cannot decrypt any of the nodes in the access tree in the cipher text. This is because it is only authorized to re-encrypt the cipher text with each attribute group key, but is not allowed to decrypt it (that is, none of the key components of exploiters are given to the node). Therefore, data confidentiality




against the curious key sway and supply node is also guaranteed.
C. Backward and Forward Secrecy
When an exploiter comes to hold a set of attributes that satisfy the access policy in the cipher text at some time instance, the corresponding attribute group keys are updated and delivered to the valid attribute group members securely (including the exploiter). In addition, all of the components encrypted with a secret key in the cipher text are re-encrypted by the supply node with a new random value, and the cipher text components corresponding to the attributes are also re-encrypted with the updated attribute group keys. Even if the exploiter has stored the previous cipher text exchanged before he obtained the attribute keys, and his holding attributes now satisfy the access policy, he cannot decrypt the previous cipher text (backward secrecy).
On the other hand, when an exploiter comes to drop a set of attributes that satisfy the access policy at some time instance, the corresponding attribute group keys are also updated and delivered to the valid attribute group members securely (excluding the exploiter). Then, all of the components encrypted with a secret key in the cipher text are re-encrypted by the supply node with a new random value s', and the cipher text components corresponding to the attributes are also re-encrypted with the updated attribute group keys. The exploiter then cannot decrypt any nodes corresponding to the attributes after revocation, due to the blindness resulting from the newly updated attribute group keys. In addition, even if the exploiter had obtained e(g, g)^((α1+...+αm)s) before he was revoked from the attribute groups and stored it, it will not help to decrypt the subsequent cipher text e(g, g)^((α1+...+αm)(s+s')) re-encrypted with the new random s'. Therefore, the forward secrecy of the stored data is guaranteed in this scheme.

DTN technologies are becoming successful solutions in military applications that allow wireless devices to communicate with each other and access confidential information reliably by exploiting external supply nodes. CP-ABE is a scalable cryptographic solution to the access control and secure data retrieval issues. In this paper, we proposed an efficient and secure data retrieval method using CP-ABE for decentralized DTNs, where multiple key sway manage their attributes independently. The inherent key escrow problem is resolved, such that the confidentiality of the stored data is guaranteed even in a hostile environment where key sway might be compromised or not fully trusted. In addition, fine-grained key revocation can be done for each attribute group. We demonstrated how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant defense network.



[1] J. Burgess, B. Gallagher, D. Jensen, and B. N. Levine, "MaxProp: Routing for vehicle-based disruption-tolerant networks," 2006.
[2] M. Chuah and P. Yang, "Node density-based adaptive routing scheme for disruption tolerant networks," 2006.
[3] M. M. B. Tariq, M. Ammar, and E. Zegura, "Message ferry route design for sparse ad hoc networks with mobile nodes," in Proc. ACM MobiHoc, 2006.
[4] S. Roy and M. Chuah, "Secure data retrieval based on ciphertext policy attribute-based encryption (CP-ABE) system for the DTNs," Lehigh CSE Tech. Rep., 2009.
[5] M. Chuah and P. Yang, "Performance evaluation of content-based information retrieval schemes for DTNs."
[6] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu, "Plutus: Scalable secure file sharing on untrusted storage," 2003.
[7] L. Ibraimi, M. Petkovic, S. Nikova, P. Hartel, and W. Jonker, "Mediated ciphertext-policy attribute-based encryption and its application," 2009.
[8] N. Chen, M. Gerla, D. Huang, and X. Hong, "Secure, selective group broadcast in vehicular networks using dynamic attribute based encryption," 2010.
[9] D. Huang and M. Verma, "ASPE: Attribute-based secure policy enforcement in vehicular ad hoc networks."
[10] A. Lewko and B. Waters, "Decentralizing attribute-based encryption," Cryptology ePrint Archive, Rep. 2010/351, 2010.
[11] A. Sahai and B. Waters, "Fuzzy identity-based encryption," in Proc. Eurocrypt, 2005.
[12] V. Goyal, O. Pandey, A. Sahai, and B. Waters, "Attribute-based encryption for fine-grained access control of encrypted data," 2006.
[13] J. Bethencourt, A. Sahai, and B. Waters, "Ciphertext-policy attribute-based encryption," 2007.
[14] R. Ostrovsky, A. Sahai, and B. Waters, "Attribute-based encryption with non-monotonic access structures."
[15] S. Yu, C. Wang, K. Ren, and W. Lou, "Attribute based data sharing with attribute revocation," 2010.
[16] A. Boldyreva, V. Goyal, and V. Kumar, "Identity-based encryption with efficient revocation," 2008.
[17] M. Pirretti, P. Traynor, P. McDaniel, and B. Waters, "Secure attribute-based systems," 2006.





Load Stabilizing and Energy Conserving Routing Protocol for Wireless Sensor Networks
Janakiraman V.
M.E(Computer Science and Engineering)
Sree Sowdambika College of Engineering
Aruppukottai, Tamilnadu, India.

Mr. G.Vadivel Murugan M.E.,

Assistant Professor
Dept of Computer Science and Engineering
Sree Sowdambika College of Engineering
Aruppukottai, Tamilnadu, India.

Abstract - A wireless sensor network (WSN) is a system consisting of a massive collection of low-cost micro-sensors, used to gather various types of messages and send them to a base station (BS). Because a WSN consists of low-cost nodes with limited battery capacity, and battery replacement is impractical in a WSN with thousands of deployed nodes, an energy-conserving routing protocol should be employed to provide a long working lifetime. To accomplish this aim, we need not only to minimize total energy consumption but also to balance the WSN load. Researchers have proposed many protocols such as LEACH, HEED, TBC, PEGASIS and PEDAP. In this paper, we propose a Load Stabilizing Tree Based Energy Conserving Routing Protocol (LSTEC), which constructs a routing tree using a process in which, for each round, BS assigns a root node and broadcasts this selection to all sensor nodes. Each node then chooses its parent by considering only its own and its neighbors' information, making LSTEC a dynamic protocol. Simulation results show that LSTEC balances energy consumption better than other protocols, thus extending the lifetime of the WSN.
Keywords: Energy utilization, Load balance, Network lifetime, Routing protocol, Tree based, Wireless Sensor Network.

Generally, wireless sensor nodes are deployed randomly and densely in a target area, especially where the physical environment is so harsh that their macro-sensor counterparts cannot be deployed. After deployment, if there is insufficient battery power, the network cannot work properly [1], [2], [3]. In general, a WSN may generate quite a large amount of data, so if data fusion can be used, the traffic can be decreased [4]. Because sensor nodes are deployed densely, a WSN may produce redundant data at multiple nodes, and this redundant data can be combined to reduce communication. Many familiar protocols perform data fusion, but almost all of them assume that the length of the message dispatched by each relay node is constant, i.e., each node sends the same amount of data no matter how much data it receives from its child nodes [10].
PEDAP [8] and PEGASIS [7] are conventional protocols based on this assumption, and they perform far better than HEED and LEACH in this case. However, there are quite a few applications in which the length of the message transferred by a parent node depends not only on the length of its own data, but also on the length of the messages received from its child nodes [10].

ISBN NO : 978 - 1502893314

Mr. M.Senthil Kumar M.Tech.,

Assistant Professor
Dept of Computer Science and Engineering
Sree Sowdambika College of Engineering
Aruppukottai, Tamilnadu, India.

Energy consumption of a node is due to either useful or "wasteful" operations. The useful operations include transmitting or receiving data messages and processing requests. The wasteful consumption is due to routing tree construction overhead, overhearing, retransmission caused by a harsh environment, and dealing with redundant broadcast overhead messages.
In this paper, we propose the Load Stabilizing Tree Based Energy Conserving Routing Protocol (LSTEC). We consider a scenario in which the network collects data periodically from a field where each node continuously senses the environment and sends the data back to BS [9].
In general, there are two definitions of network lifetime:
a) The time from the start of network operation to the death of the first node in the network [8].
b) The time from the start of network operation to the death of the last node in the network.
In this paper, we adopt the first definition. Moreover, we consider two extreme cases of data fusion:
Case (1): The data from any sensor nodes can be fused totally. Each node sends the same volume of data no matter how much data it receives from its child nodes.
Case (2): The data cannot be fused. The length of the message transmitted by each relay node is the sum of its own sensed data and the data received from its child nodes.
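The two fusion cases can be sketched as a small helper; `relayed_length` is a hypothetical illustrative function, not part of any protocol implementation:

```python
def relayed_length(own_len, child_lens, fusible):
    """Length of the message a relay node transmits upstream.

    Case 1 (fusible=True): data from child nodes is fully fused, so the
    outgoing message keeps the node's own fixed length.
    Case 2 (fusible=False): no fusion is possible, so the node forwards
    its own data plus everything received from its children.
    """
    if fusible:
        return own_len                # Case 1: constant-length output
    return own_len + sum(child_lens)  # Case 2: lengths accumulate
```

For example, a node with a 2000-bit reading and two children each sending 2000 bits relays 2000 bits in Case 1 but 6000 bits in Case 2.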
The rest of the paper is organized as follows. Section II reviews related work. The radio and network models of our proposal are discussed in Section III. Section IV describes the detailed architecture of LSTEC. In Section V we compare our simulations with those of other protocols. Finally, Section VI concludes the paper.
A major task of a WSN is to collect information periodically from the area of interest and transmit it to BS. A simple method to achieve this is for each sensor node to transmit its data directly to BS. However, when BS is sited a long way from the target area, the sensor nodes die rapidly due to high energy consumption. Moreover, since the distance between each node and BS differs, direct transmission may lead to unbalanced energy consumption. To solve these problems, many protocols have been proposed, such as LEACH, PEGASIS, HEED and PEDAP.


In LEACH [4], [5], a fraction p of all sensor nodes is selected to serve as cluster heads (CHs), where p is a design parameter. The operation of LEACH is split into rounds, each comprising a set-up phase and a steady-state phase. During the set-up phase, each node decides whether or not to become a CH according to a predefined criterion. After CHs are selected, each of the remaining nodes chooses its own CH and joins that cluster according to the signal strength of the received messages; each node chooses the nearest CH. During the steady-state phase, CHs fuse the data received from their cluster members and send the fused data to BS over a single-hop connection. LEACH uses randomization to rotate CHs every round in order to distribute the energy consumption evenly. Thus LEACH reduces the amount of data transmitted directly to BS and balances the WSN load, attaining a factor-of-8 improvement compared with direct transmission.
In [6], the authors proposed a hybrid, energy-efficient, distributed clustering algorithm (HEED). HEED is
an enhancement of LEACH on the method of CH choosing. In
each round, HEED selects CHs according to the remaining
energy of each node and a secondary parameter such as nodes
proximity to their neighbors or nodes degrees. By iterations
and competition, HEED ensures only one CH within a certain
range, so uniform CHs distribution is achieved across the
network. Compared with LEACH, HEED effectively prolongs
network lifetime and is suitable for situations such as where
each node has different initial energy.
For Case1, LEACH and HEED greatly reduce total energy consumption. However, both consume energy heavily in the head nodes, which makes the head nodes die quickly. S. Lindsey et al. proposed an algorithm related to LEACH called PEGASIS [7]. PEGASIS is a nearly optimal power-efficient protocol which uses a greedy algorithm to make all the sensor nodes in the network form a chain. In PEGASIS, the (i mod N)th node is chosen as the leader in round i, and the leader is the only node which needs to communicate with BS; N is the total number of nodes. Data collection starts from both endpoints of the chain, and data is transmitted along the chain, fused each time it passes from one node to the next, until it reaches the leader. PEGASIS thus sharply reduces the total amount of data sent over long distances and outperforms LEACH by 100% to 300% in terms of network lifetime.
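The chain construction and leader rotation described above can be sketched as follows; the starting rule (beginning at the node farthest from BS) and the coordinate handling are illustrative assumptions:

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_chain(coords, bs):
    """Greedily build a PEGASIS-style chain: start at the node farthest
    from BS, then repeatedly append the nearest unvisited node."""
    start = max(range(len(coords)), key=lambda i: dist(coords[i], bs))
    chain, unvisited = [start], set(range(len(coords))) - {start}
    while unvisited:
        last = chain[-1]
        nxt = min(unvisited, key=lambda i: dist(coords[i], coords[last]))
        chain.append(nxt)
        unvisited.remove(nxt)
    return chain

def leader(round_i, n):
    """In round i, the (i mod N)th node serves as the leader."""
    return round_i % n
```

Because each step only looks at the nearest remaining node, the chain can contain a few long links, which is exactly the unbalanced-load weakness discussed later.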
In our work, we assume that the system model has the following properties:
- Sensor nodes are randomly distributed in a square field, and there is only one BS, deployed far away from the area.
- Sensor nodes are stationary and energy constrained. Once deployed, they keep operating until their energy is exhausted.
- BS is stationary, but it is not energy constrained.
- All sensor nodes have power control capabilities; each node can change its power level and communicate with BS directly.
- Sensor nodes are location-aware. A sensor node can obtain its location through mechanisms such as GPS or positioning algorithms.
- Each node has a unique identifier (ID).
For both cases, the medium is assumed to be symmetric, so the energy required to transmit a message from node A to node B is the same as from node B to node A.
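The paper's radio-model section is not reproduced here; the sketch below uses the first-order radio model commonly assumed in LEACH/PEGASIS comparisons. The constants `E_ELEC` and `EPS_AMP` are illustrative assumptions, not values taken from this paper:

```python
# First-order radio model, commonly used to evaluate LEACH/PEGASIS-style
# protocols. The constants below are illustrative assumptions.
E_ELEC = 50e-9     # J/bit, transceiver electronics energy
EPS_AMP = 100e-12  # J/bit/m^2, free-space amplifier energy

def tx_energy(k_bits, d_m):
    """Energy to transmit k bits over distance d (meters)."""
    return k_bits * E_ELEC + k_bits * EPS_AMP * d_m ** 2

def rx_energy(k_bits):
    """Energy to receive k bits."""
    return k_bits * E_ELEC
```

Under this model the medium is symmetric by construction: the cost depends only on the message length and the distance, not on the direction.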
The main aim of LSTEC is to achieve a longer
network lifetime for different applications. In each round, BS
assigns a root node and broadcasts its ID and its coordinates to
all sensor nodes. Then the network computes the paths either by transmitting the path information from BS to the sensor nodes or by having each node dynamically and individually build the same tree structure. In both cases, LSTEC can change the root and reconstruct the routing tree with short delay and low energy consumption, so a better-balanced load is achieved compared with the protocols mentioned in Section II.
The operation of LSTEC is divided into Initial Phase,
Tree Construction Phase, Data Collection and Transmission
Phase, and Information Exchange Phase.
A. Initial Phase
In Initial Phase, the network parameters are
initialized. Initial Phase is divided into three steps.
Step 1: When Initial Phase begins, BS broadcasts a packet to all the nodes to inform them of the beginning time, the length of a time slot and the number of nodes N. When the nodes receive this packet, each computes its own energy level (EL), a parameter used for load balancing.
Step 2: After Step 1, each node broadcasts a packet within a circle of a certain radius during its own time slot. For example, in the ith time slot, the node whose ID is i sends out its packet, which contains a preamble and information such as the coordinates and EL of node i. All other nodes monitor the channel during this time slot; those that are neighbors of node i receive the packet and record the information of node i in memory. After all nodes have sent their information, each node holds a table in memory containing the information of all its neighbors.
Step 3: When Step 2 is over, each node sends a packet containing all its neighbors' information during its own time slot. Its neighbors receive this packet and record the information in memory. The lengths of the time slots in Steps 2 and 3 are predefined, so when the time is up, every node has sent its information before Initial Phase ends. After Initial Phase, each node holds two tables in memory containing the information of all its neighbors and of its neighbors' neighbors. These two tables are referred to as Table I and Table II.
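The bookkeeping built up in Initial Phase can be sketched as follows; the dictionary layout and field names are illustrative assumptions, not the paper's data structures:

```python
import math

def build_tables(nodes, radius):
    """Sketch of Initial Phase bookkeeping: Table I maps each node to its
    neighbors within the broadcast radius; Table II additionally records,
    for every neighbor, that neighbor's own neighbor list."""
    def near(a, b):
        return a != b and math.dist(nodes[a]["xy"], nodes[b]["xy"]) <= radius

    # Table I: neighbors of each node (Step 2).
    table1 = {i: [j for j in nodes if near(i, j)] for i in nodes}
    # Table II: neighbors' neighbors, learned in Step 3.
    table2 = {i: {j: table1[j] for j in table1[i]} for i in nodes}
    return table1, table2
```

In a real deployment each node would learn only its own rows from overheard packets; computing both tables centrally here just keeps the sketch short.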
B. Tree Construction Phase
Within each round, LSTEC performs the following steps to build a routing tree. Between Case1 and Case2 there are some differences in the construction steps:
Step 1: BS assigns a node as root and broadcasts root
ID and root coordinates to all sensor nodes.

For Case1, because data fusion is implemented, only one node communicates directly with BS, transmitting all the data with the same length as its own, which results in much less energy consumption. To balance the network load in Case1, in each round the node with the largest residual energy is chosen as root. The root collects the data of all sensors and transmits the fused data to BS over a long distance.
For Case2, because data cannot be fused, making fewer nodes communicate directly with BS does not save transmission energy. If a single sensor node collected all the data and sent it to BS, it would deplete its energy quickly. In this case BS always assigns itself as root.
Step 2: Each node tries to select a parent among its neighbors using the EL values and coordinates recorded in Table I. The selection criteria are:
1) For both Case1 and Case2, the distance between a node's parent and the root should be shorter than the distance between the node itself and the root.
2) For Case1, each node chooses as its parent the neighbor that satisfies criterion 1 and is nearest to itself. If the node cannot find a neighbor satisfying criterion 1, it selects the root as its parent.



3) For Case2, the Tree Construction Phase can be regarded as an iterative algorithm. Besides criterion 1, for a given sensor node, only the nodes with the largest EL among its neighbors and itself can act as relay nodes. If the sensor node itself has the largest EL, it is also considered an imaginary relay node. The parent is chosen from the relay nodes based on energy consumption: each candidate's consumption is the sum of the consumption from the sensor node to the relay node and that from the relay node to BS. The relay node causing the minimum consumption is chosen as the parent. This relay node chooses its own parent in the same way, so a path with minimum consumption is found by iteration. By using EL, LSTEC chooses the nodes with more residual energy to transmit data over long distances.
If the sensor node cannot find a suitable parent node,
it will transmit its data directly to BS.
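For Case1, the selection criteria above reduce to a short routine; the function name and the `"root"` fallback token are illustrative:

```python
import math

def choose_parent_case1(node, root, neighbors):
    """Case 1 parent selection: among neighbors closer to the root than
    this node is (criterion 1), pick the one nearest to the node itself;
    fall back to the root if no neighbor qualifies.
    `node` and `root` are (x, y) tuples; `neighbors` maps id -> (x, y)."""
    d_self = math.dist(node, root)
    # Criterion 1: the parent must be closer to the root than we are.
    candidates = {i: xy for i, xy in neighbors.items()
                  if math.dist(xy, root) < d_self}
    if not candidates:
        return "root"  # no qualifying neighbor: attach to the root
    # Criterion 2 (Case 1): among candidates, choose the nearest to us.
    return min(candidates, key=lambda i: math.dist(candidates[i], node))
```

Criterion 1 is what forces every hop to make progress toward the root, preventing the disconnected clusters discussed below.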
Step 3: Because every node chooses its parent from among its neighbors, and every node records its neighbors' neighbors' information in Table II, each node can compute all its neighbors' parent nodes, and hence it also knows all its own child nodes. If a node has no child node, it defines itself as a leaf node, from which data transmission begins.
As discussed above, for Case1, because each packet sent to a parent node is fused, minimum energy consumption would be achieved if each node chose the node nearest to it. But if all nodes choose their nearest neighbors, the network may not be able to build a tree. Fig. 1 shows a network of 100 nodes in this situation: some clusters are formed, but they cannot connect with the others. Thus
in LSTEC we use criterion 1 for Case1 to limit the search direction. With this approach a routing tree is constructed, and some nodes still retain the possibility of connecting to their nearest neighbors. For Case2, criterion 1 should also be obeyed; it helps to save transmission energy to a certain extent. To build a routing tree for Case1, each node follows the steps above; for Case2, BS computes the topology.
C. Data Collection and Transmission Phase
After the routing tree is constructed, each sensor node collects information to generate a DATA_PKT which needs to be transmitted to BS. For Case1, TDMA and Frequency Hopping Spread Spectrum (FHSS) are both applied. This phase is divided into several TDMA time slots. In a time slot, only the leaf nodes try to send their DATA_PKTs. After a node receives all the data from its child nodes, it serves as a leaf node itself and tries to send the fused data in the next time slot.
Each TDMA time slot is divided into three segments as follows (see Fig. 2).
Segment 1: The first segment is used to check whether there is communication interference at a parent node. In this segment, each leaf node sends a beacon containing its ID to its parent node, all at the same time.
Three situations may occur, dividing the parent nodes into three kinds. First, if no leaf node needs to transmit data to the parent node in this time slot, the parent receives nothing. Second, if more than one leaf node needs to transmit to the parent node, it receives a corrupted beacon. Third, if exactly one leaf node needs to transmit, the parent receives a correct beacon.



Segment 2: During the second segment, the leaf nodes that may transmit their data are confirmed. In the first situation, the parent node switches to sleep mode until the next time slot starts. In the second situation, the parent node sends a control packet choosing one of its child nodes to transmit in the next segment. In the third situation, the parent node sends a control packet telling that leaf node to transmit in the next segment.
Segment 3: The permitted leaf nodes send their data to their parent nodes, while the other leaf nodes switch to sleep mode. The process in one time slot is shown in Fig. 2.
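Segment 1's three outcomes can be sketched as a classification of the beacons one parent hears in a slot; this is an illustrative sketch, not the paper's implementation, and the arbitration rule (pick the first child) is an assumption:

```python
def classify_slot(beacons):
    """Segment 1 outcome for one parent node, given the list of child IDs
    whose beacons arrived simultaneously in this TDMA slot.

    - no beacon  -> parent sleeps until the next slot
    - >1 beacon  -> collision: parent arbitrates, granting one child
    - exactly 1  -> that leaf is granted Segment 3 for its data
    """
    if not beacons:
        return ("sleep", None)
    if len(beacons) > 1:
        return ("arbitrate", beacons[0])  # choose one child, e.g. the first
    return ("grant", beacons[0])
```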
For Case2, each node chooses its parent by considering not the distance but the total energy consumption. As our simulation results will show, many leaf nodes may share one parent node in one time slot. If all the leaf nodes try to transmit at the same time, the data messages sent to the same parent node may interfere with each other. By applying Frequency Division Multiple Access (FDMA) or Code Division Multiple Access (CDMA), a schedule generated under competition can avoid collisions; however, the accompanying massive control packets would waste a large amount of energy. LSTEC therefore uses a much simpler process. At the beginning of each round, the operation is again divided into several time slots. In the ith time slot, the node whose ID is i turns on its radio and receives the message from BS. BS uses the same approach to construct the routing tree in each round, and then tells the sensor nodes when to send or receive data.
D. Information Exchange Phase
For Case1, since each node needs to generate and transmit a DATA_PKT in each round, it may exhaust its energy and die. The death of any sensor node can influence the topology, so nodes that are about to die need to inform the others. This process is also divided into time slots. In each time slot, the nodes whose energy is about to be exhausted compute a random delay, so that only one node broadcasts in the slot. When its delay ends, such a node broadcasts a packet to the whole network. All other nodes, monitoring the channel, receive this packet, perform an ID check and then update their tables. If no such packet is received in the time slot, the network starts the next round.
For Case2, BS collects the initial EL and coordinate information of all the sensor nodes in Initial Phase. In each round, BS builds the routing tree and the network schedule using this EL and coordinate information. Once the routing tree is built, BS can calculate the energy consumption of each sensor node in the round, so the information needed to calculate the topology for the next round is known in advance. However, because a WSN may be deployed in an unfriendly environment, the actual EL of each sensor node may differ from the EL calculated by BS. To cope
with this problem, each sensor node calculates its EL and detects its actual residual energy in each round. We define

the calculated EL as EL1 and the actual EL as EL2. When the two ELs of a sensor node differ, the node raises an error flag and packs its actual residual energy into the DATA_PKT sent to BS. When this DATA_PKT is received, BS obtains the actual residual energy of the node and uses it to calculate the topology for the next round.
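The EL1/EL2 comparison can be sketched as follows; the field names and the optional tolerance are illustrative assumptions:

```python
def check_energy(el_calculated, el_actual, tolerance=0.0):
    """Compare the BS-calculated energy level (EL1) with the level the
    node measures itself (EL2). On a mismatch, raise an error flag so
    the actual residual energy is piggybacked onto the next DATA_PKT."""
    if abs(el_calculated - el_actual) > tolerance:
        return {"error_flag": True, "residual_energy": el_actual}
    return {"error_flag": False}
```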
A MATLAB simulation of LSTEC was carried out for both Case1 and Case2 to evaluate the performance. For Case1, we first compare LSTEC with PEGASIS, using the same network model as PEGASIS. We generate randomly distributed networks of 100 to 400 nodes over a 100 m x 100 m square area with BS located at (50 m, 175 m), and use a DATA_PKT length of 2000 bits and


CTRL_PKT length of 100 bits. Each node has 0.25 J of initial energy. Fig. 3 and Fig. 4 show the routing trees generated by LSTEC and PEGASIS for exactly the same 100-node topology.
In Fig. 3 the triangle is the root node; in Fig. 4 the triangle is the head node and the rectangle is the tail node. As seen, the routing tree generated by LSTEC is better. Since PEGASIS uses a greedy algorithm to form a chain, long links may exist between parent and child nodes, which causes an unbalanced load. In LSTEC, each node tends to choose its nearest neighbor, avoiding long links. Fig. 5 shows how the time when the first node dies varies as the number of nodes in the network grows from 100 to 400. LSTEC performs much better than PEGASIS, prolonging network lifetime by about 100% to 300%.

Fig. 5. For Case1, comparison of the time when the first node dies for LSTEC and PEGASIS as the number of nodes varies from 100 to 400.

Fig. 6. For Case1, comparison of the time when the first node dies for LSTEC and HEED as the number of nodes varies from 100 to 400.



Fig. 6 shows how the time when the first node dies varies as the number of nodes in the network grows from 100 to 400. Clearly, LSTEC performs better than HEED and prolongs the network lifetime by more than 100%.
In this work we introduced LSTEC, together with two definitions of network lifetime and two extreme cases of data fusion. The simulations show that when the data collected by sensors is strongly correlated, LSTEC outperforms LEACH, PEGASIS and HEED, because LSTEC consumes only a small amount of energy in each round to reshape the topology for the purpose of balancing the energy consumption. All the leaf nodes can transmit data in the same TDMA time slot, so the transmission delay is short. When lifetime is defined as the time from the start of network operation to the death of the first node, LSTEC prolongs the lifetime by 100% to 300% compared with PEGASIS. In some cases we are more interested in the lifetime of the last node in the network; with some slight changes, the performance of LSTEC becomes similar to that of PEDAP. So LSTEC is nearly the optimal solution for Case1. When the data collected by sensors cannot be fused, LSTEC offers another simple approach to balancing the network load; in fact, it is difficult to distribute the load evenly over all nodes in such a case. Even though LSTEC needs BS to compute the topology, which leads to extra energy use and a longer delay, these costs are acceptable compared with the energy consumption and time delay of data transmission. Simulation results show that when lifetime is defined as the time from the start of network operation to the death of the first node, LSTEC prolongs the network lifetime by more than 100% compared with HEED.
[1] K. Akkaya and M. Younis, A survey of routing protocols in wireless sensor networks, Elsevier Ad Hoc Network J., vol. 3, no. 3, pp. 325-349, 2005.
[2] I. F. Akyildiz et al., Wireless sensor networks: A survey, Computer Netw., vol. 38, pp. 393-422, Mar. 2002.
[3] K. T. Kim and H. Y. Youn, Tree-Based Clustering (TBC) for energy efficient wireless sensor networks, in Proc. AINA 2010, 2010.
[4] M. Liu, J. Cao, G. Chen, and X. Wang, An energy-aware routing protocol in wireless sensor networks, Sensors, vol. 9, pp. 445-462, 2009.
[5] W. Liang and Y. Liu, Online data gathering for maximizing network lifetime in sensor networks, IEEE Trans. Mobile Computing, vol. 6, no. 1, pp. 2-11, 2007.
[6] O. Younis and S. Fahmy, HEED: A hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks, IEEE Trans. Mobile Computing, vol. 3, no. 4, pp. 660-669, 2004.
[7] S. Lindsey and C. Raghavendra, PEGASIS: Power-efficient gathering in sensor information systems, in Proc. IEEE Aerospace Conf., 2002, vol. 3, pp. 1125-1130.
[8] H. O. Tan and I. Korpeoglu, Power efficient data gathering and aggregation in wireless sensor networks, SIGMOD Rec., vol. 32, no. 4, pp. 66-71, 2003.
[9] G. Mankar and S. T. Bodkhe, Traffic aware energy efficient routing protocol, in Proc. 3rd ICECT, 2011, vol. 6, pp. 316-320.
[10] R. Szewczyk, J. Polastre, A. Mainwaring, and D. Culler, Lessons from a sensor network expedition, in Proc. 1st European Workshop on Wireless Sensor Networks (EWSN '04), Germany, Jan. 19-21, 2004.



Multi-View and Multi-Band Face Recognition

PG Student, CSE Dept
Velammal Engineering College

Ms. A.BhagyaLakshmi
Asst.Prof, CSE Dept
Velammal Engineering College

Abstract - Face recognition is a challenging problem for security surveillance and has been an active research area for several decades. Due to differing illumination conditions and variations in lighting, expression and aging, the recognition rate of such algorithms is considerably limited. To address this problem, a multi-band face recognition algorithm is introduced in this paper. The multi-view and multi-band face recognition used here is suitable for estimating the pose of the face from a video source. Previous eigenface or PCA approaches derive a small number (40 or fewer) of eigenfaces from a set of training face images using the Karhunen-Loeve transform or PCA. Instead, in this work the similarity between feature sets from different videos is measured using the Wavelet Transform and entropy imaging. The experimental results show that the wavelet transform takes less response time, making it more suitable for feature extraction and face matching with high accuracy and performance in a CBIR system.
Keywords: Image Processing, Face Recognition, Multi-View Videos, Wavelet Transform.

I. Introduction
A biometric system [4] provides automatic recognition of an individual based on some unique feature or characteristic possessed by that individual. Behavioral biometrics include signatures, voice recognition and gait measurement; physiological biometrics include fingerprinting, hand profiling, iris recognition, retinal scanning and DNA testing. Behavioral methods tend to be less reliable than physiological methods because they are easier to duplicate than physical characteristics (Jain et al., 1999). Physiological attributes are the more trusted biometrics, among which iris recognition is gaining much attention for its accuracy and reliability. The first automatic face recognition system [2][3][5] was developed by Kanade in 1973.
A face recognition system is expected to identify faces present in images and videos automatically. It can operate in either or both of two modes. Face verification (or authentication) involves a one-to-one match that compares a query face image against a template face image whose identity is known. Face identification (or recognition) [8][9] involves one-to-many matches that compare a query face image against all the template images in the database to determine the identity of the query face. The major challenges in face recognition are inter-class similarity and intra-class variation. Inter-class similarity means that different people can have very similar faces, which makes distinguishing them difficult. Intra-class variation arises from changes in head pose, illumination conditions, expressions, facial accessories and aging effects. Lighting conditions change the appearance of a face, so approaches based on intensity images alone are not sufficient to overcome this problem.
II. Background concepts
A. Feature Recognition: Biometric facial recognition systems [1][7] compare images of individuals from incoming video against specific databases and send alerts when a positive match occurs. The key steps in facial recognition are face detection, recording of detected faces, and matching the recorded faces against those stored in a database in an automatic process that finds the closest match. Applications include: 1. VIP lists - make staff aware of important individuals (VIPs) so they can respond in an appropriate manner; 2. Black lists - identify known offenders or register suspects to aid public safety; 3. Banking transactions - verify the person attempting a financial transaction; and so on.
Image Acquisition:
The image acquisition engine enables frames to be acquired as fast as the camera and PC can support, for high-speed imaging. The image is captured by a digital camera in RGB format. The first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. Digital and analog circuitry sweeps

Fig. 1. Multi-Band Face Recognition Processing

these outputs and converts them to an analog signal, which is then digitized by another section of the imaging system; the output is finally a digital image.
The captured image is not used directly for feature extraction and classification, because captured face images are affected by various factors such as noise, lighting variance, climatic conditions, poor image resolution, unwanted background, etc.
RGB Image to Grayscale Image:
An RGB image is converted to grayscale by eliminating the hue and saturation information while retaining the luminance: add together 30% of the red value, 59% of the green value and 11% of the blue value. To convert a gray intensity value back to RGB, simply set all three primary color components (red, green and blue) to the gray value, correcting to a different gamma if necessary.
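The conversion described above can be written directly; the 30/59/11 weights are the familiar BT.601-style luma weights:

```python
def rgb_to_gray(r, g, b):
    """Luminance-weighted grayscale value: 30% red + 59% green + 11% blue."""
    return 0.30 * r + 0.59 * g + 0.11 * b

def gray_to_rgb(gray):
    """Map a gray intensity back to RGB by setting all three primaries
    to the gray value (gamma correction omitted)."""
    return (gray, gray, gray)
```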
Filtering Techniques: Filtering refers to accepting or rejecting certain frequency components. A filter that passes low frequencies is called a lowpass filter; its net effect is to blur (smooth) an image. The two-dimensional ideal lowpass filter is defined by

H(u, v) = 1 if D(u, v) <= D0
H(u, v) = 0 if D(u, v) > D0

where D0 is a specified nonnegative quantity. Conversely, a highpass filter passes high frequencies but attenuates components whose frequency is lower than the cutoff:

H(u, v) = 0 if D(u, v) <= D0
H(u, v) = 1 if D(u, v) > D0

where D0 is the cutoff distance measured from the origin of the frequency plane.
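The ideal masks defined above can be sketched in plain Python; the centering convention (distance measured from the middle of the shifted spectrum) is an assumption of this sketch:

```python
import math

def ideal_filter_mask(rows, cols, d0, highpass=False):
    """Ideal frequency-domain mask H(u, v): the lowpass mask is 1 where
    D(u, v) <= D0 and 0 elsewhere; the highpass mask is its complement.
    D is measured from the center of the (shifted) frequency plane."""
    cu, cv = rows // 2, cols // 2
    mask = [[0.0] * cols for _ in range(rows)]
    for u in range(rows):
        for v in range(cols):
            d = math.hypot(u - cu, v - cv)      # D(u, v)
            keep_low = 1.0 if d <= d0 else 0.0  # lowpass decision
            mask[u][v] = 1.0 - keep_low if highpass else keep_low
    return mask
```

To apply the filter, the mask would be multiplied elementwise with the centered (fft-shifted) 2D spectrum of the image before the inverse transform.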
Wavelets: Wavelets can be used to extract
information from many different kinds of data,
including but certainly not limited to audio
signals and images. Sets of wavelets are generally
needed to analyze data fully. A set of
"complementary" wavelets will decompose data


without gaps or overlap so that the decomposition

process is mathematically reversible.

principal components is less than or equal to the

number of original variables.

Wavelet transforms [10] are classified into discrete
wavelet transforms (DWTs) and continuous
wavelet transforms (CWTs). Both DWT and CWT
are continuous-time (analog) transforms; they can
be used to represent continuous-time (analog)
signals. CWTs operate over every possible scale
and translation, whereas DWTs use a specific
subset of scale and translation values, or
representation grid.

The steps involved in PCA can be summarized as
follows: obtain the input matrix; calculate and subtract the
mean; calculate the covariance matrix; calculate the
eigenvectors and eigenvalues; form a new
feature vector from the leading eigenvectors; once the
new feature vector is formed, the new dataset with low
dimensions is derived. The new feature vectors are passed to the


Database Image
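The PCA steps listed above can be sketched with NumPy (an illustrative implementation, not the one used in this work):

```python
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)             # subtract the mean
    cov = np.cov(Xc, rowvar=False)      # covariance matrix
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues/eigenvectors (ascending)
    order = np.argsort(vals)[::-1]      # largest eigenvalues first
    feature_vector = vecs[:, order[:k]] # new feature vector (top-k components)
    return Xc @ feature_vector          # new dataset with low dimensions
```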
A continuous wavelet transform (CWT) is used to
divide a continuous-time function into wavelets.
Unlike the Fourier transform, the continuous wavelet
transform can construct a time-frequency
representation of a signal that offers very
good time and frequency localization.
A discrete wavelet transform (DWT) is any wavelet
transform for which the wavelets are discretely
sampled. As with other wavelet transforms, a key
advantage it has over Fourier transforms is
frequency and location information (location in time).
Haar Wavelets
The first DWT was invented by the Hungarian
mathematician Alfréd Haar. For an input
represented by a list of 2^n numbers, the Haar
wavelet transform [10] may be considered to
simply pair up input values, storing the difference
and passing the sum. This process is repeated
recursively, pairing up the sums to provide the next
scale, finally resulting in 2^n - 1 differences and one
final sum.
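The pairing scheme described above can be sketched as follows (unnormalized sums and differences, purely for illustration):

```python
def haar_transform(values):
    # Input length must be 2**n; the result is 2**n - 1 differences
    # plus one final sum, with coarser scales at the front.
    coeffs = []
    while len(values) > 1:
        sums = [a + b for a, b in zip(values[::2], values[1::2])]
        diffs = [a - b for a, b in zip(values[::2], values[1::2])]
        values = sums          # pair up the sums for the next scale
        coeffs = diffs + coeffs
    return values + coeffs     # [final sum] + difference coefficients
```

For example, `haar_transform([4, 2, 5, 5])` returns `[16, -4, 2, 0]`: three differences and one final sum, as the text describes for n = 2.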
B. Feature Extraction: When the input data is too
large to be processed, it is transformed into a
reduced representation set of features. Transforming
the input data into the set of features is called feature
extraction. If the features extracted are carefully
chosen, it is expected that the feature set will extract
the relevant information from the input data, so that
the desired task can be performed using this reduced
representation instead of the full-size input.
Principal Component Analysis
After feature extraction is performed, the feature
vectors need to be minimized. Principal component
analysis (PCA) [8] is a statistical procedure that
uses an orthogonal transformation to convert a set
of observations of possibly correlated variables into
a set of values of linearly uncorrelated variables
called principal components. The number of
principal components is less than or equal to the
number of original variables.

ISBN NO : 978 - 1502893314

Using a standard test data set allows researchers to
directly compare their results. While there are
many databases in use currently, the choice of an
appropriate database should be made
based on the task at hand (aging, expressions,
lighting, etc.).
Another way is to choose the data set specific to the
property to be tested (e.g. how the algorithm behaves
when given images with lighting changes or
images [6] with different facial expressions). If, on
the other hand, an algorithm needs to be trained
with more images per class (like LDA), the Yale face
database is probably more appropriate than
FERET. Some face data sets often used are:
1. The Color FERET Database, USA: The images
were collected in a semi-controlled environment.
To maintain a degree of consistency throughout the
database, the same physical setup was used in each
photography session. Because the equipment had to
be reassembled for each session, there was some
minor variation in images collected on different dates.
2. SCface - Surveillance Cameras Face Database:
SCface is a database of static images of human
faces. Images were taken in uncontrolled indoor
environment using five video surveillance cameras
of various qualities.
3. Natural Visible and Infrared facial Expression
database (USTC-NVIE): The database contains
both spontaneous and posed expressions of more
than 100 subjects, recorded simultaneously by a
visible and an infrared thermal camera, with
illumination provided from three different
directions. The posed database also includes
expression images with and without glasses.
C. Feature Matching: If the template image has
strong features, a feature-based approach may be
considered; the approach may prove further useful
if the match in the search image might
be transformed in some fashion. Since this
approach does not consider the entirety of the
template image, it can be more computationally
efficient when working with source images of


larger resolution, as the alternative template-based
approach may require searching a potentially large
number of points in order to determine the best
matching location.
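For contrast, the template-based alternative mentioned above can be sketched as an exhaustive normalized cross-correlation search (illustrative NumPy, not the implementation used in this work):

```python
import numpy as np

def match_template(image, template):
    # Slide the template over every position and keep the best
    # normalized-correlation score (brute force, for illustration).
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The double loop is exactly the "potentially large number of points" cost the text refers to.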
III. Conclusion
Face recognition technology has come a long
way in recognising people. Normally the face
images are not accurate in single-view videos, as
these do not account for pose variations, illumination
changes and so on. Hence, in order to provide better
performance, this work presents the combination of
multi-view videos, IR images and the wavelet
transform [10]. Multi-view videos and IR images
provide the advantage of overcoming the
environmental constraints and providing a more
accurate image in all conditions when compared
with an RGB image, which is accurate
only under normal lighting conditions. The wavelet
transform removes redundancies and preserves the
originality of the image at multiple scales and multiple
directions. Thus our approach helps in feature
extraction and face matching with high accuracy
and less response time.
IV. References
1. P. Viola and M. J. Jones, "Robust real-time face detection," Int. J. Comput. Vis., vol. 57, pp. 137-154, May 2004.
2. A. C. Sankaranarayanan, A. Veeraraghavan, and R. Chellappa, "Object detection, tracking and recognition for multiple smart cameras," Proc. IEEE, vol. 96, no. 10, pp. 1606-1624, Oct. 2008.
3. A. Li, S. Shan, and W. Gao, "Coupled bias-variance tradeoff for cross-pose face recognition," IEEE Trans. Image Process., vol. 21, no. 1, pp. 305-315, Jan. 2012.
4. A. K. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Publishers, 1999.
5. V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1063-1074, Sep. 2003.
6. P. Breuer, K.-I. Kim, W. Kienzle, B. Scholkopf, and V. Blanz, "Automatic 3D face reconstruction from single images or video," Proc. IEEE Int. Conf. Autom. Face Gesture Recognit., Sep. 2008, pp. 1-8.
7. A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 1994, pp. 84-91.
8. V. Blanz and T. Vetter, "Face recognition based on fitting a 3D morphable model," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1063-1074, Sep. 2003.
9. P. Breuer, K.-I. Kim, W. Kienzle, B. Scholkopf, and V. Blanz, "Automatic 3D face reconstruction from single images or video," Proc. IEEE Int. Conf. Autom. Face Gesture Recognit., Sep. 2008, pp. 1-8.
10. A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 1994, pp. 84-91.
11. Z. Dezhong and C. Fayi, "Face Recognition based on Wavelet Transform and Image Comparison," Int'l Symposium on Computational Intelligence and Design, 2008.




Assistant Professor, Department of CSE,
Velammal Engineering College,Anna University,

Abstract - Data linkage refers to the process of matching
data from several databases that refer to entities of the same
type. Data linkage is also possible for entities that do not
share a common identifier. With the growing size of today's
databases, the complexity of the matching process has become a
major challenge for data linkage. Many indexing techniques
were developed for data linkage, but those techniques
are not efficient. In this paper, a data linkage method called
the One-Class Clustering Tree (OCCT) is developed to overcome
the existing challenges and to perform the data linkage
process for entities that do not share a common identifier.
The technique builds the tree in such a way that the
inner nodes of the tree represent the features of the first set of
entities and the leaves of the tree represent the features of the
matching second set. The one-class clustering tree uses
certain splitting criteria and pruning methods for the data
linkage process.
Keywords--Linkage, classification, clustering, splitting, decision
tree induction, index techniques.



Data linkage is the process of identifying different entries that
refer to the same entity across different data sources [1]. The
main aim of data linkage is to join datasets that do not
share a common identifier or foreign key. Data linkage is
usually performed to reduce large data into smaller
data. It also helps in removing duplicate data in the
datasets; this is called deduplication [19]. Data
linkage can be classified into two types, namely one-to-one
data linkage and one-to-many data linkage [15]. In one-to-one
data linkage, the aim is to link an entity from one dataset with
the matching entity from the other dataset. In one-to-many
data linkage, the aim is to link an entity from the first data set with
the group of matching entities from the other data set. In this
paper a new data linkage approach called the One-Class
Clustering Tree (OCCT) is used, which is aimed at performing
one-to-many data linkage. The OCCT is preferable to
all the indexing techniques because it can easily be translated
into linkage rules.


M.E(CSE),Department of CSE,
Velammal Engineering College,Anna University,

The paper is structured as follows: Section II reviews
indexing techniques, Section III deals with data linkage
using OCCT, and Section IV concludes the paper.
In this section the various indexing techniques and the
variations among them are discussed in more detail.
The indexing process of data linkage can be divided into
two phases. 1) Build - all the records in the database are
read and their Blocking Key Values (BKVs) are generated.
Most indexing techniques use an inverted index approach
[6], where the record identifiers that have the same BKV are
inserted into the same inverted index list. 2) Retrieve - for
every block, the list of record identifiers is retrieved from
the inverted index and the candidate record pairs are generated
from the list.
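The two phases can be sketched as follows (the sample records and the blocking-key function are hypothetical):

```python
from collections import defaultdict

def build_index(records, blocking_key):
    # Build phase: read every record and append its id to the
    # inverted-index list of its blocking key value (BKV).
    index = defaultdict(list)
    for rec_id, rec in records.items():
        index[blocking_key(rec)].append(rec_id)
    return index

def candidate_pairs(index):
    # Retrieve phase: pair up every two record ids within each block.
    pairs = []
    for ids in index.values():
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                pairs.append((ids[i], ids[j]))
    return pairs

records = {1: "smith john", 2: "smithe jon", 3: "jones mary"}
index = build_index(records, lambda r: r[:3])   # toy BKV: first three letters
```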
Traditional blocking is one of the techniques used in data
linkage [1]. In traditional blocking, all records that have the
same BKV are inserted into the same block and the
records within that block are compared with each other. This
technique can be implemented using the inverted index [6]. The
main disadvantage of traditional blocking is that errors and
variations in the record fields used to generate the BKVs
will lead to records being inserted into the wrong block.
The second disadvantage is that the sizes of the blocks
generated depend upon the frequency distribution of the BKVs,
and thus it is difficult to predict the total number of candidate
record pairs that will be generated.
Sorted neighborhood indexing sorts the database
according to the BKVs and then moves a window
of a fixed number of records over the sorted values;
candidate record pairs are generated only from the records
within the current window. It uses three approaches, namely
the sorted array based approach [4], the inverted index based

approach [16], and the adaptive approach. The sorted array based approach is not
applicable when the window size is small. The
inverted index based approach has the same drawback as
traditional blocking and is an inefficient approach, as it takes
a lot of time for splitting the entities. The adaptive sorted
neighborhood approach is not suitable when the window size is
too large.
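A sketch of the sliding-window idea (simplified; real implementations sort on encoded blocking keys):

```python
def sorted_neighborhood_pairs(records, blocking_key, window=3):
    # Sort record ids by their BKV, slide a fixed-size window over the
    # sorted order, and generate pairs only inside each window position.
    ordered = sorted(records, key=lambda rid: blocking_key(records[rid]))
    pairs = set()
    for start in range(len(ordered) - window + 1):
        block = ordered[start:start + window]
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                pairs.add(tuple(sorted((block[i], block[j]))))
    return pairs
```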
Q-gram based indexing overcomes the drawbacks
of traditional blocking and sorted neighborhood
indexing. The main aim of this technique is to index the
database such that records that have a similar, and not
just the same, BKV are inserted into the same
block [8]. However, a much larger number of candidate record
pairs will be generated, leading to a more time-consuming
matching process.
Suffix array based indexing is one of the most
efficient approaches compared to the previous works. The basic
idea of this technique is to insert the BKVs and their suffixes
into a suffix array based inverted index [11]. It uses an
approach called robust suffix array based indexing, where
the inverted index lists of suffix values that are similar to
each other in the sorted suffix array are merged [13]. This
technique also takes a lot of time to merge the values.



OCCT is induced using one of the splitting criteria. The
splitting criteria determine which attribute should be
used in each step of building the tree. OCCT uses a
prepruning process to decide which branches should be
pruned.







The canopy clustering [14] index is built by converting BKVs into
lists of tokens, with each unique token becoming a key in the
inverted index. It uses two approaches, the threshold-based
approach and the nearest neighbor-based approach. The
drawback of canopy clustering is similar to that of the
sorted neighborhood technique based on the sorted array.
String-map-based indexing [9] is based on mapping BKVs to
objects in a multidimensional Euclidean space, such that the
distances between pairs of strings are preserved. Groups
of similar strings are then generated by extracting the objects
that are similar to each other. However, this technique fails
when the size of the database is too large or too small.
Hence, all the above indexing techniques have
drawbacks in the data linkage process. In order to overcome
those indexing problems, a new approach called the One-Class
Clustering Tree is proposed, which uses four splitting criteria,
namely the coarse-grained Jaccard coefficient, the fine-grained
Jaccard coefficient, the least probable intersection (LPI) and
maximum likelihood estimation (MLE), together with
pruning techniques.
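The exact splitting criteria are defined in [20]; the Jaccard-style criteria build on the plain Jaccard coefficient between two sets of attribute values, which can be sketched as:

```python
def jaccard(a, b):
    # |A intersection B| / |A union B| between two value sets
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)
```

For example, `jaccard({1, 2, 3}, {2, 3, 4})` gives 0.5: two shared values out of four distinct ones.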


Fig 1: Work Flow Diagram
Initially the tree is constructed, where the inner nodes of the
tree consist of the attributes and the leaves represent the
clusters of matching entities. Secondly, a
prepruning technique is used, which means that the
algorithm stops expanding a branch whenever the sub-branch
does not improve the accuracy of the model. OCCT uses a
probabilistic model to find the similar entities that are to be
matched. This probabilistic approach helps to avoid
overfitting. OCCT is chosen as the best approach for data
linkage compared to indexing techniques.
In this paper the OCCT approach is used to perform one-to-many
data linkage. This method is based on a one-class
decision tree model which sums up the knowledge of which
records should be linked together. The one-class
approach gives more accurate results. The OCCT
model has also been proved successful in three different
domains, namely data leakage prevention, recommender systems
and fraud detection.












1. I. P. Fellegi and A. B. Sunter, "A Theory for Record Linkage," J. Am. Statistical Assoc., vol. 64, no. 328, pp. 1183-1210, Dec. 1969.
2. D. D. Dorfman and E. Alf, "Maximum-Likelihood Estimation of Parameters of Signal-Detection Theory and Determination of Confidence Intervals - Rating-Method Data," J. Math. Psychology, vol. 6, no. 3, pp. 487-496, 1969.
3. J. R. Quinlan, "Induction of Decision Trees," Machine Learning, vol. 1, no. 1, pp. 81-106, March 1986.
4. M. A. Hernandez and S. J. Stolfo, "The Merge/Purge Problem for Large Databases," Proc. ACM SIGMOD Int'l Conf. Management of Data (SIGMOD '95), 1995.
5. P. Langley, Elements of Machine Learning, San Francisco, Morgan Kaufmann, 1996.
6. I. H. Witten, A. Moffat, and T. C. Bell, Managing Gigabytes, second ed., Morgan Kaufmann, 1999.
7. S. Guha, R. Rastogi, and K. Shim, "Rock: A Robust Clustering Algorithm for Categorical Attributes," Information Systems, vol. 25, no. 5, pp. 345-366, July 2000.
8. L. Gravano, P. G. Ipeirotis, H. V. Jagadish, N. Koudas, S. Muthukrishnan, and D. Srivastava, "Approximate String Joins in a Database (Almost) for Free," Proc. 27th Int'l Conf. Very Large Data Bases (VLDB '01), pp. 491-500, 2001.
9. L. Jin, C. Li, and S. Mehrotra, "Efficient Record Linkage in Large Data Sets," Proc. Eighth Int'l Conf. Database Systems for Advanced Applications (DASFAA '03), pp. 137-146, 2003.
10. I. S. Dhillon, S. Mallela, and D. S. Modha, "Information-Theoretic Co-Clustering," Proc. Ninth ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 89-98, 2003.
11. A. Aizawa and K. Oyama, "A Fast Linkage Detection Scheme for Multi-Source Information Integration," Proc. Int'l Workshop Challenges in Web Information Retrieval and Integration (WIRI '05), 2005.
12. A. J. Storkey, C. K. I. Williams, E. Taylor, and R. G. Mann, "An Expectation Maximisation Algorithm for One-to-Many Record Linkage," University of Edinburgh Informatics Research Report, 2005.
13. P. Christen, "A Comparison of Personal Name Matching: Techniques and Practical Issues," Proc. IEEE Sixth Data Mining Workshop (ICDM '06), 2006.



14. P. Christen, "Towards Parameter-Free Blocking for Scalable Record Linkage," Technical Report TR-CS-07-03, Dept. of Computer Science, The Australian Nat'l Univ., 2007.
15. P. Christen and K. Goiser, "Quality and Complexity Measures for Data Linkage and Deduplication," Quality Measures in Data Mining, vol. 43, pp. 127-151, 2007.
16. S. Yan, D. Lee, M. Y. Kan, and L. C. Giles, "Adaptive Sorted Neighborhood Methods for Efficient Record Linkage," Proc. Seventh ACM/IEEE-CS Joint Conf. Digital Libraries (JCDL '07), 2007.
17. A. Gershman et al., "A Decision Tree Based Recommender System," Proc. 10th Int. Conf. on Innovative Internet Community Services, pp. 170-179, 2010.
18. M. Yakout, M. Ouzzani, and A. Qi, "Behavior Based Record Linkage," Proc. of the VLDB Endowment, vol. 3, no. 1-2, pp. 439-448, 2010.
19. P. Christen, "A Survey of Indexing Techniques for Scalable Record Linkage and Deduplication," IEEE Trans. Knowledge and Data Eng., vol. 24, no. 9, pp. 1537-1555, Sept. 2012.
20. M. Dror, A. Shabtai, L. Rokach, and Y. Elovici, "OCCT: A One-Class Clustering Tree for Implementing One-to-Many Data Linkage," IEEE Trans. Knowledge and Data Engineering, TKDE-2011-09-0577, 2013.



Dense Dielectric Patch Array Antenna - A New Kind of
Low-Profile Antenna Element for 5G Cellular
B.Praveen Balaji, R.Sriram
UG Scholar, Jaya Engineering College, Chennai.

By replacing the metallic patch of a microstrip antenna with a
high-permittivity thin dielectric slab, a new type of patch antenna,
designated the dense dielectric patch antenna (DD patch
antenna), is proposed. At lower microwave frequencies, it has
performance similar to a conventional metallic circular
microstrip antenna operated in the fundamental TM11 mode. This
array antenna is proposed and designed with a standard printed
circuit board (PCB) process to be suitable for integration with
radio-frequency/microwave circuitry. The proposed structure
employs four circular shaped DD patch radiator antenna elements
fed by a 1-to-4 Wilkinson power divider surrounded by an
electromagnetic bandgap (EBG) structure. The DD patch shows
better radiation and total efficiencies compared with the metallic
patch radiator. For further gain improvement, a dielectric
superstrate layer is applied above the array antenna. The calculated
impedance bandwidth of the proposed array antenna ranges from 27.1
GHz to 29.5 GHz for a reflection coefficient (S11) less than -10 dB.
The proposed design exhibits stable radiation patterns over
the whole frequency band of interest, with a total realized gain of
more than 16 dBi. Due to its remarkable performance, the
proposed array can be considered a strong candidate for 5G
communication applications.



To overcome signal attenuation due to oxygen molecule
absorption at millimeter-wave frequencies, a high-gain antenna
system is required. One of the main gain-enhancing techniques
is using an antenna array with a proper feeding network.
A patch antenna is a type of radio antenna with a low profile,
which can be mounted on a flat surface. It consists of a flat
rectangular sheet or "patch" of metal, mounted over a larger
sheet of metal called the ground plane. The assembly is usually
contained inside a plastic radome, which protects the antenna
structure from damage. They are the original type of microstrip
antenna; two metal sheets together form a resonant piece of
microstrip transmission line with a length of approximately
one-half wavelength of the radio waves. The radiation mechanism
arises from discontinuities at each truncated edge of the
microstrip transmission line. The patch antenna is usually
constructed on a dielectric substrate, using the same materials
and lithographic processes used to make PCBs.
In this paper, a DD patch array antenna prototype with a
superstrate layer operating at 28 GHz for the future fifth
generation (5G) short-range wireless communications
applications is introduced. The proposed design offers a
broadside radiation pattern with compact size, simple feed
structure and fewer optimization parameters. An array is
constructed using four circular shaped DD patch radiator
antenna elements fed by a 1-to-4 Wilkinson power divider
surrounded by an electromagnetic bandgap (EBG) structure.





II. DD PATCH ANTENNA ELEMENT







III. DD PATCH ANTENNA ARRAY WITH SUPERSTRATE




















[1] H. W. Lai, K. M. Luk, and K. W. Leung, "Dense Dielectric Patch
[3] Al-Tarifi, M. A., Anagnostou, D. E., Amert, A. K., Whites, K. W.

We would like to thank our college staffs and officials for
providing us with constant support and encouragement.




Advanced mobile signal jammer for GSM, CDMA and 3G

Networks with prescheduled time duration using
ARM7 TDMI processor based LPC2148 controller

Kaku Ramakrishna, Tulasi Sanath Kumar

PG Student, Asst. Professor, Department of Electronics & Communication Engineering, ASCET, Gudur, A.P, India

Abstract: This paper designs and implements a mobile
phone signal jammer for GSM, CDMA and 3G networks with a
prescheduled time duration using an ARM7 controller. The mobile
jammer blocks mobile phone use by sending out radio waves on the
same frequencies that mobile phones use. This causes enough
interference with the communication between mobile phones
and communicating towers to render the phones unusable.
Upon activating the mobile jammer, all mobile phones will
indicate "NO NETWORK AVAILABLE." Incoming calls are
blocked as if the mobile phone were off. When the mobile
jammer is turned off, all mobile phones will automatically
re-establish communications and provide full service. The
activation and deactivation time schedules can be programmed
with the microcontroller. A real time clock chip, the DS1307, is used to
set the schedule. The system comprises a jammer section, a
microcontroller, an RTC and a GSM modem. The microcontroller
sends data to the GSM modem just before the selected time, and
the modem sends a message to the users whose
numbers are programmed into it that the jammer is going
to be activated and communication will be stopped. After the message is
sent, the mobile jammer circuit becomes active and signals are
jammed or blocked.

It should be mentioned that cell phone jammers are illegal devices in
most countries.

A. Mobile jammer:
A mobile phone jammer is an instrument used to
prevent cellular phones from receiving signals from base
stations. When used, the jammer effectively disables cellular
phones. The device can be used in practically any location,
but is found primarily in places where a phone call would
be particularly disruptive because silence is expected. As with
other radio jamming, cell phone jammers block cell phone
use by sending out radio waves on the same frequencies
that cellular phones use. This causes enough interference with
the communication between cell phones and towers to render
the phones unusable. On most retail phones, the network
would simply appear out of range. Most cell phones use
different bands to send and receive communications from
towers (called frequency division duplexing, FDD). Jammers
can work by disrupting either phone-to-tower frequencies or
tower-to-phone frequencies. Smaller handheld models block
all bands from 800 MHz to 1900 MHz within a 30-foot
range (9 meters). Small devices tend to use the former
method, while larger, more expensive models may interfere
directly with the tower.

Keywords: mobile jammer, ARM7, RTC, GSM modem




Communication jamming devices were first developed
and used by the military. This interest comes from the
fundamental objective of denying the successful transport of
information from the sender (tactical commander) to the
receiver (the army personnel), and vice-versa. Nowadays,
mobile phones are becoming an essential tool in our daily life.
The technology behind cell phone jamming is very simple.
The wide use of the mobile phone can create some
problems, as the sound of ringing becomes annoying or
disruptive. This can happen in places like
conference rooms, law courts, libraries and lecture rooms.

The jamming device broadcasts an RF signal in the
frequency range reserved for cell phones that interferes
with the cell phone signal, which results in a "no network
available" display on the cell phone screen. All phones
within the effective radius of the jammer are silenced.

Figure 1. Basic principle of a mobile signal jammer



The GSM jammer is a device that transmits a signal on the
same frequency at which the GSM system operates. The
jamming succeeds when the mobile phones in the area where
the jammer is located cannot communicate; the basic idea comes from
military use, where it was called a denial-of-service attack. A mobile
jammer is used to prevent mobile phones from receiving or
transmitting signals to the base stations. When used, the
jammer effectively disables cellular phones. There are
several ways to jam an RF device. The three most common
techniques can be categorized as follows:

b. Circuitry:
The main electronic components of a jammer are:
Voltage-controlled oscillator -- generates the radio signal that
will interfere with the cell phone signal.

In this kind of jamming, the device forces the mobile to turn
itself off. This type is very difficult to implement, since
the jamming device first detects any mobile phone in a
specific area, and then sends a signal to disable the
mobile phone. Some variants of this technique can detect if a
nearby mobile phone is present and send a message telling the
user to switch the phone to silent mode (intelligent
beacon disablers).

Tuning circuit -- controls the frequency at which the jammer
broadcasts its signal by sending a particular voltage to the
oscillator.
Noise generator -- produces random electronic output in a
specified frequency range to jam the cell-phone network
signal (part of the tuning circuit).
RF amplification (gain stage) -- boosts the power of the radio
frequency output to a high enough level to jam a signal.

Shielding attacks:
This is known as TEMPEST or EMF shielding. This kind
requires enclosing an area in a Faraday cage so that any device
inside the cage cannot transmit or receive RF signals from
outside the cage. The area can be as large as a building.

Check your phone -- if the battery on your phone is okay
and you'd like to continue your conversation, try walking
away from the area. You may be able to get out of the
jammer's range with just a few steps.

Denial of service

c. Antenna:

This technique is referred to as DoS. In this technique, the
device transmits a noise signal at the same operating
frequency as the mobile phone in order to decrease the
signal-to-noise ratio (SNR) of the mobile below its
minimum value. This kind of jamming technique is the
simplest one, since the device is always on. Our device is of
this type.



Power supply:
Smaller jamming devices are battery operated; some look
like cell-phone batteries. Stronger devices can be plugged
into a standard outlet or wired into a vehicle's electrical
system.




Every jamming device has an antenna to send the signal.
Some are contained within an electrical cabinet. On stronger devices,
antennas are external, to provide a longer range, and tuned for
individual frequencies.
The effect of jamming depends on the jamming-to-signal
ratio (J/S), modulation scheme, channel coding and
interleaving of the target system. Generally the jamming-to-signal
ratio can be measured according to the following equation:

J/S = (Pj Gjr Grj Br Rtr^2 Lr) / (Pt Gtr Grt Bj Rjt^2 Lj)

where Pj = jammer power, Pt = transmitter power, Gjr = antenna gain
from jammer to receiver, Grj = antenna gain from receiver to
jammer, Gtr = antenna gain from transmitter to receiver,
Grt = antenna gain from receiver to transmitter,
Br = communication receiver bandwidth, Bj = jamming
transmitter bandwidth, Rtr = range between communication
transmitter and receiver, Rjt = range between jammer and
communication receiver, Lj = jammer signal loss,
Lr = communication signal loss.
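Under a free-space (square-law) range assumption, the ratio can be computed directly from the variables listed above; this sketch is our reconstruction for illustration, not the paper's code:

```python
import math

def jamming_to_signal_db(pj, pt, gjr, grj, gtr, grt, br, bj, rtr, rjt, lj, lr):
    # J/S (linear) = (Pj*Gjr*Grj*Br*Rtr^2*Lr) / (Pt*Gtr*Grt*Bj*Rjt^2*Lj),
    # returned here in decibels.
    js = (pj * gjr * grj * br * rtr ** 2 * lr) / (pt * gtr * grt * bj * rjt ** 2 * lj)
    return 10.0 * math.log10(js)
```

With all factors equal, J/S is 0 dB; raising the jammer power tenfold adds 10 dB.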

Figure 2. Block diagram of the jammer device

B. Components of a mobile jammer:
Electronically speaking, cell-phone jammers are very basic
devices. The simplest just have an on/off switch and a light that
indicates it's on. More complex devices have switches to
activate jamming at different frequencies. Components of a
jammer include:

Jammers simply flood the frequencies used by wireless
phones with radio waves. The jamming signal needs enough
power to collide with and cancel the GSM signal. Cell phones are




designed to increase their power when facing low-level
interference. The jammer should recognize that and match the
power increase from the phone.
ARM is the abbreviation of Advanced RISC Machines;
it is the name of a class of processors and of a
technology. The RISC instruction set and the related
decode mechanism are much simpler than those of a complex
instruction set computer (CISC) design.
E. Liquid-crystal display (LCD):
An LCD is a flat-panel electronic visual display that uses
the light-modulating properties of liquid crystals. Liquid
crystals do not emit light directly. LCDs are available to
display arbitrary images or fixed images which can be
displayed or hidden, such as preset words, digits, and 7-segment
displays as in a digital clock. They use the same
basic technology, except that an arbitrary image is made up of a
large number of small pixels, while other displays have
larger elements.

Sophisticated jammers can block several networks to head off dual- or tri-mode phones that automatically switch among different networks to find an open signal. High-end devices can block all frequencies.

F. Global system for mobile communication
The GSM/GPRS RS232 modem from rhydoLABZ is built with a SIMCOM SIM900 quad-band GSM/GPRS engine. GSM, which stands for Global System for Mobile communication, reigns as the world's most widely used cell-phone technology. Cell phones use a cell-phone service carrier's GSM network by searching for cell-phone towers in the nearby area.
Global System for Mobile communication (GSM) is a globally accepted standard for digital cellular communication. GSM is also the name of a standardization group established in 1982 to create a common European mobile telephone standard that would formulate specifications for a pan-European mobile cellular radio system operating at 900 MHz; it was estimated that many countries outside of Europe would join the GSM partnership.

GSM (Global System for Mobile communication) operating bands: Europe 900 MHz, Asia 1800 MHz, USA 1900 MHz (1.9 GHz).
C. Micro controller:
This section forms the control unit of the whole project. It basically consists of a microcontroller with its associated circuitry: a crystal with capacitors, a reset circuit, pull-up resistors (if needed), and so on.

Table 1: Operating frequency bands

The microcontroller forms the heart of the project because it controls the devices being interfaced and communicates with them according to the program written into it. A microcontroller is a small computer on a single integrated circuit consisting of a relatively simple CPU combined with support functions such as a crystal oscillator, timers, a watchdog timer, and serial and analog I/O. Microcontrollers are also used in specific high-technology and aerospace projects, and are designed for small or dedicated applications.












In our design, the jamming frequency must be the same as the downlink frequency, because the downlink needs lower power to jam than the uplink range and there is no need to jam the base station itself. So, our frequency design will be as follows:


GSM 900 downlink: 935-960 MHz

One LCD (liquid crystal display) is also interfaced to this microcontroller; it is a flat electronic visual display that uses the light-modulating properties of liquid crystals. These components are specialized for use with microcontrollers, which means that they cannot be activated by standard IC circuits.

GSM 1800 downlink: 1805-1880 MHz

The CDMA frequency range is 860-894 MHz (Asia & Europe) and 850-894 MHz (United States).



An RTC (real-time clock) is interfaced because it is a widely used device that provides accurate time and date for many applications. An LED indicator is also interfaced to this microcontroller; it has a life of at least ten years and consumes 90 percent less power than conventional indicators, depending on the type of the materials (Ga, As, P).

Fig. 3 below is the block diagram of the system used to set the scheduled time duration for the mobile jammer. The activation and deactivation time schedules can be programmed into the microcontroller. In order to run the RTC (real-time clock), one crystal oscillator is externally interfaced; an oscillator is an electronic circuit that produces a repetitive electronic signal. The power supply is interfaced with a battery backup for the purpose of keeping the time up to date in the absence of mains power. The A.C. input, i.e., 230 V from the mains supply, is stepped down by the transformer to 12 V and fed to a rectifier. One reset button is interfaced; reset is used for putting the microcontroller into a known condition. In practice this means that the microcontroller can behave rather inaccurately under certain undesirable conditions, and in order to continue its proper functioning it has to be reset. A relay is an electrically controllable switch widely used in industrial controls, automobiles, and appliances.
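The scheduled activation/deactivation logic described above can be sketched as follows (the function name and schedule values are assumptions for illustration; the real system implements this on the ARM7 against the hardware RTC):

```python
from datetime import time

def jammer_active(now, start, stop):
    """Return True when the relay should keep the jammer energized.

    `now` is the current RTC time; `start`/`stop` are the programmed
    activation and deactivation times. Windows that cross midnight
    (start > stop) are handled as well.
    """
    if start <= stop:
        return start <= now < stop
    return now >= start or now < stop  # window wraps past midnight

# Assumed example schedule: jam during 09:00-17:00 working hours.
print(jammer_active(time(10, 30), time(9, 0), time(17, 0)))  # True
print(jammer_active(time(18, 0), time(9, 0), time(17, 0)))   # False
```

In the actual firmware this decision runs in a loop: the microcontroller reads the RTC, evaluates the window, and drives the relay that powers the jammer circuitry.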



India: jammers are used by the government and schools. United States: illegal to operate, manufacture, import, or offer for sale, with fines of up to $11,000 and imprisonment of up to one year. Pakistan: legal inside banks, and often also used in libraries. United Kingdom: illegal to use, but legal to own.


This work was successfully completed using a mobile jammer and an ARM7. With this system we can deactivate all mobile signals at a given location. We designed a device that stops phones from ringing during a particular time period. This device could be used in places where ringing is not desired at specific times, as such ringing may disturb people. The designed device works in dual band: it jams both the GSM900 and GSM1800 bands. The device was able to jam the three main cell-phone carriers in Jordan.


Figure 3. Block diagram of the system (power supply, jammer circuitry).




Product Aspect Ranking and Its Application: A Survey


P G Student
Department of CSE
Velammal Engineering College

Department of CSE
Velammal Engineering College

Chennai, Tamilnadu.

Chennai, Tamilnadu.

Abstract- Numerous consumer reviews of products are now available on the Internet. Consumer reviews contain rich and valuable knowledge for both firms and users. However, the reviews are often disorganized, leading to difficulties in information navigation and knowledge acquisition. This article proposes a product aspect ranking framework, which automatically identifies the important aspects of products from online consumer reviews, aiming at improving the usability of the numerous reviews. The important product aspects are identified based on two observations: 1) the important aspects are usually commented on by a large number of consumers and 2) consumer opinions on the important aspects greatly influence their overall opinions on the product. In particular, given the consumer reviews of a product, we first identify product aspects by a shallow dependency parser and determine consumer opinions on these aspects through a sentiment classifier. We then develop a probabilistic aspect ranking algorithm to infer the importance of aspects by simultaneously considering aspect frequency and the influence of consumer opinions given to each aspect over their overall opinions. Experimental results on a review corpus of 21 popular products in eight domains demonstrate the effectiveness of the proposed approach. Moreover, we apply product aspect ranking to two real-world applications, i.e., document-level sentiment classification and extractive review summarization, and achieve significant performance improvements, which demonstrate the capacity of product aspect ranking in facilitating real-world applications.

Recent years have seen rapidly expanding e-commerce. A recent study from comScore reports that online retail spending reached $37.5 billion in Q2 2011 in the U.S. A huge number of products from various merchants are offered on the web. For instance, Flipkart has listed more than five million products, Amazon.com offers a total of more than 36 million products, and Shopper.com records more than five million products from more than 3,000 merchants. Most retail websites enable consumers to write reviews to express their opinions


on various aspects of the products. Here, an aspect, also called a feature in the literature, refers to a component or an attribute of a certain product. A sample review, "The battery life of Nokia N70 is amazing," reveals a positive opinion on the aspect "battery life" of the product Nokia N70. Besides the retail websites, many forum websites also provide a platform for consumers to post reviews on millions of products. For example, Cnet.com includes more than seven million product reviews, while Pricegrabber.com contains numerous reviews on more than 33 million products in 25 different categories from more than 12,000 merchants. Such numerous consumer reviews contain rich and valuable knowledge and have become an important resource for both consumers and firms [9]. Consumers commonly seek quality information from online reviews prior to purchasing a product, while many firms use online reviews as important inputs in their product development, marketing, and consumer relationship management.
In general, a product may have many aspects. For instance, iPhone 4S has more than three hundred aspects. We argue that some aspects are more important than the others, and have greater influence on consumers' decision making as well as on firms' product development strategies. For instance, some aspects of iPhone 4S concern most consumers and are more important than others such as "usb" and "button." For a camera product, aspects such as "lenses" and "picture quality" would greatly influence consumer opinions on the camera, and they are more important than aspects such as "a/v cable" and "wrist strap." Hence, identifying important product aspects will improve the usability of the numerous reviews and is beneficial to both consumers and firms. Consumers can conveniently make wise purchasing decisions by paying attention to the important aspects, while firms can focus on improving the quality of these aspects and thus enhance product reputation effectively. However,


it is impractical for people to manually identify the important aspects of products from the numerous reviews. Therefore, an approach to automatically identify the important aspects is highly desired. Motivated by the above observations, we in this paper propose a product aspect ranking framework to automatically identify the important aspects of products from online consumer reviews. Our assumption is that the important aspects of a product have the following characteristics:
(a) They are frequently commented on in consumer reviews;
(b) Consumers' opinions on these aspects greatly influence their overall opinions on the product. A straightforward frequency-based solution is to regard the aspects that are frequently commented on in consumer reviews as important.
However, consumers' opinions on the frequent aspects may not influence their overall opinions on the product, and would not influence their purchasing decisions. For instance, most consumers frequently criticize the bad "signal connection" of iPhone 4, yet they may still give high overall ratings to iPhone 4. In contrast, some aspects, for example "design" and "speed," may not be frequently commented on, yet usually are more important than "signal connection." Therefore, the frequency-based solution is not able to identify the truly important aspects. On the other hand, a basic strategy to exploit the influence of consumers' opinions on specific aspects over their overall ratings on the product is to count the cases where their opinions on specific aspects and their overall ratings are consistent, and then rank the aspects according to the number of consistent cases. This technique simply assumes that an overall rating was derived from the specific opinions on different aspects separately, and cannot precisely characterize the correspondence between the specific opinions and the overall rating. Consequently, we go beyond these strategies and propose an effective aspect ranking approach to infer the importance of product aspects. As indicated in Fig. 2, given the consumer reviews of a specific product, we first identify aspects in the reviews by a shallow dependency parser [37] and afterward analyze consumer opinions on these aspects through a sentiment classifier. We then develop a probabilistic aspect ranking algorithm, which effectively exploits the aspect frequency as well as the influence of consumers' opinions given to each aspect over their overall opinions on the product in a unified probabilistic model. Specifically, we assume the overall opinion in a review is

generated based on a weighted aggregation of the opinions on specific aspects, where the weights essentially measure the degree of importance of these aspects. A probabilistic regression algorithm is developed to infer the importance weights by incorporating aspect frequency and the correlation between the overall opinion and the opinions on specific aspects. In order to evaluate the proposed product aspect ranking framework, we collect a large corpus of product reviews consisting of 95,660 consumer reviews on 21 products in eight domains. These reviews were crawled from various prevailing forum websites, such as Cnet.com, Viewpoints.com, Reevoo.com, and Pricegrabber.com. The ranking exploits aspect frequency together with the influence of consumers' opinions given to each aspect over their overall opinions on the product. We demonstrate the capability of aspect ranking in real-world applications: significant performance improvements are obtained on the applications of document-level sentiment classification and extractive review summarization by making use of aspect ranking.
In this section, we present the details of the proposed product aspect ranking framework. We begin with an overview of its pipeline consisting of three main components: (a) aspect identification; (b) sentiment classification on aspects; and (c) probabilistic aspect ranking. Given the consumer reviews of a product, we first identify the aspects in the reviews and then analyze consumer opinions on the aspects via a sentiment classifier. Finally, we propose a probabilistic aspect ranking algorithm to infer the importance of the aspects by simultaneously taking into account aspect frequency and the influence of consumers' opinions given to each aspect over their overall opinions.
Let R = {r1, . . . , r|R|} denote a set of consumer reviews of a certain product. In each review r ∈ R, a consumer expresses opinions on various aspects of the product and finally assigns an overall rating Or. Or is a numerical score that indicates different levels of overall opinion in the review r, i.e., Or ∈ [Omin, Omax], where Omin and Omax are the minimum and maximum ratings respectively. Or is normalized to [0, 1]. Note that the consumer reviews from different websites may contain different distributions of ratings. Generally speaking, the ratings on some websites may be slightly higher or lower than those on others. Additionally, different websites may offer different rating ranges; for instance, the rating range is from 1 to 5 on Cnet.com and from 1 to 10 on Reevoo.com, respectively. Consequently, we here normalize the ratings from


different websites separately, rather than performing a uniform normalization on them. This strategy is expected to alleviate the effect of rating variance among different websites. Suppose there are m aspects A = {a1, . . . , am} in the review corpus R in total, where ak is the k-th aspect. The consumer opinion on aspect ak in review r is denoted as ork. The opinion on each aspect potentially influences the overall rating. We here assume the overall rating Or is generated based on a weighted aggregation of the opinions on specific aspects, where each weight ωrk essentially measures the importance of aspect ak in review r. We aim to uncover these importance weights, i.e., the emphasis placed on the aspects, and identify the important aspects accordingly.
In the next subsections, we present the aforementioned three components of the proposed product aspect ranking framework: product aspect identification, which identifies the aspects {ak}, k = 1 . . . m, in consumer reviews; aspect-level sentiment classification, which analyzes consumer opinions on the aspects, i.e., {ork}; and the probabilistic aspect ranking algorithm, which estimates the importance weights {ωrk} and identifies the corresponding important aspects.
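The per-website normalization just described can be sketched as a simple min-max mapping (the helper name is ours; site scales are taken from the examples in the text):

```python
def normalize_rating(o, o_min, o_max):
    """Map an overall rating Or from a site's [Omin, Omax] scale to
    [0, 1], normalizing each website separately rather than applying
    one uniform normalization across sites."""
    return (o - o_min) / (o_max - o_min)

# A 4/5 on Cnet.com and an 8/10 on Reevoo.com become comparable scores.
print(normalize_rating(4, 1, 5))   # 0.75
print(normalize_rating(8, 1, 10))  # 0.777...
```

Normalizing per site, with each site's own Omin and Omax, is what mitigates the rating-variance problem the text mentions: a site whose users rate on 1-10 no longer dominates one whose users rate on 1-5.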
2.1 Product Aspect Identification
As illustrated, consumer reviews are written in different formats on various forum websites. Websites such as Cnet.com require consumers to give an overall rating on the product, describe concise positive and negative opinions on some product aspects, and also write a paragraph of detailed review in free text. Some websites, e.g., Viewpoints.com, ask for an overall rating and a paragraph of free-text review. Others, such as Reevoo.com, just require an overall rating and some concise positive and negative opinions on certain aspects. In summary, besides an overall rating, a consumer review consists of Pros and Cons reviews, a free-text review, or both.
For the Pros and Cons reviews, we identify the aspects by extracting the frequent noun terms in the reviews. Previous studies have shown that aspects are usually nouns or noun phrases [12], and highly accurate aspects can be obtained by extracting frequent noun terms from the Pros and Cons reviews [19]. For identifying aspects in the free-text reviews, a straightforward solution is to employ an existing aspect identification approach. One of the most notable existing approaches is that proposed by Hu and Liu. It first



identifies the nouns and noun phrases in the documents. The occurrence frequencies of the nouns and noun phrases are counted, and only the frequent ones are kept as aspects. Although this simple method is effective in some cases, its well-known limitation is that the identified aspects usually contain noise. More recently, Wu et al. [37] utilized a phrase dependency parser to extract noun phrases, which form candidate aspects. To filter out the noise, they used a language model, with the intuition that the more likely a candidate is to be an aspect, the more closely it is related to the reviews. The language model was built on product reviews and used to predict the relatedness scores of the candidate aspects. The candidates with low scores were then filtered out. However, such a language model may be biased toward the frequent terms in the reviews and cannot precisely estimate the relatedness scores of the aspect terms, and thus cannot filter out the noise effectively. In order to obtain more accurate identification of aspects, we here propose to exploit the Pros and Cons reviews as auxiliary knowledge to assist in identifying aspects in the free-text reviews. Specifically, we first split the free-text reviews into sentences and parse each sentence using the Stanford parser. The frequent noun phrases are then extracted from the sentence parsing trees as candidate aspects. Since these candidates may contain noise,

Fig. 1. Flowchart of the proposed product aspect ranking framework. [42]
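The frequent-term heuristic used for aspect candidates can be sketched as below; as a simplification, a hand-given noun set stands in for real POS-tagger/parser output, and the frequency threshold is an assumption:

```python
from collections import Counter

def frequent_candidates(reviews, nouns, min_count=2):
    """Keep noun terms whose occurrence frequency across the reviews
    reaches `min_count`, in the spirit of Hu and Liu's frequency
    heuristic. `nouns` stands in for parsing/POS-tagging output."""
    counts = Counter(w for r in reviews for w in r.lower().split()
                     if w in nouns)
    return {term for term, c in counts.items() if c >= min_count}

reviews = ["great battery and screen", "battery lasts long",
           "the strap broke", "nice screen"]
nouns = {"battery", "screen", "strap"}
print(sorted(frequent_candidates(reviews, nouns)))  # ['battery', 'screen']
```

Note how the infrequent noun "strap" is dropped; the text's criticism is that such pure frequency filtering still lets noisy frequent terms through, which motivates the Pros-and-Cons-based classifier that follows.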

We further leverage the Pros and Cons reviews to help identify aspects among the candidates. We collect all the frequent noun terms extracted from the Pros and Cons reviews [23] to form a vocabulary. We then represent each aspect in the Pros and Cons reviews as a unigram feature, and use all the aspects to learn a one-class Support Vector Machine (SVM) classifier. The resultant classifier is then utilized to identify aspects among the candidates extracted from the free-text reviews. As the identified aspects



may contain some synonym terms, such as "headphone" and "earphone," we perform synonym clustering to obtain unique aspects. In particular, we collect the synonym terms of the aspects as features. The synonym terms are gathered from a synonym dictionary website. We represent each aspect as a feature vector and utilize the cosine similarity for clustering. The ISODATA clustering algorithm [14] is utilized for synonym clustering. ISODATA does not need to fix the number of clusters in advance and can learn the number automatically from the data distribution. It iteratively refines the clustering by splitting and merging clusters: clusters are merged if the centers of two clusters are closer than a certain threshold, and one cluster is split into two separate clusters if the cluster's standard deviation exceeds a predefined threshold. The values of these two thresholds were empirically set to 0.2 and 0.4 in our experiments.
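The merging half of this ISODATA-style procedure can be sketched as follows (toy feature vectors; only the merge step with the reported 0.2 threshold, read here as a cosine distance, is shown, and splitting is omitted):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def merge_synonym_clusters(centers, merge_threshold=0.2):
    """Greedy merge step: a center joins an existing group when its
    cosine distance (1 - similarity) to the group's first center is
    below the threshold; otherwise it starts a new group."""
    merged = []
    for c in centers:
        for group in merged:
            if 1 - cosine(c, group[0]) < merge_threshold:
                group.append(c)
                break
        else:
            merged.append([c])
    return merged

# Toy synonym-feature vectors: "headphone" and "earphone" nearly collinear.
centers = [(1.0, 0.1), (0.9, 0.12), (0.0, 1.0)]
groups = merge_synonym_clusters(centers)
print(len(groups))  # 2: the two near-duplicates merge, the third stays
```

A full ISODATA implementation would also re-compute centers and split clusters whose standard deviation exceeds the second (0.4) threshold, iterating until the grouping stabilizes.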

A sentiment classifier is then learned from the Pros reviews and Cons reviews. The classifier may be an SVM, Naive Bayes, or Maximum Entropy model [23]. Given a free-text review that may cover various aspects, we first locate the opinionated expression that modifies the corresponding aspect, e.g., finding the expression "well" in the review "The battery of Nokia N70 works well." for the aspect "battery." Generally, an opinionated expression is associated with an aspect if it contains at least one opinion term in the sentiment lexicon, and it is the nearest one to the aspect in the parsing tree within a context distance of 5. The learned sentiment classifier is then leveraged to determine the sentiment of the opinionated expression, i.e., the opinion on the corresponding aspect.
2.2 Sentiment Classification on Product Aspects

In this section, we present the proposed probabilistic aspect ranking algorithm to identify the important aspects of a product from consumer reviews. Generally, important aspects have the following characteristics: (a) they are frequently commented on in consumer reviews; and (b) consumers' opinions on these aspects greatly influence their overall opinions on the product. The overall opinion in a review is an aggregation of the opinions given to specific aspects in the review, and various aspects have different contributions to the aggregation. That is, the opinions on important (unimportant) aspects have strong (weak) impacts on the generation of the overall opinion. To model such aggregation, we formulate that the overall rating Or in each review r is generated based on the weighted sum of the opinions on specific aspects, i.e., Σ (k = 1 to m) ωrk · ork, or in matrix form as ωrᵀ · or. Here ork is the opinion on aspect ak and the importance weight ωrk reflects the emphasis placed on ak. A larger ωrk indicates ak is more important, and vice versa. ωr denotes the vector of weights, and or is the opinion vector with each dimension indicating the opinion on a specific aspect.
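The weighted aggregation Or = Σk ωrk · ork can be sketched directly (the weights and opinion scores below are assumptions for illustration):

```python
def overall_rating(weights, opinions):
    """Or = sum_k w_rk * o_rk: the overall rating as a weighted sum
    of the opinions on specific aspects (w^T * o in vector form)."""
    assert len(weights) == len(opinions)
    return sum(w * o for w, o in zip(weights, opinions))

# Illustrative: "picture quality" dominates; "wrist strap" barely matters.
weights = [0.7, 0.2, 0.1]    # importance w_rk (sums to 1)
opinions = [1.0, 0.5, 0.0]   # per-aspect sentiment o_rk in [0, 1]
print(round(overall_rating(weights, opinions), 3))  # 0.8
```

In the paper's setting this runs in reverse: Or and the ork are observed from the reviews, and the probabilistic regression infers the hidden weights, whose magnitudes then rank the aspects.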

The task of analyzing the opinions expressed on aspects is called aspect-level sentiment classification in the literature. Existing techniques include the supervised learning approaches and the lexicon-based approaches, which are typically unsupervised. The lexicon-based methods utilize a sentiment lexicon, consisting of a list of sentiment words, phrases, and idioms, to determine the sentiment orientation on each aspect [23]. While these methods are easy to implement, their performance relies heavily on the quality of the sentiment lexicon. On the other hand, the supervised learning methods train a sentiment classifier based on a training corpus. The classifier is then used to predict the sentiment on each aspect. Many learning-based classification models are applicable, for instance, Support Vector Machine (SVM), Naive Bayes, and Maximum Entropy (ME) models, etc. [25]. Supervised learning is dependent on the training data and cannot perform well without sufficient training samples. However, labeling training data is labor-intensive and time-consuming. In this work, the Pros and Cons reviews have explicitly categorized positive and negative opinions on the aspects. These reviews are valuable training samples for learning a sentiment classifier. We therefore exploit Pros and Cons reviews to train a sentiment classifier, which is then used to determine consumer opinions on the aspects in free-text reviews. Specifically, we first collect the sentiment terms in Pros and Cons reviews based on the sentiment lexicon provided by the MPQA project [35]. These terms are used as features, and each review is represented as a feature vector.
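A minimal sketch of learning such a classifier from Pros ("+") and Cons ("-") reviews, using Naive Bayes with Laplace smoothing as the model (the text equally allows SVM or Maximum Entropy; the toy samples and raw-word features stand in for the MPQA-based feature vectors):

```python
import math
from collections import Counter

def train_nb(samples):
    """Train a Naive Bayes sentiment classifier from (text, label)
    pairs, mimicking the use of Pros ('+') and Cons ('-') reviews
    as free training data. Returns a classify(text) function."""
    counts = {"+": Counter(), "-": Counter()}
    totals = Counter()
    for text, label in samples:
        for w in text.lower().split():
            counts[label][w] += 1
            totals[label] += 1
    vocab = set(counts["+"]) | set(counts["-"])

    def classify(text):
        scores = {}
        for label in counts:
            lp = 0.0
            for w in text.lower().split():
                # Laplace smoothing over the joint vocabulary
                lp += math.log((counts[label][w] + 1) /
                               (totals[label] + len(vocab)))
            scores[label] = lp
        return max(scores, key=scores.get)

    return classify

pros = [("long battery life", "+"), ("great screen", "+")]
cons = [("poor signal", "-"), ("battery drains fast", "-")]
classify = train_nb(pros + cons)
print(classify("great battery"))  # '+'
```

Equal class priors are assumed here since the Pros and Cons sets are the same size; a fuller version would add the log class prior to each score.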


2.3 Probabilistic Aspect Ranking Algorithm


In this section, we review existing works related to the proposed product aspect ranking framework and the two evaluated real-world applications. We begin with the works on aspect identification. Existing methods for aspect identification include supervised and unsupervised methods. A supervised method learns an extraction model from a collection of labeled reviews. The extraction model, also called an extractor, is used to identify aspects in new reviews. Most existing


supervised methods are based on the sequential learning technique [19]. For instance, Wong and Lam [37] learned aspect extractors using Hidden Markov Models and Conditional Random Fields, respectively. Jin and Ho [11] learned a lexicalized HMM model to extract aspects and opinion expressions, while Li et al. [16] integrated two CRF variations, i.e., Skip-CRF and Tree-CRF. All these methods require sufficient labeled samples for training; however, labeling samples is time-consuming and labor-intensive. On the other hand, unsupervised methods have emerged recently. The most notable unsupervised approach was proposed by Hu and Liu. They assumed that product aspects are nouns and noun phrases. The approach first extracts nouns and noun phrases as candidate aspects. The occurrence frequencies of the nouns and noun phrases are counted, and only the frequent ones are kept as aspects. Subsequently, Popescu and Etzioni developed the OPINE system, which extracts aspects based on the KnowItAll Web information extraction system. Mei et al. utilized a probabilistic topic model to capture the mixture of aspects and sentiments simultaneously. Su et al. [32] designed a mutual reinforcement strategy to simultaneously cluster product aspects and opinion words by iteratively fusing both content and sentiment link information. More recently, Wu et al. [37] utilized a phrase dependency parser to extract noun phrases from reviews as aspect candidates. They then utilized a language model to filter out those unlikely aspects.
After identifying aspects in reviews, the next task is aspect sentiment classification, which determines the orientation of the sentiment expressed on each aspect. The two main approaches for aspect sentiment classification are lexicon-based and supervised learning approaches. The lexicon-based methods are typically unsupervised. They rely on a sentiment lexicon containing a list of positive and negative sentiment words. To build a high-quality lexicon, the bootstrapping strategy is usually employed. For instance, Hu and Liu [12] started with a set of adjective seed words for each sentiment class. They utilized synonym/antonym relations defined in WordNet to bootstrap the seed word set, and finally obtained a sentiment lexicon. Ding et al. introduced a holistic lexicon-based method to improve Hu's method by addressing two issues: the sentiments of sentiment words can be context-sensitive and can conflict within the review. They determined a lexicon by exploiting some constraints. On the other hand, the supervised learning methods classify the sentiments on aspects by a sentiment classifier learned



from a training corpus. Many learning-based models are applicable, for example, Support Vector Machine (SVM), Naive Bayes, and Maximum Entropy (ME) models, etc. A more thorough literature review of aspect identification and sentiment classification can be found in the literature.
As aforementioned, a product may have hundreds of aspects and it is necessary to identify the important ones. To the best of our knowledge, there is no previous work studying the topic of product aspect ranking. Wang et al. [34] developed a latent aspect rating analysis model, which aims to infer a reviewer's latent opinions on each aspect and the relative emphasis on different aspects. That work focuses on aspect-level opinion estimation and reviewer rating behavior analysis, rather than on aspect ranking. Snyder and Barzilay [31] formulated a multiple aspect ranking problem. However, their ranking is actually to predict the ratings on individual aspects. Document-level sentiment classification aims to classify an opinion document as expressing a positive or negative opinion. Existing works utilize unsupervised, supervised, or semi-supervised learning methods to build document-level sentiment classifiers. The unsupervised method usually relies on a sentiment lexicon containing a collection of positive and negative sentiment words. It determines the overall sentiment of a review document based on the number of positive and negative terms in the review. The supervised method applies existing supervised learning models, such as SVM and Maximum Entropy (ME), etc., while the semi-supervised method exploits abundant unlabeled reviews together with labeled reviews to improve classification performance. The other related topic is extractive review summarization, which aims to condense the source reviews into a shorter version preserving their information content and overall meaning. The extractive summarization technique forms the summary using the most informative sentences, passages, etc., selected from the original reviews. The most informative content generally refers to the "most frequent" or the "most positively oriented" content in existing works.
The two widely used techniques are the sentence ranking and graph-based methods. In these works, a scoring function is first defined to compute the informativeness of each sentence. The sentence ranking method [29] ranks the sentences according to their informativeness scores and then selects the top-ranked sentences to form the summary. The graph-based method [7] represents the sentences in a graph, where each node corresponds to a

International Association of Engineering and Technology for Skill Development


Proceedings of International Conference on Advancements in Engineering and Technology

sentence and each edge describes the relation between two sentences. A random walk is then performed over the graph to score the sentences.
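The graph-based random walk can be sketched as a power iteration over a sentence similarity graph, in the spirit of LexRank [7]. The word-overlap similarity and the damping factor below are simplifying assumptions for illustration; LexRank itself uses cosine similarity over TF-IDF vectors.

```python
# Simplified LexRank-style sketch: score sentences by the stationary
# probability of a damped random walk over a word-overlap similarity graph.
def lexrank(sentences, damping=0.85, iters=50):
    words = [set(s.lower().split()) for s in sentences]
    n = len(sentences)
    # Similarity = word overlap between distinct sentences.
    sim = [[len(words[i] & words[j]) if i != j else 0 for j in range(n)]
           for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            # Mass flowing into j, each row normalized into transition probs.
            rank = sum(scores[i] * sim[i][j] / max(sum(sim[i]), 1)
                       for i in range(n))
            new.append((1 - damping) / n + damping * rank)
        scores = new
    # Return sentence indices, best first.
    return sorted(range(n), key=lambda i: -scores[i])

order = lexrank(["the camera is great",
                 "great camera and battery",
                 "battery life is long"])
print(order)
```

The top-ranked indices would then be used to assemble the extractive summary.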
In this article, we have studied a product aspect ranking framework to identify the important aspects of products from numerous consumer reviews. The framework contains three main components, i.e., product aspect identification, aspect sentiment classification, and aspect ranking. First, we exploited the Pros and Cons reviews to improve aspect identification and sentiment classification on free-text reviews. We then developed a probabilistic aspect ranking algorithm to infer the importance of the various aspects of a product from numerous reviews. The algorithm simultaneously considers the aspect frequency and the influence of the consumers' opinions given to each aspect on their overall opinions. The product aspects are finally ranked according to their importance scores. We have conducted extensive experiments to systematically evaluate the proposed framework. The experimental corpus contains 94,560 consumer reviews of 21 popular products in eight domains; this corpus is publicly available on request. Experimental results have demonstrated the effectiveness of the proposed approaches. In addition, we applied product aspect ranking to facilitate two real-world applications, i.e., document-level sentiment classification and extractive review summarization. Significant performance improvements were obtained with the help of product aspect ranking.
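The ranking intuition, aspect frequency weighted by the influence of aspect opinions on the overall opinion, can be caricatured as below. This is not the paper's probabilistic algorithm; both the counts and the influence weights are hypothetical numbers supplied by hand for illustration.

```python
# Toy aspect-importance score: frequency x opinion influence.
# All input numbers are hypothetical, for illustration only.
def rank_aspects(freq, influence):
    """freq: aspect -> mention count; influence: aspect -> weight in [0, 1]."""
    scores = {a: freq[a] * influence.get(a, 0.0) for a in freq}
    return sorted(scores, key=scores.get, reverse=True)

freq = {"battery": 120, "screen": 90, "strap": 200}
influence = {"battery": 0.9, "screen": 0.7, "strap": 0.1}
print(rank_aspects(freq, influence))
```

Note how "strap", though mentioned most often, ranks last because it barely sways the overall opinion, which is exactly the behavior the framework's combined score is meant to capture.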
[1] J. C. Bezdek and R. J. Hathaway, "Convergence of alternating optimization," J. Neural Parallel Scientific Comput., vol. 11, no. 4, pp. 351-368, 2003.
[2] C. C. Chang and C. J. Lin. (2004). LIBSVM: A library for support vector machines [Online]. Available: http://www.csie.ntu.edu.tw/cjlin/libsvm/
[3] G. Carenini, R. T. Ng, and E. Zwart, "Multi-document summarization of evaluative text," in Proc. ACL, Sydney, NSW, Australia, 2006, pp. 3-7.
[4] China Unicom 100 Customers iPhone User Feedback Report, 2009.
[5] http://www.comscore.com/Press_events/Press_releases, 2011.
[6] X. Ding, B. Liu, and P. S. Yu, "A holistic lexicon-based approach to opinion mining," in Proc. WSDM, New York, NY, USA, 2008, pp. 231.
[7] G. Erkan and D. R. Radev, "LexRank: Graph-based lexical centrality as salience in text summarization," J. Artif. Intell. Res., vol. 22, no. 1, pp. 457-479, Jul. 2004.
[8] O. Etzioni et al., "Unsupervised named-entity extraction from the web: An experimental study," J. Artif. Intell., vol. 165, no. 1, pp. 91-134, Jun. 2005.
[9] A. Ghose and P. G. Ipeirotis, "Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics," IEEE Trans. Knowl. Data Eng., vol. 23, no. 10, pp. 1498-1512, Sept. 2010.
[10] V. Gupta and G. S. Lehal, "A survey of text summarization extractive techniques," J. Emerg. Technol. Web Intell., vol. 2, no. 3, pp. 258-268.



[11] W. Jin and H. H. Ho, "A novel lexicalized HMM-based learning framework for web opinion mining," in Proc. 26th Annu. ICML, Montreal, QC, Canada, 2009, pp. 465-472.
[12] M. Hu and B. Liu, "Mining and summarizing customer reviews," in Proc. SIGKDD, Seattle, WA, USA, 2004, pp. 168-177.
[13] K. Jarvelin and J. Kekalainen, "Cumulated gain-based evaluation of IR techniques," ACM Trans. Inform. Syst., vol. 20, no. 4, pp. 422-446, Oct. 2002.
[14] J. R. Jensen, "Thematic information extraction: Image classification," in Introductory Digit. Image Process., pp. 236-238.
[15] K. Lerman, S. Blair-Goldensohn, and R. McDonald, "Sentiment summarization: Evaluating and learning user preferences," in Proc. 12th Conf. EACL, Athens, Greece, 2009, pp. 514-522.
[16] F. Li et al., "Structure-aware review mining and summarization," in Proc. 23rd Int. Conf. COLING, Beijing, China, 2010, pp. 653-661.
[17] C. Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Proc. Workshop Text Summarization Branches Out, Barcelona, Spain, 2004, pp. 74-81.
[18] B. Liu, M. Hu, and J. Cheng, "Opinion observer: Analyzing and comparing opinions on the web," in Proc. 14th Int. Conf. WWW, Chiba, Japan, 2005, pp. 342-351.
[19] B. Liu, "Sentiment analysis and subjectivity," in Handbook of Natural Language Processing, New York, NY, USA: Marcel Dekker, Inc., 2009.
[20] B. Liu, Sentiment Analysis and Opinion Mining. San Rafael, CA, USA: Morgan & Claypool Publishers, 2012.
[21] L. M. Manevitz and M. Yousef, "One-class SVMs for document classification," J. Mach. Learn. Res., vol. 2, pp. 139-154, Dec. 2001.
[22] Q. Mei, X. Ling, M. Wondra, H. Su, and C. X. Zhai, "Topic sentiment mixture: Modeling facets and opinions in weblogs," in Proc. 16th Int. Conf. WWW, Banff, AB, Canada, 2007, pp. 171-180.
[23] B. Ohana and B. Tierney, "Sentiment classification of reviews using SentiWordNet," in Proc. IT&T Conf., Dublin, Ireland, 2009.
[24] G. Paltoglou and M. Thelwall, "A study of information retrieval weighting schemes for sentiment analysis," in Proc. 48th Annu. Meeting ACL, Uppsala, Sweden, 2010, pp. 1386-1395.
[25] B. Pang, L. Lee, and S. Vaithyanathan, "Thumbs up? Sentiment classification using machine learning techniques," in Proc. EMNLP, Philadelphia, PA, USA, 2002, pp. 79-86.
[26] B. Pang and L. Lee, "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts," in Proc. ACL, Barcelona, Spain, 2004, pp. 271-278.
[27] B. Pang and L. Lee, "Opinion mining and sentiment analysis," Found. Trends Inform. Retrieval, vol. 2, no. 1-2, pp. 1-135, 2008.
[28] A. M. Popescu and O. Etzioni, "Extracting product features and opinions from reviews," in Proc. HLT/EMNLP, Vancouver, BC, Canada, 2005, pp. 339-346.
[29] D. Radev, S. Teufel, H. Saggion, and W. Lam, "Evaluation challenges in large-scale multi-document summarization," in Proc. ACL, Sapporo, Japan, 2003, pp. 375-382.
[30] V. Sindhwani and P. Melville, "Document-word co-regularization for semi-supervised sentiment analysis," in Proc. 8th IEEE ICDM, Pisa, Italy, 2008, pp. 1025-1030.
[31] B. Snyder and R. Barzilay, "Multiple aspect ranking using the good grief algorithm," in Proc. HLT-NAACL, New York, NY, USA, 2007.
[32] Q. Su et al., "Hidden sentiment association in Chinese web opinion mining," in Proc. 17th Int. Conf. WWW, Beijing, China, 2008, pp. 959.
[33] T. Li, Y. Zhang, and V. Sindhwani, "A non-negative matrix tri-factorization approach to sentiment classification with lexical prior knowledge," in Proc. ACL/AFNLP, Singapore, 2009, pp. 244-252.
[34] H. Wang, Y. Lu, and C. X. Zhai, "Latent aspect rating analysis on review text data: A rating regression approach," in Proc. 16th ACM SIGKDD, San Diego, CA, USA, 2010, pp. 168-176.
[35] T. Wilson, J. Wiebe, and P. Hoffmann, "Recognizing contextual polarity in phrase-level sentiment analysis," in Proc. HLT/EMNLP, Vancouver, BC, Canada, 2005, pp. 347-354.
[36] T. L. Wong and W. Lam, "Hot item mining and summarization from multiple auction web sites," in Proc. 5th IEEE ICDM, Washington, DC, USA, 2005, pp. 797-800.
[37] Y. Wu, Q. Zhang, X. Huang, and L. Wu, "Phrase dependency parsing for opinion mining," in Proc. ACL, Singapore, 2009, pp. 1533-1541.




CSE, Final Year
Indra Ganesan College Of Engg


(HOD) Dept of CSE
Indra Ganesan College Of Engg

CSE, Final Year
Indra Ganesan College Of Engg



Today, though technology is advancing rapidly on one hand, health issues are also increasing on the other. We therefore propose a concept to ease hospital work. This work presents a mini-ICU, as it observes all the necessary parameters of the patient, and it can be employed both at home and in hospitals. At the same time, the patient is able to receive remote treatment from the doctor, which is automatically stored in the database. The work is found to be most useful at the time of a natural disaster, when there are numerous patients who need intensive care. Also, when patients get back home they might become infected with the disease again, so the system is used to monitor them continuously in their own homes. The main objective of this work is to monitor patients continuously wherever they are and to provide remote treatment. The main highlight is that the system is able to run on solar power during power shutdowns.

Though technology is growing very rapidly to facilitate the human lifestyle, health issues are increasing just as fast. With current lifestyles it is difficult to watch over dependants who are not living with us; this calls for continuous monitoring of the person and drives us to provide a smarter solution for monitoring dependants. In that spirit, we young budding engineers propose a concept to ease the work of hospital staff and to monitor the person using an Android mobile. This work presents a mini-ICU, as it observes all the necessary parameters of the patient, and it can be employed both at home and in hospitals. At the same time, the patient is able to receive remote treatment from the doctor, which is automatically stored in the database.

Continuous monitoring, Remote treatment,






The following parameters are measured.

Heart beat rate

Body temperature
Coma patient recovery
Blood glucose level
Saline level monitoring














FIG-1: Shows the heart beat sensor



All the necessary parameters are monitored continuously.

Remote treatment is available immediately.

Even a coma patient's status can be monitored continuously.

Patients who get well and come back home from hospital might get infected again, so home monitoring is found to be very useful.

Ability to run on solar power.

Applicable to all Android mobiles.


The system architecture consists of hardware and software components, as follows:





Doctors measure heart rate manually; we can also feel the pulse at the finger. A healthy heart beats around 72 to 84 times a minute. Here, however, we pass light (using an LED) through one side of the finger and measure the intensity of light received on the other side (using an LDR). Whenever the heart pumps blood, more light is absorbed by the increased blood volume, and we observe a decrease in the intensity of light received at the LDR. As a result, the resistance of the LDR increases. This variation in resistance is converted into a voltage variation using a signal-conditioning circuit, usually an op-amp. The signal is amplified enough to be detectable at the microcontroller inputs. The microcontroller is programmed to receive an interrupt for every pulse detected and to count the number of interrupts, or pulses, per minute. To save time, only the pulses in ten seconds are counted and then multiplied by 6 to get the pulse count for 60 seconds (1 minute).
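The counting step above amounts to the arithmetic below, expressed as a sketch; the pin handling and interrupt service routine are hardware-specific and omitted here.

```python
# Sketch of the pulse-to-BPM arithmetic: count LDR pulses for a 10 s
# window, then scale the count up to one minute.
WINDOW_S = 10  # sampling window in seconds

def bpm_from_pulses(pulse_count, window_s=WINDOW_S):
    """Scale the pulses counted in `window_s` seconds to beats per minute."""
    return pulse_count * (60 // window_s)

# 12 pulses counted in 10 seconds -> 72 beats per minute
print(bpm_from_pulses(12))
```

On the actual microcontroller, `pulse_count` would be incremented inside the interrupt handler and read out once per window.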




4.1.4. ECG:

The LM35 is a precision integrated-circuit temperature sensor whose output is linearly proportional to the Celsius temperature, with a scale factor of +10.0 mV/°C; hence the temperature equals Vout × (100 °C/V). The voltage from the LM35 is converted to digital form using the ADC of the ARM controller, and this digital value is sent via SMS after proper conversion.
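The conversion works out as below. The 10-bit ADC resolution and the 3.3 V reference are our assumptions for illustration; the paper does not state the ARM controller's ADC configuration.

```python
# LM35: Vout = 10 mV per degree C, so T = Vout * (100 degrees C per volt).
ADC_BITS = 10   # assumed ADC resolution
VREF = 3.3      # assumed ADC reference voltage, in volts

def lm35_celsius(adc_counts, bits=ADC_BITS, vref=VREF):
    vout = adc_counts * vref / ((1 << bits) - 1)  # ADC counts -> volts
    return vout * 100.0                           # volts -> degrees Celsius

# 115 counts is about 0.37 V at the ADC, i.e. about 37 degrees C
print(round(lm35_celsius(115), 1))
```

The resulting value is what would be formatted into the SMS message.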

The ECG system consists of the following subsystems:

1. Patient unit subsystem: This includes electrodes that sense the electrical activity of the heart, a signal amplification circuit, a conditioning circuit, a data acquisition circuit, and a home gateway. The circuit takes a reading every 30 minutes and sends it to the home-gateway PC.


A sensor called an accelerometer helps to detect any slight movement of an object; the object in our case is the coma patient, who lies in bed for a long duration without any movement. The accelerometer is polled every fixed time period to detect any tilt made by the patient. Here the transmitter end is fixed on the toe, whose movement is received automatically by the receiver end, and if there is any movement the report is sent immediately to the doctor.
public class SensorActivity extends Activity implements SensorEventListener {
    private SensorManager mSensorManager;
    private Sensor mAccelerometer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }
}
FIG-2: Shows the Java code for obtaining the accelerometer sensor.

ISBN NO : 978 - 1502893314

2. Web server and database subsystem: Stores the patient's ECG signal data, detects any abnormality in the ECG signal, and publishes the results, which can be accessed only by authorized people.

3. Android unit subsystem: An Android-based application that enables doctors to access the patient details using a smart phone.

FIG-3: Patients ECG in android mobile.

FIG-4: Maximized ECG screen.
The ECG signal suffers from electrode contact noise, such as loose contacts, motion artifacts, and baseline drift due to respiration. It also picks up electromagnetic interference from other electronic devices surrounding the ECG device and electrodes. Knowing that the standard ECG signal bandwidth ranges between 0.05 Hz and 100 Hz, with an average amplitude of 1 mV


only, we need to filter the signal with low-pass, band-pass, and notch filters. Finally, the resulting signal must be amplified. Next, the ECG signal needs to be exported to the home gateway. In this project, a USB DrDAQ data logger acquires the ECG data. It connects to the PC on a USB 2.0 port and to the ECG circuit's output using a probe through the scope channel of the DAQ. The home gateway can be any PC, laptop, iPad, PDA, or other device that can be connected to the Internet. The home gateway receives the ECG signal from the data logger and sends it to the healthcare server.
For a patient confined to bed, someone needs to monitor the saline level in the bottle, or risk factors arise. So here we propose a system incorporating an LDR, which works on the intensity of light. When the saline bottle runs empty, the system first alerts the attendant; if the attendant does not notice, the system itself closes the flow with the help of a DC motor.
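The alert-then-shutoff logic can be sketched as a two-step threshold rule. The LDR threshold value below is a placeholder, not a measured calibration; in the real system the "close" action would drive the DC motor.

```python
# Saline monitor sketch: the LDR reading rises as the bottle empties and
# more light reaches the sensor. First alert the attendant; if a later
# poll still reads empty, close the flow (the DC motor in hardware).
EMPTY_THRESHOLD = 800  # placeholder LDR level indicating an empty bottle

def saline_step(ldr_level, alerted):
    """Return (action, alerted) for one polling step."""
    if ldr_level < EMPTY_THRESHOLD:
        return "ok", False
    if not alerted:
        return "alert_attendant", True
    return "close_valve", True

state = False
for level in (400, 850, 900):
    action, state = saline_step(level, state)
    print(action)
```

Run over the three sample readings, this prints "ok", then "alert_attendant", then "close_valve".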


mobile, which can then forward it to the doctor if the condition is critical.


Hyper Next Android Creator (HAC) is a software development system aimed at beginner programmers that helps them create their own Android apps without knowing Java or the Android SDK. It is based on HyperCard, which treated software as a stack of cards with only one card visible at a time, and so is well suited to mobile phone applications that have only one window visible at a time. Hyper Next Android Creator's main programming language, simply called HyperNext, is an interpreted English-like language loosely based on HyperCard's HyperTalk language, and it has many features that allow the creation of Android applications. It supports a growing subset of the Android SDK, including its own versions of the GUI control types.


Glucometers use test strips containing glucose oxidase, an enzyme that reacts with the glucose in the blood droplet, and an interface to an electrode inside the meter. When the strip is inserted into the meter, the flux of the glucose reaction generates an electrical signal. The glucometer is calibrated so that the number appearing on its digital readout corresponds to the strength of the electrical current: the more glucose in the sample, the higher the number, which is sent via the Bluetooth module to the Android


FIG- 5: Shows the patient details


The SMS sending and receiving programs were checked for proper operation in the Eclipse software using the AVD manager. The hardware was successfully implemented for remote health monitoring.



In this paper a real-time, low-cost patient monitoring system is introduced. The developed system produces live parameters and analyses the readings for abnormalities. If the system detects any abnormality, it alerts the doctor and hospital by sending email and SMS messages. The system also implements an Android-based application for doctors and patients. The doctor application provides online information about the patient's status and history, with a new reading every 30 minutes. This system gives a degree of freedom to both doctor and patient, since the results are shown in real time and the doctor is alerted on his/her Android device in case of abnormality detection. The system is found to be most helpful at the time of natural calamities, when there are numerous patients needing ICU care. As future work, the system can be enhanced to use normal mobiles instead of Android mobiles.






K MANOJ KUMAR (R101226),N HEDGERAO(R101811),P SUNIL(R101844),SK IRFAN BASHA (R101883)
Dept. of Chemical Engineering, Rajiv Gandhi University of Knowledge Technologies, RK Valley
Cuddapah, India -516329

Abstract: To fulfil water requirements for flushing and drinking, we implement a natural purification technique to filter out calcium carbonate, magnesium carbonate, and fluorine using a natural adsorption process. In this paper we take biomass (the upper layer of soil), brick ash, and sand as the main raw materials. Coming to the process: in the first step we pass the contaminated water through the soil; here no change occurs, the soil merely acting as a coolant. From the soil the water passes to the brick ash, where fluorine is adsorbed by the ash. In the next stage of the process, calcium carbonate and magnesium carbonate particles are adsorbed by the sand, after which we get pure water that is consumable. It is a technique with no maintenance and low initial cost, including provision to relocate the setup.
Related studies: Chidambaram, Ramanathan, and Vasudevan, fluoride removal studies in water using natural materials (technical note); Roy and Dass, Fluoride Contamination in Drinking Water: A Review, Resources and Environment, 2013, 3(3): 53-58, DOI: 10.5923/j.re.20130303.02; Nemade, Kadam, and Shankar (2010), removal of arsenite from water by soil biotechnology.

Water is the major medium of fluoride intake by humans. Fluoride in drinking water can be either beneficial or detrimental to health, depending on its concentration. The presence of fluoride in drinking water within permissible limits is beneficial to the calcification of dental enamel. According to the World Health Organization (WHO), the maximum acceptable concentration of fluoride is 1.5 mg/l; South Africa's acceptable limit is 0.75 mg/l, while India's permissible limit of fluoride in drinking water is 1 mg/l. Concentrations beyond these standards have been shown to cause dental and skeletal fluorosis, and lesions of the endocrine glands, thyroid, and liver. Fluoride stimulates bone formation, and small concentrations have beneficial effects on the teeth by hardening the enamel and reducing the incidence of caries. Water treatment provides usable water for domestic, agricultural, and industrial purposes; it helps to conserve and enhance water in quality and quantity, and in addition prevents degeneration of our surface and ground water sources. Green technologies today provide impressive water quality at competitive costs without contributing to global warming. This technical specification presents a green biological purification engine using a natural adsorption process.


The fluoride-bearing or fluoride-rich minerals in the rocks and soils are the cause of the high fluoride content in groundwater, which is the main source of drinking water in India. As noted above, fluoride in drinking water can be either beneficial or detrimental to health, depending on its concentration. McDonagh et al. described in great detail the role of fluoride in dental health. At low levels (<2 ppm), soluble fluoride in drinking water may cause mottled enamel during the formation of the teeth, but at higher levels other toxic effects may be observed. Severe symptoms lead to death when fluoride doses reach 250-450 ppm. It has also been found that the IQ of children in high-fluoride areas (drinking-water fluoride of 3.15 ppm) is significantly low.

Fig: 1 Effect of fluoride on teeth

Fig: 2 Effect of fluoride on bones



carbonate are adsorbed by the sand; finally we get consumable, clear water.

Raw materials
Here we take biomass (the upper layer of red soil), incinerated brick ash, and sand as our basic raw materials (figures: red soil, brick ash, sand).

Rough flow sheet: contaminated water → red soil (coolant) → incinerated brick ash (fluoride adsorption) → sand (carbonate adsorption) → drinkable water.
Coming to the main process: we pass the contaminated water through the first stage, in which we use the red soil biomass as a coolant; no reactions occur with the water here. If we pass water at 20 degrees centigrade, after coming through the soil we get the water at around 17 or 18 degrees centigrade. In the second stage we pass the water through the incinerated brick ash; here the main adsorption takes place, with the fluoride adsorbed by the brick ash, and we get an indication as the brick ash changes its colour to light yellowish. After that, we send the water that comes out of the brick-ash column through ground rock material (sand), where the water is re-purified as the calcium carbonate and magnesium


Considerable work on defluoridation has been done all over the world. The most economical adsorbent for fluoride removal from drinking water is activated alumina. Borah and Dey have reported other adsorbents, such as silica gel, soil, bone charcoal, zeolites, and bentonite, which control fluoride contamination. They also carried out a pilot-scale study of fluoride treatment using coal particles as the adsorbent material. The amount, contact time, and particle size of the adsorbent influenced the treatment efficiency for fluoride.
Concluding Remarks
Rock minerals and waste disposal contribute fluoride contamination to groundwater. Researchers have observed different concentrations of fluoride associated with different diseases. To mitigate fluoride contamination in an affected area, the provision of safe, low-fluoride water from alternative sources should be investigated as the first option; otherwise, the various methods which have been developed for the defluoridation



of water can be used to prevent fluoride contamination. The groundwater of a particular area should be thoroughly studied before its use for domestic purposes, and accordingly a suitable method can be chosen for its treatment. Our process is a bio-safe, low-cost one that uses no external energy, giving a better solution for contaminated water.

[1] WHO, 1984, Environmental Health Criteria for Fluorine and Fluorides, Geneva, pp. 1-136.
[2] WHO (World Health Organization), 2006, Guidelines for Drinking-Water Quality: Incorporating First Addendum to Third Edition, World Health Organization, Geneva, 375 p.
[3] McDonagh, M.S., Whiting, P.F., Wilson, P.M., Sutton, A.J., Chestnutt, I., Cooper, J., Misso, K., Bradley, M., Treasure, E. and Kleijnen, J., 2000, Systematic review of water fluoridation, Brit. Med. J., 321, 855-859.
[4] Borah, L. and Dey, N.C., 2009, Removal of fluoride from low TDS water using low grade coal, Indian J. Chem. Technol., 16, 361-363.
[5] Prof. Shankar (IIT-B), research topic.
[6] Meenakshi, R.C., Garg, V.K., Kavita, Renuka and Malik, A., 2004, Groundwater quality in some villages of Haryana, India: focus on fluoride and fluorosis, J. Hazardous Mater., 106, 85-97.
[7] Misra, A.K. and Mishra, A., 2007, Study of quaternary aquifers in Ganga Plain, India: Focus on groundwater salinity, fluoride and fluorosis, J. Hazardous Mater., 144, 438-448.
[8] Venkateswarulu, P., Rao, D.N. and Rao, K.R., 1952, Studies in endemic fluorosis, Vishakapatnam and suburban areas, Indian J. Med. Res., 40, 353-362.
[9] Meenakshi, R.C. and Maheshwari, 2006, Fluoride in drinking water and its removal, J. Hazardous Mater., 137, 456-463.
[10] Yadav, A.K., Kaushik, C.P., Haritash, A.K., Kansal, A. and Rani, N., 2006, Defluoridation of groundwater using brick powder as an adsorbent, J. Hazardous Mater., 128, 289-293.
[11] Roy, S. and Dass, G., Fluoride Contamination in Drinking Water: A Review, JCDM College of Engineering, Haryana, India.



Secure Data Sharing of Multi-Owner Groups in Cloud


Sunilkumar Permkonda, D. Madhu Babu

ABSTRACT: Cloud computing provides an economical and efficient solution for sharing group resources among cloud users. Using cloud storage, users can remotely store their data and enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. Sharing data in a multi-owner manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue. We therefore propose a secure cloud authentication system in which users can check the integrity of outsourced data by assigning a third-party auditor (TPA) and be worry-free. By using encryption and hashing techniques such as the Advanced Encryption Standard (AES) and the Merkle Hash Tree (MHT) algorithm, any cloud user can anonymously share data with others. Trustworthiness between the user and the cloud service provider is also increased.
KEYWORDS: Cloud computing, data privacy preserving, access control, dynamic groups.
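The MHT integrity check can be sketched with Python's standard hashlib: the verifier keeps only the root hash and recomputes it from the file blocks, so any tampering with a block changes the root. The block contents and the use of SHA-256 here are our own illustrative assumptions.

```python
import hashlib

# Sketch of a Merkle Hash Tree root over file blocks: the auditor keeps
# only the root; any change to any block changes the recomputed root.
def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [sha256(b) for b in blocks]       # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
assert merkle_root(blocks) == root                                 # intact data verifies
assert merkle_root([b"block-1", b"block-X", b"block-3"]) != root   # tampering detected
```

A full TPA scheme would additionally use authentication paths so individual blocks can be verified without re-reading the whole file.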



Cloud computing is recognized as an alternative to traditional information technology because of its intrinsic resource-sharing and low-maintenance characteristics. In cloud computing, cloud service providers (CSPs) such as Amazon are able to deliver various services to cloud users with the help of powerful datacenters. By migrating local data management systems into cloud servers, users can enjoy high-quality services and save significant investment in their local infrastructures.

Sunilkumar Permkonda, M.Tech Student, Department of CSE, JNTUA Anantapur / Audisankara Institute of Technology, Gudur, India.

D. Madhu Babu, Assistant Professor, Department of CSE, JNTUA Anantapur / Audisankara Institute of Technology, Gudur, India (e-mail: dmadhubabu@yahoo.com).

One of the most fundamental services offered by cloud providers is data storage. By utilizing the cloud, users can be completely released from the trouble of local data storage and maintenance. However, it also poses a significant risk to the confidentiality of the stored files. Specifically, the cloud servers managed by cloud providers are not fully trusted by users, while the data files stored in the cloud may be sensitive and confidential, such as business plans. To preserve data privacy, a basic solution is to encrypt data files and then upload the encrypted data into the cloud. Unfortunately, designing an efficient and secure data sharing scheme for groups in the cloud is not an easy task due to the following challenging issues.
First, identity privacy is one of the most significant obstacles to the wide deployment of cloud computing. Without the guarantee of identity privacy, users may be unwilling to join cloud computing systems because their identities could be easily disclosed to cloud providers and attackers. Second, it is highly recommended that any member in a group should be able to fully enjoy the data storing and sharing services provided by the cloud, which is defined as the multi-owner manner. Compared with the single-owner manner, where only the group manager can store and modify data in the cloud, the multi-owner manner is more flexible in practical applications. More concretely, each user in the group is able not only to read data, but also to modify their part of the data in the entire data file shared by the company. Third, groups are normally dynamic in practice, e.g., new staff participation and current employee revocation in a company. The changes of membership make secure data sharing extremely difficult. On one hand, the anonymity of the system challenges newly granted users to learn the content of data files stored before their participation, because it is impossible for them to contact anonymous data owners and obtain the corresponding decryption keys. On the other hand, an efficient membership revocation mechanism without updating the secret keys of the remaining users is also desired to minimize the complexity of key management.
Several security schemes for data sharing on untrusted servers have been proposed [4], [5], [6]. In these approaches, data


owners store the encrypted data files in untrusted storage and distribute the corresponding decryption keys only to authorized users. Thus, unauthorized users as well as storage servers cannot learn the content of the data files because they have no knowledge of the decryption keys. However, the complexities of user participation and revocation in these schemes increase linearly with the number of data owners and the number of revoked users, respectively. By setting a group with a single attribute, Lu et al. proposed a secure provenance scheme based on the ciphertext-policy attribute-based encryption technique, which allows any member in the group to share data with others. However, the issue of user revocation is not addressed in their scheme. Yu et al. [10] presented a scalable and fine-grained data access control scheme in cloud computing based on the key-policy attribute-based encryption (KP-ABE) technique. Unfortunately, the single-owner manner hinders the adoption of their scheme in the case where any user is granted to store and share data.
Kallahalla et al. [6] proposed a cryptographic storage system that enables secure file sharing on untrusted servers, named Plutus. By dividing files into file groups and encrypting each file group with a unique file-block key, the data owner can share the file groups with others through delivering the corresponding lockbox key, where the lockbox key is used to encrypt the file-block keys. However, it brings about a heavy key distribution overhead for large-scale file sharing. Additionally, the file-block key needs to be updated and distributed again for a user revocation. In SiRiUS [5], files stored on the untrusted server include two parts: file metadata and file data. The file metadata implies the access control information, including a series of encrypted key blocks, each of which is encrypted under the public key of an authorized user. Thus, the size of the file metadata is proportional to the number of authorized users. User revocation in the scheme is an intractable issue, especially for large-scale sharing, since the file metadata needs to be updated. In their extension version, the NNL construction is used for efficient key revocation. However, when a new user joins the group, the private key of each user in an NNL system needs to be recomputed, which may limit the application for dynamic groups. Another concern is that the computation


overhead of encryption linearly increases with the sharing scale.

Ateniese et al. [1] leveraged proxy re-encryption to secure distributed storage. Specifically, the data owner encrypts blocks of content with unique, symmetric content keys, which are further encrypted under a master public key. For access control, the server uses proxy cryptography to directly re-encrypt the appropriate content key(s) from the master public key to a granted user's public key. Unfortunately, a collusion attack between the untrusted server and any revoked malicious user can be launched, which enables them to learn the decryption keys of all the encrypted blocks.
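The content-key workflow above can be sketched numerically. The following toy uses a textbook BBS98-style ElGamal re-encryption over a tiny prime group; the parameters, and the scheme itself, are illustrative stand-ins for the idea, not the exact construction of [1]:

```python
# Toy proxy re-encryption sketch (BBS98-style ElGamal). Illustrative only:
# tiny parameters, no padding, not secure -- it just shows the key flow.
p, q, g = 23, 11, 4          # g generates the order-q subgroup of Z_p*

a, b = 3, 5                  # master secret key a, granted user's secret key b
r = 7                        # encryptor's randomness
m = 10                       # the symmetric content key, encoded as a group element

# Owner encrypts the content key under the master public key g^a.
c1 = (m * pow(g, r, p)) % p          # m * g^r
c2 = pow(g, a * r, p)                # g^(a*r)

# The server holds only the re-encryption key b/a mod q, never a or b alone.
rk = (b * pow(a, -1, q)) % q
c2_user = pow(c2, rk, p)             # g^(b*r): ciphertext now under user's key

# The granted user recovers the content key with their own secret b.
gr = pow(c2_user, pow(b, -1, q), p)  # (g^(b*r))^(1/b) = g^r
recovered = (c1 * pow(gr, -1, p)) % p
assert recovered == m
```

Note that the collusion weakness mentioned above is visible here: a revoked user (holding b) and the server (holding rk = b/a) can jointly compute the master secret a.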
Cloud computing security (sometimes referred to simply as "cloud security") is an evolving sub-domain of computer security, network security, and, more broadly, information security. It refers to a broad set of policies, technologies, and controls deployed to protect data, applications, and the associated infrastructure of cloud computing.
3.1 Data Security in Existing Cloud Computing Systems:
Cloud computing is a fast-developing technology, but security is the major challenge faced by cloud service providers in handling outsourced data. Although the infrastructures under the cloud are much more powerful and reliable than personal computing devices, they still face a broad range of both internal and external threats to data integrity. Thus, trustworthiness in the data management system is reduced rapidly. No substantial implementation has been introduced so far to overcome this drawback, and by exploiting it, hackers are stealing data from cloud servers. A dynamic broadcast encryption technique is used so that users can anonymously share data with others; it allows data owners to securely share data files with others.

To achieve secure data sharing for dynamic groups in the cloud, we expect to combine the group signature and dynamic broadcast encryption techniques. Specifically, the group signature scheme enables users to anonymously use the cloud resources, and the dynamic broadcast encryption technique allows data owners to securely share their data files with others, including newly joining users. Unfortunately, each user has to compute revocation parameters to protect confidentiality from the revoked users in the dynamic broadcast encryption scheme, with the result that both the

computation overhead of the encryption and the size of the ciphertext increase with the number of revoked users. Thus, the heavy overhead and large ciphertext size may hinder the adoption of the broadcast encryption scheme by capacity-limited users. To tackle this challenging issue, we let the group manager compute the revocation parameters and make the result publicly available by migrating them into the cloud. Such a design can significantly reduce the computation overhead for users to encrypt files, as well as the ciphertext size. Specifically, the computation overhead of users for encryption operations and the ciphertext size are constant and independent of the revoked users.

From the above analysis, we can observe that how to securely share data files in a multi-owner manner for dynamic groups, while preserving identity privacy from an untrusted cloud, remains a challenging issue. In this paper, we propose a novel Mona protocol for secure data sharing in cloud computing. Compared with the existing works, Mona offers the following unique features:

- Any user in the group can store and share data files with others via the cloud.
- The encryption complexity and the size of ciphertexts are independent of the number of revoked users in the system.
- User revocation can be achieved without updating the private keys of the remaining users.
- A new user can directly decrypt the files stored in the cloud before his participation.




The following figure shows the overall architecture of the proposed system. Here the data is stored in a secure manner in the cloud, and the TPA audits the data to verify its integrity. If any part of the data is modified or corrupted, a mail alert is sent to the data owner to indicate that the file has been changed.

Figure: System model.

Once a data owner registers in the cloud, private and public keys are generated for that registered owner. Using these keys, data owners can store and retrieve data from the cloud. A data owner encrypts the data using the Advanced Encryption Standard (AES), and this encrypted data is then hashed with the Merkle Hash Tree algorithm. Using the Merkle Hash Tree algorithm, the data is audited via multiple levels of batch auditing. The top hash value is stored in a local database and the other hash code files are stored in the cloud. Thus the original data cannot be retrieved by anyone from the cloud, since the top hash value is not in the cloud. Even if any part of the data gets hacked, it is of no use to the hacker. Thus, security can be ensured.

To overcome this drawback, we propose a secure storage and multi-owner data sharing authentication system in the cloud. If a data owner wants to upload data to the cloud, public and private keys are generated for that user. The owner first encrypts the data using the Advanced Encryption Standard algorithm and then hashes the encrypted data using the Merkle Hash Tree algorithm. The data is then given to the Trusted Party Auditor for auditing purposes. The auditor audits the data using the Merkle Hash Tree algorithm and stores it with the Cloud Service Provider. If a user wants to view or download the data, they have to provide the public key. The data owners check the public key entered by the user; if it is valid, the decryption key is provided to the user to decrypt the data.

To check whether the data present in the cloud has been modified, the data owner assigns a third party called the Trusted Party Auditor (TPA). Once the data owner sends a request to audit the data, the TPA checks the integrity of the data by retrieving the hash code files from the cloud server and the top hash value from the local database, and verifies the file using the Merkle Hash Tree algorithm. After each time period, the auditing information is updated by the TPA. If any file is missing or corrupted, an email alert is sent to the data owner indicating that the data has been modified. The TPA can verify files either at random or manually. Thus, by allowing the Trusted Party Auditor to audit the data, trustworthiness will be increased between the user and the cloud service providers.
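The audit decision itself reduces to recomputing the Merkle root over the blocks fetched from the cloud and comparing it with the top hash kept in the local database. A minimal sketch, with illustrative block values:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Recompute the Merkle top hash from the leaf blocks."""
    level = [sha256(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def tpa_audit(cloud_blocks, top_hash_from_db) -> bool:
    """TPA check: do the outsourced blocks still match the owner's top hash?"""
    return merkle_root(cloud_blocks) == top_hash_from_db

blocks = [b"blk1", b"blk2", b"blk3"]
stored_top = merkle_root(blocks)        # owner's local-database copy
assert tpa_audit(blocks, stored_top)    # intact data passes the audit

blocks[1] = b"tampered"
if not tpa_audit(blocks, stored_top):
    print("integrity violation: e-mail alert to the data owner")
```

Periodic or randomly triggered calls to such a check correspond to the TPA's per-period audit updates described above.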




Our contributions: To solve the challenges presented above,

we propose Mona, a secure multi-owner data sharing scheme
for dynamic groups in the cloud. The main contributions of
this paper include:

The group manager takes charge of system parameter generation, user registration, user revocation, and revealing the real identity of a disputed data owner. In the given example, the role of the group manager is played by the administrator of the company. Therefore, we assume that the group manager is fully trusted by the other parties. Group members are a set of registered users who will store their private data in the cloud server and share it with others in the group. In our example, the staff play the role of group members. Note that the group membership is dynamically changed, due to staff resignation and new employee participation in the company.


- We propose a secure multi-owner data sharing scheme. It implies that any user in the group can securely share data with others via the untrusted cloud.
- Our proposed scheme is able to support dynamic groups efficiently. Specifically, newly granted users can directly decrypt data files uploaded before their participation without contacting the data owners. User revocation can be easily achieved through a novel revocation list without updating the secret keys of the remaining users. The size and computation overhead of encryption are constant and independent of the number of revoked users.



- We provide secure and privacy-preserving access control to users, which guarantees that any member in a group can anonymously utilize the cloud resource. Moreover, the real identities of data owners can be revealed by the group manager when disputes occur.
- We provide rigorous security analysis, and perform extensive simulations to demonstrate the efficiency of our scheme in terms of storage and computation overhead.
4.4 USER AUTHENTICATION:
In this module, the user is allowed to access the information from the cloud server. When a user registers in the cloud, a private key and a public key are generated for that user by the cloud server. If the user wants to view his own file, he uses the private key. If the user wants to view others' files, he uses the public key. This public key is split up equally for verification by the data owners, and each part of the public key is verified by a data owner. After verifying the key, if the key is valid, the user is allowed to access the data. If the key is invalid, the user is denied access to the data by the cloud service provider.
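The split-key check can be sketched as follows; the share count, key derivation, and helper names here are illustrative assumptions, not the paper's exact protocol:

```python
import hashlib

def split_key(key_hex: str, n_owners: int):
    """Split a public-key string into n equal parts, one per data owner."""
    step = len(key_hex) // n_owners
    return [key_hex[i * step:(i + 1) * step] for i in range(n_owners)]

def owners_verify(presented_key: str, shares) -> bool:
    """Each owner checks only their own share; all shares must match."""
    presented = split_key(presented_key, len(shares))
    return all(p == s for p, s in zip(presented, shares))

# Registration: the cloud derives a public key for the user (illustrative).
public_key = hashlib.sha256(b"user-42").hexdigest()
shares = split_key(public_key, 4)       # one share per data owner

assert owners_verify(public_key, shares)      # valid key: access granted
assert not owners_verify("f" * 64, shares)    # invalid key: access denied
```

Splitting the verification among all owners means no single owner can unilaterally approve an invalid key.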
We consider a cloud computing architecture combined with an example in which a company uses a cloud to enable its staff in the same group or department to share files. The system model consists of three different entities: the cloud, a group manager (i.e., the company manager), and a large number of group members (i.e., the staff), as illustrated in the figure.

The cloud is operated by CSPs and provides priced, abundant storage services. However, the cloud is not fully trusted by users, since the CSPs are very likely to be outside of the cloud users' trusted domain. Similar to [3], [7], we assume that the cloud server is honest but curious. That is, the cloud server will not maliciously delete or modify user data due to the protection of data auditing schemes [17], [18], but will try to learn the content of the stored data and the identities of cloud users.
- By providing the public and private key components, only valid users are allowed to access the data.
- By allowing the Trusted Party Auditor to audit the data, trustworthiness is increased between the user and the cloud service providers.
- By using the Merkle Hash Tree algorithm, the data is audited via multiple levels of batch auditing.
- From a business point of view, the company's customers will increase due to the security and auditing process.
Anonymity and traceability: Anonymity guarantees that group members can access the cloud without revealing their real identity. Although anonymity represents an effective protection for user identity, it also poses a potential inside-attack risk to the system. For example, an inside attacker may store and share mendacious information to derive substantial benefit. Thus, to tackle the inside attack, the group manager should have the ability to reveal the real identities of data owners.

Efficiency: The efficiency is defined as follows. Any group member can store and share data files with others in the group via the cloud. User revocation can be achieved without involving the remaining users; that is, the remaining users do not need to update their private keys or perform re-encryption operations. Newly granted users can learn all the content of data files stored before their participation without contacting the data owner.
Data is secured by keeping the top hash value in the local database and the hash code files in the cloud server. By enabling the TPA to audit the data, integrity is maintained, and the requested user key is authenticated by all data owners. We design a secure data sharing scheme, Mona, for dynamic groups in an untrusted cloud. In Mona, a user is able to share data with others in the group without revealing identity privacy to the cloud. Additionally, Mona supports efficient user revocation and new user joining. More specially, efficient user revocation can be achieved through a public revocation list without updating the private keys of the remaining users, and new users can directly decrypt files stored in the cloud before their participation. Moreover, the storage overhead and the encryption computation cost are constant. Extensive analyses show that our proposed scheme satisfies the desired security requirements and guarantees efficiency as well.


REFERENCES:
[1] G. Ateniese, K. Fu, M. Green, and S. Hohenberger (2005), "Improved Proxy Re-Encryption Schemes with Applications to Secure Distributed Storage," Proc. Network and Distributed Systems Security Symp. (NDSS), pp. 29-43.
[2] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song (2007), "Provable Data Possession at Untrusted Stores," Proc. 14th ACM Conf. Computer and Comm. Security (CCS '07), pp. 598-609.
[3] K.D. Bowers, A. Juels, and A. Oprea (2009), "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Proc. ACM Conf. Computer and Comm. Security (CCS '09), pp. 187-198.
[4] A. Fiat and M. Naor (1993), "Broadcast Encryption," Proc. Int'l Cryptology Conf. Advances in Cryptology (CRYPTO).
[5] E. Goh, H. Shacham, N. Modadugu, and D. Boneh (2003), "SiRiUS: Securing Remote Untrusted Storage," Proc. Network and Distributed Systems Security Symp. (NDSS), pp. 131-145.
[6] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu (2003), "Plutus: Scalable Secure File Sharing on Untrusted Storage," Proc. USENIX Conf. File and Storage Technologies, pp. 29-42.
[7] X. Liu, Y. Zhang, B. Wang, and J. Yan (2013), "Mona: Secure Multi-Owner Data Sharing for Dynamic Groups in the Cloud," IEEE Trans. on Parallel and Distributed Systems.
[8] H. Shacham and B. Waters (2008), "Compact Proofs of Retrievability," Proc. Int'l Conf. Theory and Application of Cryptology and Information Security: Advances in Cryptology (Asiacrypt), pp. 90-107.
[9] C. Wang, Q. Wang, K. Ren, and W. Lou (2013), "Privacy-Preserving Public Auditing for Secure Cloud Storage," IEEE Trans. on Computers, pp. 362-375.
[10] S. Yu, C. Wang, K. Ren, and W. Lou (2010), "Achieving Secure, Scalable and Fine-Grained Data Access Control in Cloud Computing," Proc. IEEE INFOCOM, pp. 534-542.



K. Thirumala 1, V. Pandu Ranga 2
1 M.Tech Student, Department of ECE, CMR College of Engineering and Technology, Secunderabad
2 Assistant Professor, Department of ECE, CMR College of Engineering and Technology, Secunderabad
E-mail: kurimillathirumala12@gmail.com

Wireless sensor networks (WSN) are well suited for continuous environmental data acquisition, for example for environment temperature representation. This paper presents the functional design and implementation of a complete WSN platform that can be used for a range of continuous environmental temperature monitoring in a forest area. The application requirements for low cost, high number of sensors, fast deployment, long lifetime, low maintenance, and high quality of service are considered in the specification and design of the platform and of all its components. Low-effort platform reuse is also considered, starting from the specifications and at all design levels, for a wide array of related monitoring applications.

Index Terms: Wireless Sensor Networks (WSN), monitoring applications, WSN optimized design, WSN platform, WSN protocol.

More than a decade ago, the vision of an environment in which computers were able to access data about objects and the environment without human interaction was coined. It was aimed to complement human-entered data, which was seen as a limiting factor to acquisition accuracy, pervasiveness, and cost. Two main technologies were considered: radio-frequency identification (RFID) and wireless sensor networks (WSN). While the former is well established for low-cost identification and tracking, WSNs bring forest applications richer capabilities for both sensing and actuation. In fact, WSN solutions already cover a very broad range of applications, and research and technology advances continuously expand their application field. This trend also increases their use in many applications for versatile low-cost data acquisition and actuation. However, the sheer diversity of WSN applications makes it increasingly difficult to define typical requirements for their hardware and software. In fact, the generic WSN components often need to be adapted to specific application

requirements and conditions. These ad hoc changes tend to adversely impact the overall solution complexity, cost, reliability, and maintenance, which in turn effectively curtail WSN adoption, including their use in forest monitoring. To address these issues, reusable WSN platforms receive a growing interest. These platforms are typically optimized by leveraging knowledge of the target class of applications (e.g., the phenomena of interest) to improve key WSN characteristics.

The generic WSN platforms can be used with good results in a broad class of applications. However, forest applications (e.g., those in open nature) may have stringent requirements, such as very low cost, large number of nodes, and long unattended maintenance, which make these generic WSN platforms less suited. Among the forest application domains, environmental/earth monitoring receives a growing interest as environmental technology becomes a key field of sustainable growth. Such monitoring can be challenging because of, e.g., the typically harsh operating conditions and the difficulty and cost of physical access to the field for deployment and maintenance.

This paper presents the application, the analysis of possible solutions, and the practical realization of a full-custom, reusable WSN platform suitable for use in low-cost long-term environmental monitoring applications. For a consistent design, the main application requirements for low cost, fast deployment of a large number of sensors, and long maintenance-free service are considered at all design levels. Various trade-offs against the specifications are identified, analyzed, and used to guide the design decisions. The development


methodology presented can be reused for platform design for other application domains, or for evolutions of this platform. Flexibility and reusability for a broad range of related applications was considered from the start. A real-life application, representative for this application domain, was selected and used as reference throughout the design process. Finally, the experimental results show that the platform meets the application requirements.

WSN environmental monitoring includes both indoor and outdoor applications. The latter can fall in the city deployment category (e.g., for traffic, lighting, or pollution monitoring) or the open nature category (e.g., chemical hazard, earthquake and flooding detection, volcano monitoring, and precision agriculture). The reliability of any outdoor deployment can be challenged by extreme climatic conditions, but for the open nature the maintenance can also be very difficult and costly. These considerations make the open nature one of the toughest application fields for large-scale WSN environmental monitoring, together with the Internet of things application requirements for low cost, high service availability, and low maintenance. To be cost-effective, the sensor nodes often operate on very restricted energy reserves. Premature energy depletion can severely limit the network service and needs to be addressed considering the application requirements for cost, deployment, maintenance, and service availability. These become even more important for monitoring applications in extreme climatic environments, such as glaciers or permafrost. Such environments can considerably benefit from WSNs, but the extreme conditions emphasize the issues of node energy management, and mechanical and communication reliability.






Experiments show that WSN optimization for reliable operation is time-consuming and costly, and it is difficult to satisfy the requirements for long-term, low-cost, and reliable service unless reusable hardware and software solutions are employed.

The contributions of this paper, of interest for long-term WSN environmental monitoring, can be summarized as: 1) detailed specifications for a demanding WSN application for long-term environmental monitoring that can be used to analyze the optimality of novel WSN solutions; 2) specifications, design considerations, and experimental results for platform components that suit the typical application requirements of low cost, high reliability, and long service time; 3) specifications and design considerations for Internet-enabled servers to collect and process the field data for environmental monitoring applications; and 4) a fast and configuration-free field deployment procedure suitable for large-scale application deployments.

Fig. 1. Example of an ideal WSN deployment for in situ wildfire detection applications.

WSN data acquisition for environmental monitoring applications is challenging, especially for open-nature fields. These may require large sensor numbers, low cost, high reliability, and long maintenance-free operation. At the same time, the nodes can be exposed to variable and extreme climatic conditions, the deployment field may be costly and difficult to reach, and the field devices' weight, size, and ruggedness can matter, e.g., if they are transported in backpacks. Most of these requirements and conditions can be found in the well-known application of in situ wildfire detection, based on temperature sensors and on-board data processing. In


its simplest event-driven form, each sensor node performs periodic measurements of the surrounding air temperature and sends alerts to surveillance personnel if they exceed a threshold. Fig. 1 shows a typical deployment pattern of the sensor nodes that achieves a good field coverage. For a fast response time, the coverage of even small areas requires a large number of nodes, making this application representative for the cost, networking, and deployment issues of the event-driven high-density Internet of things application class. In the simplest star topology, the sensor nodes connect directly to the gateways, and each gateway autonomously connects to the server. Ideally, the field deployment procedure ensures that each sensor node is received by more than one gateway to avoid single points of failure of the network. This application can be part of all three WSN categories: event-driven (as we have seen), time-driven (e.g., if the sensor nodes periodically send the air temperature), and query-driven (e.g., if the current temperature can be requested by the operator). This means that the infrastructure that supports the operation of this application can be reused for a wide class of similar long-term environmental monitoring applications like:
- water level for lakes, streams, sewages;
- gas concentration in air for cities, laboratories;
- soil humidity and other characteristics;
- inclination for static structures (e.g., bridges, dams);
- position changes for, e.g., land slides;
- lighting conditions, either as part of combined sensing or standalone, e.g., to detect intrusions in dark places;
- infrared radiation for heat (fire) or animal detection.

Since these and many related applications typically use fewer sensor nodes, they are less demanding on the communication channels (both in-field and with the server), and for sensor node energy and cost. Consequently, the in situ wildfire detection application can be used as reference for the design of a WSN platform optimized for IoT environmental monitoring, and the platform should be easily reusable for a broad class of related applications. Thus, the requirements of a WSN platform for IoT long-term environmental monitoring can be defined as follows:
- low-cost, small sensor nodes with on-board transducers;
- low-cost, small gateways (sinks) with self-testing, error recovery, and remote update capabilities, and sufficient gateway hardware and software resources to support specific application needs (e.g., local transducers, and data storage and processing);
- detection of field events on-board the gateway to reduce network traffic and energy consumption;
- from few sparse to a very large number of nodes;
- low data traffic in small packets;
- fast and reliable field node deployment procedure;
- remote configuration and update of field nodes;
- high availability of service of field nodes and servers, reliable data communication and storage at all levels;
- extensible server architecture for easy adaptation to different IoT application requirements;


multiple-access channels to server data for both

for the platform nodes, reducing the deployment cost


and errors.





programmable multichannel alerts;

automatic detection and report of WSN platform


faults (e.g., faulty sensor nodes) within hours, up to a

day; 310 years of maintenance-free service.

In this section will be presented the use

of the specifications defined in Section III to
derive the specifications of the WSN platform
nodes, design space exploration, analysis of the
possible solutions, and most important design

Fig. 2. Tiered structure of the WSN platform







The gateways periodically send the field data to the application server using long-range communication channels. The application server provides long-term data storage, and interfaces for data access and processing by the end users (either humans or other applications). The platform should be flexible enough to allow the removal of any of its tiers to satisfy specific application needs. For instance, the transducers may be installed on the gateways for stream water level monitoring, since the measurement points may be spaced too far apart for the sensor node short-range communications. In the case of seismic reflection geological surveys, for example, the sensor nodes may be required to connect directly to an on-site processing server, bypassing the gateways. And when the gateways can communicate directly with the end user, e.g., by an audible alarm, an application server may not be needed.

In addition to the elements described above, the platform can include an installer device to assist the field operators to find a suitable installation place for the platform nodes, reducing the deployment cost and errors.

Since forest applications may require large numbers of sensor nodes, their specifications are very important for the application performance, e.g., in the in situ distributed wildfire detection selected as reference for the reusable WSN platform design. One of the most important requirements is the sensor node cost reduction. Also, for a low application cost, the sensor nodes should have a long, maintenance-free service time and support a simple and reliable deployment procedure. Their physical size and weight are also important, especially if they are transported in backpacks for deployment. Node lifetime depends on the energy source characteristics. Batteries can provide a steady energy flow, but limited in time, and may require costly maintenance operations for replacement. Energy harvesting sources can provide potentially endless energy, but unpredictable in time, which may impact the node operation. Also, the requirements of these sources may increase the deployment costs. Considering all these, battery-powered nodes may improve the application cost and reliability if their energy consumption can be satisfied using a small battery that does not require replacement during the node lifetime.

Fig.4: Block diagram of Monitoring Section

The monitoring section receives the environmental readings from the nodes and displays them on an LCD screen; if the temperature increases beyond the threshold, it sends an alert message to the authorized person's mobile through the GSM network.

Fig.3: Block diagram of Node section

In this node section, Node1 sends its temperature reading to Node2, then to Node3, and the updated reading then reaches the monitoring section for transmission. The monitoring section then sends the alert messages to the authorized person by using GSM. Similarly, Node2 takes its temperature reading and, after some delay, sends it to Node3 and then to the monitoring section, and from there to the authorized person. Similarly, Node3 sends its reading to the monitoring section and then to the authorized person through GSM.

In the following are presented the node implementations that satisfy the requirements in Sections III and IV and are suitable for long-term environmental monitoring Internet of Things applications.

A. Sensor Node Implementation:

Fig. 3 shows several sensor nodes designed for long-term environmental monitoring applications. The node for in situ wildfire monitoring is optimized for cost, since the reference application typically requires a high number of nodes (up to tens of thousands).



Fig.5: (a) firmware structure for reference application and (b) operation state flow

The node microcontroller is an 8-bit ATMEL AT89S52 with 4 KB program and 128 bytes data memory, clocked by its internal oscillator with an 11.0592 MHz crystal (to reduce the costs and energy consumption, since it does not need accurate timings). The full custom 2 KB program has the structure in Fig. 5(a). A minimal operating system supports the operation of the main program loop shown in Fig. 5(b) and provides the necessary interface with the node hardware, support for node self-tests, and the communication protocol.

B. Gateway Node Implementation:

Fig.6: Gateway firmware block diagram.

Fig. 6 shows the layers of the full custom software structure of the gateway. The top-level operation is controlled by an application coordinator. On the one hand, it accepts service requests from various gateway tasks (e.g., as a reaction to internal or external events, such as the message queue nearly full or receiving data from one sensor node, respectively). On the other hand, the coordinator triggers the execution of the tasks needed to satisfy the service request currently served. Also, the coordinator implements a priority-based service preemption, allowing higher priority service requests to interrupt and take over the gateway control from any lower priority service requests currently being served. This improves the gateway forwarding time of alert messages, for instance.

The application tasks implement specific functionalities for the application, such as the message queue, field message handling, sensor node status, field message post-processing, RPC, etc. They are implemented as round-robin scheduled co-routines to spare data memory (to save space and costs, the gateway uses only the microcontroller internal RAM).

Manual configuration during sensor node deployment is not necessary, because the field node IDs are mapped to the state structure using a memory-efficient associative array. The node IDs are added as they become active in gateway range, up to 1000 sensor nodes and 10 peer gateways, while obsolete or old entries are automatically reused.

The gateway and its peers operate for one year on 19 Ah batteries, as per the calculations above. It is also worth noting that the gateway average current can be further reduced by using the hardware SPI port, programming the latter to autonomously scan for incoming packets instead of the software-controlled LPL over a software SPI port emulation used currently.

The repeater node uses the gateway design with unused hardware and software components removed.

Fig.7: Block structure of the deployment device

The main purpose of a WSN application server is to receive, store, and provide access to the field data. It links the field communication segments, with their latency-energy trade-offs, and the fast and ubiquitous end user applications. The full custom server software has the structure shown in Fig. 8. It provides interfaces for:
- the field nodes (gateways);
- the operators and supervisors for each field;
- various alert channels;
- external access for other IoT systems.

Each interface has a processing unit that includes, e.g., the protocol drivers. A central engine controls the server operation and the access to the main database. It is written in Java, uses a MySQL database, and runs on a Linux operating system. Two protocols are used to interface with the field nodes (gateways) over unreliable connections: normal and service (boot loader) operation. The normal operation protocol acknowledges each event upon reception for an incremental release of the gateway memory. Messages and acknowledges can be sent asynchronously to improve the communication speed, while duplicates are avoided at every communication level.


The gateways timestamp the field messages and events using their relative time, and the server converts it to real-world time using an offset established for each communication session. The protocol for the boot loader mode is stateless, optimized for large data block transfers, and does not use acknowledges. The gateway maintains the transfer state and incrementally checks and builds the firmware image. An interrupted transfer can also be resumed with minimal overhead.














The node deployment procedure of the WSN platform aims to install each node in a field location that is both close to the application-defined position and ensures a good operation over its lifetime. For example, Fig. 12 shows some typical deployments for the reference application nodes. Node deployment can be a complex, time-consuming, error-prone, and manpower-intensive operation, especially for applications with a large number of nodes. Thus, it needs to be guided by automatic checks, to provide quick and easy to understand feedback to the field operators, and to avoid deployment-time sensor or gateway node configuration. The check of the node connectivity with the network is important for star topologies, and especially for transmit-only nodes (like the reference application sensor nodes), since these nodes cannot use alternative message routing if the direct link with the gateway is lost or becomes unstable.

The deployment procedure for the sensor nodes of the reusable WSN platform takes into account the unidirectional communication capabilities of the sensor nodes. It is also designed to avoid user input and deployment-time configurations on the one hand, and to provide a fast position assessment and reliable concurrent neighbor node deployment on the other hand.



The sensor nodes are temporarily switched to deployment operation by activating their on-board REED switch using a permanent magnet in the deployment device, as shown in Fig.8(a). This one-bit near field communication (NFC) ensures a fast, reliable, input-free node selectivity. The device ID is collected by the deployment device, which listens only for strong deployment messages. These correspond to nodes within just a few meters, providing an effective insulation from collecting the IDs of nearby concurrent node deployments. The gateways that receive the sensor node deployment messages report the link quality with the node [see Fig. 8(b)]. The deployment device collects all the data, and computes and displays an assessment of the deployment position suitability. No gateway or node configuration is required, and the procedure can be repeated until a suitable deployment position is found.

Fig.8: Field deployment of sensor nodes: (a) use deployment device magnet to set to deployment state, (b) display position suitability.

Power supply circuit schematic:

Node section schematic:

Monitoring Section Schematic:


In the proposed method, the power consumption is reduced, long-range communication is provided, and the security is high. The application requirements for low cost, a high number of sensors, fast deployment, long lifetime, low maintenance, and high quality of service are considered in the specification and design of the WSN platform and of all its components.

[1] K. Romer and F. Mattern, "The design space of wireless sensor networks," IEEE Wireless Commun., vol. 11, no. 6, pp. 54–61, Dec. 2004.
[2] I. Talzi, A. Hasler, S. Gruber, and C. Tschudin, "PermaSense: Investigating permafrost with a WSN in the Swiss Alps," in Proc. 4th Workshop Embedded Netw. Sensors, New York, 2007, pp. 8–12.
[3] P. Harrop and R. Das, "Wireless sensor networks 2010–2020," IDTechEx Ltd, Cambridge, U.K., 2010.
[4] N. Burri, P. von Rickenbach, and R. Wattenhofer, "Dozer: Ultra-low power data gathering in sensor networks," in Inf. Process. Sensor Netw., Apr. 2007, pp. 450–459.
[5] I. Dietrich and F. Dressler, "On the lifetime of wireless sensor networks," ACM Trans. Sensor Netw., vol. 5, no. 1, pp. 5:1–5:39, Feb. 2009.
[6] B. Yahya and J. Ben-Othman, "Towards a classification of energy aware MAC protocols for wireless sensor networks," Wireless Commun. Mobile Comput., vol. 9, no. 12, pp. 1572–1607, 2009.
[7] J. Yang and X. Li, "Design and implementation of low-power wireless sensor networks for environmental monitoring," Wireless Commun., Netw. Inf. Security, pp. 593–597, Jun. 2010.
[8] K. Martinez, P. Padhy, A. Elsaify, G. Zou, A. Riddoch, J. Hart, and H. Ong, "Deploying a sensor network in an extreme environment," Sensor Netw., Ubiquitous, Trustworthy Comput., vol. 1, pp. 88, Jun. 2006.
[9] A. Hasler, I. Talzi, C. Tschudin, and S. Gruber, "Wireless sensor networks in permafrost research challenges," in Proc. 9th Int. Conf. Permafrost, Jun. 2008, vol. 1, pp. 669–674.
[10] J. Beutel, S. Gruber, A. Hasler, R. Lim, A. Meier, C. Plessl, I. Talzi, L. Thiele, C. Tschudin, M. Woehrle, and M. Yuecel, "PermaDAQ: A scientific instrument for precision sensing and data recovery in environmental extremes," in Inf. Process. Sensor Netw., Apr. 2009, pp. 265–276.
[11] G. Werner-Allen, K. Lorincz, J. Johnson, J. Lees, and M. Welsh, "Fidelity and yield in a volcano monitoring sensor network," in Proc. 7th Symp. Operat. Syst. Design Implement., Berkeley, CA, 2006, pp. 381–396.
[12] G. Barrenetxea, F. Ingelrest, G. Schaefer, and M. Vetterli, "The hitchhiker's guide to successful wireless sensor network deployments," in Proc. 6th ACM Conf. Embedded Netw. Sensor Syst., New York, 2008, pp. 43–56.
