
International Conference on Computer Applications 2012

Volume 2

In association with
Association of Scientists, Developers and Faculties (ASDF), India
Association for Computing Machinery (ACM)
Science & Engineering Research Support Society (SERSC), Korea

Computational Intelligence, Computer Applications, Open Source Systems

27-31 January 2012


Pondicherry, India

Editor-in-Chief

K. Kokula Krishna Hari


Editors:
E Saikishore, T R Srinivasan, D Loganathan,
K Bomannaraja and R Ponnusamy

Published by
Association of Scientists, Developers and Faculties
Address: 27, 3rd Main Road, Kumaran Nagar Extn., Lawspet, Pondicherry-605008

Email: admin@asdf.org.in || www.asdf.org.in

International Conference on Computer Applications (ICCA 2012)


VOLUME 2
Editor-in-Chief: K. Kokula Krishna Hari
Editors: E Saikishore, T R Srinivasan, D Loganathan, K Bomannaraja and R Ponnusamy

Copyright 2012 ICCA 2012 Organizers. All rights reserved.

This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including
photocopying, recording or any information storage and retrieval system now known or to be invented, without written
permission from the ICCA 2012 Organizers or the Publisher.

Disclaimer:
No responsibility is assumed by the ICCA 2012 Organizers/Publisher for any injury and/or damage to persons or
property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products
or ideas contained in the material herein. The contents of each paper are as submitted and approved by the contributors,
after changes in formatting. Whilst every attempt has been made to ensure that all aspects of the papers are uniform in
style, the ICCA 2012 Organizers, Publisher and Editor(s) accept no responsibility whatsoever for the accuracy,
correctness or representation of any statements or documents presented in the papers.

ISBN-13: 978-81-920575-5-2
ISBN-10: 81-920575-5-2

PREFACE
These proceedings form a part of the International Conference on Computer Applications 2012, which was
held in Pondicherry, India from 27 January 2012 to 31 January 2012. This conference was hosted by
Techno Forum Research and Development Centre, Pondicherry in association with the Association for
Computing Machinery (ACM), the Association of Scientists, Developers and Faculties (ASDF), India,
the British Computer Society (BCS), UK and the Science and Engineering Research Support Society (SERSC),
Korea.
The world is changing. From shopping malls to transport terminals, aircraft to passenger ships, the
infrastructure of society has to cope with ever more intense and complex flows of people. Today,
more than ever, safety, efficiency and comfort are issues that must be addressed by all designers. The
World Trade Centre disaster brought into tragic focus the need for well-designed evacuation systems.
The new regulatory framework in the marine industry acknowledges not only the importance of
ensuring that the built environment is safe, but also the central role that evacuation simulation can
play in achieving this.
An additional need is to design spaces for efficiency ensuring that maximum throughput can be
achieved during normal operations and comfort ensuring that the resulting flows offer little
opportunity for needless queuing or excessive congestion. These complex demands challenge
traditional prescriptive design guides and regulations. Designers and regulators are consequently
turning to performance-based analysis and regulations facilitated by the new generation of people
movement models.
While great changes have been achieved in these past years, still more remains to be achieved, much of
which still seems as distant as the blue-sky visions of the 1970s. But for all the challenges, capabilities
continue to advance at phenomenal speed. Even three years ago it may have been considered a challenge to
perform a network design involving the evacuation of 45,000 people from a 120-storey building, but with
today's sophisticated modelling tools and high-end PCs this is now possible. Today's challenges are much
more ambitious and involve simulating the movement and behaviour of over one million people in city-sized
geometries. The management of these networks has also become easy; more specifically, all 45,000 people
can be monitored by a single person sitting in a cabin. This is evidence of the development achieved in
recent years.
As such, the conference represents a unique opportunity for experts and beginners to gain insight into
this rapidly developing field.
I would also like to thank everyone who co-operated in bringing out these proceedings for you, above all
my mother Mrs. K. Lakshmi and my father Mr. J. Kunasekaran. Apart from them, my worthy gang of friends:
Dr. S. Prithiv Rajan, Chairman of this conference, Dr. R. S. Sudhakar, Patron of this conference,
Dr. A. Manikandan and Dr. S. Avinash, Conveners of this conference, Dr. E. Sai Kishore, Organizing
Secretary of this conference, and the entire team which worked along with me for the rapid success of the
conference over the past year since this conference was initiated. I also wish to thank Prof. T. R. Srinivasan
and his team at Vidyaa Vikas College of Engineering and Technology for helping to make the publication job easy.
Finally, I thank my family, friends, students and colleagues for their constant encouragement and
support in making this conference possible.
-- K. Kokula Krishna Hari
Editor-in-Chief

Organizing Committee
 Chief Patron
Kokula Krishna Hari K, Founder & President, Techno Forum Group, Pondicherry, India

 Patron
Sudhakar R S, Chief Executive Officer(CEO), Techno Forum Group, Pondicherry, India

 Chairman
Prithiv Rajan S, Chairman & Advisor, Techno Forum Group, Pondicherry, India

 Convener
Manikandan A, Chief Human Resources Officer(CHRO), Techno Forum Group, India

 Organizing Secretary
Sai Kishore E

Chief Information Officer, Techno Forum Group, India.

 Operations Chair
G S Tomar

Director, MIR Labs, Gwalior, India

 International Chair
Maaruf Ali

Executive Director, (ASDF) - Europe, Europe

 Hospitality
Muthualagan R Alagappa College of Technology, Chennai

 Industry Liaison Chair


Manikandan S

Executive Secretary, Techno Forum Group, India

 Technical Panels Chair


Debnath Bhattacharyya, Executive Director, (ASDF) - West Bengal, India

 Technical Chair
Samir Kumar Bandyopadhyay Former Registrar, University of Calcutta, India
Ponnusamy R
President, Artificial Intelligence Association of India, India
Srinivasan T R,Vice-Principal, Vidyaa Vikas College of Engineering and Technology

 Workshops Panel Chair


Loganathan D

Department of Computer Science and Engineering, Pondicherry


Engineering College, India

 MIS Co-Ordinator
Harish G Trustee, Techno Forum Research and Development Centre, Pondicherry

 Academic Chair
Bommanna Raja K, Principal, Excel College of Engineering for Women, India
Tai-Hoon Kim, Professor & Chairman, Dept. of Multimedia, Hannam University, Korea

TECHNICAL REVIEWERS

Adethya Sudarsanan Cognizant Technology Solutions, India

Ainuddin University of Malaya, Malaysia

Ajay Chakravarthy University of Southampton, UK

Alessandro Rizzi University of Milan, Italy

Al-Sakib Khan Pathan International Islamic University, Malaysia

Angelina Geetha B S Abdur Rahman University, Chennai


Aramudhan M PKIET, Karaikal, India
Arivazhagan S Mepco Schlenk Engineering College, India
Arokiasamy A Anjalai Ammal Mahalingam Engineering College, India
Arul Lawrence Selvakumar A Adhiparasakthi Engineering College, India
Arulmurugan V Pondicherry University, India
Aruna Deoskar Institute of Industrial & Computer Management and Research, Pune

Ashish Chaurasia Gyan Ganga Institute of Technology & Sciences, Jabalpur, India

Ashish Rastogi Guru Ghasidas University, India

Ashutosh Kumar Dubey Trinity Institute of Technology & Research, India

Avadhani P S Andhra University, India

Bhavana Gupta All Saints College of Technology, India

Bing Shi University of Southampton, UK

C Arun R. M. K. College of Engineering and Technology, India

Chandrasekaran M Government College of Engineering, Salem, India

Chandrasekaran S Rajalakshmi Engineering College, Chennai, India

Chaudhari A L University of Pune, India


Ching-Hsien Hsu Chung Hua University, Taiwan

Chitra Krishnamoorthy St Josephs College of Engineering and Technology, India

Christian Esteve Rothenberg CPqD (Telecom Research Center), Brazil


Chun-Chieh Huang Minghsin University of Science and Technology, Taiwan
Darshan M Golla Andhra University, India
Elvinia Riccobene University of Milan, Italy
Fazidah Othman University of Malaya, Malaysia
Fulvio Frati University of Milan, Italy

G Jeyakumar Amrita School of Engineering, India

Geetharamani R Rajalakshmi Engineering College, Chennai, India


Gemikonakli O Middlesex University, UK
Ghassemlooy Z Northumbria University, UK

Gregorio Martinez Perez University of Murcia, Spain

Hamid Abdulla University of Malaya, Malaysia

Hanumantha Reddy T Rao Bahadur Y Mahabaleswarappa Engineering College, Bellary

Hari Mohan Pandey NMIMS University, India

Helge Langseth Norwegian University of Science and Technology, Norway

Ion Tutanescu University of Pitesti, Romania


Jaime Lloret Universidad Politecnica de Valencia, Spain
Jeya Mala D Thiagarajar College of Engineering, India

Jinjun Chen University of Technology Sydney, Australia

Joel Rodrigues University of Beira Interior, Portugal

John Sanjeev Kumar A Thiagarajar College of Engineering, India


Joseph M Mother Terasa College of Engineering & Technology, India
K Gopalan Professor, Purdue University Calumet, US
K N Rao Andhra University, India

Kachwala T NMIMS University, India

Kannan Balasubramanian Mepco Schlenk Engineering College, India


Kannan N Jayaram College of Engineering and Technology, Trichy, India

Kasturi Dewi Varathan University of Malaya, Malaysia

Kathirvel A Karpaga Vinayaga College of Engineering & Technology, India


Kavita Singh University of Delhi, India

Kiran Kumari Patil Reva Institute of Technology and Management, Bangalore, India

Krishnamachar Sreenivasan IIT-KG, India


Kumar D Periyar Maniammai University, Thanjavur, India
Lajos Hanzo Chair of Telecommunications, University of Southampton, UK
Longbing Cao University of Technology, Sydney
Lugmayr Artur Texas State University, United States

M HariHaraSudhan Pondicherry University, India

Maheswaran R Mepco Schlenk Engineering College, India

Malmurugan N Kalaignar Karunanidhi Institute of Technology, India

Manju Lata Agarwal University of Delhi, India

Mazliza Othman University of Malaya, Malaysia

Mohammad M Banat Jordan University of Science and Technology


Moni S NIC - GoI, India

Mónica Aguilar Igartua Universitat Politècnica de Catalunya, Spain

Mukesh D. Patil Indian Institute of Technology, Mumbai, India

Murthy B K Department of Information and Technology - GoI, India

Nagarajan S K Annamalai University, India

Nilanjan Chattopadhyay S P Jain Institute of Management & Research, Mumbai, India

Niloy Ganguly IIT-KG, India

Nornazlita Hussin University of Malaya, Malaysia

Panchanatham N Annamalai University, India

Parvatha Varthini B St Josephs College of Engineering, India

Parveen Begam MAM College of Engineering and Technology, Trichy

Pascal Hitzler Wright State University, Dayton, US

Pijush Kanti Bhattacharjee Assam University, Assam, India

Ponnammal Natarajan Rajalakshmi Engineering College, Chennai, India


Poorna Balakrishnan Easwari Engineering College, India

Poornachandra S RMD Engineering College, India

Pradip Kumar Bala IIT, Roorkee

Prasanna N TMG College, India

Prem Shankar Goel Chairman - RAE, DRDO-GoI, India

Priyesh Kanungo Patel Group of Institutions, India

Radha S SSN College of Engineering, Chennai, India

Radhakrishnan V Mookamibigai College of Engineering, India

Raja K Narasu's Sarathy Institute of Technology, India

Ram Shanmugam Texas State University, United States

Ramkumar J VLB Janakiammal college of Arts & Science, India

Rao D H Jain College of Engineering, India

Ravichandran C G R V S College of Engineering and Technology, India

Ravikant Swami Arni University, India

Raviraja S University of Malaya, Malaysia

Rishad A Shafik University of Southampton, UK

Rudra P Pradhan IIT-KGP, India

Sahaaya Arul Mary S A Jayaram College of Engineering & Technology, India

Sanjay Chaudhary DA-IICT, India

Sanjay K Jain University of Delhi, India

Satheesh Kumar KG Asian School of Business, Trivandrum, India

Saurabh Dutta Dr B C Roy Engineering College, Durgapur, India

Senthamarai Kannan S Thiagarajar College of Engineering, India

Senthil Arasu B National Institute of Technology - Trichy, India

Senthil Kumar A V Hindustan College, Coimbatore, India

Shanmugam A Bannari Amman Institute of Technology, Erode, India

Sharon Pande NMIMS University, India

Sheila Anand Rajalakshmi Engineering College, Chennai, India

Shenbagaraj R Mepco Schlenk Engineering College, India

Shilpa Bhalerao FCA Acropolis Institute of Technology and Research

Singaravel G K. S. R. College of Engineering, India

Sivabalan A SMK Fomra Institute of Technology, India

Sivakumar D Anna University, Chennai

Sivakumar V J National Institute of Technology - Trichy, India

Sivasubramanian A St Josephs College of Engineering and Technology, India

Sreenivasa Reddy E Acharya Nagarjuna University, India

Sri Devi Ravana University of Malaya, Malaysia

Srinivasan A MNM Jain Engineering College, Chennai

Srinivasan K S Easwari Engineering College, Chennai, India

Stefanos Gritzalis University of the Aegean, Greece


Stelvio Cimato University of Milan, Italy

Subramanian K IGNOU, India

Suresh G R DMI College of Engineering, Chennai, India

Tulika Pandey Department of Information and Technology - GoI, India

Vasudha Bhatnagar University of Delhi, India

Venkataramani Y Saranathan College of Engineering, India

Verma R S Joint Director, Department of Information and Technology - GoI, India

Vijayalakshmi K Mepco Schlenk Engineering College, India

Vijayalakshmi S Vellore Institute of Technology, India


Ville Luotonen Hermia Limited, Spain

Vimala Balakrishnan University of Malaya, Malaysia

Vishnuprasad Nagadevara Indian Institute of Management - Bangalore, India

Wang Wei University of Nottingham, Malaysia

Yulei Wu Chinese Academy of Sciences, China


Part I
Proceedings of the Second International Conference on
Computer Applications 2012

ICCA 12

Volume 2

www.asdf.org.in

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in


K-CCNF And SAT Complexity

Kanak Chandra Bora
Department of Computer Science & Engineering, Assam Down Town University,
Panikhaiti, Guwahati-781026, Assam, India.

Bichitra Kalita
Department of Computer Application (M.C.A), Assam Engineering College,
Guwahati-781013, Assam, India.

Abstract: In this paper, a special type of Boolean formula, the k-CCNF formula, has been considered and
its satisfiability condition has been discussed with the help of a theoretical explanation. An algorithm
for SAT problems has also been presented.

Proc. of the Intl. Conf. on Computer Applications. Volume 1. Copyright 2012 Techno Forum Group, India. ISBN: 978-81-920575-5-2 :: doi: 10.73725/ISBN_0768. ACM #: dber.imera.10.73725

I. INTRODUCTION
The concept of NP-complete problems was introduced by S. A. Cook in 1971 [1]. In complexity theory, the SAT problem was the first known NP-complete problem: it asks whether a Boolean formula built from AND, OR and NOT over variables v1, v2, ..., vn, with parentheses, has a satisfying assignment.
It is known that a Boolean formula is in k-conjunctive normal form if it is a conjunction of m clauses c1, c2, ..., cm and each clause contains exactly k variables or their negations. K. Subramani [2] has studied the computational complexity of three types of queries relating to satisfiability, equivalence and hull inclusion, and has formulated the NP-completeness of Not-All-Equal satisfiability for 3CNF formulas. Satisfiability of a 2-CNF formula can be checked by a polynomial-time algorithm, whereas 3-CNF satisfiability is NP-complete.
It is already known that SAT has many practical applications in planning, circuit design, the spin-glass model and molecular biology [4], [3], [5]. Much research work on 3-SAT has been reported, and many exact and heuristic algorithms have been introduced. Exact algorithms can determine whether a problem is satisfiable or unsatisfiable, and this type of algorithm has an exponential worst-case time complexity. Heuristic algorithms can decide a problem quickly, but they are not guaranteed to give a definite solution to all problems. There are many examples of exact algorithms; for instance, splitting algorithms reduce the problem for the input formula F to the problem for polynomially many formulas F1, F2, ..., Fp and make a recursive call for each, or for one, of the Fi [12]. Examples of heuristic algorithms are stochastic local search (SLS) and evolutionary algorithms (EAs); one heuristic algorithm, known as EF_3SAT, for solving the 3-SAT problem has been put forward by Istvan Borgulya [6].
Let us recall some known definitions which are related to the present discussion.
Definition 1: Given a Boolean formula φ, does φ have a satisfying assignment such that at least one literal in each clause is set to false? If such an assignment exists, it is called a NAE-satisfying assignment and φ is said to be Not-All-Equal satisfiable (NAESAT).
Definition 2: Given a Boolean formula φ, does φ have a satisfying assignment such that exactly one literal in each clause is set to true? If such an assignment exists, it is called an X-satisfying assignment and φ is said to be X-satisfiable (XSAT).
Definition 3: If a Boolean formula φ is unsatisfiable, then it is neither NAE-satisfiable nor X-satisfiable; however, satisfiability implies neither NAE-satisfiability nor X-satisfiability.
Definition 4: A Boolean formula is Horn if every clause contains at most one positive literal.
Definition 5: A 3IFF formula R is a conjunction of 2CNF clauses and of 3IFF clauses of the form (A ∨ B) ↔ C, where A, B and C are literals of three distinct variables a, b, c respectively; variables a and b are input variables and c is the conclusion variable [7].
Definition 6: A 3IFFCNF formula S is the CNF formula obtained from a 3IFF formula R by representing each 3IFF clause (A ∨ B) ↔ C by the three equivalent CNF clauses ¬A ∨ C, ¬B ∨ C and A ∨ B ∨ ¬C [7].
Definition 7: The problems 2SAT (resp. 3SAT, 3IFFSAT) are the special cases where S is 2CNF (resp. 3CNF, 3IFFCNF). The problem ONE-IN-THREE 3SAT has as input a 3CNF formula S, and the question "Can S be satisfied such that in each clause exactly one literal evaluates to True?" is to be answered [7].
It is also known that many definitions related to SAT and CNF have been introduced. SAT-theoretical approaches to categorize classes of SAT problems as polynomial-time solvable or NP-complete have already been established [8]. Tractable families are distinguished by a specific clausal structure. These structures either limit the number of literals per clause [9, 10] or the number of times that a variable appears across all the clauses [11]. In this paper we present new definitions, the k-canonical conjunctive normal form and the full k-CCNF formula, together with an exact algorithm to solve the problem.

K-CCNF FORM:
Definition (a): A literal in a Boolean formula is an occurrence of a variable or its negation. A Boolean formula is called k-canonical conjunctive normal form, or k-CCNF, if it is expressed as an AND of clauses, each of which is the OR of k literals, iff there are k variables involved in the Boolean formula and in each clause all k variables appear.
Definition (b): A Boolean formula in k-CCNF is called full iff there are 2^k distinct clauses.
Theorem 1: Satisfiability of Boolean formulas in k-canonical conjunctive normal form belongs to NP iff k ≥ 3.


Proof: Suppose a certificate is given to an n-canonical conjunctive normal form where n ≥ 3. Here a certificate means an assignment to the n variables. It is known that the n-conjunctive normal form belongs to NP when it can be checked by a polynomial-time algorithm. We now consider the following example of a k-CCNF form to show that a k-CCNF can also be checked by a polynomial-time algorithm, which will lead to a proof of our theorem.
Let us consider a k-canonical conjunctive normal form where k = 4, as given below:
φ = (x1 ∨ x2 ∨ x3 ∨ x4) ∧ (x1 ∨ ¬x2 ∨ ¬x3 ∨ ¬x4) ∧ (¬x1 ∨ x2 ∨ ¬x3 ∨ ¬x4) ∧ (¬x1 ∨ ¬x2 ∨ x3 ∨ ¬x4)
Here one satisfying assignment to these four variables is x1 = 1, x2 = 1, x3 = 1, x4 = 1, since
φ = (1 ∨ 1 ∨ 1 ∨ 1) ∧ (1 ∨ ¬(1) ∨ ¬(1) ∨ ¬(1)) ∧ (¬(1) ∨ 1 ∨ ¬(1) ∨ ¬(1)) ∧ (¬(1) ∨ ¬(1) ∨ 1 ∨ ¬(1))
= (1 ∨ 1 ∨ 1 ∨ 1) ∧ (1 ∨ 0 ∨ 0 ∨ 0) ∧ (0 ∨ 1 ∨ 0 ∨ 0) ∧ (0 ∨ 0 ∨ 1 ∨ 0)
= 1 ∧ 1 ∧ 1 ∧ 1
= 1
In the above example there are k = 4 variables. Hence, if there are n variables, each clause takes O(n) time, and if there are n clauses then the total time taken by all clauses is O(n^2). To get the final output, the AND (i.e. ∧) of the clauses takes O(n) time. Hence the total time required to check satisfiability of an n-canonical conjunctive normal form for a given assignment is O(n^3), which is polynomial time.
Thus we see that if we consider a k-CCNF Boolean formula with the satisfiability condition, then it belongs to NP. Hence, by the definition of NP problems, it is clear that the k-canonical conjunctive normal form belongs to NP.
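To make this polynomial-time check concrete, the following minimal C sketch (written for this text, not taken from the paper) evaluates one assignment of the reconstructed 4-variable example above; the sign-matrix encoding of clauses is an assumption made only for illustration.

/* Minimal C sketch: checking one assignment of a k-CCNF formula in
   polynomial time. Clause j is stored as a row of signs: +1 means the
   variable appears positively, -1 means it appears negated. */
#include <stdio.h>

#define K 4                                  /* number of variables */
#define M 4                                  /* number of clauses   */

/* Clause matrix for the 4-variable example discussed above. */
static const int clause[M][K] = {
    { +1, +1, +1, +1 },
    { +1, -1, -1, -1 },
    { -1, +1, -1, -1 },
    { -1, -1, +1, -1 }
};

/* Returns 1 if the assignment x[0..K-1] satisfies every clause. */
static int satisfies(const int x[K])
{
    for (int j = 0; j < M; j++) {            /* O(M) clauses             */
        int clause_true = 0;
        for (int i = 0; i < K; i++) {        /* O(K) literals per clause */
            int lit = (clause[j][i] > 0) ? x[i] : !x[i];
            if (lit) { clause_true = 1; break; }
        }
        if (!clause_true) return 0;          /* one false clause suffices */
    }
    return 1;
}

int main(void)
{
    int x[K] = { 1, 1, 1, 1 };               /* the certificate x1..x4 = 1 */
    printf("assignment %s the formula\n",
           satisfies(x) ? "satisfies" : "does not satisfy");
    return 0;
}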
Theorem 2: Satisfiability of a Boolean formula in k-CCNF is NP-complete iff k ≥ 3.
Proof: From Theorem 1, we have k-CCNF ∈ NP. Now we only have to show that SAT ≤p k-CCNF-SAT.
As discussed in the above theorem, we consider a satisfiable Boolean formula
φ = ((x1 → x2) ∨ (x1 ↔ x3)) ∧ ¬x2.
This is satisfiable for the assignment x1 = 1, x2 = 0, x3 = 1, since
φ = ((1 → 0) ∨ (1 ↔ 1)) ∧ ¬0
= (0 ∨ 1) ∧ 1
= 1.
Hence this formula belongs to SAT.
Now we reduce this formula to a k-CCNF where k ≥ 3 as follows. We introduce a variable yi for the output of each operation, take the final output as y1, and parenthesize the formula as shown below.
φ = (((x1 → x2) ∨ (x1 ↔ x3)) ∧ ¬x2)
φ' = y1 ∧ (y1 ↔ (y2 ∧ ¬x2))
     ∧ (y2 ↔ (y3 ∨ y4))
     ∧ (y3 ↔ (x1 → x2))
     ∧ (y4 ↔ (x1 ↔ x3)), where φ' is the first-step reduction of φ.
Here, for the above satisfying assignment of φ, φ' is also satisfiable. Now we convert each clause into CNF as follows.

φ1' = (y1 ↔ (y2 ∧ ¬x2))        (1)
Table 1 is the truth table of (1). The disjunctive normal form (DNF) formula for ¬φ1', built from the rows of Table 1 in which φ1' evaluates to 0, is
(y1 ∧ y2 ∧ x2) ∨ (y1 ∧ ¬y2 ∧ x2) ∨ (y1 ∧ ¬y2 ∧ ¬x2) ∨ (¬y1 ∧ y2 ∧ ¬x2).
Applying De Morgan's laws, we get the CNF formula
φ1'' = (¬y1 ∨ ¬y2 ∨ ¬x2) ∧ (¬y1 ∨ y2 ∨ ¬x2) ∧ (¬y1 ∨ y2 ∨ x2) ∧ (y1 ∨ ¬y2 ∨ x2),
which is equivalent to the original clause φ1'.
φ2' = (y2 ↔ (y3 ∨ y4))        (2)
Table 2 is the truth table for (2). From Table 2, the DNF formula for ¬φ2' is
(y2 ∧ ¬y3 ∧ ¬y4) ∨ (¬y2 ∧ y3 ∧ y4) ∨ (¬y2 ∧ y3 ∧ ¬y4).
Applying De Morgan's laws, we get the CNF formula
φ2'' = (¬y2 ∨ y3 ∨ y4) ∧ (y2 ∨ ¬y3 ∨ ¬y4) ∧ (y2 ∨ ¬y3 ∨ y4),
which is equivalent to the original clause φ2'.
φ3' = (y3 ↔ (x1 → x2))        (3)
Table 3 is the truth table for (3).
Table 1: Truth table for (1)
y1  y2  x2  (y1 ↔ (y2 ∧ ¬x2))
1   1   1   0
1   1   0   1
1   0   1   0
1   0   0   0
0   1   1   1
0   1   0   0
0   0   1   1
0   0   0   1

From Table 3, the DNF formula for ¬φ3' is
(y3 ∧ x1 ∧ ¬x2) ∨ (¬y3 ∧ x1 ∧ x2) ∨ (¬y3 ∧ ¬x1 ∧ x2) ∨ (¬y3 ∧ ¬x1 ∧ ¬x2).
Applying De Morgan's laws, we get the CNF formula
φ3'' = (¬y3 ∨ ¬x1 ∨ x2) ∧ (y3 ∨ ¬x1 ∨ ¬x2) ∧ (y3 ∨ x1 ∨ ¬x2) ∧ (y3 ∨ x1 ∨ x2),
which is equivalent to the original clause φ3'.
φ4' = (y4 ↔ (x1 ↔ x3))        (4)
Table 4 is the truth table of (4). The DNF formula for ¬φ4' is
(y4 ∧ x1 ∧ ¬x3) ∨ (y4 ∧ ¬x1 ∧ x3) ∨ (¬y4 ∧ x1 ∧ x3) ∨ (¬y4 ∧ ¬x1 ∧ ¬x3).
Applying De Morgan's laws, we get the CNF formula


φ4'' = (¬y4 ∨ ¬x1 ∨ x3) ∧ (¬y4 ∨ x1 ∨ ¬x3) ∧ (y4 ∨ ¬x1 ∨ ¬x3) ∧ (y4 ∨ x1 ∨ x3), which is equivalent to the original clause φ4'.
Table 2: Truth table for (2)
y2  y3  y4  (y2 ↔ (y3 ∨ y4))
1   1   1   1
1   1   0   1
1   0   1   1
1   0   0   0
0   1   1   0
0   1   0   0
0   0   1   1
0   0   0   1

Table 3: Truth table for (3)
y3  x1  x2  (y3 ↔ (x1 → x2))
1   1   1   1
1   1   0   0
1   0   1   1
1   0   0   1
0   1   1   0
0   1   0   1
0   0   1   0
0   0   0   0

Table 4: Truth table for (4)
y4  x1  x3  (y4 ↔ (x1 ↔ x3))
1   1   1   1
1   1   0   0
1   0   1   0
1   0   0   1
0   1   1   0
0   1   0   1
0   0   1   1
0   0   0   0

Now we convert each φi'' CNF formula into 7-CCNF, as there are 3 original variables and 4 introduced variables.
It is known that if there is a clause (p ∨ q ∨ r), a new variable s can be included as (p ∨ q ∨ r ∨ s) ∧ (p ∨ q ∨ r ∨ ¬s). Applying this rule repeatedly to the φ1'', φ2'', φ3'', φ4'' CNF formulas, they can be converted into 7-CCNF formulas φ1''', φ2''', φ3''', φ4''' respectively.
Hence φ1''', φ2''', φ3''', φ4''' are satisfiable; as the formula was originally considered satisfiable, the SAT formula is satisfiable.
We have reduced the SAT formula φ into the 7-CCNF formula φ'''. Constructing φ' from φ introduces at most 1 variable and 1 clause per connective. Constructing φ'' from φ' can introduce at most 8 clauses into φ'' for each clause of φ', since each clause has at most 3 variables and the truth table for each clause has at most 2^3 = 8 rows. The construction of φ''' from φ'' introduces at most 2^7 - 3 = 128 - 3 = 125 clauses into φ''' for each clause of φ''. Thus, the size of the resulting formula is polynomial in the length of the original formula, and each of the constructions can easily be accomplished in polynomial time.
Thus the total time taken in the reduction = O(m) to construct φ', where m = number of connectives, + O(2^3) + O(2^7) = O(2^7), which is polynomial.


In the reduction, if we introduce i variables yi and there are k variables in the original formula, then in the resulting k-CCNF formula the value of n will be at least i + m.
Definition: A Boolean formula in k-CCNF is called full iff there are 2^k distinct clauses.
Lemma: A full k-CCNF formula is unsatisfiable.
Proof: Let us consider the full 3-CCNF formula
(x ∨ y ∨ z) ∧ (x ∨ y ∨ ¬z) ∧ (x ∨ ¬y ∨ z) ∧ (x ∨ ¬y ∨ ¬z) ∧ (¬x ∨ y ∨ z) ∧ (¬x ∨ y ∨ ¬z) ∧ (¬x ∨ ¬y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z).
Now, for these three variables there are 2^3 = 8 input combinations. If we apply these 8 input combinations to the above formula, then for each input combination the output is 0, since for every assignment there is exactly one clause all of whose literals evaluate to false under that assignment. Hence it is proved that a full k-CCNF formula is unsatisfiable.
In circuit design we can therefore replace a full k-CCNF circuit by a simple circuit which produces the constant 0.
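The lemma can be checked mechanically. The following short C sketch (an illustration written for this text, not the authors' code) enumerates all 2^3 assignments of the full 3-CCNF and confirms that each evaluates to 0; the bit-pattern encoding of clauses is an assumption used only here.

/* Brute-force check that the full 3-CCNF over {x, y, z} -- all 2^3 = 8
   distinct clauses -- evaluates to 0 for every assignment. Bit i of the
   clause index c says whether variable i appears negated in that clause. */
#include <stdio.h>

int main(void)
{
    const int k = 3;
    for (int a = 0; a < (1 << k); a++) {        /* every assignment          */
        int formula = 1;
        for (int c = 0; c < (1 << k); c++) {    /* every clause of full CCNF */
            int clause = 0;
            for (int i = 0; i < k; i++) {
                int xi  = (a >> i) & 1;          /* value of variable i       */
                int neg = (c >> i) & 1;          /* is literal i negated?     */
                clause |= neg ? !xi : xi;
            }
            formula &= clause;
        }
        printf("assignment %d -> %d\n", a, formula);   /* always prints 0 */
    }
    return 0;
}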
Algorithm for a k-CCNF formula:
INPUT: any k-CCNF formula (for example, a 3-CCNF formula).
OUTPUT: whether it is satisfiable or not.
For k variables there are 2^k input combinations, and we have to check the satisfiability of the formula for each input combination.
Read the k-CCNF formula and assign C1, C2, ..., Cm to its clauses serially.
// for example C1 = (x1 ∨ x2 ∨ x3), and so on //
For u = 0, 1, 2, ..., 2^k - 1
{
    Convert the decimal number u to the k-digit binary number b1 b2 ... bk
    For i = 1, 2, ..., k
        { xi = bi }
    satisfied = 1
    For j = 1, 2, ..., m
    {
        Read clause Cj and set Cj = 0
        For i = 1, 2, ..., k
        {
            vi = xi
            If (xi appears negated in Cj)
                vi = 1 - vi              // change 0 to 1 or 1 to 0
            If (vi == 1)
                { Cj = 1; break }        // one true literal satisfies the clause
        }
        If (Cj == 0)
            { satisfied = 0; break }     // one false clause falsifies the formula
    }
    If (satisfied == 1)
        print "satisfied for" x1, x2, ..., xk
    Else
        print "unsatisfied for" x1, x2, ..., xk
}
Complexity Analysis of the Algorithm:
To convert the decimal number u to a binary number, the complexity is O(k), where 2^(k-1) ≤ u ≤ 2^k.
Algorithm to convert a decimal number u to a binary number:

int a[32], b[32];            /* a: digits, least significant first; b: reversed */
int i, j, k, num;            /* num holds the decimal number u to be converted  */
i = 1;
while (num > 1)              /* peel off binary digits                          */
{
    a[i] = num % 2;
    i++;
    num = num / 2;
}
a[i] = num;                  /* store the most significant digit                */
k = i;                       /* k = number of binary digits                     */
for (j = 1; j <= k; j++)     /* reverse into b[1..k], most significant first    */
    b[j] = a[k - j + 1];

Here the complexity of the decimal-to-binary conversion is k.
It is observed that k = 4 and u = 8 when 2^3 ≤ 8 ≤ 2^4,
k = 6 and u = 32 when 2^5 ≤ 32 ≤ 2^6,
k = 4 and u = 9 when 2^3 ≤ 9 ≤ 2^4,
k = 5 and u = 16 when 2^4 ≤ 16 ≤ 2^5, and
k = 5 and u = 25 when 2^4 ≤ 25 ≤ 2^5.
Hence the complexity of the conversion of a decimal number u to a binary number is O(k), where 2^(k-1) ≤ u ≤ 2^k.
The worst-case time complexity of the main algorithm is 2^k (k + k + mk + m) = O(2^k mk).
Example: φ = (x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x3)
Ans: Applying the above algorithm, we can test that φ is satisfiable.
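A compact C rendering of this brute-force procedure (a sketch written for this text, not the authors' implementation) is given below; the hypothetical 3-CCNF instance and its +1/-1 clause encoding are assumptions made for illustration only.

#include <stdio.h>

#define K 3                              /* number of variables */
#define M 3                              /* number of clauses   */

/* Hypothetical 3-CCNF instance: +1 = positive literal, -1 = negated. */
static const int clause[M][K] = {
    { +1, +1, +1 },
    { -1, +1, -1 },
    { +1, -1, +1 }
};

int main(void)
{
    /* Try all 2^K assignments, as in the algorithm above: O(2^K * M * K). */
    for (unsigned u = 0; u < (1u << K); u++) {
        int ok = 1;
        for (int j = 0; j < M && ok; j++) {
            int cj = 0;
            for (int i = 0; i < K; i++) {
                int xi = (u >> i) & 1;                       /* bit i of u   */
                if ((clause[j][i] > 0) ? xi : !xi) { cj = 1; break; }
            }
            if (!cj) ok = 0;                                 /* clause false */
        }
        printf("assignment %u: %s\n", u, ok ? "satisfied" : "unsatisfied");
    }
    return 0;
}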

REFERENCES
[1]. S. A. Cook, "The complexity of theorem-proving procedures," in Proceedings of the ACM Symposium on Theory of Computing (STOC '71), 1971, pp. 151-158.
[2]. K. Subramani, "On the Complexities of Selected Satisfiability and Equivalence Queries over Boolean Formulas and Inclusion Queries over Hulls," Journal of Applied Mathematics and Decision Sciences, Volume 2009, Article ID 845804, 18 pages, 2009.
[3]. Crisanti A., Leuzzi L., Parisi G., "The 3-SAT problem with large number of clauses in the replica symmetry breaking scheme," J. Phys. A: Math. Gen. 35 (2002), pp. 481-497.
[4]. Du D., Gu J., Pardalos P. (eds), Satisfiability Problem: Theory and Applications, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 35, AMS, Providence, Rhode Island, 1997.
[5]. Hagiya M., Rose J. A., Komiya K., Sakamoto K., "Complexity analysis of the SAT engine: DNA algorithms as probabilistic algorithms," Theoretical Computer Science 287 (2002), pp. 59-71.
[6]. Istvan Borgulya, "An Evolutionary Framework for 3-SAT Problems," Journal of Computing and Information Technology, CIT 11, 2003, 3, pp. 185-191.
[7]. Utz-Uwe Haus, Klaus Truemper, and Robert Weismantel, "Linear Satisfiability Algorithm for 3CNF Formulas of Certain Signaling Networks," Journal on Satisfiability, Boolean Modeling and Computation 6 (2008), pp. 13-32.
[8]. T. J. Schaefer, "The complexity of satisfiability problems," in Proceedings of the 10th Annual ACM Symposium on Theory of Computing, A. Aho, Ed., pp. 216-226, ACM Press, San Diego, Calif, USA, 1978.
[9]. B. Aspvall, M. F. Plass, and R. E. Tarjan, "A linear time algorithm for testing the truth of certain quantified Boolean formulas," Information Processing Letters, vol. 8, no. 3, pp. 121-123, 1979.
[10]. C. H. Papadimitriou, Computational Complexity, Addison-Wesley, Reading, Mass, USA, 1994.
[11]. M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, San Francisco, Calif, USA, 1979.
[12]. Dantsin E., Hirsch E. A., Ivanov S., Vsemirnov M., "Algorithms for SAT and Upper Bounds on Their Complexity," Electronic Colloquium on Computational Complexity, Report No. 12 (2001).


Enhancement of Optical Character Recognition Output and Testing using Graphical User Interface

Nandhini Ramesh, Janani Rajasekaran, Pavithra Rajendran
Velammal Engineering College, Ambattur, Chennai-600066

ABSTRACT: Optical Character Recognition (OCR) refers to the process of converting printed text documents into software-translated Unicode text. Printed documents like books, newspapers and magazines are scanned using scanners of at least 300 dpi, which produce an image. As part of the preprocessing phase the image is checked for skew; if the image is skewed, it is corrected by a Hough transform in the appropriate direction. Then the image is passed through a noise-elimination phase and is binarized. The pre-processed image is segmented using an algorithm which decomposes the scanned text into blocks, then the blocks into lines using vertical projection, lines into words using horizontal projection, and words into characters. There are OCRs for different languages like English, Japanese and others. Additional enhancements, such as bigram statistics and accuracy improvement for matras and punctuation in the output for the Tamil language, have been implemented. A tool which collects information from xml files and highlights blocks, lines and words has been implemented as a graphical user interface, and this tool can be used for any language.

Proc. of the Intl. Conf. on Computer Applications. Volume 1. Copyright 2012 Techno Forum Group, India. ISBN: 978-81-920575-5-2 :: doi: 10.73732/ISBN_0768. ACM #: dber.imera.10.73732

I. INTRODUCTION

Indian languages have the disadvantage that each letter requires a combination of keys. In a pure OCR system, the text blocks have to be manually selected and given as input. Tamil, a south Indian language, is one of the oldest languages in the world. Though work is going on in Tamil OCR, the rate of accuracy still remains a challenging area of research, and many of the works related to Tamil OCR have not concentrated enough on the accuracy parameter. Optical Character Recognition (OCR) is a form of document image analysis where a scanned digital image that contains either machine-printed or handwritten script is given as input to an OCR software engine, which translates it into an editable, machine-readable digital text format.
Tamil has 12 vowels and 18 consonants, and these are combined with each other to yield 216 compound characters and 1 special character (aayutha ezuthu), counting to a total of 247 characters. Vowels are otherwise called uyirezuthu and are of two types, short and long. Consonants are classified into three classes with 6 in each class and are called valinam, idaiyinam,

mellinam. The Unicode Standard is the universal character-encoding scheme for written characters and text. Each particular character in Tamil has a Unicode code point; the Tamil Unicode range is U+0B80 to U+0BFF [2], and the Unicode characters are comprised of 2 bytes. Although the accuracy is about 93%, manual correction is done before converting the text into Braille script, and measures were taken so that the accuracy is improved. The major problem is that Tamil characters occupy 2, 3 or 4 bytes, and hence wide characters, which occupy 4 bytes, are used.
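As a small illustration of working with this range (a sketch written for this text, not the paper's code), a wide-character code point can be tested against the Tamil Unicode block as follows; reading input as 32-bit wchar_t values is an assumption.

#include <stdio.h>
#include <wchar.h>

/* Returns 1 if the code point lies in the Tamil Unicode block U+0B80..U+0BFF. */
static int is_tamil(wchar_t c)
{
    return c >= 0x0B80 && c <= 0x0BFF;
}

int main(void)
{
    wchar_t sample[] = { 0x0B85, 0x0041, 0x0BBE, 0 };   /* Tamil A, Latin A, Tamil matra AA */
    for (int i = 0; sample[i] != 0; i++)
        printf("U+%04X %s in the Tamil block\n",
               (unsigned)sample[i], is_tamil(sample[i]) ? "is" : "is not");
    return 0;
}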

II. OCR FUNCTIONAL BLOCK DIAGRAM
The block diagram of OCR [1] consists of various stages, as shown in Fig. i: scan document, preprocessing (skew detection, binarization, noise reduction), segmentation, feature extraction, classification (SVM), and Unicode mapping and text recognition, followed by output verification.
Fig. i: OCR functional block diagram
Support Vector Machines are based on the concept of decision planes that define decision boundaries. A decision plane is one that separates a set of objects having different class memberships. It is used to classify the different types of character glyphs belonging to different Tamil fonts. A Support Vector Machine (SVM) is primarily a classifier which performs classification tasks by constructing hyperplanes in a multidimensional space that separate cases of different class labels. There are two types of SVM classifier.


1. SVM1: This is used for all the characters present in the language. The final stage of OCR is to recognize the scanned images, in the form of TIFF files, as text. Here it separates text and non-text, does word-level classification and finally identifies the characters. Once a character has been identified, text recognition becomes easier. The format of the scanned images can vary; for instance, it can be a simple format consisting of one column, or a Manhattan layout with many rows and columns. The final recognized text is stored as an RTF file. The already proposed system for Tamil, which follows a procedure similar to that for the Bangla [3] and Kannada scripts, uses a classifier mainly to recognize the characters and special symbols and to identify the Unicode and confidence level of characters. The confidence level is used to find three candidate characters and the probability of how closely each resembles the identified character. The enhancement of OCR accuracy is explained in section III of the paper, along with the testing tool.

III. ENHANCEMENT OF OCR AND TESTING TOOL
BIGRAM STATISTICS
A bigram is a subsequence of two adjacent items from a given sequence. The items can be phonemes, syllables, letters, words or base pairs according to the application. This is a type of probabilistic model for predicting the next item in such a sequence. Bigrams are used in various areas of statistical natural language processing and genetic sequence analysis. The number of occurrences of a particular character, given the other character, was found out. Bigram statistics on text give the probability of occurrence of all combinations, from which we can find the normalized frequency. For example, in the English alphabet, "qu" will have a probability close to 1 and "qd" close to 0.
There are cases where two components merge
together and give another component. But by studying
the aspect ratio we can recognize that something has
gone wrong in between these characters. This further
reduces the accuracy of the OCR output.

Fig iii

Fig ii


Fig vi
Fig iv
CERTAIN ISSUES
There are certain problems found in the OCR output. In the Tamil language [2], the unicodes of two matras cannot follow one another. For instance, in a word with a matra, the character will be recognized in a particular fashion, that is, the matra will always follow the consonant. But the OCR, in some cases, recognizes the word as matras followed by consonants. This reduces the accuracy of the output. So the program flags an error if the unicodes of two matras follow each other. The programs are written both with file management and without file management. In file management, a match file is used which is matched against the input file and pops out an error if two matras are together. In the other case, the input file is scanned and all those strings which have matras together are printed.

Figure v shows the output of the OCR obtained with the error. For the file-management case, the output is shown in figure vi. In the case without file management, the whole input file is scanned and, with the help of the ASCII values for matras, it is checked whether two matras [2] come together. If this condition prevails, we print the strings in the terminal as in figure vii.

Fig vii

The other problem found in the OCR output deals with punctuation marks. In the output text file, we find punctuation marks such as '.' in between the characters of a word. Punctuation can come at the end of a word but not inside the word in a valid Tamil sentence. So a program is written which flags an error if a punctuation mark comes in the middle of a word; if the punctuation mark comes at the end, the word is accepted. Again, this is done both with and without file management. Further, if a punctuation mark comes in the middle of a word, it gets replaced with a '*' symbol. For instance, for a wrong input such as the one shown in figure ix, the program will give an output indicating an error and will print the punctuation mark found in the middle of the word. The program also looks into the constraint of punctuation marks at the end: for example, if a full stop (.) comes at the end of the word, then the program will not print any error, nor will the full stop (.) be replaced by the asterisk (*) symbol.
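A simplified C sketch of this post-processing check (written for this text, not the paper's program; treating a word as a whitespace-delimited token of single bytes is an assumption, since the real tool works on Tamil wide characters) is:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Replace punctuation found strictly inside a word with '*';
   punctuation at the end of the word is left untouched. */
static void fix_word(char *w)
{
    size_t n = strlen(w);
    for (size_t i = 0; i + 1 < n; i++)        /* skip the last position */
        if (ispunct((unsigned char)w[i])) {
            printf("error: '%c' inside word \"%s\"\n", w[i], w);
            w[i] = '*';
        }
}

int main(void)
{
    char word[256];
    while (scanf("%255s", word) == 1) {
        fix_word(word);
        printf("%s\n", word);
    }
    return 0;
}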

Fig v


Fig viii
Beyond the unicodes of matras occurring together and the recognition of punctuation marks in the middle of words, there is another problem found in the OCR output: in the Tamil language, the text cannot have a number between characters. This decreases the efficiency of the OCR output. Thereby, a program is written which detects such problems and converts these numbers into a '#' symbol. For example, the above word changes as shown in figure ix. This might increase the efficiency of the output.

Fig ix

In OCR there are two classifiers, SVM1 and SVM2. The SVM1 classifier recognizes the vowels and consonants of the Tamil language, while SVM2 recognizes special characters like (. , ? : ;). Each particular character has a particular relative height. The relative height is defined by the formula
Relative height = (height of the character) / (line height),
where the line height is the height of the line from which the particular character is taken. So a function named Line Ratio is added in the OCR code which checks the relative height of the segmented component. If the relative height of the segmented component is less than 0.175, the component represents a special character. So, again, it is checked whether these special characters are present in the middle of a word; if this condition prevails, the special character is deleted. This increases the accuracy of the OCR.
The next task was to scan pages and find problems in the OCR output. Some problems were found. The first one was that for a multi-column input image file, the output of the OCR was not correct. For instance, Tamil magazines and newspapers have multiple columns, and the OCR in such a case reads the lines as a single column. This reduces the accuracy of the output. Thereby, an algorithm which solves this problem was devised, in which x and y projections, i.e. horizontal and vertical projections [2], are considered to solve this problem.
The algorithm is as follows:
Start
Step 1: To get the input using opencv command and
check if the image is successfully loaded.
Step 2: To get the up, down, left, right margins of the
binarised image.
Steps to get the margins
Start
Step 1: Find the vertical projection
Step 2: Smoothing the vertical projection
Step 3: Find the left and right margin of the binarised
image with help of vertical projection.
Step 4: Find the horizontal projection
Step 5: Smoothing the horizontal projection
Step 6: Find the up and down margin of the binarised
image with help of horizontal projection.
End
Step 3: Now find the vertical gap length and index with
vertical projection.
Step 4: With help of the vertical gap value, index and
vertical projection, find the width of Block.
Step 5: Again like step 3, find the horizontal gap length
and index with horizontal projection.
Step 6: Now with the values obtained in step 4, find the
height of the block.
Step7: After getting the height and width of blocks split
it.
End
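The projection profiles at the heart of this algorithm can be sketched in C as follows (an illustration for this text, not the paper's code; the binarized image is assumed to be a 2-D array with 1 for ink pixels, and the threshold value is arbitrary):

#include <stdio.h>

#define W 640
#define H 480

static unsigned char img[H][W];   /* binarized page: 1 = ink, 0 = background */

/* Vertical projection: number of ink pixels in each column.
   Runs of near-zero columns mark block/column gaps and the side margins. */
static void vertical_projection(unsigned char im[H][W], int vp[W])
{
    for (int x = 0; x < W; x++) {
        vp[x] = 0;
        for (int y = 0; y < H; y++)
            vp[x] += im[y][x];
    }
}

/* Horizontal projection: number of ink pixels in each row.
   Gaps here mark line boundaries and the top/bottom margins. */
static void horizontal_projection(unsigned char im[H][W], int hp[H])
{
    for (int y = 0; y < H; y++) {
        hp[y] = 0;
        for (int x = 0; x < W; x++)
            hp[y] += im[y][x];
    }
}

/* Left margin: first column whose projection exceeds a small threshold. */
static int left_margin(const int vp[W], int threshold)
{
    for (int x = 0; x < W; x++)
        if (vp[x] > threshold)
            return x;
    return -1;                    /* blank page */
}

int main(void)
{
    static int vp[W], hp[H];
    vertical_projection(img, vp);
    horizontal_projection(img, hp);
    printf("left margin at column %d\n", left_margin(vp, 2));
    return 0;
}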
TESTING TOOL FOR OCR OUTPUT
During the segmentation of a page, two xml files [4] are created. One xml file contains the information of the blocks, while the other has word-level information for the same page. The block xml has information about the block number, block type, and the starting and ending x and y coordinates. Similarly, the word xml contains the line number in which each word is present and the x and y coordinates of all words. It is not possible for a user to look into the coordinates and come to a correct conclusion. For instance, the block xml may have information like x1 = 400 px, x2 = 490 px, y1 = 890 px and y2 = 980 px. In this case, we cannot expect a

person to manually calculate the pixel and find the


location of the word. In order to make the work easier, a
tool is created which will collect the information from the
xml files and highlight the blocks, lines and words. This
can also be used as a checker for OCR output. For better
efficiency, the information of the scanned image file can
be stored in an xml [4] and the output can be seen in
this tool. This provides a better understanding and will
also be user friendly.
The following steps are followed in this tool:
1. Load the intended book into the GUI.
2. The xml information is parsed using DOM.
3. Initially the block xml is parsed and it checks if the
block is a text or image.
4. If the block is an image, it is highlighted by green
rectangle.
5. Else if block is a text, a red rectangle is drawn.
6. Then the xml containing word information is parsed.
7. With the help of the word information, the left, right,
top and bottom boundaries of line are found.
8. Then the line is highlighted in blue color.
9. With help of word information, the words are
highlighted in magenta color.
10. Then the final image is displayed as in figure x.

STEPS TO BE FOLLOWED
STEP 1:

The first step was creating GUI as in figure xi. To


create this GUI, Abstract Window Toolkit (AWT) was
used. This is Java's original platform-independent
windowing, graphics and user interface widget toolkit.
This is an Application Programming Interface (API)
for providing graphical user interface (GUI) for a Java
program. Furthermore, the action which the buttons would
perform when they are clicked is also added.
STEP 2:
The next step was to add the tiff file into the GUI. To do
this, the library named JAI was used. A multitiff image
was loaded inside where initially the first image is
displayed. Then later when a user clicks the next button,
the second page is displayed.
STEP 3:
After loading the image into GUI, the next step will be to
parse the xml [4] code containing block information. The
block information contains details like the page number,
book code, block number and block type. Thus, with
help of page numbers, the blocks are found. Moreover,
the block type is also checked. Finally, if the block type
is a text, then the text block will be painted red in color
else will be painted green in color
STEP 4:
Here the xml containing word information is parsed. With this information, the left, top, right and bottom boundaries are found. The left boundary of the first word in a line will be the left boundary of the line. The top and bottom boundaries of the words in one particular line number will be the top and bottom boundaries of the line. The right boundary of the line will be the right boundary of the last word in that line. With this information, the lines are highlighted with blue color as in figures xii and xiii.
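This min/max computation can be rendered compactly in C (a sketch for this text, not the tool's Java code; the word-box structure is a hypothetical stand-in for the parsed xml data):

#include <stdio.h>

typedef struct {
    int line;                     /* line number from the word xml */
    int x1, y1, x2, y2;           /* word bounding box             */
} word_box;

/* Compute the bounding box of one line as the union of its word boxes. */
static void line_bounds(const word_box *w, int n, int line,
                        int *x1, int *y1, int *x2, int *y2)
{
    int first = 1;
    for (int i = 0; i < n; i++) {
        if (w[i].line != line) continue;
        if (first || w[i].x1 < *x1) *x1 = w[i].x1;   /* left   = min x1 */
        if (first || w[i].y1 < *y1) *y1 = w[i].y1;   /* top    = min y1 */
        if (first || w[i].x2 > *x2) *x2 = w[i].x2;   /* right  = max x2 */
        if (first || w[i].y2 > *y2) *y2 = w[i].y2;   /* bottom = max y2 */
        first = 0;
    }
}

int main(void)
{
    word_box words[] = { {1, 400, 890, 490, 980}, {1, 500, 892, 560, 978} };
    int x1, y1, x2, y2;
    line_bounds(words, 2, 1, &x1, &y1, &x2, &y2);
    printf("line 1: (%d,%d)-(%d,%d)\n", x1, y1, x2, y2);
    return 0;
}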

Fig x
Fig xi



Fig xii

IV. CONCLUSION AND FUTURE WORK
The main aim was to improve the output of optical character recognition for the Tamil language. Bigram statistics was implemented with wide characters, since Tamil characters occupy 3 or 4 bytes. The issue of punctuation marks was also dealt with, where it is checked whether a punctuation mark occurs in the beginning, middle or end of a word; if it occurs in the middle of characters, it is indicated with a '*'. Further work was done to look into the unicodes of two matras that come together, and if this condition is satisfied, the string is printed. The occurrence of numbers in between characters was also checked and, if found, they are replaced with '#'. All these come under post-processing steps. This might reduce the error rate and increase the accuracy of the optical character recognition. During scanning, errors in the output were due to insertion, deletion, replacement or word splits. A testing tool was created to check the optical character recognition output, in which the xml files that contain the block and word information are parsed. With the parsed information, the text blocks, image blocks, lines and words are highlighted in different colours. This tool can mainly be used to check the segmentation of blocks, words and lines.
Further work can be done to bring the rate of accuracy higher and to implement recognition of matras and characters at a much more efficient rate. The testing tool is slow when there is a vast amount of data available, and hence rewriting the code to make it more efficient and effective with increased speed can be done in future.

REFERENCES
[1] K. H. Aparna, Sumanth Jagannathan, P. Krishnan and V. S. Chakravarthy, "An Optical Character Recognition system for Tamil newsprint," National Conference on Communications, IIT Madras, January 2003.
[2] B. J. Manikandan, Gowrishankar, V. Anoop, A. Datta and V. S. Chakravarthy, "LEKHAK: A System for Online Recognition of Handwritten Tamil Characters," International Conference on Natural Language Processing (ICON), December 2002.
[3] B. B. Chaudhuri and U. Pal, "An OCR System to read two Indian language scripts: Bangla and Devnagari (Hindi)," International Conference on Document Analysis and Recognition, August 18-20, Ulm, Germany, pp. 1011-1015, 1997.
[4] Swapnil Belhe, Chetan Paulzagade, Sanket Surve, Nitesh Jawanjal, Kapil Mehrotra, Anil Motwani, "Annotation Tool and XML Representation for Online Indic Data," 2010 12th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp. 664-669, 2010.


Computer Simulation of DLA in Electrodeposition of Dendritic Patterns


Dr. Anwar Pasha Deshmukh
College of Computer and Information Technology,
University of Tabuk, Tabuk, Saudi Arabia

Abstract: Advanced topics and high-tech experiments demand a lot of infrastructure. There are areas in the life sciences and other subjects where the magnitude of information is so huge that it is more convenient to do the experiment by simulation using computers. With the advancement of computers and the Internet, there has been an increasing trend of utilizing them for simulation. The present work aims to study Diffusion Limited Aggregation (DLA) in the electrodeposition of dendritic patterns using Brownian motion. It is intended to be an interactive tool where the learner can change the parameters of interest to study DLA in the electrodeposition of dendritic patterns using Brownian motion and actually see the effect by way of simulated graphics. Quantitative values of the parameters arising from the simulation study will also be available to the learner. We present here diffusion limited aggregation grown using computer simulation under different conditions and compared with experimental results. In this simulation, first a seed particle is placed at the origin. Then random walkers are released, one at a time, from some distant location. When one of these particles makes contact with the aggregate, the next particle is released. This process continues and a pattern is formed. It is this effect that leads to the characteristic dendritic pattern. During this pattern formation the centre of mass of the growth shifts, and this shift is studied under different electric field conditions (bias).
Keywords: Simulation, DLA, Electrodeposition, Dendritic Pattern, Brownian Motion

Proc. of the Intl. Conf. on Computer Applications. Volume 1. Copyright 2012 Techno Forum Group, India. ISBN: 978-81-920575-5-2 :: doi: 10.73739/ISBN_0768. ACM #: dber.imera.10.73739

I INTRODUCTION
The study of Diffusion Limited Aggregation (DLA)
in Electrodeposition of Dendritic patterns has become a
very important tool for understanding irregular objects,
complex systems and phenomena in which a scale
invariance of some sort exists in various scientific
disciplines. These complex shapes are studied in a broad
variety of systems that span through the interdisciplinary
spectrum of modern research including life sciences and
biology, chemical sciences as well as materials sciences,
geology etc. In the field of physical chemistry, fractals
have been observed in growth processes called
electrodeposition. Recently scientists in various scientific
disciplines have started studying this phenomenon. The
first aggregation model is based on joining the

randomly walking particles to a growing cluster. In


practice the aggregation process is more complex and the resulting deposits exhibit a variety of complex structures which are usually statistically simple and self-similar, i.e. dendritic patterns.
Electrodeposition of metals in dendritic shapes is
governed by processes similar to the Diffusion Limited
Aggregation (DLA) in which there are two major
competing processes. One is the presence of Brownian
motion of the ions and molecules in the solution at room
temperature and the other one is the movement of ions
towards respective electrodes. As the metal ions move
toward cathode, in the way they undergo collisions with
other ions and molecules that results in change in
direction and magnitude of motion of the ions. In other
words, the ions released from anode, while moving
toward the cathode are distracted due to the random
Brownian motion of the rest of the molecules in the solution. If the component of random motion is prominent compared to the directional motion (radial motion in the case of a circular electrodeposition cell), then instead of going straight toward the cathode, the motion of the ions tends to be random. This random motion may have any direction, not necessarily toward the cathode; therefore, if these randomly wandering ions reach a metallic deposit already attached to the cathode, they may stick there and become part of the growth. It is this effect that leads to the characteristic dendritic growth, because once a branch has developed, it is more likely to intercept the movement of ions in the electrolyte. This results in further growth of the branch, and therefore a branch that has started growing tends to grow faster and faster compared to the rest of the regions in the entire growth. Consequently, a shorter branch that lies between two growing branches will have
very limited chance of growth by further attachment of
ions as compared to the two neighboring branches. This
effect is also known as masking or shielding effect as the
neighboring branches prevent the randomly wandering
ions from reaching shorter branches or inner parts of the
growth.
The simulation of such a process demands for making
the ions move in a fashion identical to that would prevail
in circular cell geometry with certain applied voltage and
presence of random motion at room temperature. To
simulate such a situation a lattice is usually selected and
the ions are allowed to move on the lattice and the motion
of the ions is controlled by the conditions to be simulated.
In the case of DLA the moving ions travel a fraction of
step in radial motion and rest of the motion is in a


randomly oriented direction. Thus the step size keeps on


changing therefore the off lattice walk is more suitable
here.
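The following C sketch (an illustration written for this text, not the author's simulation code) shows one such biased off-lattice step; interpreting the bias parameter B as the fraction of each unit step taken radially toward the seed at the origin, with the remainder in a random direction, is an assumption made for this example.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y; } point;

/* Advance a walker by one unit step: a fraction B of the step is directed
   toward the origin (the seed/cathode), the remaining (1 - B) part is taken
   in a uniformly random direction (the Brownian component). */
static point biased_step(point p, double B)
{
    double r = hypot(p.x, p.y);
    double theta = 2.0 * M_PI * rand() / (double)RAND_MAX;  /* random direction */
    point q = p;
    if (r > 0.0) {                       /* radial (drift) component */
        q.x -= B * p.x / r;
        q.y -= B * p.y / r;
    }
    q.x += (1.0 - B) * cos(theta);       /* random (Brownian) component */
    q.y += (1.0 - B) * sin(theta);
    return q;
}

int main(void)
{
    point p = { 50.0, 0.0 };             /* walker released far from the seed */
    for (int i = 0; i < 1000; i++)
        p = biased_step(p, 0.001);       /* low-bias (near-DLA) walk          */
    printf("walker ended at (%.2f, %.2f)\n", p.x, p.y);
    return 0;
}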
II OUTLINE OF ACTUAL WORK
Fig. 1 shows a simulated electrodeposit using bias
parameter B = 0.001 at two different stages of simulation
of electrodeposition. Fig 1 (a) is the simulated growth
with 1000 points attached and Fig. 1 (b) is the next stage
of growth with 1500 points attached. It is seen from both
the patterns that the attachment of fresh ions follow
certain pattern such that fresh ions are attached at the
fastest growing branch tips. To identify the different stage
of growth of the simulation pattern different colors are
used. It is also seen from the simulation that for a lower
value of B = 0.001, the growth pattern is less crowded
with limited branching, however secondary and tertiary
branches are present in appreciable amount. During the
simulation of the growth initial value of B = 0.001 was
used and was allowed to change gradually to a value of
0.005 as the growth reaches the anode.

Fig. 1 Simulated electrodeposition patterns using bias parameter B
= 0.001 at two different stages of simulation of electrodeposition, (a) is
for 1000 points and (b) for 1500 points attached.

Two different stages of the simulated electrodeposition pattern (1000 and 1500 points attached) are presented in Fig. 1. The value of B used in the simulation is 0.001, corresponding to growth under a low cell operating voltage with a lower electric field, i.e. near DLA conditions. Each square dot in the image indicates one particle; different colors are used to identify the particles attached at different stages of growth, the color being changed after every 200 particles attached. Comparison of Fig. 1(a) and (b) clearly shows that the growth is prominent on the developed branches and that fresh particles get attached at the tips of growing branches, confirming the expected masking effect. Also, as the simulation corresponds to growth under low cell operating voltage conditions (low value of B), the growth is more open with limited branching, and the associated structural complexity is lower. To quantify this, the shapes were analyzed using the mass-radius method.
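The mass-radius analysis itself is straightforward to reproduce. The sketch below is a minimal illustration (assumed, not the authors' implementation; the method name, the binning and the least-squares fit are our choices), estimating the fractal dimension D as the slope of log N(r) versus log r, where N(r) is the number of deposited particles within distance r of the seed.

import java.util.List;

public class MassRadius {
    /** Estimate D by a least-squares fit of log N(r) against log r. */
    static double fractalDimension(List<double[]> particles, double rMax, int bins) {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        int n = 0;
        for (int i = 1; i <= bins; i++) {
            double r = rMax * i / bins;
            long count = particles.stream()
                    .filter(p -> Math.hypot(p[0], p[1]) <= r)   // particles inside radius r
                    .count();
            if (count == 0) continue;
            double lx = Math.log(r), ly = Math.log(count);
            sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly; n++;
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);        // slope of the fit = D
    }
}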
Fig. 2 Simulated electrodeposition patterns using bias
parameter B = 0.005 with 1500 points attached.


It is seen from the pattern simulated in Fig. 2 that with the increase in the effect of the electric field (bias B = 0.005) the growth is more crowded, with more branches. Comparison with Fig. 1(b) clearly reveals that there is more structure to the deposit, with crowded and dense branching. This agrees with the experimental results, which show that at higher cell operating voltages, corresponding to a higher electric field, the branching is crowded. Also, the secondary and tertiary branches are more prominent than those in Fig. 1(b).
III CONCLUSION
We simulated the electrodeposition of dendritic patterns using Brownian motion under different electric field (bias) conditions using an off-lattice random walk. It is observed that the fractal growth at low bias, B = 0.001, shows an open structure with fewer branches and less crowding. As the bias is increased (to the higher value B = 0.005), the growth tends to be dense, with more crowded branching. The complexity of the grown dendritic patterns increases with an increase in the value of B, indicating increased complexity of the structure and texture of the simulated patterns. This is in agreement with the experimental observations. It is also observed that during the evolution of the fractal or dendritic patterns, as growth proceeds with the attachment of particles, the center of mass of the growth also shifts slightly. The shifting of the center of mass exhibits a random-walk-like pattern, and the center of mass wanders around the origin, i.e. the starting point of the growth.

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

15

Programmatic Testing of a Technology Product


Suresh Malan, Amol Khanapurkar, Chetan Phalak
Performance Engineering Research Center
Tata Consultancy Services
Mumbai, India

Abstract - This paper describes the testing experiences with a technology product, Jensor. It is targeted primarily at the J2EE developer and testing community. Jensor's objective is to be a very useful tool in the performance engineering of all kinds of Java applications. Jensor was designed to work seamlessly with a Java application without needing access to source code or changes to any existing development methodologies and processes.
Jensor is implemented in Java using bytecode instrumentation. Bytecode instrumentation is the process of writing Java opcodes into compiled Java class files. Jensor is fit for monitoring web applications under large workloads and can generate profiling data at rapid rates. The overall size of the data to be analyzed can range from KBs to GBs. Testing Jensor meant testing its modules. A spectrum of challenges was involved in testing it, including testing the correctness of instrumentation, ensuring deterministic behavior in multi-threaded environments and validating the correctness of large data sets. This paper highlights how the test cases were derived and implemented.
Keywords- Testing, Unit Testing, Integration Testing,
Bytecode instrumentation, Profiling, Data Analysis

I. INTRODUCTION
A technology product is different from an application because it is built on a set of specifications on which the underlying technology is based. A specification is usually evolved with the help of a community. For a technology product there are two implications, viz.
- A specification describes only the "What" part and never the "How" part. Hence a thorough understanding of the specifications is a must for product development.
- The product will need to adapt to revisions of a specification, such as the addition or removal of a feature in the technology. This task is non-trivial.
Jensor is based on the Java Class file specification [1]. The design goals were to be a light-weight and low-overhead profiler. An efficient implementation and robust testing were required to meet these goals.

The testing activity primarily revolved around three themes, viz.
- Bytecode Instrumentation (BCI) Testing
- Data Capture (DC) Testing
- Data Analysis (DA) Testing
During the BCI and DA phases only Jensor code is active, whereas during the DC phase Jensor code co-exists with the code of the application being profiled. During BCI the focus is on the correctness of instrumentation, whereas during DA correctness can be guaranteed only if the captured data meets certain properties. Similarly, in the DC phase it is important to ensure that Jensor's presence does not adversely affect the application. Thus it is easy to see that each theme had different challenges.
This paper describes the functional and performance testing of Jensor. More specifically, it describes the testing strategy, the process followed and the results achieved.
II. ORGANIZATION OF THE PAPER
The paper has been organized to provide a brief introduction to Jensor and then the details of the testing activity performed. The different sections of the paper are as follows.
The section "Jensor Introduction" introduces Jensor, providing details on the representative use cases, the architecture and the working of the profiler.
The section "Testing experiences detailed" provides details on the functional and performance testing activities performed. Under functional testing, the activity performed by each Jensor component is described in detail before moving on to the challenges faced and the approach used to resolve them. Under performance testing, the calibration activities performed are detailed.
III. RELATED WORK
There have been two other publications around Jensor. Ref. [2] concentrates on the adaptive nature of Jensor, describing the state-machine based approach to perform adaptive profiling, which progressively refines the set of methods suitable for profiling. Jensor was also re-architected to avoid any increase in overheads and to reduce resource utilization by the data analysis modules; the process followed has been detailed in [3]. Jensor is available for download at http://jensor.sourceforge.net.
IV. JENSOR INTRODUCTION
Jensor focuses on performing dynamic analysis of code at runtime. It primarily targets obtaining method-level response times and memory consumption. After obtaining this data it performs intelligent analysis to pin-point and even predict the bottlenecks in the system. The following is a list of use cases, representative of, but not limited to, what Jensor can be used for.
A. Representative use cases for Jensor
- To obtain the response time of methods in production environments without making any modifications to source code. This is extremely useful where different vendor code is responsible for different components of a system.
- To identify bottleneck methods in Development, Test and Production environments. Many profilers that are suitable for development are not suited for production and vice-versa. The profiler under consideration here is designed to work efficiently in all environments.
- To obtain insight into application reliability and scalability.
- To obtain JVM Heap Utilization statistics. This is useful in many cases, such as:
  - To detect if the application has memory leaks
  - To quantify the memory requirements of an application
  - To characterize memory usage patterns
  - To arrive at optimal Garbage Collection (GC) settings
B. Jensor Architecture
Jensor comprises different components, each playing its part in the profiling process. The figure below (Fig. 1) depicts the high-level architecture.
Fig 1 High-level architecture: the target Java application (containing the instrumented code and the Core), the Communication Engine (CE), the Analysis Workbench (AW) and the Persistent Data Store.
Jensor consists of four main components, viz.
Core: The Core is responsible for controlling the behaviour of the profiler and the generation of logs.
Persistent Data Store: The persistent data store for the discussion here consists of flat files. The testing details will not dwell too much on the persistent data store but concentrate on the other aspects.
Analysis Workbench (AW): It is responsible for log analysis. It acts as the user interface, facilitating the transmission of control signals and the interpretation of the analysis results.
Communication Engine (CE): It acts as a carrier of control signals from AW to the Core. It also acts as a medium for the transfer of data logs from the Core to AW.
C. How does Jensor work?
The process starts with the user providing either the
class files directly or an archive file containing the class
files. Jensor performs bytecode instrumentation of these
class files. Bytecode instrumentation is the process of
writing valid bytecode into compiled class files.
On invocation of the instrumented code, log generation
gets started. These logs are written to flat files on the disk.
The user will essentially need to start and stop the
logging process appropriately. This control is enabled
through the Core residing within the instrumented code.
However the user cannot directly communicate with the
Core.
This problem is resolved by the Analysis Workbench (AW), which provides a user interface for sending the control signals across the Communication Engine (CE).
Also AW is responsible for performing analysis on the
logs. The various analysis modules within AW enable a
better interpretation of the logs, thereby aiding the
identification of the bottlenecks.
The newer versions of Jensor are adaptive in nature. Adaptive profiling [2] refers to the process of refining the set of methods being monitored over multiple iterations such that the profiling overheads are negligible.
The experiences of testing Jensor are detailed in the next
section.
V. TESTING EXPERIENCES DETAILED
Basically, two broad categories of testing were performed, namely functional testing and performance testing, each of which is described below.
A. Functional Testing
Functional testing refers to the process of testing each component of Jensor, validating its functional correctness. The functional testing process is explained in the context of the different components of Jensor. Fig. 2 displays the components and their break-up into sub-components.


Fig 2 Profiler Block Diagram: the Instrumentation Engine lane contains the Byte Code Reader, the Byte Code Transformer and the Byte Code Writer; the Core lane contains the Log Writer, the Real-time Statistics Engine, the Control Engine and the TCP / IP Server; the AW lane contains the Reports, Call Trace, Pattern Analysis, JVM Replay, Charting Engine, Unexited Methods, Response-time Spectrum and Object Instantiations modules.

Each lane in the above diagram represents a component of Jensor with clearly defined responsibilities. The sub-components displayed as boxes play an important part in the discharge of these responsibilities.
In this section, the functionality of each module is explained in detail, then the challenges faced in testing are mentioned, followed by the solutions that overcame them.
1) Instrumentation Engine
The instrumentation engine is responsible for
manipulating the Java classes. Sun Microsystems defines
specifications for Java Class file format. A standard
specification of Class file across all platforms is what makes
Java portable. The specification defines a structure as per
which Java Classes are organized.
Bytecode Instrumentation (BCI) is the process of writing
bytecode into the compiled class files. Jensor uses ASM[4],
a Java bytecode manipulation framework to perform BCI.
BCI can be performed statically as well as dynamically.
Static instrumentation is a semi-automatic process where the
instrumentation utility is told where the class files reside on
the file system. In dynamic instrumentation, classes are
hooked at load time (when they are about to be loaded into
the JVM) and instrumentation code is injected in them.
In the context of this paper, the term instrumentation in
general refers to static instrumentation. Under the
instrumentation process, the user submits the original class files to the Instrumentation Engine. The engine then proceeds with the instrumentation process. Instrumentation enables the logging of information regarding the various events of interest, such as method entry / exit, object instantiation, etc.

The user then receives this instrumented code, which is used for execution.
The different activities performed under the instrumentation phase are as follows:
- The Byte Code Reader reads a class file as an array of bytes and interprets it as per the class file structure. The Byte Code Reader can read the bytecode either from a file or from a stream.
- The Byte Code Transformer is responsible for identifying the elements and modifying them. The Transformer pipeline involves the following adapters, each of which inserts additional bytecode:
  - Response time adapter: Inserts the code for identification of method entry and exit events and calculation of the response times.
  - Object Instantiation adapter: Inserts the code for accounting for object instantiations.
  - Heap Utilization adapter: Inserts the code for writing heap utilization statistics.
  - Line of code adapter: Inserts the code for identifying line-of-code level details.
- The Byte Code Writer is responsible for writing out the instrumented class file. After the class file passes through the Transformer pipeline, the Byte Code Writer writes out the instrumented bytecode.
It should be noted that the modification of class files should adhere to the elaborate set of Java rules that govern whether a class is valid. An invalid or malformed class file is ignored (after throwing relevant Exceptions) by the JVM and is never loaded. Hence verification was performed at each and every step.
a) Challenges
Instrumentation is performed statically, which meant that execution could not be a means to validate the correctness of the format of the class files. The validation process involved:
- How to validate the correctness of the class file read by the Byte Code Reader?
- How to validate whether the right changes at the right locations are made by the Byte Code Transformer?
i. Validating the class files read
This validation was essential since a malformed class
file will mean failure even before the instrumentation
process begins.
The first solution identified was testing the correctness
of the class file format while reading as well as after
writing. However when the reading process was reviewed it
was identified that the class file is read as a stream of bytes.
So the only possibility of a malformed class file could be
due to loss of bytes while reading. Thus a simple solution
put to use was to compare the size of the class file and the
number of bytes read.
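A minimal sketch of that size check, assuming Java NIO file access and a hypothetical class file path (this is not Jensor's code), is shown below.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadCheck {
    public static void main(String[] args) throws Exception {
        Path classFile = Paths.get("Original.class");      // assumed path
        byte[] bytes;
        try (InputStream in = Files.newInputStream(classFile)) {
            bytes = in.readAllBytes();                      // what the Byte Code Reader would receive
        }
        long expected = Files.size(classFile);
        System.out.println(bytes.length == expected
                ? "Read OK: " + expected + " bytes"
                : "Byte loss: read " + bytes.length + " of " + expected);
    }
}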
ii. Validating the transformation
Given that the reading process was flawless, the class file next moves to the Byte Code Transformer, wherein it passes through the various adapters. The validation process remains similar across the different adapters. Hence, considering the space constraints, the validation process is explained for the Response time adapter alone.
The Response time adapter inserts additional bytecode into the compiled class files to capture the method entry and exit events. For explanation purposes, the method exit event is taken as a representative case and a detailed description of how and where the changes are made follows.
As per the class file format specification, each method's return statement is identified by an appropriate instruction / opcode, and for each method the return instruction must match the method's return type. Thus if the method returns a boolean, byte, char, short, or int, only the ireturn instruction may be used. If the method returns a float, long, or double, only an freturn, lreturn, or dreturn instruction, respectively, may be used. If the method returns a reference type, it must do so using an areturn instruction. All instance initialization methods, class or interface initialization methods, and methods declared to return void must use only the return instruction.
This meant that locating these instructions within the bytecode would identify all the method exit events; these are the locations for the insertion of additional bytecode. So the validation process essentially required that all possible instructions / opcodes indicating a particular event were identified. Thus the solution resided in understanding the different instructions / opcodes indicating a particular event occurrence.
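Since Jensor uses ASM [4] for BCI, an adapter of this kind can be pictured as a visitor that reacts to every return opcode. The sketch below is an assumption on our part (the adapter name and the injected profiler/Logger.methodExit call are hypothetical), not Jensor's actual Response time adapter; it only illustrates how the ireturn..return opcode range identifies method exit events.

import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class ExitLoggingAdapter extends ClassVisitor {
    public ExitLoggingAdapter(ClassVisitor next) { super(Opcodes.ASM5, next); }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String signature, String[] exceptions) {
        MethodVisitor base = super.visitMethod(access, name, desc, signature, exceptions);
        return new MethodVisitor(Opcodes.ASM5, base) {
            @Override
            public void visitInsn(int opcode) {
                // IRETURN..RETURN covers ireturn, lreturn, freturn, dreturn,
                // areturn and return, i.e. every normal method exit.
                if (opcode >= Opcodes.IRETURN && opcode <= Opcodes.RETURN) {
                    // Hypothetical logger call injected just before the exit.
                    super.visitMethodInsn(Opcodes.INVOKESTATIC, "profiler/Logger",
                            "methodExit", "()V", false);
                }
                super.visitInsn(opcode);
            }
        };
    }
}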
Also the logging process will work independent of the
J2EE application being profiled. This is because any J2EE
application code on compilation will generate class files
containing valid instructions / opcodes. As the logging
process is based on these instructions / opcodes it meant that
logs will get generated irrespective of the application being
dealt with.
iii. Verification of Instrumented classes
In the transformation process, the additional bytecode
was injected but verifying the preservation of the class file
format was mandatory.
One of the checks used was the change in size of the
class files due to the additional bytecode but that could not
guarantee correctness.
As execution was not available under the
instrumentation phase, the only check available was
Disassembly.
For this, the javap command was used. javap disassembles a class file; its output depends on the options used. Here the -c option was used, which prints out the disassembled code, i.e., the instructions that comprise the Java bytecodes, for each of the methods in the class.
Both the original and the instrumented files were
disassembled using the javap command. The format of both
the files was compared to check if they were similar with
the instrumented file having the additional bytecode as
expected.
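A small driver of the kind used for this check might look like the sketch below (assumed file names; javap must be on the PATH; this is not Jensor's tooling). It simply writes both disassemblies to text files so they can be compared side by side.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class DisassemblyDump {
    // Run "javap -c" on a class file and capture its output lines.
    static List<String> disassemble(String classFile) throws Exception {
        Process p = new ProcessBuilder("javap", "-c", classFile)
                .redirectErrorStream(true).start();
        List<String> lines;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            lines = r.lines().collect(Collectors.toList());
        }
        p.waitFor();
        return lines;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical paths: the original class and its instrumented copy.
        Files.write(Paths.get("original.txt"), disassemble("Original.class"));
        Files.write(Paths.get("instrumented.txt"), disassemble("Instrumented.class"));
    }
}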
2) Core


Core is the component which is implanted within the


instrumented code as part of the instrumentation process.
Thus it is assumed that the instrumentation process is
successfully performed.
On execution of the instrumented code, Core is
instantiated and runs simultaneously along with the
instrumented code. Its responsibilities include
- collecting data from instrumented classes
- computing real time statistics
- storing data to persistent store and
- controlling the behavior of the Profiler at runtime.
Collection of data refers to the process of logging the
details pertaining to the pre-determined set of events.
Some runtime analysis is also performed generating
real-time statistics.
The persistent data store represented by flat files is used
to store this data.
The Core is also capable of receiving signals to control the profiler behaviour at runtime. The control signal can come from user interaction (e.g. start / stop logging), file-based parameters (e.g. memory snapshot frequency) or its own adaptive behavior (the profiler keeps track of its own overheads imposed on the methods), which requires the profiler to adjust to events happening in the JVM.
i. Challenges
Validating the Core functionality meant answering two questions:
a) Is the data collection and storage process being performed correctly?
b) Can the control mechanism be guaranteed to be flawless?
ii. Validating Data collection
Under the data collection activity two checks were required.
Correctness of the logging process: Checking the correctness of the logging process is pretty simple. The instrumented code is executed and it is verified that the appropriate log files are getting generated.
Correctness of the logs themselves: Validating the correctness of the logs themselves posed a different challenge. Before moving on to the validation phase, a brief description of the logs follows.
To keep it concise, only the Method Response time logs
are considered. The entire validation process will be
explained in terms of the Method response time logs. A
similar process exists for all the other logs as well.
The Method response time logs have a pattern to them.
So if A, B and C are three methods of a class such that in
the main method A is called, A then calls B and B in turn
calls C.
main()
 |
 --A()
    |
    --B()
       |
       --C()
Fig 3 Sample method call tree

The simplified format of logs will look like


Enter,main,<t0>
Enter,A,<t1>
Enter,B,<t2>
Enter,C,<t3>
Exit,C,<t4>
Exit,B,<t5>
Exit,A,<t6>
Exit,main,<t7>
Fig 4 Jensor log format

where, t0, t1, t2, t3, t4, t5, t6 and t7 represent the
timestamps required for response time calculation.
As can be seen, the main method enters first, followed
by A, B and C respectively. And since C is the last method,
it is the first one to exit. Only after C exits does B exit, after
which A and main follow. Thus the logs generated are based
on the actual call sequence.
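The nesting property of this format is what makes it checkable. A minimal sketch of such a check (assumed, not Jensor's validator; the comma-separated layout follows Fig. 4) pairs every Exit with the Enter on top of a stack and derives the response time.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class LogChecker {
    static void check(List<String> lines) {
        Deque<String[]> stack = new ArrayDeque<>();    // holds {method, timestamp}
        for (String line : lines) {
            String[] f = line.split(",");              // e.g. "Enter,main,1000"
            if (f[0].equals("Enter")) {
                stack.push(new String[] { f[1], f[2] });
            } else {                                   // "Exit,<method>,<timestamp>"
                String[] top = stack.pop();
                if (!top[0].equals(f[1]))
                    throw new IllegalStateException("Bad nesting at " + f[1]);
                long rt = Long.parseLong(f[2]) - Long.parseLong(top[1]);
                System.out.println(top[0] + " response time = " + rt);
            }
        }
        if (!stack.isEmpty())
            System.out.println(stack.size() + " un-exited method(s) in log");
    }
}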
Coming back to the validation of logs, the logs have to
be validated against the actual sequence of method calls
during the execution of an application. It is difficult with a
third party application, since the exact sequence of method
calls will be difficult to trace. A home-grown application
seemed to be the best solution, since in that case the exact
call sequence would be available along with the
approximate response times. Development and maintenance
of an application, however, meant additional time and effort, so a simpler approach was identified. A Programmable Test suite was defined, consisting of programs representative of the different scenarios the profiler can encounter when dealing with real-world applications. For each scenario a program was written, so there were programs for Recursion, Iteration, Multi-threading and Daemon Thread execution, to name a few.
The advantage here was that the method call sequence
was already known resulting in identification of the desired
format of logs. Also the suite itself was extensible as
expanding the scope meant addition of a new program.
With the use of the Programmable Test suite, the process
became very simple. First the instrumentation of the
programs was carried out, followed by the execution of the
same resulting in the generation of logs. These logs were
then compared with the desired format of logs for
ascertaining correctness.
However, in certain cases, even though everything else is correct, there will be an inconsistency in the logs collected. In these special cases the validation requires a few additional steps, as mentioned below.
i. Selective instrumentation


Jensor allows the user to select the classes s/he wants to


instrument out of all the available ones. The data will then
be reported only for these instrumented classes.
In this case, all methods of the instrumented classes are
identified. The desired logs are modified to include entries
related to only these methods. The logs generated are then tested against this modified set of logs.
ii. Filtration mechanism
Jensor also provides a filtering mechanism, in which the user specifies a list of packages / classes / methods for which data needs to be collected. For this, the list of all methods needs to be identified based on the packages / classes / methods mentioned in the filter. The desired logs are modified to include entries related to only these methods. The logs generated are then tested against this modified set of logs.
If both the steps are performed, the two lists are compared and a third list comprising only the common methods is created. The original set of desired logs is modified to include details of only these common methods. Now the same tests will work, but the generated logs will be compared with these modified desired logs.
iii. Validating Control mechanism
The control mechanism can be enforced using the file-based approach or through user interaction. Both approaches were validated as mentioned below.
File-based approach
The file approach is pretty simple in the sense that the
Core will read a properties file at initialization time, and
based on the properties setting, the appropriate controls will
be enabled / disabled.
Each of the properties can take on one of the values out of a predetermined set available for it. This provides tighter control to the user, allowing the appropriate property to be set to:
- Enable / Disable Object Instantiation
- Enable / Disable Filtering, and also adjust the level of Filtering
- Enable / Disable Logging at application startup
- Adjust the refresh interval for real-time statistics transmission
Validating the file-based approach involved a lengthy and tedious process wherein only one property was changed at a time. The property was changed to take on the different values available in its predetermined set and their effects on the profiling process were noted down. This process was repeated with all the values for all the properties.
If the changes resulted in a desired behavior, the
validation was deemed successful.

User Interaction
The control through user interaction essentially involves signaling over TCP / IP communication. Two test cases were defined here:
- To verify the correctness of the TCP / IP signal transmission.
- To verify the behavior of the Core, given that proper signal transmission is guaranteed.
For the first test case, certain programs were written
to communicate with the Core at its specified port. The
core was made to output the communication so that a
thorough check could be made that the signal sent was
received without any loss in transmission.
For the second test case, the behavior of the core was
monitored to identify whether it was acting on the signal
received. For instance, if a START message was sent, did it
start logging and similarly on receiving a STOP message,
did it stop logging.
Also it was checked if the core could identify its current
state. For instance, if a START message is received and
logging is already started, the core should not try to start the
logging again but reply stating that the logging was already
started. Similar checks had to be incorporated for the other
signals.
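A test client of this kind can be as small as the sketch below (the host, port number and the START signal string are assumptions, not Jensor's documented values).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ControlSignalTest {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 9999);               // assumed Core port
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("START");                                     // assumed signal name
            System.out.println("Core replied: " + in.readLine());     // acknowledgement from the Core
            // A second START should be answered with "already started", not a restart.
            out.println("START");
            System.out.println("Core replied: " + in.readLine());
        }
    }
}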
3) Analysis Workbench (AW)
After successful instrumentation is performed, executing
the instrumented code activates the Core resulting in the
generation of logs. AW then establishes a connection to the
Core, sending a control signal to stop logging and enable the
transfer of logs. AW then imports the logs and generates
offline analysis. The only requirement is that the logs should
adhere to a specific format otherwise AW will not be able to
perform the analysis.
The various analysis modules of AW can each be used
to have a different perspective into the logs collected. This
section describes the different modules.

- Summary Report: This component crunches the data logs and generates the Summary Report, which gives method-level statistics.
- Call Trace: Creates the entire call execution sequence with a break-up of response time at each level.
- JVM Replay: Allows replay of method executions as they happened in the JVM. This feature can be useful in debugging reliability and scalability problems.
- Pattern Analysis Engine: Identifies patterns of interest from an application performance perspective. This component contains two kinds of pattern analysis, viz. Elementary and Advanced.
- Charting Engine: Provides graphical visualization for correlating method response time histograms and memory utilization.
- Un-exited Methods: Provides useful pointers for debugging the reliability of the application.
- Tagging: Tagging is the process in which one declares Tags. Next, tags are applied to the technical data obtained from the logs. Jensor then provides break-up and visualization based on Tags. Furthermore, Tags can be nested to bring out the appropriate correlation between different categories of tags. Using tags it is possible to create a business perspective of technical data.
- Response Time Spectrum: Averages typically dilute information about response time. This module depicts the response time of all occurrences of selected methods against wall clock time. This module enables identifying periods of the day where a method's performance degraded and by how much.
- Object Instantiations: Shows the memory graph and details about object allocations grouped by methods. This module allows one to select a time range and get information about the allocated object types and allocating methods along with a count of the number of objects allocated. A high allocation of a particular object type within a method can provide useful pointers to the memory consumption of that method.
i. Challenges faced
a) How to validate the analysis results presented by each of the modules?
b) How to validate the analysis presented by two different analysis modules for the same profiling session?
c) How to handle inconsistent logs?
ii. Validating individual analysis modules


The Programmable Test suite brought simplicity to the
process of validating the analysis modules. As mentioned
earlier the sequence of method calls is known prior to
running of the test. With this information, the results for the
other reports can be validated. A few of the validations
performed are explained here.
Call trace results in the creation of the entire call
execution sequence with breakup of response time at each
level. Since the sequence of method calls is known and
ideally the response times are also known, the call trace can
be easily validated.
Similarly, the other modules like JVM Replay, Unexited
methods, Response time spectrum can also be validated.
Pattern analysis module displays the different patterns
identified during execution. The patterns can be manually
formed from the available sequence of method calls. These
can be used to validate the patterns generated by the Pattern
analysis module.
Summary Report module provides method level
statistics - call counts, total, average, max and min response
times. Based on the call sequence, response time details
available from the validated call trace, and a few manual
calculations, the call counts and response time statistics can
be generated manually. These values can be used to validate
the Summary Report.
Object Instantiations is the module accounting for the object instantiations. In the case of the Programmable Test suite, the source code is available, so the number of objects instantiated can be calculated. This helps validate the results in the Object Instantiations module.

iii. Validating data across modules


For this, data across two different modules was compared. There could be many such checks; a few of them are listed below.
a) Comparison of the Call count in the Summary
report with the calls displayed in the Call trace module.
b) Comparison of the memory utilization shown by
the Charting engine with that in the Object instantiations
module.
c) Comparison of the Call count displayed at the last
instant of time in the Charting engine with the call count for
the same method in the Summary report.
Validation was successful if all such checks were
successful.
iv. Handling inconsistency in logs
The Core collects the logs based on method entries / exits, so if a certain method's execution had begun before the start of logging, the logs will not contain its entry but will have its exit. Similarly, it is possible that a method's entry is logged but it completes execution only after logging is stopped. In that case its entry will be captured but its corresponding exit will not.
In these special cases, the analysis modules have to alter their normal behaviour. Exits without entries are simple to handle: they are simply discarded, because they serve little purpose as, without the entry timestamp, response time calculations cannot proceed.
However, it is not possible to discard methods that have entries but no corresponding exits. This is because there are some special cases wherein method entries are recorded but their exits are not captured. They are as mentioned below:
a) The method is running in the context of a daemon
thread and is designed to never exit.
b) The method is stalled.
c) The method throws a run time exception and exits
by an alternative path causing the entire stack to be
unwound.
Cases b) and c) are pointers to performance and / or
reliability problems. A response time of -1 (minus one) is
specified against the methods which do not exit. A separate
module named Un-exited methods has been designed to
display the details of these methods to help debugging
reliability of an application.
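A minimal sketch of this rule (assumed event layout, mirroring Fig. 4; not Jensor's analysis code) is given below: exits without entries are dropped, while entries left open at the end are reported with a response time of -1.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class UnexitedReport {
    static void report(List<String[]> events) {          // each event: {type, method, timestamp}
        Deque<String[]> open = new ArrayDeque<>();
        for (String[] e : events) {
            if (e[0].equals("Enter")) {
                open.push(e);
            } else if (!open.isEmpty() && open.peek()[1].equals(e[1])) {
                String[] enter = open.pop();
                System.out.println(e[1] + "," + (Long.parseLong(e[2]) - Long.parseLong(enter[2])));
            }
            // else: no matching Enter recorded for this Exit; it is discarded.
        }
        for (String[] enter : open)                        // un-exited methods
            System.out.println(enter[1] + ",-1");
    }
}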
With all the above-mentioned test cases, the functional correctness of the product and its components could be validated. Next, performance testing was performed.

B. Performance Testing
Performance testing involved calibration of the instrumentation overheads as well as of AW. The instrumentation process has a direct impact on the application's performance. AW, on the other hand, performs a memory-intensive activity essential for deriving relevant information from the collected logs. Performance calibration of both aspects is required to understand the requirements and limitations of Jensor.
i. Instrumentation overheads calibration


Jensor's footprint and runtime overheads depend on the number of events that it generates. Jensor's runtime overheads were benchmarked against three types of operations: compute-, IO read- and IO write-intensive. The workload of all operation types was altered to emulate an increase in complexity. Along the X-axis, the workload of a given type of benchmark was increased. For example, to assess the impact of Jensor on a read-intensive application, the test program was made to perform more and more IO reads. Both the time taken for the operation to complete and the overheads imposed by Jensor were measured.
This was repeated for each kind of workload. The following results were observed:
- The overheads incurred are constant for any kind of workload. This shows that Jensor does not interfere with the normal functioning of the application.
- As the complexity increases, for all kinds of workloads the percentage overhead of Jensor is negligibly small.
- The response time of a single read or write IO is an order of magnitude higher than the overhead incurred. What this means is that Jensor has practically zero overhead on any method that does even a single IO operation.
- For methods performing compute operations in less than 10 microseconds, Jensor imposes a significant overhead. However, as computations increase in complexity and start to take response times in milliseconds, the overhead becomes negligibly small.
Figure 3: Calibrations. Profiling overheads vs. response time for compute-intensive, I/O read and I/O write workloads (response time in msec on a log scale, plotted against increasing workload; series: Compute, IO Reads, IO Writes, Profiler Overheads).
The pink line in the graph in Figure 3 displays the Jensor overheads. It clearly shows that the profiling overhead per method is of the order of 10 to 30 microseconds. Only in cases where CPU-intensive compute operations taking a few microseconds are performed can Jensor exert a noticeable overhead. This is where adaptive profiling helps: Jensor can track its own overheads and stop profiling methods on which it imposes a significant overhead. For the read and write IO workloads, the overhead in terms of response time is negligibly small.
The CPU and memory utilization of Jensor depend on the number of classes instrumented and the frequency of their invocations. For 132 classes in a Sun Microsystems Java2D demo application, the CPU overhead is less than 3% and the memory consumption is less than 1 MB.
ii. AW Calibration
AW, on the other hand, processes the profiling data offline. Table 1 depicts the memory requirements of the various analysis modules. The memory requirement is proportional to the number of method invocations during the run with the instrumented code. It is also roughly proportional to the cumulative size of the .plot files generated in the JAVA_PROFILER_HOME\projects\<project name>\bin folder. All memory units mentioned in the table are in MBs.
Table 1: Jensor Overheads (memory, in MBs, required by Summary Report, Debug Tree / Call Trace, Pattern Analysis, JVM Replay, Unexited + Unexited Details, Response-time Spectrum, Charts, Pie-Chart, Nested Pie-Chart, Create Tags, Apply Tags and Perform Tagging, together with the total memory, for different total sizes of .plot files).
The calibrations were performed on a desktop machine running Windows XP with an Intel Xeon 2.8 GHz processor and 512 MB of RAM. The JDK version was Sun JDK 1.4.2_09.
The observations are as follows:
- Call Trace, Pattern Analysis, Unexited Methods and Response Time Spectrum are memory-intensive modules. There is a direct correlation between the .plot file sizes and their memory requirements.
- Summary Report, Charts and the Tagging Engine are light-weight with respect to memory requirements and do not vary much with .plot file sizes.
- 512 MB of memory should support all modules as long as the total size of the .plot files remains under 200 MB.


VI. LEARNINGS
In the course of testing Jensor we learnt the following:
- A complex implementation (the Instrumentation Engine) need not necessarily have complex test cases.
- A simple implementation need not necessarily be simple to test. It is difficult to guarantee 100% accuracy of many of the analysis modules (Pattern Analysis, Call Trace, etc.).
- Huge datasets need not necessarily require a database for manageability or testability purposes. 14 releases of Jensor have taken place without needing a database to manage profiling data.
- It is possible to perform high quality testing without any investment in commercial tools. Jensor development and testing was done in a desktop environment and incurred zero cost towards software licences or hardware investment.
VII. CONCLUSION
The Divide-And-Test strategy adopted in the case of Jensor testing has not only helped in overcoming various testing challenges but has also paid rich dividends in terms of the stability of the product.
Adherence to the Java Class file specification was used as the driver for designing test cases in the BCI phase. This not only helped introduce relevant features but also helped derive custom test cases.
In the Data Capture phase of testing, the onus was on ensuring that correct data was being captured with no adverse impact on the application being profiled. Since correct data and the ability to guarantee data properties were pivotal in getting correct analysis, programmatic testing was done to ensure these goals were met.
Programmatic testing was also done during the Data Analysis testing phase. Most of the performance testing and calibrations were required in this phase. This helped in tuning Jensor and making it suitable for use even on smaller desktops. Crunching several gigabytes of logs in a few megabytes of memory was a feat achieved through iterative testing and development cycles. The agility in this testing helped in making incremental and progressive refinements to the Jensor design and architecture without losing backward compatibility.
Finally, it is worth mentioning that this simple yet powerful testing process has enabled quality testing and the timely release of the various versions of Jensor.

REFERENCES
[1] Sun JVM Specifications, 2nd Edition. http://java.sun.com/docs/books/jvms/second_edition/html/VMSpecTOC.doc.html
[2] Khanapurkar Amol B., Malan Suresh B., "State-Machine based Adaptive Application Profiling: Near-zero Overheads using Progressive Refinements", International Conference on Computer Applications 2010 [ICCA 2010], Pondicherry, 24-27 Dec 2010.
[3] Khanapurkar A., Malan S., "Performance Engineering of a Java Profiler", 3rd National Conference on Information and Software Engineering (NCISE 2011), 18th & 19th Feb 2011.
[4] ASM Java bytecode manipulation framework. http://asm.objectweb.org/

Algorithms in Simulated Parallel Condition: A Performance Analysis

Gagandeep Singh1, M.Tech (C.S.E), ACET, Manawala, Amritsar
Chhailadeep Kaur2, Assist. Prof. (Deptt. of IT), ACET, Manawala, Amritsar

Abstract - A performance analysis of scheduling algorithms in a simulated parallel environment is carried out in Visual Basic.Net, which makes it possible to evaluate and compare the performance of various scheduling algorithms. Efficient scheduling algorithms are essential in order to attain optimal performance from parallel programming systems. A system to evaluate and compare the performance of various scheduling algorithms has been implemented. It considers a connected set of workstations as a pool of processors to analyze the performance of scheduling algorithms. The scheduling algorithms select the best possible subset of workstations for a task so as to minimize its completion time.
This paper is an effort to provide a GUI-based simulated parallel framework for the performance analysis of scheduling algorithms in a simulated parallel environment. Virtualization of the parallel environment is carried out with the help of a simulation program, which simulates all components of the actual processors so as to give the best possible outcome. Such a simulated framework provides a stage for young researchers to model and evaluate their scheduling policies on a virtual parallel environment. The intention behind this parallel simulation environment is the necessity to facilitate research on parallel systems and the performance measurement of scheduling algorithms in developing countries.
This paper presents the first come first serve, smallest job first and round robin scheduling approaches, where the general problem is analyzed and modeled for a simple scenario. The performance of the proposed schemes is evaluated and their benefits are highlighted in terms of response time, turnaround time, waiting time and total turnaround time, based on a simulation study.
Practical experimentation on parallel systems is still a costly and complex approach and there are several reasons for the non-availability of these systems. For these reasons such studies switch towards simulation of the parallel environment for the performance measurement of processor allocation methods.

I. INTRODUCTION
In parallel processing, the parallel portion of the application can be accelerated according to the number of processors allocated to it. In a homogeneous architecture, where all processors are identical, the sequential portion of the application will have to be executed on one of the processors, considerably degrading the execution time of the application.
Two main distinguishing features of parallel versus sequential programming are program partitioning and task scheduling. Both techniques are essential to high-performance computing on both homogeneous and heterogeneous systems. The partitioning problem deals with how to detect parallelism and
homogeneous and heterogeneous systems. The partitioning
problem deals with how to detect parallelism and
determine the best trade-off between parallelism and
overhead, which means finding the best grain size that
maximizes parallelism while reducing overhead. After
program partitioning, tasks must be optimally scheduled on
the processors such that the overall make span of the
parallel application is minimized. In general, the
scheduling problem deals with choosing the order in which
a certain number of tasks may be performed and their
assignment to processors in a parallel/distributed
environment.
A. Parallelization
Parallelization [1], when it implies automation, refers to converting sequential code into multi-threaded or vectorized (or even both) code in order to utilize multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. The goal of parallelization is to relieve programmers from the tedious and error-prone manual parallelization process.
Compiler parallelization analysis
The compiler usually conducts two passes of analysis before actual parallelization in order to determine the following:
- Is it safe to parallelize the loop? Answering this question needs accurate dependence analysis and alias analysis.
- Is it worthwhile to parallelize it? This answer requires a reliable estimation (modeling) of the program workload and the capacity of the parallel system.
The first pass of the compiler performs a data dependence analysis of the loop to determine whether each iteration of the loop can be executed independently of the others. Data dependence can sometimes be dealt with, but it may incur additional overhead in the form of message passing, synchronization of shared memory, or some other method of processor communication.
The second pass attempts to justify the parallelization effort by comparing the theoretical execution time of the code after parallelization to the code's sequential execution time. The extra overhead that can be associated with using multiple processors can eat into the potential speedup of parallelized code.
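As a small illustration of the two questions (a sketch of ours, not taken from the paper), the first loop below has independent iterations and can be parallelized safely, while the second carries a dependence between iterations and cannot be parallelized in the same way.

import java.util.stream.IntStream;

public class LoopAnalysisDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        double[] a = new double[n];

        // Independent iterations: safe to parallelize.
        IntStream.range(0, n).parallel().forEach(i -> a[i] = Math.sqrt(i));

        // Loop-carried dependence: iteration i reads the result of iteration i - 1,
        // so this prefix sum cannot be parallelized this way.
        double[] prefix = new double[n];
        for (int i = 1; i < n; i++) {
            prefix[i] = prefix[i - 1] + a[i];
        }
        System.out.println(prefix[n - 1]);
    }
}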
1.3 Simulated Framework for Performance Analysis of
Scheduling Algorithms in Simulated Parallel Environment
Computer system users, administrators and designers usually have a goal of highest performance at lowest cost. A simulation is the execution of a model, represented by a computer program, that gives information about the system being investigated. The simulation approach of analyzing a model is opposed to the analytical approach, where the method of analyzing the system is purely theoretical. As this approach is more reliable, it also gives more flexibility and convenience. The activities of the model consist of events, which are activated at certain points in time and in this way affect the overall state of the system. The points in time at which an event is activated are randomized, so no input from outside the system is required. In the field of simulation, the concept of the "principle of computational equivalence" has beneficial implications for the decision-maker. Simulated experimentation accelerates and effectively replaces the "wait and see" anxieties in discovering new insights and explanations of the future behavior of the real system.
1.3 The Need for High Performance Computers
Many of today's applications, such as weather prediction, aerodynamics and artificial intelligence, are very computationally intensive and require vast amounts of processing power. To give accurate long-range forecasts (e.g. a week), much more powerful computers are needed. One way of doing this is to use faster electronic components; the limiting factor, however, is the speed of light. So it appears that the only way forward is to use PARALLELISM. The idea here is that if several operations can be performed simultaneously then the total computation time is reduced. The parallel version has the potential of being three times as fast as the sequential machine.
Fig.1 Sequential and parallel operations


1.4 Classification of Parallel Machines


Models of Computation (Flynn 1966)
Any computer, whether sequential or parallel, operates by executing instructions on data:
- a stream of instructions (the algorithm) tells the computer what to do;
- a stream of data (the input) is affected by these instructions.
Depending on whether there is one or several of these
streams [5], we have four classes of computers. There is
also a discussion of an additional 'pseudo-machine' SPMD.
SISD Computers
This is the standard sequential computer.
A single processing unit receives a single stream of
instructions that operate on a single stream of data.

Fig.2 SISD Computers

i.e. algorithms for SISD computers do not contain any parallelism; there is only one processor.

parallelism, there is only one processor
MISD Computers
N processors, each with its own control unit, share a
common memory.
There are N streams of instructions (algorithms /
programs) and one stream of data. Parallelism is achieved
by letting the processors do different things at the same
time on the same datum.

SIMD Computers
All N identical processors operate under the control of a single instruction stream issued by a central control unit. (To ease understanding, assume that each processor holds the same identical program.) There are N data streams, one per processor, so different data can be used in each processor. The processors operate synchronously and a global clock is used to ensure lockstep operation, i.e. at each step (global clock tick) all processors execute the same instruction, each on a different datum.


Fig. 6 Potential of SISD, SIMD, MISD, MIMD


Fig.4 SIMD Computers

This information can be encoded in the instruction itself, indicating whether
- the processor is active (execute the instruction), or
- the processor is inactive (wait for the next instruction).
MIMD Computers (multiprocessors / multicomputers)
This is the most general and most powerful class in our classification. We have N processors, N streams of instructions and N streams of data. Each processor operates under the control of an instruction stream issued by its own control unit, i.e. each processor is capable of executing its own program on different data. This means that the processors typically operate asynchronously, i.e. they can be doing different things on different data at the same time.
Fig.5 MIMD Computers

SPMD - Single Program Multiple Data


The SIMD paradigm is an example of synchronous parallelism: each processor operates in lockstep. A related asynchronous version is SPMD, in which the same program is run on the processors of an MIMD machine. SPMD is not a hardware paradigm; it is the software equivalent of SIMD. Because an entire program is executed on separate data, it is possible that different branches are taken, leading to asynchronous parallelism. The processors no longer do the same thing (or nothing) in lockstep; they are busy executing different instructions within the same program.
Consider
IF X = 0 THEN S1 ELSE S2
and assume X = 0 on P1 and X != 0 on P2. Now P1 executes S1 at the same time as P2 executes S2 (which could not happen on an SIMD machine).
1.7 Parallel Environment (PE)
Parallel Environment is a high-function development and execution environment for parallel applications (distributed-memory, message-passing applications running across multiple nodes). It is designed to help organizations develop, test, debug, tune and run high-performance parallel applications written in C, C++ and FORTRAN on pSeries clusters. Parallel Environment runs on AIX or Linux. Parallel Environment includes the following components:
- The Parallel Operating Environment (POE) for submitting and managing jobs.
- IBM's MPI and LAPI libraries for communication between parallel tasks.
- PE Benchmarker, a suite of applications and utilities to analyze program performance.
- A parallel debugger (pdbx) for debugging parallel programs.
- Parallel utilities to ease file manipulation.
2.2.4 Comparison between different Scheduling Algorithms


Scheduling algorithm       CPU Utilization   Throughput   Turnaround time   Response time   Deadline handling   Starvation free
First In First Out         Low               Low          High              High            No                  Yes
Shortest remaining time    Medium            High         Medium            Medium          No                  No
Round-robin scheduling     High              Medium       Medium            Low             No                  Yes
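For example, the waiting and turnaround times used in such comparisons can be computed directly once an order of service is fixed. The sketch below (the burst times and the single simulated processor are assumed) does this for first come first serve; note that this is an illustration in Java rather than the Visual Basic.Net implementation described in the paper.

public class FcfsDemo {
    public static void main(String[] args) {
        int[] burst = { 8, 4, 9, 5 };        // assumed CPU bursts of four jobs arriving at t = 0
        int n = burst.length;
        int[] waiting = new int[n];
        int[] turnaround = new int[n];

        int clock = 0;
        for (int i = 0; i < n; i++) {        // first come, first served order
            waiting[i] = clock;              // time spent queued before service
            clock += burst[i];
            turnaround[i] = clock;           // completion time measured from arrival
        }

        double avgWait = 0, avgTat = 0;
        for (int i = 0; i < n; i++) {
            System.out.printf("Job %d: waiting = %d, turnaround = %d%n", i, waiting[i], turnaround[i]);
            avgWait += waiting[i];
            avgTat += turnaround[i];
        }
        System.out.printf("Average waiting = %.2f, average turnaround = %.2f%n",
                avgWait / n, avgTat / n);
    }
}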

A. Performance Metrics of Parallel Systems
Speedup: Speedup S_p is defined as the ratio of the serial runtime of the best sequential algorithm for solving a problem to the time taken by the parallel algorithm to solve the same problem on p processors, S_p = T_s / T_p. The p processors used by the parallel algorithm are assumed to be identical to the one used by the sequential algorithm.
Cost: The cost of solving a problem on a parallel system is the product of the parallel runtime and the number of processors used, C = p * T_p.
Efficiency: The ratio of the speedup to the number of processors, E = S_p / p. Efficiency can also be expressed as the ratio of the execution time of the fastest known sequential algorithm for solving a problem to the cost of solving the same problem on p processors. The cost of solving a problem on a single processor is the execution time of the best known sequential algorithm.
Cost Optimal: A parallel system is said to be cost-optimal if the cost of solving a problem on the parallel computer is proportional to the execution time of the fastest known sequential algorithm on a single processor.
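A minimal sketch of these metrics is given below, assuming measured runtimes Ts and Tp are available; the numeric values are hypothetical and only illustrate the definitions above.

// Computes speedup, efficiency and cost from (hypothetical) measured times.
public class ParallelMetrics {
    public static void main(String[] args) {
        double ts = 12.0;   // hypothetical serial runtime (seconds)
        double tp = 2.0;    // hypothetical parallel runtime on p processors
        int p = 8;

        double speedup    = ts / tp;        // S_p = Ts / Tp
        double efficiency = speedup / p;    // E = S_p / p
        double cost       = p * tp;         // C = p * Tp

        System.out.printf("Speedup=%.2f Efficiency=%.2f Cost=%.2f%n",
                          speedup, efficiency, cost);
        // The system is cost-optimal when the cost grows proportionally to Ts.
    }
}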
II. PROBLEM DEFINITION
A parallel algorithm, as opposed to a traditional
sequential (or serial) algorithm, is an algorithm which can
be executed a piece at a time on many different processing
devices, and then put back together again at the end to get
the correct result.
On a given problem, once the parallelism is extracted
and the execution is described as a dynamic task graph, the
problem is to schedule this task graph on the resources of
the parallel architecture. This motivates theoretical studies:
to design algorithms whose related task graphs can be
scheduled on various architectures with proved bounds is a
main research axis; to provide scheduling algorithms suited


to machine models; to develop quantitative models of


parallel executions.
Some algorithms are easy to divide up into pieces like
this. For example, splitting up the job of checking all of the
numbers from one to a hundred thousand to see which are
primes could be done by assigning a subset of the numbers
to each available processor, and then putting the list of
positive results back together.
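The prime-checking example can be sketched as below; Java's parallel streams are used only as a convenient stand-in for the explicit assignment of number subsets to processors described above, and are not part of the original text.

import java.util.stream.IntStream;

// Split the range 1..100000 among the available workers and combine the
// partial counts of primes at the end.
public class ParallelPrimes {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    public static void main(String[] args) {
        long count = IntStream.rangeClosed(1, 100_000)
                              .parallel()                     // divide the numbers among workers
                              .filter(ParallelPrimes::isPrime)
                              .count();                       // put the results back together
        System.out.println("Primes up to 100000: " + count);
    }
}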
Most of the available algorithms to compute pi, on the other hand, cannot be easily split up into parallel portions. They require the results from a preceding step to effectively carry on with the next step. Such problems are called inherently serial problems. Iterative numerical methods, such as Newton's method or the simulation of the three-body problem, are also inherently serial. Some problems are very difficult to parallelize, although they are recursive. One such example is the depth-first search of graphs.
Parallel algorithms are valuable because of substantial
improvements in multiprocessing systems and the rise of
multi-core processors. In general, it is easier to construct a
computer with a single fast processor than one with many
slow processors with the same throughput. But processor
speed is increased primarily by shrinking the circuitry, and
modern processors are pushing physical size and heat
limits. These twin barriers have flipped the equation,
making multiprocessing practical even for small systems.
The cost or complexity of serial algorithms is estimated
in terms of the space (memory) and time (processor cycles)
that they take. Parallel algorithms need to optimize one
more resource, the communication between different
processors. There are two ways parallel processors
communicate, shared memory or message passing.
Shared memory processing needs additional locking for
the data, imposes the overhead of additional processor and
bus cycles, and also serializes some portion of the
algorithm.
Message passing processing uses channels and message
boxes but this communication adds transfer overhead on
the bus, additional memory need for queues and message
boxes and latency in the messages. Designs of parallel
processors use special buses like crossbar so that the
communication overhead will be small but it is the parallel
algorithm that decides the volume of the traffic.
Parallel algorithm design is not easily reduced to simple recipes. Rather, it requires the sort of integrative thought that is commonly referred to as "creativity". However, it can benefit from a methodical approach that maximizes the range of options considered, that provides mechanisms for evaluating alternatives, and that reduces the cost of backtracking from bad choices. The goal of this paper is to suggest a framework within which parallel algorithm design can be explored. In the process, we hope to develop intuition as to what constitutes a good parallel algorithm. Various cost
annotations, such as number of operations, Response time,
Turnaround time, Waiting Time, Total Turnaround time
have been introduced in order to lead to more realistic
complexity analysis on distributed architectures.
4.1 Simulated Framework for Performance Analysis of
Scheduling Algorithms in Simulated Parallel Environment
Computer system users, administrators, and designers usually have a goal of highest performance at lowest cost. Modeling and simulation [12] of system design trade-offs is good preparation for design and engineering decisions in real-world jobs.
System Simulation is the mimicking of the operation of
a real system, such as the day-to-day operation of a bank,
or the value of a stock portfolio over a time period, or the
running of an assembly line in a factory, or the staff
assignment of a hospital or a security company, in a
computer. Instead of building extensive mathematical
models by experts, the readily available simulation
software has made it possible to model and analyze the
operation of a real system by non-experts, who are
managers but not programmers.
The activities of the model consist of events, which are
activated at certain points in time and in this way affect the
overall state of the system. The points in time that an event
is activated are randomized, so no input from outside the
system is required. Events exist autonomously and they are
discrete so between the executions of two events nothing
happens. In the field of simulation, the concept of the "principle of computational equivalence" has beneficial implications for the decision-maker. Simulated experimentation effectively accelerates and replaces the "wait and see" approach, helping to discover new insights and explanations of the future behavior of the real system.
4.5.2 Steps for the working of the simulator:
- Initially, values for the jobs are assigned.
- Then information about the number of processors is fed to the simulator as shown in the snapshot.
- On the basis of the number of processes arrived, the simulator equally divides the load among the different processors.
- When the jobs are divided among the processors they start giving responses, and at run time the scheduling algorithm is chosen according to requirement.
- Then the simulator is shown and the simulation is started, where jobs start running and we get the response time, waiting time and turnaround time. As information about the turnaround time of the various jobs at any instant is available, the total turnaround time of the processor is generated.
III. CONCLUSION & FUTURE DIRECTIONS
This paper demonstrated the advantages of deploying a scheduling algorithm method in a parallel system. It presented a scheduling algorithm method and demonstrated its favorable properties, both by theoretical means and by simulations.
The value of the proved minimal disruption property of
the mapping adaptation has been demonstrated in the
extensive set of simulations.
Such a scheme is particularly useful in systems with
many input ports and packets requiring large amounts of
processing. With the proposed scheme, a kind of statistical
multiplexing of the incoming traffic over the multiple
processors is achieved, thus in effect transforming a
network node into a parallel computer. The improvement in processor utilization decreases the total system cost and power consumption, as well as improving fault tolerance.
Work done in this paper was an effort to design and develop a simulated multiprocessor environment so as to virtualize an actual scheduling system. This paper presents a simulation environment developed with the aim of facilitating the research of multiprocessor systems as well as the performance measurement of scheduling algorithms in developing countries. A simulator program was coded in VB.net to fulfill this purpose. In future, the work done here can be extended by modeling many more scheduling algorithms in the developed environment. Efforts will also be made to validate the data captured by the simulator against an actual experimental setup.
REFERENCES
[1]. Almasi, G.S. and A. Gottlieb (1989). Highly Parallel Computing.
Benjamin-Cummings publishers, Redwood City, CA.
[2]. Hillis, W. Daniel and Steele, Guy L., Data Parallel Algorithms
Communications of the ACM December 1986
[3]. Quinn Michael J, Parallel Programming in C with MPI and
OpenMP McGraw-Hill Inc. 2004. ISBN 0-07-058201-7
[4]. IEEE Journal of Solid-State Circuits:"A Programmable 512 GOPS
Stream Processor for Signal, Image, and Video Processing",
Stanford University and Stream Processors, Inc.
[5]. Barney, Blaise. "Introduction to Parallel Computing". Lawrence
Livermore National Laboratory.
http://www.llnl.gov/computing/tutorials/parallel_comp/. Retrieved
2007-11-09
[6]. Bill Dally, Stanford University: Advanced Computer Organization:
Interconnection Networks
[7]. Quinn Michael J, Parallel Programming in C with MPI and
OpenMP McGraw-Hill Inc. 2004. ISBN 0-07-058201-7.
[8]. Albert Y.H. Zomaya, Parallel and distributed Computing Handbook,
McGraw-Hill Series on Computing Engineering, New York (1996).
[9]. Ernst L. Leiss, Parallel and Vector Computing A practical
Introduction, McGraw-Hill Series on Computer Engineering, New
York (1995).
[10]. Vipin Kumar, Ananth Grama, Anshul Gupta, George Karypis,
Introduction to Parallel Computing, Design and Analysis of
Algorithms, Redwood City, CA, Benjamin/Cummings (1994).
[11]. X. Sun and J. Gustafson, "Toward a Better Parallel Performance Metric," Parallel Computing, Vol. 17, No. 12, Dec. 1991, pp. 1093-1109.
[12]. Bossel H., Modeling & Simulation, A. K. Peters Pub., 1994.
[13]. Ghosh S., and T. Lee, Modeling & Asynchronous Distributed
Simulation: Analyzing Complex Systems, IEEE Publications, 2000.
[14]. Fishman G., Discrete-Event Simulation: Modeling, Programming
and Analysis, Springer-Verlag, Berlin, 2001.
[15]. Woods R., and K. Lawrence, Modeling and Simulation of Dynamic
Systems, Prentice Hall, 1997.


Hybrid Genetic Algorithm for Optimization of Bloom Filter in Spam Filtering


Arulanand Natarajan
Anna University of Technology
Coimbatore, India

Abstract - Bloom Filter (BF) is a simple but powerful data structure that can check membership in a static set. The trade-off of using a Bloom filter is a certain configurable risk of false positives. The odds of a false positive can be made very low if the hash bitmap is sufficiently large. Spam is an irrelevant or inappropriate message sent on the internet to a large number of newsgroups or users. A spam word list is a list of well-known words that often appear in spam mails. The proposed Bin Bloom Filter (BBF) groups the words into a number of bins with different false positive rates, based on the weights of the spam words, for spam filtering. A Genetic Algorithm (GA) is employed to minimize the total membership invalidation cost by finding the optimal false positive rates and the number of elements stored in every bin. GA has a premature convergence problem when its genetic operators are not able to generate offspring superior to the parents, so many similar chromosomes are present in the population. The proposed system hybridizes Simulated Annealing (SA) with GA to maintain population diversity and prevent premature convergence effectively. The experimental results of the total membership invalidation cost are analyzed for various sizes of bins. The results show that the BBF with the hybrid GA-SA model outperforms the GA model and the standard BF.
Keywords- Bin Bloom Filter; Bloom Filter; Spam Word;
Hash function; Genetic Algorithm; Simulated Annealing; False
Positive Rate

I. INTRODUCTION

A spam filter is a program that is used to detect unsolicited and unwanted email and prevent those messages from getting into a user's inbox. A spam filter looks for certain criteria on which it bases its decisions. For example, it can be set to look for particular words in the subject line of messages and to exclude these from the user's inbox. This method is not very effective, because it often omits perfectly legitimate messages and lets actual spam through. The strategies used to block spam are diverse and include many promising techniques. Some of these strategies, like blacklist filters, whitelist/verification filters, rule-based ranking and naive Bayesian filtering, are used to identify spam.
A Bloom Filter (BF) presents a very attractive option for
string matching (Bloom 1970). It is a space efficient
randomized data structure that stores a set of signatures
efficiently by computing multiple hash functions on each
member of the set. It queries a database of strings to verify

for the membership of a particular string. The answer to this


query can be a false positive but never be a false negative.
The computation time required for performing the query is
independent of the number of signatures in the database and
the amount of memory required by a Bloom filter for each
signature is independent of its length (Feng et al 2002).
This paper presents a Bin Bloom Filter (BBF) which
allocates different false positive rates to different strings
depending on the significance of spam words and gives a
solution to make the total membership invalidation cost
minimum. BBF groups strings into different bins via
smoothing by bin means technique. The number of strings to
be grouped and false positive rate of each bin is identified
through Genetic Algorithm (GA) which minimizes the total
membership invalidation cost. This paper examines different
number of bins for given set of strings, their false positive
rates and number of strings in every bin to minimize the total
membership invalidation cost.
The organization of this paper is as follows. Section 2 deals with the standard BF. Section 3 presents the GA technique. Simulated Annealing is discussed in Section 4. Section 5 explains the optimized BBF using GA. Performance evaluation of the BBF against the standard Bloom Filter is discussed in Section 6.
II. BLOOM FILTER
Bloom filters (Bloom 1970) are compact data structures
for probabilistic representation of a set in order to support
membership queries. This compact representation is the
payoff for allowing a small rate of false positives in
membership queries which might incorrectly recognize an
element as member of the set.
Given a string S the Bloom filter computes k hash
functions on it producing k hash values and sets k bits in an
m-bit long vector at the addresses corresponding to the k
hash values. The value of k ranges from 1 to m. The same
procedure is repeated for all the members of the set. This
process is called programming of the filter. The query
process is similar to programming, where a string whose
membership is to be verified is input to the filter. The bits in
the m-bit long vector at the locations corresponding to the k
hash values are looked up. If at least one of these k bits is not set, then the string is declared to be a nonmember of the set. If all the bits are found to be set, then the string is said to belong to the set with a certain
probability. This uncertainty in the membership comes from
the fact that those k bits in the m-bit vector can be set by any
other n-1 members. Thus finding a bit set does not
necessarily imply that it was set by the particular string being


queried. However, finding a bit not set certainly implies that


the string does not belong to the set.
In order to store a given element into the bit array, each hash function must be applied to it and, based on the return value r of each function (r1, r2, ..., rk), the bit with offset r is set to 1. Since there are k hash functions, up to k bits in the bit array are set to 1 (it might be fewer, because several hash functions might return the same value). Fig. 1 is an example where m = 16, k = 4 and e is the element to be stored in the bit array.

Figure 1 Bloom Filter

Bloom filters also have the unusual property that the time
needed to either add items or to check whether an item is in
the set is a fixed constant, O(k), completely independent of
the number of items already in the set.
One important feature of Bloom filters is that there is a
clear tradeoff between the size of the filter and the rate of
false positives. The false positive rate of a Bloom filter with m bits, k hash functions and n inserted elements is (1 - e^(-kn/m))^k.
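A minimal Bloom filter sketch following the description above is given below: k hash values set k bits of an m-bit vector on insertion, and a query reports a possible member only if all k bits are set. The double-hashing scheme used to derive the k values is an implementation choice for the sketch, not taken from the paper.

import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m, k;

    public SimpleBloomFilter(int m, int k) {
        this.m = m; this.k = k; this.bits = new BitSet(m);
    }

    private int index(String s, int i) {
        int h1 = s.hashCode();
        int h2 = 0x7fffffff & (h1 * 0x9E3779B1 + i);   // derived second hash (assumption)
        return Math.floorMod(h1 + i * h2, m);
    }

    public void add(String s) {                 // "programming" the filter
        for (int i = 0; i < k; i++) bits.set(index(s, i));
    }

    public boolean mightContain(String s) {     // query: no false negatives
        for (int i = 0; i < k; i++)
            if (!bits.get(index(s, i))) return false;
        return true;
    }

    // Expected false positive rate (1 - e^{-kn/m})^k for n inserted elements.
    public double falsePositiveRate(int n) {
        return Math.pow(1 - Math.exp(-(double) k * n / m), k);
    }

    public static void main(String[] args) {
        SimpleBloomFilter bf = new SimpleBloomFilter(1024, 7);
        bf.add("viagra"); bf.add("lottery");
        System.out.println(bf.mightContain("viagra"));    // true
        System.out.println(bf.mightContain("meeting"));   // almost certainly false
        System.out.printf("expected fpr for n=2: %.6f%n", bf.falsePositiveRate(2));
    }
}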
When the set of elements changes over time, insertions and deletions in the Bloom filter become important. Inserting an element hashes it k times and sets the corresponding bits to 1. Deletion, however, cannot be performed by hashing the element to be deleted k times and setting the corresponding bits to 0, because a location may also be hashed to by some other element of the set, and the resulting Bloom filter would no longer correctly reflect all elements in the set. To avoid this problem, the Counting Bloom Filter (CBF) makes each entry of the filter a small counter instead of a single bit. When an item is inserted, the corresponding counters are incremented; and when an item is deleted, the corresponding counters are decremented.
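A short sketch of the counting variant just described is shown below; each position holds a small counter so deletions become possible. The hash derivation is again an illustrative assumption rather than the paper's.

public class CountingBloomFilter {
    private final int[] counters;
    private final int m, k;

    public CountingBloomFilter(int m, int k) {
        this.m = m; this.k = k; this.counters = new int[m];
    }

    private int index(String s, int i) {
        return Math.floorMod(s.hashCode() + i * 0x9E3779B1, m);
    }

    public void add(String s) {      // increment the k counters
        for (int i = 0; i < k; i++) counters[index(s, i)]++;
    }

    public void remove(String s) {   // decrement the k counters
        for (int i = 0; i < k; i++) {
            int idx = index(s, i);
            if (counters[idx] > 0) counters[idx]--;
        }
    }

    public boolean mightContain(String s) {
        for (int i = 0; i < k; i++)
            if (counters[index(s, i)] == 0) return false;
        return true;
    }

    public static void main(String[] args) {
        CountingBloomFilter cbf = new CountingBloomFilter(1024, 4);
        cbf.add("spam");
        System.out.println(cbf.mightContain("spam"));  // true
        cbf.remove("spam");
        System.out.println(cbf.mightContain("spam"));  // false again after deletion
    }
}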
Compressed Bloom filter (Mitzenmacher, 2002)
improves the performance when the Bloom filter is passed as
a message, and its transmission size is a limiting factor.
Bloom filter is suggested as a means for sharing Web cache
information. In this setting, proxies do not share the exact
contents of their caches, but instead periodically broadcast
Bloom filters representing their cache. By using compressed
Bloom filters, proxies can reduce the number of bits
broadcast, the false positive rate, and/or the amount of
computation per lookup. The cost is the processing time for
compression and decompression. It can use simple arithmetic


coding, and more memory use at the proxies, which utilize


the larger uncompressed form of the Bloom filter.
The Spectral Bloom filter (Cohen & Matias, 2003) is an
extension of the Bloom Filter to multi-sets, allowing the
filtering of elements whose multiplicities are below a
threshold given at query time. The Spectral Bloom filter replaces the bit vector with a vector of m counters C. The counters in C roughly represent the multiplicities of items; all the counters in C are initially set to 0. When inserting an item s, it increases the counters Ch1(s), Ch2(s), ..., Chk(s) by 1 and thereby stores the frequency of each item. It allows deletion by decreasing the same counters.
Another type of Bloom filter is the Split Bloom Filter (Xiao, Dai and Li, 2004). It increases capacity by allocating a fixed s x m bit matrix instead of the m-bit vector used by the standard Bloom Filter to represent a set. A certain number s of filters, each with m bits, are employed and uniformly selected when inserting an item of the set. The false match probability increases as the set cardinality grows. The basic idea is that, in the element addition operation, before mapping element x into Bloom filter s, it first checks Bloom filters 1 to s-1 to see whether they report that element x is a member of set A. If the response is false, it is certain that there is no false positive in the first s-1 Bloom filters, so it maps the element x into Bloom filter s; otherwise, it just goes ahead to the next element without any operation on element x.
The Dynamic Bloom Filter (DBF) can support concise representation and approximate membership queries of a dynamic set instead of a static set (Guo et al 2006). The basic idea of DBF is to represent a dynamic set with a dynamic s x m bit matrix that consists of s Bloom filters. Here s is initialized to 1, but it is not a constant as in the Split Bloom filter; it can increase during the continuous growth of the set size.
The Scalable Bloom filter (Xie et al 2007) represents
dynamic data sets well and provides a way to effectively
solve the scalability problem of Bloom filters. It solves the
scalability problem of Bloom filters by adding Bloom filter
vectors with double length when necessary.
The data structure of the Hierarchical Counting Bloom Filter (Yuan et al 2008) is composed of several sub-CBFs. The number of these sub-filters is h. Each sub-filter has a different counter length and bit array length: the counter lengths are c0, c1, ..., c(h-1) and the bit array lengths are m0, m1, ..., m(h-1) respectively, with m0 > m1 > ... > m(h-1).
An Enhanced Matrix Bloom Filter (Arulanand et al., 2011) is a Matrix Bloom Filter that dynamically creates another Bloom filter for a row which exceeds the given threshold value. Arulanand and Subramanian (2010) proposed payload inspection using a parallel Bloom filter on a dual-core processor. The parallel Bloom filters are constructed using one thread for each hash function. In this work the parallel Bloom filter outperformed the sequential Bloom filter on the dual-core processor.


III. GENETIC ALGORITHM

The Simple GA, a computational abstraction of biological evolution proposed by Holland (1975), is a probabilistic optimization algorithm based on evolutionary theory. This algorithm is population-oriented: successive populations of feasible solutions are generated in a stochastic manner following laws similar to those of natural selection. GAs are a family of computational models inspired by evolution. These algorithms encode a potential solution to a
specific problem on a simple chromosome-like data structure
and apply recombination and mutation operators to these
structures so as to preserve critical information. An
implementation of a genetic algorithm begins with a
population of (usually random) chromosomes. One then
evaluates these structures and allocates reproductive
opportunities in such a way that those chromosomes which
represent a better solution to the target problem are given
more chances to reproduce than those chromosomes which
are poorer solutions. The goodness of a solution is typically
defined with respect to the current population. Usually there
are only two main components of genetic algorithms that are
problem dependent: the problem encoding and the fitness
function (objective function / evaluation function). A
problem can be viewed as a black box with different
parameters: The only output of the black box is a value
returned by an evaluation function indicating how well a
particular combination of parameter settings solves the
optimization problem. The goal is to set the various
parameters so as to optimize some output. In more traditional
terms that to maximize (or minimize) some function
F(x1,x2,..., xm). Fig. 2 shows the Simple GA.

Figure 2 Simple Genetic Algorithm
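A compact sketch of the simple GA loop in Fig. 2 is given below. It uses the operator settings reported later in the paper (roulette-wheel selection, single-point crossover with Pc = 0.65, mutation with Pm = 0.01, population size 10, 100 generations), but the bit-string encoding and the toy fitness function (count of 1-bits) are illustrative assumptions, not the bin-specific encoding used in the proposed system.

import java.util.Arrays;
import java.util.Random;

public class SimpleGa {
    static final Random RND = new Random();

    static double fitness(boolean[] c) {           // toy objective: maximize 1-bits
        int ones = 0;
        for (boolean b : c) if (b) ones++;
        return ones;
    }

    static boolean[] select(boolean[][] pop, double[] fit, double total) {
        double r = RND.nextDouble() * total, acc = 0;  // roulette-wheel selection
        for (int i = 0; i < pop.length; i++) {
            acc += fit[i];
            if (acc >= r) return pop[i];
        }
        return pop[pop.length - 1];
    }

    public static void main(String[] args) {
        int popSize = 10, len = 20, generations = 100;
        double pc = 0.65, pm = 0.01;
        boolean[][] pop = new boolean[popSize][len];
        for (boolean[] c : pop) for (int j = 0; j < len; j++) c[j] = RND.nextBoolean();

        for (int g = 0; g < generations; g++) {
            double[] fit = new double[popSize];
            double total = 0;
            for (int i = 0; i < popSize; i++) { fit[i] = fitness(pop[i]); total += fit[i]; }

            boolean[][] next = new boolean[popSize][];
            for (int i = 0; i < popSize; i++) {
                boolean[] a = select(pop, fit, total).clone();
                boolean[] b = select(pop, fit, total);
                if (RND.nextDouble() < pc) {               // single-point crossover
                    int cut = 1 + RND.nextInt(len - 1);
                    for (int j = cut; j < len; j++) a[j] = b[j];
                }
                for (int j = 0; j < len; j++)              // mutation
                    if (RND.nextDouble() < pm) a[j] = !a[j];
                next[i] = a;
            }
            pop = next;
        }
        System.out.println("Sample chromosome after evolution: " + Arrays.toString(pop[0]));
    }
}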

A. Premature Convergence Problem


GAs suffer from premature suboptimal convergence, or stagnation, which occurs when some poor individuals attract the population due to a local optimum or bad initialization; this prevents further exploration of the search space (Bonabeau et al 1999).
problem is that a very fit chromosome is generally sure to be
selected for mating and since offspring resemble their
parents, chromosomes become too similar. Hence, the
population will often converge before reaching the global
optimal solution, resulting in premature convergence. Also
in GA the population size is finite, which influences the
sampling ability of a genetic algorithm and as a result
affects its performance.
Incorporating a local search method can introduce new
genes which can help to fight the genetic drift problem
(Asoh and Muhlenbein 1994; Thierens et al 1998) caused by
the accumulation of stochastic errors due to finite
populations. It can also accelerate the search towards the
global optimum (Hart 1994) which in turn can guarantee
that the convergence rate is large enough to obstruct any
genetic drift. In addition a local search method within a
genetic algorithm can improve the exploiting ability of the
search algorithm without limiting its exploring ability (Hart,
1994). If the right balance between global exploration and
local exploitation capabilities can be achieved, the algorithm
can easily produce solutions with high accuracy (Lobo and
Goldberg, 1997).
The proposed work incorporates Simulated Annealing (SA) into the GA to avoid premature convergence.
IV. SIMULATED ANNEALING

In an optimization problem, the solution space often has many local minima. A simple local search algorithm proceeds by choosing a random initial solution and generating a neighbor from that solution. If the move reduces the cost, the neighboring solution is accepted. Such an algorithm has the drawback of often converging to a local minimum. The simulated annealing algorithm avoids getting trapped in a local minimum by accepting cost-increasing neighbors with some probability; that is, it allows some uphill steps so that it can escape from local minima. In SA, first an initial solution is randomly generated, and a neighbor is found and accepted with a probability of min(1, exp(-ΔE/T)), where ΔE is the cost difference and T is the control parameter corresponding to the temperature of the physical analogy, and is called the temperature. On slow reduction of the temperature, the algorithm converges to the global minimum. Among its advantages are the relative ease of implementation and the ability to provide reasonably good solutions for many combinatorial problems. Simulated annealing is inherently sequential and hence very slow for problems with large search spaces. Though a robust technique, its drawbacks include the need for a great deal of computer time for many runs and carefully chosen tunable parameters. Fig. 3 shows the Simulated Annealing algorithm.
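The acceptance rule just described can be sketched as follows. The cost function, the neighbour move and the cooling schedule are placeholders chosen for illustration, not the settings used in the paper.

import java.util.Random;

public class SimulatedAnnealingSketch {
    static final Random RND = new Random();

    static double cost(double x) { return x * x + 10 * Math.sin(3 * x); }  // toy objective
    static double neighbour(double x) { return x + (RND.nextDouble() - 0.5); }

    public static void main(String[] args) {
        double x = 10 * (RND.nextDouble() - 0.5);       // random initial solution
        double best = x;
        for (double t = 10.0; t > 1e-3; t *= 0.995) {   // slow cooling schedule
            double y = neighbour(x);
            double dE = cost(y) - cost(x);
            // accept with probability min(1, exp(-dE/T)): downhill always, uphill sometimes
            if (dE < 0 || RND.nextDouble() < Math.exp(-dE / t)) {
                x = y;
            }
            if (cost(x) < cost(best)) best = x;
        }
        System.out.printf("Best x=%.4f cost=%.4f%n", best, cost(best));
    }
}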


The collection is represented as feature vectors: the text documents are converted to normalized case and tokenized, splitting on non-letters. The stop words are eliminated. The spam weights for the words are calculated from the set; this weight value indicates a word's probable belonging to spam or legitimate mail. The weight values are discretized and assigned to different bins. The tuple describing the Bin Bloom Filter is {{n1, n2, ..., nL}, {w1, w2, ..., wL}, m, {k1, k2, ..., kL}, {f1, f2, ..., fL}}. It is an optimization problem to find the values of n and f that minimize the total membership invalidation cost. For membership testing, the total cost of the set is the sum of the invalidation costs of the subsets. The total membership invalidation cost (Xie et al., 2005) is given as F = n1 f1 w1 + n2 f2 w2 + ... + nL fL wL.
The total membership invalidation cost to be minimized is

F(L) = Σ (i = 1 to L) n_i f_i w_i      (1)

where n_1 + n_2 + ... + n_L = N and r_i = m / n_i (1 <= i <= L).

The objective function f(L), taken as the fitness for this minimization problem, is

f(L) = Cmax - F(L) if F(L) < Cmax, and f(L) = 0 otherwise,      (2)

where Cmax is a large constant.
A. Bin Bloom Filter (BBF)
A BBF is a data structure that takes the weight of each spam word into account. It groups spam words into different bins depending on their weight, and incorporates the information on the spam word weights and the membership likelihood of the spam words into its optimal design. In a BBF, a high-cost bin has a lower false positive probability and a low-cost bin has a higher false positive probability. The false positive rate and the number of strings to be stored in each bin are identified through the optimization technique GA, which minimizes the total membership invalidation cost. Figure 4 shows the Bin Bloom filter with its tuple <n, f, w> configuration.
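A small sketch of evaluating the cost of equation (1) and the fitness of equation (2), as reconstructed above, is shown below for one candidate bin assignment. The sample values of n_i, f_i, w_i are loosely drawn from Table I, and the Cmax value is an illustrative assumption.

public class BinBloomCost {
    static double invalidationCost(int[] n, double[] f, double[] w) {
        double F = 0;
        for (int i = 0; i < n.length; i++) F += n[i] * f[i] * w[i];   // eq. (1)
        return F;
    }

    static double objective(double F, double cMax) {
        return F < cMax ? cMax - F : 0;                                // eq. (2), as reconstructed
    }

    public static void main(String[] args) {
        int[]    n = {121, 206, 454, 512};           // strings per bin (example)
        double[] f = {0.017, 0.092, 0.338, 0.383};   // false positive rate per bin
        double[] w = {4.88, 3.84, 2.93, 0.40};       // average weight per bin
        double F = invalidationCost(n, f, w);
        System.out.printf("F(L)=%.2f fitness=%.2f%n", F, objective(F, 10000.0));
    }
}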

Figure 3 Simulated Annealing

V. BLOOM FILTER OPTIMIZATION USING GA-SA

The false positive rate of bin i is f_i = (1/2)^(r_i ln 2), where r_i = m / n_i, and N denotes the total number of strings in the spam set.

C. Weight Assignment
The first step for assigning weight to spam words is
estimating the word probability that depends on word
frequency. Word frequency is measured by the number of
occurrences of a specific word in the document. Estimating
probabilities is achieved using the Bayes conditional probability theorem, according to which the probability of a word given that the message is spam, and given that it is legitimate, can be estimated as

Ps = fs / Ns      (3)
Pns = fns / Nns      (4)

Figure. 4 Bin Bloom Filter

B. Problem Definition
Consider a standard supervised learning problem with a set of training data D = {<Y1, Z1>, ..., <Yi, Zi>, ..., <Yr, Zr>}, where Yi is an instance represented as a single feature vector and Zi = C(Yi) is the target value of Yi, with C the target function. Here Y1, Y2, ..., Yr is the set of text documents in the collection and C is a class label that classifies a document as spam or legitimate (non-spam).


Ps is the probability of a word given that the mail is spam.
Pns is the probability of a word given that the mail is legitimate.
fs is the frequency of the word in the spam documents.
fns is the frequency of the word in the legitimate documents.
Ns is the total number of spam documents.
Nns is the total number of legitimate documents.


The next step is calculating word weights. Estimating a


weight for each word is based on its frequency and its
probability in spam mail documents and non-spam mail
documents. The weight of every word is estimated using the
formula:
(5)
This weight value is based on text collection containing
spam messages and non-spam messages. The word weights
are estimated for spam list during the training process and
stored in a separate text document.
D. Chromosome Representation
In the context of the Bin Bloom filter, a chromosome represents a number of Bloom filters, each with its number of words to be stored, its false positive rate and its weight. That is, each chromosome Xi is constructed as follows:

Xi = [ (ni1, fi1, wi1), (ni2, fi2, wi2), ..., (nij, fij, wij), ..., (niL, fiL, wiL) ]

Figure 5 Chromosome representation for Bin Bloom Filter


Many crossover techniques exist for individuals. In single-point crossover, a position is randomly selected at which the parents are divided into two parts. The parts of the two parents are then swapped to generate two new offspring.
H. Mutation
Mutation is a genetic operator that alters one or more
gene values in a chromosome from its initial state. This can
result in entirely new gene values being added to the gene
pool. With these new gene values, the GA may be able to
arrive at better solution than was previously possible.
Mutation is usually applied with a low probability to introduce random changes into the population. It replaces gene values lost from the population, helps to prevent the population from stagnating at any local optimum, and avoids premature convergence. Mutation lets the search evaluate more regions of the search space, that is, it makes the entire search space reachable. It occurs during evolution according to a user-definable mutation probability. This probability should usually be set fairly low; if it is set too high, the search will turn into a primitive random search.
where nij, fij and wij refer respectively to the number of words, the false positive rate and the weight of the jth bin of the ith chromosome. A set of 3 genes <n, f, w> encodes a trait, that is, a single bin.
E. Initial Population
One chromosome in the population represents one
possible solution for assigning the triples <n, f, w> for L
Bloom filters. Therefore, a population represents a number
of candidate solutions for the Bloom filters. At the initial
stage, each chromosome randomly chooses different <n, f,
w> for L Bins. The fitness function for each individual can
be calculated based on the equation (2).
F. Selection
In selection, the offspring-producing individuals are chosen. Each individual in the selection pool receives a reproduction probability depending on its own fitness value and the fitness values of all the other individuals in the selection pool. This fitness is used for the actual selection in the step afterwards. The simplest selection scheme is roulette-wheel selection, also called stochastic sampling with replacement. The proposed system employs the roulette-wheel selection method.
G. Crossover
The interesting behavior of genetic algorithms arises from the ability of the solutions to learn from each other. Solutions can combine to form offspring for the next generation. Occasionally they will pass on their worst information, but crossover in combination with a powerful selection technique tends to produce better solutions. Crossover occurs with a user-specified probability, called the crossover probability Pc.


I. Evaluation
After offspring are produced they must be inserted into the population. A reinsertion scheme determines how individuals are inserted into the new population and which individuals of the old population are replaced by offspring. The selection algorithm used determines the reinsertion scheme. Elitism combined with fitness-based reinsertion prevents this loss of information and is the recommended method. At each generation, a given number of the least fit parents are replaced by the same number of the fittest offspring.
J. Hybrid GA with SA
A combination of a genetic algorithm and SA can speed up the search to locate the exact global optimum. In this hybrid, applying SA to the solutions that are guided by the GA to the most promising region can accelerate convergence to the global optimum. The time needed to reach the global optimum can be further reduced if local search methods and local knowledge are used to accelerate locating the most promising search regions.
For any hybrid algorithm, a local search can be applied either to every individual in the population or only to a few individuals. Applying a local search to every individual in a population with expensive function evaluations can waste resources without providing any more useful information. Applying a local search to a large fraction of the population
can limit exploration of the search space by allowing the
genetic algorithm to evolve for a small number of
generations. Deciding upon the optimal fraction of the
population which should perform local search, and the basis
on which these individuals are chosen, has a great impact on
the performance of a hybrid. The proposed system hybridizes SA with GA: SA is invoked when the best chromosome has had the same fitness value for a designated number of iterations.
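The trigger just described can be sketched as below. The stagnation limit, the class name and the method names are illustrative assumptions; the sketch only shows when SA would be handed the best chromosome, not the SA refinement itself.

public class HybridTrigger {
    private final int stagnationLimit;
    private double lastBestFitness = Double.NEGATIVE_INFINITY;
    private int stagnantGenerations = 0;

    public HybridTrigger(int stagnationLimit) { this.stagnationLimit = stagnationLimit; }

    // Call once per generation; returns true when SA should refine the best chromosome.
    public boolean shouldApplySa(double bestFitness) {
        if (bestFitness == lastBestFitness) {
            stagnantGenerations++;
        } else {
            lastBestFitness = bestFitness;
            stagnantGenerations = 0;
        }
        return stagnantGenerations >= stagnationLimit;
    }

    public static void main(String[] args) {
        HybridTrigger trigger = new HybridTrigger(5);
        double[] bestPerGeneration = {10, 12, 12, 12, 12, 12, 12};
        for (double best : bestPerGeneration)
            System.out.println("apply SA: " + trigger.shouldApplySa(best));
    }
}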


VI. EXPERIMENTAL RESULTS

In the proposed system, roulette-wheel selection, the objective function value as fitness, single-point crossover, uniform mutation and a maximum iteration number as the stopping criterion are used in the experimental analysis. The single-point crossover is applied with a probability Pc. For every bit of the string, mutation occurs with probability Pm. The levels of the operator probabilities are drawn from the literature: De Jong (1975) suggested that crossover rates between 0.65 and 1, and mutation rates between 0.001 and 0.01, are useful in GA applications. Population size and maximum generation number also have positive effects on finding the best fitness value; increasing the population size or the number of generations enlarges the search space. At the end of the analysis, the crossover probability Pc is set to 0.65, the mutation probability Pm to 0.01, the population size to 10, and the maximum number of iterations to 100. The total number of strings taken for testing is 3000 and their weights range from 0.0005 to 5. The size of the BF is 1024. These experimental values are tested for bin sizes from 10 to 12.
Since the Bloom Filter allows false positives, the query invalidation cost is unavoidable. For the BBF, the total membership invalidation cost is expressed in (1). For the standard BF, taking the different weights in the different bins into consideration, the total membership invalidation cost is

Fstandard(L) = f (n1 w1 + n2 w2 + ... + nL wL) = f * Σ (i = 1 to L) n_i w_i.

The average false positive rate obtained from BBF at the


last iteration is assigned to the false positive rate of BF. Fig.
6, 7 and 8 show the total membership invalidation cost
attained from BBF and BF using GA. BBF-GA represents
the cost obtained from the best chromosome from the
population on each iteration and BBF-Avg GA stands for
average cost of the population, BF corresponds to the cost
obtained from standard BF with fixed false positive rate . It
shows that after 50th iteration in BBF most of the
chromosomes are similar, consequently the average cost of
the population closer to cost of best chromosome value.
This is due to the children being similar to parents.

Fig. 7 Total membership invalidation cost for bin size 11

Fig. 8 Total membership invalidation cost for bin size 12

Table 1 and 2 show the values obtained from the best


chromosome at 100th iteration of GA and GA-SA. The first
column represents the bin size; the second column, number
of strings in each bin; the third column, average weight of
the strings present in a bin; the fourth column, false positive
rate of each bin. The fifth and sixth column show the total
membership invalidation cost of each bin in BBF and
standard BF respectively.
Figure 6 Total membership invalidation cost for bin size 10

TABLE I. VALUES OBTAINED FROM GA

Bin size 10
  Number of strings:     121, 127, 131, 194, 206, 243, 454, 500, 512, 512
  Average string weight: 4.883, 4.621, 4.413, 4.147, 3.835, 3.491, 2.932, 2.130, 1.307, 0.402
  False positive rate:   0.0171, 0.0208, 0.0234, 0.0792, 0.0918, 0.1320, 0.3384, 0.3738, 0.3825, 0.3825
  Cost of BBF: 6082.75   Cost of standard BF: 6339.97

Bin size 11
  Number of strings:     93, 96, 107, 117, 181, 255, 292, 351, 501, 503, 504
  Average string weight: 4.908, 4.683, 4.518, 4.344, 4.099, 3.769, 3.344, 2.817, 2.104, 1.285, 0.396
  False positive rate:   0.0050, 0.0059, 0.0101, 0.0149, 0.0660, 0.1452, 0.1855, 0.2462, 0.3746, 0.3760, 0.3768
  Cost of BBF: 6061.40   Cost of standard BF: 6336.48

Bin size 12
  Number of strings:     73, 100, 108, 113, 116, 169, 212, 311, 339, 464, 496, 499
  Average string weight: 4.926, 4.718, 4.544, 4.370, 4.171, 3.955, 3.665, 3.262, 2.726, 2.054, 1.270, 0.391
  False positive rate:   0.0012, 0.0073, 0.0105, 0.0129, 0.0144, 0.0544, 0.0982, 0.2056, 0.2343, 0.3463, 0.3709, 0.3731
  Cost of BBF: 6042.18   Cost of standard BF: 6333.24

TABLE II. VALUES OBTAINED FROM GA-SA

Bin size 10
  Number of strings:     96, 137, 147, 152, 215, 253, 479, 496, 511, 514
  Average string weight: 4.906, 4.658, 4.428, 4.174, 3.891, 3.532, 2.949, 2.128, 1.310, 0.404
  False positive rate:   0.0059, 0.0276, 0.0352, 0.0393, 0.1014, 0.1430, 0.3580, 0.3709, 0.3818, 0.3840
  Cost of BBF: 6080.679

Bin size 11
  Number of strings:     95, 104, 108, 118, 179, 250, 294, 345, 497, 504, 506
  Average string weight: 4.907, 4.677, 4.501, 4.325, 4.080, 3.757, 3.335, 2.810, 2.105, 1.289, 0.397
  False positive rate:   0.0056, 0.0088, 0.0105, 0.0155, 0.0640, 0.1397, 0.1876, 0.2403, 0.3716, 0.3768, 0.3782
  Cost of BBF: 6061.072

Bin size 12
  Number of strings:     73, 104, 105, 112, 114, 171, 212, 304, 350, 457, 494, 504
  Average string weight: 4.926, 4.716, 4.539, 4.369, 4.173, 3.957, 3.665, 3.267, 2.728, 2.053, 1.277, 0.396
  False positive rate:   0.0012, 0.0088, 0.0092, 0.0124, 0.0134, 0.0563, 0.0982, 0.1982, 0.2452, 0.3408, 0.3694, 0.3768
  Cost of BBF: 6041.730

Figure 11 Total membership invalidation cost for bin size 12

Fig. 9, 10 and 11 show the total membership invalidation cost obtained from GA and from the hybrid GA with SA. When GA, as a global search method, is combined with SA, the overall search capability is enhanced; the enhancement is in terms of solution quality. When the best fitness value obtained from the population is the same for a designated number of iterations, SA is applied to the best-fit chromosome. The experimental results show that SA accepts worse moves to escape from local minima and creates diversity in the population. Fig. 12 shows that the cost obtained from GA-SA is lower than the cost obtained from GA.

Figure 9 Total membership invalidation cost for bin size 10
Figure 10 Total membership invalidation cost for bin size 11

Figure 12 Performance evaluation chart for GA and GA-SA

VII. CONCLUSIONS
Bloom filters are simple randomized data structures that are useful in practice. The BBF is an extension of the BF and inherits the best features of the BF, such as time and space saving. The BBF treats the strings in a set differently depending on their significance: it groups the strings into bins and allocates a different false positive rate to each bin. Important spam words have a lower false positive rate than less significant words. GA has been used for many types of optimization problems, and the proposed system uses GA to minimize the total membership invalidation cost of the BF. Premature convergence, caused by low diversity of the population, was the main problem for GA; maintaining higher diversity is important to obtain better results. To increase diversity and prevent premature convergence, SA is incorporated into GA when the fitness value of the best-fit chromosome has not changed for a designated number of iterations. The experimental results show that the results obtained from GA-SA have a lower false positive rate than the values obtained from GA.
REFERENCES
[1] B. Bloom, "Space/time trade-offs in hash coding with allowable errors", Communications of the ACM, Vol. 13, 1970, pp. 422-426.
[2] W. Feng, K.G. Shin, D.D. Kandlur & D. Saha, "The BLUE active queue management algorithms", IEEE/ACM Transactions on Networking, Vol. 10, 2002, pp. 513-528.
[3] M. Mitzenmacher, "Compressed Bloom filters", IEEE/ACM Transactions on Networking, Vol. 5, 2002, pp. 604-612.
[4] S. Cohen & Y. Matias, "Spectral Bloom filters", Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, 2003, pp. 241-252.
[5] M. Xiao, Y. Dai & X. Li, "Split Bloom Filter", Chinese Journal of Electronics, Vol. 32, 2004, pp. 241-245.
[6] D. Guo, J. Wu, H. Chen & X. Luo, "Theory and network applications of dynamic Bloom filters", Proceedings of the 25th IEEE International Conference on Computer Communications, 2006, pp. 1-12.
[7] K. Xie, Y. Min, D. Zhang, J. Wen & G. Xie, "A scalable Bloom filter for membership queries", Proceedings of the IEEE Conference on Global Telecommunications, 2007, pp. 543-547.
[8] Z. Yuan, Y. Chen, Y. Jia & S. Yang, "Counting evolving data stream based on hierarchical counting Bloom filter", Proceedings of the International Conference on Computational Intelligence and Security, 2008.
[9] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Publishing Company Inc., 1989.
[10] J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[11] E. Bonabeau, M. Dorigo and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, 1999.
[12] H. Asoh & H. Muhlenbein, "On the mean convergence time of evolutionary algorithms without selection and mutation", Parallel Problem Solving from Nature, PPSN III, Y. Davidor, H.-P. Schwefel, and R. Manner, Eds., Berlin, Germany: Springer-Verlag, 1994, pp. 88-97.
[13] D. Thierens, D. Goldberg & P. Guimaraes, "Domino convergence, drift, and the temporal-salience structure of problems", IEEE International Conference on Evolutionary Computation, Anchorage, 1998, pp. 535-540.
[14] W.E. Hart, Adaptive Global Optimization with Local Search, Doctoral Dissertation, 1994.
[15] F.G. Lobo & D.E. Goldberg, "Decision making in a hybrid genetic algorithm", IEEE International Conference on Evolutionary Computation, 1997, pp. 122-125.
[16] K. Xie, Y. Min, D. Zhang, J. Wen & G. Xie, "Basket Bloom Filters for Membership Queries", Proceedings of IEEE Tencon'05, 2005, pp. 1-6.
[17] K.A. De Jong, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Publishing Company, Inc., 1975.
[18] Arulanand Natarajan & S. Subramanian, "Payload Inspection Using Parallel Bloom Filter in Dual Core Processor", Journal of Computer and Information Science, Vol. 3, 2010, pp. 215-224.
[19] Arulanand Natarajan, S. Subramanian & K. Premalatha, "Enhanced Matrix Bloom Filter for Weak Password Identification", CiiT International Journal of Networking and Communication Engineering, Vol. 4, 2011, pp. 313-322.


Robust Modelling for Steganography Using Genetic Algorithm and Visual


Cryptography for Mitigating RS Analysis
Ms. Akanksha Jain-Research Scholar
Sri Satya Sai Institute of Science & Technology
Sehore, Madhya Pradesh, India

Abstract - The proposed system highlights a novel approach for creating a secure steganographic method using a genetic algorithm and visual cryptography for preventing RS analysis. Although there has been extensive research work in the past, the majority of it has not given much consideration to robust security of the encrypted image. The proposed method encodes the secret message in the least significant bits of the original image, where the pixel values of the encrypted image are modified by the genetic algorithm to retain their statistical characteristics, thereby making the detection of the secret message difficult. The use of the genetic algorithm strengthens the system against RS analysis through optimal selection, mutation, and crossover. The proposed system accepts the generated steganographic image as input for visual cryptography. The implementation is done on the Java platform and shows that the proposed system has better resiliency towards RS attack, considering steganalysis and benchmarking with optimal visual standards.
Keywords- Steganography, Visual Cryptography, Genetic Algorithm, RS Analysis

I. INTRODUCTION

RS analysis is considered one of the most famous steganalysis algorithms; it has the potential to detect a hidden message through statistical analysis of pixel values [1]. The process of RS steganalysis uses regular and singular groups in order to estimate the correlation of pixels [2]. A strong correlation is present between adjacent pixels. Unfortunately, with traditional LSB-replacement steganography [3], the alteration in the proportion of singular and regular groups exposes the presence of the steganography; ultimately, it will not be so hard to decrypt the secret message. Steganography and visual cryptography have each been considered as distinct topics for image security. Although there is extensive research based on combining these two approaches [4] [5] [6], the results are not so satisfactory with respect to RS analysis. Other conventional methods of image security have used digital watermarking extensively, which embeds another image inside an image and then uses it as a secret image [7]. The use of steganography in combination with visual

cryptography is a sturdy model and adds a lot of challenges to identifying such hidden and encrypted data. Fundamentally, one could have a secret image with confidential data which is split up into various encrypted shares. When such encrypted shares are reassembled or decrypted to rebuild the genuine image, it is possible to obtain an exposed image which still contains the confidential data. Such algorithms cannot work without appropriate characteristics in the visual cryptography procedure: if the rebuilding method, or even the encoding method, changes the data existing in the image, then the encrypted information would change accordingly, making it infeasible to extract the encrypted data from the exposed image.
Steganalysis is the process of exposing a confidential message hidden in a cover medium. There are various attacks reported on least-significant-bit substitution of picture elements or bit planes [8][9]. Various histogram-based as well as block-effect-based attacks have also been reported in prior research work [10]. RS steganalysis has been reported as the most concrete and appropriate technique against conventional substitution steganography [11]; it uses regular and singular groups as the elementary parameters to estimate the correlation of the pixels. In order to prevent RS analysis, the impact on the correlation of the pixels has to be compensated. Such compensation might be accomplished by adjusting other bit planes; by doing so, breaking the security becomes almost computationally infeasible. For this reason, various optimization algorithms can be employed in secure data hiding to identify the optimal embedding positions. The main aim of the proposed model is to design a feasible RS-resistant secure algorithm which combines the use of both steganography and visual cryptography with the goals of improving security, reliability, and efficiency for the secret message.
The remainder of the paper is organized as follows. Section II discusses related work, which is followed by the proposed system in Section III. Section IV discusses the algorithm description, Section V discusses the implementation and results, followed by the description of the performance analysis in Section VI. Section VII concludes, summarizing the entire proposed work.
II. RELATED WORK

Namita Tiwari [12] focuses on hiding the message in the least significant bits of the colors of the pixels of a GIF image. Umamaheswari [13] compresses the secret message, encrypts it with the receiver's public key along with the stego key, and embeds both messages in a carrier using an embedding algorithm. Shyamalendu Kandar [14] proposed a technique of well-known k-n secret sharing on color images using a variable-length key, with share division using random numbers. Anupam [15] describes how even-odd encryption based on ASCII values is applied, and how converting the encrypted message using Gray code and embedding it in a picture can secure the message and thus make the cryptanalyst's job difficult.
III. PROPOSED SYSTEM

The proposed work is basically a framework designed in Java Swing with two modules, namely steganography using a genetic algorithm and visual cryptography, for preventing RS analysis. The genetic algorithm has a heuristic approach which aims at an optimal solution. Each individual in the genetic algorithm represents a feasible solution to the defined problem [16]. The recommended solution is encoded into the genes of the individual. The selection process in the GA chooses the fittest individuals, the crossover method combines the chosen individuals into new individuals, and finally the mutation method adds some noise to the genes of an individual. The proposed system fundamentally supports the analytical evaluation of the strategy to be used for protection against RS attacks in steganography. The product is designed with encoding as well as decoding modules. It is also assisted by simulation and analysis sections for analyzing the software application in terms of security.
An input image is accepted as the cover image for the input message in plain text format. The product is also designed with user selection of the encryption method. After embedding the secret message in the LSB (least significant bit) of the cover image, the pixel values of the stego image are modified by the visual cryptography to keep their statistical characteristics. The experimental results should prove the proposed algorithm's effectiveness in resisting steganalysis while preserving visual quality. The user can select the targeted information, in terms of plain text, for embedding the secret message in the LSB of the cover image. The visual cryptography enables the pixel values of the stego image to keep their statistical character. LSB steganography has low computational complexity and high embedding capacity, in which a secret binary sequence is used to replace the least significant bits of the host medium. This is also one of the strong algorithms which keeps the information safe from any intruder.
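A generic sketch of LSB replacement is given below: each bit of the secret message overwrites the least significant bit of one pixel byte of the cover. This illustrates the principle only; it is not the BattleSteg, BlindHide, FilterFirst or HideSeek routines used by the proposed tool, and the class and array names are invented for the example.

public class LsbEmbedder {
    public static void embed(byte[] coverPixels, byte[] message) {
        if (message.length * 8 > coverPixels.length)
            throw new IllegalArgumentException("cover image too small");
        for (int i = 0; i < message.length * 8; i++) {
            int bit = (message[i / 8] >> (7 - (i % 8))) & 1;         // next message bit
            coverPixels[i] = (byte) ((coverPixels[i] & 0xFE) | bit); // overwrite the LSB
        }
    }

    public static byte[] extract(byte[] stegoPixels, int messageLength) {
        byte[] message = new byte[messageLength];
        for (int i = 0; i < messageLength * 8; i++) {
            int bit = stegoPixels[i] & 1;                            // read the LSB back
            message[i / 8] |= bit << (7 - (i % 8));
        }
        return message;
    }

    public static void main(String[] args) {
        byte[] cover = new byte[256];                // stand-in for pixel bytes
        byte[] secret = "hi".getBytes();
        embed(cover, secret);
        System.out.println(new String(extract(cover, secret.length)));  // prints "hi"
    }
}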
IV. ALGORITHM DESCRIPTION

The proposed project work consists of mainly two algorithms: (i) steganography using a genetic algorithm and (ii) visual cryptography with pseudorandom numbers. The application starts with the steganography module, where the cover image is processed to generate the stego image. The steganographic image generated in this module acts as the input for the visual cryptography module.


Algorithm: Steganography using Genetic Algorithm
Input: Cover Image
Output: Stego Image with embedded message
1  Read input image (cover image)
2  If image is color
3  Then
4    Convert to gray scale
5  Resize image to 256 x 256
6  Read the plain text message
7  Authentication using password
8  Switch (encode_alg)
9    Case-1: Implement BattleSteg; break;
11   Case-2: Implement BlindHide; break;
13   Case-3: Implement FilterFirst; break;
15   Case-4: Implement HideSeek; break;
17 Convert image to double precision
18 Embed the message in the cover image based on percentage
19 Generate random message
20 Apply uniformly distributed pseudorandom integers
21 msg = randi([0 round(255*perc/100)], size(I));
   // perc = percentage of the cover image in which the message is embedded
22 I = I + msg;
23 Divide image into 8 x 8 blocks
24 Apply the non-positive flipping F-
25 Generate random 0s and 1s
26 Change LSB as per flipping
27 Apply non-negative flipping F+
28 Generate random -1s and 0s
29 Change LSB as per flipping
30 Calculate_correlation
31 Initialize maximum chromosome
32 Flip second lowest bit randomly for a number of times
33 PSNR = snr(Chrom - Cn)
   // Cn = correlation for non-negative flipping
34 fitness = alpha*(e1+e2) + PSNR
35 If fitness > maxfitness   // maxfitness = 0 is initialized
36   maxfitness = fitness;
37   Chrommax = Cp;
   // Cp = correlation for non-positive flipping
38   crossover = crossover + 1;   // crossover = 0 is initialized
39 end
40 Replace chromosome with new one
Algorithm: Visual Cryptography
Input: Stego-Image
Output: Encrypted Shares
1  Read the colored stego-image generated
2  Read pixel Pij with position i and j as the input
3  Apply pixel reversal
4    Pij = 255 - Pij
5  Use a pseudorandom number generator (0.1 to 0.9) to reduce Pij randomly
6  Take the difference of Pij with the original pixel Pij
7  Use a pseudorandom number generator to reduce the reversed value of Pij randomly
8  Apply pixel reversal, i.e. Pij = 255 - Pij
9  Store in matrix as Encrypted share 1
10 Take the difference of the two random number generators with the original pixel Pij
11 Apply pixel reversal, i.e. Pij = 255 - Pij
12 Store Pij in matrix as Encrypted share 2
13 Store Pij in matrix as Encrypted share 5
14 Fix Thres = 64
15 if ((red < Thres) && (green < Thres) && (blue < Thres)) {
16   result = WHITEPIXEL;
   }
17 else
18 if ((red > 3*Thres) && (green > 3*Thres) && (blue > 3*Thres)) {
19   result = BLACKPIXEL;
   }
20 else
21 if ((red > 2*Thres) && (red > green) && (red > blue)) {
22   result = REDPIXEL;
   }
23 else
24 if ((green > 2*Thres) && (green > red) && (green > blue)) {
     result = GREENPIXEL;
   }
25 else if ((blue > 2*Thres) && (blue > green) && (blue > red)) {
26   result = BLUEPIXEL;
   }
27 else {
28   result = WHITEPIXEL;
29 } Store in Foil;
The proposed scheme is based on standard visual cryptography as well as visual secret sharing. The applied technique uses the allocation of pseudorandom numbers as well as the exchange of pixels. One notable aspect of this implementation is that, while decrypting, the stego-image remains morphologically the same as the cover image with respect to shape and size, thereby preventing the pixel expansion effect [17]. The implementation of the algorithm yields better results, with insignificant shares, when the stego images have light contrast. It can also be seen that the algorithm gives much darker shares in both gray as well as colored output.
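The following Java sketch walks one pixel value through steps 1-12 of the visual cryptography listing (reversal, pseudorandom reduction in the 0.1-0.9 range, differencing, and a second reversal) to produce two share values. The exact ordering of the reductions and differences, the clamping, and the variable names are our reading of the listing and should be treated as assumptions, not the system's code.

import java.util.Random;

// Sketch of the pixel-reversal share generation for one pixel value Pij.
public class PixelReversalShares {

    public static void main(String[] args) {
        Random rnd = new Random();
        int p = 120;                                   // original pixel value Pij

        int rev = 255 - p;                             // pixel reversal
        double r1 = 0.1 + 0.8 * rnd.nextDouble();      // pseudorandom factor in [0.1, 0.9]
        int reduced1 = (int) (rev * r1);               // randomly reduced reversed value
        int diff1 = Math.abs(p - reduced1);            // difference with the original pixel
        double r2 = 0.1 + 0.8 * rnd.nextDouble();
        int reduced2 = (int) (diff1 * r2);             // reduce the reversed value again
        int share1 = 255 - reduced2;                   // reversal -> Encrypted Share 1

        int diff2 = Math.abs(p - (reduced1 + reduced2));          // difference of the two reductions
        int share2 = 255 - Math.max(0, Math.min(255, diff2));     // reversal -> Encrypted Share 2

        System.out.println("share1 = " + share1 + ", share2 = " + share2);
    }
}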
V. IMPLEMENTATION AND RESULTS

Fig 1. Cover Image (L) and Plain Text Message (R)

The encryption process is carried out using the genetic algorithm, deploying the BattleSteg, BlindHide, FilterFirst and HideSeek algorithms. The encryption gives as output a stego image in PNG format of 85.4 KB, as shown in Fig 2, and, to prevent RS analysis, it can also show the image in completely black pixels for blind steganalysis, where the embedded image is also in the same PNG format.

Fig 2. Stego Image (L) and Result of Blind Steganalysis(R)

The proposed method encrypts a secret message which is very minimal per block of the image. The process encrypts only 1 byte in each 8x8 block, while other conventional techniques use 1 bit in each pixel. Therefore this methodology can be used for encrypting a secret message per block of the image, which significantly enhances the performance and retains the best quality of the image during encryption. The important factor regarding the proposed system is that it does not depend on data encryption in the LSB of the pixel values. The technique always attempts to evaluate the optimal confidential image elements, where the value of the cover image picture elements is kept equivalent in the elevated layers of the image, thereby retaining the quality of the image and making it completely resistant against RS attack.
The stego-image generated from the steganographic module is then subjected to our visual cryptography module, which generates 5 secret shares as shown in Fig 3. The proposed visual cryptographic module is extended to work for both grayscale and colored images in the overlay image using the threshold image hiding scheme [18].

The project work is implemented on a 32-bit Windows OS with a Dual Core 1.80 GHz processor and 2 GB RAM, using the Java platform. The original image is in JPEG format with a size of 5.28 KB, whereas the plaintext message has a size of 569 bytes, as shown in Fig 1.


Fig 3. Encrypted Shares (ES): (a) 1st ES, (b) 2nd ES, (c) 3rd ES, (d) 4th ES

The encrypted shares generated in grayscale are as follows:

Fig 4. Grayscale Encrypted Share

Fig 5. Color Encrypted Share

Fig 5 represents the colored stacked shares. The merit of the proposed technique is that a client can choose the location of the secret image in order to assign it a confidential intensity, so the system remains flexible and cooperative with respect to the client's selection. Although this methodology might not be viewed as optimally secure compared to other methods, since certain levels of confidentiality can still be exposed even when the user does not possess all the secret shares, it is almost impossible for anyone who attempts to decrypt the encrypted data within the image to tell whether the secret shares they possess are the full set of encrypted shares or whether certain secret shares are missing.


VI. PERFORMANCE ANALYSIS

The performance of the proposed system is evaluated by performing steganalysis and observing the chances of RS analysis. The performance is defined by three factors: (i) understanding the RS analysis parameters for both overlapping and non-overlapping groups of pixels, (ii) the Laplace graph with the frequency variation corresponding to each Laplace value, and (iii) conducting benchmark tests for analyzing parameters like Average Absolute Difference, Mean Squared Error, Lp Norm, Laplacian Mean Squared Error, Signal to Noise Ratio, Peak Signal to Noise Ratio, Normalized Cross-Correlation, and Correlation Quality.
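Two of the benchmark measures named above, Mean Squared Error and Peak Signal to Noise Ratio, are easy to state in code; the Java sketch below computes them for a cover/stego pair of grayscale arrays. It is an illustration of the metrics only, with placeholder data, not the benchmarking tool that produced Table 3.

// Sketch of MSE and PSNR between a cover image and a stego image (flat grayscale arrays).
public class BenchmarkSketch {

    static double mse(int[] cover, int[] stego) {
        double sum = 0;
        for (int i = 0; i < cover.length; i++) {
            double d = cover[i] - stego[i];
            sum += d * d;
        }
        return sum / cover.length;
    }

    static double psnr(double mse) {
        return 10 * Math.log10(255.0 * 255.0 / mse);   // peak value 255 for 8-bit images
    }

    public static void main(String[] args) {
        int[] cover = {10, 20, 30, 40};
        int[] stego = {10, 21, 30, 41};                // two LSBs changed
        double m = mse(cover, stego);
        System.out.println("MSE = " + m + ", PSNR = " + psnr(m) + " dB");
    }
}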
Table 1. RESULTS OF STEGANALYSIS
RS ANALYSIS
RS Analysis (Non-overlapping groups)
Percentage in red: 1.99381
Approximate length (in bytes) from red: 375.67826
Percentage in green: 4.79634
Approximate length (in bytes) from green: 903.73869
Percentage in blue: 8.84555
Approximate length (in bytes) from blue: 1666.70112
RS Analysis (Overlapping groups)
Percentage in red: 2.88103
Approximate length (in bytes) from red: 542.85095
Percentage in green: 6.53665
Approximate length (in bytes) from green: 1231.65193
Percentage in blue: 10.47579
Approximate length (in bytes) from blue: 1973.87514
Average across all groups / colors: 5.92153
Average approximate length across all groups/colors:
1115.74935
Table 2. LAPLACE GRAPH (CSV formatted)
Frequency                 Laplace Value
0.12052700712494527       0.0
0.01596146956971699       1.0
0.03347530151653863       2.0
0.1490864944473192        3.0
0.03005214345420531       4.0
0.028499781077100664      5.0
0.09738088604067985       6.0
0.02258886279504836       7.0
0.021335031644309995      8.0
0.03518688054770529       9.0
0.014807148827767385      10.0
Table 3. RESULTS OF BENCHMARK TESTS
Average Absolute Difference: 0.003861003861003861
Mean Squared Error: 0.004736695458344943
LpNorm: 0.0023683477291724713


Laplacian Mean Squared Error: 3.766266553829349E-6


Signal to Noise Ratio: 4.750907487815126E7
Peak Signal to Noise Ratio: 1.2322852611764705E8
Normalized Cross-Correlation: 0.9999988260739924
Correlation Quality: 193.19666853498956
The tables discussed above show the connection between image security and image quality. As the visual quality of the resultant steganographic images is superior, there is no requirement for any external adjustment.

VII. CONCLUSION

The proposed system has discussed the implementation of secure information hiding using a steganographic technique based on a genetic algorithm and visual cryptography using pseudorandom numbers. It can be concluded that when image security combining the steganographic and visual cryptographic techniques is applied, it becomes unfeasible for investigators to decrypt the encoded secret message. The security features of the steganographic module are highly optimized using the genetic algorithm. The proposed system is highly resilient against RS attack and can be used for both grayscale and colored output in the visual secret shares, making it well suited for real-time applications. Future work could be directed towards enhancing the visual cryptography algorithm using a neural network, so that the system can generate highly undetectable secret shares using a set of training data which might be automatically generated and disposed of after the task has been performed. Such an approach might render the most secure steganographic and visual cryptographic scheme.

REFERENCES
[1] J. Fridrich, M. Goljan and R. Du, "Reliable Detection of LSB Steganography in Color and Grayscale Images", Proceedings of the ACM Workshop on Multimedia and Security, Ottawa, October 5, 2001, pp. 27-30.
[2] Sathiamoorthy Manoharan, "An Empirical Analysis of RS Steganalysis", Proceedings of the Third International Conference on Internet Monitoring and Protection, IEEE Computer Society, Washington, 2008.
[3] Rita Rana, Dheerendra Singh, "Steganography - Concealing Messages in Images Using LSB Replacement Technique with Pre-Determined Random Pixel and Segmentation of Image", International Journal of Computer Science & Communication, Vol. 1, No. 2, July-December 2010, pp. 113-116.
[4] K. M. Singh, S. Nandi, S. Birendra Singh, L. ShyamSundar Singh, "Stealth Steganography in Visual Cryptography for Half Tone Images", International Conference on Computer and Communication Engineering, 2008.
[5] Jithesh K, A. V. Senthil Kumar, "Multi Layer Information Hiding - A Blend of Steganography and Visual Cryptography", Journal of Theoretical and Applied Information Technology, 2010.
[6] Hsien-Chu Wu, Chwei-Shyong Tsai, Shu-Chuan Huang, "Colored Digital Watermarking Technology Based on Visual Cryptography", Nonlinear Signal and Image Processing, IEEE-Eurasip, 2005.
[7] R. Chandramouli, Nasir Memon, "Analysis of LSB Based Image Steganography Techniques", IEEE, 2001.
[8] Arezoo Yadollahpour, Hossein Miar Naimi, "Attack on LSB Steganography in Color and Grayscale Images Using Autocorrelation Coefficients", European Journal of Scientific Research, ISSN 1450-216X, Vol. 31, No. 2, 2009.
[9] Qingzhong Liu, Andrew H. Sung, Jianyun Xu, Bernardete M. Ribeiro, "Image Complexity and Feature Extraction for Steganalysis of LSB Matching", The 18th International Conference on Pattern Recognition (ICPR'06), IEEE, 2006.
[10] J. Fridrich, M. Goljan, and D. Hogea, "Steganalysis of JPEG Images: Breaking the F5 Algorithm", Proc. of the ACM Workshop on Multimedia and Security 2002, 2002.
[11] J. Fridrich, M. Goljan, and R. Du, "Detecting LSB Steganography in Color and Gray-Scale Images", IEEE MultiMedia, pp. 22-28, 2001.
[12] Namita Tiwari, Madhu Shandilya, "Evaluation of Various LSB based Methods of Image Steganography on GIF File Format", International Journal of Computer Applications (0975-8887), Volume 6, No. 2, September 2010.
[13] M. Umamaheswari, S. Sivasubramanian, S. Pandiarajan, "Analysis of Different Steganographic Algorithms for Secured Data Hiding", IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 8, August 2010.
[14] Shyamalendu Kandar, Arnab Maiti, "Variable Length Key based Visual Cryptography Scheme for Color Image using Random Number", International Journal of Computer Applications (0975-8887), Volume 19, No. 4, April 2011.
[15] Anupam Kumar Bairagi, "ASCII based Even-Odd Cryptography with Gray code and Image Steganography: A Dimension in Data Security", ISSN 2078-5828 (Print), ISSN 2218-5224 (Online), Volume 01, Issue 02, Manuscript Code: 110112.
[16] Aderemi Oluyinki, "Some Improved Genetic Algorithms based on Heuristics for Global Optimization with Innovative Applications", Doctoral thesis, 2010.
[17] Talal Mousa Alkharobi, Aleem Khalid Alvi, "New Algorithm for Halftone Image Visual Cryptography", IEEE, 2004.
[18] Chin-Chen Chang, Iuon-Chang Lin, "A New (t, n) Threshold Image Hiding Scheme for Sharing a Secret Color Image", Communication Technology Proceedings, ICCT 2003.


The Place of Information Technology as a Well of Opportunities for Poverty Alleviation
Awodele Oludele
Computer Science Department
Babcock University
Ilishan-Remo, Nigeria

Kuyoro Shade O.
Computer Science Department
Babcock University
Ilishan-Remo, Nigeria

Solanke Ilesanmi
Computer Technology Department
Yaba College of Technology
Yaba, Lagos, Nigeria

Abstract - The poor are not just deprived of basic resources; they lack access to information which may be vital to their lives
and livelihoods. They also lack information about the market
prices of the goods they produce, about health, about the
structure and services of public institutions, and also about
their rights. They lack political visibility and voice in the
institutions and power relations that shape their lives. They
lack access to knowledge, education and skills development
that could improve their livelihood. Also, information about income-earning opportunities is far from their reach.
Information Technology (IT) is a powerful tool, with an
enormous potential for social impact, human development and
potential to improve the lives of the people served. Information
is critical to the social and economic activities that comprise
the development process. Thus, ICTs, as a means of sharing
information, are a link in the chain of the development process
itself.
Keywords - Information Technology; Digital divides; development
I. INTRODUCTION
The rapid advancements in the field of Information
Technology (IT) and the resultant explosive growth of the
information services sector have radically changed the
worlds economic and social landscape. These changes have
given rise to a new society, based on information and
knowledge. This has further resulted in new avenues of
development, employment, productivity, efficiency, and
enhanced economic growth.
Globally, IT-led growth is creating jobs, raising
productivity, increasing incomes and opening many
opportunities for increased trade and human development.
Extensive application of information technology now
provides opportunities for new ways to create wealth, thus
contributing significantly to poverty alleviation. It is recognized that there is a growing digital divide between the countries that are highly endowed and developed in the field of Information Technology and Nigeria, a divide that also separates rural and urban areas within the country. It is therefore the objective of citizens to initiate steps to reduce this divide by using Information Technology to rapidly develop all sectors of the economy. Information is a resource which must be
generated, collected, organized, leveraged, secured and


preserved for national prosperity.
II. ICTS, DIGITAL DIVIDES AND THE NEED FOR
MULTI-DIMENSIONAL APPROACHES
The term digital divide was first used in the 1990s and
it originally referred to the differences in access to
technology, between those who had access to technology
and those who did not. Then the existence of a gap
separating individuals who were able to access computers,
the Internet and new forms of Information Technology,
from those who had no opportunity to do so, was
recognized[3]. As such, the first research on the matter
focused on the factors determining the differentiated
physical access to ICTs, such as computers and the
availability of a network. When there is a digital divide, part of the population is excluded from accessing information and networks that could be used to expand their capabilities and freedoms; providing access to information for those on the wrong side of the gap is therefore thought to be a good way to alleviate poverty.
In the context of analyzing information as a source of exclusion and inequality, van Dijk in 2006 synthesized from the literature on the divide between people or organizations with differentiated access to information that difficulties in accessing information can be a basis of inequality. Also, information can be a primary good
basis of inequality. Also information can be a primary good
or input, a positional good or a source of skills. Information
is a crucial resource for good decision-making and can
determine the extent to which a person can have access to
different types of services, goods and markets. It is a source
of opportunities and therefore the difficulty in accessing
information or the lack of possibilities to access it is a
source of inequality in different spheres of human
development.
Information is now considered a primary good that is
essential for the survival and self-respect of individuals.
Information is a positional good when some opportunities in
society create better opportunities than others, in gathering,
processing and using valuable information. This occurs in
particular in the context of a network society, in which lack
of a position in a digital network constitutes a form of social
exclusion. In this context, those who have access to
information may be considered information elites, with
more power, capital and resources, amplifying even further


the inequalities already initiated by differences in physical


access to ICTs.
The inequality in terms of skills resulting from differences in access to information follows mainly from the conclusion reached by Nathius and de Groot in 2003, who found empirical evidence of the existence of a skills premium for having ICT skills, which explains increasing income inequality between countries with differences in the appropriation of ICTs [4].
Although ICTs have the potential to reduce the digital
divide within and between countries and regions, ICTs and
their benefits are not yet reaching poor countries at the same
scale as they have reached developed countries, particularly
poor rural areas within various countries[5]. The digital
divide mainly differentiates the rich from the poor.
III. ICTS, DEVELOPMENT AND POVERTY
ALLEVIATION
The concepts and measures of poverty in relation to
ICTs have already been considered, also the nature of digital
divides and their impact on poverty has been reviewed.
Then what is the role of ICTs in development and in poverty
alleviation?
By reducing the costs of information sharing, improving
its timely availability and providing the opportunities to
create networks between people sharing particular interests
or information needs, ICTs have the potential to contribute
to the improvement of socioeconomic conditions in
developing countries.
Despite proven effectiveness in helping to reduce rural
poverty, priority has not been given to the development of
ICTs in rural areas. Demand for ICTs is not perceived to be
as urgent as demand for primary infrastructure and social
services, when actually the poor are hungry for ICT,
knowing well that information provides access to education,
markets and health services.
The impacts of ICTs for rural households include saving
time and other resources, access to better information
leading to better decision making, improvements in
efficiency, productivity and diversity, information on new
technologies and expanded market reach.
The potential of ICTs to facilitate and improve the
already existing exchange of information that take place in
rural communities is very important. This and the use of
ICTs strategically, to serve community development needs,
can facilitate the indigenous development of rural
communities, through pluralistic and participatory
approaches.
To establish the role of ICTs in supporting and building
the capacity of indigenous knowledge systems, the
mechanism for information sharing must initially be
assessed within the local context. This is because ICTs have
the potential to initiate new rural networks of information
exchange although their use, in the first instance will need to
be determined locally, according to local choices.


IV. THE POTENTIALS IN ICT TO ALLEVIATE POVERTY
Successful experiences in the application of ICTs in
marginalized and rural areas have shown how ICTs enable
access to markets, by providing information about prices
which improve the informed position of rural producers, for
decision making, and by facilitating the connection to
complete transactions.
ICTs have also proven successful in the provision of
services such as banking, health and the creation of
knowledge networks between universities around the world
and Africa, to support open, distance and e-learning
institutions. They have also proved useful as a source of
multimedia entertainment, providing information that raises
awareness, which is related to health issues, such as AIDS.
The use of ICTs, such as fixed phone lines, mobile phones,
access to radio, television and mobile banking services have
been shown to have improved the livelihoods of poor people
living in rural areas of developing countries.
The impacts of ICTs on a poor household, that is the
benefits of ICTs for poverty alleviation, can be measured by
gains in welfare, under the assumption that monetary
improvements eventually bring about non-monetary welfare
improvements. An example of how this could be done is by
measuring the alternative variation of the use of an ICT,
such as mobile phones, compared to other means of sending
information, such as giving it personally or sending a
messenger. These types of exercises can provide an idea of how many more resources per capita can be available for other activities when the costs of accessing and sharing information are reduced.
V. STRATEGIES IN POVERTY ALLEVIATION
The Government and the private sector should adopt the
following strategies to realize poverty alleviation:
A. Information Technology Infrastructure
Measures should be put in place to encourage the
provision of infrastructure, for easy access to local, national
and international information resources. The aim will be to
provide sufficient internet connectivity for schools, colleges
and businesses as well as to provide a reliable and secure
internet infrastructure. A nationwide network, consisting of
fibre optic, satellite and terrestrial radio communication
networks should be established.
The Government should encourage sharing of the capacity of public and private utility providers (e.g. power, water, railway, etc.) that have rights of way, in developing the national information infrastructure. The
development, deployment and maintenance of multipurpose
community, public library and post office owned public
access centres should be encouraged. Local software
industry should be publicized by increasing awareness of
stakeholders about the opportunities offered by different
software models which includes proprietary, open-source
and free software, in order to increase competition, access,


diversity of choices and to enable users to determine


solutions.
B. Electronic Commerce
In recognition of the important role which e-commerce
plays in economic development, the use of e-commerce in
trade and investments, as a means of integrating Nigeria into
the global economy, should be promoted. To this end, the
Government should:
Support the development of e-commerce by
enacting appropriate legislation to support e-business;
Support promotional campaigns to raise public awareness of the potential opportunities presented by e-commerce; and
Promote collaboration with the international community in developing an equitable framework for e-commerce.
C. E-Government Services
The overall goal of e-Government is to make the
Government more result oriented, efficient and citizen
centered. The e-Government strategy will focus on
redefining the relationship between Government and
citizens, with the objective of empowering them, through
increased and better access to government services. The
broad objectives of e-Government will be to:
Improve collaboration between Government
agencies and enhance efficiency and effectiveness of
resource utilization;
Improve Nigerias competitiveness by providing
timely information and delivery of Government services;
Provide a forum for citizens participation in
Government activities.
The e-Government initiative will be a shared vision
between the Government, private sector and civil society.
The implementation process will involve all stakeholders.
D. E- Learning
The ICT policy will promote the growth and
implementation of e-learning. To do this, the Government
should adopt the following strategies:
Promote the development of e-learning resources;
Facilitate Public/Private Partnerships, to mobilize
resources, in order to support e-learning initiatives;
Promote the development of integrated e-learning
curriculum to support ICT in education;
Promote distance education and virtual institutions, particularly in higher education and training;
Promote the establishment of a national ICT centre
of excellence;
Provide affordable infrastructure to facilitate
dissemination of knowledge and skill, through e-learning
platforms;
Promote the development of content, to address the
educational needs of primary, secondary and tertiary
institutions;
Create awareness of the opportunities offered by
ICT as an educational tool to the education sector;


Facilitate sharing of e-learning resources between


institutions;
Exploit e-learning opportunities to offer Nigerian
education programs for export; and
Integrate e-learning resources with other existing
resources.
E. IT in Health Services
The use of IT in health delivery systems reinforces
fundamental human rights by improving equity and quality
of life. The Government should promote the use of IT in
health delivery by the following:
Providing IT facilities in all public health facilities;
Providing IT training to medical staff;
Setting standards and norms for IT in the
healthcare system;
Developing legislation governing telemedicine and
health information;
Establishment of national resource centers for IT in
the healthcare system.
F. Local Content
The overall objective will be to develop local content in
ICTs, for greater access and relevance to the citizens.
Strategies on local content should:
Support locally based development of IT
applications and multimedia content for productivity;
Encourage the use of local languages in developing
the ICT content;
Encourage the development of content that
captures and preserves knowledge and culture of local
communities;
Promote electronic publishing, collection and
preservation of local materials; and
Encourage the development and management of
information and knowledge resources as a national
heritage.
G. Fiscal Measures
The Government should introduce measures to stimulate
increased investment and growth in the ICT sector, in order
to create a favorable investment climate for the development
of a globally competitive IT enabled economy. The
Governments overall objective should be to:
Promote favorable fiscal policies to ensure that the
countrys ICT products and services are globally
competitive;
Develop fiscal mechanisms that respond to the fast
changing needs of the information economy;
Promote duty free zones and incubation centers to
attract ICT investment; and
Make budgetary provision to spur the growth of
ICT.
VI. CONCLUSION
ICT has from age to age become a deep well which springs up with opportunities for societies that are ready to


drink from it. A few methods of tapping into the potential that ICT offers have been outlined in this paper, although ICTs are limited only by their users. Gradually bridging the digital divide would see our country moving from a state of underdevelopment to a state of development.
REFERENCES
[1] United Nations Development Programme's Asia-Pacific Development Information Programme (UNDP-APDIP), http://157.150.195.10/depts/dhl/events/infosociety/toc/toc9.pdf
[2] J. Y. Hamel, "ICT4D and the Human Development and Capabilities Approach: The Potentials of Information and Communication Technology", Human Development Research Paper 2010/37, UNDP, 2010.
[3] R. Harris, "Information and Communication Technologies for Poverty Alleviation", United Nations Development Programme's Asia-Pacific Development Information Programme (UNDP-APDIP), 2004. http://157.150.195.10/depts/dhl/events/infosociety/toc/toc9.pdf
[4] J. A. Van Dijk, "Digital Divide Research, Achievements and Shortcomings", Poetics, 34(4-5), pp. 221-235, 2006.
[5] J. Von Braun and M. Torero (eds.), "ICTs: Information and Communication Technologies for the Poor", International Food Policy Research Institute (IFPRI) brief, 2005. Available at http://www.ifpri.org/sites/default/files/publications/ib40.pdf
[6] R. Chapman and T. Slaymaker, "ICTs and Rural Development: Review of the Literature, Current Interventions and Opportunities for Action", Working Paper 1992, 2002.
[7] UNCTAD, "Information Economy Report 2010: ICTs, Enterprises and Poverty Alleviation", Technical report, United Nations Conference on Trade and Development (UNCTAD).


Role of K- Guru in Dynamic Game Analysis

K. Kalaiselvi, Research Scholar

G.V. Uma, Professor

Department of Mathematics
Anna University
Chennai, India

Department of Information Science & Technology


Anna University
Chennai, India

Abstract - In today's era, wikis play a vital role among learners, tutors and subject matter experts (SMEs), who believe that wikis facilitate collaborative learning among people and lead to effective knowledge sharing. Knowledge sharing is an activity through which knowledge (i.e. information, skills, or expertise) is exchanged among people and communities. The sharing of knowledge constitutes a major challenge in the field of knowledge management because some students tend to resist sharing their knowledge. A further risk in knowledge sharing is that individuals are most commonly rewarded for what they know, not for what they share. The role of K-Guru is to promote knowledge sharing and remove its obstacles, paving the way for discovery and innovation. To promote the sharing of knowledge, i.e. the socialization of tacit knowledge, there must be abundant sources for the collection of ideas. For this kind of collective work, tools like wikis, email and chat must be provided for direct as well as indirect communication. Communication and ongoing online elaboration must be dynamically captured and kept as raw material for intelligent verification, so that it can become part of the knowledge base. This sharing process must be optimized under a game analysis approach to obtain quality knowledge growth in the university.
Keywords - ADIPS, Dynamic Game Analysis, K-Guru, Knowledge management, optimization, randomization, university, Web capture

I. INTRODUCTION

The growth of technology has improved the education system through the evolution of new modes of education. Recent trends in technology have helped to empower education using Information and Communication Technology. Education is a continuous process of improving one's knowledge throughout one's lifetime. Knowledge management plays a vital role and is a proven concept in many areas of computer science. The knowledge management approach plays an important role in higher education, and applications using knowledge management help to improve teaching and learning skills.
The main goal of a knowledge management system is to provide the learners with the right information at the right
time. During the past decade, organizations have understood the importance of knowledge and the role knowledge can play in the overall improvement of the organization [1, 2, 3]. Knowledge plays a specific role for an individual or group of users within a specific domain of a system. When knowledge is shared between communities of people, it leads to skill improvement for learners as well as organizations.
II. RELATED WORK

In paper [1], the authors propose two dynamic games to optimize knowledge sharing between two actors, the organization and the employee, in order to achieve a competitive edge over rivals. They describe the kinds of rewards that can be given to employees as materialized and non-materialized rewards. Materialized rewards include bonuses, incentives and promotions in post, while non-materialized rewards include praise for good work, offers to lead a project, trainings, and so on.
They also reveal the limitations of materialized rewards: they provide only initial satisfaction and gradually become ineffective. It is therefore necessary to make a proper mixture of materialized and non-materialized rewards to provide a long-term effect on the promotion of the knowledge sharing process. The game designed in that paper considers various factors, such as the cost of recruiting against the employee's job-hopping conditions. If the promise made against the employee's sharing work is realized, it becomes beneficial in an optimal way; otherwise it has some countermeasures. With these factors a Nash equilibrium is found to optimize the knowledge sharing process.
The drawback of this system is that it is developed with a view to achieving organizational advantage in the modern market. It has no generic view that would let it be used in other settings, such as an institutional organization.



Figure 2.1. ADIPS (Agent based Distributed Information Processing System)

In paper [2], the influential factors of knowledge sharing are classified into three categories. First is the personal factor, such as the personal satisfaction of the human subject of knowledge sharing, learning ability and self-efficacy. Second is the organizational factor, which can be summarized as the culture and structure of the organization, the characteristics of the task, rewards, etc. Third is the technical factor of the information system and information technology which support the knowledge sharing. The work rests on the hypotheses that the pleasure of helping others has a positive impact on knowledge sharing activities, and that self-efficacy related to knowledge has a positive impact on knowledge sharing activities.
It provides modular development of the system through the ADIPS model, which is layered to provide flexibility and high performance through an intelligence layer and a structured resource layer. The work presented in paper [4] intends to design an adaptive knowledge management environment based on multi-agent technology, providing users with timely access to just-in-time and context-dependent knowledge together with an effective approach to managing distributed information systems.
It discusses the impact of IT on knowledge management system development and points out the danger that IT-driven knowledge management strategies may end up objectifying and calcifying knowledge into static, inert information, thus disregarding altogether the role of tacit knowledge. That is why nowadays information exchange is provided through direct communication by applications such as electronic mail, video conferencing, chat rooms etc. [6]. The view of this paper provides the reason to use a wiki kind of application for knowledge sharing, because it provides both


active and passive communication at the same time. It also provides a secure and authentic knowledge source through the necessary constraints.
The last referenced paper [7] provides the theme of a knowledge management system and a way to present knowledge in wiki pages with an opinion-making facility. In that project the application is designed as:
a wiki-like application that lets the user create arbitrary RDF resources and the application logic for presenting and interacting with them;
a personal note management and publishing tool for keeping track of and extracting metadata and user annotations from a variety of local sources such as plain-text files and comments in source code; and
a discussion forum in which users can classify and structure responses with typed annotations instead of email-like quoting.
This paved the way for work on a knowledge sharing platform in the university domain. The platform selected for sharing is K-Guru, which adds security and authenticity, i.e. creation and editing are done only by SMEs. The automation of knowledge creation is provided by using the modern web technology Web 3.0 to capture Internet data as textual reference. The difference from existing works is that in the university domain there is no way to recruit additional tutors to promote knowledge sharing, nor generally any way to compel the tutors. Hence performance measurement and the reward option consider only their work on the system and the users' opinion of that work.
III. KNOWLEDGE MANAGEMENT IN HIGHER EDUCATION

From an academic perspective, the learning community should start at the individual level, create departmental knowledge, create domains of knowledge across departments that share academic interests or disciplines, and create institutional knowledge networks and networks with other institutions and corporations (Galbreath, 2000).
From an educational perspective, knowledge management should provide information about how to link people, processes, and technologies, and discuss how organizations can promote policies and practices that help people share and manage knowledge (Petrides & Nodine, 2003). There are two types of knowledge involved in higher education settings: academic knowledge and organizational knowledge. Academic knowledge is the primary purpose of universities and colleges. Organizational knowledge refers to knowledge of the overall business of an institution: its strengths and weaknesses, the markets it serves, and the


factors critical to organizational success (Coukos-Semmel,


2003). It is believed that knowledge management can be
used to support educational administration which in turn
supports teaching and learning(Petrides & Guiney, 2002).

Figure 3.1 Role of K-Guru in Dynamic Game Analysis

The proposed framework consists of components like


web capturing, data security and knowledge level
determination. It also explains how knowledge can be
shared optimally.
A. WEB-CAPTURING
Dynamic collection is provided through web-capturing techniques. Web access and search are provided through the Google API. Web capturing is done using a Java HTML parser package, which makes it possible, through Java programming, to extract web addresses, harvest resources, and extract text in both linear and non-linear formats.
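A minimal Java sketch of the web-capturing step is shown below: it fetches a page over HTTP and strips the HTML tags to keep only the text. The real module uses an HTML parser package and the Google API as described above; the plain URL fetch, the crude tag-stripping regex and the placeholder address are assumptions made for brevity.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch: download a page and keep only its text content.
public class WebCaptureSketch {

    static String fetchText(String address) throws Exception {
        StringBuilder html = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new URL(address).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        }
        // Crude tag removal: drop everything between '<' and '>' and collapse whitespace.
        return html.toString().replaceAll("<[^>]*>", " ").replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) throws Exception {
        String text = fetchText("http://example.com/");   // placeholder URL
        System.out.println(text.substring(0, Math.min(200, text.length())));
    }
}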
B. KNOWLEDGE REPOSITORY CREATION
In the knowledge base creation, all SMEs (subject matter experts) have to give their knowledge support; their knowledge must be standardized, refined and categorized. The captured content from the web can also be utilized for creating new pages. Following this, knowledge accumulation is done in the knowledge base based on the optimization of classified searching and retrieval.


C. KNOWLEDGE LEVEL DETERMINATION
The second component deals with the authorization of users to the system using a cryptographic algorithm and interaction. Learners must go through a cognitive knowledge level determination using psychometric analysis. This phase consists of three levels, as follows:
Analytical analysis
Subject knowledge analysis
Innovative approach analysis
These test questions are randomized (randomization algorithm) and updated according to the number of user interactions, user group identification and on a time basis. All three sections have different weights, which collectively determine the k-level of the learner.
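As an illustration of the question randomization, the Java sketch below shuffles a small question bank so that repeated visits are unlikely to see the same questionnaire. The question strings, the bank size and the number of questions drawn are placeholders, not the system's actual psychometric item pool.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch: shuffle a question bank and draw a fresh questionnaire.
public class QuestionRandomizer {

    public static void main(String[] args) {
        List<String> bank = new ArrayList<>(List.of(
                "Analytical: Q1", "Analytical: Q2",
                "Subject: Q1", "Subject: Q2",
                "Innovative: Q1", "Innovative: Q2"));

        Collections.shuffle(bank);                 // new order on every test generation
        List<String> paper = bank.subList(0, 3);   // pick the first few questions
        paper.forEach(System.out::println);
    }
}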
D. DATA SECURITY
This module implements the HMAC cryptographic algorithm to provide security for confidential data such as research papers. It uses a hash algorithm for key generation, with which the content is encrypted and decrypted on both ends of the connection (client and server). Adaptability and performance flexibility are the key features of the algorithm.
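The keyed-hashing part of this module can be sketched with the JDK's javax.crypto API, as below. Only the HMAC tag computation is shown; the key, the message and the choice of HmacSHA256 are placeholders, and the encryption and decryption of the content described above are not included.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Sketch: compute an HMAC tag over a document with a shared secret key.
public class HmacSketch {

    public static void main(String[] args) throws Exception {
        byte[] key = "change-this-shared-secret".getBytes(StandardCharsets.UTF_8);   // placeholder key
        byte[] document = "confidential research paper".getBytes(StandardCharsets.UTF_8);

        // The underlying hash can be swapped (e.g. HmacSHA1, HmacSHA512), matching the
        // flexibility the text attributes to the scheme.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = mac.doFinal(document);

        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b));
        System.out.println("HMAC-SHA256 = " + hex);
    }
}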
E. COLLABORATIVE LEARNING INFRASTRUCTURE
For active knowledge sharing, the K-Guru functionality is proposed with email and chat discussion. K-Guru is a kind of Wikipedia which provides the functionality of editing the content at any time by SMEs, and users can also give their suggestions for enhancement of the content. The content versions are stored for verification by the administrator of the system, who can also restore a previous version. Learners can likewise suggest improvements.
F. OPTIMIZATION OF KNOWLEDGE SHARING
The most important work on knowledge sharing and the transfer of tacit (implicit) knowledge between SMEs at the university level is to optimize the knowledge wealth of the university. This optimization is done on the basis of dynamic game analysis. A game has to be designed and implemented by considering all three actors, namely the university, the SMEs and the learners. According to the input and the different levels, the desired and received outcomes are measured along various scales regarding development in the performance of all three actors.
IV. K-SHARING OPTIMIZATION PROCESS

The proposed work is on knowledge sharing and the transfer of tacit (implicit) knowledge between SMEs at the university level, in order to optimize the knowledge wealth of the university. This optimization is done on the basis of dynamic game analysis. The game is designed and implemented by considering all three actors: the university, the SMEs and the learners.

INPUT
1. Number of documents created and edited by each SME in a certain period.
2. Number of votes against each version of a page.

OUTPUT
The contribution result of all SMEs.

ALGORITHM
1. Determine the total number of pages created during a certain period of report generation.
2. Determine the total number of pages edited per period.
3. Get the vote count for each page published by an SME.
4. Determine the performance of an SME by considering the weights of their different actions.

Weights of actions:
Wn  = weight on creation of new pages
We  = weight on editing of pages
Ws  = weight on concerning suggestions
Ww  = weight on web searching and capturing work
Wlp = weight on the last three previous periods
n = number of pages created
e = number of edited pages
t = number of total suggestions
s = number of total suggestions concerned

5. Determine the performance:
P = Wn*n + We*e + Ws*(s - 1.2*(t - s)) + Σ (0.6)^i * Wlp_i, where i ∈ {1, 2, 3}

The coefficient 1.2 is taken to reduce the performance value when suggestions are ignored. The performance of the last periods is considered with the coefficient (0.6)^i so that the most recent period carries the most weight.

To determine the contribution of an SME:
C_j = P_j / Σ_j P_j, where the number of SMEs is nsme and j ∈ {0, ..., nsme}.

Finally, sort the SMEs by contribution performance.
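The performance and contribution formulas above can be exercised with a small Java sketch like the one below. All weights, counts and the use of previous-period performance values in place of Wlp_i are illustrative assumptions, chosen only to show how P and C_j are combined.

// Sketch of the P and C_j formulas for two hypothetical SMEs.
public class SmePerformanceSketch {

    // P = Wn*n + We*e + Ws*(s - 1.2*(t - s)) + sum_{i=1..3} (0.6)^i * Wlp_i
    static double performance(double wn, double we, double ws,
                              int n, int e, int t, int s, double[] lastPeriods) {
        double p = wn * n + we * e + ws * (s - 1.2 * (t - s));
        for (int i = 1; i <= 3; i++) {
            p += Math.pow(0.6, i) * lastPeriods[i - 1];   // previous periods stand in for Wlp_i
        }
        return p;
    }

    public static void main(String[] args) {
        double wn = 3.0, we = 1.0, ws = 0.5;              // assumed weights
        double[][] history = {{10, 8, 6}, {4, 5, 6}};     // last three periods per SME
        double[] p = {
            performance(wn, we, ws, 12, 30, 10, 8, history[0]),
            performance(wn, we, ws, 5, 45, 6, 5, history[1])
        };

        double total = 0;
        for (double v : p) total += v;
        for (int j = 0; j < p.length; j++) {              // C_j = P_j / sum_k P_k
            System.out.printf("SME %d: P=%.2f contribution=%.3f%n", j + 1, p[j], p[j] / total);
        }
    }
}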

V. CONCLUSION

The proposed work is to automate and promote knowledge sharing among various knowledge resources such as SMEs, learners and the global knowledge source, the Internet. It provides a platform through which knowledge resources can be accessed through the web and captured by authentic SMEs, who provide the knowledge to learners according to their own level of understanding. It provides a collaborative platform to share and, through suggestions, discuss how the learning process can be improved. The system provides a test procedure to determine the knowledge level of a learner, with the assurance of not facing the same test questionnaire in repeated visits.
To promote knowledge sharing among SMEs, a dynamic game model is provided which considers the contribution of SMEs in terms of the pages created and edited by them and how effective these were according to the learners' opinion. Based upon the game analysis, it is concluded what kind of rewards can be granted to the SMEs to encourage their good work to continue. The system also provides storage-level security to keep research work details secure from intruders using the HMAC cryptographic algorithm. The HMAC algorithm provides the flexibility to change the underlying hash function according to the need for performance or a higher level of security.
The proposed work is developed for the educational domain and at the university level. The system is developed to work on the server of a particular department, but with some extension it will be capable of performing the same work at the university level, and all departments can work on this system for sharing their own kind of knowledge. In future it can be provided as a web service through the Internet and can connect a number of universities and institutions to promote knowledge sharing on a large scale, but that requires some higher level of security extensions. This system requires a very large database to store web-captured content, so a disk cleaning procedure is also needed to manage the database and use disk space efficiently. The K-Guru provided here needs some more feature development to be as efficient as needed.
REFERENCES
[1] Jungwoo Lee, Jeongeun Kim, Yoengjong Han, "A study on factors influencing knowledge-sharing activity for the innovation activity of team", 2010.
[2] S. H. An, "Study on Knowledge Sharing within Organizations", PhD Thesis, Tianjin University, Tianjin, China, 2005.
[3] C. M. Wang and L. Huang, "A Game Analysis of Employees' Special On-Job-Training", Journal of Jilin University (Information Science Edition), 2007.
[4] S. Bandyopadhyay and P. Pathak, "Knowledge sharing and cooperation in outsourcing projects - A game theoretic analysis", Decision Support Systems, 2007.
[5] R. M. Grant, "Toward a knowledge-based theory of the firm", Strategic Management Journal, 1996, 17 (Special Issue): 109-122.
[6] Mingzheng Wang, Jinchao Zhang and Welin Li, "Dynamic game analysis of knowledge sharing for training inside enterprises under the mode of master-apprentice", 2009.
[7] Tai-song Yin, "Dynamic game analysis in workers' tacit knowledge sharing process in enterprise", 2005.
[8] Sameera Abar, Tetsuo Kinoshita, "A next generation knowledge management system architecture", Proceedings of the 18th International Conference on Advanced Information Networking and Applications, 2008.
[9] E. K. Kelloway, "Knowledge work as organizational behavior", International Journal of Management Reviews, 2000.
[10] Brenda Chawner and Paul H. Lewis, "WikiWikiWebs: New ways to communicate in a web environment", Information Technology and Libraries, 2006, 3, pp. 55-64.


A Visual Interface for Analyzing Three Graph-Based Image Segmentation Algorithms

Prof. Yeshwant Deodhe
Yeshwant.deodhe@gmail.com
Electronics Engg. Department
Y.C.C.E Nagpur
Abstract - Segmentation of images into regions for measurement or recognition is probably the single most important problem area in image analysis, and numerical evaluations are needed to quantify the consistency between different segmentations. Error measures can be used for this quantification because they allow a principled comparison between segmentation results on different images, with differing numbers of regions, generated by different algorithms with different parameters. This paper presents a graphical interface for evaluating three graph-based image segmentation algorithms: the color set back-projection algorithm, an efficient graph-based image segmentation algorithm also known as the local variation algorithm, and a new and original segmentation algorithm using a hexagonal structure defined on the set of image pixels.
Keywords - image segmentation, segmentation error measures, hexagonal structure on the set of pixels, color set back-projection algorithm.

I. INTRODUCTION
Image segmentation is one of the most difficult and challenging tasks in image processing and can be defined as the process of dividing an image into different regions such that each region is homogeneous while the union of any two adjacent regions is not. The consistency between segmentations must be evaluated because no unique segmentation of an image can exist. If two different segmentations arise from different perceptual organizations of the scene, then it is fair to declare the segmentations inconsistent [3]. This paper presents a graphical interface used for an objective and quantitative evaluation of three graph-based segmentation algorithms that will be described further. For each of these algorithms three characteristics are examined [4]: correctness, stability with respect to parameter choice, and stability with respect to image choice. The evaluation of the algorithms is based on two metrics (GCE, LCE) defined in [3], which can be used to measure the
consistency of a pair of segmentations. These measures allow a comparison between segmentation results on different images, with differing numbers of regions, generated by different algorithms with different parameters. In order to establish which algorithm produces better results, the segmentations are compared with manual segmentations of the same image. To search for certain structures, graph-based segmentation methods such as minimum spanning tree [6][7] or minimum cut [8][9] use an edge-weighted graph constructed on the image pixels. Graph-based clustering algorithms use the concept of homogeneity of regions. For color segmentation algorithms the homogeneity of regions is color-based, and thus the edge weights are based on color distance. To obtain the image regions, early graph-based methods used fixed thresholds and local measures; using these values, larger edges belonging to a minimum spanning tree were broken. The problem of a fixed threshold [10] can be avoided by determining the normalized weight of an edge using the smallest weight incident on the vertices touching that edge. Other methods presented in [6][7] use an adaptive criterion that depends on local properties rather than global ones. The methods based on minimum cuts in a graph are designed to minimize the similarity between pixels that are being split [8][9].
II. THE COLOR SET BACK-PROJECTION ALGORITHM
The color set back-projection algorithm proposed in [5] is a technique for the automated extraction of regions and the representation of their color content. This algorithm is based on color sets, which provide an alternative to color histograms for representing color information. Their utilization is possible only when salient regions have few equally prominent colors. Each pixel of the initial image is represented in the HSV color space. The quantized colors from 0 to 165 are stored in a matrix that is filtered by a 5x5 median filter to eliminate isolated points. The back-projection process requires several stages: a) Color set selection - candidate color sets are selected first with one color, then with two colors, etc., until the salient regions are extracted; b) Back-projection onto the image - a transformation from the RGB color space to the HSV color space and a quantization of the HSV color space to 166 colors is performed for each segmented image; c) Thresholding and labeling of the image - insignificant color information is reduced and the significant color regions are highlighted, followed by the automatic generation of the regions of a single color, of two colors, of three colors. An implementation of the color set back-projection algorithm can be found in [1][13]. After processing, the global histogram of the image and the color set are provided. The process of region extraction uses the filtered matrix and is a depth-first traversal described in pseudo-code in the following way:
way: Procedure
Find regions (Image i, colorset C)
1) InitStack (S)
2) Visited =
3) For *each node P in the I do
4) If *color of P is in C then
5) PUSH (P)
6) Visited - Visited U {P}
7) While not Empty(S) do
8) CrtPoint +-POP (S)
9) Visited - Visited U {CrtPoint}
10) For *each unvisited neighbor S of CarPoint
do
11) if *color of S is in C then
12) Visited +- Visited U {S}
13) PUSH(S)
14) End
15) End
16) End
17) *Output detected region
18) End
19) End
The total running time for a call of the procedure
Find Regions (Image /, colorset C) is O (m2 * n2) where
m is
The width and n is the height of the image [1][13].
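A compact Java sketch of the same stack-based, depth-first region extraction is given below for a tiny quantized image and a one-color color set; the 4-connected neighborhood and the hard-coded data are simplifying assumptions rather than the interface's implementation.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;

// Sketch: grow regions of pixels whose quantized color lies in the color set.
public class FindRegionsSketch {

    static void findRegion(int[][] img, boolean[][] visited, int sr, int sc, Set<Integer> colorSet) {
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{sr, sc});
        visited[sr][sc] = true;
        int size = 0;
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            size++;
            int[][] nbrs = {{p[0]-1,p[1]}, {p[0]+1,p[1]}, {p[0],p[1]-1}, {p[0],p[1]+1}};
            for (int[] q : nbrs) {
                if (q[0] >= 0 && q[0] < img.length && q[1] >= 0 && q[1] < img[0].length
                        && !visited[q[0]][q[1]] && colorSet.contains(img[q[0]][q[1]])) {
                    visited[q[0]][q[1]] = true;   // mark before pushing to avoid duplicates
                    stack.push(q);
                }
            }
        }
        System.out.println("region of size " + size + " found at (" + sr + "," + sc + ")");
    }

    public static void main(String[] args) {
        int[][] img = {{5, 5, 1}, {5, 2, 1}, {3, 3, 1}};   // quantized color indices
        Set<Integer> colorSet = Set.of(5);                 // candidate color set
        boolean[][] visited = new boolean[img.length][img[0].length];
        for (int r = 0; r < img.length; r++)
            for (int c = 0; c < img[0].length; c++)
                if (!visited[r][c] && colorSet.contains(img[r][c]))
                    findRegion(img, visited, r, c, colorSet);
    }
}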
III. LOCAL VARIATION ALGORITHM
This algorithm, described in [6], uses a graph-based approach for the image segmentation process. The pixels are considered the graph nodes, so it is possible to define an undirected graph G = (V, E) where the vertices vi in V represent the set of elements to be segmented. Each edge (vi, vj) belonging to E has an associated weight w(vi, vj), calculated based on color, which is a measure of the dissimilarity between the neighboring elements vi and vj. A minimum spanning tree is obtained using Kruskal's algorithm. The connected components that are obtained represent the image's regions. It is supposed that the graph has m edges and n


vertices. This algorithm is also described in [14], where its major steps are presented below:
1. The input is a graph G = (V, E), where V is the set of n vertices and E the set of m edges. Each edge has a corresponding weight, which is a measure of the dissimilarity between adjacent pixels.
2. Perform the segmentation such that each component C in S corresponds to a connected component in a graph G' = (V, E'), where E' is a subset of E.
3. If the weight of the edge connecting two vertices in adjacent components is small compared to the internal difference of both components, then merge the two components, otherwise do nothing.
4. Repeat Step 3 for q = 1, 2, ..., m.
5. Return Sm, the components after the final iteration.
The existence of a boundary between two components in a segmentation is decided by a predicate D. This predicate measures the dissimilarity between elements along the boundary of the two components relative to a measure of the dissimilarity among neighboring elements within each of the two components.
The internal difference of a component C, a subset of V, is defined as the largest weight in the minimum spanning tree of the component, MST(C, E):
Int(C) = max over e in MST(C, E) of w(e)
The difference between two components C1, C2 is defined as the minimum weight edge connecting the two components:
Dif(C1, C2) = min w(vi, vj), over vi in C1, vj in C2, (vi, vj) in E
A threshold function is used to control the degree to which the difference between components must be larger than the minimum internal difference. The pairwise comparison predicate is defined as:
D(C1, C2) = true if Dif(C1, C2) > MInt(C1, C2), false otherwise
where the minimum internal difference MInt is defined as:
MInt(C1, C2) = min(Int(C1) + T(C1), Int(C2) + T(C2))
The threshold function is defined based on the size of the component: T(C) = k / |C|. The value of k is set taking into account the size of the image. The algorithm for creating the minimum spanning tree can be implemented to run in O(m log m), where m is the number of edges in the graph.
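The merge decision described above can be written as a small predicate, sketched in Java below with made-up component statistics: two components are merged when the connecting edge weight does not exceed MInt(C1, C2) = min(Int(C1) + k/|C1|, Int(C2) + k/|C2|).

// Sketch: the pairwise comparison used to decide whether two components merge.
public class MergePredicateSketch {

    static boolean shouldMerge(double edgeWeight, double int1, int size1,
                               double int2, int size2, double k) {
        double mInt = Math.min(int1 + k / size1, int2 + k / size2);
        return edgeWeight <= mInt;    // D(C1, C2) is false, so the components merge
    }

    public static void main(String[] args) {
        double k = 300;               // threshold constant, chosen from the image size
        // Component 1: max MST edge weight 12, 40 pixels; component 2: 20, 25 pixels.
        boolean merge = shouldMerge(15.0, 12.0, 40, 20.0, 25, k);
        System.out.println("merge components? " + merge);
    }
}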
Disjoint Set Data Structure
A disjoint-set is a collection S = {S1, S2, ..., Sk} of distinct dynamic sets which is used to keep track of the segments into which a set of elements is partitioned. Each set is identified by a member of the set, called its representative. In the implementation of this algorithm we make use of disjoint sets to partition the image into regions. A disjoint set supports the following operations: MAKE-SET(x): create a new set containing only x, assuming x is not already in some other set. UNION(x, y): combine the two sets containing x and y into one new set; a new representative is selected. FIND-SET(x): return the representative of the set containing x.


Algorithm:
function MakeSet(x)
  x.parent := x
  x.rank := 0

function Union(x, y)
  xRoot := Find(x)
  yRoot := Find(y)
  if xRoot.rank > yRoot.rank
    yRoot.parent := xRoot
  else if xRoot.rank < yRoot.rank
    xRoot.parent := yRoot
  else if xRoot != yRoot
    yRoot.parent := xRoot
    xRoot.rank := xRoot.rank + 1

function Find(x)
  if x.parent == x
    return x
  else
    x.parent := Find(x.parent)
    return x.parent
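To connect the disjoint-set operations above with the local variation algorithm of Section III, the Python sketch below (an illustrative reconstruction, not the code of [6] or [14]) processes the edges in increasing order of weight and merges two components when the edge weight does not exceed their minimum internal difference MInt; Int(C) is kept incrementally as the largest edge weight merged into each component, and τ(C) = k / |C|.

def segment_local_variation(n_vertices, edges, k):
    # edges: iterable of (weight, vi, vj); k: constant of the threshold function tau(C) = k / |C|.
    parent = list(range(n_vertices))
    rank = [0] * n_vertices
    size = [1] * n_vertices          # |C| for each representative
    internal = [0.0] * n_vertices    # Int(C): largest MST edge weight merged so far

    def find(x):                     # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for w, vi, vj in sorted(edges):
        a, b = find(vi), find(vj)
        if a == b:
            continue
        m_int = min(internal[a] + k / size[a], internal[b] + k / size[b])
        if w <= m_int:               # predicate D is false, so the two components are merged
            if rank[a] < rank[b]:
                a, b = b, a          # union by rank: attach the shallower tree
            parent[b] = a
            if rank[a] == rank[b]:
                rank[a] += 1
            size[a] += size[b]
            internal[a] = max(internal[a], internal[b], w)
    return [find(v) for v in range(n_vertices)]   # component representative per vertex

Sorting the edges dominates the running time, which is consistent with the O(m log m) bound quoted above.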
IV. IMAGE SEGMENTATION USING A HEXAGONAL STRUCTURE DEFINED ON THE SET OF PIXELS
The technique is based on a new and original utilization of the pixels from the image, which are integrated into a network-type graph [2]. The hexagonal network structure on the image pixels, as presented in figure 1, was selected to improve the running time required by the algorithms used for segmentation and contour detection. The hexagonal structure represents a grid-graph, and for each hexagon h in this structure there exist 6 hexagons that are neighbors in a 6-connected sense. The time complexity of the algorithms is reduced by using hexagons instead of pixels as the elementary piece of information. The index of each hexagon is stored in a vector of numbers [1..N], where N, the number of hexagons, is calculated using the formula [2]:
N = ((height − 1)/2) × ((width − width mod 4)/2 − 1)
where height and width represent the height and the width of the image.


Fig. 1. Hexagonal structure constructed on the image pixels
With each hexagon two important attributes are associated: a) the dominant color, for which eight pixels contained in the hexagon are used: six pixels from the frontier and two from the interior; b) the gravity center.
The pixels of the image are split into two sets: the set of pixels representing the vertices of the hexagons and the set of complementary pixels. These two lists are used as inputs for the segmentation algorithm. Based on the algorithms proposed in [2], the list of salient regions is obtained from the input image using the hexagonal network and the distance between two colors in the HSV color space. The color of a hexagon is obtained using a procedure called sameVertexColour. The execution time of this procedure is constant because all of its calls take constant processing time. The color information is used by the procedure expandColourArea to find the list of hexagons that have the same color. To expand the current region, a procedure is used with the following inputs:
a) the current hexagon hi,
b) the lists L1 and L2,
c) the list of hexagons V,
d) the current region index indexCrtRegion,
e) the current color index indexCrtColor.
The output is represented by a list of hexagons with the same color, crtRegionItem. The running time of the procedure expandColourArea is O(n), where n is the number of hexagons from a region with the same color. The list of regions is obtained using the listRegions procedure; the input of this procedure contains:
a) the vector V representing the list of hexagons,
b) the lists L1 and L2.
The output is represented by a list of color pixels and a list of regions for each color.
Procedure listRegions(V, L1, L2)
1) colourNb ← 0;
2) For i ← 1 to n do
3)   Initialize crtRegionItem;
4)   If not visited(hi) then


5)     crtColorHexagon ← sameVertexColour(L1, L2, hi);
6)     If crtColorHexagon.sameColor then
7)       k ← findColor(crtColorHexagon.color);
8)       If k < 0 then
9)         Add new color C_colourNb to list C;
10)        k ← colourNb++;
11)        indexCrtRegion ← 0;
12)      Else
13)        indexCrtColor ← k;
14)        indexCrtRegion ← findLastIndexRegion(indexCrtColor);
15)        indexCrtRegion++;
16)      End
17)      hi.indexRegion ← indexCrtRegion;
18)      hi.indexColor ← k;
19)      Add hi to crtRegionItem;
20)      expandColourArea(hi, L1, L2, V, indexCrtRegion, indexCrtColor, crtRegionItem);
21)      Add new region crtRegionItem to element k of list C;
22)    End
23)  End
24) End
The running time of the procedure listRegions is O(n²), where n is the number of hexagons in the network [2].
V. SEGMENTATION ERROR MEASURES
To evaluate a segmentation algorithm it is necessary to measure its accuracy, precision and performance. When multiple segmentation algorithms are evaluated, metrics are needed to establish which algorithm produces better results. In [3] two metrics tolerant to refinement are proposed that can be used to evaluate the consistency of a pair of segmentations. A segmentation error measure takes two segmentations S1 and S2 as input and produces a real-valued output in the range [0 ... 1], where zero signifies no error. For a given pixel pi, the two segments of S1 and S2 containing that pixel are considered. If one segment is a proper subset of the other, then the pixel lies in an area of refinement and the local error should be zero. If there is no subset relationship, then the two regions overlap in an inconsistent manner; in this case the local error should be non-zero. Let \ denote set difference and |x| the cardinality of set x. If R(S, pi) is the set of pixels corresponding to the region in segmentation S that contains pixel pi, the local refinement error is defined in [3] as:
E(S1, S2, pi) = |R(S1, pi) \ R(S2, pi)| / |R(S1, pi)|
This local error measure is not symmetric and it encodes a measure of refinement in one direction only. Given this local refinement error in each direction at each pixel, there are two natural ways to combine the values


into an error measure for the entire image. The Global Consistency Error (GCE) forces all local refinements to be in the same direction, while the Local Consistency Error (LCE) allows refinement in different directions in different parts of the image. Let n be the number of pixels [3]:
GCE(S1, S2) = (1/n) min{ Σi E(S1, S2, pi), Σi E(S2, S1, pi) }
LCE(S1, S2) = (1/n) Σi min{ E(S1, S2, pi), E(S2, S1, pi) }
LCE ≤ GCE for any two segmentations, so GCE is a tougher measure than LCE. In [3] it is shown that, as expected, when pairs of human segmentations of the same image are compared, both the GCE and the LCE are low; conversely, when random pairs of human segmentations are compared, the resulting GCE and LCE are high. If the pixel-wise minimum is replaced by a maximum, a new measure named the Bidirectional Consistency Error (BCE) is obtained, which does not tolerate refinement. This measure is evaluated as:
BCE(S1, S2) = (1/n) Σi max{ E(S1, S2, pi), E(S2, S1, pi) }
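A direct way to compute these measures is to treat each segmentation as an integer label map and evaluate the local refinement error per pixel; the Python sketch below follows the definitions given above (variable names are ours, not taken from [3]).

import numpy as np

def consistency_errors(s1, s2):
    # s1, s2: integer label maps of equal shape. Returns (GCE, LCE, BCE).
    s1, s2 = s1.ravel(), s2.ravel()
    n = s1.size
    e12 = np.empty(n)
    e21 = np.empty(n)
    for i in range(n):
        r1 = (s1 == s1[i])            # R(S1, pi)
        r2 = (s2 == s2[i])            # R(S2, pi)
        e12[i] = np.count_nonzero(r1 & ~r2) / np.count_nonzero(r1)   # E(S1, S2, pi)
        e21[i] = np.count_nonzero(r2 & ~r1) / np.count_nonzero(r2)   # E(S2, S1, pi)
    gce = min(e12.sum(), e21.sum()) / n          # one refinement direction for the whole image
    lce = np.minimum(e12, e21).sum() / n         # refinement direction chosen per pixel
    bce = np.maximum(e12, e21).sum() / n         # no tolerance to refinement
    return gce, lce, bce

This per-pixel formulation is quadratic in the number of pixels; in practice the same sums can be accumulated more efficiently from the contingency table of the two label maps.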
If an image is interpreted as a set O of pixels and the segmentation as a clustering of O, measures for comparing clusterings can be applied. As described in [17], two types of distances are available for clusterings: the distance between clusterings evaluated by counting pairs, and the distance between clusterings evaluated using set matching. For the first case, two clusterings C1 and C2 of a set O of objects are assumed, together with all pairs of distinct objects (oi, oj) from O × O. Each pair falls into one of four categories:
a) in the same cluster under both C1 and C2 (the total number of these pairs is denoted N11),
b) in different clusters under both C1 and C2 (N00),
c) in the same cluster under C1 but not C2 (N10),
d) in the same cluster under C2 but not C1 (N01).
Based on these assumptions the following statement is obtained:
N11 + N00 + N10 + N01 = n(n − 1)/2.
Based on these numbers, multiple distances of the first type were defined. One of the distances, described in [18], is the Rand index, evaluated as:
R(C1, C2) = 1 − (N11 + N00) / (n(n − 1)/2)
A value of 0 implies a perfect matching between the two clusterings. Another index, the Jaccard index [19], is evaluated as:
J(C1, C2) = 1 − N11 / (N11 + N10 + N01)
For the second type, the comparison criterion is based on the following term:
a(C1, C2) = Σ Ci∈C1 max Cj∈C2 |Ci ∩ Cj|


This term measures the matching degree between C1 and C2 and has a value of n when the two clusterings are equal. The van Dongen index [20] is evaluated as:
D(C1, C2) = 2n − a(C1, C2) − a(C2, C1)
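The pair counts N11, N00, N10 and N01 and the three distances above can all be obtained from the contingency table of the two clusterings; the following Python sketch is an illustrative computation (the function and variable names are ours, and the labels are assumed to be non-negative integers).

import numpy as np

def clustering_distances(c1, c2):
    # c1, c2: 1-D NumPy arrays of non-negative integer labels over the same n objects.
    n = c1.size
    m = np.zeros((c1.max() + 1, c2.max() + 1), dtype=np.int64)
    np.add.at(m, (c1, c2), 1)                   # contingency table of |Ci ∩ Cj|
    total_pairs = n * (n - 1) // 2
    same1 = (np.bincount(c1) * (np.bincount(c1) - 1) // 2).sum()   # pairs joined in C1
    same2 = (np.bincount(c2) * (np.bincount(c2) - 1) // 2).sum()   # pairs joined in C2
    n11 = (m * (m - 1) // 2).sum()              # joined in both clusterings
    n10 = same1 - n11                           # joined in C1 only
    n01 = same2 - n11                           # joined in C2 only
    n00 = total_pairs - n11 - n10 - n01         # separated in both
    rand_dist = 1 - (n11 + n00) / total_pairs                    # R(C1, C2)
    jaccard_dist = 1 - n11 / (n11 + n10 + n01)                   # J(C1, C2)
    dongen = 2 * n - m.max(axis=1).sum() - m.max(axis=0).sum()   # 2n - a(C1,C2) - a(C2,C1)
    return rand_dist, jaccard_dist, dongen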
VI. EXPERIMENTAL RESULTS
For each image, three major steps are required in order to calculate the GCE, LCE, BCE, van Dongen index and Rand index values: a) obtain the image regions by applying the segmentation algorithms (color set back-projection - CS, the algorithm based on the hexagonal structure - HS, the local variation algorithm - LV) and the manually segmented regions (MS); b) store the information obtained for each region in the database; c) use a graphical interface to easily calculate the GCE and LCE values and to store them in the database for later statistics. All algorithms use a configurable threshold value to keep only the regions that have a number of pixels greater than this value. The graphical interface is presented in figure 1. This application offers the user a list containing the names of all images that were segmented and that have regions stored in the database. After selecting the name of an image from the list, the user needs to press the Get Info button to retrieve the information stored in the database for that image. At that moment the text boxes containing the number of regions obtained with each algorithm are filled. For each algorithm a button named View Regions is available. By pressing this button a new form is shown containing the obtained regions, as presented in figure 2. This form helps the user to view the regions detected by each algorithm and to make an empirical evaluation of the performance. Because this evaluation is subjective, a numerical evaluation is needed. Since the information about the regions is available, the user can press the Calculate button. At this moment the error measures are calculated and shown to the user. If the user decides that these values are needed later, he can choose to store them in the database by pressing the Store results button. To compare the obtained results with previous results stored in the database, the View stored results button should be pressed. In this way it is possible to evaluate which algorithm produces better results.


VII. CONCLUSION
In this paper a graphical interface is proposed for evaluating three graph-based image segmentation algorithms: the color set back-projection algorithm, the image segmentation using a hexagonal structure defined on the set of image pixels, and the local variation algorithm. Error measures like GCE and LCE are used to evaluate the accuracy of each segmentation produced by the algorithms. The values for the error measures are calculated using the described application, specially created for this purpose. The proposed error measures quantify the consistency between segmentations of differing granularities. Because human segmentations are considered ground-truth segmentations, the error measures are calculated in relation to the manual segmentation. The GCE and LCE values demonstrate that the image segmentation based on a hexagonal structure produces a better segmentation than the other methods. In future work the comparative study will be carried out on medical images.


REFERENCES
[1] D. D. Burdescu, L. Stanescu, "A New Algorithm for Content-Based Region Query in Multimedia Databases", DEXA 2005: Database and Expert Systems Applications, Copenhagen, 22-26 August 2005, vol. 3588, pp. 124-133.
[2] D. D. Burdescu, M. Brezovan, E. Ganea, and L. Stanescu, "A New Method for Segmentation of Images Represented in a HSV Color Space", ACIVS 2009, Bordeaux, France, pp. 606-617.
[3] D. Martin, C. Fowlkes, D. Tal, J. Malik, "A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics", Proceedings of the Eighth International Conference on Computer Vision (ICCV-01), July 7-14, 2001, Vancouver, British Columbia, Canada, vol. 2, pp. 416-425.
[4] R. Unnikrishnan, C. Pantofaru, and M. Hebert, "Toward Objective Evaluation of Image Segmentation Algorithms", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, June 2007, pp. 929-944.
[5] J. R. Smith, S. F. Chang, "Tools and Techniques for Color Image Retrieval", Symposium on Electronic Imaging: Science and Technology - Storage & Retrieval for Image and Video Databases IV, volume 2670, San Jose, CA, February 1996, IS&T/SPIE (1996).
[6] P. F. Felzenszwalb, D. P. Huttenlocher, "Efficient Graph-Based Image Segmentation", Intl. Journal of Computer Vision, 59(2), pp. 167-181 (2004).
[7] L. Guigues, H. Le Men, J.-P. Cocquerez, "The hierarchy of the cocoons of a graph and its application to image segmentation", Pattern Recognition Letters, 24(8), pp. 1059-1066 (2003).
[8] J. Shi, J. Malik, "Normalized cuts and image segmentation", Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, pp. 731-737 (1997).
[9] Z. Wu, R. Leahy, "An optimal graph theoretic approach to data clustering: theory and its application to image segmentation", IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(11), pp. 1101-1113 (1993).
[10] R. Urquhart, "Graph theoretical clustering based on limited neighborhood sets", Pattern Recognition, 15(3), pp. 173-187 (1982).
[11] I. Jermyn, H. Ishikawa, "Globally optimal regions and boundaries as minimum ratio weight cycles", IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(8), pp. 1075-1088 (2001).
[12] C. F. Bennstrom, J. R. Casas, "Binary-partition-tree creation using a quasi-inclusion criterion", Proc. of the Eighth International Conference on Information Visualization, London, UK, pp. 259-294 (2004).
[13] L. Stanescu, "Visual Information. Processing, Retrieval and Applications", SciTech, Craiova, 2008.
[14] C. Pantofaru, M. Hebert, "A Comparison of Image Segmentation Algorithms", Technical Report CMU-RI-TR-05-40, Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania (2005).
[15] ImageClef (2009) http://imageclef.org/2010
[16] X. Jiang, C. Marti, C. Irniger, H. Bunke, "Distance Measures for Image Segmentation Evaluation", EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 1-10 (2005).
[17] W. M. Rand, "Objective criteria for the evaluation of clustering methods", Journal of the American Statistical Association, vol. 66, no. 336, 1971, pp. 846-850.
[18] A. Ben-Hur, A. Elisseeff, I. Guyon, "A stability based method for discovering structure in clustered data", Proceedings of the 7th Pacific Symposium on Biocomputing (PSB '02), vol. 7, 2002, pp. 6-17, Lihue, Hawaii, USA.
[19] S. van Dongen, "Performance criteria for graph clustering and Markov cluster experiments", Tech. Rep. INS-R0012, Centrum voor Wiskunde en Informatica (CWI), Amsterdam, 2000.


E-learning model for quality enhancement of education in Universities


Dr. Naseem Akthar
Emeritus Professor
University of Madras,
Chennai, India

K. Kumar
Asst. Prof., MCA Department,
Veltech Dr. RR and Dr. SR Technical University,
Chennai, India
Abstract - Driven by an increasingly competitive global economy and access to record amounts of student loan funding, adults have flocked to college for academic degrees that they did not attain as traditional undergraduates, citing various reasons, including: the college degree did not seem necessary for success when they were entering job fields at 18-22 years of age; higher education was not accessible (or practical) to them as traditional undergraduates due to funding, distance, or limited degree offerings; or the student began, but did not complete, a college degree program as a traditional undergraduate.
Not all adults who did not achieve college degrees as traditional students seek degrees in adulthood. Many are not drawn to college campuses (actual or virtual) for the reasons mentioned earlier. Still others may be interested in attaining a degree but face numerous barriers that cannot be overcome by personal motivation. In today's scenario, changes are happening at a faster pace; therefore the requirements placed by companies on the working adult group are increasing. These adults are not able to keep pace with the requirements because of their age and tight schedules. The solution may be that, amidst their tight schedules, they need to attend training sessions to get themselves equipped. However, companies cannot afford to send them for long training sessions. One way to help such learners is to enable them to pursue continuing education from home, or at the office from their desktop.
Keywords: Virtual University, Interlinking, integration, e-knowledge, Learning

I. INTRODUCTION
Continuing education is achieved through e-learning, or electronic learning: learn as much as possible and update yourself through Computer Based Tutorials (CBT) and Communities of Practice (CoP) by having discussions with your peers during free hours, learn and update yourself through video conferencing, where you learn and share your experience, and finally take some short-term courses from universities that provide learning opportunities through interaction, helping adult learners to continue education through e-learning. E-learning is definitely the solution of the day for these adult learners to encash the opportunity and update their knowledge through short-term courses provided by a university. But the problem lies in designing an e-learning model, or e-model, keeping in mind the target group. When an e-learning model was designed, it had inconsistency and wrong data, and it was not constantly updated. Therefore the adult learners were not getting what they needed through continuing education by e-learning. Another problem in the existing e-learning model was that the courseware did not match current industry trends. As a result, e-learning happens in bits and pieces, with content spread across sources.
We are currently living in a fascinating time in history. New developments in learning science and technology provide opportunities to create engaging, interactive and meaningful course content in almost all subject areas. Now we are able to create affordable, efficient, easily accessible, open, flexible, well-designed, learner-centered, distributed and facilitated learning environments. We are blessed with the advancement of learning technologies that allow us to create exciting learning environments. Various educational institutions, corporations, government agencies and individuals in the community are creating online learning materials and resources for their target audience. We should integrate the large reservoir of resources, opportunities and situations available in all systems and domains of the community to support the learning and development of children and youth, and the continuous learning and development of adults through life (Banathy, 1991). With the blessing of distributed learning technologies, communities throughout the world can establish virtual universities by organizing society's best resources for learning, professional development and continuing education. Virtual universities therefore become hubs for excellence in education, training and learning resources in their respective communities. Which components and parts have to be involved, and in what arrangements, so that they have the capability to attend to these functions? In establishing a VU, each community has to identify its purposes, functions and components. The purpose-function-component structure of a VU can depend on the type of community it encompasses and the scope of its operation (i.e., a local, state, multi-state, national, multinational or international level VU). The purpose of a virtual university is to serve as a hub for excellence in education, training and learning resources by integrating its community's best services and resources for learning, professional development and continuing education.

Proc. of the Intl. Conf. on Computer Applications
Volume 1. Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2:: doi: 10. 73795/ISBN_0768
ACM #: dber.imera.10. 73795


II. AIM OF THE RESEARCH
In order to bolster our knowledge resources and to obtain and maintain a competitive edge in the world, we require a system of identification and nurturing of talent and lifelong learning. Knowledge modules based on the personalized needs of the learner would need to be delivered to him/her at the right time, with the right content, interactively, to take care of his/her aspirations. In due course of time there would be a need to develop and maintain the knowledge and capability profile of every individual learner/worker. Such a system would have to be developed in a cost-effective manner over a period of time, integrating, inter alia, the following objectives:
1. Effective utilization of intellectual resources, minimizing wastage of time in scouting for opportunities or desired items of knowledge appropriate to the requirement.
2. Certification of attainments of any kind at any level acquired through formal or non-formal means in conventional or non-conventional fields.
3. Any-time availability of desired knowledge at appropriate levels of comprehension to all, for self-paced learning.
4. A platform for sharing of ideas and techniques and pooling of knowledge resources.
5. Systematically building a huge database of the capabilities of every individual human resource over a period of time.
7. Nurturing of scholars and learners.
8. Support to all learners/workers for any of their perceived learning needs.
9. Extensive leveraging of the advancements in the field of ICT for taking knowledge resources to the doorsteps of the learner.
10. Use of e-learning as an effort multiplier for providing access, quality and equality in the sphere of providing education to every learner in the country.
11. Provision of connectivity and access devices, content generation, personalization and mentoring, testing and certification, and encouragement of talent.
12. Bringing the efforts of different interested agencies working in the field of e-learning under one umbrella and establishing logical linkages between the various activities.
13. Providing e-books and e-journals, utilizing the repository of contents generated so far and automating evaluation processes; creating a high-impact brand of e-journals in leading disciplines with a provision for good incentive-based payment to researchers publishing their high quality papers in these e-journals.
14. Spreading digital literacy for teacher empowerment and encouraging teachers to be available on the net to guide learners.
15. Voice support for educational material delivery and interactivity for the content on the portal.
16. Development of interfaces for other cognitive faculties, which would also help physically challenged

learners. These efforts may cut across all the content generation activities.
17. Improving teachers' training and course curriculum.
18. Providing digital/information literacy for teacher empowerment.
19. Creating a clearinghouse cum rating agency for various web-based learning contents for guiding Indian learners.
20. Credit-based flexible module formulation for openness to qualifications and easy transfer of credits from one programme/course to another.
21. Development of robust models of networking to encourage community participation at local levels.
22. Providing e-learning support to every higher education institution for technology-assisted learning.

III. PROBLEM STATEMENT
With an ever-expanding field of knowledge, the knowledge and skill sets required by an individual to successfully lead life have also expanded, throwing up the challenge of learning more and more throughout one's life. Added to that are the challenges of pedagogy faced by teachers, who must package more and more for uptake by students within the same amount of available time. A proper balance between content generation, research in critical areas relating to the imparting of education, and connectivity for integrating our knowledge with the advancements in other countries has to be attempted. For this, what is needed is a critical mass of experts in every field working in a networked manner with dedication. The objectives of the National Mission on Education through ICT shall include (a) the development of knowledge modules having the right content to take care of the aspirations and to address the personalized needs of the learners; (b) research in the field of pedagogy for the development of efficient learning modules for disparate groups of learners; (c) standardization and quality assurance of contents to make them world class; (d) building connectivity and a knowledge network among and within institutions of higher learning in the country with a view to achieving a critical mass of researchers in any given field; (e) availability of e-knowledge contents, free of cost, to Indians; (f) spreading digital literacy for teacher empowerment; (g) experimentation and field trials in the area of performance optimization of low-cost access devices for the use of ICT in education; (h) providing support for the creation of virtual technological universities; (i) identification and nurturing of talent; (j) certification of competencies of the human resources acquired either through formal or non-formal means and the evolution of a legal framework for it; and (k) developing and maintaining a database with the profiles of our human resources.
IV. LITERATURE REVIEW


This concept is used in continuing education and adult education programs aimed at recently illiterate or "neo-literate" adults and communities, largely in the developing world. Unlike continuing education or further education, which covers secondary or vocational topics for adult learners, post-literacy programs provide skills which might otherwise be provided in primary education settings. Post-literacy education aims to solidify literacy education, to provide resources and media aimed at the newly literate, and also may create systems of non-formal education to serve these communities. Projects include providing formal continuing education, providing written materials (the literate environment) relevant to economic development to newly literate members of developing societies, and leveraging radio and other non-written media to increase access to educational material in informal settings. Adult education is the practice of teaching and educating adults. Adult education takes place in the workplace, through 'extension' schools (e.g. Harvard Extension) or 'schools of continuing education' (Columbia School of Continuing Education). Other learning places include folk high schools, community colleges, and lifelong learning centers. The practice is also often referred to as 'Training and Development' and is often associated with workforce or professional development. It has also been referred to as andragogy (to distinguish it from pedagogy). Adult education is different from vocational education, which is mostly workplace-based for skill improvement, and also from non-formal adult education, which includes learning skills or learning for personal development. A common problem in adult education in the U.S. is the lack of professional development opportunities for adult educators. Most adult educators come from other professions and are not well trained to deal with adult learning issues. Most of the positions available in this field are only part-time, without any benefits or stability, since they are usually funded by government grants that might last for only a couple of years.
4.1 Continuing Education in Indian Universities
University education in India is identified with the preparation of people for professional work. Today it needs to be examined whether the initial training and preparation of young people for occupations at the professional level is all that universities need to do. In an ever-changing society which grows complex and mysterious at every turn of the year, professionals need guidance and illumination almost throughout their careers.
The scientific and technological growth in India and the nation, and the resolve to upgrade levels of living in our rural areas, have necessitated the acceptance and operationalisation of a culture of lifelong education. Individuals and groups need to be continuously galvanised into developmental action through a process of periodic updating of their knowledge and skills, a better understanding of their work environment and living and its challenges, and the adaptation of innovative behavioral practices.
The Educational Policy Perspective recently circulated by the Government of India visualises a lifelong learning society. The present schemes of the University Grants Commission under its Continuing Education Programmes offer an excellent opportunity to institutions of higher education to extend their physical and technocratic resources to all segments of the community in their area in the form of short-term need-based educational programmes. Continuing Education is thus a low-cost educational provision deriving its support from the existing infrastructure in institutions of higher education. It requires an innovative approach to target group identification, assessment of their needs, formulation of educational programmes, choice of innovative instructional methodologies, low-cost financial management strategies and ongoing feedback mechanisms.
4.2 Key Findings of the Literature Review
1. It is important to realize that we live in a fast-changing world, dictated by developments in technology. Quick access to information has made knowledge creation fast, and the multiplier effect has made it even explosive. It is increasingly difficult to anticipate changes and respond to them with creative purpose. Designing courses with relevance to the future and developing the necessary manpower to deliver them is a challenging task. All this calls for a team of professionals in different areas to come together to develop proactive strategies for higher education to meet future demands. A Strategy Planning Body and an Institution to design and develop futuristic courses, for transferring them to the Universities and Colleges, may be created.
2. As the Colleges are the feeding sources of the Universities, better coordination in their working and activities is very much required. The participation of the


teaching faculty, through a democratic process, should be ensured.
3. New models for higher education, including the following, need to be created and adopted in the country:
(a) extended traditional Universities,
(b) technology-based Universities, and
(c) corporate Universities.
4. The Universities and National Institutes of Higher Learning should design their courses in collaboration with industry, and such courses should be updated regularly, e.g. every year, according to need.
5. E-learning appears to be a fast emerging mode of global entry at the present time. The Universities and other institutions of higher education can design their web sites for offering online education worldwide.
6. Other desirable initiatives for the export of higher education include:
Developing educational products of new models based on flexibility and learner's choice;
Preparing students for the knowledge society;
Providing methods and styles of working for life-long learning;
Arranging facilities for e-learning and distance learning;
Ensuring total quality management in the higher education system;
Catering to the changing market demands and churning out an adaptable work force.
7. Linking the Universities in Tamil Nadu will give an abundance of resources, in response to the huge costs involved in running a complex research-intensive university. These universities have four main sources of financing: government budget funding for operational expenditures and research, contract research from public organizations and private firms, the financial returns generated by endowments and gifts, and tuition fees.
8. Upgrading existing institutions. One of the main benefits of this first approach is that the costs can be significantly less than building new institutions from scratch.
9. Make teaching and learning possible from anywhere, anytime; link education and learning with life and work related processes and places.
10. Create a National/Regional Grid network of educational content and services, which can flow in the network and support the processes of educating, learning, teaching and evaluating, anywhere, anytime.
11. Enable educators and educational institutions to create new paradigms of education dependent on various developmental processes and models.
12. Any networked society will need:

A network with broadband connectivity linking hardware and appliances at various places, giving access to anyone, anytime, anywhere.
Software tools, techniques and applications enabling people and groups to communicate with others quite intimately.
Content needed and shared by groups of people and organizations/institutions, which enables providers to offer services to users and customers.
13. Content in e-formats on a knowledge grid that enables teachers and students to get a personalized curriculum of high quality, relevance and utility.
14. An educational delivery system that ensures the quality and developmental relevance of educational offerings (Developmental Education) for individuals, institutions and the community.
15. By using all the media of print, audio, video, animation and simulation, content in e-education can be developed in the formats of:
E-lectures;
Multi-media materials in distance education formats;
Interactivity-based content out of questions and answers, seminars, workshops;
Assignments and projects done by students.
Content output could be stored on servers at various places in the network.
4.3 Enhanced Solution
The categories were again rearranged in order to generate the following theoretical propositions that were grounded in the data.
1. Community did not happen unless the participants wanted it to happen. That observation referred back to similarities and differences in participants' personal and academic needs. If the participants needed or wanted community, for whatever reason, they were apt to find it. Some saw it as an opportunity to network. Some just naturally wanted to participate and, in fact, provided some of the glue that was needed to put a community together and keep it together. Some participants were resistant to membership in a community, so they purposely did not get involved or stayed on the fringes.
2. On-line community was present for some participants and not for others, even though they were in the same class and even though that class was described by numerous students as having a lot of community. The student didn't have a lot in common with other participants and didn't interact in the virtual cafeteria. He was comfortable with the technology, pedagogy and content, but preferred face-to-face (or at least oral) communication.
3. Modeling, encouragement, and participation by the instructor helped community form more readily for more students in computer-mediated classes. The most obvious example of this was offered by a student who was in two


summer classes, one of which she said had a lot of community and one of which had only a little.
4. Participants had to get comfortable with the
technology used to deliver the classes, with the doctoral
level class content, with the collaborative learner-centered
teaching method, and with faceless interaction before they
could become active members of a classroom community.
5. Veteran students could help create on-line
community or hinder its formation or both. New students
needed veterans' support and encouragement to continue as
they "learned the ropes." Veterans often helped create
community by modeling expected behavior, but that was
generally in the early days of class. After the initial sign-on
interaction, veterans often went back to communicating
with "friends" from previous classes, which hindered the
overall formation of community.
6. Community could be experienced at any of three
different levels: having on-line friends or acquaintances
with whom participants interacted regularly, feeling part of
an on-line classroom community, and enjoying
camaraderie.
7. Membership in the on-line community was
conferred by others through feelings of worthiness and
acceptance or belonging that occurred following
participation in long threaded discussions.
8. Voluntarily interacting beyond class requirements
promoted the feeling of community in computer-mediated
classes. Whether this interaction was in the form of
additional classroom input, virtual cafeteria input,
individual e-mails, phone calls, faxed material, or face-to-face meetings, the cumulative effect was that the person
creating the additional interaction was more fully engaged
in the class and so found more community there.
9. Levels of community experienced were closely
linked to levels of engagement in the class and dialogue.
10. Long-term association with each other helped
promote on-line community because students who went
through multiple classes together could continue to amplify
on-line relationships so that a higher level of community
resulted over an extended period of time.
11. Qualities such as respect and trust were found in
descriptions of on-line classroom community members
even though such feelings had to be transmitted through
text only on a computer screen.
12. Feelings of acceptance and worthiness were
transmitted through on-line community conferment, but
on-line acceptance and worthiness were more likely to be
based on the quality of the participant's input than on his or
her virtual personality.
4.4 Model Proposed


[Figure: the proposed eCM (e-Core Model). Learners wanting to improve undergo training; problems faced during training and problems faced by learners after training are fed back as scope for improvement. A core engine (track and fire) drives the operations (web interface, database, e-mail system) and produces the output deliverables for the learners using the e-model.]

V. VIRTUAL UNIVERSITY
5.1 New Age, New Organizations - Virtual Universities
During the last few years, many universities and
colleges are getting ready to face the impact of
globalization and emerging competition in marketing
education by forming consortia of colleges and
universities. The major approach employed is to partner
with other colleges and universities and to offer the best
available educational expertise, courses and services to
students both on-campus and off-campus. This is also
aimed at survival of small institutions against the
competition from the big ones; and is using first generation
technologies. Many colleges and universities have formed
partnerships (virtual universities) by using essentially first generation technologies for becoming competitive and earning resources to support themselves.
Three features of the university need to be underscored:
The virtual university is not being
proposed as a university in the conventional single
institutional sense. It will, in fact, be a virtual
organization.
The virtual university will carry out its
functions by optimizing ICT applications,
particularly those that enable the creation and
deployment of content databases based on
learning objects and granules.
The virtual university will be as much
concerned with adding value to conventional
on-campus instruction as it is with serving
learners at a distance.



It is therefore a bold and challenging vision of a


virtual university that has the promise of enabling the
consortium of member institutions to become leaders in
development of education models that can be tailored to
the realities of the learners they serve.
The virtual university is a concept at the initial stage of
development and operations, and offers an opportunity to
radically transform the existing models and practices of education. Education can now be made central to all the
human developmental activities by developing radically
different paradigms of education.
Development of such a network would create the
infrastructure and framework, which could help weaker
and disadvantaged colleges & universities to join
regional/national consortia and offer best educational
services to their local students by offering personalized
services.
The major components of the National Network will
be:
National educational network connecting all
institutions and their classrooms through broadband
connectivity.

Indian Knowledge Grid to enable content to flow


to anyone anywhere and anytime.

Granulated Object Based Content in a Metadatabase

Promotion of national and region level consortia


of colleges and universities

Movement for giving services to weaker and


disadvantaged, for ensuring quality and justice for all.


In response to increased demand for anytime, anywhere
learning, every community in the world
will try to establish their virtual universities. Some may
not call their efforts as virtual universities, but their
purposes are aligned with virtual universities. Regardless
of what name a virtual university uses, the quality of
services to the learners is the key to its success. In
emphasizing the quality of services, VU should be
established with the purpose of serving as the hub for
excellence in education, training and learning resources by
providing high quality learning environment supported by
well-designed resources and best all-around services for
learning.
A critical review of virtual university literature has
revealed that there are many different terms associated
with virtual universities. Virtual universities are also
known as virtual teaching/learning environments, online
teaching/learning,
web
based
teaching/learning
environments, virtual learning communities, and flexible
learning environments.

[Figure 5.1. The linking of information inside a university through the EVLM (Enhanced Virtual Learning Model): browsing the website for course content, registration for the course, experiential learning, interactive learning (chatting, mailing and blogging, brainstorming, conferencing), listen-and-learn sessions (audio, guest lectures, seminars, workshops, special classes), visual learning (animation, video conferencing), assignment submission, knowledge gaining and applying, tests and interviews, and certification or registration for the next session. Legend: Pk = prior knowledge, C = content.]

The model given below embeds the model given above by creating a virtual platform to interlink universities and create a virtual university.
[Figure 5.2. The EVLM embedded inside the model link for the virtual university (Virtual Entity Model). People (individuals/institutions, independent evaluators, funders) reach the Virtual University (a legal entity) through authorized contact centers; its processes cover teaching and learning, enrolment, credentials, curriculum, support/quality assurance and project finance, all running on the virtual platform (EVLM) with its tools and serving the students.]

This model will satisfy the need of the hour, where knowledge is spread all over and we should bring all knowledge under one roof. The sharing of knowledge should be done by encapsulating all the knowledge and distributing it to the learners.
The model given below is an enhanced virtual learning model which can be incorporated in a university to distribute knowledge virtually. This enhances the flow of knowledge and helps knowledge reach the end user faster.

VI. CONCLUSION
The virtual campus may widen opportunities for some,
but not by and large for those at the low end of the socio-



economic scale, who have traditionally been


underrepresented in higher education. Virtual space is
infinite, but it does not promise universality or equity, nor
is it appropriate for many students whose experience with
technology is limited - and who might benefit far more
from traditional delivery systems.
More research should be performed on societal
functions of conventional universities other than the
transfer of knowledge and skills through formal
instruction, on the societal importance of these functions,
and on the incorporation of such functions in virtual
universities. In developing virtual universities programs,
the multidimensionality of conventional universities should
be considered, including their functions in social
integration, establishing social networks, promoting
personal and social change, the offering of extracurricular
activities and social services, and the cultural transmission
of values. It should be considered how such functions
could be incorporated into virtual universities, and, if this
cannot be done well, whether omitting such functions is
acceptable. In developing virtual university programs,
special attention should be given to the development of
community. In doing this, the question should be addressed
to what extent virtual communities can function as genuine
communities, and to what extent virtual environments and
interactions must be supplemented by physical places and
face-to-face interactions for community development.
VII. FUTURE ENHANCEMENT
The model developed may have its own disadvantages; only implementation can help in finding out some of them. The model can be modified further with respect to the environment variables, or new technologies can be added to it, to make it reach all sections of society where the thirst for knowledge and the drive for improvement exist. It is the duty of the individual to identify such sections and implement the same.
REFERENCES
[1] Shun-Yun Hu, "Spatial Publish Subscribe," in Proc. IEEE Virtual
Reality (IEEE VR) workshop Massively Multiuser Virtual Environment
(MMVE'09), Mar. 2009.
[2] Shun-Yun Hu, Jui-Fa Chen and Tsu-Han Chen, "VON: A Scalable Peer-to-Peer Network for Virtual Environments," IEEE Network, vol. 20, no. 4, Jul./Aug. 2006, pp. 22-31.
[3] Jehn-Ruey Jiang, Yu-Li Huang, and Shun-Yun Hu, "Scalable AOICast for Peer-to-Peer Networked Virtual Environments," in Proc. 28th
International Conference on Distributed Computing Systems Workshops
(ICDCSW) Cooperative Distributed Systems (CDS), Jun. 2008.
[4]Shun-Yun Hu and Guan-Ming Liao, "Scalable Peer-to-Peer
Networked Virtual Environment," in Proc. ACM SIGCOMM 2004
workshops on NetGames '04, Aug. 2004, pp. 129-133.
[5] Shun-Yun Hu, Jui-Fa Chen and Tsu-Han Chen, "VON: A Scalable Peer-to-Peer Network for Virtual Environments," IEEE Network, vol. 20, no. 4, Jul./Aug. 2006, pp. 22-31.
[6] Jehn-Ruey Jiang, Jiun-Shiang Chiou, and Shun-Yun Hu, "Enhancing Neighborship Consistency for Peer-to-Peer Distributed Virtual Environments," in Proc. 27th International Conference on Distributed Computing Systems Workshops (ICDCSW), Jun. 2007, pp. 71.
[7] Dickson, P. (2005). Toward a deeper understanding of student performance in virtual high school courses: Using quantitative analyses and data visualization to inform decision making. In R. Smith, T. Clark, & B. Blomeyer (Eds.), A synthesis of new research in K-12 online learning (pp. 21-23). Naperville, IL: Learning Point Associates.
[8] Ferdig, R., DiPietro, M., & Papanastasiou, E. (2005). Teaching and learning in collaborative virtual high schools. In R. Smith, T. Clark, & B. Blomeyer (Eds.), A synthesis of new research in K-12 online learning (pp. 31-33). Naperville, IL: Learning Point Associates.
[9] Harms, C.M., Niederhauser, D.S., Davis, N.E., Roblyer, M.D., & Gilbert, S.B. (2006). Educating educators for virtual schooling: communicating roles and responsibilities. Electronic Journal of Communication.
[10] Haughey, M., & Muirhead, W. (2004). Managing virtual schools: The Canadian experience. In C. Cavanaugh (Ed.), Development and management of virtual schools (pp. 50-67). Hershey, PA: Information Science.
[11] M. M. Al-Baiati. The applied & scientific dimensions in electronic learning. The Arabian net for open electronic learning, Amman, Jordan, 2006.
[12] A. S. Al-Radi. The effect of using the virtual lab technology on the academic attainments of 3rd secondary school students in the course of chemistry at Al-Qasem Region. Unpublished MA Thesis, King Saud University, Riyadh, 2008.


Segmentation of Pulmonary Nodules in Computed Tomography Scan for Lung Disorder Prediction

Prof. Vilas D. Alagdeve
Electronics Engineering Dept., Y.C.C.E., Nagpur
vilas_a23@rediffmail.com

Prof. Mahesh S. Pawar
Electronics Engineering Dept., Y.C.C.E., Nagpur
Mahesh2323@gmail.com

Abstract - Lung cancer is the most common cause of death due to pulmonary nodules in both men and women throughout the world. Segmentation of pulmonary computed tomography (CT) images is a precursor to most pulmonary image analysis applications that state whether a tumor is benign or malignant. Obstruction of the bronchioles may be detected indirectly by computed tomography (CT). The detection of lung tumors can be performed on CT images through digital image processing techniques combined with neural network classification using a BPN. The lung CT image is denoised using a filtering method to remove the random noise prevalent in CT images and obtain noiseless results. Optimal thresholding is applied to the denoised image to segregate the lung regions from the surrounding anatomy. Lung nodules, approximately spherical regions of relatively high density found within the lung regions, are segmented using a region growing method. Textural and geometric features extracted from the lung nodules using the gray level co-occurrence matrix (GLCM) are fed as input to a back propagation neural network that classifies the lung tumor as cancerous or non-cancerous. The proposed system, implemented in MATLAB, takes less than 2 minutes of processing time and has yielded promising results that would supplement the diagnosis of lung cancer.
Keywords - Denoising, Optimal thresholding, Region growing, Gray level co-occurrence matrix, Textural features, Back propagation network.

I. INTRODUCTION
The mortality rate for lung cancer is higher than that
of other kinds of cancers around the world. To improve
the chance of survival, an early detection of lung cancer
is crucial. Lung nodule detection is a challenging task in
medical imaging. Lung cancer is the most common cause
of death due to cancer in both men and women
throughout the world [1]. The five-year survival rate is
about 15%, and it has not significantly increased over the
last 20 years. Early diagnosis of
lung cancer can improve the effectiveness of the
treatment and therefore fast accurate analysis of
pulmonary nodules is of major importance. Lung cancer
results from an abnormality in the body's basic unit of
life, the cell. Normally, the body maintains a system of
checks and balances on cell growth so that cells divide to
produce new cells only when new cells are needed.
Proc. of the Intl. Conf. on Computer Applications
Volume 1. Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2:: doi: 10. 73802/ISBN_0768
ACM #: dber.imera.10. 73802


Disorders of the lungs' checks and balances on cell growth result in an uncontrolled division and proliferation of cells that eventually forms a mass known as a tumor. Tumors can be benign or malignant. Benign tumors do not spread to other parts of the body and usually can be removed, whereas malignant tumors grow aggressively and spread to other sites in the body.
Computed Tomography (CT) is the most recently
used diagnostic imaging examination for chest diseases
such as lung cancer, tuberculosis, pneumonia, and
pulmonary emphysema. Computed Tomography (CT)
scanning allows looking inside the body, without
resorting to invasive methods. Not only more
comfortable and safe for the patient, imaging also allows
views of anatomy and physiology that cannot be obtained
by any other means. Medical imaging plays an important
role in early nodule detection and treatment of cancer. It
also guides the medical practitioner with information for
efficient and effective diagnosis.
Computed Tomography (CT) is one of the best
imaging techniques for soft tissue imaging behind bone
structures. A CT image has high spatial resolution,
minimizes artifacts and provides excellent visualization
of anatomical features for analysis. In earlier stages, lung
cancer is most commonly noticeable in CT images
radiologically as a non-calcified solitary pulmonary
nodule. Nodules are visible as low contrast white,
approximately spherical objects within the lung fields.
Image segmentation facilitates delineation of anatomical
structures and other regions of interest. Segmentation is
one of the most difficult tasks in image processing and
determines the outcome of analysis and evaluation of
pathological regions. Neural networks are used widely
for classification of image analysis in several biomedical
systems. The aim of the proposed system is to predict
lung tumor through efficient segmentation and neural
network classification.
System architecture is discussed in Section II.
Experimental results are presented in Section III and
analyzed. Conclusion and future work is briefed in
Section IV.
II. SYSTEM ARCHITECTURE
The system architecture for prediction of lung tumor
is shown in Fig. 1. The proposed system uses image
processing techniques and neural networks to predict
lung tumor. The major components of tumor prediction
system are Denoising using filtering method, optimal
thresholding segmentation of lungs and region growing


segmentation of tumor regions, gray-level co-occurrence


matrix feature extraction, back propagation neural
network classification and inference & forecasting
system.

[Figure 1. Lung tumor prediction system: lung CT image set -> denoising of the CT image using filtering -> lung region separation using optimal thresholding and morphological operations -> ROI extraction using region growing -> gray level co-occurrence matrix feature extraction -> back propagation network classification (with training data) -> inference and forecasting -> benign / malignant.]
[Figure 2. Histograms of the lung CT image: (a) histogram of the noisy lung CT image, (b) histogram of the denoised lung CT image.]

A. Image Denoising
CT images are found to have random noise, and this random distortion makes it difficult to perform perfect image processing. A filtering method is chosen because it effectively removes the random noise present and also smoothens the image. The input to the denoising module is a 2-D lung CT image in JPEG format of size 512 x 512. Let the observed intensity function u0(x, y) denote the pixel values of the noisy image, u(x, y) denote the desired clean image, and n(x, y) the additive noise; the observation model is given in (1):
u0(x, y) = u(x, y) + n(x, y)    (1)
The smoothing effect of the denoising method is shown by the histograms in Fig. 2. A minimal sketch of such a denoising step is given below.
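A denoising sketch under stated assumptions: the text only says that a smoothing filter is applied to the 512 x 512 JPEG CT slice, so the 3 x 3 median filter and the file name "lung_ct.jpg" below are illustrative choices rather than the authors' implementation.

```python
# Illustrative denoising step; kernel choice and file name are assumptions.
import numpy as np
from PIL import Image
from scipy.ndimage import median_filter

def denoise_ct(path="lung_ct.jpg", kernel=3):
    # Observation model of (1): u0(x, y) = u(x, y) + n(x, y)
    u0 = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    u_hat = median_filter(u0, size=kernel)  # smoothed estimate of the clean image u(x, y)
    return u0, u_hat
```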

B. Segmentation of Lung Regions


The pre-processed lung CT consists of high intensity
pixels in the body and low intensity pixels in the lung
and surrounding air. The non-body pixels are the
surrounding air that lies between the lungs and the dark
background. The lung regions are segmented by
separating the pixels corresponding to lung tissue from
the pixels corresponding to the surrounding anatomy.
Optimal thresholding, proposed by Shiying et al. [8], is applied on the pre-processed lung image to select a segmentation threshold that segregates the body and non-body pixels through the following iterative procedure (a sketch follows the steps):
1. The initial threshold T0 is the average of the minimum and maximum intensity in the image.
2. Let Ti be the segmentation threshold at step i.
3. Segment the image with the current threshold to separate the body and non-body pixels of the image.
4. Let b and n be the mean gray levels of the body pixels and non-body pixels after segmentation.
5. The new threshold Ti+1 is determined using (3):
Ti+1 = (b + n) / 2    (3)
6. Repeat steps 3-5 until there is no significant difference between threshold values in successive iterations (Ti+1 = Ti).
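A NumPy sketch of the iterative optimal-thresholding steps 1-6 is given here; the numeric convergence tolerance is an assumption, since the text only asks for "no significant difference" between successive thresholds.

```python
import numpy as np

def optimal_threshold(image, tol=0.5):
    t = (float(image.min()) + float(image.max())) / 2.0   # step 1: initial threshold T0
    while True:
        body = image[image > t]                            # step 3: body (high-intensity) pixels
        non_body = image[image <= t]                       # step 3: non-body pixels
        mu_b = body.mean() if body.size else t             # step 4: mean gray level of body pixels
        mu_n = non_body.mean() if non_body.size else t     # step 4: mean gray level of non-body pixels
        t_new = (mu_b + mu_n) / 2.0                        # step 5: Ti+1 = (b + n) / 2, eq. (3)
        if abs(t_new - t) < tol:                           # step 6: thresholds have converged
            return t_new
        t = t_new
```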
The lung image after segmentation with the optimal threshold value still contains non-body pixels such as the air surrounding the lungs and body and other low-density regions within the image; these are removed through morphological operations as shown in Fig. 3. Background pixels are identified as non-lung pixels connected to the border. The air surrounding the body and the image background are not relevant to the study and are hence cleared. Noise caused by imaging operations and airways such as the trachea or the bronchi may result in empty cavities; these holes are filled. Small disconnected regions, if any, are then discarded if their area is minimal. The lung regions are identified and extracted by replacing the intensity values of pixels in the lung region with the intensity values of the denoised image. All pixels in the non-lung regions are then set to 255.

Figure 3. Optimal thresholding and morphological operations

C. ROI Extraction
Lung nodules are approximately spherical regions of relatively high density found in CT images. A lung nodule, also referred to as a tumor region, is a white mass of tissue located in the lungs. In this work, lung nodules are the desired regions of interest (ROIs). Region growing is a segmentation method especially suited to the delineation of small, simple structures such as tumors and lesions. The ROIs are segmented using the region growing method as follows:
1. In lung CT images the tumor regions have a maximum intensity value of 255.
2. A pixel is added to a region if it satisfies the following criteria: the absolute difference between the seed and the pixel is less than 50 (the value of 50 was arrived at after analyzing the histogram), and the pixel has 8-connectivity to any one of the pixels in that region.
3. If a pixel is connected to more than one region, the regions are merged.
Region growing is one of the best methods to segment tumor regions because the borders of the regions found by region growing are thin and connected. The extracted ROIs are then subjected to feature extraction for analysis; a sketch of the growing step is given below.
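A minimal region-growing sketch based on the criteria stated above: a pixel joins the region if it is 8-connected to it and its absolute intensity difference from the seed is below 50. Seed selection and the merging of touching regions are omitted for brevity.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, diff=50):
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):                 # scan the 8-connected neighbourhood
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    if abs(float(image[ny, nx]) - seed_val) < diff:
                        mask[ny, nx] = True
                        queue.append((ny, nx))
    return mask   # boolean mask of the grown tumor region (ROI)
```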
D. Feature Extraction
Textural features derived from the spatial distribution of gray levels can be used to characterize images, and the extracted ROIs can be distinguished as cancerous or not using their texture characteristics. The Gray Level Co-occurrence Matrix (GLCM) is one of the most popular ways to describe the texture of an image. The GLCM is defined as a matrix of relative frequencies p(i, j) expressing how often a pixel with intensity i occurs in a specific spatial relationship to a pixel with value j. Features such as area, convex area, equivalent diameter, eccentricity, solidity, energy, contrast, correlation and homogeneity were considered for classification in the proposed work. The features are defined as follows:
1. Area: a scalar value that gives the actual number of pixels in the ROI.
2. Convex Area: a scalar value that gives the number of pixels in the convex image of the ROI, which is a binary image with all pixels within the hull filled in.
3. Equivalent Diameter: the diameter of a circle with the same area as the ROI, as defined in (4).
4. Eccentricity: the ratio of the distance between the foci of the ellipse and its major axis length; the value is between 0 and 1.
5. Solidity: the proportion of the pixels in the convex hull that are also in the ROI, as defined in (5).
6. Energy: the sum of squared elements in the GLCM, as in (6); the value ranges between 0 and 1.
7. Contrast: the measure of the intensity contrast between a pixel and its neighbour over the whole ROI, as in (7), where Ng is the number of distinct gray levels.
8. Correlation: the measure of how a pixel is correlated to its neighbour over the ROI, as in (8); the value ranges between -1 and +1.
9. Homogeneity: the measure of the closeness of the distribution of elements in the GLCM to the GLCM diagonal, as in (9); the value ranges between 0 and 1.
The above nine textural features are extracted from each ROI and stored as feature vectors in the database for training the network; a sketch of the computation is given below.
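The sketch below illustrates how such a nine-element feature vector could be assembled with scikit-image; the library, the GLCM distance and angle, and the exact property names are assumptions (names such as convex_area may differ slightly between scikit-image versions), since the text does not fix an implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def roi_features(roi_gray, roi_mask):
    # Shape features measured on the binary ROI mask
    props = regionprops(label(roi_mask.astype(np.uint8)))[0]
    shape = [props.area, props.convex_area, props.equivalent_diameter,
             props.eccentricity, props.solidity]
    # Texture features measured on the normalised GLCM of the ROI gray levels
    glcm = graycomatrix(roi_gray.astype(np.uint8), distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    texture = [float(graycoprops(glcm, p)[0, 0])
               for p in ("energy", "contrast", "correlation", "homogeneity")]
    return np.array(shape + texture, dtype=np.float64)   # 9 features per ROI
```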
E. Back Propagation Network Classification
The Back Propagation Network (BPN) is a systematic method for training multi-layer artificial neural networks. Back propagation provides a computationally efficient method for changing the weights in a feed-forward network with differentiable activation function units so as to learn a set of input-output examples.
A multi-layer feed-forward BPN for classification of tumors consists of an input layer, one hidden layer and an output layer, as shown in Fig. 4. The total number of nodes in the input layer is 9, representing the features extracted from the ROI. The number of nodes in the hidden layer was set to 5, decided experimentally as the network produced good results with this size. The output of the BPN is a single value, hence there is one node in the output layer.
The nodes in one layer connect to the nodes in the next layer by means of directed communication links, each with an associated randomly initialized weight. Since the BPN has one hidden layer and one output layer, it results in two weight matrices: w1 connecting the input layer to the hidden layer and w2 connecting the hidden layer to the output layer.
The network is trained using the log-sigmoid activation function with a learning rate of 0.1 to evaluate the feature vectors based on the current network state. The error threshold is set to 0.1 and the maximum number of epochs to 500. The output values lie in the range 0 to 1, and a threshold value of 0.5 is selected to distinguish between cancerous and non-cancerous lung nodules. The processing time taken by the network is less than a minute. A compact sketch of this network is given below.
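A compact NumPy sketch of a 9-5-1 back-propagation network with log-sigmoid units, learning rate 0.1, error threshold 0.1 and at most 500 epochs; the weight-initialisation range and the mean-squared-error stopping rule are assumptions not fixed by the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpn(X, y, hidden=5, lr=0.1, epochs=500, err_thresh=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w1 = rng.uniform(-0.5, 0.5, (X.shape[1], hidden))   # input -> hidden weights
    w2 = rng.uniform(-0.5, 0.5, (hidden, 1))            # hidden -> output weights
    y = np.asarray(y, dtype=np.float64).reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ w1)                              # hidden-layer activations
        o = sigmoid(h @ w2)                              # network output in [0, 1]
        err = y - o
        if np.mean(err ** 2) < err_thresh:               # stop once the error is small enough
            break
        delta_o = err * o * (1 - o)                      # output-layer local gradient
        delta_h = (delta_o @ w2.T) * h * (1 - h)         # back-propagated hidden gradient
        w2 += lr * h.T @ delta_o
        w1 += lr * X.T @ delta_h
    return w1, w2

def classify(X, w1, w2, threshold=0.5):
    out = sigmoid(sigmoid(X @ w1) @ w2)
    return np.where(out >= threshold, "Cancerous", "Non-Cancerous"), out
```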

Figure 4. BPN for tumor classification

F. Inference and Forecasting
A lung CT image is given as input to the subsystem. The input image is pre-processed, the lungs and then the ROIs are segmented, and the textural features are extracted. The subsystem then applies the BPN classification technique to predict whether the lung tumor is benign or malignant.

III. EXPERIMENTAL RESULTS AND ANALYSIS
The lung image set used for the study of the proposed system consists of 32 diseased lung CT JPEG images of size 512x512. A total of 78 ROIs were extracted. The system was tested with 8 diseased and 3 normal lung images.
The results obtained for a healthy lung image are shown in Fig. 5: Fig. 5(a) shows the input image, (b) the denoised image, and (c) the segmented lung regions.

Figure 5. Results of image processing obtained for a healthy lung image

The results obtained for a diseased lung image are shown in Fig. 6: Fig. 6(a) shows the input image, (b) the denoised image, (c) the lung regions and (d) the extracted ROI.

Figure 6. Results of image processing obtained for a diseased lung image

A few classification results of feature vectors using the back propagation network, as either cancerous or non-cancerous, are listed in Table I. If the output value is greater than the threshold value of 0.5, the diseased image is classified as cancerous, otherwise as non-cancerous.
TABLE I. NODULE CLASSIFICATION USING BPN
ROI#     Output value   Target value   Classification
ROI_15   0.0022         0              Non-Cancerous
ROI_28   0.6964         1              Cancerous
ROI_34   0.1476         0              Non-Cancerous
ROI_41   0.9279         1              Cancerous
ROI_52   0.8256         1              Cancerous
ROI_64   0.9361         1              Cancerous
ROI_72   0.9960         1              Cancerous


Lung nodules classified by the system as cancerous and confirmed as positive by the expert are true positives (TP). Nodules classified by the system as cancerous but judged negative by the expert are false positives (FP). Nodules missed by the system (classified as non-cancerous) but judged positive by the expert are false negatives (FN). Nodules classified by the system as non-cancerous and confirmed as negative by the expert are true negatives (TN).
The output of the image processing and tumor classification was shown to an experienced radiologist, and based on this feedback the numbers of TP, FP, TN and FN were counted.
The performance of the proposed system was evaluated in terms of specificity, accuracy, precision and recall. Specificity is the number of correctly diagnosed negative nodules out of all negative nodules. Accuracy is the number of correctly diagnosed nodules out of the total number of nodules. Precision is the number of correctly classified positive nodules out of the number of nodules classified as positive. Recall is the number of correctly classified positive nodules out of all positive nodules. These measures are computed using (10), (11), (12) and (13) respectively; a small sketch follows.
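The four measures of (10)-(13) follow directly from the TP, FP, TN and FN counts obtained from the radiologist's feedback, as in this short sketch.

```python
def performance(tp, fp, tn, fn):
    specificity = tn / (tn + fp)                # correctly diagnosed negatives / all negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # correctly diagnosed nodules / all nodules
    precision = tp / (tp + fp)                  # correct positives / nodules called positive
    recall = tp / (tp + fn)                     # correct positives / all positive nodules
    return specificity, accuracy, precision, recall
```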

TABLE II. PERFORMANCE MEASURES
Performance Measure   BPN Classification
Specificity           86.7 %
Accuracy              85.2 %
Precision             89.6 %
Recall                85.2 %

REFERENCES
[1] H. K. Weir et al., "Annual report to the nation on the status of cancer, 1975-2000," Journal of the National Cancer Institute, vol. 95, no. 17, pp. 1276-1299, Sep. 2003.
[2] Omar S. Al-Kadi and D. Watson, "Texture analysis of aggressive and nonaggressive lung tumor CE CT images," IEEE Transactions on Biomedical Engineering, vol. 55, no. 7, pp. 1822-1830, July 2008.
[3] Zhong Xue, Kelvin Wong, and Stephen Wong, "Joint registration and segmentation of serial lung CT images in microendoscopy molecular image-guided therapy," Springer Medical Imaging and Augmented Reality, vol. 5128, no. 4, pp. 12-20, Aug. 2008.
[4] Serhat Ozekes, Onur Osman, and Osman N. Ucan, "Nodule detection in a lung region that's segmented with using genetic cellular neural networks and 3D template matching with fuzzy rule based thresholding," Korean Journal of Radiology, vol. 9, no. 1, pp. 1-9, Feb. 2008.
[5] Jamshid Dehmeshki, Hamdan Amin, Manlio Valdivieso, and Xujiong Ye, "Segmentation of pulmonary nodules in thoracic CT scans: A region growing approach," IEEE Transactions on Medical Imaging, vol. 27, no. 4, pp. 467-480, April 2008.
[6] A. A. Farag, A. El-Baz, and G. Gimelfarb, "Quantitative nodule detection in low dose chest CT scans: New template modeling and evaluation for CAD system design," Proc. of the International Conference on Medical Image Computing and Computer Assisted Intervention, pp. 720-728, Oct. 2005.
[7] Leonid I. Rudin, Stanley Osher, and Emad Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1-4, pp. 259-268, Nov. 1992.
[8] Shiying Hu, Eric A. Hoffman, and Joseph M. Reinhardt, "Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images," IEEE Transactions on Medical Imaging, vol. 20, no. 6, pp. 490-498, June 2001.
[9] D. J. Withey and Z. J. Koles, "Medical image segmentation: Methods and software," Proceedings of NFSI & ICFBI, pp. 140-143, Oct. 2007.
[10] R. M. Haralick, K. Shanmugam, and I. H. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610-621, Nov. 1973.
[11] L. Fausett, Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Prentice Hall, 1994.
[12] R. C. Gonzales and R. E. Woods, Digital Image Processing. Pearson Education, 2003.


IV. CONCLUSIONS AND FUTURE WORK
A set of textural features extracted from the extracted ROIs is classified by the BPN. The system was able to predict whether the tumor was benign or malignant in nature with an accuracy of 85.2% within 3 minutes. The proposed system would be effective in assisting the physician in identifying a lung tumor as cancerous or non-cancerous.
The proposed system can be enhanced in the following ways: the accuracy of the network can be improved by training it on a much larger image set and by classification based on hybrid systems, and the classification can be extended to include the various stages of malignant tumors and computation of the growth rate.

Fabric Defect Detection Using Statistical Texture Analysis


Comparison of GLCM and Histogram.
S. Padmavathi (Asst. Professor), Rohit K. Binnu (B.Tech) and P. Sathishkumar (B.Tech)
Department of Information Technology, Amrita School of Engineering, Amrita University, Ettimadai, Coimbatore, Tamil Nadu, India

Abstract - Quality assessment is an important task in the textile fabric industry. In this paper, defects in the fabric are identified through image processing techniques. The fabric is considered as a textured image, and statistical texture analysis methods, namely the Grey Level Co-occurrence Matrix (GLCM) and the Histogram, are used for identifying the defects. This paper presents a performance analysis of GLCM and Histogram features in identifying the defects. Sixteen commonly occurring fabric defect images are used for experimentation. The aim is to choose the appropriate feature and technique for a Fabric Automatic Visual Inspection system such that the discrimination between normal and defective regions is high. The performance of the various features in defect detection is analyzed and tabulated.

I. INTRODUCTION

Quality inspection is an important aspect of modern


industrial manufacturing. In textile industries, fabric defect
detection plays a prominent role in the quality control. Fabric
faults or defects comprise nearly 85% of the defects found in
the garment industry [1]. Manufacturers recover only 45-65% of their profit from second- or off-quality goods [2]. However,
the current inspection task is primarily performed by human
inspectors and this intensive labour cannot always give
consistent evaluation of products.
Fabric Automatic Visual Inspection (FAVI) system is a
promising alternative to the native human vision inspection.
Based on the advances made in digital image processing and
pattern recognition, FAVI system can provide reliable,
objective and stable performance on fabric defects
inspection. An automated system means lower labour cost
[3] and shorter production time [4].
There have been numerous reported works in the past two decades, during which computer vision based inspection has become one of the most important application areas. FAVI systems are majorly based on texture analysis methods. The textured materials can be further divided into uniform, random or
patterned textures. Brazakovic et al. [5] have detailed a
model based approach for the inspection of random textured
materials. The problem of printed textures (e.g. wall paper
scanning, ceramic flaw detection and printed fabric
detection) involves evaluation of colour uniformity [6] and
consistency of printed patterns. Ngan et al. [19] have introduced the new regular bands (RB) method, which is an effective approach for patterned texture inspection.
II. IMAGE PROCESSING TECHNIQUES FOR FABRIC INSPECTION

Texture is the basic repeating unit of local variation in


intensity of the image; hence it is one of the most important
characteristics in identifying defects or flaws. In fact, the task
of detecting defects has been predominantly viewed as a
texture analysis problem. A variety of techniques for
describing image texture has been proposed in the research
literature. With reference to several texture analysis survey
papers [7-12], texture analysis techniques can be categorized
into four ways: statistical approaches, structural approaches,
filter based approaches and model based approaches.
M.Tuceryan and Jain [13], on the other hand, defined five
major categories of features for texture analysis: statistical,
geometrical, structural, model based and signal processing
features. Geometrical and structural methods try to describe
the primitives and the rules governing their spatial
organization by considering texture to be composed of
texture primitives. However these two approaches have not
been feasible in detecting defects. This is mainly due to the
stochastic variations in the fabric structures (due to elasticity
of yarns, fabric motion, fibre heap, noise etc.). This poses
severe problems in the extraction of texture primitives from
the real fabric samples. The spectral methods use the
frequency domain and statistical methods use spatial domain.
Statistical texture analysis methods measure the spatial
distribution of pixels. An important assumption in this
approach is that, the defect free regions are stationary, and
these regions extend over a significant portion of inspection
images. So the statistical method is chosen to find and
classify the defects.
The co-occurrence matrix method, also known as the
spatial gray-level dependence method, has been widely used
in texture analysis. Tsai et al. [20] have detailed fabric defect detection using only two features and achieved a classification rate as high as 96%. Rosler [21] has also developed a real fabric defect detection system using co-occurrence matrix features, which can detect 95% of the defects. However, this technique becomes computationally expensive as the number of gray levels increases. This paper introduces a technique in which the gray levels are reduced while maintaining the accuracy.

III. FABRIC DEFECTS

A. Data set
Fabric defects can be classified basically into 3 types, namely colour defects, construction defects and the presence of dirt on the fabric material. The defects that occur due to colour and placement issues (i.e., during labelling, colouring of the sewing thread and errors in embroidery design) are called trims, accessories and embellishment defects. Some of the major defects which occur frequently are: bad shedding, broken pick, crack, end out, float, holes, knots, loose warp, oil mark, oil stain, rust stain, selvedge float, smash, weft bar and weft crack, which are shown in Figures 1 through 16.

Figure 1: Bad Shedding

Figure 3: Crack

Figure 5: Float

Figure 9: Oil Mark

Figure 10: Oil Stain

Figure 11: Rust Stain

Figure 12: Selvedged Float

Figure 13: Smash

Figure 14: Weft Bar

Figure 15: Weft Crack

Figure 16: Weft Crack2

Figure 2: Broken Pick

Figure 4: End Out

Figure 6: Hole

Figure 7: Knots
Figure 8: Loose Warp

B. GLCM
The GLCM is a second-order statistical method. It is a G x G matrix P, in which G represents the set of possible grey level values in the image. The GLCM is defined by:
Pd[i, j] = nij    (1)
where nij is the number of occurrences of the gray-level pair (i, j) with a specific displacement and orientation d. The GLCM is normalized for image-size invariance as N[i, j], defined by:
N[i, j] = P[i, j] / Σi,j P[i, j]    (2)

The features that are extracted for the GLCM are:

www.asdf.org.in

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

1. Energy: Σi,j Pd[i, j]²    (3)
2. Entropy: -Σi,j Pd[i, j] ln Pd[i, j]    (4)
3. Contrast: C(k, n) = Σi,j (i - j)^k Pd[i, j]^n    (5)
4. Inverse difference moment: Ci = Σi,j (i ≠ j) Pd[i, j] / (i - j)^k    (6)
5. Maximum probability: Cm = maxi,j Pd[i, j]    (7)
6. Correlation: Cc = (Σi,j i j Pd[i, j] - μi μj) / (σi σj)    (8)
where μi = Σi,j i Pd[i, j] and σi² = Σi,j i² Pd[i, j] - μi² (and similarly for μj and σj). A sketch of these computations is given below.
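A sketch of the co-occurrence features of eqs. (3)-(8), computed from a normalised GLCM built with scikit-image (an assumed library). The 4-pixel displacement matches the distance used in the experiments, while k = 2 and n = 1 for the contrast term are illustrative defaults, not values fixed by the paper.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_features(gray, distance=4, angle=0.0, levels=256, k=2, n=1):
    P = graycomatrix(gray.astype(np.uint8), [distance], [angle],
                     levels=levels, symmetric=False, normed=True)[:, :, 0, 0]
    i, j = np.indices(P.shape)
    nz = P > 0
    off = i != j
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    var_i = (i ** 2 * P).sum() - mu_i ** 2
    var_j = (j ** 2 * P).sum() - mu_j ** 2
    return {
        "energy":      (P ** 2).sum(),                              # eq. (3)
        "entropy":     -(P[nz] * np.log(P[nz])).sum(),              # eq. (4)
        "contrast":    (np.abs(i - j) ** k * P ** n).sum(),         # eq. (5)
        "idm":         (P[off] / np.abs(i - j)[off] ** k).sum(),    # eq. (6)
        "max_prob":    P.max(),                                     # eq. (7)
        "correlation": ((i * j * P).sum() - mu_i * mu_j)
                       / np.sqrt(var_i * var_j),                    # eq. (8)
    }
```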

C. Histogram
The histogram is a first-order statistical method. The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(rk) = nk, where rk is the k-th gray level and nk is the number of pixels in the image having gray level rk. Some of the properties of the histogram that help to identify the defects in the fabric material are given in Eqs. (9) through (13):
1. Mean (9)
2. Standard deviation (10)
3. Entropy, H(x) = -Σk p(rk) log p(rk) (11)
4. Skewness (12), where m = 3
5. Kurtosis (13), where m = 4
IV. FABRIC DEFECT DETECTION
A fabric image which does not contain any defect is taken as the reference image. Since the number of gray levels affects the computation of the statistical techniques, a quantization procedure is adopted. The gray levels are grouped until their percentage of the distribution exceeds a certain threshold T, and the gray levels which are grouped together are replaced with their median in the image. This procedure gives a GLCM whose size is smaller than 256x256; the amount of size reduction is controlled by T. The GLCM is calculated on this reduced image, and its features are extracted and stored for each defect. When a test image is given, the same process is applied to it with the same threshold T. The Euclidean distance between the reference image and test image feature vectors is then calculated, and the test image is classified as non-defective if the distance is less than a chosen tolerance and as defective otherwise. A similar analysis is done using the histogram and the results are compared; a sketch of this detection step is given below.
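A sketch of the detection step described above. The grouping rule in quantise() is one reading of the quantisation description, and the tolerance is left as a parameter because its value is not stated in the text.

```python
import numpy as np

def quantise(gray, t_percent=20):
    # Group consecutive gray levels until they cover at least t_percent % of
    # the pixels, then replace each group by the median gray value of its pixels.
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size * 100.0
    out, start, acc = gray.copy(), 0, 0.0
    for g in range(256):
        acc += hist[g]
        if acc >= t_percent or g == 255:
            in_group = (gray >= start) & (gray <= g)
            if in_group.any():
                out[in_group] = int(np.median(gray[in_group]))
            start, acc = g + 1, 0.0
    return out

def is_defective(ref_features, test_features, tolerance):
    # Euclidean distance between the reference and test feature vectors.
    dist = np.linalg.norm(np.asarray(ref_features) - np.asarray(test_features))
    return dist >= tolerance
```

The feature vectors themselves can be obtained, for example, by applying the glcm_features() sketch above to the quantised reference and test images.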
As listed in Section III.A, fabrics with sixteen different defects were collected and images were captured from them. Each of the images was cropped at various locations. Among these, one is taken as the reference, which is defect-less. Five images are taken from the defective area and another five images are taken from the defect-less area, which gives a data set of 1116 images. Since the input images for each defect are from different fabrics, these cropped images are used as samples for the experimentation. This paper tries to identify the best features for each defect given a reference image.
Experimentation was done for various T values, and it was identified that T = 20 gives a better overall efficiency of 95.5% for the GLCM method. The distance parameter was also varied, and a distance of 4 was seen to give an optimal result. The GLCMs are therefore calculated for four different directions (0°, 45°, 90° and 135°) at a distance of 4 units, and the results are tabulated in Tables I through IV. The histogram is calculated for each image in the data set; the values of the different features specified in Section III.C are calculated for the reference image and the sample images, and the values are tabulated in Table V. For the histogram at quantization 25, a maximum efficiency of 85% is obtained.
In each table the first 16 rows correspond to the 16 kinds of defects considered and the 6 columns correspond to the 6 features specified in Section III.A. In each table, tij indicates the percentage accuracy of property j when used for classifying defect i:
Accuracy = (number of correctly classified images / total number of images for a defect) x 100

The overall efficiency for each feature f is calculated and listed in the 17th row:
Efficiency of feature f = (sum of the accuracies of feature f over all defects) / (total number of defects)
The maximum efficiency for each defect and the features that provide it are listed in the 7th and 8th columns respectively. Assuming the appropriate features are selected, the efficiency of this method is calculated by summing the maximum accuracies obtained for each defect across the various directions and quantisations.
TABLE I.

d10
d11
d12
d13
d14
d15
d16
eff

60
92
100
100
92
100
80
76.75

56
92
100
100
92
100
80
76.5

44
84
100
100
92
100
100
75

56
88
100
100
80
92
80
71.25

d1
d2
d3
d4
d5
d6
d7
d8
d9
d10
d11
d12
d13
d14
d15
d16
eff

f2
88
64
72
100
100
80
24
96
100
96
92
100
88
96
96
40
83.2

f3
100
92
28
24
60
100
40
100
100
64
80
100
84
96
40
28
71

f4
92
100
76
100
100
80
44
84
100
72
84
100
48
96
80
16
79.5

TABLE II.

d1
d2
d3
d4
d5
d6
d7
d8
d9
d10
d11
d12
d13
d14
d15
d16
eff

f1
92
92
60
100
100
84
36
96
100
96
92
100
96
96
88
16
84

f2
92
92
60
100
100
84
36
96
100
96
92
100
96
96
88
16
84

f3
100
88
28
24
40
100
40
92
100
44
76
100
76
96
40
32
67.25

TABLE III.

d1
d2
d3
d4
d5
d6
d7
d8
d9

f5
100
100
68
48
100
84
48
76
100
88
84
100
96
96
4
28
76.25

84
5
92
1,2
100 1,2,3,4,5
100 1,2,3,4,5
92
1,2,3
100 1,2,3,5
100
3

f6
28
76
76
56
76
76
56
80
96
56
88
80
48
40
68
76
67.25

Max
100
100
76
100
100
100
56
100
100
96
92
100
96
96
96
76

GLCM =45, D=4

3,5
4,5
4,6
1,2,4
1,2,4,5
3
6
3
1,2,3,4,5
1,2
1,2
1,2,3,4,5
5
1,2,3,4,5
1,2
6

d1
d2
d3
d4
d5
d6
d7
d8
d9
d10
d11
d12
d13
d14
d15
d16
eff

f1
88
76
28
44
76
80
24
100
100
84
92
40
100
100
100
16
71.75

f2
88
76
28
44
76
80
24
100
100
84
92
40
100
100
100
16
71.75

f3
100
84
24
0
100
100
20
96
100
56
96
100
100
96
44
16
70.75

f4
92
80
16
80
56
80
28
100
100
68
84
76
72
100
100
16
71.75

f5
100
100
20
72
76
88
20
80
100
72
92
72
100
100
100
28
76.25

f6
28
76
72
52
80
72
56
84
96
52
88
80
48
40
68
72
66.5

Max
100
100
72
80
100
100
56
100
100
84
96
100
100
100
100
72

3,5
5
6
4
3
3
6
1,2,3
1,2,3,4,5
1,2
3
3
1,2,3,5
1,2,4,5
1,2,4,5
6

GLCM =135, D=4


f4
92
100
76
100
100
80
44
84
100
64
84
100
40
100
84
16
79

f5
100
100
68
100
100
84
32
60
100
84
88
100
96
96
36
32
79.75

f6
28
76
76
56
76
76
60
80
96
56
88
80
48
40
68
80
67.75

Max
100
100
76
100
100
100
60
96
100
96
92
100
96
100
88
80

TABLE V.
3,5
4,5
4,6
1,2,4,5
1,2,4,5
3
6
1,2
1,2,3,4,5
1,2
1,2
1,2,3,4,5
1,2,5
4
1,2
6

GLCM =90, D=4

f1
f2 f3 f4 f5
100 100 100 100 100
64 64 40 52 68
20 20 16 16 12
40 40 24 12 76
100 100 100 52 100
96 96 100 100 100
28 28 56 28 48
56 56 44 84 32
100 100 100 100 100

52
88
80
48
40
68
72
66.25

GLCM =0, D=4


TABLE IV.

f1
88
64
68
100
100
80
24
96
100
96
92
100
88
96
96
40
83

84
88
100
100
68
100
72
78

72

f6
28
76
72
48
80
72
56
84
96

Max
100 1,2,3,4,5
76
6
72
6
76
5
100 1,2,3,5
100 3,4,5
56
3,6
84
4,6
100 1,2,3,4,5

d1
d2
d3
d4
d5
d6
d7
d8
d9
d10
d11
d12
d13
d14
d15
d16
eff

f1
72
60
100
96
56
40
40
48
100
48
32
64
84
96
16
52
62.75

f2
12
84
24
100
68
84
52
0
64
60
64
92
100
12
72
20
56.75

V.

HISTOGRAM GREY-LEVEL 25

f3
60
68
24
100
80
100
44
8
96
56
80
60
100
20
72
12
61.25

f4
40
100
52
96
76
44
76
72
72
68
56
88
16
48
68
24
62.25

f5
32
100
52
100
68
52
76
60
56
68
64
92
4
48
72
24
60.5

Max
72
100
76
100
80
100
76
72
100
68
64
92
100
96
72
52

1
4,5
1
2,3,5
3
3
5,6
5
1
5,6
2,6
2,6
2,3
1
2,3,6
1

V. CONCLUSION

On comparison of the two methodologies used to identify the fabric defects, it is evident from the observations that GLCM outperforms the histogram. The GLCM method offers a maximum accuracy of up to 95.5%. It can be seen that the horizontal and off-diagonal orientations perform better. The features energy, entropy, contrast and inverse difference moment give better accuracy across the defects. GLCM identifies d4, d5, d9 and d12 with high accuracy, but it is not so precise in identifying the defects d7, d16 and d3.

ACKNOWLEDGMENT
We wish to express our profound gratitude to Prof.
K.Gangatharan, HOD of Information Technology of Amrita
University. We also wish to express our sincere gratitude to
Prem who has provided the test image data set.
REFERENCES
[1] Sengottuvelan P., Wahi A. and Shanmugam A. (2008) Research Journal of Applied Sciences, 3(1), pp. 26-31.
[2] Srinivasan K., Dastor P. H., Radhakrishnaihan P., and Jayaraman S. (1992) J. Text. Inst., vol. 83, no. 3, pp. 431-447.
[3] Chin R. T. and Harlow C. A. (1982) IEEE Trans. Pattern Anal. Machine Intell., 4(6), 557-573.
[4] Smith M. L. and Stamp R. J. (2000) Computer Ind., 43, 73-82.
[5] Chin R. T. (1987) Computer Vision, Graphics, and Image Processing, 41(3), 346-381.
[6] Yang X., Pang G., and Yung N. (2005) IEEE Proceedings Vision, Image Processing.
[7] Haralick R. (1979) Proceedings of the IEEE, 67(5), 786-804.
[8] Wechsler H. (1980) Signal Processing, 2, 271-282.

[9]

Van Gool L., Dewaele P., and Oosterlinck A. (1985) Computer


Vision, Graphics and Image Processing, 29, 336- 357.
[10] Vilnrotter F., Nevatia R., and Price K. (1986) IEEE Transactions on
Pattern Analysis and Machine Intelligence, 8,76-89.
[11] Reed T. and Buf J. (1993) Computer Vision, Image Processing and
Graphics, 57(3),359-372.
[12] Kumar A. (Jan-2008) IEEE Trans. Of Industrial Electronics, 55(1).
[13] Tuceryan M. and Jain A. (1998) In Handbook of Pattern recognition
and computer vision, 235-276.
[14] Mahajan P. M., Kolhe S. R. and Patil P. M., A review of automatic fabric defect detection techniques.
[15] De Natale F. G. B. (1986) Intl. J. Pattern Recognition and Artificial
Intell., 10(8).
[16] Brodatz P. (1966) Texture: A Photographic album for artists and
designers, Dover, New York.
[17] Kauppinen H. (2000) Proc. IEEE Conf. Pattern Recognition,
Barcelona (Spain), 4, 803-806.
[18] Desoli G. S., Fioravanti S., Fioravanti R., and Corso D.,(1993) Proc.
Intl. Conf. Industrial Electronics, 3, 1871-1876.
[19] Ngan H.Y.T. and Pang G.K.H. (2009) IEEE Trans. on Automation
Science and Engineering, 6(1).
[20] Tsai I., Lin C., and Lin J. (Mar 1995) Text. Res. J., 65, 123-130.
[21] Rosler R. N. U. (1992) Mellind Texilberichte, 73, 292.

Survey Of Digital Image Steganography In DWT Domain


A. Ilavendhan
Computer Science and Engineering, Pondicherry Engineering College, Pondicherry, India
e-mail: ilavendhana62@pec.edu

Prof. D. Loganathan
Computer Science and Engineering, Pondicherry Engineering College, Pondicherry, India
e-mail: drloganathan@pec.edu

Abstract - Steganography is the art of hiding information, data or messages in a multimedia carrier, such as image, video and audio files. The purpose is to pass the information safely to the destination without any knowledge of others. The ultimate objectives of steganography are undetectability, robustness (resistance to various image processing methods and compression) and capacity of the hidden data. In this paper, we review a list of techniques of steganography in the DWT domain. Here the embedding process is performed in the transform domain (DWT) by modifying the coefficients in an appropriate manner. In addition, to increase the embedding capacity in gray-scale images we use block-pattern encoding techniques, and the solution for the same is dealt with separately.
Keywords-component; DWT; block-pattern encoding; Haar; Daubechies.

I. INTRODUCTION

Steganography is a type of hidden communication that literally means "covered writing" (from the Greek words stegano, "covered", and graphos, "to write"). The goal of steganography is to hide an information message inside a harmless cover medium in such a way that it is not possible even to detect that there is a secret message. Often throughout history, encrypted messages have been intercepted but not decoded. While this protects the information hidden in the cipher, the interception of the message can be just as damaging because it tells an opponent or enemy that someone is communicating with someone else. Steganography takes the opposite approach and attempts to hide all evidence that communication is taking place. Essentially, the information-hiding process in a steganographic system starts by identifying a cover medium's redundant bits (those that can be modified without destroying that medium's integrity).
The embedding process creates a stego medium by replacing these redundant bits with data from the hidden message. Modern steganography's goal is to keep its mere presence undetectable, but steganographic systems, because of their invasive nature, leave behind detectable traces in the cover medium by modifying its statistical properties, so eavesdroppers can detect the distortions in the resulting stego medium's statistical properties. The process of finding these distortions is called statistical steganalysis.
II. INFORMATION HIDING SYSTEM FEATURES
An information-hiding system is characterized by three different aspects that contend with each other, as shown in Fig. 1: capacity, security, and robustness. Capacity refers to the amount of information that can be hidden in the cover medium, security to an eavesdropper's inability to detect hidden information, and robustness to the amount of modification the stego medium can withstand before an adversary can destroy the hidden information. Generally speaking, information hiding relates to both watermarking and steganography.
A watermarking system's primary goal is to achieve a high level of robustness, that is, it should be impossible to remove a watermark without degrading the data object's quality. Steganography, on the other hand, strives for high security and capacity, which often entails that the hidden information is fragile; even trivial modifications to the stego medium can destroy it.

Figure 1. Information Hiding System Features (capacity, security, robustness).

III. DISCRETE WAVELET TRANSFORM
The transform of a signal is just another form of representing the signal; it does not change the information content present in it. In most Digital Signal Processing (DSP) applications, the frequency content of the signal is very important. The Fourier Transform (FT) is probably the most popular transform used to obtain the frequency spectrum of a signal, but it is only suitable for stationary signals, i.e., signals whose frequency content does not change with time. While the Fourier Transform tells how much of each frequency exists in the signal, it does not tell at which time these frequency components occur. Signals such as images or speech have different characteristics at different positions in space or time, i.e., they are non-stationary. Most biological signals too, such as the electrocardiogram and the electromyogram, are non-stationary. To analyze these signals, both frequency and time information are needed simultaneously, i.e., a time-frequency representation of the signal is needed. The Wavelet Transform provides such a time-frequency representation. It uses a multi-resolution technique by which different frequencies are analyzed with different resolutions.
The Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT) are full-frame transforms, and hence any change in the transform coefficients affects the entire image, except if the DCT is implemented using a block-based approach. The DWT, however, has spatial-frequency locality, which means that if a signal is embedded it will affect the image only locally. Hence a wavelet transform provides both a frequency and a spatial description of an image.

The forward 2-D discrete wavelet transform can be implemented using a set of up-samplers, down-samplers, and recursive two-channel digital filter banks, as shown in Fig. 2. There are many available filters, although the most commonly used are the Haar wavelet filters and the Daubechies filters; in this paper, we use the Daubechies 4 filter in the implementation of the proposed technique. Each of these filters decomposes the image into several frequency bands. When applying the discrete wavelet transform to an image, four different sub-images are obtained, and each level of the wavelet decomposition yields four sub-images of the same size. Let LLk stand for the approximation sub-image and HLk, LHk, HHk stand for the horizontal, vertical and diagonal high-frequency detail sub-images respectively, where k = 1, 2, 3, ... is the scale or level of the wavelet decomposition. The four sub-images are obtained as follows:
- LL: a coarser approximation of the original image containing the overall information about the whole image. It is obtained by applying the low-pass filter on both the x and y coordinates.
- HL and LH: obtained by applying the high-pass filter on one coordinate and the low-pass filter on the other coordinate.
- HH: shows the high-frequency component of the image in the diagonal direction. It is obtained by applying the high-pass filter on both the x and y coordinates.
Since the human eye is much more sensitive to the low-frequency part (the LL sub-image), LL is the most important component in the reconstruction process. A minimal decomposition sketch is given after Fig. 2.

Figure 2. Three-level wavelet decomposition (approximation band and horizontal, vertical and diagonal detail bands).
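A minimal decomposition sketch using PyWavelets (an assumed library, not one prescribed by the paper): one 2-D DWT level with the Daubechies-4 filter, yielding the approximation band and the three detail bands discussed above.

```python
import numpy as np
import pywt

def dwt_subbands(image, wavelet="db4"):
    # pywt.dwt2 returns the approximation band and the (horizontal, vertical,
    # diagonal) detail bands of a single decomposition level.
    approx, (horiz, vert, diag) = pywt.dwt2(np.asarray(image, dtype=np.float64), wavelet)
    return approx, horiz, vert, diag

def reconstruct(approx, horiz, vert, diag, wavelet="db4"):
    # Inverse transform used after the detail coefficients have been modified.
    return pywt.idwt2((approx, (horiz, vert, diag)), wavelet)
```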
IV. REVIEW OF DWT BASED STEGANOGRAPHY

In this section we discuss the various techniques used for steganography in the discrete wavelet domain.
A. High Capacity and Security Steganography (HCSSD)
In [1] the authors proposed High Capacity and Security Steganography using the Discrete Wavelet Transform (HCSSD). The wavelet coefficients of both the cover and the payload are fused into a single image using embedding strength parameters alpha and beta. The cover and payload are pre-processed to reduce the pixel range to ensure that the payload is recovered accurately at the destination. It is observed that the capacity and security are increased.
B. Data Hiding in DWT using PMM
In [2] authors proposed a novel steganographic method
for hiding information in the transform domain of the gray
scale image. The proposed approach works by converting
the gray level image in transform domain using discrete
integer wavelet technique through lifting scheme. This
approach performs a 2-D lifting wavelet decomposition
through Haar lifted wavelet of the cover image and
computes the approximation coefficients matrix CA and
detail coefficients matrices CH, CV, and CD. Next step is
to apply the PMM technique in those coefficients to form
the stego image.

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

Pixel Mapping Method (PMM) is a method for information hiding within the spatial domain of any gray-scale image. Embedding pixels are selected based on a mathematical function which depends on the pixel intensity value of the seed pixel, and its 8 neighbours are selected in the counter-clockwise direction. Before embedding, a check is done to find out whether the selected embedding pixels or their neighbours lie at the boundary of the image or not. Data embedding is done by mapping two or four bits of the secret message into each of the neighbour pixels based on some features of that pixel.
C. A DWT Based Approach for Steganography
In [3] author proposed a new steganography technique
which embeds the secret messages in frequency domain.
According to different users demands on the embedding
capacity and image quality, the proposed algorithm is
divided into two modes and 5 cases. Unlike the space
domain approaches, secret messages are embedded in the
high frequency coefficients resulted from Discrete Wavelet
Transform. Coefficients in the low frequency sub-band are
preserved unaltered to improve the image quality.
D. Modified High Capacity Image Steganography
In [4] the author proposed a modified high-capacity image steganography technique that depends on the wavelet transform, with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. Here the cover image is first tested for the presence of any pattern; if it passes the histogram test, the cover image is accepted. Image pre-processing and correction are then performed, the DWT is taken, and from it the threshold is calculated. Next come message partitioning and key encryption; the encrypted message is embedded in the DWT domain, and finally the stego image is obtained.
E. Singular Value Decomposition in Wavelet Domain
In [5] a new watermarking algorithm is proposed based on Singular Value Decomposition (SVD) and the discrete wavelet transform (DWT). The algorithm uses a gray image as the watermark, increasing the embedded information capacity, and can satisfy the transparency and robustness requirements of a watermarking system very well. The algorithm performs a 3-level wavelet transform of the host image using the Haar wavelet and embeds the watermark in LL3. An Arnold transform is then applied to the watermark image, giving the watermarked coefficient matrix. The stego image is obtained by applying the reverse wavelet transform to the modified image and then changing the double-precision real numbers to unsigned 8-bit integers; thus the watermarked image, in which the watermark is embedded, is obtained.

F. Steganography Approach using Biometrics
The steganography method used in this paper [6] is based on biometrics, and the biometric feature used to implement steganography is the skin tone region of images. Here secret data is embedded within the skin region of the image, which provides an excellent secure location for data hiding. For this, skin tone detection is performed using the HSV (Hue, Saturation and Value) color space. Additionally, secret data embedding is performed using a frequency domain approach, DWT (Discrete Wavelet Transform), since DWT outperforms DCT (Discrete Cosine Transform). Secret data is hidden in one of the high-frequency sub-bands of the DWT by tracing skin pixels in that sub-band. The different steps of data hiding are applied by cropping an image interactively.
G. DWT Based Digital Watermarking Technique
In [7] the authors proposed a discrete wavelet transform based digital image watermarking technique. For the embedding process, the binary watermark signal is first generated by a pseudo-random bit generator; the watermark signal is considered as a binary sequence which is embedded into the high-frequency (HL and HH) bands of the blue channel. For the detection process, the correlation between the high-frequency band DWT coefficients of the watermarked image and the watermark signal is compared with a predefined threshold to determine whether the watermark is present or not.
V. BLOCK PATTERN ENCODING TECHNIQUES
In block pattern encoding techniques [8], every 2x2 block in a secret binary image is regarded as a pattern with a 4-bit binary value in which each bit of 0 corresponds to a black pixel and each bit of 1 corresponds to a white pixel. Data encoding is accomplished by changing the block bit values so that the corresponding code of the resulting block pattern becomes just some bits of the input message data to be encoded. A block pattern encoding table is constructed, which maps each block pattern into a certain code. The number of possible block patterns in a 2x2 block is 16. Part of a possible block pattern encoding table is shown in Table I; such a table is just one of the many possible ones which may be used for encoding. Here the pattern 1111 is replaced by 1 and 1110 by 10. Similarly, the 16 possible 2x2 block patterns occurring in the secret image are replaced to obtain the encoded bits of the secret binary image; a small sketch follows Table I.
TABLE I. BLOCK PATTERN ENCODING TABLE
Binary Value   Encoded Data
1111           1
1110           10
1101           11
1011           01
0110           011
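A small sketch of the 2x2 block-pattern encoding. It uses only the partial mapping recovered in Table I and, as an assumption made to keep the sketch runnable, simply passes through any 4-bit pattern missing from that table.

```python
import numpy as np

CODE_TABLE = {"1111": "1", "1110": "10", "1101": "11", "1011": "01", "0110": "011"}

def encode_secret(binary_image):
    """binary_image: 2-D array of 0/1 values with even height and width."""
    bits = []
    h, w = binary_image.shape
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            block = binary_image[y:y + 2, x:x + 2].ravel()
            pattern = "".join(str(int(b)) for b in block)    # 4-bit pattern of the block
            bits.append(CODE_TABLE.get(pattern, pattern))    # pattern -> (shorter) code
    return "".join(bits)                                     # encoded 1-D bit stream
```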

Figure 3. Block Diagram for Embedding Secret Image

VI. PROPOSED IMAGE DATA HIDING TECHNIQUES


In this section we present a new method for hiding a secret image S inside a cover image C using the Daubechies 4 DWT and block pattern encoding. The schematic block diagram of the whole process is shown in Fig. 3 and Fig. 4.
A. The Embedding Procedure
Let C and S be the cover image and the binary secret image respectively. The stego image G is obtained by the following procedure (a sketch is given after the list):
- The 2-D binary secret image S is encoded using 2x2 block pattern encoding.
- The encoded 1-D bit stream is decomposed into 3-bit blocks and converted to the corresponding decimal sequence for embedding into the cover image C.
- The cover image C is decomposed using the Daubechies filter (Db4): applying the 2-D DWT to C yields the four sub-bands LL, HL, LH and HH, whose coefficients are modified to embed the secret data S.
- The 2-D IDWT is applied to the modified sub-bands to obtain the stego image G.
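A sketch of the embedding procedure under stated assumptions: PyWavelets is used for the Db4 DWT/IDWT, and the decimal symbols are simply added to the leading coefficients of the diagonal detail band, since the paper does not fix the exact coefficient-modification rule.

```python
import numpy as np
import pywt

def embed(cover, encoded_bits, wavelet="db4", strength=1.0):
    # 3-bit groups of the encoded stream -> decimal symbols 0..7
    symbols = [int(encoded_bits[i:i + 3].ljust(3, "0"), 2)
               for i in range(0, len(encoded_bits), 3)]
    LL, (H, V, D) = pywt.dwt2(np.asarray(cover, dtype=np.float64), wavelet)
    flat = D.ravel().copy()
    n = min(len(symbols), flat.size)                 # capacity check
    flat[:n] += strength * np.asarray(symbols[:n], dtype=np.float64)
    stego = pywt.idwt2((LL, (H, V, flat.reshape(D.shape))), wavelet)
    return stego                                     # stego image G
```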
B. The Extraction Procedure
The stego image G is taken as the input to this process, and the binary secret image S is obtained by the following procedure:
- Apply the 2-D DWT to the stego image G.
- Select the sub-bands and extract the decimal sequence.
- Convert the decimal sequence into the corresponding binary values.
- Match the binary values with the block pattern encoding table to retrieve the secret image S.

Figure 4. Block Diagram for Extracting Secret Image

VII. CONCLUSION
Steganography is used for secret communication. In this paper we have documented a few techniques of DWT based steganography. The main goal of the proposed steganography method is to improve the embedding capacity in the cover image. The implementation of the proposed method is dealt with separately.

REFERENCES
[1] H. S. Majunatha Reddy and K. B. Raja, "High Capacity Security Steganography using Discrete Wavelet Transform," IJCSS, India, vol. 3, pp. 462-472.
[2] S. Bhattacharyya and G. Sanyal, "Data Hiding in Image in Discrete Wavelet Domain using PMM," IJECE, India, vol. 5, pp. 359-367, 2010.
[3] Po-Yueh Chen and Hung-Ju Lin, "A DWT based approach for image steganography," IJASE, vol. 3, pp. 275-290, 2006.
[4] Ali Al-Ataby and Fawzi Al-Naima, "A Modified High Capacity Image Steganography Technique Based on Wavelet Transform," IAJIT, Iraq, vol. 7, pp. 358-364, October 2010.
[5] W. Yang and X. Zhao, "A digital watermarking algorithm using singular value decomposition in wavelet domain," IEEE Trans. Image Processing, pp. 2829-2832, 2011.
[6] Po-Yueh Chen and Hung-Ju Lin, "A DWT based approach for steganography using biometrics," IEEE Computer Society, pp. 39-43, 2010, doi: 10.1109/DSDE.2010.
[7] S. M. Mohidul Islam, R. Debnath, and S. K. Hossain, "DWT based digital watermarking technique and its robustness on image rotation, scaling, JPEG compression, cropping and multiple watermarking," ICICT, Bangladesh, pp. 246-249, March 2007.
[8] I-Shi Lee and Wen-Hsiang Tsai, "Data hiding in images by dynamic programming based on human visual model," Elsevier Pattern Recognition, vol. 42, pp. 1604-1611, January 2009.

KEY FRAME EXTRACTION BASED ON BLOCK BASED HISTOGRAM


DIFFERENCE AND EDGE MATCHING RATE
S.Vigneswaran

R.Bremananth

Research Scholar, Department of Computer Applications


Anna University of Technology, Coimbatore
Coimbatore, India.

Assistant Professor, Information Technology Department


Sur University College
Sur, Oman

Abstract - This paper presents a new approach for key frame extraction based on block-based histogram difference and edge matching rate. Firstly, the histogram difference of every frame is calculated, and then the edges of the candidate key frames are extracted. Finally, the edges of adjacent candidate frames are matched: if the edge matching rate is above 50%, the current frame is deemed a redundant key frame and is discarded. The experimental results show that the method proposed in this paper is accurate and effective for key frame extraction, and that the extracted key frames are a good representation of the main content of the given video.

Keywords - Histogram, Key frame Extraction, Edge matching.

I. INTRODUCTION
With the development of multimedia information technology, the content and the expression form of ideas have become increasingly complicated, and how to effectively organize and retrieve video data has become the emphasis of study. The technology of key frame extraction is a basis for video retrieval. The key frame, also known as the representative frame, represents the main content of the video. Using key frames to browse and query the video data greatly reduces the amount of data to be processed; moreover, key frames provide an organizational framework for video retrieval. In general, key frame extraction follows the principle that [1] the quantity is more important than the quality, and removes redundant frames in the event that the representative features are unspecific. Currently, key frame extraction algorithms [2] can be categorized into the following four classes:
1. Content based approaches.
2. Unsupervised clustering based approaches.
3. Motion based approaches.
4. Compressed video stream based approaches.
In order to overcome the shortcomings of the above algorithms, this paper proposes a new approach for key frame extraction based on the image histogram difference and edge matching rate, which calculates the histogram difference of two consecutive frames, then chooses the current frame as a candidate key frame when its histogram difference is above the threshold point, and finally matches the edges between adjacent candidate frames to eliminate redundant frames. The experimental results show that the key frames extracted by the method reflect the main content of the video and that the method is a good approach to determine key frames [7]. The method for key frame extraction consists of three steps, as shown in Figure 1.
To extract a robust frame difference from consecutive frames, we use a verified x2 (chi-square) test, which shows good performance compared with existing histogram based algorithms, and, to increase the detection effect of color value subdivision, a color histogram comparison using the weight of the brightness grade. Also, to reduce the loss of spatial information and to solve the problem of two different frames having a similar histogram, we use local histogram comparison.

II. PROCEDURE OF THE KEY FRAME EXTRACTION

Fig. 1. Key frame Extraction Procedure

Color histogram comparison (dr,g,b(fi, fj)) is calculated by histogram comparison of each color space of two adjacent frames (fi, fj) [5]. Among the static analysis methods for emphasizing the difference of two frames, the x2 test comparison (dwx2(fi, fj)) is an efficient method to detect candidate key frames by comparing the change of the histogram, and it is defined as Equation (1); a sketch is given after the equation.
dwx2(fi, fj) = Σk (Hi(k) - Hj(k))² / max(Hi(k), Hj(k)), if Hi(k), Hj(k) ≠ 0; 0 otherwise    (1)
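A sketch of this chi-square histogram difference for two consecutive gray-scale frames (uint8 arrays; 256 bins are assumed). A frame whose difference from its predecessor exceeds the chosen threshold becomes a candidate key frame.

```python
import numpy as np

def chi_square_diff(frame_i, frame_j, bins=256):
    Hi = np.bincount(np.asarray(frame_i, dtype=np.uint8).ravel(), minlength=bins).astype(np.float64)
    Hj = np.bincount(np.asarray(frame_j, dtype=np.uint8).ravel(), minlength=bins).astype(np.float64)
    denom = np.maximum(Hi, Hj)
    valid = denom > 0              # Eq. (1) is defined only where the histograms are non-zero
    return float(np.sum((Hi[valid] - Hj[valid]) ** 2 / denom[valid]))
```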

The histogram based method may have a problem in detecting two different frames with a similar color distribution as the same image, since it does not use spatial information. This problem can be solved by comparing local histogram distributions obtained by dividing the frame area: Hi(k, bl) is the histogram distribution at position k of block bl of frame fi, and m is the total number of blocks [8]. Using the merits of subdivided local histogram comparison and applying a weight to each color space, an expanded value of difference for the statistical method can be obtained.
In this paper, we choose an unsharp masking technique, i.e.,
df(n, n-1) = d(n, n-1) - d'(n, n-1), if d(n, n-1) ≥ d'(n, n-1); 0 otherwise
Here, the 1-D frame difference signal d(n, n-1) can be either dh(n, n-1) or dp(n, n-1), d'(n, n-1) denotes the low-pass filtering and/or median filtering result of d(n, n-1), and df(n, n-1) denotes the unsharp masking output, respectively. After sequentially applying unsharp masking to both the histogram difference and the pixel difference features, we obtain the filtered signal df(n, n-1) as shown in Figure 2. The candidate key frames are obtained from the spatial information of the frame by the local histogram; in this report [10], these value-of-difference extraction formulas are combined and used for robustness of the value-of-difference extraction.

Fig 2 . Example Synchronization of 10 frames.


FPS -Frames per Second.
t- Time gap (temporal gap).
T-Total time
Figure 2 shows the flow of ten frames in sequence over the prescribed time. The frames of the various events can be synchronized and fall under the following categories:
1. If t increases, the number of frames in time T decreases, i.e. t and T are inversely proportional.
2. Both t and T are idle (0) for the key frame.
So we can detect candidate key frames by recognizing these peaks. However, the sensitivity of these features to camera motion, object motion, and other noise strongly influences the detection performance. In order to remove this phenomenon, a filtering scheme is needed that reduces the feature signal values in high-activity regions while minimizing the effect on those at actual shot changes.

Histogram based comparison methods are highly


preferred because they are robust to detrimental effects such
as camera and object motion and changes in scale and
rotation. However, such methods sometimes fail to identify
changes between shots having similar color content or
intensity distribution. On the other hand, pixel-wise
comparison methods can well identify changes between shots
having a similar color content or intensity distribution, but
they are very sensitive to movements of cameras or
objects[9]. Since the adopted pixel difference feature is
extracted from DC images, it becomes less sensitive to small
object and camera motions. However, it still is not enough
for reliable shot change detection. The main assumption for
candidate key frame detection is as follows: Within a single
shot, inter-frame variations are small, which results in a
slowly varying feature signal. However, an abrupt change in
histogram difference causes a sharp peak in a feature signal.
The above treatments do well in reflecting the main content of the given video, but a small amount of redundancy remains, which needs further processing to eliminate [11]. Because the candidate key frames are mainly based on the histogram difference, which depends on the distribution of pixel gray values in the image space, redundancy may arise when two images with the same content differ greatly in their gray-value distributions.
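A minimal sketch of the block-wise local histogram comparison described above is given below. It is an illustration only: the block grid, bin count, and gray-level input are assumptions, not the paper's exact parameters.

import numpy as np

def block_histogram_difference(f1, f2, blocks=(4, 4), bins=16):
    # Sum of normalized per-block histogram differences between two
    # gray-level frames f1 and f2 (uint8 arrays of equal shape).
    h, w = f1.shape
    bh, bw = h // blocks[0], w // blocks[1]
    total = 0.0
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            b1 = f1[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            b2 = f2[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            h1, _ = np.histogram(b1, bins=bins, range=(0, 256))
            h2, _ = np.histogram(b2, bins=bins, range=(0, 256))
            total += np.abs(h1 - h2).sum() / b1.size   # L1 distance per block
    return total

# two frames with identical global histograms but different spatial layout
a = np.zeros((64, 64), np.uint8); a[:, :32] = 255
b = np.zeros((64, 64), np.uint8); b[:32, :] = 255
print(block_histogram_difference(a, a), block_histogram_difference(a, b))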


Fig 3: Various Key frame Extraction

As edge detection can remove the irrelevant information


and retain important structural properties of the image, we
can extract the edges of objects in the image to eliminate
redundancy [8]. At present, there are many edge detection
algorithms, which are mainly based on the differentiation and
combined with the template to extract edges of images. Edge
detection operators that are commonly used are: Robertss
operator, Sobel operator, Prewitt operator and the Laplace
operator etc. Here we extract edges of frames by Prewitt
operator.
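The sketch below illustrates Prewitt edge extraction and a simple edge matching rate between two frames; it is an assumed formulation only (the binarization threshold and the matching-rate definition are not taken from the paper).

import numpy as np
from scipy.ndimage import prewitt

def edge_map(frame, thresh=50.0):
    # Binary edge map from the Prewitt gradient magnitude.
    gx = prewitt(frame.astype(float), axis=1)
    gy = prewitt(frame.astype(float), axis=0)
    return np.hypot(gx, gy) > thresh

def edge_matching_rate(f1, f2):
    # Fraction of edge pixels of f1 that are also edge pixels of f2;
    # a low rate suggests the two candidate key frames are not redundant.
    e1, e2 = edge_map(f1), edge_map(f2)
    return (e1 & e2).sum() / max(e1.sum(), 1)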


Table I: KFE comparisons

Methods      Initial frame   Ith frame   KFE      Final frame
Spatial      0.59            0.58        0.6      0.85
Temporal     3.52            3.50        3.62     3.83
Histogram    1.7             1.65        1.75     2.71
% of KFE     4.15 %          4.06 %      3.88 %   3.63 %

III.EXPERIMENTAL RESULTS AND ANALYSIS


An example video is used in the experiment. Through analysis, the video contains 10 shots. With our algorithm, the first frame of each sub-shot and the extracted key-frames are shown in the figure. As can be seen from the figure, the more fiercely the shot changes, the more sub-shots and key-frames are extracted; conversely, the less fiercely the shot changes, the fewer sub-shots and key-frames are obtained. The number of key frames extracted is closely related to the intensity of the changes, but has nothing to do with the length of the shot. Table I also shows the comparisons of the extraction and the percentage of KFE (Key Frame Extraction).


The main contribution of the proposed approach is the


consideration of both feature extraction and distance
computation as a whole process. With a video shot
represented by key-frames corresponding to feature points in a
feature space, a new metric is defined to measure the distance
between a query image and a shot based on the concept of
Nearest Feature Line (NFL).The key step is to find a way for
video abstraction, as this will help more for browsing a large
set of video data with sufficient content representation for the
frames extracted.
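For illustration, a minimal sketch of a Nearest Feature Line (NFL) distance is shown below: the query feature is projected onto the line through each pair of key-frame features, and the smallest point-to-line distance is taken. This is a generic formulation under assumed feature vectors, not necessarily the exact metric of the cited work.

import numpy as np
from itertools import combinations

def nfl_distance(query, keyframe_feats):
    # Distance from a query feature vector to the nearest feature line
    # spanned by any pair of key-frame feature vectors.
    q = np.asarray(query, float)
    best = np.inf
    for f1, f2 in combinations(np.asarray(keyframe_feats, float), 2):
        d = f2 - f1
        t = np.dot(q - f1, d) / np.dot(d, d)   # projection parameter along the line
        proj = f1 + t * d                      # foot of the perpendicular
        best = min(best, np.linalg.norm(q - proj))
    return best

print(nfl_distance([0.5, 1.0], [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]))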


Fig 4 :Sample frames

IV.CONCLUSION
The paper presents a method based on the image
histogram difference and edge matching rate for key frames
extraction. The approach avoids shot segmentation and has a
good adaptability. The entire experimental results show that
the algorithm has the high accuracy to detect key frames and
the extracted key frames represent the main content of the
given video. Meanwhile, the algorithm provides a good basis
for video retrieval. However, when objects move against a complicated background, errors occur in the extracted edges of the objects, leading to a certain redundancy, and further study is needed. Furthermore, the
method can be employed to both shots and clips, and can be
easily extended to any other useful visual or semantic
features and distance metrics.
REFERENCES

Fig 5: Extraction Procedure


[1] Yingying Zhu and Dongru Zhou, An Approach of Key Frame


Extraction from MPEG Compressed Video, Computer Engineering, Vol.
30, pp.12-13, 2004.
[2]
Min Zhi, Key Frame Extraction from Scenery Video, IEEE
International Conference on Wavelet Analysis and Pattern Recognition,
2007.
[3] Jiawei Rong, Wanjun Jin and Lide Wu, Key Frame Extraction Using
Inter-Shot Information, IEEE International Conference on Multimedia and Expo, 2004.
[4] A. Divakaran, R. Radhakrishnan and K.A. Peker, Motion activity based
extraction of key-frames from video shots, Proc. of IEEE ICIP, Vol.1,
pp.932-935, September 2002.
[5] Priyadarshinee Adhikari, Neeta Gargote, Jyothi Digge, and B.G.
Hogade, Abrupt Scene Change Detection, World Academy of Science,
Engineering and Technology 42 2008.
[6] Guozhu Liu, and Junming Zhao ,Key Frame Extraction from MPEG
Video Stream ,College of Information Science & Technology, Qingdao
Univ. of Science & Technology,266061,P. R. China ,26-28,Dec. 2009, pp.
007-011.
[7] Z. Cernekova, I. Pitas, and C. Nikou, Information theory-based shot
cut/fade detection and video summarization, IEEE Trans. Circuits Syst.
Video Technol., vol. 16, no. 1, pp. 82-91, Jan. 2006.


[8] Costas Cotsaces, Nikos Nikolaidis, and Ioannis Pitas, Video shot
detection and condensed representation: a review, IEEE Signal Processing,
vol. 23, no. 2, pp. 28-37, 2006.
[9] K. Huang, C. Chang, Y. Hsu, S. Yang. KeyProbe: A Technique for
Animation Keyframe Extraction The Visual Computer, Volume 21, pp.
532-541, 2005.
[10] Lijie Liu, Student Member, IEEE, and Guoliang Fan, Member, IEEE ,
Combined Key-frame Extraction and Object-based Video Segmentation .
[11] A. Rahimi, Fast connected components on images, http://web.media.mit.edu/~rahimi/2001.
[12] B.T. Truong and S. Venkatesh, Video abstraction: A system- atic
review and classification, ACM Trans. on Multimedia Computing,
Communications and Applications, vol. 3, no. 1, 2007.


A Distributed Process System Architecture for Biological Data Analysis


Kyoung-Soon Hwang*, Young-Bok Cho*, Chan-Hee Lee+, Keon-Myung Lee*

* Dept. of Computer Science, Chungbuk National University, BK21, PT-ERC, Cheongju, 361-763, Korea
+ Dept. of Microbiology, Chungbuk National University, Cheongju, 361-763, Korea

Abstract With the advent of advanced computing


technology, the limit of computer applications could be
expanded to much more than we think. In recent years, a
huge and complicated system is rapidly changing into a
distributed computing environments based on web service. In
the distributed computing environment, the web service
system is able to effectively solve difficult and time-consuming
biological data analysis tasks such as a multiple sequence
alignment, motif finding, a protein structure prediction, and
so on. Meanwhile, the analyses of biological data have been
developed by approaches of the machine learning and
statistical methods, such as the Artificial Neural Network, the
Genetic algorithms, the Support Vector Machine, and the
multiple linear regression, etc. In this paper we propose the
distributed process system architecture for an analysis of
biological data using a reuse of resources such as the remote
devices (personal computers) and the existing applications
made of R, MATLAB, C, JAVA, etc.
Keywords- Bioinformatics; Multiple sequence alignment; web service; Genetic Algorithms; Dynamic programming

I.

INTRODUCTION

One of the main problems of the Bioinformatics is the


alignment of multiple sequences. The multiple sequence
alignments (MSAs) [1], [2] are a sequence alignment of
three or more biological sequences such as DNA, RNA, and
protein. The MSAs are the elementary step for any
phylogenetic or protein functional analysis. So, most
biological modeling methods require the MSAs at some
point. MSAs typically use dynamic programming techniques to obtain an optimal alignment. However, finding the global optimum for n sequences is considered an NP-complete problem, so most MSA approaches use heuristic algorithms for progressive alignment. A heuristic method gives only an approximate solution to a given problem; it is not guaranteed to reach the global optimum, but it is very useful for time-consuming tasks.
Meanwhile, in a distributed environment, the web
service is providing a paradigm for building an open
environment to reuse remote devices and applications over
web connections [3]. Thus, a web service can be viewed as a logical system that consists of several physically remote computers. Moreover, this technology is able to perform a series of workflow steps as part of an application so that two or more remote processors carry out specified tasks concurrently. In this way, the web service can provide the results of a time-consuming multiple sequence alignment to field biologists faster by using distributed applications and systems effectively. In this
paper, we propose the distributed process system
architecture for the biological data analysis based on the
web service.
The remainder of this paper is organized as follows:
Section 2 briefly presents several related works on the web
service and algorithms concerning the MSAs. Section 3
introduces the proposed methodology for the system
implementation. Finally section 4 draws the conclusions.
II.

RELATED WORKS

A. Web service
Web Service [4] is a model of the distributed component
based on the XML. It is the standard technology for the
implementation of the service-oriented architecture
through the interaction among deployed various services
on the internet. The advantage of the web services is as
followings.
1. Using the HTTP protocol.
2. Platform and language independent (loosely coupled).
3. Synchronous and asynchronous messaging.
4. Reuse of existing system resources.


Figure 1. The component diagram of Web Service



Figure 1 shows the basic component diagram of the web


service. The web services are published, found, and used
through the web as shown in Figure 1. It performs
functions that can be anything from simple requests to
complicated business processes. This property of web services is very useful in the analysis of large amounts of biological data.
B. Dynamic Programming
Sequence alignment is an important application where dynamic programming (DP) [2], [5] is essential. The goal of DP is to find an optimal alignment with the lowest total cost. Typically, the most widely used DP algorithm is the Needleman-Wunsch algorithm. The
basic idea is to build up the best alignment by using
optimal alignments of smaller subsequences.
It considers all possible pairs of residues from the two sequences and uses two matrices: the score matrix and the traceback matrix [8].
This algorithm consists of three steps:
1. Initialization of the score matrix.
2. Calculation of scores and filling the traceback matrix.
3. Deducing the alignment from the traceback matrix.
A matrix D(i, j) indexed by residues of each sequence is
built recursively, such that

D(i, j) = max { D(i-1, j-1) + s(x_i, y_j)   (match/mismatch),
                D(i-1, j) - g                (gap in x),
                D(i, j-1) - g                (gap in y) }                (1)

subject to boundary conditions. Here s(i, j) is the substitution score for residues i and j, and g is the gap penalty. The element with the maximum value can be located using Equation (1). It works in
the same way regardless of the length or complexity of
sequences, and guarantees to find the best alignment.
However, finding the global optimum for n sequences has been shown to be an NP-complete problem. Thus, most MSA algorithms use heuristic approaches such as FASTA and BLAST.
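A compact sketch of the Needleman-Wunsch recursion of Equation (1) is given below; the match/mismatch scores and gap penalty are illustrative values, not taken from the paper.

def needleman_wunsch(x, y, match=1, mismatch=-1, gap=-2):
    # Fill the Needleman-Wunsch score matrix D and return the
    # optimal global alignment score D[len(x)][len(y)].
    n, m = len(x), len(y)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):              # boundary: prefixes aligned to gaps
        D[i][0] = D[i - 1][0] + gap
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if x[i - 1] == y[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + s,   # match/mismatch
                          D[i - 1][j] + gap,     # x[i-1] aligned to a gap
                          D[i][j - 1] + gap)     # y[j-1] aligned to a gap
    return D[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))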
C. Genetic Algorithms
The genetic algorithm (GA) [2], [6] has been used for
the MSAs production. In the iterative MSAs, the GA is the
other method such as dynamic programming for improving
MSAs. GA is a population-based stochastic search
algorithm for combinatorial or numerical optimization
problems [3].
The GA consists of the following ten steps:
1. Determine the genotype symbolically.
2. Generate a random population of n chromosomes for sequence selection (population initialization).
3. Evaluate the fitness function of each chromosome in the population.
4. Select two parent chromosomes.
5. Cross over the parents with the crossover probability.
6. Mutate the new offspring with the mutation probability.
7. Place the new offspring in a new population.
8. Use the newly generated population for a further run of the algorithm.
9. If the end condition is satisfied, stop and return the best solution in the current population.
10. Go to step 3.


Figure 2 shows the simplified flow chart of the GA. In this way, a large set of sequences is divided into several pieces, which are then repeatedly rearranged with the introduction of gaps at varying positions.

Figure 2. Simplified flow chart of a Genetic Algorithm
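A minimal generic GA loop corresponding to the steps above is sketched below. The fitness function, binary encoding, and operator rates are placeholders, not the paper's actual MSA scoring.

import random

def genetic_algorithm(fitness, genome_len, pop_size=20, pc=0.8, pm=0.02, gens=50):
    # Generic GA over fixed-length binary chromosomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                        # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(scored[:pop_size // 2], 2)   # selection
            cut = random.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:] if random.random() < pc else p1[:]  # crossover
            child = [b ^ 1 if random.random() < pm else b for b in child]   # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy fitness: count of 1-bits, standing in for an alignment score
best = genetic_algorithm(fitness=sum, genome_len=30)
print(sum(best))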
D. Multiple Sequence Alignment
MSAs are a sequence alignment of three or more biological sequences such as DNA, RNA, and protein. The objective of MSAs is to align sequences in order to analyze the biological relationship between the input sequences. For example, after obtaining orthologs and paralogs from many different species, we can easily identify the conserved regions through the analysis of MSAs.
Most MSA approaches use heuristic methods for progressive alignment; a heuristic method gives only an approximate solution to a given problem. FASTA and BLAST are the heuristic methods generally used for string comparison in MSAs.
FASTA was developed by Lipman and Pearson in 1985. The algorithm focuses on segments in which there is absolute identity between the two compared strings, and it can use the alignment Dot-Plot matrix to find these identical regions, as shown in Figure 3 [10].


The BLAST algorithm was developed by Altschul, Gish,


Miller, Myers and Lipman in 1990. This algorithm arose from the need to increase the speed of FASTA.

Figure 3. Alignment Dot-Plot Matrix.

In this paper, we use part of Bioconductor [11], one of the R package collections, for the MSAs. Multiple sequences can be imported into R as FASTA-formatted alignments, and the computation of similarity distance matrices and the mapping of sequence positions to gapped alignments can then be performed. Figure 4 shows the result of the sequence alignment after running the R scripts for the MSAs.

Figure 4. The processing result of MSAs

III.

PROPOSED METHODOLOGY

Web services allow access to remote resources by invoking methods over the internet. The proposed system was
implemented with the C#, ASP (active server pages) and
WCF (windows communication foundation), etc. based on
the .net framework in the distributed environment. Figure 5
shows the architecture of the proposed system.
A biology analyst is able to request the specified services through the user interface, as shown in Figure 5 below.

Figure 5. The architecture of the proposed system

The service management server (SMS) performs a series


of tests through the web server wrappers (WSWs) from the
requested services by the end-users. The SMS encodes the sequences entered by the biology analyst as binary strings. It then requests that the working server (WS)
performs the invoked remote services. In addition, it
receives the processing outcome of tasks from each remote
server of the WS. Finally, it displays the result and state
information of the remote servers through the user
interface between the processing of tasks.
The role of the WSWs is to bind services from the remote servers. The job scheduling server (JSS) monitors the state of the jobs launched from the SMS; it also receives the state information of the process from each remote server. The real work for the analysis of the
biology data is done on a remote server of the WS.
For example, the specified servers of the WS use R scripts to align the selected sequences. Whenever there is a request from the SMS,
these servers perform the below R script by the batch
process method through the window service implemented
with C#.
C:\R\bin\R.exe BATCH --vanilla --slave C:\R\bioann\GAs_MAS.r
Figure 6 shows a part of the R script to achieve the MSAs.

Figure 6. A part of the R script
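For illustration, a working-server wrapper could launch the R batch job roughly as sketched below. This is a hedged example: the Windows-service/C# wrapper of the paper is replaced by a plain Python call, and the paths are the ones quoted in the text and should be adjusted for the local install.

import subprocess

def run_msa_batch(r_script=r"C:\R\bioann\GAs_MAS.r",
                  r_exe=r"C:\R\bin\R.exe"):
    # Run the MSA R script as a batch job and report success/failure.
    cmd = [r_exe, "CMD", "BATCH", "--vanilla", "--slave", r_script]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    print("alignment job finished:", run_msa_batch())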


The remote servers of the WS send the score of an alignment result back to the SMS. Each server of the WS must also send the state information of its assigned work to the JSS at the same time.
The service process of the MSAs is as follows:
1. Enter the user ID and password through the user interface on the internet. (All services are offered through ID and password authentication.)
2. Request the services:
   - Generate a random population and then split the generated population across the number of available remote servers of the WS.
   - Invoke the corresponding task module in the specified server of the WS.
   - Return the score value (fitness value) from the specified server of the WS.
   - Check the termination condition.
   - Apply the genetic operators (selection / crossover / mutation).
   - Generate the next-generation population.
   - Repeat step 2.
3. When the specified server receives a service request from the SMS:
   - Identify the state information of the requested services based on a user log profile from the JSS.
   - Run the R script to perform the specified services.
   - Return the result of the processing to the SMS.

IV. CONCLUSION

In this paper, we propose a system architecture for autonomic configuration that keeps the biology analyst's involvement to a minimum. Resources can be utilized effectively by sharing remote applications and devices. Future studies will address visualization tools for the many biological data analysis methods. We also plan to improve reliability by analyzing the causes of performance degradation from the users' point of view, because the web service performs its tasks over the internet.
ACKNOWLEDGMENT
This work was partially supported through PT-ERC by
the Korea Science and Engineering Foundation (KOSEF)
grant funded by the Korea government (MEST) in 2011.
REFERENCES
[1]

M. F. Omar, R. A. Salam, R. Abdullah, N. A. Rashid, "Multiple


Sequence Alignment Using Optimization Algorithms,
International Journal of Computational Intelligence 1; 2
@www.waset.org, 2005
[2] http://en.wikipedia.org/wiki/Multiple_sequence_alignment
[3] K.S.Hwang, K.M, Lee, C.H. Lee, An Autonomic Configurationbased QSAR Modeling Environment, The 10th International
symposium on Advanced Intelligent Systems(ISIS2009), 2009, pp.
305-308.
[4] http://en.wikipedia.org/wiki/Web_service
[5] Shyi-Ming Chen, Chung-Hui Lin, and Shi-Jay Chen, Multiple
DNA Sequence Alignment Based on Genetic Algorithms and
Divide-and-Conquer Techniques, International Journal of
Applied Science and Engineering .2005, pp. 89-100
[6] http://www.obitko.com/tutorials/genetic-algorithms/ga-basicdescription.php
[7] http://www.avatar.se/molbioinfo2001/dynprog/dynamic.html
[8] D. Lipman and W. Pearson, Rapid and sensitive protein similarity searches, Science, 1985, pp. 1435-1441.
[9] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J.
Lipman, Basic local alignment search tool, J Mol Biol, pp. 403-410, 1990.
[10] Ron Shamir, Algorithms for Molecular Biology Lecturenote :
December 29, 2001
[11] http://manuals.bioinformatics.ucr.edu/home/ht-seq#TOCMultiple-Sequence-Alignments-MSAs.

Figure 7. the workflow of the proposed system.

Figure 7 shows the brief service process of the proposed system. The proposed system performs a repeated process for the alignment of the multiple sequences entered by the end-user, using the GA in the distributed environment. Moreover, it can deliver the results of a time-consuming multiple sequence alignment to a field biologist faster by using distributed applications and systems effectively.


Certain Optimization Methods for Design Space Compression of Arithmetic


Circuits
Dr Ranganathan Vijayaraghavan

Georgina Binoy Joseph

General Manager (Academic),


Everonn Technical Education India Limited
Chennai, India

Associate Professor
KCG College of Technology
Chennai, India

Abstract Due to technology shrinkage, it is essential to


optimize various arithmetic circuits to account for
reduction in area, power and performance. In this paper,
we develop new methodologies of circuit optimization
from first principle in order to achieve various
combinations of area, power and performance
optimization with the possibility of combined
optimization of area, power and performance which we
call design space compression.
Keywords- high performance; low power; low area;
computer arithmetic circuits
I.

INTRODUCTION

The technologies necessary to realize present day


communication systems present unique design challenges.
Manufacturers are challenged to deliver products that offer
expanded services and operate transparently worldwide.
Product designers are challenged to create extremely
power efficient yet high-performance devices. The design
tradeoffs and implementation options inherent in meeting
these demands highlight the extremely challenging
requirements for next generation convergence processors.
The constraints on area, power and speed require new
techniques at every stage of design - architecture, micro
architecture, software, algorithm design, logic design,
circuit design, and process design.
The study of arithmetic algorithms and circuits is of
special importance especially in the case of systems
involving large amounts of data processing. Multiplication
is an important operation for computing most arithmetic
functions and so it deserves particular attention. The basic
algorithm for multiplication performs the operation
through a computation of a set of partial products, finally
summing these shifted products. As a consequence, the
execution time of any program, or circuit, based on the
basic algorithm is proportional to the number n of digits of
the operands. Many algorithms have been proposed to
reduce the execution time for multiplication each resulting
in a trade off with area or power. .

II. MULTIPLIERS

The most basic multiplication algorithms for the


multiplication of n-digit by m-digit B-ary natural numbers
(shift and add algorithms) proceed in three phases:
1. Digit-wise partial products (n x m),
2. Compression to reduce to two rows
3. Final addition.
The production of all the digit wise partial products of
the operands is a common feature in most multiplication
algorithms.
III.

MULTIPLIER DESIGN SPACE COMPRESSION

A. Proposed Methodology
The proposed methodology identifies design space
compression and provides various alternatives. New
methodologies are developed from first principle to
perform arithmetic operations. In one such method, we try
to utilize the redundancy naturally available in arithmetic
operations such as addition, multiplication etc. Another
technique that we propose to use while optimizing is
constraining the redundant operations.
The methodology of design space compression utilizes
in the first phase the redundancy naturally available in the
output space of the arithmetic operations. The first step is
to reduce the redundancy and secondly to make them
constraints for optimization. The output is parallelized and
thus performance optimization is achieved. Further
optimization is done to the parallel circuit thus obtained
through custom design so that area gets further optimized.
As a consequence, reduction of both area and
improvement of performance is achieved and also the third
parameter, power gets reduced. Thus design space
compression is achieved.
B. Multiplier Design Space Compression
The steps involved in the typical multiplier design
space compression are as follows:
1. First identify the redundancy in the output space of
the multiplier say n bit in length.
2. After eliminating the redundant outputs identify the
bit patterns that will form the output space.
3. Identify their multiplier and multiplicand
combinations.
4. This will also have many redundancies and these are
used for reduction wherever possible or as constraints to optimize the output space.


5. This takes care of time optimization and first level


area optimization as well.
6. Then for each of the partial parallel product bits
further optimization of area, performance is carried out
using custom design techniques.
This ensures power reduction and thus design space
compression is achieved.
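The first step, identifying the redundancy in the multiplier's output space, can be illustrated with the short enumeration below. It simply counts how many of the 2^(2n) multiplier/multiplicand input combinations collapse onto distinct product values; this is the redundancy the methodology exploits, and the small script is an illustration rather than part of the design flow itself.

def output_space(n_bits):
    # Map each distinct n-bit x n-bit product to the input pairs that
    # produce it, exposing the redundancy in the output space.
    space = {}
    for a in range(2 ** n_bits):
        for b in range(2 ** n_bits):
            space.setdefault(a * b, []).append((a, b))
    return space

for n in (2, 3, 4):
    s = output_space(n)
    print(f"{n}-bit: {4 ** n} input pairs -> {len(s)} distinct products")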
IV. SHORT BIT WIDTH MULTIPLIER IMPLEMENTATION

Short bit width multipliers have been designed and


implemented using the above methodology and the results
have been compared with bench mark circuits.

The bits of the final product are denoted by P(i), i = 0


to 7. Pi, i = 0 to 7, denotes the product terms of the individual
multiplier-multiplicand bits.
The following logic equations are obtained:
P(0) = P0
P(1) = P2^P1
P(2) = P5^ P3^P4P0
P(3) = P6(a2P1 + a0P7 + a0a2b0) + P7(b2P2 +
b0P6 + b0b2a0) + P0(P4a2b2 + P8(a1+b1))
P(4)= P8(P4P0 + (a1 + b1)) + P4(P3(a2b0) + P5b2)
P(5) = P8(P4 + P0(a1 + b1))

A. 2-Bit Multipliers

V.

RESULT

The two-bit and three-bit multipliers were simulated using ModelSim, synthesized with the Synopsys DC Compiler, and compared with the DesignWare two-bit and three-bit multipliers using a 180 nm technology library; the results are tabulated in Table 1.3 below:
TABLE 1.3 - Comparison of DW and New Multiplier results

Multiplier   Area (DW)   Area (New Multiplier)   Time (DW)   Time (New Multiplier)
2-Bit        72 nm^2     72 nm^2                 0.49 ns     0.49 ns
3-Bit        298 nm^2    212 nm^2                2.56 ns     1.96 ns

TABLE 1. 2-BIT MULTIPLICATION
(products of multiplier a1a0 and multiplicand b1b0)

The design equations for two bit multipliers are


presented below:
The bits of the final product are denoted by P(i), i = 0 to 3. Pi, i = 0 to 3, denotes the product terms of the individual multiplier-multiplicand bits.
Using the redundancies to minimize and constrain the product bits, the following logic equations are obtained:
P(0) = P0
P(1) = P2^P1
P(2) = P3P0'
P(3) = P3P0
(where P0' denotes the complement of P0)
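These 2-bit equations can be checked exhaustively with the sketch below, assuming the usual partial-product naming P0 = a0.b0, P1 = a1.b0, P2 = a0.b1, P3 = a1.b1 (the naming is an assumption; the complement in P(2) follows from the truth table under it).

def check_two_bit_equations():
    # Exhaustively verify the derived product-bit equations of the
    # 2-bit multiplier against the arithmetic product a*b.
    for a in range(4):
        for b in range(4):
            a0, a1 = a & 1, (a >> 1) & 1
            b0, b1 = b & 1, (b >> 1) & 1
            P0, P1, P2, P3 = a0 & b0, a1 & b0, a0 & b1, a1 & b1
            p = a * b
            assert (p & 1)        == P0                    # P(0)
            assert ((p >> 1) & 1) == P1 ^ P2               # P(1)
            assert ((p >> 2) & 1) == P3 & (1 - P0)         # P(2) = P3 AND NOT P0
            assert ((p >> 3) & 1) == P3 & P0               # P(3)
    return True

print(check_two_bit_equations())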

VI.

CONCLUSION

Thus we have demonstrated the superior performance of our multiplier over the DesignWare component. The performance and area improvements will be even greater at higher bit widths. Even though we have not measured the power, since both area and timing are reduced there will certainly be an improvement in power, and thus we have demonstrated design space compression using our methodology. As future work we will extend this concept to higher bit widths and also develop further optimizations for each of the multipliers developed.
B. 3-Bit Multipliers
TABLE 2. 3-BIT MULTIPLICATION
(8 x 8 table of the decimal products of multiplier a2a1a0 and multiplicand b2b1b0; entries range from 0 to 49.)


REFERENCES
[1] Fabrizio Lamberti, Nikos Andrikos, Elisardo Antelo, Paolo Montuschi, Reducing the Computation Time in (Short Bit-Width) Twos Complement Multipliers, IEEE Transactions on Computers, Vol. 60, No. 2, pp. 148-156, February 2011.
[2] Alvaro Vazquez, Elisardo Antelo, Paolo Montuschi, Improved
Design of High-Performance Parallel Decimal Multipliers, IEEE
Transactions on Computers, Vol. 59, No. 5, 679-693, May 2010
[3] A.K. Verma, P.Brisk and P. Ienne, Challenges in Automatic
Optimization of Arithmetic Circuits, 19th IEEE International
Symposium on Computer Arithmetic, 213-218, 2009
[4] A.K. Verma, P.Brisk and P. Ienne, Iterative Layering:
Optimizing Arithmetic Circuits by Structuring the Information
Flow, IEEE/ACM International Conference on Computer Aided
Design, 797-804, 2009
[5] A.K. Verma and P. Ienne, Improving XOR-Dominated Circuits
by Exploiting Dependencies between Operands, IEEE Asia and
South Pacific Design Automation Conference , 601-608, 2007
[6] N. Homma, K. Degawa, T. Aoki, T. Higuchi, Algorithm-level optimization of multiple valued arithmetic circuits using counter tree diagrams, Proceedings of the 37th International Symposium on Multiple-Valued Logic (IEEE), 2007.


AUSCULTATION AND SEPARATION OF HEART SOUND FROM RESPIRATORY SOUND


USING LAB-VIEW
Dr.Zahir Quazi

Mr.Yogesh M. Gaidhane

Dept. of Community Medicine


DMIMS Sawangi Wardha,
Nagpur, India

Electronics and Communication Department,


SVSS College of Engineering and research,
Wanadongri Nagpur, India
Abstract—This document suggests a systematic approach for the recording and analysis of respiratory sounds. Conventional stethoscopic methods of auscultation are quite inconvenient because of their dependency on manual implementation, and heart sounds generally interfere with lung sounds in a manner that hampers the potential of respiratory sounds to be analyzed for the diagnosis of respiratory disease. This review describes a simple method that has been applied for filtering heart sounds from lung-sound recordings. The paper focuses on implementing hardware that records respiratory sound and a VI for heart sound (HS) cancellation from the lung sound (LS) by using the Advanced Signal Processing Toolkit of LabVIEW. The method uses the multiresolution analysis of the wavelet approximation coefficients of the original signal to detect HS-included segments; once HS segments are identified, the method removes them from the wavelet coefficients and estimates the created gaps by using TSA ARMA modeling.
Keywords—Virtual Instrument (VI), Multiresolution analysis (MRA), Time series analysis (TSA), Autoregressive Moving Average (ARMA)

I. INTRODUCTION
Auscultation is one of the most important non-invasive
and simple diagnostic tools for detecting disorders in the
respiratory tract like lung diseases [1]. It is defined as the
act of listening for sounds within the body, mainly for
ascertaining the condition of the lungs, heart and other
organs [2]. Diseases such as asthma, tuberculosis can be
identified with this method through the analysis of lung and
tracheal sounds.
Research on the diagnosis of respiratory pulmonary
conditions like bronchitis, sleep apnea, asthma has
established the utility of the stethoscope's acoustic signal in
common day to day practice. However, despite their
effectiveness, these instruments only provide a limited and
subjective perception of the respiratory sounds. The
drawbacks of using stethoscopes and listening to the sounds
using the human ear are a) their inability to provide an
objective study of the respiratory sounds detected, b) their
lack of sufficient sensitivity and (c) the existence of the
imperfect system of nomenclature [3].

In the last few decades, improvements in electronic


recording and the development of computer-based methods
have made quantitative studies of lung and tracheal sound
signals possible as well as overcome many limitations of
human ear subjective auscultation. Modern digital
processing techniques, along with advancements in
computer analysis, have become an established research method for the investigation of respiratory sounds. Respiratory sound analysis can quantify changes in lung sounds, de-noise the signal of interest from artifacts and noise, store records of the measurements made, and produce graphical representations of characteristic features of the respiratory sound to help with the diagnosis and treatment of patients suffering from various lung diseases [4]. Since lung sounds
have relatively low frequency and low intensity, it is
essential to remove the noise and other interfering sound
(i.e., heart sounds) from the lung sounds prior to any
diagnostic analysis.
Heart sounds interfere with lung sounds in a way that
hampers the potential of respiratory sound analysis in terms
of diagnosis of respiratory illness [5]. The features of lung
sounds may be impure by heart sounds because lung and
heart sounds overlap in terms of time domain and spectral
content. High-pass filtering of lung-sound recordings to
reduce heart sounds would remove important components of
lung sounds.
This paper presents a method of lung sound analysis
using the advanced signal processing tools of LabVIEW.
The existing methods discussed here are not fully free from the artifacts of heart sounds. The software used in this method has more flexibility; it removes the heart sounds and predicts the gaps successfully. It can also assist general physicians in making more accurate and reliable diagnoses at early stages.
II.

HEART SOUNDS AND LUNG SOUNDS

Respiratory sounds present noninvasive measures of


lung airway conditions [6]. However, features of lung
sounds may be contaminated by heart sounds because lung
and heart sounds overlap in terms of time domain and
spectral content [7]. Heart sounds are clearly audible in lung
sounds recorded
on the anterior chest and may be heard to a lesser extent
in lung sounds recorded over posterior lung lobes.


A. Lung Sounds
Lung sounds are produced by vertical and turbulent flow
within lung airways during inspiration and expiration of air
[5] . Lung sounds recorded on the chest wall represent not
only generated sound in lung airways but also the effects of
thoracic tissues and sound sensor characteristics on sound
transmitted from the lungs to a data acquisition system
[8]. Lung sounds exhibit a Power Spectral Density (PSD)
that is broadband with power decreasing as frequency
increases [5]. The logarithm of amplitude and the logarithm
of frequency are approximately linearly related in healthy
subjects provided that the signals do not contain
adventitious sounds. As the flow in the lung airways increases, sound intensity increases, and several mathematical relations between lung sounds and airflow have been proposed [9]. It is important to note that inspiratory and expiratory lung sounds differ in terms of both amplitude and frequency range. At comparable flows, inspiratory lung sounds will exhibit greater intensity than expiratory sounds [9].



Generally, lung sounds are categorized into two types:
1) Normal lung sounds
Bronchial - These breath sounds consist of a full inspiratory and expiratory phase.
Bronchovesicular - These sounds consist of a full inspiratory phase with a shortened expiratory sound; they are normally heard over the hilar region.
Vesicular - These sounds consist of a quiet, wispy inspiratory phase; they are heard over the periphery of the lung.
2) Abnormal lung sounds / adventitious sounds
There are four main types of adventitious sound prominently used for auscultation and disease diagnosis.
Crackles - Discontinuous, explosive popping sounds that originate within the airways. They are heard when an obstructed airway suddenly opens and the pressure on either side of the obstruction equilibrates, resulting in a transient, distinct vibration in the airway wall. This dynamic airway obstruction can be caused by the accumulation of secretions within the airways or by collapse caused by air pressure.
Wheezes - Musical tones that are mostly heard at the end of inspiration or early expiration. They result from the collapse of the airway lumen, which gradually opens during inspiration or gradually closes during expiration. As the airway lumen becomes smaller, the air flow velocity increases, resulting in a musical tonal quality; these variations are basically high pitch and low pitch. These sounds imply thickening of the airway walls and are related to diseases like asthma.
Stridor - An intense, continuous, monotonous whistling sound, almost twice as loud as a normal wheeze, which can be heard without auscultation.

Table 1. TYPES OF ADVENTITIOUS AND BREATH SOUNDS

NAME       TYPE            NATURE (In/Out)   ASSOCIATED DISEASE
Wheeze     Continuous      Both              Asthma
Stridor    Continuous      Out               Epiglottis infection
Rhonchi    Continuous      Out               Bronchitis
Crackles   Discontinuous   In                Pneumonia


B. Heart Sounds
Heart sounds are produced by the flow of blood into and
out of the heart and by the movement of structures involved
in the control of this flow [10]. The first heart sound results
when blood is pumped from the heart to the rest of the body,
during the latter half of the cardiac cycle, and it is
comprised of sounds resulting from the rise and release of
pressure within the left ventricle along with the increase in
ascending aortic pressure [10]. After blood leaves the
ventricles, the simultaneous closing of the semi lunar
valves, which connect the ventricles with the aorta and
pulmonary arteries, causes the second heart sound.
The
electrocardiogram
(ECG)
represents
the
depolarization and repolarisation of heart muscles during
each cardiac cycle. Depolarization of ventricular muscles
during ventricular contraction results in three signals
known as the Q, R, and S-waves of the ECG [10]. The first
heart sound immediately follows the QRS complex. In
health, the last 30-40% of the interval between successive
R-wave peaks contains a period that is void of first and
second heart sounds [7].
Characteristics of heart sound signals have been assessed
in terms of both intensity and frequency. Though peak
frequencies of heart sounds have been shown to be much
lower than those of lung sounds [11], comparisons between
lung sound recordings acquired over the anterior right upper
lobe containing and excluding heart sounds show that PSD
in both cases is maximal below 150 Hz.
III. HS CANCELLATION
During breathing lung sounds propagate through the
lung tissue and can be heard over the chest wall. The tissue
acts as


a frequency filter with special characteristics based on pathological changes. Therefore, auscultation and acoustical analysis of lung sounds are primary diagnostic assessments for respiratory diseases. However, the heartbeat produces an intrusive quasi-periodic interference sound that masks the clinical interpretation of lung sounds over the low frequency components. The main components of Heart Sounds (HS) are in the range 20-100 Hz, in which the lung sound also has major components. High Pass Filtering (HPF) with an arbitrary cut-off frequency between 70 and 100 Hz is not efficient in this case because lung sounds have major components in that region, particularly at low flow rates. Therefore, HS reduction from lung sounds without altering the main characteristic features of the lung sound has been of interest for many researchers [12].
Accurate diagnosis of respiratory disease depends on understanding the different noises present in the lung sounds and removing them successfully. Heart sounds interfere with lung sounds in a manner that hampers the potential of respiratory sound analysis in terms of diagnosis of respiratory disease [5]. There are different methods that have been applied for filtering heart sounds from lung-sound recordings.
A. Linear Adaptive Filters
There are four main components to a linear adaptive filter: the input or reference signal; the output of the adaptive filter; the desired filter response or primary signal; and the estimation error, which is the difference between the filter output and the desired response [13]. The term linear refers to the physical implementation of a linear adaptive filter, which employs the principle of superposition between its input and output signals. In reality, the internal structure of a linear adaptive filter is highly nonlinear. A recursive algorithm within the adaptive filter updates the filter parameters with each iteration (in discrete-time operation) so as to minimize the estimation error. Noise cancellation and linear prediction are the two main classes of linear adaptive filters that have been applied to lung sound recordings for reducing heart sounds [13]. Linear prediction serves to develop a model of a signal based on its past or future values. In adaptive noise cancellation, the primary input contains both the noise to be removed by the adaptive filter and the signal of interest. The reference signal represents the noise portion of the primary input; thus, the filter output is a signal that models the noise in the input, and the signal of interest is determined by subtracting the filter output from the primary input. Fig. 1 shows the basic scheme of an adaptive noise cancellation filter.

Fig 1. Block Diagram for Linear Adaptive Filtering

The arrangement of the input samples and the updating formulas of the adaptive filter parameters depend on the specific type of filter scheme used. All of the linear adaptive filters used for heart-sound reduction from lung sounds have employed adaptive filters with finite memory [13]. The most common form of a finite-memory, or Finite-duration Impulse Response (FIR), filter is the transversal filter, which consists of unit-delay elements that delay each of the M samples of the input (M is the filter order); elements that multiply weights by input samples; and adders. Each sample, k, of the M-sample input reference vector, r(n), is multiplied by the conjugate of a weight value, and these products are summed to form the filter output y(n) (Figure 1) [13]. Important considerations are the localization within the primary and reference inputs, the subsequent time alignment between these inputs, and the choice of reference input; the stationarity of the data is another important consideration. The following adaptive filter schemes are used: Least Mean Squares (LMS), Fourth-Order Statistics (FOS), Recursive Least Squares (RLS), Block Fast Transversal (BFT), and Reduced Order Kalman (ROK).
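A minimal LMS adaptive noise canceller in the sense described above is sketched below. The filter order, step size, and toy signals are illustrative assumptions; this is not the reference implementation from [13].

import numpy as np

def lms_cancel(primary, reference, order=16, mu=0.01):
    # LMS adaptive noise cancellation: the filter models the noise in
    # `primary` from `reference`; the error e is the recovered signal.
    w = np.zeros(order)
    e = np.zeros_like(primary)
    for n in range(order, len(primary)):
        r = reference[n - order:n][::-1]   # most recent M reference samples
        y = np.dot(w, r)                   # filter output (noise estimate)
        e[n] = primary[n] - y              # estimation error = signal of interest
        w += 2 * mu * e[n] * r             # weight update
    return e

# toy usage: broadband lung-like signal plus correlated interference
t = np.arange(4000) / 1000.0
noise = np.sin(2 * np.pi * 60 * t)
primary = 0.3 * np.random.randn(t.size) + 0.8 * noise
recovered = lms_cancel(primary, noise)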
IV. PROPOSED METHOD

Fig. 1 Block Diagram for the Proposed Method

A. Auscultation & database generation


As descrribe in the beggging of the paaper auscultatiion
means the ssensing of resppiratory soundd for the purpoose
of diagnosis . The data ussed in this studdy of lung souund
were acquuired with a Piezoelectric Conttact
Accelerometter (Siemenss EMT25C) by using tthis
transducer luung sound weere digitized aat 10240 Hz aand
12 bits perr sample whiich are geneerally in .WA
AV
format. thiis can be im
mplemented byy using an ttest
system callled the Biooacoustic Traansducer Tesster

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

93

(BATT) it
i is composedd of a speakerr in rigid encllosure
covered at the top by Viscoelaastic polyureethane
polymer surface madde of Akton (action prooducts,
Hagerstow
wn, MD). By using a simpple Audio recoording
software this system can generate an
a data base for
f the
detail futuuristic, compaarative diseasee diagnosis sysstem

Fig. 4 Block Diagram for heart sound detection

Fig. 3 Piezoelectric Contact Accelerometer

B. Heart Sound Localization
The detection of heart sounds (heart sound localization) is done by the multiscale product of wavelet approximation coefficients. The Multiresolution Analysis VI, which is an express VI, is used for this purpose. Three scales were used in the wavelet decomposition, with the fifth-order Symlet wavelet as the mother wavelet. The product of two adjacent decomposition bands presents very interesting properties. Signal and noise have totally different behavior in the wavelet domain. In [18], this behavior was analyzed using the concept of Lipschitz regularity. The multiplication of the DWT coefficients between the decomposition levels can lead to the identification of singularities [18]. In the case of HS detection, the multiscale product of the wavelet coefficients of the original LS record is used to identify the HS segments within the LS signal. The Multiresolution Analysis VI decomposes the signal according to the level we specify and reconstructs the signal from the frequency bands we select. Fig. 4 is the arrangement for finding the multiscale product of wavelet approximation coefficients.
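A hedged sketch of the multiscale-product idea is shown below: the LabVIEW Multiresolution Analysis VI is replaced by PyWavelets, and the relative threshold is a placeholder, not a value from the paper.

import numpy as np
import pywt

def multiscale_product(lung_sound, wavelet="sym5", levels=3):
    # Product of the approximation bands of a 3-level DWT (sym5 mother
    # wavelet), used to highlight heart-sound segments in a lung-sound record.
    sig = np.asarray(lung_sound, dtype=float)
    approximations = []
    a = sig
    for _ in range(levels):
        a, _ = pywt.dwt(a, wavelet)                 # keep approximation only
        approximations.append(np.interp(np.arange(sig.size),
                                        np.linspace(0, sig.size, a.size), a))
    return approximations[0] * approximations[1] * approximations[2]

def detect_hs_segments(lung_sound, rel_threshold=0.2):
    p = np.abs(multiscale_product(lung_sound))
    return p > rel_threshold * p.max()              # boolean HS mask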

Fig. 5 Waveform for heart sound detection: (top) original input respiratory sound, (bottom left) first sampled waveform, (bottom right) detected heart sound

C. Heart Sound Cancellation
The DWT of the original LS record was obtained using the Symlet (order 5) wavelet, which is a compactly supported wavelet with least asymmetry, decomposing the signal into 3 levels. Then, the product of the wavelet coefficients was calculated. The cancellation of heart sounds is done by applying a threshold such that,


Fig. 4 Block Diagram for heart sound cancellation

V. CONCLUSION
The primary objective of respiratory sound research is to bring about improvements to the monitoring and diagnosis of respiratory disease; the potential usefulness of any method for filtering heart sounds from lung sounds rests on its ability to perform in a clinical setting. Because the heart and lung sound signals overlap in the time and frequency domains, removing HS interference from respiratory sound recordings is a challenging task. The proposed method for HS removal based on a single recording has shown promising results, mainly in terms of lung sound characteristic preservation. This method did not add any noticeable clicks or artifacts in the reconstructed signal. Manual inspection of the reconstructed signals by visual means confirmed that lung sounds were the dominant sounds, with no perceptible HS in the background. Furthermore, the proposed technique in this paper is far more efficient than other techniques for HS cancellation in terms of computational load and speed.
This paper presents a novel method for heart sound cancellation from lung sound records using LabVIEW. The method uses the multiscale product of the wavelet coefficients of the original signal to detect HS segments. Once the HS segments are identified, the method removes them from the wavelet coefficients at every level and estimates the created gaps. The results were promising in HS removal from LS without hampering the main components of LS.
94

S
andd G.R.Wodicka, Respiratory souunds
[7] H.Pasterkamp, S.S..Kraman,
Amer. J. Respir. Crit Care Med.vvol.
advaances beyond thee stethoscope, A
156,, no. 3, Pt 1, pp. 9974987, Sept. 19997.
[8] H
H. Pasterkamp, R.
R Fenton, A. Taal, and V. Cherniick, Interferencee of
carddiovascular soundds with phonopnneumography in children,
c
Am. Rev.
R
Resppir. Dis., vol. 1311, no. 1, pp. 61644, Jan. 1985.
[9] I.V.
I
Vovk, V.T. G
Grinchenko, and V
V.N. Oleinik, M
Modeling the acouustic
propperties of the cheest and measuringg breath sounds, Acoustical Physsics,
vol. 41, no. 5, pp. 6677676, 1995.
[10]] I. Hossain andd Z. Moussavi, Relationship between airflow and
norm
mal lung sounds, in Proc. 24th A
Ann. Int. Conf. IIEEE Eng. Mediccine
Biollogy Soc., EMBC
C02, Oct. 2002, pp.
p 11201122.
[11]] A.A. Luisada, The areas of aauscultation and the two main heart
sounnds,Med.Times, vol. 92, pp. 811, Jan. 1964.
[12]] P.J. Arnott, G.W
W. Pfeiffer, and M
M.E. Tavel, Specctral analysis of heart
sounnds: Relationships between some pphysical characterristics and frequeency
specctra of first and ssecond heart sounnds in normals annd hypertensivess, J
Biom
med. Eng., vol. 6,, no. 2, pp. 121128, Apr. 1984.
[13]] M. T. Pourazad , Z. Moussavi , G.
G Thomas,Hearrt sound cancellattion
from
m lung sound reccordings using tim
me-frequency filttering, Internatioonal
Fedeeration for Medical and Biologiccal Engineering 22006, Med Biol Eng
E
Com
mput (2006) 44: 216225.
[14]] S. Haykin, Adaaptive Filter Theoory, 4th ed. Uppeer Saddle River, NJ:
Prenntice-Hall, 2002.
[15]] Z. Moussavi, D
D. Flores, and G.. Thomas, Heartt sound cancellattion
baseed on multiscale pproducts and lineaar prediction, in Proc. 26th Ann. Int.
Connf. IEEE Eng. Medicine Biology Sooc., EMBC04, Sept. 2004, pp. 3840
38433.
[16]] I. Hossain and Z
Z. Moussavi, Ann overview of heaart-noise reductionn of
lungg sound using waavelet transform bbased filter, in Proc. 25th Ann. Int.
Connf. IEEE Eng. Meedicine Biology Soc.,
S
EMBC03, Sept.
S
2003, pp. 458
461..
[17]] J. Gnitecki, I. H
Hossain, Z. Mousssavi, and H. Pasteerkamp, Qualitaative
and quantitative evaaluation of heartt sound reductioon from lung souund
Trans. Biomed. Eng., vol. 52, no. 10, pp. 1788
recoordings, IEEE T
17922,Oct. 2005.
[18]] S. Mallat and S. Zhong, Characcterization of signnals from Multisccale
edgees, IEEE Trans. Pattern Anal. Mach.
M
Intell., vol. 14, no. 7, pp. 710
732,, Jul. 1992.
[19]] Daniel Flores-Taapia, Zahra M. K.
K Moussavi, , Gabbriel Thomas, Heart
sounnd cancellation bbased on multisccale products andd linear predictioon,
IEEE
E Trans. Biomed. Eng., vol. 54, noo. 2, pp.234-243,F
FEBRUARY 2007.
[20]] http://www.ni.coom
[21]] http://www.rale..ca

REFE
ERENCES
[1] T E Ayoob K
Khan, Dr. P Vijayyakumar, Separating Heart Sounnd from
L
Lung Sound Usiing LabVIEW, International Jouurnal of Computter and
E
Electrical Engineering, Vol. 2, No. 3, June, 2010 17793-8163
[2] V. Gross, A. Dittmar, T. Pennzel, F. Schuttler,, P. Von Wicherrt, The
rrelationship betw
ween normal lunng sounds, age aand gender, Am
merican
JJournal of Respiratory and Criticall care Medicine 2000;162: 905-9099.
D. Jones, K. Kwoong, Y. Burns, E
Effect of positionning on
[3] A. Jones, R.D
rrecorded lung sound intensitties in subjects without pulm
monary
ddysfunction, Phyysical Therapy 19999;79:7,682-690.
[4] F.Dalmay, M.T.Antonini, P.M
Maequet, R.Menierr,Acoustic propeerties of
tthe normal chest, European Respiiratory Journal 19995; 8:1761-1769.
[5] Z.Moussavi, Respiratory sound analysis, IE
EEE in Engineeering in
M
Medicine and Bioology Magazine, ppp15, January/Febbruary 2007.
[6] J.Gnitecki, Z.. Moussavi, Sepperating heart souunds from lung soounds,
IIEEE in Engineeering in Mediciine and Biologyy Magazine, pp 20-29,
JJanuary/Februaryy 2007.


SELF-T
TEST TEC
CHNIQU
UE FOR TWO
T
FISH
H ALGO
ORITHM
R
Rahul R Naiir
Deepartment of ECE
E
Kaarunya Univerrsity
Cooimbatore, Ind
dia
AbstractA
A
u
universal built-in self-test strrategy for devvices
implementing symmetric en
ncryption algo
orithms like T
Two
m is describeed in this paper.
p
Using the
fish algorithm
advantage of iinner iterative structures of the cryptograp
phic
cores testing ccan be easily setup
s
for circu
ular test of cryypto
cores. Since teesting strategyy based on scan chain is foound
liable to attacck of hackers, the importantt advantage off the
proposed imp
plementation is
i architecturee with no vissible
scan chain. Beecause the sam
me reason it willl be less amen
nable
to side chann
nel attack, an
nd 100% fau
ult coverage. The
recommended
d work can bee divided in to three main sub
blocks of LFS
SR based pseu
udorandom pa
attern generattion,
Two fish alggorithm and MISR based output respoonse
analyser. The coding is done in VHDL ussing Xilinx verrsion
nthesized by SYNOPSYS design comp
piler
9.2i and syn
version B-20088.09-SP3.
Keywords: Lin
near feedback shift register(L
LFSR), Maxim
mum
distance sep
parable(MDS),,Pseudo-hadam
mard transfform
(PHT) ,
G
Galois
field
(GF),Comb
binational
llogic
block(CLB)

I. INTR
RODUCTION
N
Nowadayss, secure circuuits are comm
monly adoptedd for
applications ssuch as e-bannking, pay TV
V, cell phone.. As
they hold ppersonal dataa and must process seccure
operations, ssecurity requiirements such
h as source/ssink
authenticationn, data integrrity, confidenttiality, or tam
mper
resistance aree maintained by
b means of several dedicaated
components. Confidentiaality is en
nsured throough
plemented onn cocryptographicc mechanismss generally imp
processors. These mechhanisms enco
ode or deccode
w the suppo
ort of secret kkeys
plaintexts or cipher texts with
m compromisee.
that must be ppreserved from
Through the core of an electronicc device offerring
digital securitty services is the cryptograaphic coproce ssor
that executes the cryptoographic fun
nction [1]. S
Such
crypto-coress provide seccurity servicess such as privaacy,
sincerity, andd authenticattion. Becausee weak cryp
yptoalgorithms, poor design of the device or hardw
ware
physical failuures can render the prod
duct delicate and
place highly ssensitive inforrmation or infr
frastructure at risk
[3], validatioon of cryptoo-algorithm and
a
test of the
hardware impplementing succh functions are
a essential.
OSED BIST ARCHITECTU
A
URE FOR TW
WO
II PROPO
FISH AL
LGORITHM

Figure. 1 A gen
neralized block diiagram for self teest architecture off
two fish alg
lgorithm

This work aiims at providding effectual test solutions


for possible phy
ysical failuress on the elecctronic devicee
plementing th
he cryptographhic algorithm
m. Proposing a
imp
BIS
ST [5] solution for cryypto-devices implementingg
stan
ndard symmeetric block ciipher algorith
hms like Twoo
fish
h.
a
A. Two fish algorithm
Twofish is a block cipher of 128 bits length. It can work with flexible key lengths: 128, 192, or 256 bits. In the present report, only the version with a 128-bit key length will be discussed. Input and output data are XOR-ed with eight sub keys K0...K7. These XOR operations are called input and output whitening [2]. The F-function consists of five kinds of component operations: fixed left rotation by 8 bits, key dependent S-boxes, Maximum Distance Separable (MDS) matrices, the Pseudo-Hadamard Transform (PHT), and two sub key additions modulo 2^32. There are four kinds of key dependent S-boxes [7]. Four different S-boxes together with the MDS matrix form an h-function. This h-function appears two times in the cipher structure, which causes significant redundancy. Key dependent S-boxes are something new in cipher design. In the majority of known ciphers, S-boxes are used as a non-linear fixed substitution operation [2]. In Twofish, each S-box consists of three 8-by-8-bit fixed permutations determined from a set of two possible permutations, q0 and q1. The sub keys are computed only once for a particular global key, and stay fixed during the entire encryption and decryption process.
B. Encryption
The attractive feature of the Twofish algorithm is that, after little modification, we can perform encryption and decryption using exactly the same structure [2]. Two sub keys S0 and S1 are fixed during the entire encryption and decryption process. Both of them are computed as a result of multiplying a suitable part of the global key by the RS matrix.

Proc. of the Intl. Conf. on Computer Applications
Volume 1. Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2 :: doi: 10.73858/ISBN_0768
ACM #: dber.imera.10.73858


III. IMPLEMENTATION OF BIST ARCHITECTURE FOR TWO FISH ALGORITHM
A. 128 BIT LFSR

Figure 2 Encryption process

C. Decryption
Decryption requires applying the sub keys in the reverse order and making a little modification to the main cipher structure.

Figure 4 Block diagram of 128 bit LFSR

The bits in the LFSR state which influence the input are called taps. A maximum-length LFSR produces an m-sequence unless it contains all zeros, in which case it will never change. As an alternative to the XOR based feedback in an LFSR, one can also use XNOR. The feedback polynomial for the 128 bit LFSR is x^127 + x^28 + x^26 + x + 1.
B. TWO FISH ALGORITHM

Figure 3 Decryption process

D. A Linear Feedback Shift Register
A linear feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state [1]. The most commonly used linear function of single bits is XOR; thus it is commonly a shift register whose input bit is driven by the XOR of some bits of the overall shift register value. The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Similarly, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits which appears random and which has a very long cycle.
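For illustration only (the design itself is coded in VHDL), a minimal software sketch of such a pseudorandom pattern generator is given below; the register width follows the 128-bit LFSR above, while the tap positions are an assumption loosely mirroring the quoted feedback polynomial.

```python
def lfsr_patterns(seed, count, width=128, taps=(127, 28, 26, 1)):
    """Fibonacci-style LFSR sketch: the feedback bit is the XOR of the
    tapped state bits and is shifted in each cycle (taps are illustrative)."""
    assert seed != 0, "an all-zero state never changes"
    mask = (1 << width) - 1
    state = seed & mask
    for _ in range(count):
        feedback = 0
        for t in taps:                      # XOR of the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
        yield state                         # one pseudorandom test pattern

# Example: four 128-bit pseudorandom test patterns starting from seed 1.
for pattern in lfsr_patterns(seed=1, count=4):
    print(hex(pattern))
```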
E. Multiple Input Signature Register
A multiple-input signature register (MISR) [4] is the answer that compacts all outputs into one LFSR. It works because the LFSR is linear and obeys the superposition principle. All responses are overlapped in one LFSR. The final remainder is the XOR sum of the remainders of the polynomial divisions of each Primary Output by the characteristic polynomial. Its output develops a signature based on the effect of all the bits fed into it. If any bit is wrong, the signature will be different from the expected value and a fault will have been detected.
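A behavioural sketch of this compaction (not the synthesized MISR itself) is shown below; the 32-bit width and the tap positions are illustrative assumptions.

```python
def misr_signature(responses, width=32, taps=(31, 21, 1, 0)):
    """Fold a stream of parallel response words into one signature:
    each word is XOR-ed into the state, which then shifts like an LFSR."""
    mask = (1 << width) - 1
    state = 0
    for word in responses:
        state ^= word & mask                  # multiple-input injection
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
    return state

good = misr_signature([0x12345678, 0x9ABCDEF0, 0x0F0F0F0F])
bad = misr_signature([0x12345678, 0x9ABCDEF1, 0x0F0F0F0F])  # one faulty bit
assert good != bad   # a wrong response bit changes the signature
```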

Figure 5 Overview of the two fish algorithm

The XOR operations are called input and output whitening. The F-function consists of five kinds of component operations: fixed left rotation by 8 bits, key dependent S-boxes, Maximum Distance Separable (MDS) matrices, the Pseudo-Hadamard Transform (PHT), and two sub key additions modulo 2^32. There are four kinds of key dependent S-boxes [2]. Four different S-boxes together with the MDS matrix form the h-function. In Twofish, each S-box consists of three 8-by-8-bit fixed permutations chosen from a set of two possible permutations, q0 and q1. These sub keys are computed only once for a particular global key, and stay fixed during the entire encryption and decryption process [6].
C. S-BOXES
Key dependent S-boxes are something new in cipher design. In the majority of known ciphers, S-boxes are used as a non-linear fixed substitution operation.


Figure 6 Key dependent S-boxes

In Twofish, each S-box consists of three 8-by-8-bit fixed permutations chosen from a set of two possible permutations, q0 and q1.
D. Q PERMUTATIONS
Each q-permutation represents a fixed function [9]; it is also described by a regular structure shown in Figure 7. The main components of the q-permutations are 4-by-4-bit fixed S-boxes t0...t3. Both permutations q0 and q1 have the same internal structure, and differ only in the contents of the S-boxes t0...t3.

Figure 7 q permutation

Table I
Fixed table for q0 and q1 permutations

E. MDS MATRIX
The transformation performed by this matrix is described by the formula given in matrix form below.


Figure 8 MDS matrix formula

Here y3...y0 are consecutive bytes of the input 32-bit word (y3 is the most significant byte), and z3...z0 form the output word. This matrix multiplies a 32-bit input value by 8-bit constants, with all multiplications performed (byte by byte) in the Galois field [8]. The primitive polynomial is x^8 + x^6 + x^5 + x^3 + 1.
There are only two different multiplications that need to be implemented. In the case of multiplication by 5B (hex), every output depends on at most four inputs. Therefore, this multiplication consumes only four parallel CLBs (see Figure 9). The multiplication by EF (hex) contains two outputs depending on five input bits. These outputs require an entire CLB each; therefore the entire multiplication will take five parallel CLBs. Nevertheless, any multiplication in GF can be implemented using up to eight parallel CLBs. As a result, the time of a single multiplication is equal to the delay of one CLB. The results of all multiplications in each row of the MDS matrix are finally XOR-ed bit by bit. Such an operation needs only four CLBs for each row.
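A behavioural sketch of this byte-wise arithmetic is given below (the paper itself maps it onto CLBs); it uses the primitive polynomial x^8 + x^6 + x^5 + x^3 + 1 quoted above, and the example row constants 01, EF, 5B, 5B (hex) follow the Twofish MDS specification.

```python
MDS_POLY = 0x169   # x^8 + x^6 + x^5 + x^3 + 1

def gf_mul(a, b, poly=MDS_POLY):
    """Multiply two bytes in GF(2^8) modulo the given primitive polynomial."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # reduce as soon as the degree reaches 8
            a ^= poly
    return result & 0xFF

def mds_row_output(row_constants, y_bytes):
    """One output byte of the MDS transform: XOR of the byte-wise products."""
    z = 0
    for c, y in zip(row_constants, y_bytes):
        z ^= gf_mul(c, y)
    return z

print(hex(mds_row_output([0x01, 0xEF, 0x5B, 0x5B], [0x12, 0x34, 0x56, 0x78])))
```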

Figure 9 Multiplication by 5B

Figure 10 Multiplication by EF

F. GENERATION OF KEYS S0 AND S1
There are two different sets of sub keys, S and K. The two sub keys S0 and S1 are fixed during the entire encryption and decryption process. Both of them are obtained as a result of multiplying an appropriate part of the global key by the RS matrix [9]. This matrix also performs multiplication in the Galois field GF, but the primitive polynomial is different for this MDS matrix: x^8 + x^6 + x^3 + x^2 + 1. The algorithm used to compute the S keys is given by the matrix below.


intermediate results from the least significant to the most significant position. Such a structure is shown in Figure 13.

Figure 13 Carry chain in modulo addition

Figure 11 Generation of the S0 and S1 keys

Here the index i is from the range 0...1, and indicates the appropriate key S0 or S1. Si,3...Si,0 are 8-bit parts of the 32-bit key Si (Si,3 is the most significant byte). M8i+7...M8i are 8-bit parts of the 128-bit user supplied key M.
G. SUBKEY GENERATION
The set of sub keys K is computed in a structure very similar to that used for encryption [10]. The only difference is that there is no key addition after the PHT transform, and that the fixed left rotation by 8 bits is performed after instead of before the second h-function [5]. This feature enables computing

It is obvious that the time of signal propagation through the entire carry chain limits the speed of addition. The longer the words being added, the longer it takes to compute the result.
I. THE PSEUDO-HADAMARD TRANSFORM
The PHT transform is composed of two additions. It is shown in Figure 14 that both additions can be performed in parallel. The PHT transform is a simple function that consists of two additions modulo 2^32.

Figure 14 The pseudo-hadamard transform

The only additional operation is one left shift by one bit, as shown in Figure 14.
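A minimal arithmetic sketch of these operations is given below: both PHT additions are carried out modulo 2^32, with the one-bit left shift acting as the doubling of the second operand.

```python
MASK32 = 0xFFFFFFFF

def add_mod32(a, b):
    """Addition modulo 2^32 (realized in hardware by the carry-chain adder)."""
    return (a + b) & MASK32

def pht(a, b):
    """Pseudo-Hadamard Transform: a' = a + b, b' = a + 2b, both modulo 2^32."""
    return add_mod32(a, b), add_mod32(a, (b << 1) & MASK32)

print([hex(v) for v in pht(0x01234567, 0x89ABCDEF)])
```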

Figure 12 Sub key generation

sub keys K using the same piece of hardware as the one used for encryption. The global key M is used to parameterize all key dependent S-boxes. All sub keys K are independent of each other, and are computed on the basis of their index value [8]. It means that they can be computed on the fly in both directions: for encryption and decryption. Keys M3...M0, shown in Figure 12, are derived directly from the main 128-bit key. They are just 32-bit parts of the main key, where M3 corresponds to the most significant 32 bits, and M0 to the least significant bits of M. These sub keys are used to customize the S-boxes, and stay fixed for all rounds. The variable i is from the range 0...19, and is used to generate a set of corresponding sub keys K0...K39.
H. ADDITIONS MODULO 2^32 ON 32-BIT LONG WORDS
Although the addition modulo 2^32 is not as simple an operation as XOR, it can still be realized using a comparable amount of hardware. The most convenient way to perform this operation is to compute the sum position by position, and use a carry chain to propagate


Table II
Cipher text after encryption

Serial number | Key | Plain text | Cipher text
0 | 0 | 9F589F5CF6122C32B6BFEC2F2AE8C35A | 5AC3E82A2FECBFB6322C12F65C9F589F
1 | 9F589F5CF6122C32B6BFEC2F2AE8C35A | 60788A9F728A798CB824663FD6E45B82 | 64A438765564AF77CDB0218E76989572

Table III
Plain text after decryption

Serial number | Cipher text | Key | Plain text
0 | 5AC3E82A2FECBFB6322C12F65C9F589F | 0 | 9F589F5CF6122C32B6BFEC2F2AE8C35A
1 | 64A438765564AF77CDB0218E76989572 | 9F589F5CF6122C32B6BFEC2F2AE8C35A | 60788A9F728A798CB824663FD6E45B82

Table IV
Analysis report of Logic Blocks

Logic Block | Cell Area | Power
LFSR | 1805.0000 µm2 | 33.8986 µW
Two Fish Algorithm | 4144.0000 µm2 | 3.3892 mW
MISR | 2816.0000 µm2 | 35.3342 µW

IV. CONCLUSION AND FUTURE WORK

The Two fish cryptographic algorithm is implemented in VHDL and its encryption and decryption processes are executed. The BIST architecture for the two fish algorithm is realized successfully. Fault coverage of 100 percent is attained in this architecture. Area and power analysis are done in the SYNOPSYS design compiler with negligible area overhead.
REFERENCES
[1]
Giorgio Di Natale, Marion Doulcier, Marie-Lise Flottes and Bruno Rouzeyre, "Self-test techniques for crypto-devices", IEEE Transactions on VLSI, Vol. 18, No. 2, February 2010, pp. 329-333.
[2]
Pawel Chodowiec and Kris Gaj Implementation of the
Twofish Cipher Using FPGA Devices Technical Report of Electrical
and Computer Engineering, George Mason University ,July 1999
[3]
B.Yang, K.Wu, and R. Karri, Scan-based side-channel
attack on dedicated hardware implementations on data encryption
standard, in Proc.Int. Test Conf., 2004.
[4]
B. Konemann, J. Mucha, and G. Zwiehoff, Built-in logic
block observation technique, in Proc. IEEE Int. Test Conf., 1979, pp.
3741.
[5]
A. Schubert and W. Anheier, On random pattern testability of
cryptographic VLSI cores, J. Electron. Test.: Theory Appl., vol. 16,
no. 3, pp. 185192, Jun. 2000.
[6]
N. A. Touba, Circular BIST with state skipping, IEEE
Trans. Very Large Scale Integr (VLSI) Syst., vol. 10, no. 5, pp. 668
672, Oct. 2002.
[7]
J. Lee, M. Tehranipoor, C. Patel, and J. Plusquellic,
Securing scan design using lock and key technique, in Proc. IEEE
Int. Symp. Defect Fault Tolerance VLSI Syst. (DFT), Oct. 2005, pp. 51
62.
[8]
B. Yang, K. Wu, and R. Karri, Secure scan: A design-fortest architecture for crypto chips, IEEE Trans. Comput.-Aided Design
Integr. Circuits Syst., vol. 25, no. 10, pp. 22872293, Oct. 2006.
[9]
Bruce Schneier, Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C, John Wiley & Sons, pp. 214-322, January 1996.
[10]
C. E. Shannon, "A Mathematical Theory of Communication", The Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July, October, 1948.


Genetic Algorithm (GA) method of evolving Carnatic Music concert swarams


K.Priya

Dr.R.Geetha Ramani

Ph. D Scholar, Department of Computer Science


and Engineering,
Rajalakshmi Engineering College,
Thandalam,Chennai 602 105.

Professor & Head, Department of Computer Science


and Engineering,
Rajalakshmi Engineering College,
Thandalam,Chennai 602 105.

Abstract - Evolutionary music is an emerging field in research


area to help the field of fine arts with computational music.
Through Evolutionary Computation method, music can be
evolved either automatically or with human interaction.
Genetic Algorithm, an Evolutionary Computation method with computational intelligence, is nowadays being used for automatic evolution of music. This paper presents the evolution of Carnatic music concert Swarams through a genetic algorithm. A new mutation operator, viz. the Gene Silence mutation operator, is devised for evolving better music concert note swarams. The details pertaining to
suitable individual representation method, fitness assessment,
selection and genetic operators are elaborated to evolve
carnatic music concert Swarasthanas (musical Concert Note).
The new Gene silence mutation Operator is introduced to
silence the allele/gene in the chromosome. The silenced gene
will not participate in the mutation process. The advantage of
having this new operator is to have diversity in the
population, through which the evolved music will have better
music notes (swaras). The work is implemented using ECJ
simulator (Evolutionary Computation in Java).
Keywords: Evolutionary Computation, Genetic Algorithm,
Evolutionary Music, Individual Representation, Selection,
Genetic Operator, Gene Silence Mutation Operator.

I.

INTRODUCTION

Evolutionary Computation [1][6][5] algorithms


are designed to mimic the performance of biological
systems. Evolutionary Computing algorithms are used for
search and optimization applications. Evolutionary
algorithms search or operate on a given population of
potential solutions to find solution that approach some
specification or criteria. The algorithm applies the principle
of survival of the fittest to find better and better
approximations.
A new set [6] of approximations is created for
each generation by the process of selecting individual
potential solutions (individuals) according to their level of
fitness in the problem domain and breeding them together
using operators borrowed from natural genetics. This
process leads to the evolution of populations of individuals
Proc. of the Intl. Conf. on Computer Applications
Volume 1. Copyright 2012 Techno Forum Group, India.
ISBN:978-81-920575-5-2:: doi: 10.73865/ISBN_0768

that are better suited to their environment than the


individuals that they were created from, just as in natural
adaptation.
Genetic Algorithm (GA) [7][3] is the larger
subdivision class of evolutionary computing with
computational intelligence for search optimization. It
generates individuals and population inspired by natural
evolution such as crossover and mutation. Genetic
Algorithm requires genetic representation of problem
domain called genome and a fitness function which may be
single or multi objective to evaluate the solution generated.
As soon as individual representation and fitness functions
are defined GA proceeds to populate solutions randomly
and improves it iteratively through genetic operators
namely cross over, mutation, selection and reproduction
operator. This process continues until a termination
condition is reached. Genetic Algorithm finds its
application in engineering fields like scheduling, computer
aided design, economics, manufacturing, bioinformatics
and evolutionary music.
Evolutionary Music [8] is the audio counterpart to
Evolutionary art, whereby algorithmic music is created
using an evolutionary algorithm. The process begins with a
population of individuals which by some means or other
produce audio (e.g. a piece, melody, or loop), which is
either initialized randomly or based on human-generated
music. The rise of evolutionary music marks a shift in
computer music research from the study of musical
structures to an examination of the dynamic interaction
between aspects of musical systems. There have been some
significant advances in this area by researchers examining
interactive performance.
The music has two types of classification as Western
Music and Indian Classical Music. The Indian classic
music can be further divided into Hindustani Music and
Carnatic Music. Carnatic music is a South Indian Classical
music which consists of Raga (melody), seven Swaras
(note with pitch), and Thalas (rhythmic beat) to produce a
musical concert note. Each Raga consists of template
Arohana and Avarohana from which notes (Swaras) are
produced. It consists of 72 main ragas (melakartha ragas)
from which many combinations of ragas (janya ragas) can
be formed. According to the Sapta Thla system, there are
seven families of Thla, each of which can incorporate one
of five jatis which produce a total of 35 thalas.

ACM #: dber.imera.10.73865


II. ORGANIZATION OF THE PAPER


The paper is organized as follows. Section III discusses the features of Carnatic music. Section IV presents the literature survey on GA based music evolution. Section V narrates the GA method of evolving better Carnatic music concert Swaras. Section VI presents the experimental results and Section VII provides the conclusion.
III. CARNATIC MUSIC
Carnatic Music [13] is the classical music of Southern
India. The basic form is a monophonic song with
improvised variations. There are 72 basic scales on the
octave, and a rich variety of melodic motion. Both melodic
and rhythmic structures are varied and compelling. This is
one of the world's oldest & richest musical traditions. The
important elements[13] of carnatic music are Sruthi which
is musical pitch, swara which specifies note with lower or
higher position, raga which prescribes the rules for melody,
thala which refers to beat set(measure of time) of
particular composition. the other elements are
the Arohana which defines the ascending order of swaras
in the raga, and Avarohana defines the descending order.
The Arohana and Avarohana are orthogonal in that the
ascending order and descending order can be different for a
raga .
The swaras [13] are the seven notes of the scale in music, namely shadja, rishabh, gandhar, madhyam, pancham, dhaivat and nishad; they are shortened to Sa, Ri, Ga, Ma, Pa, Dha, and Ni and written as S, R, G, M, P, D, N. These seven swaras have twelve forms: S, R1, R2, G1, R3, G2, G3, M1, M2, P, D1, D2, N1, D3, N2.
The ragas have two classifications based on linearity (all Swaras) and non-linearity (missing Swaras) of the Arohana and Avarohana. The first is the Sampoorna ragas, called melakartha ragas, with all 7 notes and with variations in rishabha, gandhar, madhyam, pancham, dhaivatham and nishadam. There are 72 such ragas in total. Some sample ragas are given in the table below.
TABLE 1 SAMPLE SAMPOORNA RAGAS
Name | Arohanam | Avarohanam
kanakAngi | S R1 G1 M1 P D1 N1 S | S N1 D1 P M1 G1 R1 S
rathnAngi | S R1 G1 M1 P D1 N2 S | S N2 D1 P M1 G1 R1 S
gAnamUrthi | S R1 G1 M1 P D1 N3 S | S N3 D1 P M1 G1 R1 S
vanaspathi | S R1 G1 M1 P D2 N2 S | S N2 D2 P M1 G1 R1 S
mAnavathi | S R1 G1 M1 P D2 N3 S | S N3 D2 P M1 G1 R1 S

The second classification is janya ragas, which are formed from the melakartha ragas and are hence said to be born from the melakarthas. Sample ragas are given in the table below.
TABLE 2 SAMPLE JANYA RAGAS
Raga name | Belongs to melakartha | Arohana | Avarohana
AHiri | 8 | S R1 S G3 M1 P D1 N2 S | S N2 D1 P M1 G3 R1 S
asAvEri | 8 | S R1 M1 P D1 S | S N2 S P D1 M1 P R1 G2 R1 S
bhUpALam | 8 | S R1 G2 P D1 S | S D1 P G2 R1 S
dhanyAsi | 8 | S G2 M1 P N2 S | S N2 D1 P M1 G2 R1 S
punnAgavarALi | 8 | N2 , S R1 G2 M1 P D1 N2 | N2 D1 P M1 G2 R1 S N2 ,

The seven Tala families and the number of aksharas (beat count) for each of the 35 talas are tabulated below.

TABLE 3 TALA SYSTEM
Tala | Tisra | Chatusra | Khanda | Misra | Sankeerna
Dhruva | 11 | 14 | 17 | 23 | 29
Matya | 8 | 10 | 12 | 16 | 20
Rupaka | 5 | 6 | 7 | 9 | 11
Jhampa | 6 | 7 | 8 | 10 | 12
Triputa | 7 | 8 | 9 | 11 | 13
Ata | 10 | 12 | 14 | 18 | 22
Eka | 3 | 4 | 5 | 7 | 9

In the above table the number represents the beat count. For example, adhi tala refers to the chatusra triputa tala whose beat count is 8, and roopaka tala refers to the tisra eka tala whose beat count is 3. Based on the above characteristics, Swaras can be evolved based on the raga and thala system.
IV. LITERATURE SURVEY
The following researchers have done work to evolve music through GA. Dragan Matic [1], in his paper titled "A genetic Algorithm for composing music", proposes a
genetic algorithm for composing music by applying a
predefined rhythm to initial population giving good
solutions with allowed tunes of MIDI notes as individual
representation and multi objective fitness function to
evaluate the MIDI notes evolved. He states Modified
genetic operators enable significantly changing scheduling
of pitches and breaks, which can restore good genetic
material and prevent from premature convergence in bad
suboptimal solutions. He used set of allowed tones as
individual representation. He gives weight to the fitness
based on intervals, similarity, pitch and chord. Based on
summation of this fitness weight he evolved melodies of
western music. In the same way Ender Ozean et al[2] in his
paper A genetic Algorithm for Generating Improvised
music proceeded by considering two objectives for fitness
function . One is core objective which gives beginning
note, ending note, relationship and direction of notes. The
other is adjustable objective for rest proportion and hold
event proportion and changes in direction of chords and
also Damon Daylamani Zad et al[3] in their paper specified
use of precise assumptions and adequate fitness function it
is possible to change the music composing into an


optimization problem. He considered entropy of the notes


distribution as a factor of fitness function and developing
mutation and crossover functions based on harmonic rules.
He considered only one objective function. All the above
researchers carried out their work based on Western music
MIDI notes.
The music can also be evolved instrumentally. Research work in this area was done by David E. Goldberg et al. [8], who in their paper titled "Genetic Algorithm and Computer Assisted Music Composition" make use of the thematic bridging concept with GA to evolve music. The bridging is brought about by modifying or reordering pitch, amplitude or duration values. They deleted the last element, rotated the individual and mutated the first element to get the next population. W. D. Potter et al. [9], in their paper "GA based music for guitar", make use of a distributed genetic algorithm. They introduced a note weight component for chords to calculate fitness. A fitness reward is assigned based on the number of notes in each chord.
The music can also be evolved in an interactive way, as specified by Hua Zhu et al. [10], who generate emotional music interactively with subjective measures modeled by the selection of rules and rule weights. Takehisa Onisawa et al. [11] also evolved music interactively with human evaluation as the fitness function.
In the field of Carnatic music, G. Jeyakumar et al. [4] try to bring in the idea of automated Carnatic music generation using a genetic algorithm and its applications, with a single objective fitness function. They show that a variety of compositions on specific ragas can be generated, which can be improvised to deliver a good quality musical concert. They used a database to mine the Swaras of different ragas. The same author makes use of the generated Carnatic music in the field of medical therapy. In that paper also, only a single objective function and the crossover operator are used to generate music.
In our work, we are devising a special operator to
evolve better carnatic music swarams. The explanation of
the work is presented in the following sections.
V.

GENETIC ALGORITHM METHOD OF


EVOLVING CARNATIC MUSIC

The GA Process of evolving music notes is given below


Algorithm for GA Process
Step 1 Generate Initial population with individual
representation as gene/allele (Swaras).
Step 2 Evaluate fitness for the individuals produced.
Step 3 If termination criteria is achieved then stop the
process and report best individual.
Step 4 Else select the individual to mutate and
reproduce based on the probability.
Step 5 The best individual is selected for reproduction.
Step 6 Other individuals will be mutated using gene


silence operator and new offsprings are


produced. Continue the process from step 2.
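A compact sketch of this loop is given below (the experiments themselves are run in the ECJ simulator); evaluate and gene_silence_mutate are hypothetical stand-ins for the fitness function and the operator detailed in the following sub sections.

```python
import random

def run_ga(population, evaluate, gene_silence_mutate,
           generations=50, p_mutation=0.8):
    """Evaluate, keep the best individual unchanged, mutate the rest."""
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        next_generation = [ranked[0]]             # best is reproduced (Step 5)
        for individual in ranked[1:]:
            if random.random() < p_mutation:      # 0.8 mutation / 0.2 reproduction
                next_generation.append(gene_silence_mutate(individual))
            else:
                next_generation.append(individual)
        population = next_generation
    return max(population, key=evaluate)
```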
The detailed narration of the GA process is presented in
the following sub sections
5.1 Individual Representation
The individual (chromosome) is a collection of relevant
Swaras (gene) specific to ragas.
A sample individual for kanakangi raga is
S R1 G1 M1 P D1 N1 S as shown in table 1.
The number of genes in the chromosome is specific to the thala. For adhi tala the number of genes in the individual will be a multiple of 8. For roopaka thala the number of genes will be a multiple of 3, as shown in table 4.
The sample initial population generated for Kanakangi
raga and adhi tala is
Individual 1

G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P
P M1 M1 G1
Individual 2
D1 P M1 G1 R1 G1 M1 P
Individual 3
S G1 M1 R1 M1 P D1 N1 S N1 D1 P
M1 G1 R1 G1
Individual 4
S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1
R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1
P M1 G1 R1 S G1
The population is generated in accordance to required
raga and thala.
5.2 Fitness Function
The fitness function of an individual is assessed based on
their thala beat count, correctness of Arohana and
avarohana and number of incorrect swarams.
The fitness assessment is carried out as follows:
Fitness = F1 + F2 + F3, if F3 = 0
Fitness = -(F1 + F2 + F3), if F3 < 0
The narration of F1, F2 and F3 is presented below.

The first objective function gives the multiples of Swaras


in Thala beat count.
F1 = No of genes / Thala Beat count
F1 calculation for above given sample set of individual
is given below
TABLE 4 MULTIPLES OF SWARAS IN TALA BEAT COUNT
No | Individual | F1
Individual 1 | G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1 | 2
Individual 2 | D1 P M1 G1 R1 G1 M1 P | 1
Individual 3 | S G1 M1 R1 M1 P D1 N1 S N1 D1 P M1 G1 R1 G1 | 2
Individual 4 | S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1 | 4

The second objective function specifies whether the individual follows the Arohana (ascending order) or Avarohana (descending order):
F2 = 1, if a correct Arohana/Avarohana exists
F2 = 0, otherwise
The F2 calculation for the above mentioned four
individuals is given in Table 5.
TABLE 5 CORRECT AROHANA/AVAROHANA COMPONENT
No | Individual | F2
Individual 1 | G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1 | 1
Individual 2 | D1 P M1 G1 R1 G1 M1 P | 1
Individual 3 | S G1 M1 R1 M1 P D1 N1 S N1 D1 P M1 G1 R1 G1 | 0
Individual 4 | S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1 | 0

The assessment procedure of F3 is as follows


F3 = No of incorrect Swarams in the individual * (-1)
F3 calculation for the above 4 individuals will be as
follows
TABLE 6 NUMBER OF INCORRECT SWARAMS
No | Individual | F3
Individual 1 | G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1 | 0
Individual 2 | D1 P M1 G1 R1 G1 M1 P | 0
Individual 3 | S G1 M1 R1 M1 P D1 N1 S N1 D1 P M1 G1 R1 G1 | -1
Individual 4 | S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1 | -1
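A sketch of how F1, F2 and F3 combine under the definition above is given below; follows_arohana_avarohana and is_correct_swaram are hypothetical helpers standing in for the raga-specific checks.

```python
def fitness(individual, thala_beats, follows_arohana_avarohana, is_correct_swaram):
    """F1: multiples of the thala beat count; F2: correct arohana/avarohana;
    F3: minus the number of incorrect swarams; combined as defined above."""
    f1 = len(individual) // thala_beats
    f2 = 1 if follows_arohana_avarohana(individual) else 0
    f3 = -sum(1 for swaram in individual if not is_correct_swaram(swaram))
    total = f1 + f2 + f3
    return total if f3 == 0 else -total
```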

The total fitness calculated for the above mentioned four


individuals is given in Table 7.
TABLE 7 FITNESS VALUE (F)
No | Individual | Fitness (F)
Individual 1 | G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1 | 3
Individual 2 | D1 P M1 G1 R1 G1 M1 P | 2
Individual 3 | S G1 M1 R1 M1 P D1 N1 S N1 D1 P M1 G1 R1 G1 | -1
Individual 4 | S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1 | -3

5.3 Genetic Operators

1. Selection Operator
In this work the individuals are randomly selected for
mutation and reproduction in accordance to probability of
selection. Cross over is not applied in this work.
2. Reproduction
The individual with the highest fitness is considered the best individual and is taken for reproduction. In the above example, Individual 1 (G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1) has the highest fitness value of 3 and is therefore considered for reproduction.
3. Mutation
The individual with the lowest (most negative) fitness will be considered for mutation; in the above example this is Individual 4 (S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1).
If standard mutation operator is applied there is a
chance of fitness degradation and hence a new
operator viz. gene silencing operator is devised in
which the genes with incorrect Swarams alone
will participate in the mutation process. The genes
corresponding to the correct Swarams are silenced
thereby they will not participate in the mutation
process. The position of incorrect individuals is
taken from fitness 4 and only those part of
individuals are mutated randomly over multiple
points. Other parts are masked from mutation.
The sample scenario is given below considering above
generation
For individual 4 Fitness 4 = 2 so the genes in the 1,3,4
subsets are not mutated. Only the gene subset 2 alone is
mutated. In single point mutation only one point is mutated
randomly in 2nd subset and in multi point mutation more
than one point is mutated.
N1 D1 P M1 G1 R1 S P alone will be mutated because
of gene silencing operator in Individual 4.This will give
rise to next Generation of individuals as
Individual 1
Individual 2
Individual 3
Individual 4

G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P
P M1 M1 G1
D1 P M1 G1 R1 G1 M1 P
S G1 M1 R1 M1 P D1 N1 S N1 D1 P
M1 G1 R1 S
S R1 G1 G1 M1 P D1 D1 N1 S P M1 G1
R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1
P M1 G1 R1 S G1
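A sketch of this gene silencing step is given below: genes at correct positions are masked (silenced) and only the flagged incorrect positions can change; incorrect_positions and swara_pool are assumed inputs standing in for the raga-specific bookkeeping.

```python
import random

def gene_silence_mutate(individual, incorrect_positions, swara_pool, points=1):
    """Mutate only the non-silenced genes; all other positions stay untouched."""
    child = list(individual)
    candidates = list(incorrect_positions)
    random.shuffle(candidates)
    for pos in candidates[:points]:               # single or multi point mutation
        child[pos] = random.choice(swara_pool)    # replace an incorrect swaram
    return child

# Hypothetical example with the kanakangi swara pool:
pool = ["S", "R1", "G1", "M1", "P", "D1", "N1"]
parent = "S R1 G1 G1 M1 P P D1".split()
print(gene_silence_mutate(parent, incorrect_positions=[2, 3], swara_pool=pool, points=2))
```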

Now the fitness will be calculated again as

Individual | F1 | F2 | F3 | Fitness
G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1 | 2 | 1 | 0 | 3
D1 P M1 G1 R1 G1 M1 P | 1 | 1 | 0 | 2
S G1 M1 R1 M1 P D1 N1 S N1 D1 P M1 G1 R1 S | 2 | 0 | -1 | -1
S R1 G1 G1 M1 P D1 D1 N1 S P M1 G1 R1 S P G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1 | 4 | 0 | -2 | -2

Suppose Individual 4 is taken for mutation; the fitness assessment of the individuals will then be as given below.

Individual | F1 | F2 | F3 | Fitness
G1 G1 R1 G1 M1 M1 P D1 P D1 D1 P P M1 M1 G1 | 2 | 1 | 0 | 3
D1 P M1 G1 R1 G1 M1 P | 1 | 1 | 0 | 2
S G1 M1 R1 M1 P D1 N1 S N1 D1 P M1 G1 R1 S | 2 | 0 | -1 | -1
S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S R1 G1 M1 P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1 | 4 | 1 | 0 | 5

Now the best individual considered will be Individual 4


(given below) with the fitness of 5.
S R1 G1 G1 M1 P P D1 N1 D1 P M1 G1 R1 S R1 G1 M1
P D1 N1 N1 N1 S N1 D1 P M1 G1 R1 S G1
VI. EXPERIMENTAL RESULTS
The experimental results for this research work are obtained with the following parameters:
Initial Population: 100
No. of Generations: 50
Mutation Probability: 0.8
Reproduction Probability: 0.2
Mutation and reproduction are selected based on an 80-20 ratio trial used to produce the result. The probability can also be varied to any other ratio.
Several runs and trials have been attempted for various ragas and thalas. To show the result, the kanakangi raga and


adhi thala concert note evolution was taken for


consideration.
Figure 1 shows the fitness generation for the best
individuals evolved for each generation for kanakangi raga
and adhi thala.
Figure 1 shows the fitness improvement as the generations progress. It shows that better individuals are evolved in the subsequent generations. The best individual evolved at the end of a run is given below:
R1 M1 R1 D1 P G1 M1 R1 N1 D1 R1 R1 D1 P G1 N1 G1 M1 P P N1
N1 G1 R1 N1 G1 G1 R1 M1 M1 R1 D1 N1 D1 M1 D1 P G1 G1 G1
D1 G1 R1 N1 R1 P D1 M1 D1 R1 M1 M1 R1 N1 P G1 G1 N1 M1 R1
P G1 M1 G1 G1 M1 P D1 M1 R1 N1 G1 P R1 N1 G1 R1 D1 M1 G1
R1 G1 P M1 P N1 N1 D1 D1 P M1 M1 G1 D1 G1 M1 M1 P M1 M1
M1 G1 N1 R1 M1 N1 M1 G1 D1 D1 D1 G1 M1 N1 G1 G1 P N1 M1
G1 N1 P D1 R1 R1 R1 N1 M1 P G1 G1 N1 D1 N1 R1 N1 R1 N1 D1
M1 N1 D1 G1 D1 M1 D1 P R1 D1 D1 D1 N1 D1 M1 G1 M1 P M1 P
R1 R1 M1 R1 R1 D1 R1 R1 M1 R1 N1 G1 R1 N1 M1 N1 M1 P P G1
N1 D1 G1 R1 R1 M1 M1 M1 D1 R1 M1 D1 R1 M1 R1 D1 M1 R1 G1
D1 D1 G1 D1 P G1 G1 G1 M1 P D1 R1 R1 N1 R1 G1 M1 R1 N1 N1
M1 P M1 M1 P G1 R1 D1 D1 P D1 D1 G1 P M1 P G1 N1 N1 P G1
R1 R1 R1 N1 P M1 G1 G1 D1 R1 P P N1 P M1 M1 G1 G1 N1 G1 P
G1 G1 P N1 D1 N1 P D1 R1 P M1 M1 M1 G1 M1 R1 G1 G1 P G1 P
N1 N1 P R1 P M1 N1 G1 N1 D1 G1 R1 P G1 M1 D1 R1 R1 P N1 P
M1 G1 G1 N1 D1 G1 M1 D1 R1 D1 N1 M1 P R1 N1 P D1 R1 D1 R1
D1 R1 R1 G1 G1 N1 M1 N1 M1 G1 D1 D1 G1 R1 G1 D1 M1 N1 R1
P D1 P N1 M1 G1 D1 M1 P M1 P M1 G1 D1 M1 G1 R1 M1 G1 G1
D1 R1 R1 N1 D1 R1 N1 P D1 G1 M1 P M1 D1 R1 N1 M1 M1 R1 N1
R1 R1 M1 M1 N1 G1 P D1 N1 N1 D1 M1 N1 D1 G1 M1 M1 G1 M1
G1 D1 M1 P R1 G1 D1 M1 R1 P R1 M1 M1 P N1 D1 D1 D1 D1 P P
M1 N1 R1 M1 M1 G1 N1 G1 R1 D1 N1 P P M1 N1 G1 N1 N1 P G1
R1 R1 R1 N1 P M1 G1 G1 D1 R1 P P N1 P M1 M1 G1 G1 N1 G1 P
G1 G1 P N1 D1 N1 P D1 R1 P M1 M1 M1 G1 M1 R1 G1 G1 P G1 P
N1 N1 P R1 P M1 N1 G1 N1 D1 G1 R1 P G1 M1 D1 R1 R1 P N1 P
M1 G1 G1 N1 D1 G1 M1 D1 R1 D1 N1 M1 P R1 N1 P D1 R1 D1 R1
D1 R1 R1 G1 G1 N1 M1 N1 M1 G1 D1 D1 G1 R1 G1 D1 M1 N1 R1
P D1 P N1 M1 G1 D1 M1 P M1 P M1 G1 D1 M1 G1 R1 M1 G1 G1
D1 R1 R1 N1 D1 R1 N1 P D1 G1 M1 P M1 D1 R1 N1 M1 M1 R1 N1
R1 R1 M1 M1 N1 G1 P D1 N1 N1 D1 M1 N1 D1 G1 M1 M1 G1 M1
G1 D1 M1 P R1 G1 D1 M1 R1 P R1 M1 M1 P N1 D1 D1 D1 D1 P P
M1 N1 R1 M1 M1 G1 N1 G1 R1 D1 N1 P P M1 N1 M1 G1 N1 G1 R1
D1 N1 P P M1 N1 G1 N1 N1 P G1 R1 R1 R1 N1 P M1 G1 G1 D1 R1
P P N1 P M1 M1 G1 G1 N1 G1 P G1 G1 P N1 D1 N1 P D1 R1 P M1
M1 M1 G1 M1 R1 G1 G1 P G1 P N1 N1 P R1 P M1 N1 G1 N1 D1 G1
R1 P G1 M1 D1 R1 R1 P N1 P M1 G1 G1 N1 D1 G1 M1 D1 R1 D1
N1 M1 P R1 N1 P D1 R1 D1 R1 D1 R1 R1 G1 G1 N1 M1 N1 M1 G1
D1 D1 G1 R1 G1 D1 M1 N1 R1 P D1 P N1 M1 G1 D1 M1 P M1 P
M1 G1 D1 M1 G1 R1 M1 G1 G1 D1 R1 R1 N1 D1 R1 N1 P D1 G1
M1 P M1 D1 R1 M1 D1 R1 D1 N1 M1 P R1 N1 P D1 R1 D1 R1 D1
R1 R1 G1 G1 N1 M1 N1 M1 G1 D1 D1 G1 R1 G1 D1 M1 N1 R1 P
D1 P N1 M1 G1 D1 M1 P M1 P M1 G1 D1 M1 G1 R1 M1 G1 G1 D1
R1 R1 N1 D1 R1 N1 P D1 G1 M1 P M1 D1 R1

The note evolved will last for 3 minutes. It needs a lot of


practice for a vocalist to formulate such a concert note.
This work eases the process of singers to formulate and
deliver a concert.


FIGURE 1 GENERATION VS FITNESS

Similarly the individuals can be evolved for other ragas


and thalas with constant fitness improvement.
VII. CONCLUSION
The trial run shows constant improvement in the fitness value of the individuals. The Gene Silencing operator helps to mutate only the specified subsets so that the other subsets with correct Swaras (genes/alleles) are not disturbed, and thereby the best individual, with highly correct swarams, survives as the fittest. The result gives the output for the kanakangi raga and adhi tala as a trial. In the same way any raga and thala combination can be evolved with the best individual to evolve the concert notes. This work helps musicians to learn or to verify the notes they sing and eases the process of singing or learning music. Automatic evolution of music can also be used in the field of medical therapy.
REFERENCES
[1]. Dragan Matic , A genetic Algorithm for composing music ,
Yugoslav Journal of Operations Research, 2010.
[2]. Ender Ozean, Turker Ercal, A genetic Algorithm for generation
improvised music,2007
[3]. Andre Horner, David E. Goldberg, Genetic Algorithm and Computer Assisted Music Composition, CCSR, 1991
[4]. Jeyakumar, J., Computer Assisted Music, International Journal of Science and Technology, 2011
[5]. Nada, Evolutionary Algorithm Definition, American Journal of Engineering and Applied Sciences, 2009
[6]. Wikipedia/evolutionary computing and genetic algorithm
[7]. Wikipedia/genetic algorithm
[8]. Wikipedia/Carnatic music
[9]. http://www.wardsystems.com/manuals/genehunter/index.html?ov
erview_of_the_ga_process.htm
[10]. http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/genetic/part2/fa
q-doc-2.html


Incorporating 3D Features in Internet TV: A Proposal


A.Cellatoglu

K.Balasubramanian

Dept. of Comp. Engg., European univ of Lefke


Turkish Republic of Northern Cyprus
Mersin-10, Turkey

Dept. of EE Engg., European univ of Lefke


Turkish Republic of Northern Cyprus
Mersin-10, Turkey

Abstract - Amongst several stereo display techniques being experimented with in practice, a few techniques are favoured for their
adaptation to television systems. In this project proposal,
viable techniques such as bi-circular polarization based
stereoscopic display, vari-focal lens based volumetric display
and lenticular lens based auto-stereoscopic display techniques
are extended to internet TV for communicating the video
image packets and producing 3D vision in the computers
accessing the web. The flat panel computer monitor is adapted
to stereo display for producing 3D vision. Scalable H.264/AVC
video coding scheme with error concealment used for standard
video is suitably modified and applied to implement the coding
scheme involved in the three techniques considered here. The
methodologies involved, difficulties encountered and
performance analysis of the techniques are presented.
Keywords- autostereoscopy; bi-circular polarization;
Internet TV; scalable H.264/AVC; vari-focal lens;

I. INTRODUCTION
Stereoscopic methods of displaying images require
viewing aids, such as color filters[1] or polarised filters[2],
for viewers to have an illusion of 3D vision. On the other
hand, techniques such as volumetric display tank, autostereoscopic technique employing lenticular lens and
holographic methods of displaying 3D images do not call
for any special viewing aids for depth perception[3].
Therefore, researches are progressing in these directions as
to contribute for future 3D television.
As computers have become important part of everyday
life, viewing television images in computer monitors is
preferred in several instances while working with computers
for other tasks. Internet TV is solution for watching
television images on the computers screen lively. The trend
on watching internet TV is increasing rapidly nowadays and
its importance is finding a momentum. As researches on 3D
TV transmission and display are progressing several
techniques are experimented to produce 3D display on TV
screen. It is worthwhile to consider extending the viable 3D
TV display techniques to internet TV as to watch in
computer screen with depth perception. This paper makes an

attempt in this direction.


II. REALIZATION OF INTERNET TV THROUGH BICIRCULAR POLARIZATION
Use of bi-circular polarization for stereo vision routing
the stereoscopic images displayed in two picture tubes to
two eyes of the viewer was reported earlier [4]. This has
reduced the crosstalk effect to a greater extent and yielded
good 3D vision. Later this technique was extended for
displaying stereo pair of images in a single picture tube with
electro-optically polarized screen [5]. Now we extend this
concept to transmit and display video images in flat panel
screen of the PC monitor. Fig.1 shows the simple block
schematic of the receiver system displaying stereo pair of
images.
On the transmission side, the stereoscopic image pair
captured in video cameras are digitized, encoded and the
packets are posted online. The left and right images are sent
on successive video frames. In order to identify the left and
right video images in the PC, an additional sync signal
known as Dsync [4] is used in the system. Dsync is inserted
in the packets by using first few pixels of the left-eye-view
video frame and encoded appropriately.
While reconstructing the video signals with
insertion of usual Hsync and Vsync signals the Dsync is
also identified and used with the electro-optic drive. The
screen of the flat panel monitor is fitted with a stack of [5]
plane polarizer and quarter wave plate as to produce bicircularly polarized optical waves emerging from the screen.
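The exact Dsync encoding is not detailed here, so the sketch below only illustrates the idea under the assumption of a fixed marker written into the first few pixels of every left-eye frame and checked at the receiver to set the electro-optic polarization state.

```python
DSYNC_MARKER = [255, 0, 255, 0]            # hypothetical 4-pixel marker pattern

def insert_dsync(frame_pixels, is_left_eye):
    """Transmitter side: tag a left-eye frame by overwriting its first pixels."""
    if is_left_eye:
        frame_pixels[:len(DSYNC_MARKER)] = DSYNC_MARKER
    return frame_pixels

def polarization_state(frame_pixels):
    """Receiver side: choose the electro-optic drive from the detected Dsync."""
    is_left_eye = frame_pixels[:len(DSYNC_MARKER)] == DSYNC_MARKER
    return "left-circular" if is_left_eye else "right-circular"
```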

Proc. of the Intl. Conf. on Computer Applications


Volume 1. Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2:: doi: 10. 73718/ISBN_0768
ACM #: dber.imera.10. 73718



camera is regenerated (Fig.3) and driven to the electro-optic lens. The focal variation of the electro-optic lens together with the fixed focal length of another lens conveys an illusion of the image frames displayed in the screen being taken to respectively farther distances in space, yielding 3D vision. Being volumetric in nature, this concedes a parallax effect in both dimensions.

Fig.1. Receiver of Bi-circular Polarization Based System

Fig.2. 10-Level Staircase Drive Pattern to Electro-optic Lensular Screen


The polarization vector is electro-optically changed to


orthogonal angles, in synchronism with the display of left
and right images in the screen. This causes left and right
images displayed on the screen to produce respectively left
and right circularly polarized waves propagating from the
screen. The viewer wearing in his eyes a stack of glasses of
quarter wave plate and polarizer analyzer with respective
polarized angles get left circular wave to left eye and right
circular wave to right eye rendering 3D vision. This method,
however, does not concede full parallax effect. On the other
hand, bi-circular polarization method has proved to be the
best amongst the anaglyph glasses type of producing stereo
vision [6]. In this realization scheme, while working with
PC for monitoring other information, the optical fitting on
the screen needs be removed.
III. INTERNET TV WITH VOLUMETRIC MEDIA
Direct display of volumetric images in the screen of the
PC monitor makes it hard for its realization. Therefore, by
employing vari-focal lens imaging the illusion of the vision
of volumetric images is indirectly produced [3]. Ten
sectional images of the 3D object, at evenly set lateral
intervals, are picked up by a camera having ten-stepstaircase waveform (Fig.2) driven electro-optic lens.
Capturing sectional images is achieved by the differences in
focusing caused by the voltages of the staircase steps. The
duration of each step is set as the period of a frame for
giving its sectional image. Therefore, in one switching
cycle there would be ten video frames conveying the
successive sectional images of the object.
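A sketch of this drive pattern is given below, assuming one staircase step per video frame and a placeholder voltage range.

```python
def staircase_level(frame_index, levels=10, v_min=0.0, v_max=1.0):
    """Electro-optic lens drive level: one step per frame, repeating every
    `levels` frames (one switching cycle); the voltage range is a placeholder."""
    step = frame_index % levels
    return v_min + step * (v_max - v_min) / (levels - 1)

# One switching cycle: ten frames, ten focal settings.
print([round(staircase_level(i), 2) for i in range(10)])
```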
While transmitting the sequential frames, in this scheme,
ten coded pattern of pixels are inserted as sync signals at the
beginning of each frame. In PC these coded pattern are
identified and a similar staircase wave employed in the


Fig.3. Receiver of Vari-focal Image Presentation

For watching the screen of the PC for regular use, the lens fitting on the screen is moved away.
IV. AUTOSTEREOSCOPIC TECHNIQUES FOR INTERNET TV
Auto-stereoscopic method of 3D image registration and
reconstruction employs lenticular lens screen shown in
Fig.4. Lenticular lens has a set of cylindrical lenses lined up
to register the images of the object as graded lines at its
back focal plane. Multiple angle views of the object are
captured in four video cameras and the images are projected
through respective video projectors onto a lenticular screen
as shown in Fig.5. At its back focal plane it forms a
complex picture pattern with a pattern of groups of four line
images concerning to different angular views of the object
seen through each lens. With a high resolution video camera
the complex picture pattern is captured and the digitized
frames are posted on the web as packets.


The complex colour picture pattern being very high


resolution, it is difficult to capture with a conventional standard video camera. Therefore, a specially made high resolution video camera needs to be employed to register the image of the complex colour picture pattern.


Fig.4. A Part of Lenticular Screen

The resolution needed for good quality image


registration and reproduction is four times in the vertical direction and three times in the horizontal direction, making an improvement by a factor of 12 over the standard video image. This covers the aspect ratio of 3:4. Therefore, in
place of one pixel there are 12 pixels representing the image
contents. Fig.6 shows its segmentation process for a part of
complex picture pattern.
Since we can not post these images as packets in the web
in normal procedures, a process known as segmentation is
needed. The image of complex picture pattern bearing high
resolution is spatially divided into 12 segments resulting in
sub video frames. Therefore, it creates a video sequence of
segmented 12 sub video frames representing one frame of
image of complex colour picture pattern. Standard Sync
signals are available in the main frame and additional Sync
signals are incorporated in sub video frames as to maintain
the synchronism process.
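A sketch of this segmentation and of the receiver-side de-segmentation is given below, splitting one high-resolution frame into 4 x 3 = 12 sub-frames and reassembling them; frames are modelled simply as lists of pixel rows.

```python
def segment(frame, rows=4, cols=3):
    """Split one high-resolution frame into rows*cols sub-frames for packetising."""
    h, w = len(frame), len(frame[0])
    sh, sw = h // rows, w // cols
    return [[row[c * sw:(c + 1) * sw] for row in frame[r * sh:(r + 1) * sh]]
            for r in range(rows) for c in range(cols)]

def desegment(segments, rows=4, cols=3):
    """Receiver side: reassemble the sub-frames into the complex picture pattern."""
    frame = []
    for r in range(rows):
        block = segments[r * cols:(r + 1) * cols]
        for line in range(len(block[0])):
            frame.append([px for seg in block for px in seg[line]])
    return frame

toy = [[(y, x) for x in range(12)] for y in range(8)]   # small 8x12 test "image"
assert desegment(segment(toy)) == toy
```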

Fig.5. Auto-stereoscopic Image Registration

Fig.6. Image Segmentation

The received signal from the packets and segments in the PC undergoes a de-segmentation process, reconstructing the original complex colour picture pattern.
De-segmentation involves in integrating the 12 segmented
images into one complex video frame. The flat panel screen
holds a high resolution screen taking the high density pixel
contents of the image into screen where the lenticular lens is
fixed on it (Fig.7). The geometry and structure of the
lenticular lens screen used in receiver PC screen is
compatible with that of the lenticular screen used in the
transmitter. The display of complex picture pattern on the
screen is optically taken to the viewer through the lenticular
screen conceding 3D vision. The convergence of images of
four video cameras of displaced angular positions results in
parallax in horizontal direction limited to three adjacent
flipped views.
While using the PC for regular purposes, the display of
video signal is in standard format where resolution is 1/4th
of the complex colour picture pattern. Since the video signal


Fig.7. Auto-stereoscopic Image Reconstruction

Fig.8. Excitation for High Resolution and Normal Resolution Displays

Fig.9. Transmission and Reception of Images with Internet TV

is generated in conventional format, a video adapter is


attached to make a trace of display one out of every 4th
vertical line and one out of every third horizontal line.
(Fig.8).
V. CODING SCHEMES EMPLOYED
The digitized video images are encoded using efficient
coding schemes [7] and are sent as packets to video on
demand locations. The H.264/AVC video coding scheme [8] is employed for more efficient video compression, where the frames are sub-divided into slices. In the receiver an
H.264/AVC decoder is used to recover the original images.
In the three schemes considered here we use scalable
H.264/AVC video coding scheme with improved efficiency
and error concealment.
The process of transmitting video images with the
internet TV is illustrated in Fig.9. The recorded video image
from camera is digitally stored and extended to server for
broadcasting to internet. Scalable H264/AVC Coding is
applied here for the slices and packets made.


This goes to the concerned node in the internet and


makes it available worldwide. In a receiving node the
packets are accumulated by service provider and extended
to the consumers. The server, being installed with Scalable H264/AVC decoding software, merges the slices and formulates the standard video.

Conventional video transmission with web employs


one video image only in successive frames after video
coding and for making as packets. This video coding has to
be suitably modified and adapted for the transmission of
video images in the three stereo techniques considered here.

The bi-circular polarization based technique


employs successive transmission of left eye view and right
eye view video frames. Therefore, the video coding and
placing packets in the web have to be such that left eye view
image frame and right eye view image frame are launched
successively. Dsync information is mixed along Vsync and
Hsync information such that the receiver in the computer is
able to detect the Dsync and use it for electro-optic
polarization needed for bi-circular optical polarization.
Whenever the concerned website is launched this becomes
activated. The Dsync detection and elctrooptic polarization
activities do not arise when computer performs other tasks


and they do not affect the visual display on the monitor


screen.
Vari-focal lens technique needs the screen to be fixed
with the supporting lens system as to produce the illusion of
volumetric display. Having fixed 10 sectional video images
in a switching cycle, we need to have synchronization of
video frames accordingly. Although a master sync
information is available while starting a switching cycle
each and every frame would have synchronization
information such that no ambiguity arises in routing the
video frames. Always first few pulses in the Vsync are used
to induct these information. Conventional PCM encoding
and decoding techniques are used to extract the sync pulses
needed to generate the staircase wave for driving the
electro-optic lens. The coding scheme used would take care
of these events to take place.
The autostereoscopic imaging system sends high
resolution video image of complex color picture pattern
captured from four angular views of the object processed
through the lenticular lens screen. Therefore, this video
cannot be launched as in conventional manner with usual
packets of video images. As discussed, segmentation
process is first made and then packets are generated. This
special refined arrangement is needed as to arrange the
packets for the complex video of complex color picture
pattern.
Coding scheme and packets generation for video
frames are adapted to this method of transmission.
A. Scalable H.264/AVC video coding scheme (SVC)


It has been shown that the scalable extension of
H.264/AVC provides spatial, temporal and quality
scalability comparable to the normal H.264/AVC [9, 10].
Nevertheless, losses and errors in communication channels
require precautions and error concealment methods for SVC
[11]. Improving further the error concealment with motion
vectors for SVC another scheme has been reported recently
[12]. This method applicable for H.264/AVC uses
hierarchical prediction structure for error concealment. They
calculate the location and motion vectors of the lost blocks
by assuming that the reference blocks used are similar to the
block to be concealed. In this scheme, when a slice was lost
and the second slice of the picture was received correctly,
the second slice was shown to contain the neighboring
macro blocks of the lost macro blocks. They performed tests
by considering different categories of picture and compared
Y-PSNR values and losses and proved the improvement in
error concealment.
In the three stereo reproduction schemes used in internet
TV of this project we employ scalable H.264/AVC video
coding scheme [12] for error concealment in efficient
transmission of respective images. The video images used in
that system was the standard video frames obtained from
standard video cameras. However, the video images
involved in the three stereo image generation and
reproduction schemes are differing from the standard videos


to a slight extent especially in synchronization aspects. The


synchronization overheads are incorporated at the beginning and at the end of video frames so as to take care of the additional
Sync insertion in transmission and recovery during
reception. Therefore, the scalable H.264/AVC video coding
scheme is suitably modified to accommodate these features
and transmission and reception are carried as on
conventional manner.
VI. DISCUSSIONS AND CONCLUSION
The transmission of video images and
reconstruction of images in the receivers for 3D vision in
television monitor and in the screen of computer are
different as discussed. Television monitor is solely
responsible for viewing video images transmitted by TV
transmitters and the receivers are designed to receive the
video images as per the set standards. On the other hand the
computer is being used for multiple purposes and viewing
video images are performed through downloading them
from website. Therefore, the methodologies involved in
video image transmission and reception are different for
internet TV. Internet TV relies on the methods of coding
schemes employed and the transmission of video images in
packets and their reconstruction in images. The scalable
H.264/AVC video coding scheme with modifications as to
adapt to three different techniques and standards would
provide reasonably better visualization of 3D images.
The extra fittings made on the flat panel screen of the computer monitor are of a removable type; they are fixed onto the screen only when video is viewed for 3D vision. While the computer is used for other purposes, the optical fittings have to be removed. The constraints involved in the fittings of the bi-circular polarization technique, which uses a stack of an electro-optic polarizer and a quarter wave plate, and those of the vari-focal technique, which employs an electro-optic polarizer and a supporting lens, are more or less similar, although the interspacing of the lenses makes the vari-focal system space consuming and a little more complicated. Even so, the display schemes of these two systems can be handled easily for realistic 3D vision. On the other hand, fixing the lenticular lens screen on the flat panel monitor screen requires great care in the alignment procedure every time, so as to avoid degradation of the 3D vision produced. In these circumstances one could prefer to use a well-prepared monitor fitted with a lenticular lens as a standby, to be switched on whenever internet TV is watched. Like the manual fine tuning employed in certain TV systems, a micro-movement facility for adjusting the lenticular screen with respect to the displayed complex picture pattern is also provided.
The complexity involved in the complex picture pattern of the autostereoscopic method can be resolved further by additional spatial division of the images. There are four cameras producing the line images corresponding to the set angle of view. These line images have high-density image content, and the density can be reduced to 1/4th by extracting the line pattern of each of the four camera images


and formulating four separate complex picture patterns. This, therefore, calls for transmitting four complex picture patterns in place of one. However, the transmission of very high density complex video signals can then be handled smoothly, and faithful reproduction of the images with improved clarity can be achieved.
We used only four cameras to pick up the multiple angle views of the object being televised, so as to produce three flipped views offering the viewer a feeling of parallax. Adding more cameras to capture further angle views would enhance the parallax effect at the cost of an increase in the resolution of the complex colour picture pattern, requiring a larger number of spatially divided images and their transmission. This results in a trade-off between quality and complexity in the transmission of images on the web.
Compared to conventional video, the 3D schemes described here transmit a larger number of video images. The bi-circular polarization scheme transmits successively the left-eye view image and the right-eye view image captured by a stereo pair of cameras. The vari-focal scheme, producing an illusion of a volumetric display, transmits ten video images successively. In the autostereoscopic scheme, the complex picture pattern captured by the high resolution camera, bearing the line images of the four video cameras, is spatially divided into 12 video images by the process of segmentation. The video images undergo scalable video coding so as to transmit and receive them with minimum error and loss. We intend to work further on a dynamic algorithm for scalable video coding, adaptable to each of the three schemes, so as to achieve better error concealment and more reliable 3D image reconstruction.
Of the three techniques considered here, bi-circular polarization has the simplest display system but requires the user to wear bi-circular polarizer analyzers over the eyes and offers a restricted parallax effect. The other two techniques provide an improved parallax effect and do not require the user to wear any special glasses, but they call for relatively complex display schemes.


ACKNOWLEDGMENT
The authors express their sincere thanks to the Rector of the University for granting financial support for the project.

REFERENCES
[1] N. Nithiyanandam and K. P. Rajappan, "Compatible 3-D TV", Applied Optics, Vol 16, pp 2042, 1977.
[2] Pieter J. H. Seuntiens, "Visual Experience of 3D TV", Eindhoven University Press, pp 1-135, 2006.
[3] K. Balasubramanian, "On the realization of constraint-free stereo television", IEEE Trans. on Cons. Electronics, Vol 50, No 3, pp 895-902, Aug 2004.
[4] K. Balasubramanian, S. Gunasekaran, N. Nithiyanandam and K. P. Rajappan, "On the realization of 3-D color TV through achromatic analyzers", Zeitschrift elektr. Inform.-u. Energietechnik (Germany), Leipzig 13, No 4, pp 329-338, 1983.
[5] K. Balasubramanian and A. Cellatoglu, "Simplified Video Transmission of Stereo Images for 3D Image Reconstruction in TV", IEEE Trans. on Cons. Electronics, Vol 54, No 2, pp 307-315, May 2008.
[6] K. Balasubramanian, S. Gunasekaran, N. Nithiyanandam and K. P. Rajappan, "On the merits of bi-circular polarization for a stereo color TV", IEEE Trans. on Cons. Electronics, Vol CE-28, No 4, pp 638-651, Nov 1982.
[7] A. Cellatoglu, S. Fabri and A. M. Kondoz, "Use of Prioritised Object Video Coding for the Provision of Multiparty Video Communications in Error Prone Environments", IEEE Vehicular Technology Conference (VTC), Amsterdam, Netherlands, Oct 1999.
[8] B. Kamolrat, W. A. C. Fernando, M. Mrak and A. Kondoz, "Joint Source and Channel Coding for 3D Video with Depth Image-Based Rendering", IEEE Trans. on Cons. Electronics, Vol 54, No 2, pp 887-894, May 2008.
[9] T. Wiegand, G. J. Sullivan, G. Bjontegaard and A. Luthra, "Overview of the H.264/AVC video coding standard", IEEE Trans. on Circuits Sys. Video Technology, Vol 13, No 7, pp 560-567, 2003.
[10] H. Schwarz, D. Marpe and T. Wiegand, "Overview of the scalable video coding extension of the H.264/AVC standard", IEEE Trans. on Circuits Sys. Video Technology, Vol 17, No 9, pp 1103-1120, 2007.
[11] Y. Chen, K. Xie, F. Zhang, P. Pandit and J. Boyce, "Frame loss error concealment for SVC", J Zhejiang Univ Sci A, Vol 7, No 5, pp 677-683, 2006.
[12] Z. Dereboylu, S. Worrall, A. Cellatoglu and A. Kondoz, "Slice loss concealment using bridge pictures in scalable video coding", Electronics Letters, Vol 46, Issue 12, pp 842-857, Jun 2010.


An Efficient Method for Video Shot Segmentation and Classification


Vijayakumar.V
Research Scholar
Bharathiar University
Coimbatore, Tamil Nadu, India

Abstract - Video shot segmentation and classification are fundamental steps for efficiently retrieving, browsing and mining large amounts of video data. Shot segmentation, or video shot boundary detection, together with key frame extraction plays a vital role in video retrieval and video indexing, and it is performed in all video data mining schemes. Video shot segmentation, or shot change detection, involves identifying the frame(s) where a transition takes place from one shot to another. The major techniques that have been used for shot segmentation are pixel differences, edge differences and histogram comparison. This paper presents a video segmentation technique based on edge features. The key frames are extracted using the edge change ratio and edge-based contrast features. The extracted key frames contain low-level features such as color and edges, which can be used for deriving domain-specific high-level applications. Specifically, the features are used to detect various events in real-time games by segmenting a sports game into groups such as plays and breaks. This paper also proposes a shot classification algorithm based on dominant colors. The dominant color in a sports video is essentially the color of the ground, and its size and distribution offer significant cues for video classification. The dominant color ratio is applied to the extracted key frames to classify shots into play starts, base-line, net approach and audience shots in a tennis sports video. Experiments show that the proposed shot segmentation algorithm is effective for both kinds of shot transition, and the shot classification algorithm classifies more than 85% of the shots correctly.
Keywords - Video Hierarchy, Edge Features, Key Frame
Extraction

I. INTRODUCTION
In recent years, there has been increasing research interest in video analysis due to its tremendous commercial potential in areas such as video indexing, retrieval, abstraction and mining. In video analysis, shots are often the basic processing unit and give potential semantic hints. Video data can be decomposed into video shots, which are sequences of frames taken contiguously by a single camera. Videos can be represented by a hierarchical structure (Fig. 1), in which shots are the basic units for constructing high-level semantic scenes. Generally, video data consists of three basic units: frame, shot and scene. A scene is a

collection of one or more shots focusing on one or more objects of interest. This implies that if n shots are taken from different cameras at different angles but describe the same object or event, then the collection of these n shots forms a scene. A shot is defined as a sequence of frames taken by a single camera with no major changes in visual content, representing a continuous action in time and space. A frame is considered a still image. A video consists of a large number of frame sequences; for example, a five-minute video shot may contain over 10,000 frames. Well-chosen key frames can help video selection and make the listing more visually appealing. Breaking a given video stream into a set of such basic units is called video scene analysis and segmentation.
Video segmentation is one of the most challenging tasks in video mining, as it requires a semantic understanding of the video to some extent [1]. Shot change detection algorithms reduce the large dimensionality of the video domain by extracting a small number of features from each video frame. These are extracted either from the whole frame or from a subset of it, called a region of interest. Shot segmentation can be used for intelligent video indexing and applied to the browsing and efficient classification of video sequences in video databases.
Considerable effort has already gone into detecting shots, and current research aims to make this segmentation process efficient in both time and space. This paper introduces a novel method for video segmentation using edge-based features; classification is done using the common feature of the dominant color ratio.

Figure 1. Hierarchical structure of video database

The second section presents the background of video segmentation and classification. The proposed edge-based shot segmentation and classification process is discussed in
the third section. The experiments and results are discussed in the fourth section, and the fifth section concludes the paper.
II. BACKGROUND
Generally, the video segmentation task involves shot boundary detection and key frame extraction. It provides a summarization of a video by selecting a small number of frames (key frames) able to capture the visual and semantic properties of a video or video shot. A shot boundary is the junction between two shots. The detection of shot boundaries is a basis for video abstraction and for video analysis such as indexing, browsing, searching, summarization, retrieval and mining of video based on its content. A key frame is a frame which contains the most information and is the most distinct from the other frames within a shot. Key frame extraction plays an important role in video data mining processes such as extracting high-level concepts and events, video indexing and browsing, video editing, summarization, classification and event detection.
Based on different semantic properties, video segmentation falls into two categories: temporal and object-based video segmentation. Temporal segmentation is the first and essential step for visual data processing, and aims to find the locations of shot boundaries. Its goal is to divide the video stream into a set of meaningful segments that are used as basic elements for indexing and classification. Object-based video segmentation extracts objects for content-based analysis and provides a structured representation for many object-oriented video applications. Current object-based video segmentation methods can be classified into three types: segmentation with spatial priority, segmentation with temporal priority, and joint spatial and temporal segmentation [7].
Shot boundary detection refers to detecting the transitions between successive shots. These transitions can be roughly grouped into hard cuts and soft cuts. A hard cut (abrupt transition) occurs when one clip ends and the next one begins with no overlap; it usually produces a great change within a single frame. Soft cuts, or gradual transitions, can be further classified as fades, dissolves and wipes. A fade is a slow change of luminance in which a frame gradually darkens and disappears (fade-out), or the reverse (fade-in); that is, it is a transition with a gradual diminishing (fade-out) or heightening (fade-in) of visual intensity. A dissolve occurs when two shots overlap for a period of time in which the first shot fades out while the second shot fades in: the images of one video are superimposed on the images of another, with the underlying frames getting dimmer and those on top becoming brighter. A wipe is actually a set of shot change techniques in which the appearing and disappearing shots coexist in different spatial regions of the intermediate video frames, and the region occupied by the former grows until it entirely replaces the latter.
A shot segmentation algorithm needs to detect discontinuities in the continuous video sequence, and the measure of difference between consecutive frames is key to recognizing boundary frames. The main idea of shot boundary detection techniques is that if the difference between two consecutive frames is larger than a certain
threshold value, a shot boundary is declared between the two corresponding frames: the distance between the feature vectors of adjacent frames is computed and compared against a threshold. An abrupt cut is usually detected when a certain difference measure between consecutive frames exceeds a threshold. A gradual transition spreads across a number of frames and is therefore more difficult to detect. The domain of video shot segmentation falls into two categories according to the features used for processing: uncompressed and compressed. Algorithms in the uncompressed domain utilise information directly from the spatial video domain [2][4][11]. The most commonly used techniques for shot transition detection in this domain are based on pixel differences, histogram and statistical differences, edge differences and motion-based approaches. These techniques are computationally demanding and time consuming, and thus inferior to approaches based on compressed-domain features.
In pixel-difference based approaches, the easiest way to compare two frames is to compare their colorimetric pixel values one by one. The advantage of these approaches is the simplicity of their implementation; however, they are sensitive to the motion of scene objects and to zooms and movements of the camera, so many false alarms are triggered. Static thresholding is the most basic decision method: a metric expressing the similarity or dissimilarity of the features computed on adjacent frames is compared against a fixed threshold. This only performs well if the video content exhibits similar characteristics over time, and only if the threshold is manually adjusted for each video. Adaptive thresholding is the obvious solution to the problems of the static threshold: the threshold is varied depending on a statistic (e.g. the average) of the feature difference metrics within a temporal window. In another technique, instead of comparing two consecutive frames, each frame is compared with a background frame, defined as a frame containing only non-moving components; a background frame can be a frame of the stationary components in the image. A minimal sketch of such an adaptive-threshold decision is given below.
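As a toy illustration of the adaptive-threshold decision described above (not the method proposed in this paper), the following Python sketch flags a cut whenever the grey-level histogram difference of consecutive frames exceeds the mean plus k standard deviations of the differences in a sliding window; the window length, the factor k and all function names are our own illustrative assumptions.

import cv2
import numpy as np

def hist_diff(frame_a, frame_b, bins=64):
    # L1 distance between normalised grey-level histograms of two frames
    ha = cv2.calcHist([cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)], [0], None, [bins], [0, 256])
    hb = cv2.calcHist([cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)], [0], None, [bins], [0, 256])
    return float(np.abs(ha / ha.sum() - hb / hb.sum()).sum())

def detect_cuts(video_path, window=15, k=3.0):
    # Flag frame n as a cut when diff(n-1, n) > mean + k*std of the recent diffs.
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    diffs, cuts, idx = [], [], 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        d = hist_diff(prev, frame)
        recent = diffs[-window:]
        if len(recent) >= 5 and d > np.mean(recent) + k * np.std(recent):
            cuts.append(idx)
        diffs.append(d)
        prev = frame
    cap.release()
    return cuts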
Gentao Liu, Xiangming Wen, Wei Zheng and Peizhou He proposed a framework to detect shot boundaries and extract the key frames of a shot: first, the scale invariant feature transform was adopted to compute the visual content discontinuity values, and then a local double-threshold method was used to detect the shot boundaries [6].
The trained classifier is a radically different method for detecting shot changes: the problem is formulated as a classification task in which frames are classified (through their corresponding features) into two classes, namely shot change and no shot change, and a classifier (e.g. a neural network) is trained to distinguish between the two classes. Histogram-based methods use the statistics of the luminance and color [3]. The advantage of histogram-based shot change detection is that it is quite discriminant, easy to compute, and mostly insensitive to translational, rotational and zooming camera motions. For these reasons, it is widely used.
TABLE I. DIFFERENT TYPES OF VIDEO SEGMENTATION METHODS
Features compared: Color Histogram Differences; Standard Deviation of pixel intensities; Edge Change Ratio; Edge Based Contrast.
Types of edit covered: hard cuts, fades, dissolves.
The weakness of histogram-based shot boundary detection is that it does not incorporate the spatial distribution information of the various colors, and hence it fails in cases with similar histograms but different structures. Within one shot, due to motion and the appearance or disappearance of objects, the color histogram may not be consistent. The color-histogram-based shot boundary detection algorithm is nevertheless one of the most reliable variants of histogram-based detection. Its basic idea is that the color content does not change rapidly within a shot but does across shots. Thus, hard cuts and other short-lasting transitions can be detected as single peaks in the time series of the differences between the color histograms of contiguous frames, or of frames a certain distance k apart. Since histograms do not carry spatial information, they are expected to be robust to object and camera motion. However, histograms are generally sensitive to lighting changes [3].
Ahmet Ekin [13] proposed a shot boundary detection algorithm based on the dominant color region, which is effective for soccer video. Moreover, it is robust to variations in the dominant color: the color of the grass field may vary from stadium to stadium, and also as a function of the time of day within the same stadium. The pixel-based and histogram-based methods completely lose the location information of pixels. Table I shows the various shot segmentation methods [5] and which type of edit is detected by which algorithm.
III. PROPOSED FRAMEWORK

The proposed framework consists of two stages. In the first stage, the input video database is segmented into shots and key frames are then extracted based on edge features. The edge change ratio is used to detect hard cuts. After the hard cuts are detected in a given video, the video is
divided into several shots, and the first frame of each shot is considered as the key frame of that shot. The edge-based contrast method is used to identify fades (in and out) and dissolves: the first frame of a fade-out and the last frame of a fade-in are considered key frames, while both the first and last frames of a dissolve are regarded as key frames. In the second stage, the dominant color ratio property is used as the descriptor to extract some important basic events for classification. The dominant color ratio classifies the key frames into different categories of shots, such as base-line, net approach and audience crowd. The system flow of this framework is shown in Fig. 2.
A. Video Shot Segmentation
The video shot segmentation algorithm consists of two components: (1) computing the edge change ratio and (2) computing the edge-based contrast. The representative key frames are extracted per sub-shot using these edge features.
1) Edge Change Ratio: The edge change ratio (ECR) attempts to compare the actual content of two frames. It transforms both frames into edge pictures, i.e. it extracts the probable outlines of the objects within the pictures. It then compares these edge pictures using dilation to compute a probability that the second frame contains the same objects as the first frame. The ECR is one of the best performing algorithms for scoring (each pair of consecutive frames of a digital video is given a score that represents the similarity or dissimilarity between the two frames). It reacts very sensitively to hard cuts and can detect many soft cuts by nature.
Let σn be the number of edge pixels in frame n, and let Xn_in and Xn-1_out be the numbers of entering and exiting edge pixels in frames n and n-1, respectively. Then

ECRn = max( Xn_in / σn , Xn-1_out / σn-1 )

gives the edge change ratio between frames n-1 and n. It ranges from 0 to 1.

Figure 2. Framework for Video Shot Segmentation and Classification


Figure 3. ECR patterns for cuts, fades and dissolves

The edges are calculated by the Canny edge detector. In order to make the measure robust against small object motions, edge pixels in one image which have edge pixels nearby in the other image (e.g. within a distance of 6 pixels) are not regarded as entering or exiting edge pixels. Moreover, before calculation of the ECR, a global motion compensation based on the Hausdorff distance (the maximum distance of a set to the nearest point in the other set) is performed.
According to Zabih et al. [15], hard cuts, fades, dissolves and wipes exhibit characteristic patterns in the ECR time series, as shown in Fig. 3. Hard cuts are recognized as isolated peaks; during fade-ins/fade-outs the number of incoming/outgoing edges predominates; and during a dissolve, the outgoing edges of the first shot initially protrude before the incoming edges of the second shot start to dominate the second half of the dissolve. They observed that temporal visual discontinuity usually comes along with structural discontinuity, i.e. the edges of objects in the last frame before a hard cut usually cannot be found in the first frame after the hard cut, and the edges of objects in the first frame after the hard cut in turn usually cannot be found in the last frame before it.
Edge Change Ratio (ECR) Algorithm:
Step 1. Detect the edges in two contiguous frames fn and fn+1 using the Canny edge detector.
Step 2. Dilate the edge frames and invert them into black-and-white frames.

Figure 4. Entering and exiting edges in frames


Step 3. Count the numbers of edge pixels σn and σn+1 in frames fn and fn+1.
Step 4. Determine the entering and exiting edge pixels E_in(n+1) and E_out(n). The entering edge pixels E_in(n+1) are the edge pixels of fn+1 which are farther than a fixed distance r away from the closest edge pixel in fn. Similarly, the exiting edge pixels E_out(n) are the edge pixels of fn which are farther than r away from the closest edge pixel in fn+1 (Fig. 4).
Step 5. Compute the edge change ratio ECR(n, n+1) between frames fn and fn+1:

ECR(n, n+1) = max( E_in(n+1) / σn+1 , E_out(n) / σn ),   ECR(n, n+1) in [0, 1]

Step 6. If the edge change ratio is larger than a predefined threshold, the two frames are considered to form a cut.
Step 7. Repeat Steps 1-6 until the end of the video sequence.
After going through the whole video, the hard cuts have been detected and the video can be broken into shots; the first frame of every shot is then extracted as a key frame for the further classification process.
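A minimal sketch of the ECR computation in Steps 1-6, assuming OpenCV's Canny detector and a square dilation kernel of radius r, is given below; the Canny thresholds, the radius r and the cut threshold are illustrative assumptions rather than values prescribed by the paper.

import cv2
import numpy as np

def ecr(frame_n, frame_n1, r=6):
    # Step 1: edge maps of the two contiguous frames
    e_n = cv2.Canny(cv2.cvtColor(frame_n, cv2.COLOR_BGR2GRAY), 100, 200) > 0
    e_n1 = cv2.Canny(cv2.cvtColor(frame_n1, cv2.COLOR_BGR2GRAY), 100, 200) > 0
    sigma_n, sigma_n1 = int(e_n.sum()), int(e_n1.sum())
    if sigma_n == 0 or sigma_n1 == 0:
        return 0.0
    # Step 2: dilate each edge map so that edges within distance r are covered
    kernel = np.ones((2 * r + 1, 2 * r + 1), np.uint8)
    dil_n = cv2.dilate(e_n.astype(np.uint8), kernel) > 0
    dil_n1 = cv2.dilate(e_n1.astype(np.uint8), kernel) > 0
    # Steps 3-4: entering edge pixels of f(n+1) and exiting edge pixels of f(n)
    entering = int(np.logical_and(e_n1, ~dil_n).sum())
    exiting = int(np.logical_and(e_n, ~dil_n1).sum())
    # Step 5: ECR(n, n+1) = max(E_in/sigma(n+1), E_out/sigma(n)), a value in [0, 1]
    return max(entering / sigma_n1, exiting / sigma_n)

# Step 6: declare a hard cut when ecr(fn, fn1) exceeds a predefined threshold, e.g. 0.5.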
2) Edge Based Contrast: During a fade in, object edges
or contours gradually show up, while during a fade out
object edges gradually disappear. During a dissolve, object
edges gradually disappear and new object edges gradually
show up. As a consequence, the perceived contrast
decreases toward the center of a dissolve. Therefore the
edge-based contrast (EC) feature is employed to detect the
fades and dissolves in the video. Like the edge change ratio,
EC is also a measure of the change of contours/edges.

Figure 5: Intensity scaling function applied to produce dissolves


Dissolves are produced by fading out the outgoing shot and fading in the incoming shot. A scaling factor applied to the frame intensities is used to produce the dissolve and fade effects. Two types of dissolves are common: the cross-dissolve and the additive dissolve. Their respective scaling functions for the incoming and outgoing shots are shown in Figure 5. Independent of the type of scaling function, a spectator observes a loss of contrast and sharpness of the images during a dissolve, which generally reaches its maximum in the middle of the dissolve. Hence, the basic idea of the edge-based contrast feature defined below is to capture and emphasize this loss in contrast and/or sharpness to enable dissolve detection. The edge-based contrast feature captures and amplifies the relation between stronger and weaker edges.
Lienhart [12] tried to avoid motion sensitivity by analyzing the change in the average edge strength over time, which can be calculated on a per-frame basis.
Edge Based Contrast (EBC) Algorithm:
Step 1. Detect the edge map K(x, y, t) of frame ft using the Canny edge detector [4].
Step 2. Assign a lower threshold value θw for weak edges and a higher threshold value θs for strong edges.
Step 3. Sum up the strengths of the weak and strong edge points:

w(K) = Σx,y wK(x, y)   and   s(K) = Σx,y sK(x, y)

Step 4. Compute the edge-based contrast (EC):

EC(K) = 1 + ( s(K) - w(K) - 1 ) / ( s(K) + w(K) + 1 ),   EC(K) in [0, 2]
Step 5. The EC possesses the following properties:
- If an image lacks strong edges, the EC is 0. Examples are night scenes of little contrast and monochrome frames.
- If the number of weak edges clearly exceeds the number of strong edges, the EC lies between 0 and 1.
- If the number of weak edges is roughly equivalent to the number of strong edges, the EC is about 1.
- If the number of strong edges clearly exceeds the number of weak edges, the EC lies between 1 and 2.
- If the image contains only strong edges, the EC approaches 2.
Note that the EC is only slightly affected by slow local or global motion. However, rapid motion may influence it in a manner similar to that of a dissolve, since edges get blurred. In some video genres, such as commercials or music clips of love songs, dissolves may occur in rapid succession, and it may therefore happen that their detected boundaries overlap slightly.
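A minimal sketch of the edge-based contrast of a single frame is shown below; it assumes that the Sobel gradient magnitude is used as the edge-strength map and that the weak/strong thresholds are free parameters, since the paper does not fix these details.

import cv2
import numpy as np

def edge_based_contrast(frame, theta_w=40.0, theta_s=120.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy)                           # edge strength map
    w = float(mag[(mag >= theta_w) & (mag < theta_s)].sum())   # weak edge strength w(K)
    s = float(mag[mag >= theta_s].sum())                       # strong edge strength s(K)
    return 1.0 + (s - w - 1.0) / (s + w + 1.0)                 # EC(K) in [0, 2]

A fade or dissolve then appears as a dip of the EC time series towards the centre of the transition, which is the cue exploited for soft-cut detection.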


B. Video Shot Classification
Video shot classification usually offers interesting information about the semantics of shots and provides useful cues; it is undoubtedly one of the critical techniques for analysing and retrieving digital videos. A number of shot classification algorithms for videos have been proposed recently [12][13][14][15]. All these methods, however, rely on proper thresholds or accurate shot information.
In sports videos, shots are generally classified into playing shots and non-playing shots, and the playing shots are kept for further analysis, since they contain most of the semantic events. We classify the shots as follows: play starts, base-line, net approach and audience crowd shots in a tennis sports video, based on the color feature.
1) Dominant Color Ratio: Color is a typical and useful low-level feature for video processing, and it carries plenty of semantic information. Generally, there are several basic colors in a sports video, such as the color of the grass, the background color of the audience, the color of the player uniforms and so on. Each implies some semantic information and is therefore called a semantic color. It is observed that the ratio of the number of pixels having a semantic color to the number of pixels in the whole frame varies greatly with different classes of shot. For example, the field view has many pixels with the color of grass, and a close-up has many pixels with the color of a player uniform. This information is helpful for classifying shots into various categories (classes), and we use a low-level semantic feature, the dominant color ratio (DCR), to classify the shots into various classes [8][9].
The playing field can be described by one distinct dominant color. This dominant color demonstrates variations from one sport to another, from one stadium to another, and sometimes within one stadium during a sporting event. The dominant color region detection algorithm [12][13] is robust in both outdoor and indoor scenes, for play field region detection and for the detection of some events that involve variations in color. The dominant color in a tennis video is essentially the color of the ground, and its size and distribution offer significant cues for shot classification. To obtain the dominant color ratio, the number of dominant-colored pixels in the frame is counted and divided by the total number of pixels in that frame.
The dominant color ratio Gi is calculated for each frame as

Gi = |Pd| / |P|

where P is the set of all pixels in the frame and Pd is the set of dominant-color pixels. The absolute difference between two frames in their dominant color ratio is

Gd(i, k) = | Gi - Gi-k |

where Gi and Gi-k are the dominant color ratios of the i-th and (i-k)-th frames, respectively. The algorithm automatically learns the dominant color statistics of the field independently of the sports type and updates the color statistics throughout a sporting event.
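The following sketch computes Gi and Gd(i, k) for key frames, under the assumption that the dominant (ground) colour has been learned beforehand as a hue interval in HSV space; the hue and saturation bounds shown are illustrative placeholders, not values from the paper.

import cv2

def dominant_color_ratio(frame, hue_lo=35, hue_hi=75, min_sat=40):
    # G_i = |P_d| / |P| : fraction of pixels whose hue falls in the learned interval
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s = hsv[:, :, 0], hsv[:, :, 1]
    dominant = (h >= hue_lo) & (h <= hue_hi) & (s >= min_sat)
    return float(dominant.sum()) / float(h.size)

def dcr_difference(frame_i, frame_i_minus_k):
    # G_d(i, k) = |G_i - G_(i-k)|
    return abs(dominant_color_ratio(frame_i) - dominant_color_ratio(frame_i_minus_k))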
2) Classification through DCR in Tennis Sports Video: The proposed shot type classification algorithm detects different types of shots based on color, with the necessary threshold values. Immediately after shot boundary and key frame detection, the algorithm classifies tennis sports
video shots into different classes such as play starts, base-line, net approach and audience clap shots.
In sports videos, the playfield is one of the main parts appearing in typical scenes: the grass of football videos, the red soil of tennis videos, and the grass/soil of baseball videos are good examples. The major issue in video shot classification is to design a mechanism that can classify pixels into grass, soil and other colors. The segmentation of the playfield is essential because it enables higher-level content analysis for sports videos.
Generally, four types of shots (audience crowd and play-start shots, the latter comprising base-line and net-approach shots) are directly related to the in-play events of a tennis game. Classification is performed for each key frame. The dominant color ratio measures the amount of dominant-color pixels in a frame and is used to classify the main shots in a tennis video. The playing field contains the dominant color, whereas audience crowds, substitutes, audience activities and coach close-ups contain no dominant color. The first challenge is to decide the dominant color hue index, which is typically different from one stadium to another.
A set of thresholds is used to distinguish the DC-ratio of the different shot types. The thresholds are determined and updated using the color histogram to minimize the error due to misclassification.
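To make the classification step concrete, the sketch below maps a key frame's dominant color ratio to a shot class with two thresholds; the numeric thresholds and the exact mapping of DCR ranges to the shot types are our own illustrative assumptions, while in the proposed system the thresholds are derived and updated from the color histogram as described above.

def classify_shot(dcr, t_field=0.50, t_partial=0.20):
    # High DCR: the court dominates the frame (assumed play start / base-line view).
    if dcr >= t_field:
        return "play start / base-line"
    # Medium DCR: the court is only partially visible (assumed net approach).
    if dcr >= t_partial:
        return "net approach"
    # Little or no dominant colour: audience crowd shot.
    return "audience crowd"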
IV. EXPERIMENTS
A. Video Shot Segmentation


To test our method, two different TV comedy video clips from various sources are used. We compared the edge change ratio with the color histogram to analyse the performance of the shot segmentation. The cuts detected by the ECR are more evident than those detected by the color histogram. The number of cuts detected by each scheme under various threshold values is plotted in Fig. 6. Two parameters, recall and precision, are commonly used to evaluate the effectiveness of IR (Information Retrieval) techniques [10].


Figure 6. Comparison of Color Histogram and Edge Change Ratio

The same measures can also be used to analyse the video classification.
Recall (V) is the ratio of the number of shot changes detected correctly to the actual number of shot changes in a given video clip:

V = C / (C + M)

Precision (P) is the ratio of the number of shot changes detected correctly to the total number of shot changes detected (correctly or incorrectly):

P = C / (C + F)

F1 is a combined measure that results in a high value if, and only if, both precision and recall result in high values:

F1 = 2 * P * V / (P + V)

The symbols stand for: C, the number of correctly detected transitions ("correct hits"); M, the number of undetected transitions ("missed hits"); and F, the number of falsely detected transitions ("false hits"). All of these measures deliver values between 0 and 1, and the basic rule is: the higher the value, the better the algorithm performs. Table II gives the number of transitions, and the precision and recall, for each type of shot transition detection. According to the results obtained, the algorithm is characterised by high precision and recall, which shows its good performance for detecting shot transitions.
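For completeness, the three measures can be computed as follows from the counts C, M and F defined above (the function name is ours):

def evaluate(correct_hits, missed_hits, false_hits):
    recall = correct_hits / (correct_hits + missed_hits)       # V = C / (C + M)
    precision = correct_hits / (correct_hits + false_hits)     # P = C / (C + F)
    f1 = 2 * precision * recall / (precision + recall)         # combined measure
    return recall, precision, f1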

TABLE II. THE RESULTS OF SHOT TRANSITION DETECTION

S.No | Video          | Total Number of Transitions | Detected Transitions (C) | Missed Alarms (M) | False Alarms (F) | Recall (V) | Precision (P) | F1
1    | Comedy Video-1 | 25                          | 22                       |                   |                  | 0.916      | 0.956         | 0.96
2    | Comedy Video-2 | 18                          | 15                       |                   |                  | 0.88       | 0.937         | 0.91


TABLE III. THE RESULTS OF SHOT CLASSIFICATION

S.No | Video          | Total number of Key frames | Correct Detection | Missed Alarms | False Alarms | Recall | Precision | F1
1    | Tennis Video-1 | 16                         | 12                |               |              | 0.857  | 0.857     | 0.86
2    | Tennis Video-2 | 14                         | 11                |               |              | 0.917  | 0.846     | 0.88

B. Video Shot Classification
To test the video shot classification, we used two different tennis videos. All the video clips in our test set are digitized at 30 frames per second. Table III summarizes the results of the evaluation of shot classification. The test is based on counting the number of both correct and wrong classifications.
The recall is reduced by some wrong classifications in which key frames were mistaken for play-field key frames; this also adversely affected the precision in the classification of play shots.
V. CONCLUSION
This paper has proposed a novel method to segment and classify the various events of a tennis sports video based on edges and color. The segmentation results could facilitate further analysis of various videos. It is much less expensive and more reliable than existing techniques. The principal challenge is distinguishing among transition effects (e.g. fades), object motion and camera motion using low-level frame features. Further research includes constructing more match analysis methods based on the basic semantic clues of different sports videos.
ACKNOWLEDGMENT
The authors would like to thank the Management, Director, Dean and Principal of Sri Ramakrishna Engineering College for providing laboratory resources and valuable support.

REFERENCES
[1] Jung-Hwan Oh and Babitha Bandi, "Multimedia Data Mining Framework for Raw Video Sequences", In Proceedings of MDM/KDD'2002, pp. 1-10.
[2] Jun Li, Youdong Ding, Yunyu Shi and Wei Li, "A Divide-And-Rule Scheme For Shot Boundary Detection Based on SIFT", JDCTA, 2010, pp. 202-214.
[3] Jian Zhou and Xiao-Ping Zhang, "Video Shot Boundary Detection Using Independent Component Analysis", In Proceedings of ICASSP 2005, vol. II, pp. 541-545.
[4] Alan F. Smeaton, Paul Over and Aiden R. Doherty, "Video Shot Boundary Detection: Seven Years of TRECVid Activity", Computer Vision and Image Understanding, vol. 114, no. 4, 2010, pp. 411-418.
[5] Rainer Lienhart, "Comparison of automatic shot boundary detection algorithms", in Proceedings of SPIE, 1998.
[6] Gentao Liu, Xiangming Wen, Wei Zheng and Peizhou He, "Shot Boundary Detection and Keyframe Extraction based on Scale Invariant Feature Transform", In Proceedings of the Eighth IEEE/ACIS International Conference on Computer and Information Science, 2009, pp. 1126-1131.
[7] Xiaomu Song and Guoliang Fan, "Joint Key-frame Extraction and Object-based Video Segmentation", In Proceedings of the IEEE Workshop on Motion and Video Computing (WACV/MOTION'05).
[8] Zhenxing Niu, Xinbo Gao, Dacheng Tao and Xuelong Li, "Semantic Video Shot Segmentation Based on Color Ratio Feature and SVM", In Proceedings of the International Conference on Cyberworlds 2008, pp. 157-163.
[9] Ahmet Ekin and A. Murat Tekalp, "Robust Dominant Color Region Detection And Color-Based Applications For Sports Video", In Proceedings of the 2003 International Conference on Image Processing, vol. 1, pp. I-21-24.
[10] W. B. Frakes and R. Baeza-Yates, Information Retrieval - Data Structures and Algorithms, Prentice Hall, Englewood Cliffs, 1992.
[11] Aissa Saoudi and Hassane Essafi, "Spatio-Temporal Video Slice Edges Analysis for Shot Transition Detection and Classification", World Academy of Science, Engineering and Technology 28, 2007.
[12] Ahmet Ekin and A. Murat Tekalp, "Shot Type Classification By Dominant Color For Sports Video Segmentation And Summarization", In Proceedings of ICASSP 2003, pp. III-173-177.
[13] Ahmet Ekin, "Automatic Soccer Video Analysis and Summarization", IEEE Transactions on Image Processing, July 2003, Vol. 12, No. 7, pp. 796-807.
[14] Shi Ping and Yu Xiao-Qing, "Goal event detection in soccer videos using multi-clues detection rules", In Proceedings of the International Conference on Management and Service Science, MASS '09, 2009, pp. 1-4.
[15] R. Zabih, J. Miller and K. Mai, "A feature-based algorithm for detecting and classifying scene breaks", Proc. ACM Multimedia 95, San Francisco, CA, pp. 189-200, Nov. 1995.


Part II
Proceedings of the Second International Conference on
Computer Applications 2012

ICCA 12

Volume 2


A Combinatorial Approach for Design of Fuzzy Based Intrusion Detection System


Vydeki Dharmar
Department of ECE
Easwari Engineering College
Chennai

R.S.Bhuvaneswaran
Ramanujan Computing Centre
Anna University Chennai
Chennai

Abstract:- Security threats exist in almost all the layers of a Mobile Ad hoc Network (MANET) due to inherent characteristics such as distributed control and the lack of infrastructure. This makes the design of Intrusion Detection Systems (IDS) for MANETs extremely hard. This paper proposes a fuzzy logic based intrusion detection approach that combines specification and anomaly techniques. The system is based on a Fuzzy Inference System (FIS) and detects routing layer attacks, specifically the black hole attack. The proposed system has been evaluated through simulation, and the preliminary results show that our scheme performs better for large-scale networks.
Keywords: IDS, Black Hole, Fuzzy

I. INTRODUCTION

A mobile ad hoc network (MANET) is a collection of mobile computers or devices that cooperatively communicate with each other without any pre-established infrastructure such as a centralized access point. Computing nodes (usually wireless) in an ad hoc network act as routers to deliver messages between nodes that are not within their wireless communication range. Because of this unique capability, mobile ad hoc networks are envisioned in many critical applications (e.g., in battlefields). Therefore, these critical ad hoc networks should be sufficiently protected to achieve confidentiality, integrity and availability. The dynamic and cooperative nature of MANETs presents substantial challenges in securing these networks.
A MANET is more vulnerable to attacks than a wired network. These vulnerabilities are inherent in the MANET structure and cannot be removed. Attack prevention measures, such as authentication and encryption, can be used as the first line of defense for reducing the possibility of attacks. However, these techniques are designed for a set of known attacks, and they are unlikely to prevent newer attacks that are designed to circumvent the existing security measures. For this reason, a second mechanism is needed to detect and respond to these newer attacks, i.e. intrusion detection.

This paper aims at combining two approaches to intrusion detection: anomaly-based and specification-based techniques. Anomaly detection [1] recognizes deviations from normalcy by building models of normal behaviour; however, it tends to produce a high rate of false alarms. Specification-based detection detects attacks using a set of constraints that define the correct operation of a protocol. The main advantage of specification-based methods is the capability to detect previously unknown attacks; however, the specifications are usually derived manually from the descriptions of the protocols. It is proposed to combine the above two approaches and design a hybrid Intrusion Detection System (IDS) in such a way that it reduces the false positives.
This paper is organized as follows: Section II provides an overview of IDS and its various types. A brief discussion of AODV and its vulnerabilities is provided in Section III. The routing layer attacks are described in Section IV. A brief overview of fuzzy logic control is provided in Section V. The design of the proposed system and its performance analysis are dealt with in detail in Section VI. Section VII briefs the future work and conclusion.
II. IDS: AN OVERVIEW
An intrusion-detection system (IDS) can be defined as
the tools, methods, and resources to help identify, assess,
and report unauthorized or unapproved network activity.
Based on the techniques used, IDS can be classified into
three main categories as follows:
i. Misuse Detection: In misuse detection [2],
decisions are made on the basis of knowledge of a
model of the intrusive process and what traces it
might leave on the observed system. Such a system
tries to detect intrusion irrespective of any
knowledge regarding the background traffic. There
are several approaches in the signature detection,
which differ in representation and matching
algorithms employed to detect the intrusion
patterns.
ii. Anomaly detection: This technique establishes a "normal activity profile" for a system; states deviating from the established profile by statistically significant amounts are identified as intrusion attempts. It flags observed activities that deviate abnormally from the recognized normal usage as anomalies. It must first be trained using
normal data before it can be released in an operative detection mode. The main advantage of this model is that it can detect unknown attacks. On the other hand, its disadvantage is a high false positive alarm rate when normal user profiles, operating systems or network behaviour vary widely from their normal behaviour.
iii. Specification-based detection: Specification-based detection defines a set of constraints that describe the correct operation of a protocol and monitors the execution of the protocol with respect to the defined constraints. This technique may provide the capability to detect previously unknown attacks while exhibiting a low false positive rate.
III. AODV: THE ROUTING PROTOCOL
The Ad hoc On-demand Distance Vector (AODV)
routing protocol [3] is a reactive and stateless protocol
designed for MANETs. It establishes routes only as
desired by a source node using the Route Discovery
Process (RDP). It uses route request (RREQ) and route
reply (RREP) packets to build a route to the destination.
Fig.1 illustrates the RDP.
Fig. 1. AODV Route Discovery Process: RREQ broadcast (a1, b1, c1) from node A via nodes B and C to node D, and RREP unicast back to node A.
Fig. 1 illustrates the flow of the RREQ and RREP messages in a scenario in which node A wants to find a route to node D (initially, nodes A, B, C and D do not have routes to each other). Node A broadcasts a RREQ message (a1), which reaches node B. Node B then rebroadcasts the request (b1). Node C receives the message and broadcasts it in turn (c1), and it arrives at the destination node D. Finally, node D unicasts the RREP message back to node A. When node A receives the RREP, a route is established. In the case where node A receives multiple RREP messages, it selects the RREP message with the largest destination sequence number value.
AODV is efficient and scalable in terms of network performance, but it allows attackers to easily advertise falsified route information, to redirect routes and to launch various kinds of attacks. In each AODV routing packet, some critical fields, such as the hop count, the sequence numbers of the source and destination, the IP headers, the IP addresses of the AODV source and destination, and the RREQ ID, are essential to correct protocol execution. Any misuse of these fields can cause AODV to malfunction.
Table 1 lists several vulnerable fields in AODV routing messages and the possible effects when they are tampered with.
Table 1. Vulnerable fields in AODV

Field                                          | Modifications
RREQ ID                                        | Increase it to create a new request
Hop Count                                      | If the sequence number is the same, decrease it to update other nodes' forwarding tables, or increase it to invalidate the update
IP headers and source/destination IP addresses | Replace them with another or an invalid IP address
Sequence numbers of source and destination     | Increase them to update other nodes' forward route tables, or decrease them to suppress the update

IV. ROUTING ATTACKS


Among the intrinsic vulnerabilities of ad hoc
networks, some reside in their routing and
autoconfiguration mechanisms. Both these key
functionalities of ad hoc networks are based on a full trust
between all the participating hosts. In the case of routing,
the correct transport of the packets on the network relies
on the veracity of the information given by the other
nodes. The emission of false routing information by a host
could thus create bogus entries in routing tables
throughout the network making communication difficult.
Furthermore, the delivery of a packet to a destination is
based on a hop-by-hop routing and thus needs total
cooperation from the intermediate nodes. A malicious
host could, by refusing to cooperate, quite simply block or
modify the traffic traversing it.
The following are the main types of active attacks:
i. Black hole attack
This is a kind of attack in which a malicious node attracts all packets by falsely claiming a fresh route to the destination, and then absorbs them without forwarding them to the destination.
Fig. 2. Black hole attack

In AODV, the black hole attack is performed as follows. The malicious node M first detects the active route between the sender node D and the destination node
C. The malicious node M then sends to node B an RREP which contains the spoofed destination address with a smaller hop count and a larger sequence number than normal. Node B forwards this RREP to the sender node D. This route is then used by the sender to send the data, so the data arrive at the malicious node and are dropped there. Fig. 2 illustrates the black hole attack. The sender and destination nodes, however, are not in a position to know of the attack posed on their communication.
ii. Wormhole attack
Two malicious nodes share a private communication link between them. One node captures the traffic information of the network and sends it directly to the other node. A wormhole can eavesdrop on the traffic, maliciously drop packets, and perform man-in-the-middle attacks against the network protocols.
iii. DoS (Denial of Service) attack
When the network bandwidth is hijacked by a malicious node, the result is a DoS attack. In order to consume precious network resources such as bandwidth, or node resources such as memory or computation power, the attacker inserts packets into the network. Specific instances of the DoS attack are the routing table overflow attack and the energy consumption attack.
V. FUZZY LOGIC
Fuzzy logic is a set of concepts and approaches designed to handle vagueness and imprecision. A set of rules can be created to describe the relationship between the input variables and the output variables, which may indicate whether an intrusion has occurred. Fuzzy logic uses membership functions to evaluate the degree of truthfulness. The basic fuzzy logic system is illustrated in Fig. 3.
Fig. 3. Fuzzy logic system: crisp input, fuzzification, fuzzy inference, de-fuzzification, crisp output


Fuzzy inference [4] is the process of formulating the mapping from a given input to an output using fuzzy logic. The mapping then provides a basis from which decisions can be made, or patterns discerned.
Fuzzy inference systems have been successfully applied in fields such as automatic control, data classification, decision analysis, expert systems and computer vision. There are two types of Fuzzy Inference Systems (FIS):
1. Mamdani type, and
2. Sugeno type.
Mamdani's fuzzy inference method is the most commonly seen fuzzy methodology. Mamdani-type inference expects the output membership functions to be fuzzy sets; after the aggregation process, there is a fuzzy set for each output variable that needs defuzzification. The Sugeno method of fuzzy inference is similar to the Mamdani method in many respects. The first two parts of the fuzzy inference process, fuzzifying the inputs and applying the fuzzy operator, are exactly the same. The main difference between Mamdani and Sugeno is that the Sugeno output membership functions are either linear or constant.
Advantages of the Sugeno method:
- It is computationally efficient.
- It works well with optimization and adaptive techniques.
- It has guaranteed continuity of the output surface.
- It is well suited to mathematical analysis.
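To make the inference step concrete, the following plain-Python sketch implements a tiny zero-order Sugeno system with two normalised inputs, triangular membership functions and constant rule outputs combined by a weighted average; the membership parameters, rule set and variable names are illustrative assumptions and do not reproduce the rule base used in this work.

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_malice(packet_drop, rreq_fwd_rate):
    # Fuzzify the crisp inputs (both assumed normalised to [0, 1]).
    drop_high = tri(packet_drop, 0.3, 1.0, 1.7)
    drop_low = tri(packet_drop, -0.7, 0.0, 0.7)
    fwd_low = tri(rreq_fwd_rate, -0.7, 0.0, 0.7)
    fwd_high = tri(rreq_fwd_rate, 0.3, 1.0, 1.7)
    # Zero-order Sugeno rules: firing strength via min (AND), constant outputs.
    rules = [
        (min(drop_high, fwd_low), 1.0),   # many drops, few forwards -> malicious
        (min(drop_low, fwd_high), 0.0),   # few drops, normal forwarding -> genuine
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0  # weighted-average defuzzification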
VI. PROPOSED SYSTEM
This section presents the combinatorial intrusion detection system, which combines specification and anomaly approaches for detecting malicious nodes that perform the black hole attack. In general, for specification-based detection [5], the norms used to describe normal operation are defined manually; this is followed in the design of the proposed technique. Threshold values are defined for parameters specific to the AODV routing protocol, as in the specification-based approach. The following parameters are used to describe the normal routing and forwarding behaviour of wireless nodes:
a. Route Request Forwarding Rate (RFR), which is the ratio of the number of RREQs sent to the number of RREQs received,
b. Number of Route Replies Sent (SREP), and
c. Number of packets dropped (PD) by each node.
Determining the threshold values is based on the anomaly approach, in which normal behaviour is usually established by automated training. Training the network with various traffic scenarios derives the threshold value for each of the above parameters: the statistical mean of each parameter is computed and fixed as its threshold value. The aforementioned parameters and threshold values are used as inputs to the FIS, and the black hole nodes are identified using the FIS.
The various MANETs with different traffic scenarios are simulated in ns2, and the detection of black hole nodes is carried out using the FIS in Matlab. It is assumed that all
the nodes in the network are genuine in forwarding RREQ packets to the neighbouring nodes in their coverage region when they do not have a route to the destination defined in the request packet.
Initially, MANETs with varying numbers of nodes are simulated using ns2. Each network contains a varying number of black holes. A black hole is simulated by changing the hop count to 1 and the destination sequence number to the highest value in the SREP packet, so that the black hole node places itself in the path selected by the source node. Various CBR traffic scenarios are simulated between different sets of source and destination nodes. The network is trained with different sets of simultaneous communications, and the parameters used as inputs to the FIS are computed/derived from the training data set obtained from the various simulation scenarios. The threshold value for each parameter is determined from the training data set. The fuzzy rule set in the FIS operates on the derived input and threshold values to detect the black hole nodes; the Sugeno-type Fuzzy Inference System in MATLAB is used for the detection. The detection of black hole nodes follows a credit system: genuine nodes receive incrementing credits for normal behaviour, while malicious nodes have their credits reduced. The initial credit allocation is based on the performance of each node in terms of the number of data packets dropped and the RREQ forwarding rate. The nodes with minimum or no credits are considered to be the black hole nodes. The entire system is represented as a flow diagram in Fig. 4.
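The credit-based flagging just described can be sketched as follows: the thresholds are taken as the statistical means of each FIS input parameter (PD, SREP, RFR) over the training runs, and every node gains or loses credit according to how its observed values compare with those thresholds. This crisp sketch stands in for the full Sugeno rule set; all names and the exact credit increments are assumptions for illustration.

def thresholds(training_stats):
    # training_stats: {node_id: {'PD': ..., 'SREP': ..., 'RFR': ...}} from training scenarios
    n = len(training_stats)
    return {p: sum(s[p] for s in training_stats.values()) / n for p in ('PD', 'SREP', 'RFR')}

def node_credits(observed, thr):
    credits = {}
    for node, s in observed.items():
        c = 0
        c += -1 if s['PD'] > thr['PD'] else 1      # drops more packets than the mean
        c += -1 if s['SREP'] > thr['SREP'] else 1  # sends unusually many route replies
        c += -1 if s['RFR'] < thr['RFR'] else 1    # forwards fewer RREQs than the mean
        credits[node] = c
    return credits

def suspected_black_holes(observed, thr):
    credits = node_credits(observed, thr)
    worst = min(credits.values())
    return [node for node, c in credits.items() if c == worst]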

Fig. 4. Flow diagram of the proposed system: simulation of MANETs with black hole nodes; collection of the various parameters, viz. PD, SREP and RREQ forwarding rate; running the simulation with different traffic scenarios; fixing the threshold values for the given parameters; applying the fuzzy rule set to detect the black hole nodes.

Table 2 lists the details of the various parameters used in the simulation of the MANETs.

Table 2. Simulation Parameters

Description               | Value
Network Size              | 15, 25, 50
Max. Speed & Pause Time   | 200 m/s & 2 s
No. of Black Hole Nodes   | 1, 2, 3 (for the 15-node network); 1, 2, 5 (for the 25-node network); 3, 6, 10 (for the 50-node network)
Simulation Duration       | 500 s
Simulation Area           | 1000 x 1000
Application Type          | CBR

The various parameters used to derive the different fuzzy rules are depicted in Fig. 5 and Fig. 6 for a mobile ad hoc network of 15 nodes with 1 black hole node, under low and medium traffic between nodes.

Fig. 5. FIS parameters (PD, SREP, RFR): average value and value in the attacker node, for low traffic

Fig. 6. FIS parameters (PD, SREP, RFR): average value and value in the attacker node, for medium traffic

It can be seen from Fig. 5 and Fig. 6 that the black hole nodes drop a larger number of packets than the genuine nodes do, and that they do not forward any RREQ packets. Moreover, it is inferred that the malicious nodes send more fake RREPs to the source nodes, to make themselves part of the communication path between the source and destination nodes.
Performance Analysis:

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

The following plots show the performance of the


proposed system in detecting the black hole nodes in
different traffic scenarios.

124

From the plots it is inferred that the proposed system


produces true and false positives. The performance of the
proposed system can be evaluated by computing the
following three parameters:

1. True Positive Rate (TPR)
2. False Negative Rate (FNR), and
3. False Positive Rate (FPR)

The performance parameters for the various scenarios are tabulated in Table 3.

Table 3. Performance Analysis
No. of Nodes   No. of Black Hole Nodes   Traffic   TPR (%)   FNR (%)   FPR (%)
15             1                         Low       100       0         6.667
15             1                         Med       66.7      33.3      7
15             1                         High      100       0         7
15                                       Low       66.67     33.33     7
15                                       Med       50        50        7
15                                       High      100       0         7
15                                       Low       33        67        7
15                                       Med       44.33     55.7      7
15                                       High      67        33        7
25                                       Low       100       0         15.33
25                                       Med       100       0         13
25                                       High      100       0         13
25                                       Low       83.33     16.67     13
25                                       Med       100       0         13
25                                       High      100       0         13
25                                       Low       86.67     13.3      13
25                                       Med       100       0         13
25                                       High      100       0         13

It is clear from Table 3 that as the network size increases, the detection rate increases.

Fig. 7. Detection of black hole nodes for a 15-node network, 1 black hole and low traffic (plot of node performance versus node id).

Fig. 7 shows the performance of the proposed fuzzy-based intrusion detection in a network of 15 nodes with one black hole under low traffic. The nodes having the lowest credit values are considered to be black hole nodes. It can be seen from Fig. 7 that the system flags the nodes with node ids 1, 2 and 14 as black holes, whereas only the node with node id 14 was actually simulated as a black hole.

Fig. 8. Detection of black hole nodes for a 25-node network, 2 black holes and medium traffic (plot of node performance versus node id).

The detection of black hole nodes in a 25-node network with 2 black holes (node ids 23 and 24) is shown in Fig. 8. It is clear from the figure that, in addition to detecting the correct malicious nodes, the system also flags the nodes with node ids 1 and 2 as black holes.
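The three rates reported in Table 3 can be computed from the per-scenario confusion counts. The Python sketch below uses one common convention (the exact denominators used in the experiments are not stated in the paper), with the set of flagged nodes taken from the credit-based detector.

    def detection_rates(actual_black_holes, flagged_nodes, all_nodes):
        """Return (TPR, FNR, FPR) in percent for one simulation scenario."""
        actual = set(actual_black_holes)
        flagged = set(flagged_nodes)
        negatives = set(all_nodes) - actual
        tp = len(actual & flagged)          # black holes correctly detected
        fn = len(actual - flagged)          # black holes missed
        fp = len(flagged & negatives)       # genuine nodes wrongly flagged
        tpr = 100.0 * tp / len(actual) if actual else 0.0
        fnr = 100.0 * fn / len(actual) if actual else 0.0
        fpr = 100.0 * fp / len(negatives) if negatives else 0.0
        return tpr, fnr, fpr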


VII. FUTURE WORK AND CONCLUSION

This paper discusses the design of a hybrid, fuzzy-based Intrusion Detection System that combines anomaly and specification-based techniques. The performance analysis clearly shows that the proposed method is efficient for large networks. The combinatorial approach works well in large networks with high traffic, producing 100% detection and 0% false negatives with a comparatively small percentage of false positives. Future work includes reducing the false positives and false negatives for all network sizes and all kinds of traffic by attempting a cross-layer approach. It is also desirable to isolate the attacker nodes and to publicize their malicious behaviour to the entire network, so that genuine nodes may avoid them in the Route Discovery Process and path setup. The same method can be extended to detect more attacks.




Fuzzy Logic in Digital Image Processing


-A Pragmatic Review
Prashanth G.K
Assistant Professor,
Department of Master of Computer Applications,
Siddaganga Institute of Technology,
Tumkur, India,

Abstract: This paper elaborates on the application of fuzzy logic, one of the predominant components of artificial intelligence. A thorough review of the copious literature has been carried out, and it is expected that this paper will be a useful contribution for prospective researchers in the area of digital image processing. Possible uses of fuzzy logic, from the viewpoints of processing images for morphological features, image segmentation, edge detection, clustering, classification, enhancement and contrast enhancement, are discussed in detail.

Keywords: Image processing, Fuzzy logic, Morphology, enhancement, Segmentation.

I. INTRODUCTION
Fuzzy Logic (FL) is a convenient approach for mapping an input space to an output space [1]. Here the inputs could be patterns and the outputs could be classes or some inferences. Fuzzy logic is one of the powerful components of artificial intelligence, and it has validity and high applicability in a number of fields wherever uncertainty of a non-statistical kind is encountered. The sole purpose of applying fuzzy logic in image processing is to identify objects or to achieve a clearer perception of images, so as to offer more realistic results in terms of image clarity and information content. Typically speaking, image clarity is critical in areas like medicine, security and military applications. The intent of this paper is to study how the concepts of fuzzy logic can be applied and to show the related applications in image processing. This paper is organized as follows: Section 2 explains the concepts of fuzzy logic, Section 3 focuses on various applications of fuzzy logic in different tasks of image processing, and Section 4 presents the conclusion.
II. FUZZY LOGIC CONCEPTS
Fuzzy Logic is a means of dealing with information in much the same way that humans or animals do. Fuzzy Logic is built around the concept of reasoning in degrees,


rather than in Boolean (yes/no, 0/1) expressions as computers do.
Variables are defined in terms of fuzzy sets, and rules are specified by logically combining fuzzy sets. The combination of the fuzzy sets defined for the input and output variables, together with a set of fuzzy rules that relate one or more input fuzzy sets to an output fuzzy set, builds a fuzzy system.
Most fuzzy systems are meant for use in decision-making or pattern recognition applications. An ordinary (crisp) set splits the data into those items that are completely in the set and those items that are completely outside of the set.
The first step in the fuzzification of an image is that the gray value of a pixel at position g(x, y) defines the degree to which that pixel belongs to the image. This always requires a renormalization of the gray values, because they usually cover a range of integer values from 0 to 255, whereas membership degrees are expected to lie in the range [0, 1]. The membership degrees thus state that the pixel at position (x, y) belongs to the image to the degree g(x, y)/gmax. Defuzzification of the image in this view is binary, but the gray level a pixel position should take cannot be determined uniquely, because the mapping is not deterministic.
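A minimal Python sketch of this gray-level fuzzification and defuzzification, assuming an 8-bit image held in a NumPy array; the alpha-cut used for the binary case is an illustrative choice.

    import numpy as np

    def fuzzify(gray_image, g_max=255):
        """Map gray levels [0, g_max] to membership degrees in [0, 1]."""
        return gray_image.astype(float) / g_max

    def defuzzify(membership, g_max=255):
        """Map membership degrees back to gray levels."""
        return np.clip(np.round(membership * g_max), 0, g_max).astype(np.uint8)

    def defuzzify_binary(membership, cut=0.5):
        """Binary defuzzification by an alpha-cut (assumed cut value 0.5)."""
        return (membership >= cut).astype(np.uint8)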
In general, fuzzy logic merits consideration from two viewpoints [2]:
Fuzzy techniques are powerful tools for knowledge representation and processing.
Fuzzy techniques can manage vagueness and ambiguity efficiently.
There are three important respects in which image processing is fuzzy by itself: grayness ambiguity, geometrical fuzziness, and vague (complex/ill-defined) knowledge.
Fuzziness can be applied in mathematical morphology to the image pixels and the structuring element in order to perform the morphological operations. In the following sections, we present such fuzzy approaches to various image processing tasks.


III. IMAGE PROCESSING AND FUZZY LOGIC

Image Processing is an interdisciplinary topic used extensively across many disciplines. It is all about rendering an image so that it becomes more practical and readily usable. Five predominant application areas of image processing are listed below [3]:

Computer Graphics: This area involves desktop publishing, electronic media and video games.
Image Transmission: Here the passing of information in the form of an image from one location to another through cables or satellite is involved. The image is usually in compressed form.
Image Analysis: It involves the identification of various shapes, symbols, handwriting, etc.
Image Correction: This area includes geometric corrections of an image.
Object Analysis: This field is used in the identification of space and objects in typical electronic applications such as robotic vehicles.

Though the fields of application are enormous, only typical ones are listed here. Image analysis and related procedures are founded on visual tasks carried out by human beings, where precision and accuracy cannot always be achieved and instead become a matter of degree. It is here that fuzzy logic comes into use, because it is a proven tool for capturing the ambiguities or vagueness that come into play wherever human expertise is involved.
A. FUZZY LOGIC IN IMAGE CONTRAST ENHANCEMENT
Contrast enhancement is one of the predominant issues in image processing and analysis; it is indeed a fundamental step preceding image segmentation. Image enhancement is employed to transform an image on the basis of the psychophysical characteristics of the human visual system [4]. Two commonly used families of techniques for contrast enhancement are (1) indirect methods and (2) direct methods. The indirect approach is to modify the histogram: through histogram modification the original gray level is assigned a new value, and as a result the intensity span of the pixels is expanded. Histogram specification and histogram equalization are two popular indirect contrast enhancement methods [4]. However, histogram modification techniques simply stretch the global distribution of the intensities; no precise mechanism is involved for measuring the degree of enhancement of the contrast, and human judgement is involved, which is exactly where fuzzy logic fits in. The authors of [5] have adopted the fuzzy entropy principle and fuzzy set theory to develop a novel adaptive contrast enhancement method, conducting experiments on several images; the proposed algorithm is reported to be very effective in contrast enhancement as well as in preventing over-enhancement. In [6] an adaptive contrast enhancement scheme is proposed which enhances contrast with a lower degree of noise amplification. A smoothing method with fuzzy sets was used in [7] to enhance images, applying contrast intensification operations on pixels to modify their membership values. In [8] a new filter based on fuzzy logic is proposed which performs well in mixed-noise environments; this filter is mainly based on the idea that each pixel is not allowed to be uniformly fired by several fuzzy rules. The results are reported to be very promising and indicate a high performance in image restoration when compared to conventional filters.
A new fuzzy-supported colour tone reproduction algorithm has been introduced to help in bringing out hardly visible or non-viewable features and the contrast of colour images [9]; the method is reported to produce high-quality colour high-dynamic-range images with a high level of detail and colour information.
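As one concrete instance of the fuzzy route to contrast enhancement surveyed above, the classic intensification (INT) operator associated with Pal and King can be sketched in Python as follows; the simple linear membership function and the number of passes are illustrative assumptions, not the adaptive scheme of [5].

    import numpy as np

    def fuzzy_intensify(gray_image, passes=1):
        """Pal-King style contrast intensification: fuzzify, apply the INT
        operator (push memberships away from 0.5), then defuzzify."""
        g = gray_image.astype(float)
        g_min, g_max = g.min(), g.max()
        mu = (g - g_min) / (g_max - g_min + 1e-12)            # fuzzification
        for _ in range(passes):
            low = mu <= 0.5
            mu = np.where(low, 2.0 * mu ** 2,
                          1.0 - 2.0 * (1.0 - mu) ** 2)        # INT operator
        out = g_min + mu * (g_max - g_min)                    # defuzzification
        return np.clip(out, 0, 255).astype(np.uint8)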
B. MORPHOLOGICAL IMAGE PROCESSING USING FUZZY LOGIC
Morphology transforms the original image into another image through its interaction with another image of a certain shape and size, the structuring element.
Mathematical morphology is based on set theory: the shapes of objects in a binary image are represented by the pixels with value one, while the background pixels have value zero. The fuzzy approach to image morphology aims to extend the binary morphological operations to gray-level images [9]. To this end, morphological operations such as fuzzy erosion and fuzzy dilation, based on fuzzy implications, have been tried.
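One common way to realise such fuzzy erosion and dilation treats both the gray-level image and the structuring element as membership functions in [0, 1]. The Python sketch below uses the min/max formulation, which is only one of the implication-based variants alluded to above.

    import numpy as np

    def fuzzy_dilate(mu, se):
        """Fuzzy dilation: sup over the structuring element of min(image, SE)."""
        mu = np.asarray(mu, dtype=float)
        h, w = se.shape
        ph, pw = h // 2, w // 2
        padded = np.pad(mu, ((ph, ph), (pw, pw)), mode="constant", constant_values=0.0)
        out = np.zeros_like(mu)
        for i in range(mu.shape[0]):
            for j in range(mu.shape[1]):
                window = padded[i:i + h, j:j + w]
                out[i, j] = np.max(np.minimum(window, se))
        return out

    def fuzzy_erode(mu, se):
        """Fuzzy erosion: inf over the structuring element of max(image, 1 - SE)."""
        mu = np.asarray(mu, dtype=float)
        h, w = se.shape
        ph, pw = h // 2, w // 2
        padded = np.pad(mu, ((ph, ph), (pw, pw)), mode="constant", constant_values=1.0)
        out = np.zeros_like(mu)
        for i in range(mu.shape[0]):
            for j in range(mu.shape[1]):
                window = padded[i:i + h, j:j + w]
                out[i, j] = np.min(np.maximum(window, 1.0 - se))
        return out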

C. IMAGE SEGMENTATION
Image segmentation is the process of dividing an image into regions, each of which corresponds to a homogeneous surface in the scene. Hence the goal is to extract closed boundaries around the surfaces. If complete edge information can be extracted, a reliable image segmentation can be achieved; using different types of edge maps has the advantage of presenting most of the information needed about the scene. By inspecting the segmented image, all the major regions that have been isolated and labelled can be seen. In order to evaluate the segmentation process, the results are tested and compared against an ideal segmented image which is synthetically generated. The evaluation criteria are based on the degree of similarity between the actual and ideal segmented images, and this degree of similarity may be quantified by computing a metric distance between the two segmentations [10, 11].
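The degree of similarity between the actual and ideal segmentations can be quantified in several ways. As one illustration (the specific metric is not fixed by [10, 11]), the Dice coefficient between two binary region masks can be computed as follows.

    import numpy as np

    def dice_similarity(actual_mask, ideal_mask):
        """Dice coefficient between two binary segmentation masks (1 = region)."""
        a = np.asarray(actual_mask).astype(bool)
        b = np.asarray(ideal_mask).astype(bool)
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * inter / denom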

D. IMAGE DENOISING
A generalized fuzzy inference system (GFIS) has been applied to noisy image processing. The GFIS is a multi-layer neuro-fuzzy structure which combines both the Mamdani model and the Takagi-Sugeno (TS) fuzzy model to form a hybrid fuzzy system. The GFIS can not only preserve the interpretability property of the Mamdani model but also keep the robust local stability criteria of the TS model. Simulation results indicate that the proposed model achieves a higher-quality restoration of filtered images for this noise model than median filters or Wiener filters, in terms of peak signal-to-noise ratio (PSNR) [16].
Recently, the application of fuzzy techniques to image noise reduction has become a promising research field [17]. Fuzzy techniques have already been applied in several domains of image processing, e.g. filtering, interpolation and morphology, and have numerous practical applications in industrial and medical image processing. Fuzzy filters, including the FIRE filter [18], the weighted fuzzy mean filter [19] and the iterative fuzzy control based filter [20], are able to outperform rank-order filter schemes (such as the median filter). Nevertheless, most fuzzy techniques are not specifically designed for Gaussian(-like) noise or do not produce convincing results when applied to handle this type of noise. The proposed generalized fuzzy inference model is a hybridization of the Mamdani and TS models; the GFIS can be characterized by the neuro-fuzzy spectrum, in light of linguistic transparency and input-output mapping accuracy [16].

E. EDGE DETECTION
Edge detection techniques transform images into edge images by exploiting the changes of grey tones in the images. Edges are the sign of lack of continuity and of termination. As a result of this transformation, an edge image is obtained without any change in the physical qualities of the main image [12][13]. Objects consist of numerous parts of different colour levels, and in an image with different grey levels there is an obvious change in the grey levels at the boundary of the object.
An edge in an image is a significant local change in the image intensity, usually associated with a discontinuity in either the image intensity or the first derivative of the image intensity. Discontinuities in the image intensity can be either step edges, where the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side, or line edges, where the image intensity abruptly changes value but then returns to the starting value within some short distance [14]. However, step and line edges are rare in real images. Because of low-frequency components or the smoothing introduced by most sensing devices, sharp discontinuities rarely exist in real signals: step edges become ramp edges and line edges become roof edges, where the intensity changes are not instantaneous but occur over a finite distance [15].

IV. CONCLUSION
This paper has focused on various applications of fuzzy logic concepts in image processing. Fuzzy set theory provides an extended way of processing gray-scale images and provides operations for the morphological analysis of images. From the literature survey, it may be concluded that fuzzy logic opens new vistas in algorithm development, with special reference to generalized morphologies, 3D morphologies, geodesic morphologies, multivariate (colour) morphologies and statistical morphologies.

REFERENCES
[1] Timothy J. Ross, Fuzzy Logic with Engineering Applications, Wiley, second edition, 2010.
[2] University of Waterloo, Canada, Homepage of Fuzzy Image Processing, June 1997, retrieved from http://pami.uwaterloo.ca/tizhoosh/fip.htm.
[3] H. Bassmann and P.W. Beslich, Adoculos Digital Image Processing Student Version, International Thomson Computer Press, pp. 153-214, 1995.
[4] R.C. Gonzalez and R.E. Woods, Digital Image Processing.
[5] H.D. Cheng and Huijuan Xu, A novel fuzzy logic approach to contrast enhancement, Pattern Recognition 33, pp. 809-819, 2000.
[6] Laxmikant Dash and B.N. Chatterji, Adaptive contrast enhancement and de-enhancement, Pattern Recognition 24, pp. 289-302, 1991.
[7] S.K. Pal and R.A. King, Image enhancement using smoothing with fuzzy sets, IEEE Trans. Systems, Man and Cybernetics 11, pp. 404-501, 1981.
[8] Farzam Farbiz et al., An iterative method for image enhancement based on fuzzy logic.
[9] Annamária R. Várkonyi-Kóczy and András Rövid, Fuzzy Logic Supported Coloured Image Information Enhancement, 8th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics.
[10] M. Abidi, M. Abdulghafour and T. Chandra, Fusion of visual and range features using fuzzy logic, Control Engineering Practice (IFAC) 2, no. 5, pp. 833-847, 1994.
[11] M. Abdulghafour, A. Fellal and M. Abidi, Fuzzy logic based data integration: theory and application, Proceedings of the IEEE Int. Conference on Multisensor Fusion and Integration for Intelligent Systems (Las Vegas, Nevada), pp. 151-160, 1994.
[12] Ibrahiem M. M. El Emary, On the Application of Artificial Neural Networks in Analyzing and Classifying the Human Chromosomes, Journal of Computer Science, vol. 2 (1), pp. 72-75, 2006.
[13] N. Senthilkumaran and R. Rajesh, A Study on Edge Detection Methods for Image Segmentation, Proceedings of the International Conference on Mathematics and Computer Science, Vol. I, pp. 255-259, 2009.
[14] M. Abdulghafour, Image segmentation using fuzzy logic and genetic algorithms, Journal of WSCG, vol. 11, no. 1, 2003.
[15] Metin Kaya, Image Clustering and Compression Using an Annealed Fuzzy Hopfield Neural Network, International Journal of Signal Processing, pp. 80-88, 2005.
[16] Nguyen Minh Thanh and Mu-Song Chen, Image Denoising Using Adaptive Neuro-Fuzzy System, IAENG International Journal of Applied Mathematics, 36:1, IJAM_36_1_11.
[17] E. Kerre and M. Nachtegael, Eds., Fuzzy Techniques in Image Processing, Springer-Verlag, New York, 2000, vol. 52, Studies in Fuzziness and Soft Computing.
[18] F. Russo and G. Ramponi, A fuzzy operator for the enhancement of blurred and noisy images, IEEE Trans. Image Processing, vol. 4, pp. 1169-1174, Aug. 1995.
[19] C.-S. Lee, Y.-H. Kuo and P.-T. Yu, Weighted fuzzy mean filters for image processing, Fuzzy Sets and Systems, no. 89, pp. 157-180, 1997.
[20] F. Farbiz and M. B. Menhaj, A fuzzy logic control based approach for image filtering, in Fuzzy Techniques in Image Processing, Springer-Verlag, New York, 2000, vol. 52, Studies in Fuzziness and Soft Computing, pp. 194-221.


Distance based Data aggregation for Energy Efficiency in Wireless Sensor Networks
Kusuma S.M, Dept. of Electronics & Communication Engg., REVA ITM, Bangalore
P.I. Basarkod, Dept. of Electronics & Communication Engg., REVA ITM, Bangalore
Veena K.N., Dept. of Telecommunication Engineering, JNNCE, Shimoga, India
B.P. Vijaya Kumar, Dept. of Computer Science & Engineering, MSRIT, Bangalore, India

Abstract: Wireless Sensor Networks are an important area due to the fact that they are potentially low-cost solutions to real-world challenges, and minimizing energy consumption is a challenging issue in such networks. In this paper we propose a methodology to design and analyze distance-based energy consumption for data aggregation in Wireless Sensor Networks (WSN). Our method involves clustering the sensor nodes based on their distance from the environment and their physical location. Further classification into sensors nearer to the environment, nearest-neighbour nodes and intermediate nodes is done to decide whether a node should sense directly from the environment or depend on neighbouring nodes for the sensed data. The analysis mainly focuses on identifying sensing and trans-receiving operations by the nodes nearer to the environment, the intermediate nodes, the dependency of each node, etc., and on how to conserve energy. The network lifetime and energy conservation for data aggregation are analyzed using a simulation platform with random deployment of nodes, along with a Kohonen Self-Organizing Map Neural Network (KSOM-NN) for clustering. Various cases of network dynamics are studied to understand the challenges of WSN deployment, energy efficiency and data aggregation, and how to improve the reliability of and the degree of confidence in the data sensed from the environment in a WSN.

Keywords-- Wireless Sensor Network; Clustering; KSOM; Data aggregation.

I. INTRODUCTION

In recent years, Wireless Sensor Networks (WSNs) [1] have found a wide variety of applications ranging from military, scientific and industrial to health-care and domestic services. In a WSN the sensor nodes have native capabilities to detect their nearest neighbors and help to develop an ad-hoc network through a set of well-defined protocols.
A large number of nodes are involved in a WSN, and all nodes typically rely on limited battery power. Transmitting at unnecessarily high power not only reduces the lifetime of the nodes and the network but also introduces excessive interference. Minimizing energy consumption in wireless sensor networks has therefore been a challenging issue. Thus, determining the optimal deployment of sensors in order to minimize energy consumption and prolong network lifetime


becomes an important problem during the network planning and dimensioning phase. Hence there is a requirement, both for existing works and for the future, to design and develop a methodology to analyze energy consumption, data aggregation, degree of confidence and network lifetime with respect to the deployment of the sensors and the sensor network in a given phenomenon of interest. In this work, we propose, analyze and evaluate the energy consumption under random deployment of sensors, considering their distance to the environment measured in terms of physical distance, other interference and noise, as well as the distances among the nodes themselves. A distance-based data aggregation and realistic power consumption model is presented in this paper. The power consumption of the major components is individually identified, and the effective transmission range of a sensor node is modeled by the output power of the transmitting power amplifier, the sensitivity of the receiving low-noise amplifier, and the RF environment. Using this basic model, conditions for minimum sensor network power consumption are derived for the communication of sensor data from a source device to a destination node.
The rest of the paper is organized as follows. A brief
introduction to sensor networks is given in section I. Sensor
network model is explained in section II. Some of the
related works are briefly explained in section III. The sensor
node and network energy model is described in section IV.
Clustering sensor network using KSOM-NN is described in
section V. The proposed model is described in section VI.
Simulation and results of the proposed technique are given
in section VII. Finally, summary of this paper are presented
in section VIII.
II.

SENSOR NETWORK MODEL

Figure 1. Sensor network model (sensor nodes grouped into clusters, communicating wirelessly and sensing the environment).

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

A typical sensor network consisting of sensor nodes is shown in Fig. 1. The sensor nodes are clustered based on their distance from the environment and their physical location. A further classification of each cluster is made into sensors nearer to the environment, nearest-neighbour nodes and intermediate nodes. Nodes within a cluster sense the environmental data with respect to a distance measured in terms of physical distance, noise, external interference, malicious data, etc. Based on this distance, a sensor node decides either to sense directly from the environment or to depend on the neighbouring nodes for the sensed data.
III.

RELATED WORK

In this section a brief discussion on the related works


pertaining to clustering and power consumption in WSNs are
discussed. A Two-Phase Clustering (TPC) scheme [2], which
includes a cluster head electing stage and an energy-saving
data relay link setup is discussed. Determination of the
optimal node density by using Voronoi cells to guarantee a
lower bound of network lifetime is proposed in [3]. A Hybrid
Energy-Efficient Distributed (HEED) clustering scheme [4],
introducing a variable known as cluster radius, which defines
the transmission power to be used for intra-cluster broadcast
is discussed. Simplification of the average distance in
calculating energy consumption is studied in [5] [6]. With
the assistance of GPS or localization techniques [7], the
square grid also provides an easier coordination among all
sensor nodes in the network is examined. EECS [8]
introduces a distance-based cluster formation, where clusters
farther away from the sink have smaller sizes, so the energy
could be preserved for long-haul data transmission to the
sink. In [9] an unequal clustering model is proposed to balance the energy consumption in multi-hop networks, where the CHs are deterministically deployed at precomputed locations.
according to the hop count from the sink, and the size of the
clusters grows with the distance increase in a power-of-2
function without optimizing the size ratio. In [11] two
routing schemes, Diagonal-First routing and Manhattan
Walk, which only rely on the one-hop geographical
information is examined. A cluster in these two schemes is a
square region in which any node can communicate via a
single-hop transmission with any other nodes in a neighbor
cluster. This is similar to GAF [12]. In [13] a simple linear
network and the relationship between the optimal radio range
and traffic load distribution in one-dimensional networks is
studied. While [14] proposed variable grid sizes in two-dimensional networks, they only partition the network
according to the traffic pattern in one dimension and left the
other dimension intact, i.e., quasi-two-dimensional. A
variable-size grid in image processing [15] is implemented.
An Energy-Efficient Communication Protocol for Wireless Microsensor Networks is presented in [16]; such communication protocols can have a significant impact on the overall energy dissipation of these networks.

IV. SENSOR NODE AND NETWORK ENERGY MODEL

The sensor node energy consumption model, covering sensing, processing and radio energy, is described in this section to understand the energy dissipated in sensing, transmitting and receiving information from the environment, along with the energy dissipation in the WSN as a whole.
A. Sensor Node Energy Model
The power dissipation [17] in the following components of
sensor node is considered for sensor node energy model.
Sensor sensing:
The sensing system links the sensor node to the physical
world. Sources of sensor power consumption are: signal
sampling and conversion of physical signals to electrical
signals, signal conditioning, and analog to digital
conversion (ADC). The total energy dissipation for sensing
activity for a b-bit packet is evaluated as follows.
Esens(b) = b * Vsup * Isens * Tsens
where Vsup is the supply voltage, Isens is the total current required for the sensing activity, Tsens is the time duration of sensing, and Esens is the sensing energy at the sensor node per round.
Micro-controller Processing:
The energy for processing and aggregation of the data, mainly consumed by the micro-controller, is attributed to two components: energy loss from switching, Eswitch, and energy loss due to leakage current, Eleak. The micro-controller processing energy is given by
Emicro = Eswitch + Eleak
Radio Transmission:
Communication of neighboring sensor nodes is enabled by a
sensor radio.
The energy dissipated in transmitting a b-bit packet over a distance dij from the sensor node is given by
Etx(b, dij) = b*Eelec + b*dij^n*Eamp
where Eelec is the energy dissipated by the transmit electronics, Eamp is the energy dissipated by the power amplifier, and n is the distance-based path loss exponent (we use n = 2 for free-space fading and n = 4 for multi-path fading).
Radio Reception:
The energy dissipated in receiving a b-bit packet at the sensor node is given by
Erx(b) = b*Eelec
where Erx is the total energy dissipated in the receive electronics, Eelec is the per-bit energy dissipated in the receive electronics, and b is the number of bits in the packet.

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

B. Network Energy Consumption:
The network energy consumption of the wireless sensor network is given below, where Ns sensor nodes are randomly and uniformly deployed in an M x M region.
Eclu = (nc * Erx * b) + Etxsink * b
Enet = k * Eclu
where k is the number of clusters in the sensor network, nc is the number of nodes in a cluster, and Enet is the energy consumed in the network.
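The node and network energy model above can be transcribed directly. In the Python sketch below, the cluster energy interprets Erx as the per-packet reception energy Erx(b) defined earlier and Etxsink as a per-bit transmission cost to the sink, which is one reading of the formula; no numeric constants are assumed.

    def e_sense(b, v_sup, i_sens, t_sens):
        """Sensing energy: Esens(b) = b * Vsup * Isens * Tsens."""
        return b * v_sup * i_sens * t_sens

    def e_tx(b, d_ij, e_elec, e_amp, n=2):
        """Transmission energy: Etx(b, dij) = b*Eelec + b*dij^n*Eamp
        (n = 2 for free-space fading, n = 4 for multi-path fading)."""
        return b * e_elec + b * (d_ij ** n) * e_amp

    def e_rx(b, e_elec):
        """Reception energy: Erx(b) = b * Eelec."""
        return b * e_elec

    def e_cluster(n_c, b, e_elec, e_tx_sink):
        """Cluster energy, reading Erx in the formula as Erx(b) above and
        Etxsink as a per-bit cost of reaching the sink (assumption)."""
        return n_c * e_rx(b, e_elec) + e_tx_sink * b

    def e_network(k, e_clu):
        """Network energy: Enet = k * Eclu for k clusters."""
        return k * e_clu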
V. CLUSTERING IN WSN

In clustering schemes, the sensor nodes are partitioned into a number of small clusters, and each cluster has a cluster head (CH). Each cluster head gathers information from its group of sensor nodes, performs data aggregation and relays only the relevant information to the sink. Meanwhile, redundant cluster nodes can be put into sleep mode, since sensors within the sensing and transmission range of others have no need to be active all the time. Clustering schemes are therefore widely used in wireless sensor networks, not only because of their simple node coordination, but also because they use multi-hop routing between CHs to avoid long-range transmissions. This minimizes energy consumption by not letting all the nodes send data to the distant sink, which would involve a high-energy transmission process; by aggregating the information from individual sensors, a clustering scheme can abstract the characteristics of the network topology along with the application requirements, and it also reduces the bandwidth needed. This provides scalability and robustness for the network and increases the lifetime of the system.
The KSOM-NN is a competitive, feed-forward, unsupervised neural network. It has the capability of unsupervised learning and self-organization, and is able to infer relationships and learn more as more inputs are presented to it. In the process of learning, the network is able to discover statistical regularities in its input space and automatically develops different groups or clusters to represent different classes of inputs; this computation is useful in a WSN field scenario.

A. Selection of Neurons and KSOM-NN Architecture

Kohonen Self-Organizing Map Neural Networks have two layers, an input layer and an output layer known as the competitive layer. Each input neuron is connected to every neuron on the output layer, which is organized as a two-dimensional grid. Fig. 2 shows a typical example of a KSOM-NN with four input and 20 output neurons [18].
The input layer of a Kohonen network has the same number of neurons as the size of each input pattern; in the present case the input patterns are based on the parameters of the different sensor nodes. The neurons on the output layer find the organization of relationships among the input patterns, which are classified by the competitive neurons; the network can organize a topological map from a random starting point, and the resulting map shows the natural relationships among the patterns that are presented to it. The number of output layer neurons (competitive layer neurons) is selected based on the number of distinct patterns (classes) present in the input patterns; a number of output neurons of approximately more than double the number of input neurons is chosen, with a suitable neighborhood value.

Figure 2. A typical KSOM-NN model with 4 input and 20 output neurons (input layer fully connected to the competitive output layer through connection weights).

B. Learning Algorithm
The neural network is presented with a set of training input patterns corresponding to the sensor nodes. Let an input pattern be I = (I1, I2, ..., I|V|), where there are |V| input neurons in the input layer. Now suppose there are Q neurons in the output (competitive) layer, and let Oj be the j-th output neuron, so that the whole output layer is O = (O1, O2, ..., OQ). For each output neuron Oj there are |V| incoming connections from the |V| input neurons, and each connection has a weight value W. So, for any neuron Oj on the output layer, the set of incoming connection weights is
Wj = (Wj1, Wj2, ..., Wj|V|).
The Euclidean distance Dj of a neuron Oj in the output layer, whenever an input pattern I is presented at the input layer, is
Dj = sqrt( sum over i = 1..|V| of (Ii - Wji)^2 )        (1)
The competitive output neuron with the lowest Euclidean distance at this stage is the closest to the current input pattern and is called the winning neuron.
The neighborhood size and the weight update computation use equations (2) and (3) below. The size of the neighborhood h starts with a big enough value and decreases with the learning iterations such that
ht = h0 (1 - t/T)        (2)
where ht denotes the actual neighborhood size, h0 denotes the initial neighborhood size, t denotes the current learning epoch and T denotes the total number of epochs to be done. An epoch is completed when the network has swept through all the input patterns of every sensor node once.
During the learning stage, the weights of the incoming connections of each competitive neuron in the neighborhood of the winning one are updated, including the winning neuron itself. The overall weight update equation is:


Wjnew = Wjold + α (I - Wjold)        (3)
if unit Oj is in the neighborhood of the winning unit Oc, where α is the learning rate parameter; typical choices are in the range [0.2, 0.5]. Learning Algorithm II for clustering is summarized as follows.
C.

Algorithm II: Learning algorithm for Clustering

Begin
1. Assign random values to the connection weights W, in the range (0, 1);
2. Select an input pattern, i.e., the parameters of a sensor node;
3. Determine the winner output neuron Oc using equation (1);
4. Perform a learning step affecting all neurons in the neighborhood of Oc by updating the connection weights using equation (3); update ht and continue with step 2 until no noticeable changes in the connection weights are observed for all the input patterns;
5. Stop
End.
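A compact Python sketch of the learning loop defined by equations (1)-(3) and Algorithm II; treating the competitive layer as a one-dimensional chain for the neighbourhood test, and stopping after a fixed number of epochs, are implementation choices not fixed by the paper.

    import numpy as np

    def train_ksom(patterns, n_out, epochs=100, lr=0.3, h0=None, rng=None):
        """Kohonen SOM training: find the winner by Euclidean distance (eq. 1),
        shrink the neighbourhood linearly (eq. 2) and update weights (eq. 3)."""
        rng = np.random.default_rng() if rng is None else rng
        patterns = np.asarray(patterns, dtype=float)
        n_in = patterns.shape[1]
        w = rng.random((n_out, n_in))                    # step 1: random weights in (0, 1)
        h0 = n_out // 2 if h0 is None else h0
        for t in range(epochs):
            h_t = max(0, int(h0 * (1 - t / epochs)))     # eq. (2)
            for x in patterns:                           # step 2: present each pattern
                d = np.sqrt(((x - w) ** 2).sum(axis=1))  # eq. (1)
                c = int(np.argmin(d))                    # winning neuron
                lo, hi = max(0, c - h_t), min(n_out, c + h_t + 1)
                w[lo:hi] += lr * (x - w[lo:hi])          # eq. (3) over the neighbourhood
        return w

    def cluster_of(x, w):
        """Assign a sensor-node pattern to its closest (winning) neuron."""
        x = np.asarray(x, dtype=float)
        return int(np.argmin(np.sqrt(((x - w) ** 2).sum(axis=1))))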
VI.

PROPOSED MODEL

Our proposed sensor network model, shown in Fig. 1, involves clustering the sensor nodes according to their physical location and their distance from the environment. Within these clusters, the nodes are further grouped into nodes having nearest neighbours and intermediate nodes. These nodes sense the environment with respect to a distance measured in terms of physical distance, noise, external interference, malicious data, etc. Based on this distance, a sensor node decides either to sense directly from the environment or to depend on the other neighbouring nodes for the environmental sensed data. Based on the above factors, the novelty of the work lies in identifying the number of sensing operations, the number of nodes nearer to the environment, the total number of nodes, the dependency of each node, etc., and in conserving energy by restricting each node to either sensing-and-transmitting or receiving-and-transmitting, so as to improve the reliability of the data for energy-efficient data aggregation in wireless sensor networks.
The proposed algorithm is shown as Algorithm 1. Initially the sensor nodes are deployed with a uniform distribution, with each node having equal energy. Nodes are clustered, using the KSOM-NN, into nodes which are nearest to the environment, nodes which have a nearest neighbour, and nodes grouped by location. Within each cluster the power consumption is computed depending on the location-based decision to either sense and transmit, or receive and transmit, and the energy of every node in the network is reduced accordingly in each iteration. The threshold value for the power remaining in a node is compared with its operational energy requirement, normally based on the values given in the manufacturer's specification data sheet. Node death is decided based on the residual power, and in turn the network lifetime is decided based on the number of live nodes and the normal working principles of the WSN, generally based on the accuracy of data monitoring, the application type, etc. The threshold values, referred to as threshold_node and threshold_power in the algorithm, are initialized in our proposed model to decide on these conditions; these threshold parameter values are suitably chosen in the simulation experiments to realize a realistic simulation environment.
Algorithm 1: Proposed algorithm
BEGIN
  Initialize: network_alive = Total_nodes.
  Initialize: threshold_node, threshold_power, nodes_power.
  Random deployment of nodes with equal initial energy.
  while (network_alive >= threshold_node)
  {
    Using the KSOM neural network:
      Cluster the nodes which are near to the environment.
      Cluster the nodes which have a nearest neighbour.
      Cluster the nodes depending on the location.
    For every node i in the cluster
      If (nodes_power >= threshold_power)
      {
        Compute power consumption.
        If (node is near to environment)
          power = power - power(sense + Tx)
        If (node is intermediate node)
          power = power - power(Rx + Tx)
      }
      Else
        network_alive = network_alive - 1   (node i is dead)
  }
END
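A bare-bones Python rendering of the energy bookkeeping in Algorithm 1: the clustering step is abstracted into a per-node role label, the per-event energies follow Table 1, and the threshold values and the 50/50 role split are assumptions.

    # Each node is a dict with its remaining power and a role assigned by the
    # clustering step ("near_env" or "intermediate").
    E_TX, E_RX, E_SENSE = 500, 400, 100        # nJ per event (Table 1)
    THRESHOLD_POWER = 1000                     # nJ, assumed operational minimum
    THRESHOLD_NODE = 0.6                       # network alive while 60% of nodes live (Table 1)

    def run_round(nodes):
        """One iteration: near-environment nodes sense and transmit,
        intermediate nodes receive and transmit; depleted nodes are marked dead."""
        for node in nodes:
            if node["power"] < THRESHOLD_POWER:
                node["alive"] = False
                continue
            if node["role"] == "near_env":
                node["power"] -= E_SENSE + E_TX
            else:                               # intermediate node
                node["power"] -= E_RX + E_TX

    def network_alive(nodes):
        return sum(n["alive"] for n in nodes) >= THRESHOLD_NODE * len(nodes)

    nodes = [{"power": 50000, "alive": True,
              "role": "near_env" if i % 2 == 0 else "intermediate"}
             for i in range(100)]
    rounds = 0
    while network_alive(nodes):
        run_round(nodes)
        rounds += 1
    print("network lifetime (rounds):", rounds)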
VII.

SIMULATION AND RESULTS

Simulation is carried out to evaluate the performance of the proposed model in sensor networks. We consider sensor networks consisting of 100 to 500 nodes with changes in topology. The performance of the algorithm is measured with respect to the number of sensor nodes near the environment and the number of intermediate nodes. Initially, nodes are placed randomly within the square area with a uniform distribution. For our simulation we consider a practical sensor network with variable cluster sizes having one-, two- or three-hop transmissions from the nodes to the sink. The simulation generates 1000 to 5000 random events, each with random distances with respect to the environment and the neighboring nodes. The simulation parameters are chosen as shown in Table 1.


We have simulated the algorithm on a Pentium system


with a Linux OS using C++ programming language. For the
purpose of evaluating and analyzing the performance of the
proposed Distance based data aggregation for energy
efficiency model in Sensor networks, we consider several
key parameters to plot the graphs. The experiment is carried
out for the variations of different parameters considered in
the table 1 to reflect the dynamic environment that depends
on the application type and phenomenon.

Figure 3. Number of clusters formed.

Fig. 3 shows the number of clusters formed with respect to the number of parameters, such as distance from the neighboring node, power availability, memory, etc., in a sensor node. From the figure it can be seen that as the number of parameters of the nodes is increased (distance, energy, number of hops, memory, etc.), the number of clusters formed increases. These results show how the individual parameters affect the formation of clusters and their size. They are essential results and can easily be used in various applications depending on the region of interest: for example, a low-energy cluster would be used for event-based monitoring applications, while a high-energy cluster can be used for continuous monitoring applications, as per the requirements of different applications.

TABLE 1. SIMULATION PARAMETERS
Initial energy in sensor node:        50000 nJ
Area of deployment:                   100 m x 100 m
Node distance from the environment:   1 to 10
Number of simulation events:          1000 to 5000
Network life time:                    60% of nodes are alive
Neighbour distance:                   10 to 20 m
Tx energy:                            500 nJ
Rx energy:                            400 nJ
Sense energy:                         100 nJ
Network size (no. of nodes):          10 to 100
No. of hop distance:                  1, 2, 3
Neural network size:                  4 input, 3 to 10 output neurons

Figure 4. Distribution of the no. of nodes near the environment.

Fig. 4 shows the distribution of the sensor nodes nearer to the environment for a given sensor network. The sensor nodes are deployed with a uniform distribution. They have to be uniformly deployed in order to have a uniform spread of sensor nodes in the given area; otherwise it becomes difficult to sense the parameter of interest from the phenomenon. If the deployment is not uniform, the energy consumption in the nodes will be unevenly distributed and some nodes will die soon; this not only leads to the death of those nodes but also reduces the lifetime of the network. Therefore we may have to go for a proper deployment of the nodes in the network to improve the network life.

Figure 5. Power in the network w.r.t. intermediate node.

Fig. 5 shows the power in the network with respect to the intermediate nodes. It shows that as the number of intermediate nodes increases, the power consumed by the nodes increases, and hence the power consumed by the network also increases. This results in a lower network lifetime and requires a redeployment of the sensor nodes in the network to make it work efficiently.

Figure 6. Remaining power in the network w.r.t nodes near environment.

The graph in Fig. 6 shows the remaining power in the network (y-axis) with respect to the number of nodes near the environment (x-axis). Here we can see that as the number of nodes near the environment increases, the remaining power in the network also increases, and so the network lifetime increases as well. It also shows that the nodes are deployed properly within the area of interest.

Figure 7. Power in the network w.r.t iterations.

Fig. 7 shows the power in the network with respect to the number of iterations for which the network operation is performed in the simulation. It can be seen that as the number of iterations increases, the power consumed by the intermediate nodes is higher, and hence the power remaining in the network decreases as the nodes start operating. The total power remaining in the network with respect to the nodes nearer to the environment is higher than that for the intermediate nodes as the number of iterations increases. These results show that it is best to have nodes nearer to the environment to gather the information and for communication. They are essential results and can easily be used in various applications depending on the critical region of interest: for example, a low-energy cluster would be used for event-based monitoring applications, while a high-energy cluster can be used for continuous monitoring applications, as per the requirements of different applications.
VIII.

CONCLUSION

This work involves the design and analysis of distance-based data aggregation for energy efficiency in Wireless Sensor Networks. We have proposed a methodology for analyzing the deployment of sensor nodes and their distances to the environment for sensing, for receiving the sensed data from other nodes, and for transmitting the sensed data to neighboring nodes based on distance, covering data aggregation, energy dissipation and the reliability of the aggregated data at the sink node. Simulation experiments are carried out for various network dynamics, such as the number of nodes, cluster formation using Self-Organizing Map (SOM) neural networks, uniform distribution of the sensor nodes, distances from the environment and neighbor node distances. The energy dissipated in the different units is considered for calculating the network lifetime and the remaining network energy at the individual node level, with certain thresholds for their proper operation. From the analysis we have explored the deployment and distribution of sensor nodes in a given area and their influence on energy dissipation, data aggregation and network lifetime for reliable data delivery using a WSN.
REFERENCES
[1] I.F. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, Wireless Sensor Networks: A Survey, Computer Networks (38), Elsevier, 2002, pp. 1-20.
[2] W. Choi, P. Shah and S.K. Das,A framework for energy-saving data
gathering using two-phase clustering in wireless sensor networks, in
The First Annual International Conference on Mobile and Ubiquitous
Systems: Networking and Services, 2004. MOBIQUITOUS 2004, pp.
203 212, 2004.
[3] V. Mhatre, C. Rosenberg, D. Kofman, R. Mazumdar and N. Shroff, A
minimum cost heterogeneous sensor network with a lifetime
constraint, IEEE Trans on Mobile Computing, vol. 4, no. 1, pp. 4
15, 2005.
[4] O. Younis and S. Fahmy, Distributed clustering in ad-hoc sensor
networks: A hybrid, energy-efficient approach, in Proceedings of
IEEE INFOCOM, pp. 629-640, 2004.
[5] S. Phoha, T.F. La Porta, and C. Griffin, Sensor Network Operations,
Wiley-IEEE Press, 2006.
[6] KH. Liu, L. Cai, and X. Shen, Exclusive-region based scheduling
algorithms for UWB WPAN, in IEEE Transactions on Wireless
Communications, 2008.
[7] L. Lazos and R. Poovendran, Serloc: Robust localization for wireless
sensor networks, ACM Trans. Sen. Netw., vol. 1, no. 1, pp. 73-100, 2005.
[8] M. Ye, C. Li, G. Chen and J. Wu, EECS: an energy efficient clustering
scheme in wireless sensor networks, in 24th IEEE International
Performance, Computing, and Communications Conference, IPCCC,
pp. 535-540, 2005.
[9] S. Soro and W. Heinzelman, Prolonging the lifetime of wireless sensor
networks via unequal clustering, in Proceedings of 19th IEEE
International Parallel and Distributed Processing Symposium, 2005.
[10] A. Egorova-Forster, and A. Murphy, Exploring Non Uniform Quality
of Service for Extending WSN Lifetime, in Proc. of 5-th Annual
IEEE International Conference on Pervasive Computing and
Communications Workshops, 2007.
[11] Y. Zhuang, J. Pan and G. Wu, Energy-optimal grid-based clustering
in wireless microsensor networks, in the 6-th International
Workshop on Wireless Ad hoc and Sensor Networking (WWASN) of
the 29-th ICDCS, Montreal, Quebec, Canada, June 2009.
[12] Y. Xu, J. Heidemann, and D. Estrin, Geography-informed energy
conservation for ad hoc routing, in Proceedings of ACM MobiCom,
pp. 70-84, 2001.
[13] Q. Gao, K. J. Blow, D. J. Holding, I. Marshall, and X. H. Peng, Radio
Range Adjustment for Energy Efficient Wireless Sensor Networks,


Ad Hoc Networks, pp. 75-82, January 2006. [Online]. Available:


http://www. cs.kent.ac.uk/pubs/2006/2194
[14] R.Vidhyapriya and P. T. Vanathi, Energy efficient grid-based routing
in wireless sensor networks, International Journal of Intelligent
Computing and Cybernetics, vol. 1, no. 2, pp. 301-318, Jan. 2008.
[15] W. Dai, L. Liu, and T.D. Tran, Adaptive block-based image coding
with pre-post-filtering, in Proceedings of Data Compression
Conference, pp. 73-82, March 2005.
[16]Wendi Rabiner Heinzelman, Anantha Chandrakasan, and Hari
Balakrishnan, Energy-Efficient Communication Protocol for
Wireless Microsensor Networks, IEEE. Published in the
Proceedings of the Hawaii International Conference on System
Sciences, January 4-7, 2000, Maui, Hawaii.
[17] M. N. Halgamuge, M. Zukerman, and K. Ramamohanarao, An
Estimation of sensor energy consumption, Progress In
Electromagnetics Research B, Vol. 12, pp. 259-295, 2009.
[18] Simon Haykin, Neural Networks: A Comprehensive Foundation, Macmillan College Publishing, New York, USA, 1995, 2nd edition.


A Hybrid Genetic Algorithm for the Job Shop Scheduling Problem

Sudipta Mondal

Tandra Pal

Dept. of Information Technology


Bengal College of Engineering & Technology
Durgapur, India
e-mail: sudipta.hit@gmail.com

Dept. of Computer Science & Engineering


National Institute of Technology
Durgapur, India
e-mail: tandranit@yahoo.com

Abstract: The Job Shop Scheduling Problem (JSSP) is one of the well-known computationally challenging combinatorial optimization problems; it is not only NP-hard, but also one of the worst members of that class. Genetic Algorithms (GAs) are stochastic search processes based on the mechanics of natural selection and evolution. They are highly parallel and believed to be robust in searching for globally optimal solutions of complex optimization problems. In this paper we propose a Hybrid Genetic Algorithm (HGA) to solve the JSSP. We propose a Local Search based on a Critical Path Heuristic and use it in the genetic loop in order to improve the solutions. We also modify a crossover technique, originally proposed for the Traveling Salesman Problem (TSP), to make it suitable for the JSSP. The approach is tested on a set of standard instances from the literature and compared with several existing approaches. The computational results validate the effectiveness of the proposed algorithm.
Keywords- Local Search; Critical Path Heuristics; Order Based Crossover (OBC); Induced Mutation; Diversity Mutation

I.

INTRODUCTION

A classical Job Shop Scheduling Problem (JSSP) with n


jobs and m machines may be described as follows.
Each of the n jobs is composed of m operations
that must be processed on the m machines.
Operations of each job have given processing
order and processing time.
Each machine can process at most one operation
at a time without any interruption.
The objective is to find a schedule of the operations on
the machines, taking into account the precedence constraints,
which minimizes the makespan. Makespan is the time to
complete all the jobs.
The problem has captured the interest of a significant
number of researchers and there are many literatures on it. In
the very early stage, Giffler and Thompson [1] have
proposed an algorithm (GT algorithm) for solving problems
to minimize the length of production schedules. They have
shown that for small size problem GT algorithm can generate
the complete set of all active schedules, i.e., the set of all
possible schedules and selects the optimal schedules directly
from this set.


Adams et al. [2] have described an approximation
method for solving the minimum makespan problem of Job
Shop Scheduling. It sequences the machines one by one
successively, each time identifying a machine as a bottleneck
among the machines not yet sequenced. Every time after a
new machine is sequenced, all previously established sequences are locally re-optimized. Both the bottleneck
identification and the local re-optimization procedures are
based on repeatedly solving certain one-machine scheduling
problems. Besides this straight version of the Shifting
Bottleneck Procedure (SB I), they have also implemented a
version (SB II) that has applied the procedure to the nodes of
a partial search tree.
A branch and bound method has been proposed by
Carlier and Pinson [3] to solve JSSP. Their method is based
on one-machine problems and considers two conflicting
objectives: optimizing complexity of local algorithms and
minimizing memory space. The former has been satisfied by
information redundancy and the latter by a good choice of
data structures.
Brucker et al. [4] have also developed a fast branch and
bound algorithm for the Job Shop Scheduling Problem.
Among other hard problems, they have solved the 10×10 benchmark problem, which had been open for more than 20 years.
Croce et al. [5] have introduced a look ahead simulation
(LAS) encoding based on preference rules and an updating
step to speed up the evolutionary process.
A greedy randomized adaptive search procedure
(GRASP) for JSSP has been proposed by Binato et al. [6]. It
is a metaheuristic for combinatorial optimization. They have
also incorporated two new concepts to the conventional
GRASP: an intensification strategy and POP (Proximate
Optimality Principle) in the construction phase.
Nakano and Yamada [7] and Yamada [8] have proposed
the use of binary genotype for solving the classical JSSP and
Flow Shop Problems. They have introduced a repairing
technique, as most of the genotypes produced by the one-point or two-point crossover and bit-flip mutation are illegal.
They have shown how a conventional GA [7] can effectively
solve tough combinatorial problem like JSSP. Yamada and
Nakano [9] have used the GT algorithm proposed by Giffler
and Thompson [1], as a basic schedule builder. They have
also proposed multi-step crossover fusion (MSXF) [9] as a


unified operator of a Local Search method and a


recombination operator in genetic Local Search for JSSP.
Beatrice and Mario [10] have implemented a deadlock-free strategy for solving the JSSP. They have also hybridized the
strategy with genetic algorithm and introduced a simple
Local Search technique (LSGA). They have also developed
tabu search and integrated tabu search with genetic algorithm
(Hybrid). They have shown that this approach performs
better than LSGA.
Hasan et al. [11] have implemented the simple genetic
algorithm (SGA) proposed by Nakano and Yamada [7] and
Yamada [8] and proposed two genetic algorithm based
approaches for solving JSSP. The first one is a simple heuristic
to reduce the completion time of jobs on the bottleneck
machines, which they call the reducing bottleneck technique
(RBT). The other one has been proposed to fill any possible
gaps left in the SGA solutions by the tasks that are scheduled
later. They call this process the gap-utilization technique
(GUT). With GUT, they have also applied a swapping
technique that deals only with the bottleneck job.
In this paper, we present a new Hybrid Genetic
Algorithm based on Local Search using Critical Path
Heuristic for the JSSP. The chromosome representation and
fitness function are discussed in Section II. In Section III, we
describe the Local Search process and Critical Path
Heuristic, which we use in our algorithm. Various genetic
operators of the proposed algorithm are described in Section
IV and we present the proposed Hybrid Genetic Algorithm in
Section V. Section VI gives the experimental results and
comparisons with other algorithms. At last we conclude in
Section VII.
II. CHROMOSOME REPRESENTATION AND FITNESS FUNCTION

A genetic algorithm treats each valid solution as an


individual, thus an appropriate encoding/decoding scheme
has to be chosen. We use the operation-based representation
of chromosome [12]. This representation encodes a schedule
as a sequence of operations. Here we describe the encoding
and decoding processes with the help of the JSSP instance,
given in Table I. An encoded chromosome consists of n×m
genes, where each gene denotes an operation of a job and
each of the n jobs appears m times. Each job is scheduled on
a machine according to its machine processing order. Fig. 1
represents the decoding of a chromosome (3, 2, 2, 1, 1, 2, 3,
1, 3), where the first gene 3 denotes operation 1 of job 3 (j31),
which is to be processed by machine 2 (m2) according to its
machine processing order as given in Table I. The second
gene 2 denotes operation 1 of job 2 (j21), which is to be
processed by machine 1 (m1), and so on.
The objective is to find a schedule with minimum
makespan. So, we use the following fitness function, which
is to be maximized.

f(x) = 1/makespan

TABLE I. A 3×3 JSSP INSTANCE

Job    Operations routing (processing time)
1      1(3)   2(3)   3(3)
2      1(2)   3(3)   2(4)
3      2(3)   1(2)   3(1)

Decoding of the chromosome (3, 2, 2, 1, 1, 2, 3, 1, 3):
Gene:       3    2    2    1    1    2    3    1    3
Operation:  j31  j21  j22  j11  j12  j23  j32  j13  j33
Machine:    m2   m1   m3   m1   m2   m2   m1   m3   m3

Figure 1. Example of Decoding.
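To make the decoding step concrete, the following minimal Python sketch (our own illustration, not the authors' C implementation; all names are ours) builds the semi-active schedule implied by an operation-based chromosome and returns the makespan used in the fitness f(x) = 1/makespan. The routing data reproduce Table I.

```python
# Minimal sketch of decoding an operation-based chromosome (illustrative names).
# Each job's routing is a list of (machine, processing_time) pairs, as in Table I.
ROUTING = {
    1: [(1, 3), (2, 3), (3, 3)],
    2: [(1, 2), (3, 3), (2, 4)],
    3: [(2, 3), (1, 2), (3, 1)],
}

def decode(chromosome, routing):
    """Schedule operations in chromosome order; return (makespan, schedule).

    Each gene is a job index; its k-th occurrence denotes operation k of that job.
    An operation starts as soon as both its job and its machine are free.
    """
    next_op = {job: 0 for job in routing}        # next operation index per job
    job_ready = {job: 0 for job in routing}      # completion time of the job's last op
    machine_ready = {}                           # completion time per machine
    schedule = []
    for job in chromosome:
        machine, ptime = routing[job][next_op[job]]
        start = max(job_ready[job], machine_ready.get(machine, 0))
        end = start + ptime
        job_ready[job] = end
        machine_ready[machine] = end
        next_op[job] += 1
        schedule.append((job, next_op[job], machine, start, end))
    return max(job_ready.values()), schedule

if __name__ == "__main__":
    cmax, sched = decode([3, 2, 2, 1, 1, 2, 3, 1, 3], ROUTING)
    print("makespan =", cmax)                    # fitness would be 1 / cmax
    for job, op, machine, start, end in sched:
        print(f"j{job}{op} on m{machine}: [{start}, {end})")
```

Running this sketch on the chromosome of Fig. 1 yields a makespan of 12 for the toy 3×3 instance of Table I.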

III. LOCAL SEARCH AND CRITICAL PATH HEURISTIC

Job Shop Scheduling can also be viewed as defining the


order of all operations that must be processed on the same
machine, i.e., to fix precedence relationships between these
operations. In the disjunctive graph model [9], this is done by
turning all undirected (disjunctive) arcs into directed ones.
The makespan is given by the length of the longest weighted
path from source to sink in this graph. This path is called a
critical path and is composed of a sequence of critical
operations. A sequence of consecutive critical operations on
the same machine is called a critical block [9]. At any instant
of time, operations running on all the machines are called
active operations. Identifying critical path is NP-hard, so we
try to find it by a heuristic procedure. The Critical Path
Heuristic is illustrated in Fig. 3. The path identified by the
proposed procedure always starts from the source of the
disjunctive graph, but it may or may not end at the sink and
may or may not partially or fully match the actual critical
path. Nevertheless, we treat it as the critical path.
Next, we propose a Local Search procedure based on
Critical Path Heuristic. Pseudo code of Local Search is
shown in Fig. 4. In the Local Search we use Critical Path
Heuristic to identify critical path of a schedule and then we
search for a critical block from the starting of critical path.
Consecutive operations in the critical blocks can be swapped
if they are also in consecutive positions in the chromosome,
to check whether there is any improvement in the makespan
or not. As operations in a particular critical block are all
executed by the same machine and if they are also in
consecutive positions in the chromosome, swapping between
them does not change machine processing order of any job.
Critical operations belonging to different critical blocks are
not swapped. Thus precedence constraints are always
satisfied.
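The acceptance logic of this Local Search can be sketched as follows; this is an illustrative Python outline (not the authors' implementation) that assumes the makespan evaluation and the critical-pair detection of Fig. 3 are supplied as callables.

```python
def local_search(chromosome, makespan, critical_pairs):
    """Improve a chromosome by swapping adjacent critical operations.

    chromosome     : list of job indices (operation-based encoding)
    makespan       : callable mapping a chromosome to its makespan
    critical_pairs : callable returning, for a chromosome, the chromosome
                     positions (i, i + 1) of consecutive critical operations
                     that belong to the same critical block
    """
    best = list(chromosome)
    best_cost = makespan(best)
    improved = True
    while improved:
        improved = False
        for i, j in critical_pairs(best):
            if j != i + 1:                       # only adjacent positions are swapped
                continue
            candidate = list(best)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            cost = makespan(candidate)
            if cost < best_cost:                 # accept only improving swaps
                best, best_cost = candidate, cost
                improved = True
                break                            # re-identify the critical path
    return best, best_cost
```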
IV. GENETIC OPERATORS

We use Roulette-wheel selection [13] with elitism for


selection of chromosomes.
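A minimal sketch of roulette-wheel selection with elitism is given below; it assumes positive fitness values (such as 1/makespan) and carries over a single best individual unchanged, which is one common form of elitism — the paper does not spell out the exact elitist scheme, so these details are our own.

```python
import random

def roulette_wheel_with_elitism(population, fitness, rng=random):
    """Select a new population of the same size.

    population : list of chromosomes
    fitness    : callable returning a positive fitness value (e.g. 1 / makespan)
    The single best individual is always carried over unchanged (elitism);
    the remaining slots are filled by fitness-proportionate sampling.
    """
    scores = [fitness(ind) for ind in population]
    total = sum(scores)
    elite = max(range(len(population)), key=lambda i: scores[i])
    new_pop = [list(population[elite])]
    while len(new_pop) < len(population):
        pick = rng.uniform(0, total)
        acc = 0.0
        for ind, score in zip(population, scores):
            acc += score
            if acc >= pick:
                new_pop.append(list(ind))
                break
        else:                       # guard against floating-point round-off
            new_pop.append(list(population[-1]))
    return new_pop
```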


A. Crossover
We have been inspired by the crossover technique proposed
by Sur-Kolay et al. [14] for Traveling Salesman Problem
(TSP), which is based on Order Based Crossover (OBC)
[15]. However, the concept of maintaining the order of the
operations of jobs in JSSP is more important than the order
of cities in TSP. We have modified their idea to make it
suitable for JSSP. The advantage of using this operator is
that it always generates valid schedules (as in TSP) keeping
the machine processing order of all jobs unchanged.
The operator is applied on each pair of parent
chromosomes after decoding the chromosomes, so each job
index with corresponding machine index is considered as a
decoded gene. A randomly chosen crossover point divides
the parent strings in left and right substrings. In the example,
given in Fig. 2, crossover point is considered at position 4.
The left substrings of parents S1 and S2 are copied to the left
substrings of the children C1 and C2 respectively. The
elements of the right substring of S1 are inserted in the right
substring of C1 in the order in which they occur in S2.
Similarly, the right part of C2 is obtained by inserting the
elements of the right substring of S2 in the order in which
they occur in S1.
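A small Python sketch of this crossover is given below (our own illustration, not the authors' code). Because the operator works on decoded genes, each gene is first expanded into a distinct (job, operation-number) pair; on the example of Fig. 2 with the cut at position 4 it reproduces the children shown there.

```python
def decode_ops(chromosome):
    """Turn a job-index chromosome into distinct operations (job, op_no)."""
    seen = {}
    ops = []
    for job in chromosome:
        seen[job] = seen.get(job, 0) + 1
        ops.append((job, seen[job]))
    return ops

def order_based_crossover(p1, p2, cut):
    """One-cut, order-based crossover on decoded (job, op_no) genes.

    The left substring is copied from one parent; the remaining operations
    are appended in the order in which they occur in the other parent, so
    the machine processing order of every job is preserved.
    """
    def child(left_parent, order_parent):
        left = left_parent[:cut]
        remaining = set(left_parent[cut:])
        right = [g for g in order_parent if g in remaining]
        return [job for job, _ in left + right]   # re-encode as job indices
    a, b = decode_ops(p1), decode_ops(p2)
    return child(a, b), child(b, a)

if __name__ == "__main__":
    s1 = [3, 3, 1, 3, 2, 2, 1, 2, 1]
    s2 = [2, 1, 1, 3, 2, 3, 3, 2, 1]
    c1, c2 = order_based_crossover(s1, s2, cut=4)
    print(c1)   # [3, 3, 1, 3, 2, 1, 2, 2, 1]  (matches C1 in Fig. 2)
    print(c2)   # [2, 1, 1, 3, 3, 3, 2, 2, 1]  (matches C2 in Fig. 2)
```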


B. Mutation
We first apply Simple Inverse Mutation (SIM) [16] and
then Swap Mutation [17] on the population generated after
applying SIM.
S1:  Job index      3 3 1 3 | 2 2 1 2 1
     Machine index  2 1 1 3 | 1 3 2 2 3

S2:  Job index      2 1 1 3 | 2 3 3 2 1
     Machine index  1 1 2 2 | 3 1 3 2 3

C1:  Job index      3 3 1 3 | 2 1 2 2 1
     Machine index  2 1 1 3 | 1 2 3 2 3

C2:  Job index      2 1 1 3 | 3 3 2 2 1
     Machine index  1 1 2 2 | 1 3 3 2 3

Figure 2. Example of Crossover.

Both operators are applied in an induced manner, i.e., we
consider only those mutations for which the fitness of the
chromosome improves.

begin
i) to select first operation of critical path, perform
a) find an operation ( ji, mx) with starting time (st) = 0 and end time (et) = maximum end time among all
active operations;
b) search for another operation (jk, my) which has starting time (st) = et among all active operations;
c) add operation ( ji, mx) as first operation of critical path list and update starting time (st) = end time of
operation ( ji, mx);
ii) to select second operation onwards of critical path, do
a) find an operation (jk, my) such that its starting time = st;
[if more than one operation is found with starting time = st, then select the operation with least machine index]

b) add operation ( jk, my) in critical path list and update starting time(st) = end time of operation (jk, my);
c) if step ii)a) cannot find an operation, then no more operations are added to the critical path and the procedure
ends;
end of do;
end;
Figure 3. Pseudo Code of Critical Path Heuristic

begin
for each chromosome in the population, do
i) apply Critical Path Heuristic to identify the critical path and then identify the first pair of consecutive
critical operations belonging to same critical block;
ii) a) swap between identified pair of critical operations in the chromosome if they are also in consecutive
positions in the chromosome;
b) if swapped, a new schedule is achieved; if makespan of the new schedule is less than that of old
schedule then replace the old chromosome by the new one; otherwise, restore the old one;
iii) if no swap in step ii)a) or if the old chromosome is restored in step ii)b), then identify next pair of
consecutive critical operations belonging to same critical block in the critical path and go back to step ii);
if no such pair is found, the Local Search ends;
iv) if the old schedule is replaced by the new schedule in step ii)b), then repeat from step i);
end of do;
end;
Figure 4. Pseudo Code of Local Search


The SIM and Swap Mutation operators are applied on the
encoded chromosome directly, without decoding the machine
indices. Diversity Mutation is then applied.
1) Simple Inverse Mutation: This operator selects
randomly two cut points in the string, and it reverses the
substring between these two cut points. For example,
consider the string (3 3 | 1 3 2 2 | 1 2 1) and first random cut
point between 2nd and 3rd genes and second random cut
point between 6th and 7th genes. Then the resulting string
will be (3 3 | 2 2 3 1 | 1 2 1).
2) Swap Mutation: This operator randomly selects two
genes of a chromosome and simply swaps their position in
the chromosome. For example, consider the string (3 3 1 3 2
2 1 2 1) and randomly selected genes are 2nd gene and 7th
gene. Then the resulting string will be (3 1 1 3 2 2 3 2 1).
3) Diversity Mutation: It randomly selects 20% chromosomes of population size and replaces those selected
chromosomes by randomly generated new chromosomes to
increase diversity without increasing the population size. It
prevents premature convergence and decreases the chance
of getting stuck in local minima. Hence, we name it
Diversity Mutation.
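Minimal sketches of the three mutation operators are shown below (illustrative Python with names of our own choosing). Since every gene is just a job index, reversing or swapping genes keeps each job appearing exactly m times, so the chromosomes remain valid; the "induced" acceptance test (keep a mutant only if its fitness improves) is assumed to be applied outside these helpers.

```python
import random

def simple_inverse_mutation(chrom, rng=random):
    """Reverse the substring between two random cut points (SIM)."""
    i, j = sorted(rng.sample(range(len(chrom) + 1), 2))
    return chrom[:i] + chrom[i:j][::-1] + chrom[j:]

def swap_mutation(chrom, rng=random):
    """Swap two randomly chosen genes."""
    i, j = rng.sample(range(len(chrom)), 2)
    out = list(chrom)
    out[i], out[j] = out[j], out[i]
    return out

def diversity_mutation(population, random_chromosome, rng=random, fraction=0.2):
    """Replace a fraction of the population with fresh random chromosomes."""
    out = [list(ind) for ind in population]
    for idx in rng.sample(range(len(out)), max(1, int(fraction * len(out)))):
        out[idx] = random_chromosome()
    return out
```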
V. PROPOSED HYBRID GENETIC ALGORITHM

In order to improve the solution, we use the proposed


Local Search procedure within genetic loop along with
conventional genetic operators. The pseudo code of the
proposed Hybrid Genetic Algorithm is given in Fig. 5.

begin
i) create initial population randomly;
ii) generation-count = 1;
iii) while generation-count <= k do
/* k = maximum number of generations */
a) Roulette-wheel Selection with Elitism;
b) Local Search using Critical Path Heuristic;
c) Crossover: produce children from the selected individuals;
d) Induced Mutation: Simple Inverse Mutation and Swap Mutation;
e) Diversity Mutation;
f) increment generation-count;
end of do;
iv) output the best individual found;
end;
Figure 5. Pseudo Code of the Proposed Hybrid Genetic Algorithm

VI. EXPERIMENTAL RESULTS AND COMPARISONS

To test the performance of our proposed algorithm, we solve 35 benchmark problems designed by Lawrence [18] and compare with 9 existing algorithms. These include the SGA proposed by Nakano and Yamada [7] and Yamada [8], the RBT and GUT algorithms proposed by Hasan et al. [11], the GRASP algorithm proposed by Binato et al. [6], the LSGA and Hybrid algorithms proposed by Beatrice and Mario [10], the LAS algorithm proposed by Croce et al. [5], and the SB I and SB II algorithms proposed by Adams et al. [2]. The results of SGA [7] for the test problems are taken from [11]. Table II summarizes the results. It lists the problem name, problem dimension, the best known solution, the best result obtained by our proposed HGA, the iteration number at which the best result was obtained, and the best results obtained by the other algorithms.
We have implemented the HGA in C and executed it on a computer with an Intel(R) Core(TM) 2 Duo CPU T6400 @ 2.00 GHz running the MS Windows XP operating system. All the results are based on 30 individual runs with different random seeds. Population size is 20 for la01 to la02, 40 for la03 to la20 and 50 for la21 to la35. Crossover and mutation probability is 1.
We obtain optimal solutions for 23 problems out of 35. It is evident that our HGA outperforms all other algorithms in achieving optimal solutions, except the GUT of Hasan et al. [11], which has also obtained optimal solutions for 23 problems. We have obtained optimal solutions for la17 and la18, whereas GUT has not. Moreover, our HGA gives better results than GUT for four other problems: la21, la22, la24 and la28. However, GUT has obtained the optimal solution for la16 and la30, which we have not found. GUT also gives better results than our HGA for four other problems: la25, la26, la27 and la29. So there is close competition between our HGA and GUT.


TABLE II. COMPARISON OF THE BEST SOLUTIONS OBTAINED WITH LITERATURE
(Columns: Instances; Problem Size; Best Known Solution; HGA; Iteration No.; SGA; RBT; GUT; GRASP; LSGA; Hybrid; LAS; SB I; SB II)

la01
la02
la03
la04
la05
la06
la07
la08
la09
la10

10 5
10 5
10 5
10 5
10 5
15 5
15 5
15 5
15 5
15 5

666
655
597
590
593
926
890
863
951
958

666
655
597
590
593
926
890
863
951
958

19
196
1428
179
4
9
68
44
24
5

666
655
605
607
593
926
890
863
951
958

667
655
617
607
593
926
890
863
951
958

666
655
597
590
593
926
890
863
951
958

666
655
604
590
593
926
890
863
951
958

666
655
597
590
593
926
890
863
951
958

666
666
666
926
-

666
720
623
597
593
926
890
868
951
959

669
605
593
863
-

la11
la12
la13
la14
la15
la16
la17
la18
la19
la20
la21
la22
la23
la24
la25

20 5
20 5
20 5
20 5
20 5
10 10
10 10
10 10
10 10
10 10
15 10
15 10
15 10
15 10
15 10

1222
1039
1150
1292
1207
945
784
848
842
902
1046
927
1032
935
977

1222
1039
1150
1292
1207
947
784
848
850
907
1079
946
1032
969
1012

26
24
18
5
425
612
441
723
583
205
2288
2568
948
7619
1623

1222
1039
1150
1292
1207
982
793
861
885
915
1114
993
1037
1022
1084

1222
1039
1150
1292
1207
994
785
861
881
915
1115
1021
1032
978
1066

1222
1039
1150
1292
1207
945
787
861
850
907
1091
970
1032
986
991

1222
1039
1150
1292
1207
946
784
848
842
907
1091
960
1032
978
1028

1222
1039
1150
1292
1207
1114
989
1035
1032
1047

959
792
857
860
907
1097
980
1032
1001
1031

1222
979
1097
-

1222
1039
1150
1292
1207
1021
796
891
875
924
1172
1040
1061
1000
1048

978
787
859
860
914
1084
944
1032
976
1017

la26
20 10
1218
la27
20 10
1235
la28
20 10
1216
la29
20 10
1157
la30
20 10
1355
la31
30 10
1784
la32
30 10
1850
la33
30 10
1719
la34
30 10
1721
la35
30 10
1888
No. of Optimal solutions
obtained (Total no. of
problems considered)

1235
1295
1248
1250
1362
1784
1850
1719
1721
1888

6679
6545
3793
8501
1661
1006
1365
1524
3577
2621

1305
1343
1316
1294
1469
1784
1850
1719
1744
1907

1301
1328
1311
1296
1454
1784
1855
1719
1747
1903

1226
1292
1260
1238
1355
1784
1850
1719
1721
1888

1271
1320
1293
1293
1368
1784
1850
1719
1753
1888

1307
1350
1312
1311
1451
1784
1850
1745
1784
1958

1295
1306
1302
1280
1406
1784
1850
1719
1758
1888

1231
1784
-

1304
1325
1256
1294
1403
1784
1850
1719
1721
1888

1224
1291
1250
1239
1355
-

No. of optimal solutions obtained (total no. of problems considered): HGA 23 (35); SGA 16 (35); RBT 15 (35); GUT 23 (35); GRASP 22 (35); LSGA 17 (30); Hybrid 5 (20); LAS 4 (9); SB I 16 (35); SB II 3 (19).

To show the superiority of our algorithm, we calculate the
Average Relative Deviation and the Standard Deviation of
Relative Deviations in Table III. The authors did not all
consider the same number of problems, so the Average
Relative Deviation and the Standard Deviation of Relative
Deviations are calculated over the number of problems each
of them considered. The results in Table III confirm the
superiority of our algorithm over the other algorithms,
including GUT. The proposed HGA gives an Average
Relative Deviation and a Standard Deviation of Relative
Deviations of 0.009023 and 0.018014 respectively, whereas
these are 0.010040 and 0.019105 for GUT.

VII. CONCLUSIONS
This paper presents a Hybrid Genetic Algorithm for JSSP
using Local Search based on a Critical Path Heuristic. The
approach is tested on a set of 35 standard instances from the
literature and compared with 9 other approaches. The
computational results show that the proposed algorithm
produces either optimal or near-optimal solutions for all
instances, with the best Average Relative Deviation and
Standard Deviation of Relative Deviations compared to the
other approaches. No existing algorithm guarantees optimal
solutions for all instances. In future, JSSP can be solved by
introducing constraints such as machine breakdown,
dynamic job arrival, machine addition and removal, due date
restrictions, etc.


TABLE III. COMPARISON OF THE AVERAGE AND STANDARD DEVIATION OF THE RELATIVE DEVIATIONS

Algorithms   No. of Problems Considered   Average of Relative Deviations   Standard Deviation of Relative Deviations
HGA          35                           0.009023                         0.018014
SGA          35                           0.028125                         0.037925
RBT          35                           0.026800                         0.035823
GUT          35                           0.010040                         0.019105
GRASP        35                           0.014758                         0.027332
LSGA         30                           0.029861                         0.039164
Hybrid       20                           0.032561                         0.031529
LAS          9                            0.025309                         0.038179
SB I         35                           0.031839                         0.040136
SB II        19                           0.021833                         0.019411


REFERENCES
[1] B. Giffler and G. L. Thompson, "Algorithms for solving production-scheduling problems," Operations Research, vol. 8, pp. 487-503, 1960.
[2] J. Adams, E. Balas and D. Zawack, "The shifting bottleneck procedure for the job shop scheduling," INFORMS, vol. 34, pp. 391-401, 1988.
[3] J. Carlier and E. Pinson, "An algorithm for solving the job-shop problem," Management Science, vol. 35, pp. 164-176, Feb. 1989.
[4] P. Brucker, B. Jurisch and B. Sievers, "A branch and bound algorithm for the job-shop scheduling problem," Discrete Applied Mathematics, vol. 49, issues 1-3, pp. 107-127, 1994.
[5] F. D. Croce, R. Tadei and G. Volta, "A genetic algorithm for the job shop problem," Computers and Operations Research, vol. 22, pp. 15-24, 1995.
[6] S. Binato, W. Hery, D. Loewenstern and M. Resende, "A GRASP for job shop scheduling," Kluwer Academic Publishers, 2000.
[7] R. Nakano and T. Yamada, "Conventional genetic algorithm for job-shop problems," in Proceedings of the 4th International Conference on Genetic Algorithms and their Applications, San Diego, USA, pp. 474-479, 1991.
[8] T. Yamada, "Studies on metaheuristics for job shop and flow shop scheduling problems," Department of Applied Mathematics and Physics, Doctor of Informatics, Kyoto University, Kyoto, Japan, p. 120, 2003.
[9] T. Yamada and R. Nakano, "Genetic algorithms for job-shop scheduling problems," in Proceedings of Modern Heuristics for Decision Support, UNICOM seminar, London, pp. 67-81, March 1997.
[10] M. O. Beatrice and V. Mario, "Local search genetic algorithms for the job shop scheduling problem," Kluwer Academic Publishers, vol. 21, pp. 99-109, 2004.
[11] S. M. K. Hasan, R. Sarker and D. Cornforth, "Modified genetic algorithm for job-shop scheduling: a gap-utilization technique," in IEEE Congress on Evolutionary Computation, Singapore, pp. 3804-3811, 2007.
[12] Y. Tsujimura, M. Gen and R. Cheng, "Improved genetic algorithms for solving job-shop scheduling problem," Engineering Design and Automation, vol. 3, no. 2, pp. 133-144, 1997.
[13] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Canada, 1989.
[14] S. Sur-Kolay, S. Banerjee and C. A. Murthy, "Flavours of traveling salesman problem in VLSI design," in 1st Indian International Conference on Artificial Intelligence, 2003.
[15] G. Syswerda, "Schedule optimization using genetic algorithms," in Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, pp. 332-349, 1991.
[16] P. Larranaga, C. Kuijpers, R. Murga, I. Inza and S. Dizdarevic, "Genetic algorithms for the traveling salesman problem: a review of representations and operators," Artificial Intelligence Review, vol. 13, pp. 129-170, 1999.
[17] W. Pullan, "Adapting the genetic algorithm to the traveling salesman problem," in IEEE, 2003.
[18] S. Lawrence, "Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques," Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, Pennsylvania, 1984.


An Intelligent Location Prediction System for Banking Loans Using Bayes


Approach
S.Rajaprakash,
Department of Computer Science Engineering
Aarupadai Veedu Institute of Technology
Vinayaka Mission University
Chennai, Tamil Nadu,India
Abstract - The banking Industry in India is undergoing a
major transformation due to changes in economic
conditions and continuous deregulation. This continuous
deregulation has made the Banking market extremely
competitive with greater autonomy, operational flexibility
and decontrolled interest rates in the loan system. The new
generation private/foreign banks are planning to invest in
expanding their business into loan promotion. To achieve
that, the banks need a solution which can obtain a micro-level
detailed survey of the prospective loan location in any part of
the state, which is considered a challenging task. In order to
overcome this problem, an attempt is made to identify the
prospective customers in the target location with a Belief
Network.

I. INTRODUCTION

This paper An Intelligent Loan Prediction System


with Bayes Approach deals with the measures to obtain
details on the feasibility of availing loans, as well as the
micro-level details of a particular location's loan prospects,
which help the bank make its decision on starting new
branches in that location. The micro-level details about
people's income are collected and the information is
classified according to the business type, source of income,
etc. The data specification for this paper has been collected
from the leading multinational bank ICICI Bank Ltd. It has
many branches at the district level and wishes to expand its
business. This particular bank was chosen because it is a
pioneer in processing loans quickly and smoothly. The
relevant data are collected with the help of a well framed
questionnaire, by recording the individual views of the
people in the targeted location, analyzing demands and
understanding people's needs. All such data are categorized
as nodes, and these nodes are considered as random variables
with the help of a Belief network. The Belief network
achieves this through its joint probability distribution over a
Directed Acyclic Graph (DAG). The relevant data are
statistically grouped into a network of parent and child
combinations with the belief network, and the final result is
given as the ratio of prospective loan availing chances in a
particular targeted location.
Proc. of the Intl. Conf. on Computer Applications
Volume 1. Copyright 2012 Techno Forum Group,
India.

R. Ponnusamy
Madha Engineering College
Affiliated to Anna University
Chennai, Tamil Nadu, India
ACM #: dber.imera.10. 73277
II. BAYES INFERENCE
Bayesian network models
support efficient
reasoning under uncertainty in a given domain.
Reasoning under uncertainty is the task of computing
our updated belief in events based on given
observations on other events .
2.1 Conditional probability
The basic concept in the Bayesian treatment of
uncertainty is that of conditional probability: the conditional
probability of event a given event b is written P(a/b) = x,
i.e., if event b is true and everything else known is irrelevant
for event a, then the probability of event a is x.
The following axioms give the basis for Bayesian
probability:
1. For any event a, 0 <= P(a) <= 1, and P(a) = 1 if
and only if a occurs with certainty.
2. For mutually exclusive events a and b, the
probability that either a or b occurs is
P(a or b) = P(a) + P(b).
3. Fundamental rule of probability calculus: for any
two events a and b, the probability of both a and b is
P(a,b) = P(a/b)P(b) = P(b/a)P(a). P(a,b) is called
the joint probability of the events a and b.
2.2 Bayes Rule
Generalizing the above rule 3 to random variables X
and Y we get the fundamental rule of probability
P(X,Y) = P(X/Y)P(Y) = P(Y/X)P(X)    (1)
Dividing both sides by P(X) gives Bayes' rule:
P(Y/X) = P(X/Y)P(Y) / P(X)    (2)
P(Y/X) = P(X/Y)P(Y) / Σ_Y P(X/Y)P(Y)    (3)
That is, the denominator in (2) can be derived by summing
the numerator over all states of Y, as in (3); furthermore, the
denominator is the same for all states of Y.
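As a small illustration of (2) and (3) — our own sketch, not part of the original system — the posterior over the states of Y can be obtained by normalizing the products P(X/Y)P(Y); the numbers in the example are purely hypothetical.

```python
def posterior(likelihood, prior):
    """Bayes' rule: P(y|x) = P(x|y)P(y) / sum_y' P(x|y')P(y').

    likelihood : dict mapping each state y -> P(x | y) for the observed x
    prior      : dict mapping each state y -> P(y)
    """
    joint = {y: likelihood[y] * prior[y] for y in prior}   # numerators
    z = sum(joint.values())                                # common denominator P(x)
    return {y: p / z for y, p in joint.items()}

# Hypothetical example: two states of Y.
print(posterior({"loan": 0.7, "no_loan": 0.2}, {"loan": 0.3, "no_loan": 0.7}))
```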
2.3 Chain Rule
For a probability distribution P(X) over a set of
variables X = {x1, ..., xn}, we can use the fundamental rule
repeatedly to decompose it into a product of conditional
probabilities:



P(X) = P(x1/x2, ..., xn) P(x2, ..., xn)
     = P(x1/x2, ..., xn) P(x2/x3, ..., xn) ... P(xn-1/xn) P(xn)
     = Π_{i=1..n} P(xi/xi+1, ..., xn)

2.4 Reasoning under Uncertainty
A probabilistic interaction model between a set of random variables may be represented as a joint probability distribution. Considering the case where the random variables are discrete, it is obvious that the size of the joint probability distribution will grow exponentially with the number of variables, as the joint distribution must contain one probability for each configuration of the random variables. Therefore, we need a more compact representation for reasoning about the state of large and complex systems involving a large number of variables. To facilitate an efficient representation of a large and complex domain with many random variables, the framework of Bayesian networks uses a graphical representation to encode dependence and independence relations among the random variables. The dependence and independence relations induce a compact representation of the joint probability distribution. By representing the dependence and independence relations of a domain explicitly in a graph, a compact representation of these relations is obtained.

2.5 Construction of the Belief Network
- Choose the set of relevant variables that describe the domain
- Choose an ordering for the variables
- Choose a variable x and add a node for it
- Set its parents to some minimal set of existing nodes such that the conditional independence property is satisfied
- Define the conditional probability table for the network
- This network should be a DAG.

III. CURRENT SYSTEM DESIGN
In this paper the loan prediction has been made with a Belief network by considering nodes as random variables. The variables are considered to be random. They are chosen with the feasibility of location prediction, which gives a higher chance of making the location prediction process a successful one. The variables are connected in the form of a DAG, with each node having a conditional probability table (CPT) that quantifies the effects that the parents have on the node. The parents of a node X are all those nodes that have arrows pointing to X. The same process described above is modified and adapted as described below.
In this paper the attributes are classified into two major categories:
1. Major nodes
2. Vital node
The Major nodes: Agriculture, Business, Salaried and Others are considered as major attributes (hence termed major nodes), which helps to form the relevant parent and child nodes with the help of the Belief network.
The Vital node: The L node is considered as the vital node because it is used to find the location prediction under the uncertainty involved in the loan prediction process with the belief network. The loan prediction belief network explained above for the targeted location is illustrated in Figure 1.

Figure 1

3.1 Data Source
The data specification for this paper was collected from the multinational bank ICICI Bank Ltd, Kanchipuram, Tamil Nadu. The data were actually collected by the bank in order to expand and promote their business into the loan sector. From the data collected with the help of the bank, the data are grouped into two main categories: loan availing and loan non-availing.

3.2 Data Process
The process adopted to collect the data is:
- Targeted location survey
- Questionnaire to people
- Recording individual views
- Analyzing people's needs
- Organizing and recording group interviews of individual business, agriculture, salaried and other types of people involved
- Finding the interest level of people in availing the loan facility

3.3 Data Analysis
The data gathered in the above manner are subjected to research with the help of this paper, "An Intelligent Loan Prediction with Bayes Approach", that is, the belief network.

3.4 Research Methodology
The real-time data are further analyzed and statistically grouped with their relevant matching criteria and formed into a network called the belief network.


3.5 Formation of Belief network


The Area of business or source of Income is
classified as Agriculture, Business, Salaried, Others.
The ultimate reason for above collection is to find the
loan availing ratio which is also considered in this
paper.
The Belief network is formed by assuming the
following factors as Nodes
Node Representation
A: Agriculture
B: Business
S: Salaried
O: Others
A1: Professional
A2: Others
A3: Professional
A4: Others
A5: Seasonal crop
A6: Non-seasonal crop
A7: Small scale
A8: Large scale
A9: Government
A10: Private
A11: NGO (Non-Governmental Organization)
A12: Charitable Trust
L: Loan
Each node is first classified with its relevant matching
criteria, the conditional probability table is formed, and the
value is calculated.
Here P(A1) = 0.6 (availing a loan as a professional) and
P(A2) = 0.5 (availing a loan in the other category).

Figure 2
Table 1. Conditional Probability Table for P(A9/A1, A2)

A1    A2    P(A9/A1, A2)
T     T     0.7
T     F     0.5
F     T     0.4
F     F     0.002

Here the calculated value of P(A9) is 0.4404 from the collected data. Similarly, we can find the following values:
P(A1) = 0.6, P(A2) = 0.5, P(A3) = 0.5, P(A4) = 0.35, P(A5) = 0.7, P(A6) = 0.8, P(A7) = 0.7, P(A8) = 0.8,
P(A9) = 0.44, P(A10) = 0.343, P(A11) = 0.4, P(A12) = 0.45,
P(A) = 0.5592, P(B) = 0.448, P(S) = 0.180, P(O) = 0.277,
P(L) = 0.312.
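As a check, marginalizing the CPT of Table 1 over its parents A1 and A2 (treated as independent, as the network structure implies) reproduces the value P(A9) = 0.4404 quoted above. The following short Python sketch is our own illustration, not the authors' implementation.

```python
# Marginalizing P(A9 | A1, A2) over independent parents A1 and A2
# (values taken from Table 1 and the text above).
P_A1, P_A2 = 0.6, 0.5
CPT_A9 = {                    # P(A9 = true | A1, A2)
    (True, True): 0.7,
    (True, False): 0.5,
    (False, True): 0.4,
    (False, False): 0.002,
}

p_a9 = sum(
    CPT_A9[(a1, a2)]
    * (P_A1 if a1 else 1 - P_A1)
    * (P_A2 if a2 else 1 - P_A2)
    for a1 in (True, False)
    for a2 in (True, False)
)
print(round(p_a9, 4))          # 0.4404, the value reported in the paper
```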

IV. SIMULATION RESULT


The vital node L is derived with the value 31% with the
help of the belief network. This percentage decides on the
expansion of the bank. The loan availing chance of the target
location in this paper is P(L) = 0.31, which means that this
location has lower prospects and chances of loan availing;
hence the bank need not venture into business in this area. If
the percentage is less than 50%, then it is not advisable for
the bank to venture into new business in this area.
V. CONCLUSION
In this paper, an attempt is made to help the bank decide
on venturing into and expanding its new business in a
targeted location with the help of the statistical information
provided by the belief network. The information given in
this paper is only a simple sample; when the belief network
is fully expanded and supported with a huge mass of data,
the predicted result will be more accurate. A future extension
of this work will be to find closely matching data criteria and
to predict not only the interest level in availing a loan but
also the repaying capacity of an individual. Beyond the
targeted location, the results can be used to find the targeted
zone as well as its success ratio. This work considers not
only the macro view but also the micro view of a single
person's loan availing chances, which helps to obtain better
results. The belief network can be further tested with
complex data to create a larger vision and to bring out results
in any sector of business for finding the success ratio of
targeted projects.




Face Recognition using Fusion of Thermal and Visual Images


S. Vijay Ananth
Department of Telecommunication Engineering,
SRM University,
Chennai, India.

Abstract-Human face recognition has a very large field of


applications such as security systems, human computer
interface. Security systems using visual image are widely
implemented. Identification using thermal images is used for
medical purposes. But, both thermal and visual image
recognition systems have their individual flaws hence not
completely efficient. This paper is an approach towards the
goal of developing better facial recognition system for the
above mentioned applications. In this system, the input visual
and thermal image of the person is taken and enhanced using
histogram compensation and image resizing to increase
precision. Then the fusion of both the images is done to
complement the deficiencies in one of the images by the
corresponding features in the other image. Spatial Domain
Method is used for fusion. In this we take weighted sum of the
visual and thermal image. The fused image is taken and
compared with the previously stored image in the database of
that particular person. The output is generated by using the
PCA comparison algorithm for verification from the database
in case the input image does not match. The proposed feature
selection procedure helps to overcome the bottlenecks of
illumination variations, partial occlusions, expression
variations and variations due to temperature changes that
affect the visual and thermal face recognition techniques by
combining the advantageous features of both techniques, thus
improving the recognition accuracy.
Keywords-Spatial Domain Method; Histogram Compensation;
Principle Component Analysis
I. INTRODUCTION

Human face recognition is a challenging task and its


domain of applications is very vast, covering different areas
like security systems, defense applications, and intelligent
machines. It involves different image processing issues like
face detection, recognition and feature extraction. Face
recognition using visual image is gaining acceptance as a
superior biometric. Images taken from visual band are
formed due to reflectance. Therefore they depend on the
external light source, which sometimes might be absent e.g.
night time or when there are heavy clouds. Imagery is also
difficult because it depends on the intensity of light and the
angle of incidence of the light.
Proc. of the Intl. Conf. on Computer Applications Volume
1. Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2-::doi:10.73298/ISBN_0768
ACM #: dber.imera.10. 73298

www.asdf.org.in

N. Thangadurai
Department of Telecommunication Engineering,
SRM University,
Chennai, India.

Face recognition is one of the most accurate method to


implement in security systems. Traditionally, either the
visual image (produced by reflection of light rays by a body)
or the thermal image (produced by emission of heat radiation
by a body) is used in face recognition systems. But both of
them have their flaws. A visual image can be affected by
illumination variation, while the thermal image remains
unaffected by illumination or angle of incidence. An image
from one source (ie. thermal) may lack some information
that might be available in images from the other source (ie.
visual). Data fusion of thermal and visual images is a
solution to overcome the drawbacks in individual visual and
thermal images. Out of the few algorithms available for
image fusion, Spatial Domain method gives 91% accuracy.
In this algorithm the weighted sum of the visual and thermal
images is used.
The fused image is taken up and now compared with
the previously stored image in the database of that particular
person. The output is generated by the comparison
algorithm named Principle Component Analysis (PCA) in
terms of whether the facial images belong to the same
person or not. An essential feature of PCA is that in case the
input image does not match to any of the images in the
database, it gives the nearest matching image from the
database. In PCA instead of matching the images pixel by
pixel, we compare only the weights of selective features of
the image, called the eigenvalues.
II. PROPOSED METHOD
A. Description:
At input, we provide two images: Visual image: Visual
image is formed due to reflection of the incident radiation.
Thermal image: Thermal image is formed due to emission
of IR radiation from any body above 0 K.
Pre-processing: Involves conversion of both the images
into grayscale. Deformities such as illumination variation
are corrected using histograms. Interpolation technique is
applied to restore missing pixels.
Fusion: Fusion image is formed using spatial domain
method. Temporary storage: The fused image is temporarily
stored until forwarded for further processing. Database: It
contains the visual and thermal images of many people
which act as a reference for comparison with the input
image.


Similarity calculation: The reference images from the


database are fused and compared with the input image using
PCA algorithm applied on eigenfaces. Output: Output is
provided based on image match or mismatch calculated by
the differential value off the input image with each of the
database images.

[Figure 1 blocks: Visual Image and Thermal Image → Pre-processing → Fusion → Temporary Storage → Similarity Calculation (against the Database) → Output]
Figure 1. Block Diagram of the proposed method

III. IMAGE FUSION
Image fusion is the process of combining relevant information from two or more images of a scene into a single, highly informative image. Relevant information depends on the application under consideration. If we combine the features of both visual and thermal images, then an improved image suitable for efficient and accurate face recognition can be obtained.
Image fusion methods can be broadly classified into two categories: spatial domain fusion and transform domain fusion. Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under the spatial domain approaches. Another important spatial domain fusion method is the high-pass filtering based technique, in which the high frequency details are injected into an upsampled version of the images.
In recent years, a great deal of effort has been put into multisensor fusion and analysis. Available fusion techniques may be classified into three abstraction levels: pixel, feature, and semantic levels. At the pixel level, images acquired in different channels are combined by considering individual pixel values or small arbitrary regions of pixels in order to make the fusion decision.
At the feature level, images are initially subjected to feature-driven segmentation in order to produce a set of regions with various properties that are used to determine which features from which image are to be included in the fused image. Semantic-driven methods transform all types of input data into a common variable space, where the data are fused. The motivation for the wavelet-based methods emerges from the observation that the human visual system is primarily sensitive to local contrast changes, for example edges and corners. However, observation systems are characterized by diversity in both the location and the size of the target of interest; therefore, rigid decomposition, which is characteristic of multiresolution fusion methods, turns out to be less suitable for long-range observation tasks. Both feature and semantic level fusion result in inaccurate and incomplete transfer of information. Among the pixel level fusion algorithms available, we use Spatial Domain Fusion due to its simpler implementation and higher accuracy.

A. Spatial domain data fusion

Data fusion techniques provide an advantage of


combining information from both the sources (ie. visual &
thermal) to produce more informative image. Spatial
domain fusion method is performed directly on the source
images. Weighted average is the simplest spatial domain
method, which neednt any transformation or decomposition
on the original images unlike other methods such as
Wavelet Decomposition Method of image fusion. The merit
of this method is simple and fit for real-time processing, but
simple addition will reduce the signal-to-noise of the result
image. Improved method is to compute the degree of focus
for each pixel use various focus measures in multifocus
images. The visual and thermal image is combined by using
the equation:
F(x,y) = Fw*T(x,y) + (1 - Fw)*V(x,y)
where F(x,y) is the fused image, T(x,y) the thermal image,
V(x,y) the visual image, and Fw the weight of fusion, which
takes values between 0 and 1.
In the figures shown below, we see that a lower
weight gives a darker image while a higher weight gives a
lighter image. For implantation, we use weight of fusion 0.5
to get optimum results. The Spatial domain fusion method
provides an accuracy of 91%. It is essential to convert the
color images into grayscale prior to fusion.
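A minimal sketch of this weighted spatial-domain fusion is given below. It is our own NumPy illustration, assuming the thermal and visual images are already registered, grayscale and of equal size, and using the weighted-average form with (1 − Fw) as written above.

```python
import numpy as np

def spatial_domain_fusion(thermal, visual, fw=0.5):
    """Weighted-sum fusion F = fw*T + (1 - fw)*V on grayscale images.

    thermal, visual : 2-D numpy arrays of identical shape (grayscale,
                      already registered and resized to the same size)
    fw              : weight of fusion in [0, 1]; 0.5 is used in the paper
    """
    t = thermal.astype(np.float64)
    v = visual.astype(np.float64)
    fused = fw * t + (1.0 - fw) * v
    return np.clip(fused, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in thermal image
    v = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in visual image
    print(spatial_domain_fusion(t, v).shape)
```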


IV. PRINCIPLE COMPONENT ANALYSIS (PCA)


Traditionally, Phase Congruency Method is used for
image comparison but this mechanism suffers certain
disadvantages such as: Calculating the phase congruency
map of an image is very computationally intensive, sensitive
to image noise; techniques of noise reduction are usually
applied prior to the calculation.
Another feature from which the presented system
suffers is the decision based fusion technique adopted. This
approach has the particular threshold value as the basis of
making the decision of whether to carry out fusion of the
particular grid or not to enhance the face recognition
accuracy of our system. The disadvantage is that if the
threshold value is not fixed upto the optimum level, then the
fusion would not help us achieve the desired objective. So,
in our proposed system, instead of decision based fusion,
the whole image is fused. It also has almost the same space
and time complexity.
Hence we use the Principal Component Analysis (PCA)
method to compute whether the input and the database
image match. The PCA algorithm provides the following
benefits: low memory demands, low computational
complexity, better recognition accuracy and less execution
time.
The Principal Component Analysis (PCA) is one of
the most successful techniques that have been used in image
recognition and compression. PCA is a statistical method
under the broad title of factor analysis. The purpose of PCA
is to reduce the large dimensionality of the data space
(observed variables) to the smaller intrinsic dimensionality
of feature space (independent variables), which are needed
to describe the data economically. This is the case when
there is a strong correlation between observed variables.
[Figure 2 flow: images from the database are mean-centred (S = Data - Mn), the eigen decomposition [V, D] = eig(S) is computed, and the resulting eigen-weight difference is compared: a lower value indicates a match, a higher value a mismatch.]
Figure 2: PCA computation

PCA computes the basis of a space which is


represented by its training vectors. These basis vectors,
actually eigenvectors, computed by PCA are in the direction
of the largest variance of the training vectors. As it has been
said earlier, we call them eigenfaces. Each eigenface can be
viewed a feature. When a particular face is projected onto
the face space, its vector into the face space describes the
importance of each of those features in the face. The face is
expressed in the face space by its eigenface coefficients (or
weights). We can handle a large input vector, facial image,
only by taking its small weight vector in the face space. This
means that we can reconstruct the original face with some
error, since the dimensionality of the image space is much
larger than that of face space.
Each face in the training set is transformed into the
face space and its components are stored in memory. The
face space has to be populated with these known faces. An
input face is given to the system, and then it is projected
onto the face space. The system computes its distance from
all the stored faces.
The main idea of using PCA for face recognition is
to express the large 1-D vector of pixels constructed from 2D facial image into the compact principal components of the
feature space. This can be called eigenspace projection.
Instead of comparing the images pixel by pixel which is a
tedious task, we compare only selective features such as
eyes, nose, lips, distance between the eyes etc. For each
input image we calculate the eigen values (weights) for
these selective features. These eigen values when compared
with the threshold values give a differential value. If this
value is high eg. 3 or 4, it implies that there is high
deviation from the threshold value hence the features are not
matched. So it can be said that the input image and the
database image do not match. If the differential value is low
like 0 or 1, it implies that there is very little or no deviation
from the threshold value, hence the weights of the selected
features match. Hence, it can be said that the input image and
the database image match.
The task of facial recognition is discriminating
input signals (image data) into several classes (persons). The
input signals are highly noisy (e.g. the noise is caused by
differing lighting conditions, pose etc.), yet the input images
are not completely random and in spite of their differences
there are patterns which occur in any input signal. Such
patterns, which can be observed in all signals, could be - in
the domain of facial recognition - the presence of some
objects (eyes, nose, mouth) in any face as well as relative
distances between these objects. These characteristic


features are called eigenfaces in the facial recognition
domain (or principal components generally). They can be
extracted out of the original image data by means of a
mathematical tool called Principal Component Analysis
(PCA).
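A compact NumPy sketch of the eigenface extraction and matching described here is given below. It is our own illustration, not the authors' code; it uses the small-matrix W^T W trick detailed in the next section and keeps only the components with non-negligible eigenvalues.

```python
import numpy as np

def eigenfaces(images, num_components=None):
    """Eigenfaces via the small M x M matrix W^T W instead of the huge N x N covariance.

    images : array-like of shape (M, N), one flattened grayscale face per row.
    Returns (mean_face, faces), where `faces` holds one unit-norm eigenface per row,
    ordered by decreasing eigenvalue.
    """
    X = np.asarray(images, dtype=np.float64)
    mean = X.mean(axis=0)
    W = (X - mean).T                              # N x M matrix of mean-centred faces
    vals, vecs = np.linalg.eigh(W.T @ W)          # eigenpairs of the small M x M matrix
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-10 * max(vals.max(), 1.0)    # rank is at most M - 1
    faces = (W @ vecs[:, keep]).T                 # map d_i back to image space (W d_i)
    faces /= np.linalg.norm(faces, axis=1, keepdims=True)
    if num_components is not None:
        faces = faces[:num_components]
    return mean, faces

def project(face, mean, faces):
    """Eigenface weights (the 'eigen coefficients') of a flattened face."""
    return faces @ (np.asarray(face, dtype=np.float64) - mean)

def nearest_match(query_weights, database_weights):
    """Return (index, distance) of the closest database face in the weight space."""
    dists = np.linalg.norm(np.asarray(database_weights) - query_weights, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```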

V. IMAGE MATCHING

A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into a long thin vector. Let us suppose we have M vectors of size N (= rows of image × columns of image) representing a set of sampled images, where the pj's represent the pixel values:
xi = [p1 ... pN]^T, i = 1, ..., M
The images are mean centered by subtracting the mean image from each image vector. Let m represent the mean image:
m = (1/M) Σ_{i=1..M} xi
and let wi be defined as the mean centered image
wi = xi - m.
Our goal is to find a set of ei's which have the largest possible projection onto each of the wi's, i.e., a set of M orthonormal vectors ei for which the quantity
λi = (1/M) Σ_{n=1..M} (ei^T wn)^2
is maximized with the orthonormality constraint
el^T ek = δ_lk.
It has been shown that the ei's and λi's are given by the eigenvectors and eigenvalues of the covariance matrix
C = W W^T,
where W is a matrix composed of the column vectors wi placed side by side. The size of C is N × N, which could be enormous. For example, images of size 64 × 64 create a covariance matrix of size 4096 × 4096. It is not practical to solve for the eigenvectors of C directly. A common theorem in linear algebra states that the vectors ei and scalars λi can be obtained by solving for the eigenvectors and eigenvalues of the M × M matrix W^T W. Let di and μi be the eigenvectors and eigenvalues of W^T W, respectively:
W^T W di = μi di.
By multiplying both sides on the left by W,
W W^T (W di) = μi (W di),
which means that the first M − 1 eigenvectors ei and eigenvalues λi of W W^T are given by W di and μi, respectively. W di needs to be normalized in order to be equal to ei. Since we only sum up a finite number of image vectors M, the rank of the covariance matrix cannot exceed M − 1 (the −1 comes from the subtraction of the mean vector m).

VI. OUTPUT INTERFACE
Following are the dialogue boxes which are used for entering commands to manage the database, import the query images from the fusion module and find the matched or nearest possible matched image.

A. Database management

Figure 3. Screenshot 1

The database contains many sample visual and


thermal facial images. These images are fused during run
time and are compared with the input image. The output is
either a matched image or an image from the database
which has the least deviation in eigen values of selective
features from the input image. The database management
window shown above is the interface window used keeping
a record of the images added or deleted from the database. It
has provisions to add or remove a folder (containing visual
and thermal image of the same person), change image or
folder name hence updating the database each time the
program is run.
The various functions provided hereby are as
follows:
New: this is for the addition of the new data in the
existing database. By the new function, a fresh form appears
in which we fill out the details under the following form
fields such as name, folder name, visual image, thermal
name.
Update: once the new form fields are filled in, the
update function is used to carry out the actual update of the
database by adding the details to it.
Remove: this functionality is provided to remove or
delete details from the database. It calls the required
commands to eventually remove them from the database storage.


Previous: In this, the database form display


navigates to the previous record in the database record
sheet. But the current record is not changed.
Next: In this, the database form display navigates
to the next record in the database record sheet. But the
current record is not changed.

B. Face recognition

Figure 4. Screenshot 2

Comparing two images means finding differences or similarities between them. Differences may be quantitative or qualitative. The above window shows the interface from which the database is loaded (load database), the database images are fused (train data), the fused images are loaded (load trained data), the input fused image is chosen (select query), and the matched image, or the image having the least difference in eigen weights of selective eigen features, is produced as output (find match). The database tab is used to open the database management window.

Figure 5. Screenshot 3

The above window shows the input image being matched with an image in the database, hence authenticating the person.

VII. CONCLUSION
Our proposed model is fully viable and can be implemented easily. The system is simple, user friendly and reliable. The main disadvantage of the existing systems is that they use either the visual methodology or the thermal methodology, both of which have their own drawbacks. The visual methodology suffers from inherent disadvantages such as illumination variations and partial occlusions, while the thermal methodology suffers from effects such as temperature variations. The proposed system overcomes these disadvantages by combining the features of both systems.

VIII. FUTURE WORK

There are certain aspects that can be further


enhanced as part of the future works for enhancing the
proposed system. The system developed here requires the
initial process of fusion and the training of images to be
explicitly done from the main interface. Rather this can be
automatically executed by the system as part of future
enhancements as these two modules have to be implemented
for every execution. Secondly, another enhancement that
can be done is that in the developed system, if the query
image is not matched with any image in the database, it
shows "recognized image not found" but at the end still
displays the nearest image based on the eigenvector distance.
This can be improved so that the system ends by displaying
that the image has not been found. For this, a proper
threshold distance between eigenvectors has to be found,
above which the image is treated as not matched and
discarded, rather than displaying another image.




Euler Number, Symmetry and Histogram Based


Hierarchical Approach for English OCR
Mahesh M Goyani
Department of Computer Engineering
LDCE, Gujarat Technological University
Ahemedabad, India
mgoyani@gmail.com

Abstract - Recognizing text from scanned document is the


classical problem of pattern recognition and image processing.
In this paper, we have proposed hierarchical algorithm for
English Optical Character Recognition (OCR). Beauty of this
algorithm is that it does not need any features or classifiers like
Neural Network (NN) or Support Vector Machine (SVM).
Proposed approach recognizes various characters at different
level of its hierarchy using Euler property, symmetry and
histogram. We have exploited geometric shape of characters
for recognition process.
Keywords - Optical Character Recognition, Euler Number,
Histogram, Vertical Symmetry, Horizontal Symmetry.

I. INTRODUCTION

Optical Character Recognition (OCR) plays a vital role
in many places such as postal services, government offices
and examinations. OCR is the ability of a digital system to
take a scanned image as input and then interpret and
recognize the text [1]. OCR helps to create a paperless
environment by transferring physical documents into digital
form. Our OCR system is an offline system for U.S. English
documents. As a first step, OCR needs to segment the entire
document into lines, words, and characters. This can be done
in three ways: a top-down approach, a bottom-up approach,
or a hybrid approach [2].
The top-down technique starts with identifying coarse
features like lines and words and keeps splitting the document
until the finest features, like individual characters, are extracted.
It relies on methods like Run length smearing [3], [4],
relies on methods like Run length smearing [3], [4],
Projection profile methods [5], [6], [7], white stream [8],
Fourier transform [9], Template [10], Form definition
language [11], [12], Rule based system [7], Gradient [13]
etc.
Bottom-up methods start with the smallest element, like a
pixel, and derive higher-level concepts such as characters or
words by merging and growing [14]. Such methods are
flexible but prone to error and require more computation.
Examples include Connected component analysis [15], [16],
Run length smoothing [4], Region growing [17],
Neighborhood line density [18] and Neural networks [19].
Moreover, there are many other techniques which do not fit into any of the above categories; they fall under hybrid methods. Texture based methods [20], [21] and Gabor filter based methods [22] are examples of such hybrid methods. We have proposed a novel algorithm which recognizes characters in top-down order.
The rest of the paper is organized as follows. Section II describes the proposed algorithm, followed by experimental results in the next section. The conclusion and future enhancements are discussed in Section IV, followed by the references.

II. PROPOSED ALGORITHM

A. In-House Dataset
We have carried out our experiments on an in-house image dataset. We use computer generated test images in a single font (Calibri) with various sizes and weights. We have created 10 documents with varying text properties.
B. Algorithm
The English alphabet contains twenty six (26) letters. English is accepted as an international language and has the largest number of users. Unlike the Devnagari script, there is no feature like a headline to segment words in an English document. Fig. 1 shows the steps of our OCR system. Preprocessing includes noise removal, binarization of the document, line segmentation, word segmentation and character segmentation. Isolated characters are fed to the OCR system, which processes them at different levels and recognizes them.
Scanned documents often contain noise that arises due to the printer, scanner, print quality, age of the document, etc. Therefore, it is necessary to filter this noise before we process the image. A common approach is to low-pass filter the image and use the result for later processing. Salt-and-pepper noise is best handled with a median filter, while pure salt or pure pepper noise can be removed effectively using min and max filters respectively. The histogram of the original document and of the thresholded document is shown in Fig. 3.
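A minimal Python sketch of this pre-filtering step, assuming SciPy is available (the filter sizes are illustrative choices, not values from the paper):

import numpy as np
from scipy.ndimage import median_filter, grey_erosion, grey_dilation

def denoise(gray):
    """Suppress salt-and-pepper noise on a uint8 grayscale page with a small median filter."""
    return median_filter(gray, size=3)

def remove_salt(gray):
    return grey_erosion(gray, size=(3, 3))    # min filter suppresses bright specks

def remove_pepper(gray):
    return grey_dilation(gray, size=(3, 3))   # max filter suppresses dark specks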

Figure 1. Block diagram of the proposed OCR system: Preprocessing, followed by Euler Number Calculation (L1), Vertical Symmetry (L2), Horizontal Symmetry (L3), Left Edge Histogram (L4) and Bottom Edge Scanning (L5).

Because of differences in font family, size, orientation, weight, etc., recognizing characters is a difficult task. Several preprocessing steps are required to implement the OCR system, as shown in Fig. 2.
Figure 2. Block diagram of the preprocessing step: Input Document, Digitization, Thresholding, Line Segmentation, Word Segmentation and Character Isolation.

Preprocessing is an essential step to produce data on which the OCR can operate with high robustness and accuracy. It includes several steps such as digitization, binarization, noise removal, skew detection and correction, segmentation at various levels, and scaling. This approach is for a single font (Calibri); in future it can be extended to a multi-font system.
2.1 Digitization and Thresholding
The process of text digitization can be performed by a scanner, a computer or a digital camera. We have used computer generated text documents. The digitized images are in gray tone. Binarization is a technique by which gray scale images are converted to binary images. Though we generate the documents on a computer, quantization noise is introduced while saving them; this noise appears as gray spots on the image, so it is necessary to binarize the input image. We have used a histogram-based threshold approach to convert the gray scale image into a two tone image.
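The paper only states that a histogram-based threshold is used; one common histogram-based choice is Otsu's method, sketched below as an illustration:

import numpy as np

def otsu_threshold(gray):
    """Pick the threshold that maximizes between-class variance of the gray histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def binarize(gray):
    t = otsu_threshold(gray)
    return (gray < t).astype(np.uint8)   # 1 = ink (dark), 0 = background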
Figure 3. Histogram of the original image (left) and the thresholded image (right, normalized intensity).

2.2 Slant Angle Correction
When the document to be scanned is fed to the scanner carelessly, the digitized image may be skewed. Identification of the skew angle and its correction is essential. Skew correction can be done by rotating the document in the inverse direction by the same skew amount.
2.3 Segmentation
We have employed a hierarchical approach for segmentation. Segmentation is performed at different levels: line segmentation, word segmentation and character segmentation.
2.3.1 Line Segmentation: The histogram plays a central role in binary image segmentation. From the histogram of the test document, it is very easy to calculate the boundaries of each line. For isolating text lines, the image document is scanned horizontally and the number of black pixels in each row is counted in order to construct the row histogram. A position between two consecutive lines where the number of black pixels in a row drops to zero denotes a boundary between the lines.
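A minimal sketch of this row-histogram line segmentation, assuming a binarized page with 1 = ink (function and variable names are illustrative):

import numpy as np

def segment_lines(binary):
    """Split a binarized page into text lines using the row histogram."""
    row_counts = binary.sum(axis=1)           # black pixels per row
    in_line, lines, start = False, [], 0
    for r, count in enumerate(row_counts):
        if count > 0 and not in_line:         # blank-to-ink transition: a line starts
            in_line, start = True, r
        elif count == 0 and in_line:          # ink-to-blank transition: the line ends
            in_line = False
            lines.append(binary[start:r])
    if in_line:
        lines.append(binary[start:])
    return lines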
2.3.2 Word Segmentation: Minimum and maximum x coordinates are calculated for each character. If the difference between x_min of the next character and x_max of the current character is greater than some threshold, a space is recognized between the two characters and hence a word boundary.
2.3.3 Character Segmentation / Isolation: The column histogram of each segmented line gives the boundaries of each word. A portion of the line with continuous black pixels is considered to be a word in that line; if no black pixels are found in some vertical scan, that column is considered to be spacing between words.
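The column-histogram cut of sections 2.3.2-2.3.3 can be sketched as below; the gap threshold `space_gap` is an assumed tuning parameter, not a value from the paper:

import numpy as np

def segment_characters(line, space_gap=6):
    """Cut one binarized text line into characters using its column histogram;
    a run of blank columns wider than space_gap marks a word boundary."""
    col_counts = line.sum(axis=0)
    chars, words, start, last_end = [], [], None, None
    for c, count in enumerate(col_counts):
        if count > 0 and start is None:
            if last_end is not None and c - last_end > space_gap:
                words.append(chars)            # wide blank run -> word boundary
                chars = []
            start = c
        elif count == 0 and start is not None:
            chars.append(line[:, start:c])     # continuous black columns = one character
            last_end, start = c, None
    if start is not None:
        chars.append(line[:, start:])
    words.append(chars)
    return words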
2.4 Character Recognition
Characters are recognized at different levels, so there is no need to run every level check for each character. At the very first level, we calculate the Euler number of each character, which is -1, 0 or 1. Based on this, the entire character set is divided into three groups, as shown in Table II. If the input character is B, its Euler number is -1, so it is extracted at the very first level and the algorithm directly takes the next character to process. The return value of the Euler function is a scalar whose value is the total number of objects in the image minus the total number of holes in those objects.
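A sketch of this Euler-number computation using simple connected-component labelling (this is an illustration with SciPy, not the paper's exact implementation):

import numpy as np
from scipy.ndimage import label

def euler_number(char_img):
    """Euler number = number of objects minus number of holes."""
    objects = label(char_img)[1]                       # connected foreground regions
    # holes = background components that do not touch the image border
    padded = np.pad(1 - char_img, 1, constant_values=1)
    n_bg = label(padded)[1]
    holes = n_bg - 1                                   # one component is the outer background
    return objects - holes

# e.g. 'B' has 1 object and 2 holes, giving Euler number -1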

Level 1: Euler Number Calculation
1. Find the Euler number of the input character.
2. If the Euler number of the character is -1, classify it as B;
   else if the Euler number is 0, put it in Group 1 of Level 1;
   else put it in Group 2 of Level 1.
Possible output characters: B

If the character is other than B, then at the next level we check the vertical symmetry of the input character. In Level 2 no character is recognized, but the input character is placed in one of the four groups of Level 2. For checking vertical symmetry, we first split the character at the vertical middle; the right half is reflected around the y axis and joined with the right half itself to generate a new character. The Euclidean distance between this newly created character and the original character is calculated. If the distance is less than a predefined threshold, the input character is vertically symmetric; otherwise it is not.

Level 2: Vertical Symmetry
1. Divide the character into two halves at the middle column.
2. Reflect the right half about the y axis and join it with the right half to form a full character.
3. Find the Euclidean distance ED between the original character and the newly created character.
4. For i = 1 to 2 do
     if ED < theta then put it in sub group 1 of group i of Level 1;
     else put it in sub group 2 of group i of Level 1.
Possible output characters: none (no character is recognized)

In Level 3, we check the horizontal symmetry of the character. For this, we first split the character at the horizontal middle; the upper half is reflected around the x axis and joined with the upper half itself to generate a new character. The Euclidean distance between this newly created character and the original character is calculated. If the distance is less than a predefined threshold, the input character is horizontally symmetric; otherwise it is not. At this level we are able to recognize three more characters (A, D and O).

Level 3: Horizontal Symmetry
1. Divide the character into two halves at the middle row.
2. Reflect the upper half about the x axis and join it with the upper half to form a full character.
3. Find the Euclidean distance ED between the original character and the newly created character.
4. For each i-th group of Level 2
     if ED < theta then put it in sub group 1 of group i of Level 2;
     else put it in sub group 2 of group i of Level 2.
Possible output characters: A, D, O

In the remaining levels, we examine histogram properties of various edges (left edge, bottom edge, top edge, middle, right edge, etc.) and discriminate the input characters based on them. At each stage further characters are extracted; the complete procedural output is shown in Table II.

Level 4: Left Edge Histogram
1. Take the histogram of the n left-most columns and count the number of black pixels.
2. For each i-th group of Level 3
     if count > threshold then put it in sub group 1 of group i of Level 3;
     else put it in sub group 2 of group i of Level 3.
Possible output characters: C, Q, X

Level 5: Bottom Edge Scanner
1. Scan the bottom row of each character and increment a count every time the scan line enters a black region.
2. For each i-th group of Level 4
     if count == 1 then put it in sub group 1 of group i of Level 4;
     else put it in sub group 2 of group i of Level 4.
Possible output characters: P, R, I, H, U, M, W, E, K, N
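The symmetry tests of Levels 2 and 3 above can be sketched as follows; comparing the mirrored half directly against the opposite half is equivalent to the rebuild-and-compare test up to a constant factor, and theta is an assumed normalized threshold:

import numpy as np

def is_vertically_symmetric(char_img, theta=0.15):
    """Level-2 style test: compare the left half with the mirrored right half."""
    h, w = char_img.shape
    left, right = char_img[:, :w // 2], char_img[:, w - w // 2:]
    dist = np.linalg.norm(left.astype(float) - np.fliplr(right).astype(float))
    return dist / np.sqrt(left.size) < theta

def is_horizontally_symmetric(char_img, theta=0.15):
    """Level-3 style test: compare the upper half with the mirrored lower half."""
    h, w = char_img.shape
    upper, lower = char_img[:h // 2, :], char_img[h - h // 2:, :]
    dist = np.linalg.norm(upper.astype(float) - np.flipud(lower).astype(float))
    return dist / np.sqrt(upper.size) < theta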
Level 6: Top Edge Histogram
1. Take the histogram of the n top-most rows and count the number of black pixels.
2. For each i-th group of Level 5
     if count > threshold then put it in sub group 1 of group i of Level 5;
     else put it in sub group 2 of group i of Level 5.
Possible output characters: T, F, L, Z
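The edge-histogram style features used in Levels 4 to 8 and the bottom-edge stroke count of Level 5 can be sketched as below (n and the decision thresholds are tuning parameters, chosen here only for illustration):

import numpy as np

def edge_features(char_img, n=3):
    """Black-pixel counts over the n outermost / middle columns and rows,
    plus the number of strokes crossed by the bottom scan line."""
    mid = char_img.shape[1] // 2
    counts = {
        "left":   int(char_img[:, :n].sum()),
        "top":    int(char_img[:n, :].sum()),
        "right":  int(char_img[:, -n:].sum()),
        "middle": int(char_img[:, mid - n // 2: mid + n // 2 + 1].sum()),
    }
    bottom = char_img[-1, :]
    # count 0 -> 1 transitions: how many times the bottom scan line enters ink
    counts["bottom_strokes"] = int(((bottom[1:] == 1) & (bottom[:-1] == 0)).sum()
                                   + int(bottom[0] == 1))
    return counts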
Level 7: Middle Line Histogram
1. Take the histogram of the n middle columns and count the number of black pixels.
2. For each i-th group of Level 6
     if count > threshold then put it in sub group 1 of group i of Level 6;
     else put it in sub group 2 of group i of Level 6.
Possible output characters: Y, V, S

Table II describes the complete hierarchy of the recognition process and its results at the various levels. By the end of Level 5, almost 65 percent of the characters, which have a higher probability of occurrence in the English language, are recognized. By the end of Level 6, the algorithm is able to extract around 80 percent of the set of 26 alphabets.
TABLE II. Level-wise character recognition

Level | Characters recognized at this level | Count | Cumulative
L1    | B                                   | 1     | 1
L2    | (none)                              | 0     | 1
L3    | A, D, O                             | 3     | 4
L4    | C, Q, X                             | 3     | 7
L5    | P, R, I, H, U, M, W, E, K, N        | 10    | 17
L6    | T, F, L, Z                          | 4     | 21
L7    | Y, V, S                             | 3     | 24
L8    | J, G                                | 2     | 26

Level 8: Right Edge Histogram
1. Take the histogram of the n right-most columns and count the number of black pixels.
2. For each i-th group of Level 7
     if count > threshold then the recognized character is J;
     else the recognized character is G.
Possible output characters: J, G
III. EXPERIMENTAL RESULTS

Since this algorithm operates over up to eight levels, it is desirable that characters with higher frequency be extracted at the upper levels of the tree. The relative frequency of the English alphabets is shown in Table I. Low frequency characters like G and J are recognized only at Level 8, but this does not affect the performance of the algorithm much, as their contribution is very small compared to the other characters. On the other hand, most of the higher frequency characters are detected by Level 5, which helps speed up the procedure.
TABLE I. Relative frequency of letters in English text

Char | Freq (%) | Detection Level      Char | Freq (%) | Detection Level
E    | 12.702   | L5                   M    | 2.406    | L5
T    | 9.056    | L6                   W    | 2.360    | L5
A    | 8.167    | L3                   F    | 2.228    | L6
O    | 7.507    | L3                   G    | 2.015    | L8
I    | 6.996    | L5                   Y    | 1.974    | L7
N    | 6.749    | L5                   P    | 1.929    | L5
S    | 6.327    | L7                   B    | 1.492    | L1
H    | 6.094    | L5                   V    | 0.978    | L7
R    | 5.987    | L5                   K    | 0.772    | L5
D    | 4.253    | L3                   J    | 0.153    | L8
L    | 4.025    | L6                   X    | 0.150    | L4
C    | 2.782    | L4                   Q    | 0.095    | L4
U    | 2.758    | L5                   Z    | 0.074    | L6


Table III shows the robustness of the proposed algorithm. We have tested the system against ten documents having different numbers of characters. The average accuracy of the algorithm is about 96 percent. G is the only character which is misclassified: G and S are separated based on the middle line histogram threshold, and hence there is misclassification in some cases.
TABLE III. Result analysis of the proposed algorithm

Document Index | Total Characters | Correct Recognition | Recognition Rate (%)
1     | 70   | 70   | 100.00
2     | 135  | 128  | 94.82
3     | 65   | 65   | 100.00
4     | 114  | 106  | 92.98
5     | 178  | 171  | 96.07
6     | 123  | 119  | 96.75
7     | 69   | 68   | 98.55
8     | 79   | 76   | 96.20
9     | 136  | 128  | 94.11
10    | 165  | 157  | 95.15
Total | 1134 | 1088 | 95.94

The following snapshots show two of the input documents and their outputs.

Figure 4. Input Test Document 1
Figure 5. Result of Test Document 1
Figure 6. Test Document 2
Figure 7. Result of Test Document 2

IV. CONCLUSIONS

The beauty of this algorithm is that it is independent of any kind of training. We do not need any type of classifier to classify the characters. It is a hierarchical approach and classifies the characters at different levels. The levels are applied in such an order that the most probable characters are classified as early as possible. Table I shows the relative frequency of all characters; most of the higher frequency characters are recognized at an early stage of the tree, so less processing is required. The method mainly uses histogram based comparison and exploits the geometry of the characters, and hence it is very fast and simple for processing documents.

REFERENCES

[1] S. Mori, C.Y. Suen, K. Yamamoto, "Historical review of OCR research and development", Proc. IEEE, 80(7):1029-1058, 1992.
[2] O. Okun, D. Doermann, Matti P., "Page segmentation and zone classification: the state of the art", Nov 1999.
[3] J. Kanai, M.S. Krishnamoorthy, T. Spencer, "Algorithm for manipulating nested block represented images", SPSE's 26th Fall Symposium, Arlington VA, USA, Oct 1986, pp. 190-193.
[4] K.Y. Wang, R.G. Casey, F.M. Fahl, "Document analysis system", IBM Journal of Research and Development, Vol. 26, No. 6, Nov 1982, pp. 647-656.
[5] M.S. Krishnamoorthy, G. Nagy, S. Seth, M. Vishwanathan, "Syntactic segmentation and labeling of digitized pages from technical journals", IEEE Computer Vision, Graphics and Image Processing, Vol. 47, 1993, pp. 327-352.
[6] K.H. Lee, Y.C. Choy, S. Cho, "Geometric structure analysis of document images: a knowledge-based approach", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 22, Nov 2000, pp. 1224-1240.
[7] O. Iwaki, H. Kida, H. Arakawa, "A segmentation method based on office document hierarchical structure", Proc. of Intl. Conf. on Systems, Man and Cybernetics, Alexandria VA, USA, Oct 1987, pp. 759-763.
[8] T. Pavlidis, J. Zhou, "Segmentation by white streams", Proc. Intl. Conf. on Document Analysis and Recognition (ICDAR'91), St-Malo, France, pp. 945-953.
[9] M. Hose and Y. Hoshino, "Segmentation method of document images by two-dimensional Fourier transformation", Systems and Computers in Japan, Vol. 16, No. 3, 1985, pp. 38-47.
[10] A. Dengel and G. Barth, "Document description and analysis by cuts", Proc. Conference on Computer-Assisted Information Retrieval, MIT, USA, 1988.
[11] H. Fujisawa and Y. Nakano, "A top-down approach for the analysis of document images", Proc. of Workshop on Syntactic and Structural Pattern Recognition (SSPR 90), 1990, pp. 113-122.
[12] J. Higashino, H. Fujisawa, Y. Nakano and M. Ejiri, "A knowledge-based segmentation method for document understanding", Proc. 8th Intl. Conf. on Pattern Recognition, Paris, France, 1986, pp. 745-748.
[13] J. Duong, M. Cote, H. Emptoz, C. Suen, "Extraction of text areas in printed document images", ACM Symposium on Document Engineering (DocEng'01), Atlanta, USA, November 9-10, 2001, pp. 157-165.
[14] Swapnil Khedekar, Vemulapati Ramanaprasad, Srirangraj Setlur, Venugopal Govindaraju, "Text-image separation in Devanagari documents", Proc. of 7th IEEE Conference on Document Analysis and Recognition, 2003.
[15] J.P. Bixler, "Tracking text in mixed-mode documents", Proc. ACM Conference on Document Processing Systems, 1998, pp. 177-185.
[16] H. Makino, "Representation and segmentation of document images", Proc. of IEEE Computer Society Conference on Pattern Recognition and Image Processing, 1983, pp. 291-296.
[17] A.K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, USA, 1989.
[18] O. Iwaki, H. Kida and H. Arakawa, "A character/graphics segmentation method using neighborhood line density", Trans. of Institute of Electronics and Communication Engineers of Japan, Part D, J68D, 4, 1985, pp. 821-828.
[19] C.L. Tan, Z. Zhang, "Text block segmentation using pyramid structure", SPIE Document Recognition and Retrieval, Vol. 8, January 24-25, 2001, San Jose, USA, pp. 297-306.
[20] D. Chetverikov, J. Liang, J. Komuves, R.M. Haralick, "Zone classification using texture features", Proc. of Intl. Conf. on Pattern Recognition, Vol. 3, 1996, pp. 676-680.
[21] W.S. Baird, S.E. Jones, S.J. Fortune, "Image segmentation by shape directed covers", Proc. of Intl. Conf. on Pattern Recognition, Vol. 4, 1996, pp. 820-825.
[22] A.K. Jain and S. Bhattacharjee, "Text segmentation using Gabor filters for automatic document processing", Machine Vision and Applications, Vol. 5, No. 3, 1992, pp. 169-184.
COMPARISON OF IMAGE COMPRESSION THROUGH NEURAL NETWORK AND WAVELETS

Kavita Pateriya
Comp. Science & Engg.
BUIT, BU
Bhopal, India

Abstract - This paper is based on the concept of wavelet based image compression using a Neural Network, SPIHT and STW. Wavelet based methods have shown good adaptability towards a wide range of data while exhibiting reasonable complexity. The Spatial-orientation Tree Wavelet (STW) has been proposed as a method for effective and efficient embedded image coding, and it holds good for important features such as PSNR, MSE and image size. SPIHT has been used successfully in many applications. The techniques are compared using the performance parameters PSNR and MSE.
Keywords - PSNR, MSE, STW, SPIHT, Neural Network

I. INTRODUCTION
Data come in the form of graphics, audio, video and images. These types of data have to be compressed during transmission. Large amounts of data cannot be stored if only low storage capacity is available. Compression offers a means to reduce the cost of storage and increase the speed of transmission. Image compression is used to minimize the size in bytes of a graphics file without degrading the quality of the image. There are two types of image compression: lossy and lossless. Some compression algorithms used in the earlier days [3], [4] were among the first to be proposed using wavelet methods [2]. Over the past few years, a variety of powerful and sophisticated wavelet based schemes for image compression have been developed and implemented, and these coders provide better picture quality. Wavelet based image compression based on set partitioning in hierarchical trees (SPIHT) [5], [6] is a powerful, efficient and yet computationally simple image compression algorithm. It provides better performance when compared to the Embedded Zerotree Wavelet [7] transform.
The objective of this paper is twofold. First, the images are compressed using the SPIHT and STW techniques. Second, the image quality is measured objectively, using Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE).
II. SPIHT ALGORITHM

The SPIHT algorithm is a highly refined version of the EZW algorithm. It was introduced in [5] and [6] by Said and Pearlman. Some of the best results (the highest PSNR values for given compression ratios over a wide variety of images) have been obtained with SPIHT. Consequently, it is probably the most widely used wavelet-based algorithm for image compression, providing a basic standard of comparison for all subsequent algorithms.
SPIHT stands for Set Partitioning in Hierarchical Trees. The term Hierarchical Trees refers to the quad trees that we defined in our discussion of EZW. Set Partitioning refers to the way these quad trees divide up, or partition, the wavelet transform values at a given threshold. By a careful analysis of this partitioning of transform values, Said and Pearlman were able to greatly improve the EZW algorithm, significantly increasing its compressive power.
Our discussion of SPIHT consists of three parts. First, we describe a simpler variant, which we shall refer to as the Spatial-orientation Tree Wavelet (STW) algorithm; STW is essentially the SPIHT algorithm, the only difference being that SPIHT is slightly more careful in its organization of coding output. Second, we describe the SPIHT algorithm itself; it is easier to explain SPIHT using the concepts underlying STW. Third, we see how well SPIHT compresses images.
The only difference between STW and EZW is that STW uses a different approach to encoding the zero tree information. STW uses a state transition model: from one threshold to the next, the locations of transform values undergo state transitions. This model allows STW to reduce the number of bits needed for encoding. Instead of the code for the symbols R and I output by EZW to mark locations, the STW algorithm uses the states IR, IV, SR and SV and outputs code for state transitions such as IR -> IV, SR -> SV, etc. To define the states involved, some preliminary definitions are needed.
For a given index m in the baseline scan order, define the set D(m) as follows. If m is either at the 1st level or at the all-lowpass level, then D(m) is the empty set. Otherwise, if m is at the j-th level for j > 1, then D(m) = {descendants of index m in the quad tree with root m}. The significance function S is defined by

S(m) = max over n in D(m) of |w(n)|  if D(m) is non-empty,  and  S(m) = |w(m)|  if D(m) is empty.

With these preliminary definitions in hand, we can now define the states. For a given threshold T, the states IR, IV, SR and SV are defined by:

m is in state IR if and only if |w(m)| < T and S(m) < T;
m is in state IV if and only if |w(m)| < T and S(m) >= T;
m is in state SR if and only if |w(m)| >= T and S(m) < T;
m is in state SV if and only if |w(m)| >= T and S(m) >= T.

In Fig. 1 we show the state transition diagram for these states when the threshold is decreased from T to T' < T. Notice that once a location m arrives in state SV, it remains in that state. Furthermore, there are only two transitions from each of the states IV and SR, so those transitions can be coded with one bit each. A simple binary coding for these state transitions is shown in Table 1.

Figure 1: State transition diagram for STW

Table 1: Code for state transitions (* indicates that the SV -> SV transition is certain, hence no encoding is needed; blank cells are transitions that cannot occur)

Old \ New | IR | IV | SR | SV
IR        | 00 | 01 | 10 | 11
IV        |    |  0 |    |  1
SR        |    |    |  0 |  1
SV        |    |    |    |  *

III. STW ALGORITHM

Now that we have laid the groundwork for the STW algorithm, we can give its full description.

STW encoding

Step 1 (Initialize). Choose an initial threshold, T = T0, such that all transform values satisfy |w(m)| < T0 and at least one transform value satisfies |w(m)| >= T0/2. Assign all indices for the L-th level, where L is the number of levels in the wavelet transform, to the dominant list (this includes all locations in the all-lowpass subband as well as the horizontal, vertical and diagonal subbands at the L-th level). Set the refinement list of indices equal to the empty set.

Step 2 (Update threshold). Let Tk = Tk-1 / 2.

Step 3 (Dominant pass). Use the following procedure to scan through the indices in the dominant list (which can change as the procedure is executed).

Do
  Get next index m in dominant list
  Save old state Sold = S(m, Tk-1)
  Find new state Snew = S(m, Tk) using the state definitions above
  Output code for state transition Sold -> Snew
  If Snew is not equal to Sold then do the following
    If Sold is not SR and Snew is not IV then
      Append index m to refinement list
      Output sign of w(m) and set wQ(m) = Tk
    If Sold is not IV and Snew is not SR then
      Append child indices of m to dominant list
    If Snew = SV then
      Remove index m from dominant list
Loop until end of dominant list

Step 4 (Refinement pass). Scan through the indices m in the refinement list found with higher threshold values Tj, for j < k (if k = 1, skip this step). For each value w(m), do the following:

If |w(m)| lies in [wQ(m), wQ(m) + Tk), then
  Output bit 0
Else if |w(m)| lies in [wQ(m) + Tk, wQ(m) + 2Tk), then
  Output bit 1
  Replace the value of wQ(m) by wQ(m) + Tk

Step 5 (Loop). Repeat steps 2 through 4.
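A minimal sketch of the state classification used in the dominant pass, written directly from the four state definitions above (the function name is illustrative):

def stw_state(w_m, s_m, T):
    """Classify one scan-order location for threshold T from its own magnitude
    |w(m)| and the descendant significance S(m)."""
    self_sig = abs(w_m) >= T          # is the value itself significant?
    desc_sig = s_m >= T               # is any descendant significant?
    if not self_sig and not desc_sig:
        return "IR"                   # insignificant root
    if not self_sig and desc_sig:
        return "IV"                   # insignificant value, significant descendants
    if self_sig and not desc_sig:
        return "SR"                   # significant root, insignificant descendants
    return "SV"                       # significant value and descendants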
To see how STW works and how it improves the EZW method, it helps to reconsider the earlier example: we show the STW states for the wavelet transform using the same two thresholds as we used previously with EZW. It is important to compare the three quad trees enclosed in the dashed boxes with the corresponding EZW quad trees. There is a large saving in coding output for STW represented by these quad trees. The EZW symbols for these three quad trees are +IIII, -IIII and +RRRR. For STW, however, they are described by the symbols +SR, -SR and +SR, which is a substantial reduction in the information that STW needs to encode.
There is not much difference between STW and SPIHT. The one thing that SPIHT does differently is to carefully organize the output of bits in the encoding of the state transitions in Table 1, so that only one bit is output at a time. For instance, for the transition IR -> SR, which is coded as 10 in Table 1, SPIHT outputs a 1 first and then (after further processing) outputs a 0. Even if the bit budget is exhausted before the second bit can be output, the first bit of 1 indicates that there is a new significant value.
The SPIHT encoding process, as described in [6], is phrased in terms of pixel locations [i, j] rather than indices m in a scan order. To avoid introducing new notation, and to highlight the connections between SPIHT and the other algorithms, EZW and STW, we rephrase the description of SPIHT from [6] in terms of scanning indices. We also slightly modify the notation used in [6] in the interests of clarity.
First, we need some preliminary definitions. For a given set I of indices in the baseline scan order, the significance ST[I] of I relative to a threshold T is defined by

ST[I] = 1 if the maximum of |w(m)| over m in I is at least T, and ST[I] = 0 otherwise.

It is important to note that, for the initial threshold T0, we have ST0[I] = 0 for all sets of indices. If I is a set containing just a single index m, then for convenience we write ST[m] instead of ST[{m}].
For a succinct presentation of the method, we need the following definitions of sets of indices:
D(m) = {descendant indices of the index m}
C(m) = {child indices of the index m}
G(m) = D(m) - C(m) = {grandchildren of m, i.e., descendants which are not children}.
In addition, the set H consists of the indices for the L-th level, where L is the number of levels in the wavelet transform (this includes all locations in the all-lowpass subband as well as the horizontal, vertical and diagonal subbands at the L-th level). It is important to remember that the indices in the all-lowpass subband have no descendants; if m marks a location in the all-lowpass subband, then D(m) is empty.
SPIHT keeps track of the states of sets of indices by means of three lists: the list of insignificant sets (LIS), the list of insignificant pixels (LIP) and the list of significant pixels (LSP). For each list a set is identified by a single index; in the LIP and LSP these indices represent the singleton sets {m}, where m is the identifying index. An index m is called either significant or insignificant, depending on whether the transform value w(m) is significant or insignificant with respect to a given threshold. For the LIS, the index m denotes either D(m) or G(m); in the former case, the index m is said to be of type D and, in the latter case, of type G.
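As an illustration of the sets C(m), D(m) and G(m), the sketch below uses the common dyadic convention in which a detail coefficient at position (i, j) of a coarser level has four children at the next finer level; this convention is an assumption for the example, not a statement of the paper's indexing:

def children(i, j, level):
    """Child positions of coefficient (i, j) in a spatial-orientation tree.
    D(m) is the union of C(m) and all further descendants; G(m) = D(m) - C(m)."""
    if level == 1:                    # finest level: no descendants, D(m) is empty
        return []
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]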
IV. TRANSFORM CODING USING NEURAL NETWORKS

One solution to the problems associated with the calculation of the basis vectors through eigen-decomposition of the covariance estimate is the use of iterative techniques based on neural network models. These approaches require less storage overhead and can be more computationally efficient. They are also able to adapt over long-term variations in the image statistics.
In 1949, Donald Hebb proposed a mechanism whereby the synaptic strengths between connecting neurons can be modified to effect learning in a neuro-biological network. Hebb's postulate of learning states that the ability of one neuron to cause the firing of another neuron increases when that neuron consistently takes part in firing the other. In other words, when an input and an output neuron tend to fire at the same time, the connection between the two is reinforced.
For artificial neural networks, the neural interactions can be modelled as a simplified linear computational unit. The output of the neuron, y, is the sum of the inputs {x1, x2, ..., xN} weighted by the synaptic weights {w1, w2, ..., wN}, or in vector notation, y = w^T x.

Fig.: Principal components linear network.

Taking the input and output values to represent firing rates, the application of Hebb's postulate of learning to this model means that a weight wi is increased when the values of xi and y are correlated. Extending this principle to include simultaneous negative values (analogous to inhibitory interactions in biological networks), the weights w are modified according to the correlation between the input vector x and the output y. A simple Hebbian rule updates the weights in proportion to the product of the input and output values as

w(t+1) = w(t) + eta * y(t) * x(t),

where eta is a learning-rate parameter. However, such a rule is unstable, since the weights tend to grow without bound.
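A minimal sketch of this update for the linear unit y = w^T x (the learning rate and toy data are illustrative; practical PCA networks add a normalization or decay term, e.g. Oja's rule, to keep the weights bounded):

import numpy as np

def hebbian_step(w, x, eta=0.01):
    """One plain Hebbian update w <- w + eta * y * x."""
    y = w @ x
    return w + eta * y * x

# toy usage: repeated updates align w with the dominant input direction
rng = np.random.default_rng(1)
w = rng.normal(size=4)
for _ in range(100):
    x = rng.normal(size=4) * np.array([3.0, 1.0, 0.5, 0.1])   # anisotropic inputs
    w = hebbian_step(w, x)
print(np.round(w / np.linalg.norm(w), 2))   # direction of the leading component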

V. EXPERIMENTAL RESULTS

Single test images are used for the experiments. The results of the experiments are used to find the PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) values for the reconstructed images.

VI. PERFORMANCE ANALYSIS

The above algorithms are compared and the results are shown in the following tables. The MSE and PSNR values for the images compressed by SPIHT, STW and the neural network are tabulated in Tables 1 and 2. They are calculated using the standard formulas

MSE = (1 / (M*N)) * sum over i,j of [I(i,j) - K(i,j)]^2

PSNR = 10 * log10( MAX^2 / MSE ) dB

where I is the original M x N image, K is the reconstructed image and MAX is the maximum possible pixel value (255 for 8-bit images).
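A short sketch of these two quality measures in Python (a direct transcription of the standard definitions above):

import numpy as np

def mse(original, reconstructed):
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, max_val=255.0):
    """PSNR in dB for 8-bit images; max_val is the peak pixel value."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)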

Table 1: Mean Square Error (MSE)

Image / Method | SPIHT    | STW      | Neural
1.jpg          | 136.5054 | 113.5719 | 110.2
2.jpg          | 56.1403  | 47.03213 | 46.02
3.jpg          | 54.32996 | 49.3481  | 49.2

The following tables show the MSE, PSNR, compression ratio, BPP and image size for each method.

Table 2: PSNR in dB

Image / Method | SPIHT    | STW      | Neural
1.jpg          | 26.7793  | 27.57809 | 28.28
2.jpg          | 30.63806 | 31.40686 | 31.50
3.jpg          | 30.78041 | 31.1981  | 31.20

The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses one bit for each pixel, so each pixel can be either on or off; at 1 bpp, 2^1 = 2 colors (monochrome). Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors.

For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green and blue components. High color, usually meaning 16 bpp, normally has five bits for red and blue, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).
The compression ratio (that is, the size of the compressed file compared to that of the uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and still-image equivalents:
Video can be compressed immensely (e.g. 100:1) with little visible quality loss.
Audio can often be compressed at 10:1 with imperceptible loss of quality.
Still images are often lossily compressed at 10:1, as with audio, but the quality loss is more noticeable, especially on closer inspection.
The compression rate is 5 to 6 percent of the actual file size in lossy compression, while in lossless compression it is about 50 to 60 percent.

Table 3: Compression Ratio

Image / Method | SPIHT    | STW      | Neural
1.jpg          | 2.072271 | 3.002421 | 3.02
2.jpg          | 3.011831 | 4.389699 | 4.40
3.jpg          | 2.175776 | 3.147634 | 3.20


Table 4: Bits Per Pixel (BPP)

Image / Method | SPIHT    | STW      | Neural
1.jpg          | 0.497345 | 0.720581 | 0.79
2.jpg          | 0.722839 | 1.053528 | 1.35
3.jpg          | 0.522186 | 0.755432 | 0.85

Table 5: Image Size in KB

Image / Method | SPIHT | STW   | Neural
1.jpg          | 33341 | 34300 | 34400
2.jpg          | 30378 | 31486 | 32000
3.jpg          | 22408 | 23498 | 23550

VII. CONCLUSION

In this paper we have shown that SPIHT and the neural network are able to compress the image more accurately than STW and produce a lower Mean Square Error (MSE). For example, image 1.jpg has a size of 40.4 kB; applying SPIHT and the neural network compresses this image to 32.55 kB, whereas STW can compress it only to 33.50 kB. Hence SPIHT produces better results than STW.
REFERENCES
[1] S.P. Raja, A. Suruliandi, "Performance Evaluation on EZW & WDR Image Compression Techniques", ICCCCT'10, IEEE, 978-1-4244-7770-8/10, 2010.
[2] G.M. Davis, A. Nosratinia, "Wavelet-based Image Coding: An Overview", Applied and Computational Control, Signals and Circuits, Vol. 1, No. 1, 1998.
[3] M. Antonini, M. Barlaud, P. Mathieu, I. Daubechies, "Image coding using wavelet transform", IEEE Trans. Image Proc., Vol. 5, No. 1, pp. 205-220, 1992.
[4] J.M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. Signal Proc., Vol. 41, No. 12, pp. 3445-3462, 1993.
[5] Said, W.A. Pearlman, "Image compression using the spatial-orientation tree", IEEE Int. Symp. on Circuits and Systems, Chicago, IL, pp. 279-282, 1993.
[6] Said, W.A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. on Circuits and Systems for Video Technology, Vol. 6, No. 3, pp. 243-250, 1996.
[7] J.M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. Signal Proc., Vol. 41, No. 12, pp. 3445-3462, 1993.

An ANN Based Movie Business Prediction Method using a Subset of Training Data

Debaditya Barman
Department of Computer Science and Engineering
Jadavpur University
Kolkata - 700032, India

Dr. Nirmalya Chowdhury
Associate Professor, Department of Computer Science and Engineering
Jadavpur University
Kolkata - 700032, India

Abstract - The film industry is the most important component of the entertainment industry. A large amount of money is invested in this high risk industry, and both profit and loss can be very high. Before the release of a particular movie, if the production house gets any kind of prediction of how profitable the film will be, it can reduce the risk considerably. As with stock market prediction and weather prediction, a back-propagation neural network can be applied to predict the possible business of a movie. In this paper, we propose a training method based on similar movie features to train a back-propagation neural network for prediction of the possible business of a movie.
Keywords: Film industry; film genre; back-propagation network; ANN

I. INTRODUCTION

A movie [4], also called a film or motion picture, is a series of still or moving images. It is produced by recording photographic images with cameras, or by creating images using animation techniques or visual effects.
Films are cultural artifacts created by specific cultures, which reflect those cultures and, in turn, affect them. Film is considered to be an important art form, a source of popular entertainment and a powerful method for educating or indoctrinating citizens. The visual elements of cinema give motion pictures a universal power of communication.
The process of filmmaking has developed into an art form and has created an industry in itself. The film industry is an important part of the present-day mass media or entertainment industry (also informally known as show business or show biz). This industry [45] consists of the technological and commercial institutions of filmmaking, i.e. film production companies, film studios, cinematography, film production, screenwriting, pre-production, post-production, film festivals and distribution, as well as actors, film directors and other film crew personnel.
The major business centers of film making are in the
United States, India, Hong Kong and Nigeria. The average
cost [5] of a worldwide release of a Hollywood or American film (including pre-production, filming and post-production, but excluding distribution costs) is about $65 million. It can stretch up to $300 million [6] (Pirates of the Caribbean: At World's End). Worldwide gross revenue [7] can be almost $2.8 billion (Avatar). Profit and loss are found to vary from a profit [8] of 2975.63% (City Island) to a loss [9] of 1299.7% (Zyzzyx Road). So it will be very useful if we can develop a prediction system which can predict a film's business potential.
Many artificial neural network based methods have been designed for successful stock market prediction [1], weather prediction [2], image processing [3], time series prediction [10], temperature prediction [11], etc. Likewise, a back-propagation neural network can be used for the prediction of the profit/loss of a movie based on some pre-defined genres. In this paper we propose a training method to train a back-propagation neural network for prediction of the possible business of a movie [47]. In this training method we choose a subset of the training data from the entire training data based on the target movie's features. The formulation of the problem is presented in the next section. Section III describes our proposed method, which is presented in the form of an algorithm in Section III-A. Experimental results on 25 movies selected randomly from a given database can be found in Section IV. Concluding remarks and the scope for further work are given in Section V.
II. STATEMENT OF THE PROBLEM

Every film can be identified by certain film genres. In film theory, genre [12] refers to the method of categorization based on similarities in the narrative elements from which films are constructed. Most theories of film genre are borrowed from literary genre criticism. Some basic film genres are action, adventure, animation, biography, comedy, crime, drama, family, fantasy, horror, mystery, romance, science-fiction, thriller, war, etc. One film can belong to more than one genre; for example, the movie titled Alice in Wonderland (2010) belongs to [13] the action, adventure and fantasy genres.
Any film's success is highly dependent on its film genres. Other important factors are the reputation of the film studio or production house and the present popularity of the cast. We can consider these genres and the said factors as a film's attributes. We can then collect these attribute data for past films, and based on these data we can predict an upcoming film's future business.
We have used 19 genres: action, adventure, animation, biography, comedy, crime, documentary, drama, family, fantasy, horror, musical, mystery, romance, science-fiction, sport, thriller, war and western. Note that factors such as the overall rating, the reputation of the film studio or production house and the present popularity of the actors/actresses cast for the film have been taken care of by the inclusion of the attribute named Production house rating.
Our main database consists of 118 movies released in the year 2010 [14]. For training purposes we have chosen 20 movies which are similar to our target movie. We consider the movies as Euclidean vectors, take the Euclidean distances from our target movie, and pick the 20 movies which are closest to it. These 20 movies are used to train our neural network.
Artificial neural network [16] learning methods provide a robust approach to approximating real-valued, discrete-valued and vector-valued target functions. For certain types of problems, such as learning to interpret complex real-world sensor data, artificial neural networks are among the most effective learning methods currently known. Artificial neural networks have been applied in image recognition and classification [17], image processing [18], feature extraction from satellite images [19], cash forecasting for a bank branch [20], stock market prediction [21], decision making [22], temperature forecasting [23], atomic mass prediction [24], prediction of thrombo-embolic stroke [25], time series prediction [26] and forecasting groundwater level [27].
Back-propagation is a common method of teaching artificial neural networks how to perform a given task. It is a supervised learning method and is most useful for feed-forward networks [28]. Back-propagation neural networks have been successfully applied in image compression [29], satellite image classification [30], irregular shapes classification [31], email classification [32], time series prediction [33], bankruptcy prediction [34] and weather forecasting [35].
III. PROPOSED METHOD

In this paper, we propose a method that uses a multilayer feed-forward neural network as shown in Figure 1. A multilayer feed-forward neural network consists of an input layer, one or more hidden layers, and an output layer.
Here the back-propagation algorithm [15] performs learning on the said multilayer feed-forward neural network. It iteratively learns a set of weights for prediction of the profit/loss percentage (on a 10 scale) of instances.
We have used 19 genres: action, adventure, animation, biography, comedy, crime, documentary, drama, family, fantasy, horror, musical, mystery, romance, science-fiction, sport, thriller, war and western. Factors such as the overall rating, the reputation of the film studio or production house and the present popularity of the actors/actresses are taken care of by the inclusion of the attribute named Production house rating. Since we have 21 attributes to be considered for this method, we need 21 nodes in the input layer; we have taken 10 nodes in the hidden layer and 1 node in the output layer.
The input nodes receive real numbers that represent the values of the individual genres. A positive (negative) real number generated at the output node indicates the predicted profit (loss) of the movie in consideration.

Figure 1. A multilayer feed-forward neural network.

We have used a database of 118 movies released in the year 2010 [14]. For training of the said network we have used 20 movies, selected by shortest Euclidean distance from the target movie. After the network has been successfully trained, it can be used for prediction of the profit/loss of a new movie to be released. In our experiments we have taken some released movies of 2010 (not included in the training set) for evaluating the efficiency of the trained back-propagation network. The experimental results are presented in Section IV.
A. Algorithm: Selection of the training data set

Input:
  A data set consisting of the 19 genres of the movies (the target movie is not present in this data set).
Output:
  The 20 movies which are most similar to the target movie.
Method:
Step 1. For each movie q in the data set:
Step 2.   compute the Euclidean distance d = ||p - q|| = sqrt( sum over j of (p_j - q_j)^2 ), where p = (p_1, p_2, ..., p_n) is the target movie and q = (q_1, q_2, ..., q_n) is the movie from the data set.
Step 3. Sort the movies in increasing order of distance d.
Step 4. Select the first 20 movies.
Step 5. Stop.
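A minimal sketch of this selection step, assuming the movies are stored as rows of a NumPy array of attribute vectors (names and shapes are illustrative):

import numpy as np

def select_training_subset(target, movies, k=20):
    """Pick the k movies whose attribute vectors are closest (Euclidean distance)
    to the target movie; these rows form the training set."""
    dists = np.linalg.norm(movies - target, axis=1)    # Step 2: distances to the target
    order = np.argsort(dists)                          # Step 3: sort by distance
    return movies[order[:k]], order[:k]                # Step 4: first k movies

# toy usage
rng = np.random.default_rng(2)
movies = rng.random((117, 21))
target = rng.random(21)
subset, idx = select_training_subset(target, movies)
print(subset.shape)        # (20, 21)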
B. Algorithm: Back-propagation neural network learning for prediction

Input:
  D, a data set consisting of the genres of the previously selected 20 movies and their associated target values (the actual profit or loss percentage);
  l, the learning rate.
Output:
  A trained neural network which can predict the profit percentage.
Method:
Step 1.  Initialize all weights and biases in the network;
Step 2.  While the terminating condition is not satisfied {
Step 3.    For each training instance X in D {
           // Propagate the inputs forward:
Step 4.      For each input layer unit j {
Step 5.        O_j = I_j; } // the output of an input unit is its actual input value
Step 6.      For each hidden or output layer unit j {
Step 7.        I_j = sum over i of (w_ij * O_i) + theta_j; // net input of unit j with respect to the previous layer i
Step 8.        O_j = 1 / (1 + e^(-I_j)); } // output of each unit j
           // Back-propagate the errors:
Step 9.      For each unit j in the output layer
Step 10.       Err_j = O_j (1 - O_j)(T_j - O_j); // compute the error
Step 11.     For each unit j in the hidden layers, from the last to the first hidden layer
Step 12.       Err_j = O_j (1 - O_j) * sum over k of (Err_k * w_jk); // error with respect to the next higher layer k
Step 13.     For each weight w_ij in the network {
Step 14.       delta_w_ij = (l) Err_j O_i; // weight increment
Step 15.       w_ij = w_ij + delta_w_ij; } // weight update
Step 16.     For each bias theta_j in the network {
Step 17.       delta_theta_j = (l) Err_j; // bias increment
Step 18.       theta_j = theta_j + delta_theta_j; } // bias update
Step 19.   }}
Step 20. Stop.
At first, the weights in the network are initialized to small random numbers [15]; the bias associated with each unit is also initialized to a small random number.
Each training instance from the movie database is fed to the input layer. Next, the net input and output of each unit in the hidden and output layers are computed. A hidden layer or output layer unit is shown in Figure 2.

Figure 2. A hidden or output layer unit j
Step10. "## = $1   &'   ; // compute the error

The net input to unit j is


  )    

www.asdf.org.in

2012 :: Techno Forum Research and Development Centre, Pondicherry, India

(1)

www.icca.org.in

Proc. of the Second International Conference on Computer Applications 2012 [ICCA 2012]

Where )  is the connection weight from unit i, in the


previous layer to unit j;  is the output of unit i from the
previous layer; and  is the bias of the unit.
As shown in the Figure2, each unit in the hidden and
output layers takes its net input and then applies an
activation function to it. The function (sigmoid) symbolizes
the activation of the neuron represent by the unit. Given the
net input  to unit j, then , the output of unit j computed as
 



 !

(2)

The error of each unit is computed and propagated


backward. For a unit j in the output layer the error "## is
computed by
"## =  $1   &'   
(3)
where  is the actual output of unit j, and ' is the known
target value of the given training instance. The error of a
hidden layer unit j is
"## =  $1   & ( "##( )(
(4)
where )( is the weight of the connection from unit j to a
unit k in the next higher layer, and "##( is the error of unit
k.
The weights and biases are updated to reflect the propagated
errors. Weights are updated
By the following equations, where )  is the change in
weight ) 
*)   +"## 
(5)
)   )   *) 
(6)
l is the learning rate. In our experiment it is 0.1.
Biases are also updated, if  is the change in  then
 =+"##
(7)
 =  * 
(8)
The weight and bias increments could be accumulated
in variables, so that the weights and biases are updated after
all of the instances in the training set have been presented.
In our experiment we have used this strategy named epoch
updating, where one iteration through the training set is an
epoch.
The training stops when
All *)  in the previous epoch were so small as to
be below some specified threshold, or
The percentage of instance misclassified in the
previous epoch is below some threshold,
Or
A pre specified number of epochs have expired.
In our experiment we have specified the number of epochs
as 1000.
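A minimal sketch of a 21-10-1 back-propagation network following equations (1)-(8) and the epoch-updating strategy described above (the weight initialization range and random data are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, t, hidden=10, eta=0.1, epochs=1000, seed=0):
    """Forward pass with sigmoid units, error terms Err_j, and weight/bias
    increments accumulated per epoch (epoch updating)."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (X.shape[1], hidden)); b1 = rng.uniform(-0.5, 0.5, hidden)
    W2 = rng.uniform(-0.5, 0.5, (hidden, 1));          b2 = rng.uniform(-0.5, 0.5, 1)
    for _ in range(epochs):
        dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
        dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
        for x, target in zip(X, t):
            h = sigmoid(x @ W1 + b1)                    # eq. (1)-(2), hidden layer
            o = sigmoid(h @ W2 + b2)                    # output layer
            err_o = o * (1 - o) * (target - o)          # eq. (3)
            err_h = h * (1 - h) * (W2[:, 0] * err_o)    # eq. (4)
            dW2 += eta * np.outer(h, err_o); db2 += eta * err_o        # eq. (5), (7)
            dW1 += eta * np.outer(x, err_h); db1 += eta * err_h
        W1 += dW1; b1 += db1; W2 += dW2; b2 += db2      # eq. (6), (8), once per epoch
    return W1, b1, W2, b2

# toy usage with random attribute vectors and scaled profit targets
rng = np.random.default_rng(3)
X, t = rng.random((20, 21)), rng.random(20)
params = train_backprop(X, t, epochs=100)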
IV. EXPERIMENTAL RESULTS

We have carried out our experiments on a movie database containing 118 American films with 19 genres, released in 2010 [14].
In our first experiment (Experiment 1) we used the 21 movie attributes and the scaled profit percentage (profit percentage / 100) of 117 movies out of 118 to train the network. The attributes of the target movie are used as input to the trained network to predict its scaled profit percentage. We carried out this experiment for 25 randomly selected movies released in 2010.
In our second experiment (Experiment 2) we conducted the experiment according to our proposed technique. At first we determined the 20 movies out of the 117 which are most similar to our target movie. Then we trained our neural network with those 20 movies' 21 attributes and their scaled profit percentages. The attributes of the target movie are used as input to the trained network to predict its scaled profit percentage. We carried out this experiment for the same 25 randomly selected movies released in 2010.
The results obtained from the two experiments are presented in Table 1.
It may be noted that the data about the investment and earnings of all the movies are obtained from Wikipedia (http://en.wikipedia.org), and the ratings and genres of the movies are obtained from IMDb (http://www.imdb.com/).

TABLE 1. Experiment results (values are rounded to 3 decimals)

Movie Name (A) | Actual Profit in % (B) | Profit % predicted by Experiment 1 (C) | Error Rate in Experiment 1 (%) = ABS(B-C)*100/ABS(B) | Profit % predicted by Experiment 2 (E) | Error Rate in Experiment 2 (%) = ABS(B-E)*100/ABS(B)
Blue Valentine | 1135.574 | 521.1 | 54.112 | 1168.98 | 2.942
Case 39 | 4.408 | 18.995 | 330.922 | 3.002 | 31.902
Conviction | -22.32 | -98.229 | 340.095 | -14.199 | 36.385
Cyrus | 41.77 | 48.412 | 15.902 | 43.189 | 3.398
Clash of the Titans | 294.572 | 263.585 | 10.52 | 306.395 | 4.014
Daybreakers | 157.083 | 190.17 | 21.064 | 163.24 | 3.92
Dinner for Schmucks | 25.228 | 81.741 | 224.01 | 24.393 | 3.31
Due Date | 225.716 | 290.06 | 28.507 | 221.86 | 1.709
Eat Pray Love | 240.991 | 212.72 | 11.732 | 218.15 | 9.478
Faster | 48.096 | 49.728 | 3.394 | 52.779 | 9.737
Grown Ups | 239.288 | 242.89 | 1.506 | 236.55 | 1.145
Hereafter | 162.995 | 164.8 | 1.108 | 172.45 | 5.801
Inception | 415.958 | 338.02 | 18.737 | 356.52 | 14.29
Leap Year | 71.618 | 65.626 | 8.367 | 144.42 | 101.654
Faster | 178.023 | 156.02 | 12.36 | 184.45 | 3.611
Megamind | 147.304 | 158.99 | 7.934 | 139.03 | 5.617
My Soul to Take | -16.095 | 21.854 | 35.782 | -10.513 | 34.682
Salt | 168.831 | 178.65 | 5.816 | 169.6 | 0.456
Tangled | 127.201 | 122.32 | 3.838 | 131.65 | 3.498
The Bounty Hunter | 240.805 | 269.06 | 11.734 | 250.57 | 4.056
The Expendables | 243.086 | 238.58 | 1.854 | 238.87 | 1.735
The Kids Are All Right | 767.647 | 805.31 | 4.907 | 749.96 | 2.305
The Next Three Days | 109.163 | 117.83 | 7.94 | 109.55 | 0.355
The Twilight Saga: Eclipse | 927.194 | 835.12 | 9.931 | 927.52 | 0.036
Wall Street: Money Never Sleeps | 92.498 | 80.096 | 13.408 | 89.561 | 3.176

V. CONCLUSION AND SCOPE FOR FURTHER WORK

It can be seen from Table 1 that our training method (Experiment 2), based on similar movies, shows a significant improvement in error percentage over the conventional training method (Experiment 1) that uses all the training instances. We obtained better results for 22 movies out of 25. Details of the other 3 movies are given in Table 2 below.
TABLE 2

Movie name | Error Rate in Experiment 1 (%) (E1) | Error Rate in Experiment 2 (%) (E2) | Difference ABS(E1-E2)
Life as We Know It | 3.394 | 9.737  | 6.343
Hereafter          | 1.108 | 5.801  | 4.693
Leap Year          | 8.367 | 101.654 | 93.287

In Table 2, for the first two movies the difference between the error rates of Experiment 1 and Experiment 2 is quite low: for the movies named Faster and Hereafter the differences are 6.343 and 4.693 respectively.
For the movie named Leap Year the difference is very large (93.287). This movie was released on the 8th of January, 2010. On that day a total of 7 movies were released in the USA. The other 6 movies were Daybreakers, Youth in Revolt, Bitch Slap, Crazy on the Outside, The Loss of a Teardrop Diamond and Wonderful World, and their overall ratings were 6.5 [48], 6.6 [49], 4.6 [50], 5.6 [51], 5.7 [52] and 6.1 [53]. Clearly the movie Leap Year, which has an overall rating of 6.1 [54], could not make the expected profit due to tough competition from the other movies.
Note that it is very difficult even for a human expert to predict the possible profit or loss of a new movie to be released. It seems that the genres of a movie play a significant role in its profit, but it is very difficult to analytically establish the relation between the values of the genres of a given movie and the profit that it makes. In this paper we have attempted to develop a heuristic method using a back-propagation neural network to solve this problem.
Further research work can be conducted in the following areas. We can search for more genres and/or the division of an existing genre into subgenres, which may lead to a higher prediction success rate. We can also conduct research on finding profitable movie trends (movie genres) by making a psychological analysis of people's likings or interests in some specific kinds of movies.
REFERENCES
[1] "Stock Market Prediction with Back propagation Networks" by Bernd
Freisleben
Published in: IEA/AIE '92 Proceedings of the 5th international conference
on
Industrial and engineering applications of artificial intelligence
and expert systems


[2] "Training back propagation neural networks with genetic algorithm for
weather forecasting" by Gill, J.; Singh, B.; Singh, S.; This paper appears
in: Intelligent Systems and Informatics (SISY), 2010 8th
International Symposium
[3] "Multispectral image-processing with a three-layer back propagation
network" by McClellan, G.E.; DeWitt, R.N.; Hemmer, T.H.; Matheson,
L.N.; Moe, G.O.; Pacific-Sierra Res. Corp., Arlington, VA This paper
appears in: Neural Networks, 1989. IJCNN., International Joint Conference
[4] http://en.wikipedia.org/wiki/Film
[5] http://www.the-numbers.com/glossary.php
[6] http://en.wikipedia.org/wiki/List_of_most_expensive_films
[7]http://en.wikipedia.org/wiki/List_of_highest-grossing_films
[8] http://en.wikipedia.org/wiki/City_Island_%28film%29
[9] http://en.wikipedia.org/wiki/Zyzzyx_Road
[10] "Time Series Prediction and Neural Networks" by R. J. Frank, N. Davey and S. P. Hunt, Department of Computer Science, University of Hertfordshire, Hatfield, UK.
[11] "An Efficient Weather Forecasting System using Artificial Neural Network" by Dr. S. Santhosh Baboo and I. Kadar Shereef.
[12] http://en.wikipedia.org/wiki/Film_genre
[13] http://www.imdb.com/title/tt0499549/
[14]http://en.wikipedia.org/wiki/List_of_American_films_of_2010
[15] Data Mining: Concepts and Techniques, 2nd ed.Page 328 by Jiawei
Han and Micheline Kamber
[16] " Machine Learning " by Tom Mitchell page 81
[17] "Application of artificial neural networks in image
recognition and classification of crop and weeds" by C.C. YANG, S.O.
PRASHER, J.A. LANDRY, H.S. RAMASWAMY and A. DITOMMASO
[18] " Applications of Artificial Neural Networks to Facial Image
Processing " by Thai Hoang Le
[19] " THE APPLICATION OF NEURAL NETWORKS, IMAGE
PROCESSING AND CAD-BASED ENVIRONMENTS FACILITIES IN
AUTOMATIC ROAD EXTRACTION AND VECTORIZATION FROM
HIGH RESOLUTION SATELLITE IMAGES " by F. Farnood Ahmadia,
M. J. Valadan Zoeja, H. Ebadia, M. Mokhtarzadea
[20] " Cash Forecasting: An Application of Artificial
Neural Networks in Finance " by PremChand Kumar, Ekta Walia
[21] " Stock Market Prediction Using Artificial Neural Networks " by
Birgul Egeli, Meltem Ozturan, Bertan Badur
[22] " ARTIFICIAL NEURAL NETWORK MODELS FOR
FORECASTING AND DECISION MAKING " by Tim Hill, Leorey
Marquez, Marcus O'Connor, William Remus
[23] " Application of Artificial Neural Networks for
Temperature Forecasting " by Mohsen Hayati, and Zahra Mohebi
[24]" Atomic Mass Prediction with Articial Neural Networks " by xuru.org
[25] " Designing an Artificial Neural Network Model for the Prediction of
Thrombo-embolic Stroke " by D.Shanthi, Dr.G.Sahoo , Dr.N.Saravanan
[26] " Time Series Prediction and Neural Networks " by R.J.Frank,
N.Davey, S.P.Hunt
[27] " Forecasting groundwater level using artificial
neural networks " by P. D. Sreekanth, N. Geethanjali, P. D. Sreedevi,
Shakeel Ahmed, N. Ravi Kumar and P. D. Kamala Jayanthi
[28]http://en.wikipedia.org/wiki/Backpropagation
[29] " Image Compression with Back-Propagation
Neural Network using Cumulative Distribution
Function " by S. Anna Durai, and E. Anna Saro
[30] " Satellite Image Classification using the Back Propagation Algorithm
of Artificial Neural Network. " by Mrs. Ashwini T. Sapkal, Mr.
Chandraprakash Bokhare and Mr. N. Z. Tarapore
[31] " Irregular shapes classification by back-propagation
neural networks " by Shih-Wei Lin, Shuo-Yan Chou and Shih-Chieh Chen
[32] " Email Classification Using Back Propagation Technique " by Taiwo
Ayodele, Shikun Zhou, Rinat Khusainov
[33] "Parallel back-propagation for the prediction of time series " by Frank
M. Thiesing, Ulrich Middelberg and Oliver Vornberger
[34] " APPLYING BACKPROPAGATION NEURAL NETWORKS TO
BANKRUPTCY PREDICTION " by Yi-Chung Hu and Fang-Mei Tseng
[35] " An Efficient Weather Forecasting System using Artificial Neural
Network " by Dr. S. Santhosh Baboo and I.Kadar Shereef
[36] http://www.film-releases.com/film-release-schedule-2010.php


[37]http://www.imdb.com/title/tt1126591/
[38]http://www.imdb.com/title/tt1433108/
[39]http://www.imdb.com/title/tt0758752/
[40]http://en.wikipedia.org/wiki/Tron:_Legacy
[41]http://en.wikipedia.org/wiki/Tron_%28film%29
[42]http://www.imdb.com/title/tt1104001/
[43]http://www.imdb.com/title/tt0398286/
[44] http://en.wikipedia.org/wiki/List_of_American_films_of_2009
[45] http://en.wikipedia.org/wiki/Film_industry
[46]http://www.imdb.com/title/tt1041804/
[47] "A Method of Movie Business Prediction using Artificial Neural Network" by Debaditya Barman and Dr. Nirmalaya Chowdhury, published in International Conference on Soft Computing and Engineering Application, SEA 2011, pp. 54-59.
[48]http://www.imdb.com/title/tt0433362/
[49]http://www.imdb.com/title/tt0403702/
[50]http://www.imdb.com/title/tt1212974/
[51]http://www.imdb.com/title/tt1196134/
[52]http://www.imdb.com/title/tt0896031/
[53]http://www.imdb.com/title/tt0857275/
[54]http://www.imdb.com/title/tt1216492/


Analysis of Soft Computing in Neural Network


M.Premalatha
Research Scholar, Department of Mathematics
Sathyabama University,
Chennai, India

Abstract: Soft Computing technologies are the main topics that provide the basic knowledge of fuzzy systems (FSs), neural networks (NNs) and genetic algorithms (GAs). Most of the attempts at defining the evolving term Soft Computing agree that it is a collection of techniques which uses the human mind as a model and aims at formalizing our cognitive processes. These methods are meant to operate in an environment that is subject to uncertainty and imprecision. The objective of this paper is to study, model and analyze complex phenomena for which more conventional methods are not suitable, using neural networks. To analyze the performance of the network, various test data are given as input to the network. To speed up the learning process, adjustments are made at each neuron in all hidden and output layers. The results show that the multilayer neural network is trained more quickly than a single-layer neural network and that its classification efficiency is also high. The experimental results prove that the neural network technique provides satisfactory results for the classification task. The guiding principle of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness, and low solution cost.
Keywords- Branches of Soft computing, General Network
Structure, Linear classification, Adaline, Application of NN in
Medical System (Breast Cancer)

I. INTRODUCTION

Soft computing replaces the traditional time-consuming and complex techniques of hard computing with more intelligent processing techniques. According to scientific folklore, the name Computational Intelligence was chosen to indicate the link to, and the difference from, Artificial Intelligence [1]. Artificial Intelligence techniques are top-down, whereas Computational Intelligence techniques are generally bottom-up, with order and structure emerging from an unstructured beginning. It is widely accepted that the main components of Soft Computing are: Fuzzy Logic, Probabilistic Reasoning, Neural Computing and Genetic Algorithms. These four constituents share common features and are considered complementary rather than competitive. The first, and probably the most successful, hybrid approach so far is the so-called neuro-fuzzy system. In the partnership of the mentioned collection of computational techniques that compose soft computing, fuzzy logic is concerned with
Proc. of the Intl. Conf. on Computer Applications Volume 1.
Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2 :: doi: 10.73256/ISBN_0768
ACM #: dber.imera.10.73256

www.asdf.org.in

imprecision, approximate reasoning and the representation of aspects that are only qualitatively known; Probabilistic Reasoning, such as Bayesian belief networks, deals with uncertainty and belief propagation; Neural Computing focuses on the understanding of neural networks and learning systems, self-organizing structures, and the implementation of models from available data; last but not least, Genetic and Evolutionary Computing provide approaches to computing based on analogues of natural selection, such as optimization methods. The interest in neural networks comes from the network's ability to mimic the human brain as well as its ability to learn and respond. Applications include pattern recognition, classification, vision, control systems, and prediction. Adaptation, or learning, is a major focus of neural-net research and provides a degree of robustness to the NN model. In predictive modeling, the goal is to map a set of input patterns onto a set of output patterns [1] [4]. An NN accomplishes this task by learning from a series of input/output data sets presented to the network. The trained network is then used to apply what it has learned to approximate or predict the corresponding output.
II. BRANCHES OF SOFT COMPUTING

Soft computing is the fusion of methodologies designed to model, and enable solutions to, real-world problems which are not modeled, or are too difficult to model, mathematically. The aim of soft computing is to exploit the tolerance for imprecision, uncertainty, approximate reasoning and partial truth in order to achieve a close resemblance to human-like decision making. Soft Computing is an emerging approach to computing which parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision [2] [3].
Approximation: here the model features are similar to the real ones, but not the same.
Uncertainty: here we are not sure that the features of the model are the same as those of the entity.
Imprecision: here the model features (quantities) are not the same as the real ones, but close to them.
FUZZY SETS: for knowledge representation via fuzzy IF-THEN rules.
NEURAL NETWORKS: for learning and adaptation.
GENETIC ALGORITHMS: for evolutionary computation.
Soft Computing = Evolutionary Computing + Neural Network + Fuzzy Logic
SC = EC + NN + FL


Evolutionary Computing = Genetic Programming + Evolution Strategies + Evolutionary Programming + Genetic Algorithms
EC = GP + ES + EP + GA
"Artificial Neural Networks are massively parallel
interconnected networks of simple (usually adaptive) elements
and their hierarchical organizations which are intended to
interact with the objects of the real world in the same way as
biological nervous systems do. The biological neural systems
do not apply principles of digital or logic circuits [4]
1. Neither the neurons, nor the synapses are bistable
memory elements. Neurons seem to act as analogue
integrators and the efficacies of the synapses change
gradually without flipping back and forth.
2. No machine instructions or control codes occur in
neural computing.
3. The neural circuits do not implement recursive
computation using algorithms.
4. Even on the highest level, the nature of information
processing is different in the brain or neural networks
and in the digital computers.
2.1. Soft Computing : Neural Network
Neural Nets: Classification - Supervised Learning
Multilayer perceptrons
Radial basis function networks
Modular neural networks
LVQ (learning vector quantization)
Unsupervised Learning
Competitive learning networks
Kohonen self-organizing networks
ART (adaptive resonance theory)
Others
Hopfield networks
Supervised Neural Networks
2.2. General Networks
The structure of the process is built into the NN; we can obtain a particular state-space structure as the result of the identification, such as controllability canonical or observability canonical forms, etc.
[Figure 1: General Neural Network - input layer, hidden layer and output layer neurons (inputs u1...up, states X1...xN, outputs y1...yq) with internal interconnections (Wxx), self-feedback, known thresholds, and input/output connections.]
III. LINEAR CLASSIFICATION

Consider two input pattern classes C1 and C2. The weight adaptation at the k-th training phase can be formulated as follows:
1. If the k-th member of the training vector, x(k), is correctly classified, no corrective action is needed for the weight vector. Since the activation function is selected as a hard limiter, the following conditions will be valid:
W(k+1) = W(k) if output > 0 and x(k) ∈ C1, and
W(k+1) = W(k) if output < 0 and x(k) ∈ C2.
2. Otherwise, the weights should be updated in accordance with the following rule:
W(k+1) = W(k) + η x(k) if output ≤ 0 and x(k) ∈ C1
W(k+1) = W(k) − η x(k) if output ≥ 0 and x(k) ∈ C2
where η is the learning rate parameter, which should be selected between 0 and 1.
2.3. Perceptron: Learning
The perceptron learning algorithm (delta rule):
Step 1: Initialize the weights W1, W2, ..., Wn and the threshold θ to small random values.
Step 2: Present a new input X1, X2, ..., Xn and the desired output dk.
Step 3: Calculate the actual output based on the following formula:
yk = fk( Σ(i=1..N) xi wi − θk )
Step 4: Adapt the weights according to the following equation:
Wi(new) = Wi(old) + η (dk − yk) xi,  0 ≤ i ≤ N
where η is a positive gain fraction less than 1 and dk is the desired output. The weights remain the same if the network makes the correct decision.
Step 5: Repeat the procedure in Steps 2-4 until the classification task is completed.
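A minimal transcription of Steps 1-5 into code is given below, assuming a single hard-limiter neuron and an illustrative AND-gate data set; the learning rate and epoch count are arbitrary choices, not values prescribed by the algorithm.

# Perceptron delta rule (Steps 1-5) for a single hard-limiter neuron.
import random

def train_perceptron(samples, eta=0.2, epochs=50, seed=1):
    random.seed(seed)
    n = len(samples[0][0])
    # Step 1: initialise weights and threshold to small random values.
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    theta = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, d in samples:                        # Step 2: present input and desired output
            a = sum(wi * xi for wi, xi in zip(w, x)) - theta
            y = 1 if a >= 0 else 0                  # Step 3: actual output via hard limiter
            # Step 4: adapt weights; nothing changes when the decision is already correct.
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
            theta = theta - eta * (d - y)           # threshold adapts like a bias weight
    return w, theta                                 # Step 5: repeat until the task is learned

# Usage: learn the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, theta = train_perceptron(data)
for x, d in data:
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0 else 0
    print(x, "->", y, "(desired", d, ")")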
3.2. Adaline
An ADAptive LINear Element (ADALINE) consists of a single neuron of the McCulloch-Pitts type, whose weights are determined by the normalized least mean square (LMS) training law [5]. The LMS learning algorithm was originally proposed by Widrow and Hoff; this learning rule is also referred to as the delta rule. Since the ADALINE is a linear device, any combination of these units can be accomplished with the use of a single unit. During the training phase of the ADALINE, the input vector x ∈ Rn, X = [x1, x2, x3, ..., xn]^T, as well as the desired output, are presented to the network. The weights are adaptively adjusted based on the delta rule. After the ADALINE is trained, an input vector presented to the network with fixed weights will result in a scalar output; therefore, the network performs a mapping from an n-dimensional input to a scalar value. The activation function is not used during the training phase. Once the weights are properly adjusted, the response of the trained unit can be tested by applying



various inputs which are not in the training set.
[Figure 2: Adaline - inputs X1, X2, ..., Xn weighted by W1, W2, ..., Wn; the unit's output is compared with the desired output to form the error.]
If the network produces responses that are consistent to a high degree with the test inputs, it is said that the network can generalize. Therefore, the processes of training and generalization are two important attributes of the network.
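The sketch below illustrates the LMS (delta) training of a single ADALINE as described above, assuming a noisy linear data set and an illustrative learning rate; the bias is carried by a constant first input, and no activation function is applied during training.

# ADALINE trained with the LMS (delta) rule: w <- w + eta*(d - w.x)*x  (illustrative data).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
X = np.hstack([np.ones((200, 1)), X])            # x0 = 1 carries the bias weight
true_w = np.array([0.5, 2.0, -1.0])
d = X @ true_w + rng.normal(0, 0.05, size=200)   # desired scalar outputs

w = np.zeros(3)
eta = 0.05
for epoch in range(20):
    for x_i, d_i in zip(X, d):
        o = w @ x_i                              # linear output (no activation while training)
        w = w + eta * (d_i - o) * x_i            # delta / LMS update

print("learned weights:", np.round(w, 3))        # should approach [0.5, 2.0, -1.0]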
3.3. Delta Training Rule
The training of the network is based on the delta training rule method [5]. Consider a single neuron (Fig. 3). The relations among the input, the activity level and the output of the system can be shown as follows:
a = w0 + w1 i1 + w2 i2 + . . . + wn in   (1)
or, in matrix form,
a = w0 + W^T I   (2)
O = f(a)   (3)
where W and I are the weight and input vectors of the neuron, a is the activity level of the neuron and O is the output of the neuron. w0 is called the bias value. Suppose the desired value of the output is equal to d. The error e can be defined as follows:
e = (1/2)(d − o)^2   (4)
By substituting Equations (2) and (3) into Equation (4), the following relation holds:
e = (1/2)[d − f(w0 + W^T I)]^2   (5)

The error gradient vector can be calculated as follows:
∇e = −(d − o) f′(w0 + W^T I) I   (6)
The components of the gradient vector are equal to:
∂e/∂wj = −(d − o) f′(w0 + W^T I) Ij   (7)
where f′(·) is the derivative of the activation function. To minimize the error, the weight changes should be in the negative gradient direction. Therefore we will have
ΔW = −η ∇e   (8)
where η is a positive constant, called the learning factor. By Equations (6) and (7), ΔW is calculated as follows:
ΔW = η (d − o) f′(a) I   (9)
For each weight j, Equation (9) can be written as:
Δwj = η (d − o) f′(a) Ij,  j = 0, 1, 2, ..., n   (10)
Therefore we update the weights of the network as:
wj(new) = wj(old) + Δwj,  j = 0, 1, 2, ..., n   (11)
Through generalization of Equation (11) for the normalized error, and using Equation (10) for every neuron in the output layer, we will have:
wj(new) = wj(old) + η (dj − oj) f′(a) xj / ||X||^2,  j = 0, 1, 2, ..., n   (12)
where X ∈ R^n is the input vector to the last layer, xj is the j-th element of X and ||·|| denotes the L2-norm. The above method can be applied to the hidden layers as well; the only difference is that oj is replaced by yj in (12), where yj is the output of a hidden-layer neuron, not the output of the network.
One of the drawbacks of the back-propagation learning algorithm is the long duration of the training period. Remedies include the addition of first and second moments to the learning phase, choosing proper initial conditions, and selection of an adaptive learning rate. In such an approach the network memorizes its previous adjustment and can therefore escape local minima using previous updates. The new equation can be written as follows:
Wj(new) = wj(old) + η (dj − oj) f′(aj) xj / ||X||^2 + α [Wj(new) − wj(old)]   (13)
where α is a number between 0 and 1, namely the momentum coefficient.
[Figure 3: Single Neuron - inputs i1, i2, ..., in weighted by W1, W2, ..., Wn (with bias weight W0 and i0 = 1), summed and passed through the activation f(·).]
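As an illustration of Equations (9)-(13), the sketch below trains a single sigmoid output neuron with the normalized delta rule plus a momentum term. The learning factor, momentum coefficient and toy data are illustrative; the momentum term reuses the previous weight change, and a small epsilon guards the division by the squared input norm.

# Normalized delta rule with momentum for one sigmoid neuron (illustrative settings).
import numpy as np

rng = np.random.default_rng(7)
f = lambda a: 1.0 / (1.0 + np.exp(-a))             # activation
df = lambda a: f(a) * (1.0 - f(a))                 # f'(a) for the sigmoid

X = rng.uniform(-1, 1, size=(100, 3))
d = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)    # toy targets in {0, 1}

w0, w = 0.0, np.zeros(3)                           # bias value and weight vector
eta, alpha = 0.5, 0.8
prev_dw, prev_dw0 = np.zeros(3), 0.0

for epoch in range(200):
    for x, t in zip(X, d):
        a = w0 + w @ x                             # activity level, Eq. (2)
        o = f(a)                                   # output, Eq. (3)
        # normalized delta update with momentum, Eqs. (12)-(13); epsilon avoids /0
        dw = eta * (t - o) * df(a) * x / (x @ x + 1e-8) + alpha * prev_dw
        dw0 = eta * (t - o) * df(a) + alpha * prev_dw0
        w, w0 = w + dw, w0 + dw0
        prev_dw, prev_dw0 = dw, dw0

print("training accuracy:", np.mean((f(w0 + X @ w) > 0.5) == (d > 0.5)))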


[Figure 4: Feed-forward network with initial weights - inputs i1 = 1, i2 = 0, i3 = 1; hidden nodes j[0], j[1]; output nodes o[0], o[1]; initial weights 0.1, 0.5, 0.6, 0.6, 0.3, 0.2, 0.7, 0.8, 0.9, 0.5.]

Nguyen and Widrow have proposed a systematic approach for the proper selection of initial conditions in order to decrease the training period of the network. Another approach to improve the convergence of the network and increase the convergence speed is the adaptive learning rate [10]. In this method, the learning rate of the network (η) is adjusted during training. In the first step, the training coefficient is selected as a large number, so the resulting error values are large; however, the error decreases as the training progresses, due to the decrease in the learning rate.
Consider the network of Figure 4 with the initial values as indicated. The desired values of the output are d0 = 0 and d1 = 1. We show two iterations of learning of the network using back propagation. Suppose the activation function of the first layer is a sigmoid and the activation function of the output is a linear function:
f(x) = 1 / (1 + e^−x)   (14),  with f′(x) = f(x)[1 − f(x)].

Table 2: Iteration Values of the Feed-forward Network

Steps | Iteration Number 1 | Iteration Number 2
Step 1 | Initialization | New weight values from Iteration 1
Step 2 | Forward calculation | Previous iteration
J[0] | 0.7109 | 0.5640
J[1] | 0.7503 | 0.5163
O[0] | 0.88066 | 0.4991
O[1] | 0.80169 | 0.6299
Step 3:
δk[0] | -0.88066 | -0.4991
δk[1] | 0.19831 | 0.3701
Step 4:
Wk00(new) | 0.3694 | 0.3032
Wk01(new) | 0.56309 | 0.5025
Wk10(new) | 0.6301 | 0.6774
Wk11(new) | 0.5138 | 0.5751
Wj00(new) | -0.2213 | -0.17248
Wj01(new) | 0.6 | 0.6
Wj02(new) | 0.4787 | 0.5275
Wj10(new) | -0.3173 | -0.4015
Wj11(new) | 0.3 | 0.3
Wj12(new) | 0.3827 | 0.2985

After the two iterations of training, the network values can be calculated as: J[0] = 0.5878, J[1] = 0.5257, O[0] = 0.4424, O[1] = 0.7005.


[Figure 5 (Graphical Representation of Output and Error): plot of Output 1, Output 2 and the Error Norm at the initial state and after Iterations 1 and 2; the values are listed in Table 3.]


Table 3: Summary of Outputs and Error Norm after Iterations
Error | Initial | Iteration 1 | Iteration 2
Output 1 | -0.8807 | -0.4991 | -0.4424
Output 2 | 0.1983 | 0.3701 | 0.2995
Error Norm | 0.9027 | 0.6213 | 0.5342
Choosing a very small value for this maximum error level may force the network to learn the inputs very well.
IV. APPLICATION OF ANN

This section discusses applications of artificial neural networks (ANNs) in medicine and the biological sciences: ANN solutions to classical engineering problems of detection, estimation, extrapolation, interpolation, control, and pattern recognition as they pertain to these sciences [6] [14]. Before discussing ANNs in medicine and the biological sciences, we first introduce a few standard measures that will be used to compare or report various results. These measures have been recommended and used to evaluate physicians and healthcare workers by various organizations, and therefore are good measures for evaluating the performance of any automated system that is designed to assist these healthcare professionals [15] [16].


4.1. STANDARD MEASURES
The American Heart Association (AHA) recommends the use of four measures to evaluate procedures for diagnosing CAD. Since these measures are useful in other areas of diagnosis as well, they can be used to evaluate most diagnostic systems:
Sensitivity = TPF = TP / (TP + FN)
Specificity = TNF = TN / (TN + FP)
PA = sensitivity × P(D) + specificity × [1 − P(D)]
PV = [sensitivity × P(D)] / [sensitivity × P(D) + (1 − specificity) × (1 − P(D))]
Here TP stands for true positive, FN stands for false negative, TN stands for true negative, and FP stands for false positive. Sensitivity, or true-positive fraction (TPF), is the probability of a patient who is suffering from a disease being diagnosed as such. Specificity, or true-negative fraction (TNF), is the probability that a healthy individual is diagnosed as such by a diagnosis mechanism for a specific disease. PA is the predictive accuracy, or the overall percentage of correct diagnoses. PV is the predictive value of a positive test, or the percentage of those who have the disease and have tested positive for it. P(D) is the a priori probability of a patient who is referred to the diagnosis procedure actually having the disease. In addition to TPF and TNF, we define two other related values: the false-positive fraction (FPF) is the probability of a healthy patient being incorrectly diagnosed as having a specific disease, and the false-negative fraction (FNF) is the probability that a patient who is suffering from a disease will be incorrectly diagnosed as healthy. In this way, the following relations can be established:
FPF = 1 − TNF
FNF = 1 − TPF
Suppose 100 patients were referred to the mammography department for diagnosis of breast cancer. Let us further assume that of the 100 individuals, 38 actually had a cancerous tumor, and the remaining 62 either did not have any tumor or did not have one that was malignant (cancerous). Let us further assume that a diagnosis procedure (manually conducted by physicians, by an automated system, or by both) correctly diagnosed 32 of the 38 cancer sufferers as having breast cancer; it, however, misdiagnosed six of those as being cancer free. Let us also assume that the procedure correctly classified 58 of the 62 cancer-free patients as such, and misclassified the remaining 4 as having breast cancer. Finally, let us assume that, on average, 35% of those who are referred to the mammography procedure actually have breast cancer [8].
For this data, TP = 32, FN = 6, TN = 58, FP = 4, and P(D) = 0.35. Hence,
Sensitivity = TPF = 32 × 100 / (32 + 6) = 84.21%
FNF = 1 − 84.21 = 15.79%
Specificity = TNF = 58 × 100 / (58 + 4) = 93.55%
FPF = 1 − 93.55 = 6.45%
PA = 84.21 × 0.35 + 93.55 × [1 − 0.35] = 90.28%
PV = (84.21 × 0.35) / (84.21 × 0.35 + (100 − 93.55) × [1 − 0.35]) = 87.55%
The overall system accuracy is therefore 90.28%, while the predictive value of a positive test is 87.55%.
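The four measures, together with FPF and FNF, can be computed directly from the confusion counts; the function below is checked against the 100-patient example (TP = 32, FN = 6, TN = 58, FP = 4, P(D) = 0.35) and reproduces the quoted percentages up to rounding.

# Diagnostic measures defined above, evaluated on the worked example.
def diagnostic_measures(tp, fn, tn, fp, p_d):
    sensitivity = tp / (tp + fn)                  # TPF
    specificity = tn / (tn + fp)                  # TNF
    fnf = 1.0 - sensitivity
    fpf = 1.0 - specificity
    pa = sensitivity * p_d + specificity * (1.0 - p_d)
    pv = sensitivity * p_d / (sensitivity * p_d + fpf * (1.0 - p_d))
    return {"TPF": sensitivity, "TNF": specificity, "FNF": fnf,
            "FPF": fpf, "PA": pa, "PV": pv}

m = diagnostic_measures(tp=32, fn=6, tn=58, fp=4, p_d=0.35)
for name, value in m.items():
    print(f"{name}: {100 * value:.2f}%")
# Matches the figures quoted above (TPF 84.21%, TNF 93.55%, PA 90.28%, PV ~87.5%) up to rounding.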

3.4. ANN Performance in Medical Systems
Cancer research: pattern recognition using ANNs in cancer research is likely to be the most active area of application of ANNs in medicine [14]. ANNs have been used extensively in various roles in cancer research, from tumor detection and analysis, to the detection of biochemical changes in the body due to cancer, to the analysis of follow-up data in primary breast cancer, to visualizing anticancer drug databases. Among the various types of cancer and detection methods, breast cancer diagnosis by means of ANN classification of mammography images has been one of the most widely studied [9]. One such system uses neural networks to aid in the diagnosis of breast cancer: it uses digital mammogram images to classify a case as having one of three possible outcomes: suspicion of malignant breast cancer, suspicion of benign breast cancer, or no suspicion of breast cancer. It uses a staged (layered) neural network with a set of identical single-layer networks as the input layer. These input-layer networks used localized receptive fields without overlapping in the mammogram image. The hidden and output layers of the network were each a single layer of perceptrons. The input layer was first trained, with regions taken from several mammograms, to become a feature extractor using the competitive learning algorithm [7]. The perceptron layers were then trained with the back-propagation learning algorithm. A TPF of 0.75 and an FPF of 0.06 are reported for the optimum operating point of the ANN system [8].
3.5. Applications of Soft Computing
The applications of soft computing have proved to have two main advantages. First, soft computing can solve nonlinear problems where mathematical models are not available or not possible. Second, it introduces human knowledge such as cognition, recognition, understanding, learning and others into the field of computing. This has resulted in the possibility of constructing intelligent systems such as autonomous self-tuning systems and automatically designed systems. The subject has recently gained importance because of its potential applications in problems like: Remotely Sensed Data Analysis, Data Mining,


Web Mining, Global Positioning Systems, Medical Imaging, Forensic Applications, Optical Character Recognition, Signature Verification, Multimedia, Target Recognition, Face Recognition, and Man-Machine Communication.
V. CONCLUSION

The described networks consist of highly parallel building blocks that illustrate NN design principles. In general, NN architectures cannot compete with conventional techniques at performing precise numerical operations. To analyze the performance of the network, various test data are given as input to the network, and adjustments are made at each neuron in all hidden and output layers to speed up the learning process. The results show that the multilayer neural network is trained more quickly than a single-layer neural network and that its classification efficiency is also high. Applications of ANNs in these fields are sure to help unravel some of the mysteries of various diseases and biological processes. In the SCA case, the ANN developed for the variable-selection process helped pinpoint the parameters that possibly play an important role in understanding the workings of SCA; this could lead to a significant increase in the life expectancy of SCA sufferers. Research into applications of ANNs in medicine and the biological sciences currently remains strong; with more systematic data collection routines implemented in healthcare facilities, such systems are likely to find their way into doctors' offices and hospital laboratories. There are large classes of problems, often involving ambiguity, prediction, or classification, that are more amenable to solution by NNs than by other available techniques.

ACKNOWLEDGEMENT
I express my sincere thanks to my guide, Dr. C. Vijayalakshmi, for her great support and valuable suggestions for this work.
References
[1] Nguyen, D. and Widrow, B., "Neural Networks for Self-Learning Control Systems," IEEE Control Syst. Mag., 18, 1990.
[2] Abraham, A., "Neuro-Fuzzy Systems: State-of-the-Art Modeling Techniques," Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, Springer-Verlag Germany, Jose Mira and Alberto Prieto (Eds.), Granada, Spain, pp. 269-276, 2001.
[3] Nekovie, R. and Sun, Y., "Back-propagation Network and its Configuration for Blood Vessel Detection in Angiograms," IEEE Trans. on Neural Networks, Vol. 6, No. 1, 64, 1995.
[4] Ham, F. and Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw Hill, New York, NY, 2001.
[5] Widrow, B. and Lehr, M. A., "Thirty Years of Adaptive Neural Networks: Perceptron, MADALINE, and Back Propagation," Proc. of the IEEE, Vol. 78, 1415-1442, 1990.
[6] Asada, N., Doi, K., MacMahon, H., Montner, S., Giger, M.L., Abe, C., and Wu, Y., "Neural Network Approach for Differential Diagnosis of Interstitial Lung Diseases," Proc. SPIE (Medical Imaging IV), 1233: 45-50, 1990.
[7] Ashenayi, K., Hu, Y., Veltri, R., Hurst, R., and Bonner, B., "Neural Network Based Cancer Cell Classification," Proc. of the World Congress on Neural Networks, San Diego, CA, 1, 416-421, June 5-9, 1994.
[8] Astion, M.L. and Wilding, P., "Application of Neural Networks to the Interpretation of Laboratory Data in Cancer Diagnosis," Clin. Chem. (US) 38, 34-38, 1992.
[9] Chen, D., Chang, R.F., and Huang, Y.L., "Breast Cancer Diagnosis Using Self-Organizing Map for Sonography," Ultrasound Med. Biol., 26(3), 405-411, March 2000.
[10] Brdys M.A. and Tatjewski P. (2005): Iterative Algorithms for Multilayer Optimizing Control. London: Imperial College Press/World Scientific.
[11] Haimovich H., Seron M.M., Goodwin G.C. and Aguero J.C. (2003): A neural approximation to the explicit solution of constrained linear MPC. Proc. European Control Conf., Cambridge, UK (on CD-ROM).
[12] Vila J.P. and Wagner V. (2003): Predictive neuro-control of uncertain systems: Design and use of a neuro-optimizer. Automatica, Vol. 39, No. 5, pp. 767-777.
[13] Yu D.L. and Gomm J.B. (2003): Implementation of neural network predictive control to a multivariable chemical reactor. Contr. Eng. Pract., Vol. 11, No. 11, pp. 1315-1323.
[14] H. Bunke and A. Kandel, Neuro-Fuzzy Pattern Recognition, World Scientific Publishing Co., Singapore, 2000.
[15] S. Mitra and Y. Hayashi, "Neuro-Fuzzy Rule Generation: Survey in Soft Computing Framework," IEEE Transactions on Neural Networks, Vol. 11, No. 3, pp. 748-768, 2000.
[16] Kusiak A., Kernstine K.H., Kern J.A., McLaughlin K.A. and Tseng T.L., "Data Mining: Medical and Engineering Case Studies," Proceedings of the Industrial Engineering Research 2000 Conference, Cleveland, Ohio, May 21-23, pp. 1-7, 2000.

Part III
Proceedings of the Second International Conference on
Computer Applications 2012

ICCA '12

Volume 2


RECONOS: Re-Configurable & Extensible OS Architecture


Pratik Gohil, Asst. Professor
S. V. Institute of Computer Studies
Kadi, India

Abstract: RECONOS is an environment dedicated to dynamic extensibility and re-configurability. The principles I am implementing are to use language techniques, including virtual machines, reflection and dynamic code generation, in the kernel of RECONOS. The portions of the operative architecture are customizable and configurable during execution; for this I have designed a single environment supporting targeted applications built in almost any byte-code programming or scripting language. All those services which are part of the operative environment, as well as user applications, are typed with an appropriate execution model. At the end of my research I expect maximum extensibility and minimum kernel size. This paper describes RECONOS and the related operative architecture components, including system structure, inter-service communication, extensibility, and application specificity, to outline a stable part of my research.
Keywords: RECONOS, re-configurable operative architecture, extensible kernel

I. INTRODUCTION

Any OS design paradigm, which explains how the various components making up an OS are structured, their inter-dependencies and their interaction methods, should be such that the OS itself becomes flexible: able to adjust itself to the environment given for its execution, able to accommodate new hardware and software technologies, and able to provide customized behavior for environments, operations and applications.
Given the emphasis on scaling out hardware instead of scaling up, on controlling e-waste, on utilizing available and cast-aside infrastructure to meet modern demands, on ever-changing hardware architectures, platform and software requirements, and on the availability of and improvements in Internet services and communication infrastructure, it is desirable that an OS be capable of evolving in a timely manner. The OS should be provided as a base platform for developers, making it possible for them to quickly and easily design, implement, debug and install new OS services, or even modify existing ones. Maintenance should be as easy as possible, making it a self-improving system even for a layman.
The next section introduces RECONOS, devised during my research work.
Proc. of the Intl. Conf. on Computer Applications Volume 1.
Copyright 2012 Techno Forum Group, India.
ISBN: 978-81-920575-5-2 :: doi: 10.73228/ISBN_0768
ACM #: dber.imera.10.73228


II. RECONOS ARCHITECTURAL OVERVIEW

The RECONOS is an environment dedicated to


dynamic extensibility and re-configurability (the ability to
specialize every level of both system and execution
environment to specific application need by using platform
independent re-configurable hardware).
Figure 1 shows the executable architecture of RECONOS. The portions of the operative architecture are customizable and configurable during execution; for this I have proposed a single environment supporting target applications (or application components) built in almost any byte-code programming or scripting language. All those services which are part of the operative environment, as well as user applications, are typed with an appropriate execution model. Each application type corresponds to a RECONOS description; descriptions are loaded on demand, whenever a new application type is encountered.
Any application, travelling over network (local as well
as on web), will almost certainly be written in a byte code
language, for which, RECONOS specifications will define
an executable engine suitable for interpreting the
application itself. Software communication buses and other
middleware components will define system services
possibly destined for use with fully compiled (native)
applications. In most of the cases, RECONOS
specifications define both the execution engine - service
and the system abstraction that enable applications to
coexist and access local resources.
The RECONOS architecture is modular, with individual reusable and reconfigurable service components. RECONOS specifications are imperative (not declarative) specifications that are executed when RECONOS is loaded. Executing supportive services when RECONOS is loaded permits a large degree of both flexibility and security: an external application can introspect on, and reason about, the environment into which it is loaded, modifying its behavior appropriately.
III. MY IDEA
My work describes a few common terms applicable to RECONOS, which are:
Self-Reconfigurable: A system is self-reconfigurable if the components comprising the system can be rearranged in a desired manner, either on the basis of self-discovered statistics or manually. With respect to an OS, this is restricted to

mean the replacement of services, or the migration of services between address spaces. These operations can be carried out to enhance system performance, to replace components which have been found to contain errors, to cope with the unavailability of desired hardware, or to provide richer functionality.
Self-Extensible: A system is self-extensible if new functionalities can be added, or unused functionalities removed, automatically or manually. This concept is further divided into sub-layers:
Global extensibility: the ability of the system to extend itself through intranet or internet repositories containing open-source services, designed and developed by a variety of developers, which become available to users of the OS in the current environment. This is very close, but not identical, to the concept of reconfigurability.
Application-specific extensibility: the ability of an application developer to insert code, or to replace or modify the behavior of a service, for a single application, depending on the availability of hardware and OS services, in order to carry out application-specific operations.

Figure 1. RECONOS System Execution Architecture

It is important to mention here that self-reconfiguration and self-extension are performed dynamically at run-time. I had conceptualized several advantages while starting to work on this idea:
For developers, it should become easier to improve or change parts of the OS functionality at runtime.
New services could be developed and downloaded to the kernel, replacing older ones from the central repositories.
Unreliable services could be re-allocated and installed into their own address space for debugging.
The system would generate and present statistics of available and used services in terms of reliability, efficiency and availability.
The system would identify the available hardware and operative environment and could load the related services to start the OS with minimum functionality at any time.
Run-time changes in hardware and software would be self-healed by the OS kernel itself after identifying the changed demands and downloading and loading the required services accordingly.
Applications could modify the system, as per the user's choice, to increase their own performance.
More than this, while designing the model I targeted several properties that are desirable for my architecture:
Incremental Development: System components must be built on an incremental basis. This lets the system developer conceptualize, develop and implement the core step-wise, and restricts the scope of changes for the desired extension.
Adjustable Communication: The variety of components in the system must communicate with each other, and the communication system used must be capable of supporting changes to the components at run time.
No Performance Degradation: Compared to a conventional OS, there must not be degradation in performance, because the performance of an OS is the key factor in measuring its efficiency.


Run-Time Replacement: To make the OS flexible, it must support dynamic replacement of its parts. This removes the rebuilding and rebooting of the system with every change implemented into it.
Transparency: Changes made to kernel services should be transparent to running applications as well as to other kernel services, so that they can increase or reduce their functionality to match the changes.
Security and Protection: The security mechanism must prevent arbitrary users or applications from modifying vital/core system components, and should allow only those changes which are indicated as safe by the system admin or by the analytical reports on the repository. The system must also contain a protection mechanism to protect it from damage by dynamically inserted code or services.
Easy to Configure and Use: The steps to install/configure changes in the system components must either be automated or be easy in terms of manual steps.
RECONOS, incorporating most of the properties listed above, is under development in its prototype state. I decided to design and implement the basic kernel and related services in a modular and structured fashion. Based on my study, the following statements can be considered:
An OS that is based on modular system services and incorporates a flexible communication infrastructure with run-time building can be built without decreasing its performance. A kernel implemented using this paradigm is fully self-configurable and self-extensible, as services can be dynamically installed, migrated and replaced, both globally and on an application-specific basis.
The important innovations that would make RECONOS unique include the communication structure, which allows transparent run-time building; the capability for dynamic modification of the system's structure; and the programming environment, which further enhances flexibility as well as presenting substantial software engineering benefits.
RECONOS is also conceptualized with environment- and application-specific platforms in mind, and demonstrates that this can be accomplished with this type of system architecture. It also identifies some of the inherent problems associated with any form of application-specific platform, due to the conflict between an application's local resource requirements and the need of the OS to balance these with both global resource availability and the demands of other applications. I have tried to solve such problems by implementing a self-healing mechanism that reacts to dynamic changes in hardware and in application requirements.
During the conceptualization and implementation of the RECONOS architecture, I have tried to decompose the OS structure into smaller independent modules, which are then implemented in the form of load-on-demand services. Two of the most important portions of this system are the scheduler and the protection mechanism. In the case of

scheduling, my work specifies a unique low-level interface that enables the construction of many types of schedulers. RECONOS treats protection and security as separate components in the system. RECONOS isolates security information into a small number of structures that are defined at system level; by providing only base functions that treat these as anonymous structures, designers can implement as many security checks as they want. Hooks are created and provided in the code wherever security decisions need to be made, and in the present structure a Unix-like protection system has been utilized wherever needed.
IV. DESIGN OUTLINE OF RECONOS
Three major changes conceptualized in RECONOS include:
Overall system structure: how the individual system components are combined into the whole.
The communication structure that binds these components.
Specification of the functionality of extensibility and of the hardware- and application-specific environment.
Each of these is discussed in the subsequent sections.
A. System Structure
Before discussing the structure of RECONOS, I would like to outline the existing paradigms of OS structuring. Designing, configuring and building the best system kernel has been under discussion and research for a long time. Many paradigms are in discussion, but monolithic and microkernel designs are the major ones.
OS kernels have traditionally been constructed as a single monolithic image, compiled from a large body of kernel code. This code comprises many hundreds of thousands of lines implementing disparate sets of functionality, including process management, virtual memory, file systems, network communication and so on. Kernel code therefore becomes more complex to write and maintain than application code. This also reduces the possibilities for developing and using kernel debuggers, and one has to be a technical expert to maintain kernel code.
The microkernel is another approach, pursued by researchers to combat the cost of monolithic kernel development. As its name suggests, a microkernel is designed to be small: the kernel itself provides only the functionality needed to implement native/remote programs that run in their own address spaces in user mode and provide the system with those services that are not provided by the microkernel. The microkernel design offers the developer the advantage of modularity, as different services can be implemented in separate servers and can often be debugged as user-level programs using a standard debugger.
Even though the microkernel offers modularity in design, it incurs a performance penalty, which makes it problematic for many developers for whom performance is an important issue. The performance penalty arises from the means through which different servers communicate with one another: a monolithic kernel does this with procedure calls, while a microkernel achieves it through IPC between servers.


The architecture proposed during my work resolves much of this conflict between modularity, system decomposition and performance by implementing a hybrid scheme. My design decomposes the system into a set of services which together implement a complete OS. A service has a well-defined interface through which an entity can be accessed and asked to carry out tasks for the caller. There are two basic parts of any service: an interface and an implementation. The interface provides the description of the procedures, while the implementation refers to the compiled code which implements the interface; for example, the set of procedures that manipulate the virtual memory of an address space might be grouped into a single service called the "VirMem" service. From the application viewpoint, a service appears as a number of pointers to functions, one for each of the service's entry points. By dereferencing these pointers and calling the specific function, the service can be accessed; this becomes a local function call for programmers as well as applications.
I have tried to implement services in a highly modular fashion: each can be executed in a separate address space, and they can also be co-located into any address space, including the kernel itself. It might happen that a service gets loaded into an address space in which another service is already running. The underlying communication system optimizes any inter-service communication into the appropriate procedure call, which improves system performance. At any point in time, services can be migrated between address spaces transparently. This also allows services to be dynamically replaced or removed from the system when required.
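The "service as a table of entry points" idea can be sketched as follows. This is only an illustration of the mechanism described above, not RECONOS code: the registry, the VirMem service classes and their method names are hypothetical, and a real kernel would dereference raw function pointers rather than Python callables.

# Sketch: a service exposes named entry points; clients call through the table,
# and the registry can re-bind (replace or migrate) the implementation at run time.
class ServiceRegistry:
    """Maps service names to their current implementation's entry points."""
    def __init__(self):
        self._tables = {}

    def register(self, name, impl):
        # Build the 'function pointer' table from the implementation's public methods.
        self._tables[name] = {ep: getattr(impl, ep)
                              for ep in dir(impl) if not ep.startswith("_")}

    def call(self, name, entry_point, *args):
        # Dereference the pointer and invoke the entry point directly.
        return self._tables[name][entry_point](*args)

    def replace(self, name, new_impl):
        # Dynamic service replacement: clients keep calling the same interface.
        self.register(name, new_impl)

class VirMemV1:
    def map_page(self, addr):
        return f"v1 mapped page at {hex(addr)}"

class VirMemV2:
    def map_page(self, addr):
        return f"v2 mapped page at {hex(addr)} (large-page aware)"

registry = ServiceRegistry()
registry.register("VirMem", VirMemV1())
print(registry.call("VirMem", "map_page", 0x1000))
registry.replace("VirMem", VirMemV2())          # run-time replacement
print(registry.call("VirMem", "map_page", 0x1000))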
B. Inter-Service Communication (ISC)
A service appears to be a set of procedures which can be invoked by other services or applications in order to carry out tasks. This requires focusing on support for procedure calls as the communication paradigm. Message passing with an additional layer of RPC is used by microkernels, but procedure calls have advantages over other paradigms:
Familiarity: Programmers are more familiar with procedure calls. Direct support for procedure calls offers an easier development environment.
Efficiency: RECONOS eliminates the need for the receive/decode/send mechanism followed by microkernels and similar designs. These operations are carried out in the kernel, with the service procedures being invoked directly.
Higher Throughput: Any service can have a potentially unlimited number of threads executing in parallel within itself, allowing higher throughput in different cases. This model of parallelism also reduces the priority-inversion problems caused when high-priority threads wait for low-priority threads whose requests are still scheduled for execution. In RECONOS, a high-priority thread can run in parallel with a low-priority one and is not blocked from entering the service.


The IPC mechanism of RECONOS is considerably easier to implement for service co-location. Each service is compiled into a single object/implementation file. While being loaded into a new address space, the implementation file is linked at run-time against any necessary libraries. When the service gets loaded into, or migrated to, an address space, it is first dynamically linked against the existing application code/services. This prevents duplication of routines within the address space while keeping service code separate. The kernel keeps token information in a token table for each address space and available service, to help prevent such duplication.
To achieve performance optimization, the kernel detects when services are co-located and optimizes the service invocation into a single indirect procedure call, eliminating the kernel trap overhead completely. This can be achieved easily because the access points to each service are kept in function pointers: the kernel has access to a token table which gives details about such pointers and identifies the procedures comprising the interface. This helps in minimizing the performance overheads imposed by IPC/RPC in microkernels.
C. Extensibility
The structure comprising a number of independent services, combined with some extensions to the service loading mechanism described above, allows implementing a number of features which make RECONOS self-extensible. These features are referred to as service migration, replacement and interposition.
Service Migration
It may be required for a service to move into another address space. The prime reason for this is performance: if a service can be migrated into the address space of its most frequent client, the time needed for each service invocation can be reduced by the time taken for changing address spaces, which is a potentially expensive operation. Likewise, invocations into the kernel are far cheaper than those into another address space; thus, for trusted services such as device drivers, file systems and network protocols, it is desirable to migrate services into the kernel.
This offers several advantages to service developers:
The strictly defined service interfaces enforce modularity, which makes the system as a whole more maintainable.
It is not uncommon for newly developed kernel services to cause a system crash, necessitating a tedious and time-consuming program/crash/reboot development cycle. Inside RECONOS, services can initially be developed within a private address space, where it is easier to debug them using standard tools.
Service Replacement
Given the ability to migrate services between address spaces, it is simple to support dynamic replacement of services, as almost identical operations need to take place. Replacement is the easier technique, as there is a single address space involved. There are three major reasons why developers and system admins might wish to replace a service:


Performance: A replacement service can offer improved performance, for example a reduced memory footprint and a smaller amount of CPU time.
Correctness: Replacement allows the system admin to ensure system correctness without potentially expensive down-time; either the source developer or the repositories would find it easier to upgrade the system without rebuilding and reinstalling it.
Testing: During development, it is easy to replace faulty services, which enables the process to proceed in a timelier manner.

[Figure 2. Service Migration & Replacement Process (dotted boxes indicate address spaces, thick arrows are service operations and thin arrows are calls to services): an application and Services A, B, B', D above the base kernel service and the hardware, with Service A being migrated and Service B being replaced.]

Figure 2 illustrates service migration and replacement. It shows several services and applications loaded in several different address spaces. Thin arrows indicate calls made on services. Calls which cross address spaces are achieved using a form of RPC; other calls, within an address space, are optimized into direct procedure calls. The two basic service operations that make RECONOS extensible are shown here. The first, service migration, is when a service is moved transparently between address spaces; in the above case, the service can reside either in its own or in the kernel's address space. The second, service replacement, shows Service B being replaced by B'. This change is transparent to both the service's clients and other services. Calls between services are optimized when possible: in the given example, B' can make an upcall to C, but for optimization it can even call D directly.
Service Interposition
This refers to the ability to interpose a new service between an existing pair of services. At the interface between the OS and applications, there are several usages of this type of facility:
Call Tracing / Monitoring: An interposed service can record or monitor an application's use of the underlying service.


OS Emulation: A service can be used to translate calls made by an application written for another OS into those used by the native system.
Protected Environment: A service wrapper can be developed that limits the actions of un-trusted binaries.
Alternate/Enhanced Semantics: A service could offer an interface that extends that of the underlying service, e.g. a transaction-based file system.
By interposing services at different points in the service hierarchy, and allowing this to occur dynamically, the set of applications to which interposition applies is greatly expanded. The primary usage for service interposition is in interposing on system services that implement policies.
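A small sketch of call tracing/monitoring by interposition is shown below. It is an illustration of the idea rather than RECONOS code: the FileService and the tracing wrapper are hypothetical, and in RECONOS the re-binding would be done by the communication infrastructure rather than by wrapping objects.

# Interposing a tracing wrapper between a client and an existing service:
# neither the client code nor the underlying service has to change.
class FileService:
    def read(self, path):
        return f"<contents of {path}>"

class TracingInterposer:
    """Forwards every entry point of the wrapped service, logging each call."""
    def __init__(self, wrapped, log):
        self._wrapped, self._log = wrapped, log

    def __getattr__(self, entry_point):
        target = getattr(self._wrapped, entry_point)
        def traced(*args, **kwargs):
            self._log.append((entry_point, args))   # record the call
            return target(*args, **kwargs)          # forward to the real service
        return traced

log = []
files = TracingInterposer(FileService(), log)   # interposed transparently
files.read("/etc/hosts")
files.read("/tmp/data")
print(log)   # [('read', ('/etc/hosts',)), ('read', ('/tmp/data',))]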
D. Application Specificity
RECONOS allows each of the above operations to be performed on an application-specific basis. If an application wishes to provide its own service, equivalent to one already available in the system but having different performance characteristics or enhanced semantics, it can specify the use of that service for its own purposes only.
This ability can be applied both to services on which the application is directly dependent and to services further

down the application's calling chain. This allows an application to install its own code into the system and enhance its own performance while leaving other system clients unaffected. Figure 3(c) shows an application-specific interposition: it inserts service S5, taking the place of S2 for all calls from application B. Any call originating from B is redirected through S5. RECONOS does not need to inform S1 that its calls will be directed to another service; it continues to make the same calls, and the underlying communication infrastructure handles the destination.
[Figure 3: three service configurations, (a), (b) and (c), involving applications A and B and services S1-S5; in (c), S5 is interposed in place of S2 for application B.]

Figure 3. Service Reconfiguration

V. FUTURE DIRECTIONS

Present progress is towards the implementation of the basic kernel with service orientation. The next paper will explain my kernel design as a service, along with the other components comprising RECONOS.
VI. CREDITS

A very special acknowledgement to my mentor, Prof. (Dr.) S. M. Shah (Director, S. V. Institute of Computer Studies, Kadi), whose encouragement, guidance and support during my research enabled me to work through these ideas and achieve the targeted results.

VII. REFERENCES
Microsoft Corporation, Object Linking and Embedding Programmer's Reference, version 1, Microsoft Press, 1992.
"The Web's Next Revolution," https://www6.software.ibm.com.
D. Clark and D. Tennenhouse, "Architectural Considerations for a New Generation of Protocols," Proc. ACM SIGCOMM, Sept. 1990.
K. Fall, "A Delay-Tolerant Network Architecture for Challenged Internets," Proc. ACM SIGCOMM 2003, Karlsruhe, Germany, 2003.
V. Jacobson, "Congestion Avoidance and Control," Proc. ACM SIGCOMM '88, Stanford, CA, August 1988.
J. Saltzer, D. Reed, and D. Clark, "End-To-End Arguments in System Design," 2nd International Conf. on Distributed Systems, Paris, France, April 1981. (Also in ACM Transactions on Computer Systems, 2(4), 1984.)
R. Braden, D. Clark and S. Shenker, "Integrated Services in the Internet Architecture," Internet RFC-1633, June 1994.
Computer Science and Telecommunications Board, Realizing the Information Future: The Internet and Beyond, National Academy Press, 1994.
M. Handley, C. Kreibich and V. Paxson, "Network Intrusion Detection: Evasion, Traffic Normalization and End-to-End Protocol Semantics," Proc. USENIX Security Symposium, 2001.
Ubiquitous Web: http://www.w3.org/2006/10/uwa-activityproposal.html
Web Services: http://www.w3.org/2002/ws/
Semantic Web: http://www.w3.org/2001/sw/
Code Reference: http://www.codeproject.com, http://www.codeplex.com
http://www.emobility.eu.org/


FOSS - A Facilitator in Teaching

Harshad Gune
Symbiosis Institute of Computer Studies and Research

Akshay Rashinkar
Symbiosis Institute of Computer Studies and Research
Abstract: With globalization, the world has come closer together, and teaching and learning are no longer confined to the classroom environment; skills, products and ideas are exchanged around the clock over the Internet. With the emergence of the Internet, access to information from different parts of the world is just a click away. Numerous organizations throughout the world now deliver training and education over the Internet. Colleges, universities and professional training institutions offer on-line tutoring to students, working professionals and people from different sectors and fields.
With industrial growth in India, on-line training in the education, IT, medical and finance sectors has become popular and is moving in the right direction. Increasingly, training and support for employees takes place on-line, and a number of institutions now offer partial or complete secondary certification and courses through e-learning. FOSS technologies and applications provide support for the personalization of learning modalities, for the application of new teaching and learning methods, and for using the latest technologies in the teaching and learning management process.
India being a developing country, a new era has begun in education with the emergence of FOSS technologies in e-learning. This paper focuses on the current scenario and future potential of FOSS in teaching and learning through a case study of Moodle in the teaching and learning process at an academic institute.

I. INTRODUCTION

Current scenario in e-learning:
The e-learning era is rapidly flourishing in India. The Indian IT industry is mostly involved in the service sector, providing software services to clients across the globe. However, the recent exchange of thoughts and ideas has brought a new facet to the IT industry and the education sector in the form of training and learning over the Internet, that is, e-learning.
More and more educational and professional training institutes are broadening their horizons with the help of technology and extending their educational services to the masses, irrespective of distance and with 24x7 quality. The e-learning revolution has brought the trainer and the learner closer to each other, and education is no longer confined to the walls of a classroom or hall.
To quote a few examples from recent events:
1. IT company Educomp has joined hands with Great Lakes Institute of Management to provide e-learning education. Great Lakes Institute of Management, in partnership with Educomp, will start rolling out franchisees to conduct higher-education programmes and training in tier-II cities of India.
2. Harvard Business Publishing (HBP) is launching an e-learning tool for managers in India.
3. A Jesuit-run media centre in Kolkata is promoting e-learning in schools using audio-visual material in a bid to make lessons more interesting.
4. At the government level, the Indira Gandhi National Open University (IGNOU) announced the details of the virtual university for Africa on May 25, a day after Prime Minister Manmohan Singh promised such an institution at a summit in Addis Ababa, Ethiopia.
Moodle: An Open-Source e-learning tool.
Moodle is an acronym for Modular Object-Oriented Dynamic Learning Environment. Moodle was developed from a social constructivist perspective by Martin Dougiamas at Curtin University in Western Australia (Dougiamas & Taylor, 2003).
Moodle is a software package for producing Internet-based courses and web sites. It is a global development project designed to support the teaching and learning management process through e-learning. Moodle is provided freely as Open Source software (under the GNU Public License). Moodle can be installed on any computer that can run PHP and can support an SQL database (for example, MySQL). It can be run on Windows, Linux and Mac operating systems.
A minimum of 160 MB of free disk space is required for installation, with more free space needed to store teaching and other materials. A minimum of 256 MB of memory is required, and 1 GB is recommended. As a rule of thumb, Moodle can support about 50 concurrent users for every 1 GB of RAM, but this will vary depending on the specific hardware and software combination.
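As a rough aid to capacity planning, that rule of thumb can be turned into a small helper. The snippet below is purely illustrative and not part of Moodle; the real figure varies with the hardware and software combination.

```python
# Back-of-the-envelope sizing helper based on the rule of thumb quoted above
# (about 50 concurrent users per 1 GB of RAM). Purely illustrative; actual
# capacity depends on the specific hardware and software combination.

def estimated_concurrent_users(ram_gb, users_per_gb=50):
    return int(ram_gb * users_per_gb)

for ram in (1, 2, 4, 8):
    print(f"{ram} GB RAM -> roughly {estimated_concurrent_users(ram)} concurrent users")
```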
II. MOODLE
Features:
1. Pedagogical Considerations:
Social constructivist pedagogy emphasizes social interaction as a basis for knowledge construction. Social interaction creates opportunities for the exchange of thoughts, ideas and skills. Learning environments support interaction among learners and trainers.


Flexible arrangements facilitate collaborative group work and reflect a model that supports the notion of social constructivism for both the trainer and the learner. The design of virtual learning environments also reflects the view of learning held by the designer. Moodle was developed by Martin Dougiamas, who claims that Moodle's design is based on the idea of social constructionism and that its structure supports the development of constructivist, student-centred learning environments.
2. 24x7 Access:
Moodle can be accessed by anyone over the Internet, from any place, 24x7. These days mobile phones are also equipped with high-speed Internet; this advancement in technical gadgets and computers allows an individual to learn whenever he or she wishes. Education is no longer time-bound and has become a continuous activity.
3. Expanded Collaborative Opportunities:
The use of Moodle has expanded the capabilities of an educational institute to operate 24x7. Educational institutes are able to extend their services to various segments of students, including regular students, teachers and professionals. The educational institutes are no longer bound to one area and are expanding into different facets of technology and literature.
4. Variety of Online Activities Available:
- Assignment submission: Students can submit assignments online, which are stored at an FTP location.
- Discussion forum: Moodle also provides facilities for blogging and for discussing different topics.
- File download: Sample questionnaires and other files can be downloaded from anywhere.
- Grading: Teachers can view quiz marks and assignments online and grade the students. These activities can also be performed from home or any other location.
- Moodle instant messages: Moodle is equipped with an online chat feature, which enables users to send messages instantly within the network.
- Online calendar: The online calendar helps users keep track of activities. Users can have different or common calendars depending on the account settings.
- Online news and announcements (college and course level).
- Online quiz: Online quizzes are hosted.
- Wiki: Wikis on different topics are uploaded and can be viewed by authorised users.
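Many of these activities can also be reached programmatically through Moodle's web services layer once an administrator has enabled it and issued a token. The snippet below is a minimal sketch assuming the REST protocol is enabled and that the standard core function core_course_get_courses is available on the installed version; the site URL and token are placeholders, not values from the deployment described here.

```python
# Minimal sketch of calling Moodle's REST web service API with Python.
# Assumptions: web services and the REST protocol are enabled on the site,
# and WSTOKEN is a valid token issued by the administrator. The URL and
# token below are placeholders.
import requests

MOODLE_URL = "https://moodle.example.edu/webservice/rest/server.php"
WSTOKEN = "replace-with-your-token"

def call(wsfunction, **params):
    payload = {
        "wstoken": WSTOKEN,
        "wsfunction": wsfunction,
        "moodlewsrestformat": "json",
        **params,
    }
    response = requests.post(MOODLE_URL, data=payload, timeout=30)
    response.raise_for_status()
    return response.json()

result = call("core_course_get_courses")
# Moodle reports web-service errors as a JSON object with an "exception" key.
if isinstance(result, dict) and "exception" in result:
    raise RuntimeError(result.get("message", "web service error"))

for course in result:
    print(course["id"], course["fullname"])
```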
III. DRAWBACKS
- Teachers need to work actively to coordinate activities on Moodle.
- Moodle does not provide much in the way of real reports on student progress and participation, for either the student or the instructor. The administrator needs to go through several steps to import the MySQL data into MS Access in order to generate meaningful reports, so an administrator who is well versed in MySQL and Moodle is required, which is an overhead (a sketch of querying the database directly is given after this list).
- Not all students have Internet access.
- Student artifacts are locked up in Moodle, where outside users cannot view them.
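As sketched below, one workaround for the reporting gap noted above is to query the Moodle database directly rather than exporting to MS Access. This is an illustrative sketch only: it assumes read access to the MySQL database, the default mdl_ table prefix, and the standard mdl_user and mdl_quiz_attempts tables; the credentials shown are placeholders, and names should be checked against the Moodle version actually in use.

```python
# Illustrative sketch: counting quiz attempts per student directly from the
# Moodle MySQL database. Assumes the default "mdl_" table prefix and read
# access to the database; adjust credentials and names for a real site.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="moodle_read", password="secret", database="moodle"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT u.firstname, u.lastname, COUNT(qa.id) AS attempts
    FROM mdl_user u
    LEFT JOIN mdl_quiz_attempts qa ON qa.userid = u.id
    GROUP BY u.id, u.firstname, u.lastname
    ORDER BY attempts DESC
    """
)
for firstname, lastname, attempts in cur.fetchall():
    print(f"{firstname} {lastname}: {attempts} quiz attempt(s)")
cur.close()
conn.close()
```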
IV. APPLICATIONS OF MOODLE
- Virtual Classroom: Moodle facilitates live online teacher instruction and feedback, enabling real-time voice interaction, whiteboard sharing and breakout sessions to enhance a student's learning experience. This gives students an opportunity to interact with the teacher as well as classmates through oral and written communication.
- Blended Learning: Moodle allows students to access online resources that supplement conventional modes of delivery such as lectures, assignments and a variety of small-group work. It enables the trainer and the learner to interact face-to-face.
- Freestyle Course / Self-Motivated Learning: Moodle allows its users to access information and resources 24x7, so the learner has no time boundaries on what he or she wishes to learn and can access and learn at will. This motivates self-learning and helps the learner understand better.
- Forum-Based Discussion Course: Moodle allows users to host and write blogs to discuss a topic; group chats are also part of this activity. As Moodle allows access from outside, every individual can actively participate in the discussion.
- Single-Activity Focused Courses (portfolios, semester projects, etc.): Moodle facilitates focused activities related to projects and different courses. Each of the objects is virtually separated from the others, and the resources can be saved and accessed individually.
With reference to an academic institute:
Scope and Environment: SICSR is an educational institute under the umbrella of Symbiosis International University. The campus has a wi-fi enabled network. Presently SICSR runs four full-time programmes.
The programmes are classified into 3-year undergraduate and 2-year postgraduate courses. The UG courses are BBA(IT) and BCA; the PG courses are MBA(IT) and M.Sc. (CA).
The PG courses have 10 subjects per semester and the UG courses have 6 subjects per semester. Earlier, the Institute followed a completely traditional methodology of teaching and evaluation, but from 2008-2009 onwards the institute implemented Moodle and gradually moved to a blended learning mode. Currently the Institute conducts at least one criterion of evaluation for every subject on Moodle. Since the courses follow a credit system with continuous evaluation, the students have to be evaluated on at


least three to four criteria for every subject. The criteria may be written skills, presentation, viva voce, case study, etc.

V. KEY BENEFITS
1. Institute
- Saves the Institute paper and toner costs.
- Saves the Institute answer-sheet costs.
- Utilizes the Institute's computers and intranet.
2. Student
- Saves students' pencil and paper costs.
- Results are obtained immediately.
- Students become familiar with standard teaching software that is in wide use at schools internationally and via distance-learning systems on the Web.
- Their grades are centralized, accessible to them (during class) and easy for them to keep track of.
- Computer tests can be easily rescheduled, and a student can also take a learning quiz.
- Each and every student gets individualized attention from the teacher through the Moodle quizzes.
- They see the correct answers to items they got wrong immediately after completing a quiz.
3. Instructors
- No photocopying of question papers is required.
- No answer sheets are required.
- There is no need to carry the test around all the time.
- No test-marking device is needed.
- Grading is automated by a module.
- Two versions of a test are not needed, because each student's test is different from his neighbour's.
- Timely tests can be arranged.
- Analyzing students' grades can also help in determining their level of understanding, so that review sessions can be arranged.

VI. LIMITATIONS
- No support for interaction with human resource systems.
- No support for managing integration well between student administration systems and Moodle student information.
- No support for specific and complex business-process models.
- Support for using a distributed administration model to serve multiple schools and departments is still in progress.
- It lacks sophisticated assessment and grading capabilities.
- Performance is heavily dependent on the underlying hardware.
VII. CONCLUSION
The case study shows that, with the emergence of FOSS technologies and their increasing use in developing nations like India, there is a paradigm shift in education from the classical way of learning to e-learning. This has not only helped people exchange thoughts, ideas and skills, but has also helped in increasing the literacy rate and improving talent anywhere and anytime.
In the near future, Indian educational institutes using FOSS technologies will extend their education services to rural India, which can bring potential business to the institute through this venture. This will not only fulfill the business goal but will also fulfill a social responsibility by increasing literacy in the rural sector. This vision will help in realizing Mahatma Gandhiji's dream of 'Sushikshit Bharat, Samruddha Bharat, Khushal Bharat'.
VIII. REFERENCES

[1] http://empresas.sence.cl/documentos/elearning/Elearning.%20Art%EDculo%20de%20Joanne%20Capper%20%28Ingl%E9s%29.pdf
[2] http://thor.info.uaic.ro/~mihaela/publications/articles/MBrutIE2005.pdf
[3] http://www.ascilite.org.au/conferences/brisbane05/blogs/proceedings/38_Kennedy.pdf
[4] http://dougiamas.com/writing/edmedia2003/
[5] http://www.business-standard.com/india/news/educomp-great-lakesto-invest-rs-150-cr-in-e-learning/134294/on
[6] http://articles.timesofindia.indiatimes.com/2011-05-07/indiabusiness/29519768_1_e-learning-mobile-devices-indian-managers
[7] http://www.cathnewsindia.com/2011/05/06/media-center-promotese-learning/
[8] http://indiaeducationdiary.in/Shownews.asp?newsid=9097
[9] http://e-learnindia.blogspot.com/
[10] http://dots.ecml.at/
[11] http://www.learningsolutionsmag.com/articles/71/moodle-a-lowcost-solution-for-successful-e-learning
[12] http://moodle.org/mod/forum/discuss.php?d=99961

