
PROCEEDINGS OF
INTERNATIONAL CONFERENCE ON
ADVANCES IN ELECTRICAL, ELECTRONICS AND COMPUTATIONAL
INTELLIGENCE (ICAEECI16)

22nd February, 2016

Organized by

K.S.R. COLLEGE OF ENGINEERING


(Autonomous)
Tiruchengode - 637 215

In Association With

INTERNATIONAL JOURNAL OF EMERGING TECHNOLOGY


IN COMPUTER SCIENCE AND ELECTRONICS (IJETCSE)


FOUNDER CHAIRMAN MESSAGE

Lion.Dr.K.S.Rangasamy, MJF
Founder Chairman
KSR Institutions

In today's fast-changing world there is a demand for new technologies and innovations in every sphere of industry. The ideas that feed the ever-growing demand for new designs and applications are derived from the intensive efforts put in by scientists and researchers all over the world, who work enthusiastically for the upliftment of society.

As the chief of the International Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI'16), organized by the Departments of Electrical and Electronics Engineering, Electronics and Communication Engineering and Computer Science Engineering on February 22, 2016, I am confident that this conference will act as a common platform for sharing the research ideas emanating throughout the world for further scientific and technical advancement.

I am delighted to note that the accepted papers are being published by leading international journals in the form of proceedings.

I wholeheartedly appreciate all the sincere efforts of the entire team of ICAEECI'16 and wish them a grand success.


Lion.Dr.K.S.Rangasamy, MJF


CHAIRMAN & MANAGING TRUSTEE MESSAGE

Thiru.R.Srinivasan
Chairman & Managing Trustee,
K.S.R. College of Engineering

The K.S.R. College of Engineering was started in 1998 with the vision of producing the most competent scientists, engineers, entrepreneurs, managers and researchers through quality education. Being located in a rural setting, it caters to the needs of rural students, and over time it has attracted students from various countries.

"One must forever strive for excellence, or even perfection, in any task however small it may be, and never be satisfied with second best in quality and standard" is the principle behind the host institution's success in every venture it has undertaken; the institution is now taking up the task of conducting the first ICAEECI'16 conference.

The various invited speakers, guests, scholars and delegates from different countries who will be playing an active role in this conference will surely find the best hospitality and a very comfortable stay.

I wholeheartedly congratulate the entire team of ICAEECI'16 for their efforts and wish the conference a grand success.


Thiru. R.Srinivasan


PRINCIPAL MESSAGE

Dr.K.Kaliannan
Principal,
K.S.R.College of Engineering

It is pleasing to note that our K.S.R. College of Engineering is hosting the prestigious International Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI'16). I feel proud to say that our college has become a common platform for guests, invited speakers, delegates and scholars from throughout the world to share their expertise and research outcomes, and being the patron of this conference, I feel privileged to welcome them to this technical conference.

It is right to acknowledge and place on record the magnanimous support provided by the management of K.S.R. College of Engineering, especially the Chairman of K.S.R. Institutions and the Managing Trustee of our college.

The hallmark of this event is best brought out by the selection of eight leading international journals and by the roughly 320 papers received from referred research areas around the globe. All the papers were reviewed by over a hundred reviewers.

Making this mega conference a resounding success lies in the planned hard work of many. It is my duty to appreciate those who are behind this conference for taking a step ahead to make a mark in research and technological advancements.

Dr.K.Kaliannan


DEAN MESSAGE

Dr.A.Krishnan,
K.S.R.College of Engineering

I am delighted to learn that the Departments of Electrical and Electronics Engineering, Electronics and Communication Engineering and Computer Science Engineering are organizing an International Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI'16) at our campus on 22.02.2016. I hope that a large number of engineers and researchers will participate in this technical event. It will help the participants improve their technical know-how, leadership qualities and organizing skills, and become meaningful entrepreneurs.

This conference enables the participants to apply their knowledge in the relevant fields and helps them bring out novel techniques in emerging areas. The program will aid them in shaping their future and cater to their needs.

I wish the function a grand success.


Dr. A. Krishnan


MESSAGE FROM THE EDITOR-IN-CHIEF

R.Jayaprakash
Editor-in-Chief
IJETCSE

On behalf of the conference committee, I am honored to invite you all to the International Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI'16), organized on 22nd February 2016 by IJETCSE in association with K.S.R. College of Engineering. I am delighted to see the overwhelming response from the scientific community, and I am extremely thankful to the participants and the heads of the institutions for having profusely extended their support. We are pleased to have the participation of research scholars, students, academicians and industrial delegates. The ICAEECI'16 conference takes a highly applied and practical focus: in a period when competitive advantage has become a vital part of business and enterprise, it is essential to keep up with the latest trends, technologies and tools. Finally, I would like to express sincere thanks to all members of the organizing committee, the technical program committee and all the volunteer reviewers who have worked hard for the success of this conference. I also thank all the submitting authors for sharing their latest research results with us. I am confident that this international conference will highlight the significance and necessity of enriching and updating knowledge in various upcoming areas. I am indeed very proud to be part of this international event, and I wish you all great success.

R.Jayaprakash


Conveners
Dr. P.S. Periasamy, Prof. & Head, Dept. of ECE
Dr. S. Ramesh, Prof. & Head, Dept. of EEE
Dr. A. Rajiv Kannan, Prof. & Head, Dept. of CSE

Co-Ordinators
Dr. T.R. Sumithira
Mrs. K.Yamuna
Dr. S. Karthikeyan
Dr. N.S. Nithya

Organising Committee
Dr. P. SUGANYA

Dr.C.GOWRI SHANKAR

Mrs. B.YUVARANI

Mrs. E.VANI

Dr.A.MAHESWARI

Mr. R.CHANDRASEKAR

Dr.C.KARTHIKEYAN

Dr. R.SANKARGANESH

Mr. K.R.NANDHAGOPAL

Dr.V.RAVI

Mr.M.VIJAYAKUMAR

Mr.K.P.SURESH

Dr. M.VIJAYAKUMAR

Dr.G.VIJAYAKUMAR

Mr.S.GOWTHAM

Dr. R. GOPALAKRISHNAN

Mr. S.CHINNAIYA

Dr.P.SATHISHKUMAR

Ms. S.B. CHITRAPREYANKA

Mr. P.SUNDARAVADIVEL

Mrs. S.THIRUVENI

Mr.S.GOPINATH

Dr.M.RAMASAMY

Mrs S.MAHALAKSHMI

Mr.R.KUMARESAN

Mr. M. SENTHIL KUMAR

Mrs. N.NISSANTHI

Mr.C.PAZHANIMUTHU

Mr.K.PRAKASAM

Ms.M.MUTHULAKSHMI

Mrs. M.SORNALATHA

Mr. J.THIYAGARAJAN

Mr. S. AROCKIASAMY

Mrs.R.JEYANTHI

Mr.E.KANNAN

Mrs.K.GOWRI

Mrs. R. POORNIMA

Mrs. S. POONGODI

Mrs. S. JEYABHARATHI

Ms. K. KIRUBA

Mrs.V.M.JANAKI

Ms.S.PREMALATHA

Mr. L. RAJA

Ms.A.LAVANYA

Mr.K.KARUPPANASAMY

Mrs.T.KAVITHA

Mr.R.VEERAMANI

Dr. J. GNANAMBIGAI

Mr. M. SUBRAMANI

Mrs. A. JAYA MATHI

Mr.S.MANOHARAN

Mrs. P. THILAGAVATHI

Mr. P. BALAKRISHNAN

Mr. P. MAHENDRAN

Mr.A.SURENDAR

Mr. G. SENTHILKUMAR

Mr.P.SIVAKUMAR

Mr. R. PANNEER SELVAM

MR.R.MAHENDRAN

Mr.C.KARTHIK

Mr. C. ARUN PRASATH

Mr. R. ESWARAMOORTHI

Mr. R. SATHEES KUMAR

Mr. P. SIVASANKAR RAJAMANI

Mr. T. M. SATHISH KUMAR

Mr. S. SENTHILKUMAR

Mr.K.P.UVARAJAN

Mr. J. RAMESHKUMAR

Mr.G.S.MURUGAPANDIAN

Mr.M.RAJASEKAR

Mr.S.VELMURUGAN

Mr. M. PRAVIN KUMAR

Mr.S.KRISHNAKUMAR

Mr. M. JOTHIMANI

Mr.S.VADIVEL

Dr. R.VELUMANI

Ms. V.VENNILA

Mr.K. KUMARESAN

Dr. A.VISWANANTHAN

Ms. G.S.RIZWANABANU

Mr.C.THIRUMALAISELVAN

Dr. E.BABYANITHA

Ms. S.SUGANYA

Mr.M.SUKUMAR

Mr. G. SIVASELVAN

Ms. S.REVATHY

Mr.T.SARANSUJAI

Dr. P.SIVAKUMAR

Mr. R. KRISHNA PRADEEP

Ms.M.UMAMAHESWARI

Mr. M.PRAKASH

Ms. K.THAMARAISELVI

Ms.S.SAVITHA

Mr. G.NAGARAJAN

Ms. K.NITHYA

Ms.S.SENBHAGA

Mr. T.SASI

Mr. A.R.SURENDRAN

Ms.D.SANDHIYA

Mr. K.VENKATESH GURU

Mr. J.SANTOSH

Mrs. V.SHARMILA

Mr. M.SUDHARSAN

Mr. G.T.RAJAGANAPATHI

Dr.M.SOMU

Mr. A.MUMMOORTHY

Mr. K.DINESHKUMAR

Dr.P.BALAMURUGAN

Mr. P.PRAKASH

Mr. S.ANGURAJ

Dr.S.NITHYAKALYANI

Mr. C.ANAND

Mr. V. SENTHILKUMAR

Dr. M.TAMILARASI

Mr. G.KARTHIK

Mr. S.SIVAPRAKASH

IJETCSE Organising Committee


Mr. R. JAYAPRAKASH,
Editor - in - Chief, IJETCSE

Mr. S. SELVANAYAGAM,
Organizing Head, IJETCSE

Mr. S. GOVINDASAMY,
Co-Ordinator, IJETCSE

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL, ELECTRONICS AND
COMPUTATIONAL INTELLIGENCE (ICAEECI-2016)
K.S.R. COLLEGE OF ENGINEERING, TIRUCHENGODE - 637 215

PROGRAMME SCHEDULE (BOARD: EEE) - DATE: 22-02-2016

08.45-09.30 am  REGISTRATION - MAIN BLOCK
09.30-10.30 am  INAUGURAL - HALL NO.4 (MCA CONFERENCE HALL)
                Dr. KANNAN JEGATHALA KRISHNAN, Professor, Victoria University, Australia
                Dr. V.N. MANI, Scientist-E, C-MET, Govt. of India, Hyderabad
10.30-10.50 am  TEA BREAK - HALL NO.4 (MCA CONFERENCE HALL)
11.00-01.00 pm  TECHNICAL SESSION-I
                HALL NO.1 (MAIN BLOCK), Dr.M.R, Dr.G.V:
                ICAEECI 009, ICAEECI 010, ICAEECI 021, ICAEECI 022, ICAEECI 024, ICAEECI 032
                HALL NO.2 (MAIN BLOCK), Dr.C.G.S, Dr.M.V:
                ICAEECI 045, ICAEECI 060, ICAEECI 062, ICAEECI 063, ICAEECI 072, ICAEECI 073
01.00-02.00 pm  LUNCH - KSRCE BOYS HOSTEL
02.00-04.00 pm  TECHNICAL SESSION-II
                HALL NO.1 (MAIN BLOCK), Dr.M.R, Dr.G.V:
                ICAEECI 074, ICAEECI 091, ICAEECI 107, ICAEECI 123, ICAEECI 125, ICAEECI 145, ICAEECI 158
                HALL NO.2 (MAIN BLOCK), Dr.C.G.S, Dr.M.V:
                ICAEECI 164, ICAEECI 171, ICAEECI 174, ICAEECI 175, ICAEECI 181, ICAEECI 182, ICAEECI 204, ICAEECI 039
04.00-04.15 pm  TEA BREAK - HALL NO.4 (MCA CONFERENCE HALL)
04.15-04.45 pm  VALEDICTORY - HALL NO.4 (MCA CONFERENCE HALL)

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL, ELECTRONICS AND
COMPUTATIONAL INTELLIGENCE (ICAEECI-2016)
K.S.R. COLLEGE OF ENGINEERING, TIRUCHENGODE - 637 215

PROGRAMME SCHEDULE (BOARD: CSE) - DATE: 22-02-2016

08.45-09.30 am  REGISTRATION - MAIN BLOCK
09.30-10.30 am  INAUGURAL - HALL NO.4 (MCA CONFERENCE HALL)
                Dr. KANNAN JEGATHALA KRISHNAN, Professor, Victoria University, Australia
                Dr. V.N. MANI, Scientist-E, C-MET, Govt. of India, Hyderabad
10.30-10.50 am  TEA BREAK - HALL NO.4 (MCA CONFERENCE HALL)
11.00-01.00 pm  TECHNICAL SESSION-I
                HALL NO.5 (MECHANICAL BLOCK):
                ICAEECI 001, ICAEECI 006, ICAEECI 013, ICAEECI 098, ICAEECI 192
                HALL NO.6 (MECHANICAL BLOCK):
                ICAEECI 201, ICAEECI 202, ICAEECI 205, ICAEECI 011, ICAEECI 046
01.00-02.00 pm  LUNCH - KSRCE BOYS HOSTEL
02.00-04.00 pm  TECHNICAL SESSION-II
                HALL NO.5 (MECHANICAL BLOCK):
                ICAEECI 047, ICAEECI 048, ICAEECI 049, ICAEECI 052, ICAEECI 053
                HALL NO.6 (MECHANICAL BLOCK):
                ICAEECI 055, ICAEECI 064, ICAEECI 069, ICAEECI 135
04.00-04.15 pm  TEA BREAK - HALL NO.4 (MCA CONFERENCE HALL)
04.15-04.45 pm  VALEDICTORY - HALL NO.4 (MCA CONFERENCE HALL)

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL, ELECTRONICS AND
COMPUTATIONAL INTELLIGENCE (ICAEECI-2016)
K.S.R. COLLEGE OF ENGINEERING, TIRUCHENGODE - 637 215

PROGRAMME SCHEDULE (BOARD: ECE) - DATE: 22-02-2016

08.45-09.30 am  REGISTRATION - MAIN BLOCK
09.30-10.30 am  INAUGURAL - HALL NO.4 (MCA CONFERENCE HALL)
                Dr. KANNAN JEGATHALA KRISHNAN, Professor, Victoria University, Australia
                Dr. V.N. MANI, Scientist-E, C-MET, Govt. of India, Hyderabad
10.30-10.50 am  TEA BREAK - HALL NO.4 (MCA CONFERENCE HALL)
11.00-01.00 pm  TECHNICAL SESSION-I
                HALL NO.3 (MAIN BLOCK):
                ICAEECI 016, ICAEECI 017, ICAEECI 018, ICAEECI 019, ICAEECI 020
                HALL NO.4 (MCA CONFERENCE HALL):
                ICAEECI 025, ICAEECI 036, ICAEECI 038, ICAEECI 050, ICAEECI 056
01.00-02.00 pm  LUNCH - KSRCE BOYS HOSTEL
02.00-04.00 pm  TECHNICAL SESSION-II
                HALL NO.3 (MAIN BLOCK):
                ICAEECI 061, ICAEECI 079, ICAEECI 090, ICAEECI 137, ICAEECI 159
                HALL NO.4 (MCA CONFERENCE HALL):
                ICAEECI 179, ICAEECI 180, ICAEECI 198, ICAEECI 200
04.00-04.15 pm  TEA BREAK - HALL NO.4 (MCA CONFERENCE HALL)
04.15-04.45 pm  VALEDICTORY - HALL NO.4 (MCA CONFERENCE HALL)


S.No.  Title with Author  ...  Page No.

1. Techno-Economic Simulation and Optimization of 4.5kW Wind/Solar Micro-Generation System for Victorian Climate - Kannan Jegathala Krishnan, Akhtar Kalam, Aladin Zayegh ... 1
2. Credit Score Tasks Scheduling Algorithm For Mapping A Set Of Independent Tasks Onto Heterogeneous Distributed Computing Grid Environment - Dr.G.K.Kamalam, B.Anitha and S.Mohankumar ... 16
3. Power Quality Improvement For A Wind Farm Connected Grid Incorporating UPFC Controller - Prof. V. Sharmila Deve, Dr. K. Keerthivasan, Dr. K. Gheetha and Anupama. B ... 26
4. Image Color Correction Using Mosaicking Application - P.Pavithra ... 34
5. Threshold Based Efficient Data Transmission In Hybrid Wireless Networks - R.Mohana ... 38
6. Secure Data Storage In Cloud Using Code Regeneration And Public Audition - S.Pavithra ... 43
7. Avoid Collision And Broadcasting Delay In Multi-Hop CR Ad Hoc Network Using Selective Broadcasting - D.Sandhiya, B.M.Brinda ... 47
8. QoS-Aware Spectrum Sharing for Multi-Channel Vehicular Network - G.Madhubala, R.Sangeetha ... 53
9. A Secure Message Exchange and Anti-jamming Mechanism in MANET - S.Sevvanthi, G.Arulkumaran ... 58
10. A Novel Approach for Secure Group Data Sharing and Forwarding in Cloud Environment - N.Vishnudevi, P.E.Prem, M.Madlinasha ... 63
11. Survey of PDA Technique With Flying Capacitor for Buck Boost Converter - Kiruthika S ... 68
12. An Efficient Algorithm for Compression and Storage of Two-Tone Image - S.Selvam, Dr.S.Thabasu Kannan ... 72
13. Hierarchical Structure of Geospatial Field Data Using Enhanced R-tree - R.Ganesh, Kumutha Priya M, Brinda B M, Udhaya Chandrika A ... 79
14. Effective and Energy Efficient Neighbor Discovery Protocols in MANET - R.Fuji, G.Arul Kumaran and S.Fowjiya ... 83
15. Improvement of Power Quality Using PQ Theory Based Series Hybrid Active Power Filter - Dr.K.Sundararaju, Preetha Sukumar ... 87
16. Classification And Identification of Fault Location And Direction By Directional Relaying Schemes Using ANN - A.Kalai Selvi, M.Sasireka, Dr.A.Senthilkumar, Dr.S.Maheswari ... 94
17. Complex Wavelet Transform Based Cardiac Arrhythmia Classification - Shoba.S, Dr.T. Guna Sekar ... 100
18. QoS Guided Prominent Value Tasks Scheduling Algorithm in Computational Grid Environment - A.Kalai Selvi, M.Sasireka, Dr.A.Senthilkumar, Dr.S.Maheswari ... 106
19. Authenticated Handsign Recognition for Human Computer Interaction - S.Sharaenya and C.S.Manikandababu ... 113
20. Genetically Optimized UPFC Controller for Improving Transient Stability - S.Sharaenya and C.S.Manikandababu ... 123
21. Parallel Combination of Hybrid Filter With Distorted Source Voltage - J.Priyanka devi, J.Praveen daniel ... 132
22. Harmonic Mitigation In Doubly Fed Induction Generator for Wind Conversion Systems By Using Integrated Active Filter Capabilities - B.Rajesh kumar, R.Jayaprakash ... 138
23. DWT Based Audio Watermarking Using Energy Comparison - K.Thamizhazhakan, Dr.S.Maheswari ... 147
24. Self-Propelled Safety System Using CAN Protocol - M. Santhosh Kumar, Dr.C.R.Balamurugan ... 153
25. Control Analysis of STATCOM Under Power System Faults - Dr.K.Sundararaju, T.Rajesh ... 159
26. Automatic Road Distress Detection and Characterization System - S.Keerthana, Mr.C.Kannan ... 167
27. Circle Based Path Planning Algorithm for Mobile Anchor Trajectory Localization in Wireless Sensor Networks - A Review - D.Poovaradevi, Mrs.S.Priyadharsini ... 174
28. Collision Free Packet Transmission for Localization - P.Saranya, V.Saravanan ... 181
29. Measurement Of Temperature Using Thermocouple and Software Signal Conditioning Using LabVIEW - P.Nandini, Mr.C.S.ManikandaBabu ... 188
30. Comparative Analysis of 16:1 Multiplexer and 1:16 Demultiplexer using Different Logic Styles - K.Anitha, R.Jayachitra ... 194
31. Design And Implementation Of Low Power Floating Point Unit Using Reconfigurable Data Path: A Survey - Lathapriya V and Sivagurunathan P T ... 203
32. A Survey on Han-Carlson Adder with Efficient Adders - Kaarthik.K, Dr.C.Vivek ... 209
33. A Data Flow Test Suite Minimization and Prioritization in Regression Mode - M.Vanathi, J.Jayanthi ... 217
34. Maximization of Co-ordination Gain In Uplink System Using The Dynamic Cell Clustering Algorithm - Ms.M.Keerthana, Ms.K.Kavitha ... 222
35. SRF Theory Based Reduced Rating Dynamic Voltage Restorer for Voltage Sag Mitigation - S.Sunandha, M.Lincy Luciana and K.Sarasvathi ... 226
36. Performance Comparison Of ANN, FLC And PI Controller Based Shunt Active Power Filter For Load Compensation - Lincy Luciana.M, Sunandha.S, Sarasvathi.K ... 235
37. An Approach for Lossless Data Hiding Using LZW Codes - Sindhuja J, Anitha B ... 245
38. Improving the Non-Functional Quality of Service (QoS) Attributes of Web Services for Tourism - S.Dharanya, J.Jayanthi ... 252
39. Solar Powered Auto Irrigation System - V R.Balaji and M.Sudha ... 256
40. Securing Internet Banking With A Two-Shares Visual Cryptography Secret Image - Aparnaa.K.S, Santhi.P ... 262
41. Robot Vehicle Controlled by Gestures - N.Prakash and P.K.Dheesshma ... 270
42. Four Dimensional Coded Modulation With GMI for Future Optical Communication - P.Jayashree, M.Jayasurya, J.Keerthana, K.Kasthoori, R.Sivaranjini ... 275
43. Measurement of Temperature using RTD and Software Signal Conditioning using LabVIEW - C. Nandhini, M. Jagadeeswari ... 281
44. Single Image Haze Removal Using Change of Detail Prior Algorithm - D.Shabna, Mr.C.S.ManikandaBabu ... 289
45. Modelling Of Hybrid Wind And Photovoltaic Energy System Using Cuk And SEPIC Converter - S. Mutharasu and R. Meena ... 294
46. Semicustom Implementation Of Reversible Image Watermarking Using Reversible Contrast Mapping - S.Gayathri, Mr.C.S.ManikandaBabu ... 300
47. Survey on Cloud Computing Security Issues - E.Dinesh and Dr.S.M.Ramesh ... 307
48. Minimize Response Delay In Mobile Ad Hoc Networks using Dynamic Replication - Karthik K, Suthahar P ... 315
49. Voltage Sag Mitigation in Photovoltaic System - B.Pavitra and C.Santhana Lakshmi ... 324
50. Integral Controller for Load Frequency Control in Deregulated Environment - S.Ramyalakshmi and V.Shanmugasundaram ... 331
51. A Survey On Wavefront Sensorless Techniques for Free Space Optical Communication System - K.Goamthi, T.Pasupathi, N.Gayathri and S.Barakathu Nisha ... 341
52. Small Signal Stability Analysis of Power System Network Using Water Cycle Optimizer - Dr.R.Shivakumar and M.Rangarajan ... 346
53. Minimization Of THD In Cascaded Multilevel Inverter Using Selective Harmonic Elimination With Artificial Neural Networks - M.Ramesh, P.Maniraj, P.Kanakaraj ... 353
54. A Survey Of Design Of Novel Arbitrary Error Correcting Technique For Low Power Applications - D.Ragavi, S.Mohan Raj ... 363
55. Determination Of Physical And Chemical Characteristics of Soil Using Digital Image Processing - M.Arun Pandian, C.S.ManikandaBabu ... 368
56. Blind Multiuser Detection Using OFCDM - Sujitha J and Baskaran K ... 373
57. Human Activity Recognition by Trilateration HMSA - Yogesh Kumar M, Sindhanai Selvan K ... 386
58. Reactive Power Compensation And Voltage Fluctuation Mitigation Using Fuzzy Logic In Micro Grid - Gokul Kumar.M, M.Sangeetha ... 391
59. Wireless Sensor Networks Using Ring Routing Approach With Single Mobile Sink - Sudharson.S, Ms.R.Jennie Bharathi ... 396
60. Design Of A Fuzzy Based Multi-Stack Voltage Equalizer for Partially Shaded PV Modules - R.Senthilkumar, S.Murugesan ... 404
61. Design of Single Phase Seven Level PV Inverter Using FPGA - S.Murugesan, R.Senthilkumar ... 414
62. Home Automation Using IoT - Govardhanan D, Sasikumar K S, Vishalini R and Poojaa Sudhakaran ... 425
63. A Traffic Sign Motion Restoration Model Based on Border Deformation Detection - T.Ramya, G.Singaravel, V.Gomathi ... 432
64. Minimize Response Delay In Mobile Ad Hoc Networks using Dynamic Replication - Karthik K, Suthahar P ... 439
65. Efficient Distributed Deduplication System With Higher Reliability Mechanisms In Cloud - P.Nivetha ... 448
66. Passive Safety System For Modern Electronic Gadgets Using Airbag Technology - C.V.Gayathri Monika, R.S.Keshika, G.Ramasubramaniam, S.Sakthivel ... 455
67. Control Analysis of STATCOM Under Power System Faults - Dr.K.Sundararaju, T.Rajesh ... 462
68. Performance Analysis of Four Switch Three Phase SEPIC-Based Inverter - Prabu B, Murugan M ... 469
69. Maximization of Co-ordination Gain in Uplink System Using the Dynamic Cell Clustering Algorithm - Ms.M.Keerthana, Ms.K.Kavitha ... 481
70. Design and Simulation of Low Leakage High Speed Domino Circuit using Current Mirror - Mr.C.Arun Prasath, Dr.C.Gowri Shankar and R.Sudha ... 486
71. Upgrade Reliability for Novel Identity-Based Batch Verification Scheme In VANET - S.Naveena devi, M.Mailsamy and N.Malathi ... 495
72. Diagnosis of Cardiac Diseases Using ECG for Wearable Devices - B. Karunamoorthy, Dr. D. Somasundereswari and A. Sangeetha ... 501
73. Optimal Location of Capacitors And Capacitor Sizing In A Radial Distribution System Using KrillHerd Algorithm - SA.ChithraDevi and Dr. L. Lakshminarasimman ... 511


Techno-Economic Simulation and Optimization of 4.5kW Wind/Solar Micro-Generation System for Victorian Climate

Kannan Jegathala Krishnan1, Akhtar Kalam2, Aladin Zayegh3
1Coordinator University Education, Vision of Wisdom, The Education Wing, The World Community Service Centre (WCSC), India. For geographical locations: Australia, New Zealand and South East Asia
2Professor, College of Engineering and Science, Victoria University, Melbourne, Australia
3Assoc. Prof., College of Engineering and Science, Victoria University, Melbourne, Australia

Abstract - The Hybrid Optimization Model for Electric Renewables (HOMER) is widely used in many countries for designing and analyzing hybrid systems that include renewable energy, in either grid-connected or off-grid environments, with inputs such as solar photovoltaics, wind turbines, batteries, H2 generators and conventional generators. Distributed generation and hybrid systems that include renewable energy continue to grow, and HOMER serves to mitigate the financial risk of such projects.
This paper mainly focuses on simulation and optimization of the implemented 4.5kW Wind/Solar micro-generation system to obtain the most cost-effective and best component sizes and the project's economics for an 8.4MWh/d load with an 827kW peak. The methodology and simulation model of the 4.5kW micro-generation system are presented. Using the collected climatic data of 5 Victorian suburbs, the load profile from the Department of Facilities, details of the system components from the Power Systems Research Laboratory at Victoria University, Melbourne, and electricity tariffs as inputs, the economic, technological and environmental performances are examined. The benefit of using HOMER as a micro-power optimization model and its role in realistically financing renewable energy or energy efficiency projects are presented.
Index Terms - HOMER, 4.5kW Wind/Solar micro-generation system, climatic data, load profile, Cost of Energy (COE), Net Present Cost (NPC)
I. INTRODUCTION
The study in this paper aims to investigate the economic, technical and environmental performance of the implemented 4.5kW Wind/Solar micro-generation system under Australian (Victorian) climatic conditions. Using global solar irradiation and wind speed as the solar and wind energy data, load data (Building D, Level 5 at Victoria University, Footscray Park Campus), and the price of the PV array, Vertical Axis Wind Turbine (VAWT) and converters, the grid electricity tariff and the sale-back tariff as inputs to the economic analysis, the 4.5kW Wind/Solar micro-generation system was simulated and optimized by the Hybrid Optimization Model for Electric Renewable (HOMER) [1]. Section 2 presents the methodology, simulation model, system simulation tool, components modeling and the system optimization problem. Section 3 provides the study locations and their climatic data, the load profile, details of the system components and the electricity tariff. Section 4 highlights the economic, technological and environmental results. Finally, Section 5 provides the summary of this paper.
II METHODOLOGY
The 4.5kW Wind/Solar micro-generation system implemented in the Power Systems Research Laboratory at Victoria University was studied in this paper using a computer-based energy simulation tool. The simulation software and the system optimization objective [1] are introduced in this section, with economic data, collected weather data and load data as inputs.
A Simulation model
The 4.5kW Wind/Solar micro-generation system, as shown in Fig. 1, has a direct current (DC) 1.5kW PV array and an alternating current (AC) 3kW VAWT as the energy generators. The system also has a rectifier converting electricity from AC to DC and an inverter converting electricity from DC to AC, since the load and the grid are AC.

B System simulation tool


The HOMER software was used to model the implemented 4.5kW Wind/Solar micro-generation system. It is a micro-power optimization model, developed by the National Renewable Energy Laboratory (NREL), US, that simplifies the task of designing distributed generation (DG) systems both on- and off-grid. HOMER simulates the operation of the system and performs energy balance calculations for each system configuration. It also estimates the cost of installing and operating the system over the lifetime of the project and supplies the optimized system configuration and component sizing. HOMER is widely used in many countries for the simulation and optimization of conventional and renewable energy systems [2]. The 4.5kW Wind/Solar micro-generation system is simulated by the HOMER coding as shown in Fig. 2.
C Components modeling
The solar energy is converted into DC electricity by the PV array in direct proportion to the solar radiation incident upon it [1]. The PV array is placed at a tilt angle of 30 degrees in order to achieve a higher insolation level. HOMER calculates the PV array output [2] as shown in (1) and the radiation incident on the PV array [2] as shown in (2):

P_PV = f_PV * Y_PV * (G_T / G_T,STC)    (1)

where f_PV is the PV derating factor, Y_PV the rated capacity of the PV array (kW), G_T the solar radiation incident on the PV array (kW/m2) and G_T,STC the incident radiation at standard test conditions (1 kW/m2) [2]. The incident radiation G_T of (2) is computed by HOMER from the global horizontal radiation, the clearness index and the tilt angle of the array [2].
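As a quick numerical illustration of (1), the sketch below (with assumed values that are not taken from the paper) evaluates the output of the 1.5kW array; the variable names mirror the symbols defined above.

```python
# Sketch of HOMER's simplified PV output relation (1); the derating factor
# and the incident radiation used here are illustrative assumptions.

def pv_output_kw(y_pv_kw: float, f_pv: float, g_t: float, g_t_stc: float = 1.0) -> float:
    """PV array output: P_PV = f_PV * Y_PV * (G_T / G_T,STC).

    y_pv_kw  -- rated PV capacity in kW (1.5 kW in this system)
    f_pv     -- derating factor (assumed here, e.g. 0.8)
    g_t      -- solar radiation incident on the array, kW/m^2
    g_t_stc  -- radiation at standard test conditions, 1 kW/m^2
    """
    return f_pv * y_pv_kw * (g_t / g_t_stc)

print(pv_output_kw(1.5, 0.8, 0.65))  # about 0.78 kW at 0.65 kW/m^2
```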


For wind modeling, the baseline data is a set of 8,760 values representing the average wind speed, in meters per second, for each hour of the year. From twelve average wind speed values, one for each month of the year, HOMER builds the set of 8,760 values, that is, one wind speed value for each hour of the year. The synthesized data sequence has the specified Weibull distribution, autocorrelation, and seasonal and daily patterns [1-5].
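A minimal sketch of this synthesis step follows, assuming a Weibull shape factor k = 2 and illustrative monthly means (not the paper's measured data); HOMER additionally imposes autocorrelation and diurnal patterns, which are omitted here.

```python
# Sketch: synthesize 8,760 hourly wind speeds from 12 monthly means, so that
# each month's draws follow a Weibull distribution with the specified mean.
# Monthly means and the shape factor k = 2 are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
monthly_means = [5.8, 5.5, 5.2, 4.8, 4.5, 4.6, 4.9, 5.1, 5.3, 5.6, 5.7, 5.9]  # m/s
days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

k = 2.0  # Weibull shape factor
hourly = []
for mean, d in zip(monthly_means, days):
    # Scale c chosen so the Weibull mean c * Gamma(1 + 1/k) equals the monthly mean;
    # Gamma(1.5) ~= 0.8862 for k = 2.
    c = mean / 0.8862
    hourly.extend(c * rng.weibull(k, size=24 * d))

print(len(hourly))      # 8760 hourly values
print(np.mean(hourly))  # close to the annual mean of the monthly values
```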
For converter modeling, the economic properties are the capital and replacement costs, the annual operating and management (O&M) cost and the expected lifetime. The important physical property is the converter's size; the inversion and rectification efficiencies are assumed to be constant [1-5].

For economic modeling of the grid there are two types of prices: the power price, in $/kWh, that the utility charges for energy purchased from the grid, and the sale-back rate, in $/kWh, that the utility pays for energy fed into the grid. The grid is modeled as a component from which the system can purchase, and to which it can sell, AC electricity [1-5].
D System Optimization problem

Since the implemented 4.5kW Wind/Solar micro-generation system was posed as an optimization problem, the objective functions were formulated corresponding to the system constraints and performance. The different options of the system are the various configurations of wind turbines, PV arrays and converters combined in different sizes [1]. The optimized system has the lowest life-cycle cost of producing electricity, so the objective of the optimization [2] is given by the function shown in (3):

minimize C_NPC(x) over x in X    (3)

where C_NPC(x) is the total net present cost of configuration x and X is the set of feasible system configurations (combinations of wind turbine, PV array and converter sizes).
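Conceptually, the optimization in (3) amounts to simulating every candidate configuration and retaining the one with the lowest total NPC. The toy sketch below illustrates that search; the component options and the cost model are placeholders, not HOMER's actual models.

```python
# Toy enumeration of system configurations, picking the minimum-NPC one,
# mirroring (3); component options and the cost model are illustrative only.
from itertools import product

pv_sizes_kw = [0.0, 1.5, 3.0]
turbine_counts = [0, 1, 2]           # number of 3 kW VAWTs
converter_sizes_kw = [4.5, 9.6]

def total_npc(pv_kw, n_turbines, conv_kw):
    # Placeholder cost model: capital cost plus crudely capitalized net
    # operating cost at an assumed 6% interest rate.
    capital = 2000 * pv_kw + 9000 * n_turbines + 1000 * conv_kw
    operating_per_yr = 500 - 30 * pv_kw - 60 * n_turbines  # net of energy savings
    return capital + operating_per_yr / 0.06

best = min(product(pv_sizes_kw, turbine_counts, converter_sizes_kw),
           key=lambda cfg: total_npc(*cfg))
print("lowest-NPC configuration:", best)
```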

III STUDY LOCATIONS AND THEIR CLIMATIC DATA



The 4.5 kW Wind/Solar micro-generation system in the Power Systems Research Laboratory at Victoria University was optimized for five selected suburbs in Victoria: Melbourne, Mildura, Nhill, Broadmeadows and Sale. Table 1 shows the climate indicators related to solar energy resources for all five selected suburbs. The monthly global clearness index and daily radiation for Victoria, Australia were collected from HOMER's help file [2] and NASA's Surface Solar Data Set website [6], as shown in Table 2. The global solar radiation per annum of Victoria, Australia is shown in Fig. 3. In Victoria, solar exposure and irradiation intensity are higher from January to March and from October to December, and lower from April to September, as shown in Fig. 3.

Fig. 1 4.5 kW Wind/Solar micro-generation system configuration



Fig. 2 HOMER codes for 4.5 kW Wind/Solar micro-generation system


Table 1 Climate indicators for Victorian Suburbs [2], [6]

Table 2 Global clearness index and daily radiation for Victoria, Australia [2], [6]


Fig. 3 Global solar radiation (kW/m2) per annum of Victoria, Australia [2], [6]
Table 3 Global Wind speed data for selected Victorian suburbs, Australia [7]


Fig. 4 Wind resources - Hourly wind speed data per annum for Melbourne [7]

Fig. 5 Wind resources - Monthly average wind speed for Mildura [7]

Fig. 6 Wind resources - Monthly average wind speed for Nhill [7]

Fig. 7 Wind resources - Monthly average wind speed for Sale [7]

Fig. 8 Wind resources - Monthly average wind speed for Broadmeadows [7]

Fig. 9 Power curve of 3kW VAWT [8]


Wind speed data for the five selected suburbs were obtained from the Weatherbase website [7]. Table 3 shows the global wind speed data for all five suburbs in Victoria, Australia. Fig. 4 shows the hourly wind speed data for the suburb of Melbourne for one year. It is also seen from Table 3 that, while Melbourne has the highest wind speed, Nhill has the least wind speed and Mildura the second least. Figs. (5-8) show the monthly average wind speed data of Mildura, Nhill, Sale and Broadmeadows [7], and Fig. 9 shows the power curve [8] of the 3kW VAWT.
A Load profile
Footscray Park Campus is the largest campus of Victoria University; it is situated next to parklands along the Maribyrnong River and looks over Flemington Racecourse. There are 9 buildings (A, C, D, E, G, K, L, M & P) of up to 7 levels. Fig. 10 shows the access and mobility map of Footscray Park Campus, Victoria University [9].
The 3kW VAWT is installed on the rooftop of Building D, Level 7, and the 1.5kW mono-crystalline solar panels are installed on Building C, Level 2. The 4.5kW Wind/Solar micro-generation system is connected to the grid in Building D, Level 5 [9]. An hourly electric load profile for Building D, Level 5 [9] was obtained from the Facilities Department, Victoria University, as shown in Fig. 11. Table 4 shows the hourly load profile for 24 hours, with peak demand from 9.00 a.m. to 5.00 p.m.
B Details of System Components
Cost, efficiency and lifetime of the major components are shown in Table 5.
Table 4 Hourly load profile of Building D, Level 5, Victoria University


Fig. 10 Victoria University, Footscray Park Campus access and mobility map [9]

Fig. 11 Hourly load profile of Building D, Level 5 for one year


Table 5 Details of System Components

C Electricity tariff

The electricity involved in the 4.5kW Wind/Solar micro-generation system includes the electricity purchasing tariff and the electricity sale-back tariff (feed-in tariff) [10-11]. When a continuous supply of electricity, day or night, is provided by the grid to domestic and commercial appliances, users pay an electricity bill calculated under different types of tariffs. Taking the AGL Energy tariff as an example: the single rate meter tariff is 29.238 c/kWh inclusive of GST; the two rate meter tariff, which allows a permanently wired storage hot water unit to be heated for 8 hours each night, is 19.701 c/kWh inclusive of GST; and the time-of-use meter tariff, which measures electricity during peak and off-peak times, is 36.850 c/kWh and 20.471 c/kWh respectively [10].

For the promotion of renewable energy systems in Australia, feed-in tariffs have been enacted by several State Governments. In Victoria, the State Government has introduced different feed-in tariff schemes for people who produce their own renewable energy with sustainable energy systems of less than 100kW and feed excess power into the grid. The different types of tariff in Victoria are the feed-in tariff for new applicants, the Standard feed-in tariff (closed to new applicants on 31st December 2012), the Transitional feed-in tariff (closed to new applicants on 31st December 2012) and the Premium feed-in tariff (closed to new applicants on 29th December 2011).

Since the 4.5kW Wind/Solar micro-generation system was installed in June 2010, it was eligible for the Premium feed-in tariff offered to small-scale renewable energy systems of 5kW or less: a credit of at least 60 c/kWh for excess electricity fed back into the grid [11]. The single rate meter tariff from AGL Energy [10] and the Premium feed-in tariff [11] were used to optimize the 4.5kW Wind/Solar micro-generation system in this paper.
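As a worked example of how the purchase and feed-in tariffs interact, the sketch below computes a net daily electricity cost from assumed import and export energies; the rates are the ones quoted above, while the energy figures are invented for illustration.

```python
# Net daily electricity cost under the AGL single rate tariff and the Victorian
# Premium feed-in tariff quoted above; the imported/exported kWh are illustrative.
PURCHASE_RATE = 0.29238   # $/kWh, single rate meter tariff incl. GST
FEED_IN_RATE = 0.60       # $/kWh, Premium feed-in tariff

def net_daily_cost(imported_kwh: float, exported_kwh: float) -> float:
    return imported_kwh * PURCHASE_RATE - exported_kwh * FEED_IN_RATE

# e.g. a day importing 20 kWh and exporting 6 kWh of excess generation:
print(f"${net_daily_cost(20.0, 6.0):.2f}")  # 20*0.29238 - 6*0.60 = $2.25
```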
IV RESULTS AND DISCUSSION

The simulation results presented in this section show the long-term implementation of the prototype 4.5kW Wind/Solar micro-generation system, where the electricity load is 8.4MWh/d with an 827kW peak. Economic, technological and environmental performances are the three types of results presented and discussed in this section. As indicators of economic performance, the initial cost, total net present cost (NPC) and cost of energy (COE) are used to measure financial quantities. System component sizes, renewable fraction and capacity shortage are used to measure the system's technological performance [1-5].

The system's life-cycle cost is represented by the total NPC, and HOMER calculates the NPC [2] as shown in (4):

C_NPC = C_ann,tot / CRF(i, R_proj)    (4)

where C_ann,tot is the total annualized cost ($/yr), R_proj the project lifetime (yr), i the interest rate (%) and CRF() the capital recovery factor, CRF(i, N) = i(1+i)^N / ((1+i)^N - 1).
The average cost per kWh of useful electrical energy produced by the system is defined as the levelized COE. HOMER divides the total annualized cost of producing electricity by the total useful electric energy production [2] as shown in (5):

COE = C_ann,tot / E_served    (5)

where E_served is the total useful electric energy production (kWh/yr).

The fraction of energy originating from renewable power sources is referred to as the renewable fraction, and HOMER calculates it [2] as shown in (6):

f_ren = E_ren / E_served    (6)

where E_ren is the electrical energy produced by the renewable sources (kWh/yr).



A shortfall between the required operating capacity and the amount of operating capacity the system can provide is defined as the capacity shortage; HOMER calculates the capacity shortage over the year [2]. The ratio of the total capacity shortage to the total electric load is known as the capacity shortage fraction, which HOMER calculates [2] as shown in (7):

f_cs = E_cs / E_load    (7)

where E_cs is the total capacity shortage (kWh/yr) and E_load the total electric load (kWh/yr).
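A compact sketch tying equations (4)-(7) together is given below; all numerical inputs are assumed for illustration and are not results from this study.

```python
# Equations (4)-(7): NPC via the capital recovery factor, levelized COE,
# renewable fraction and capacity shortage fraction. Inputs are illustrative.

def crf(i: float, n_years: int) -> float:
    """Capital recovery factor CRF(i, N) = i(1+i)^N / ((1+i)^N - 1)."""
    g = (1.0 + i) ** n_years
    return i * g / (g - 1.0)

c_ann_tot = 12_000.0    # total annualized cost, $/yr (assumed)
e_served = 50_000.0     # useful electric energy served, kWh/yr (assumed)
e_load = 52_500.0       # total electric load incl. unmet demand, kWh/yr (assumed)
e_renewable = 14_000.0  # energy from renewable sources, kWh/yr (assumed)
e_shortage = 2_500.0    # total capacity shortage, kWh/yr (assumed)

npc = c_ann_tot / crf(0.06, 25)   # (4), assuming 6% interest, 25-year project
coe = c_ann_tot / e_served        # (5)
f_ren = e_renewable / e_served    # (6)
f_cs = e_shortage / e_load        # (7)

print(f"NPC = ${npc:,.0f}, COE = {coe:.3f} $/kWh, "
      f"f_ren = {f_ren:.2f}, f_cs = {f_cs:.2f}")
```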

The performance profiles of the optimized systems of all 5 Victorian suburbs, for the grid-connected system with 9.6kW converter size, the grid-connected system with 4.5kW converter size and the off-grid system, are shown in Tables (6-19). From Tables (6-19), the system configuration, component sizing, initial capital cost, operating cost, total NPC, COE, renewable fraction and capacity shortage can be found.
A Economic performance

The grid-connected system with 9.6kW converter size, which is implemented in the Power Systems Research Laboratory at Victoria University, has the best economic performance in Melbourne, as shown in Table 6 (the system shows the minimum NPC), and the least in Nhill, as shown in Table 12 (the system shows the maximum NPC).

Similar results are obtained for the grid-connected system with 4.5kW converter size: Melbourne has the best economic performance, as shown in Table 7, and Nhill the least, as shown in Table 13. The initial capital cost, operating cost and total NPC of the 4.5kW converter size for Melbourne and Nhill, shown in Tables (7 and 13), are less than those of the 9.6kW converter size, shown in Tables (6 and 12), because of the difference in converter prices (the 4.5kW converter is $5000 cheaper than the 9.6kW converter).

The COE for the grid-connected system with both 9.6kW and 4.5kW converter sizes for all 5 suburbs (Melbourne, Mildura, Nhill, Sale and Broadmeadows) is 0.279 $/kWh. For the off-grid system, Melbourne has the least COE, of 0.236 $/kWh, and Nhill and Sale have the highest COE, of 0.239 $/kWh, as shown in Tables (8, 14 and 17).
B Technological performance
The system configuration and components for the grid-connected system with both 9.6kW and 4.5kW converter sizes are almost identical for all five suburbs.
For the off-grid system with 1.5kW PV and 400 units of the 3kW VAWT, the capacity shortage varies across the 5 suburbs. Mildura has the least capacity shortage, of 22%, as shown in Table 11; Melbourne and Broadmeadows have 24% capacity shortage, as shown in Tables (8 and 20); and Nhill and Sale have the most capacity shortage, of 25%, as shown in Tables (14 and 17).
For all five suburbs, if the capacity shortage is decreased by 5% to 10%, the COE increases rapidly. To keep the COE of the off-grid system equal to that of the grid-connected system, the capacity shortage (22%-25%) has to be met from batteries, diesel generators or the grid.
C Environmental performance
Fig. 12 shows the sum of the reduction of carbon emissions for the Victorian suburbs, and Fig. 13 shows the capacity shortage of all 5 suburbs for the off-grid system at a COE of 0.236 to 0.239 $/kWh. If the capacity shortage is decreased, carbon emissions can also be decreased.
V SUMMARY
The techno-economic performance of the 4.5kW Wind/Solar micro-generation system in Victoria, Australia was studied for the 8.4MWh/d load data of Building D, Level 5, obtained from the Facilities department of Victoria University, Footscray Park campus. The system configuration, system costs, capacity shortage and carbon emissions were analyzed for all five selected suburbs in Victoria.
In terms of economic considerations, for the grid-connected 4.5kW Wind/Solar micro-generation system with both 9.6kW and 4.5kW converter sizes, it was found that the COE is 0.279 $/kWh for all five selected suburbs. For the off-grid system, it was found that Melbourne has the least COE, of 0.236 $/kWh with 24% capacity shortage, while Nhill and Sale have the highest COE, of 0.239 $/kWh with 25% capacity shortage. It was also found that the COE increases rapidly as the capacity shortage is reduced in steps of 5%; therefore the capacity shortage has to be met from the grid, batteries or diesel generators. Based on the analysis of the simulation results, a bigger Wind/Solar system requires a larger primary investment but is able to reduce carbon emissions from the off-grid system by up to 75%-78% for all selected suburbs.

Table 6 Optimization Results: Grid connected system for Melbourne


(9.6kW Converter size)

Table 7 Optimization Results: Grid connected system for Melbourne


(4.5kW Converter size)
Table 9 Optimization Results: Grid connected system for Mildura

Table 10 Optimization Results: Grid connected system for Mildura

Table 11 Optimization Results: Off-grid system for Mildura

Table 14 Optimization Results: Off-grid system for Nhill

Table 17 Optimization Results: Off-grid system for Sale


Table 19 Optimization Results: Grid connected system for Broadmeadows


(4.5kW Converter size)
Table 20 Optimization Results: Off-grid system for Broadmeadows

Fig. 12 Sum of reduction of Carbon emissions for Victorian Suburbs


Fig. 13 Capacity shortage for Victorian Suburbs (COE at 0.236 to 0.239 $/kWh)

REFERENCES
[1] G. Liu, M. G. Rasul, M. T. O. Amanullah and M. M. K. Khan, "Techno-economic simulation and optimization of residential grid-connected PV system for the Queensland climate," International Conference on Power and Energy Systems (ICPS), pp. 1-6, 22-24 Dec. 2011.
[2] NREL, "HOMER getting started guide for HOMER version 2.1," tech. rep., National Renewable Energy Laboratory, operated for the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy, 2005.
[3] S. Paudel, J. N. Shrestha, F. J. Neto, J. A. F. Ferreira and M. Adhikari, "Optimization of hybrid PV/wind power system for remote telecom station," International Conference on Power and Energy Systems (ICPS), pp. 1-6, 22-24 Dec. 2011.
[4] Z. Simic and V. Mikulicic, "Small wind off-grid system optimization regarding wind turbine power curve," AFRICON 2007, pp. 1-6, 26-28 Sept. 2007.
[5] M. Moniruzzaman and S. Hasan, "Cost analysis of PV/Wind/diesel/grid connected hybrid systems," International Conference on Informatics, Electronics & Vision (ICIEV), pp. 727-730, 18-19 May 2012.
[6] NASA Surface meteorology and Solar Energy data set. [Online] Viewed 2013 June 02. Available: http://eosweb.larc.nasa.gov/sse/
[7] Weatherbase. [Online] Viewed 2013 June 21. Available: http://www.weatherbase.com
[8] G. M. Masters, Renewable and Efficient Electric Power Systems, New York: Wiley, 2004.
[9] Footscray Park Campus Access and Mobility Map. [Online] Viewed 2013 June 22. Available: http://www.vu.edu.au/sites/default/files/facilities/pdfs/footscray-park-access-and-mobilitymap.pdf
[10] AGL, Energy in action. [Online] Viewed 2013 June 21. Available: http://www.agl.com.au/
[11] Department of Environment and Primary Industries, Victoria. [Online] Viewed 2013 June 21. Available: http://www.dpi.vic.gov.au/home


Credit Score Tasks Scheduling Algorithm For Mapping A Set Of Independent Tasks Onto Heterogeneous Distributed Computing Grid Environment

Dr.G.K.Kamalam1, B.Anitha2 and S.Mohankumar3
1 M.E., Ph.D., Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.
2 M.E., Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.
3 B.Tech. II IT-A, Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.

Abstract - Heterogeneous grid environments are well suited to solving scientific and engineering applications that have large computational demands. The problem of optimal mapping, that is, selecting the appropriate resource and scheduling the tasks in an order onto the resources of a distributed heterogeneous grid environment, has been shown in general to be an NP-complete problem. NP-complete problems require the development of heuristic techniques to identify the best possible solution. In this paper, a new heuristic scheduling algorithm called the Credit Score Tasks Scheduling Algorithm (CSTSA) is proposed. It aims to maximize resource utilization and minimize the makespan. The new strategy of the Credit Score Tasks Scheduling Algorithm is to identify the appropriate resource and to find the order in which the set of tasks is to be mapped to the selected resource. The order in which the tasks are to be mapped is identified based on the credit score of each task. Experimental results show that the proposed Credit Score Tasks Scheduling Algorithm outperforms the Min-min heuristic scheduling algorithm in terms of resource utilization and makespan.
Index Terms - Computational Grid, Grid scheduling, Heuristic, Makespan.

I. INTRODUCTION
Grid is an infrastructure that supports the integrated and collaborative use of various technologies, such as computers, networks, databases and scientific instruments, owned and managed by multiple organizations. It is globally distributed and consists of heterogeneous and loosely coupled data and resources. The grid is a dynamic environment, so the available resources may change frequently. Middleware is one of the important components of grid computing; it divides a program into a number of pieces distributed among several computers [2,4].
A computational grid is defined as a distributed infrastructure that appears to an end user as a single system: it divides a job among individual machines, runs the calculations in parallel and returns the results to the originating machine. Scheduling has a direct impact on the performance of grid applications. One important challenge in task scheduling is to allocate the optimal resources to a job in order to minimize its computation time. Several heuristic task scheduling algorithms have been developed. Tasks enter the system dynamically, and the scheduler must allocate resources effectively, which is a tedious process [9,10,11].
The Opportunistic Load Balancing (OLB) algorithm assigns each task, in arbitrary order, to the next available machine, without considering the expected time to compute (ETC) of the task on that machine [5,6].
The Minimum Execution Time (MET) algorithm assigns each task, in arbitrary order, to the machine with the minimum expected execution time for that task, without considering the availability of that machine or the current load on the processor, in order to improve performance and speed up execution [1,7].
The Minimum Completion Time (MCT) algorithm assigns each task, in arbitrary order, to the processor with the earliest, minimum expected completion time. The ETC of job j on processor p is added to p's current schedule length, which gives the completion time of job j on processor p [1,7].
The Min-min algorithm calculates the expected completion time of each task on all processors, then assigns the task with the overall minimum expected completion time to the corresponding resource [5,6,8].
The Max-min algorithm is similar to the Min-min algorithm: first it calculates the minimum completion time of every task and selects, for each task, the machine with the minimum expected completion time; it then assigns the task with the maximum of these minimum completion times to the corresponding processor [5,6].
The Suffrage heuristic works as follows. The first step is to calculate the minimum and second minimum completion time for each task; the difference between the two values is defined as the suffrage value. In the second step, the task with the highest suffrage value is assigned to the machine with the minimum completion time. The mapping of a task to a machine suffers in terms of ECT according to its suffrage value [3].
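Since the proposed algorithm is later compared against Min-min, a short reference sketch of the Min-min heuristic (with a small invented ETC matrix) may be helpful; it follows the description above by repeatedly assigning the task whose best completion time is globally smallest.

```python
# Min-min heuristic: repeatedly pick the (task, resource) pair with the
# globally minimum expected completion time. ETC values are illustrative.

def min_min(etc):
    """etc[i][j] = expected execution time of task i on resource j."""
    n, m = len(etc), len(etc[0])
    ready = [0.0] * m                 # ready time RT_j of each resource
    unmapped = set(range(n))
    order = []
    while unmapped:
        # completion time TCT_ij = ET_ij + RT_j; take the global minimum
        i, j = min(((i, j) for i in unmapped for j in range(m)),
                   key=lambda ij: etc[ij[0]][ij[1]] + ready[ij[1]])
        ready[j] += etc[i][j]
        unmapped.remove(i)
        order.append((i, j))
    return order, max(ready)          # schedule and makespan

etc = [[4.0, 6.0], [3.0, 2.0], [7.0, 5.0]]  # 3 tasks, 2 resources (made up)
schedule, makespan = min_min(etc)
print(schedule, makespan)
```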
II. MATERIALS AND METHODS
A. Problem Definition
An application consists of n independent meta-tasks and a set of m heterogeneous resources. The problem of mapping the n meta-tasks to the set of m heterogeneous resources in a grid computing environment is an NP-complete problem [9, 12]. This paper proposes a new algorithm, the Credit Score Tasks Scheduling Algorithm, for solving the scheduling problem in a grid computing environment.
The order in which the tasks are mapped to the set of resources determines the efficiency of the schedule, and hence the makespan. The proposed Credit Score Tasks Scheduling Algorithm provides an ordered set of tasks, which specifies the order in which the tasks are to be scheduled to the set of m resources, and achieves a smaller makespan than the existing Min-min heuristic scheduling algorithm.

B. Proposed CSTSA Algorithm

The mapping of the n meta-tasks to the set of m heterogeneous resources is made based on the following assumptions [1,7]:
- A set of independent, non-communicating tasks called meta-tasks is being mapped.
- Heuristics originate a static mapping.
- Each resource executes a single independent task at a time.
- The number of tasks to be scheduled and the number of heterogeneous resources in the grid computing environment are static and known a priori.
- The ETC (Expected Time to Compute) matrix represents the expected execution time of each task on each resource. It is of size n*m, where n is the number of meta-tasks and m the number of heterogeneous resources, and it contains the accurate estimate of the expected execution time for each task on each resource.
- ETij represents the expected execution time of task ti on resource rj.
- The task set is represented as T = {T1, T2, T3, ..., Tn}.
- The resource set is represented as R = {R1, R2, R3, ..., Rm}.
- TCTij is the expected completion time of task Ti on resource Rj.
- RTj is the ready time of resource Rj.
- Makespan = max(TCTij).
- The ETC matrix is computed by the formula ETij = Tasklengthi / powerj, where Tasklengthi represents the length of task Ti in MI and powerj represents the computing power of resource Rj in MIPS.
- The ready time RTj of resource Rj is the time at which Rj completes the execution of the previously assigned tasks, so the expected completion time is TCTij = ETij + RTj (a small numerical sketch of these definitions follows below).
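The sketch below uses invented task lengths (in MI) and resource powers (in MIPS): it builds the ETC matrix as ETij = Tasklengthi / powerj and evaluates the makespan of a given mapping.

```python
# Building the ETC matrix from task lengths (MI) and resource powers (MIPS),
# then computing the makespan of a mapping; all numbers are illustrative.

task_lengths_mi = [900.0, 600.0, 1500.0, 300.0]   # Tasklength_i
resource_mips = [100.0, 300.0, 150.0]             # power_j

etc = [[length / power for power in resource_mips]
       for length in task_lengths_mi]             # ET_ij = Tasklength_i / power_j

def makespan(mapping):
    """mapping[i] = index of the resource task i is assigned to."""
    ready = [0.0] * len(resource_mips)
    for i, j in enumerate(mapping):
        ready[j] += etc[i][j]   # TCT_ij = ET_ij + RT_j
    return max(ready)

print(makespan([1, 0, 1, 2]))  # e.g. tasks 0 and 2 on resource 1
```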

The proposed Credit Score Tasks Scheduling Algorithm considers two criteria for scheduling the meta-tasks onto the resources. The two criteria considered for efficient scheduling are the Task Execution Time Credit and the Unique Value Credit for the meta-task. The proposed algorithm schedules the task with the highest credit score value to the resource that provides the minimum completion time for the task.
C. Task Execution Time Credit

The steps involved in calculating the task execution time credit for a meta-task are listed below:
1) From the ETC matrix, the maximum execution time of any task is identified: MAXET = max(ETij), 1 <= i <= n, 1 <= j <= m.
2) Credits are assigned to each task relative to MAXET, using the credit-assignment formula of Algorithm 1.
D. Unique Value Credit for Task

The unique value is an important criterion for scheduling meta-tasks onto the heterogeneous resources in a grid environment. A unique value is assigned to each task, and the proposed algorithm schedules tasks to resources based on the total credit score of each task. The unique value credit is found by Algorithm 2, which, for all submitted tasks in the task set T, derives each task's unique value credit using a denominator value dv. The denominator value dv is determined as follows: if the highest unique value given to a task is a two-digit number, then dv = 100; if the highest unique value given to a task is a three-digit number, then dv = 1000, and so on.
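The sketch below outlines the overall CSTSA flow. The exact credit-assignment formulas of Algorithms 1 and 2 appear as figures in the original and are not reproduced in this text, so the sketch substitutes two clearly labelled stand-ins (an execution-time credit proportional to each task's minimum execution time relative to MAXET, and UVCi = UVi / dv); it is an assumption-laden illustration, not the authors' exact algorithm.

```python
# Sketch of the CSTSA flow. The exact credit formulas of Algorithms 1 and 2
# are not reproduced in this text, so two stand-ins are assumed here:
#   CS_i  ~ min_j ET_ij / MAXET   (execution time credit; assumption)
#   UVC_i = UV_i / dv             (unique value credit; assumption)
import random

def cstsa_schedule(etc, seed=0):
    n, m = len(etc), len(etc[0])
    maxet = max(max(row) for row in etc)              # MAXET = max(ET_ij)
    cs = [min(etc[i]) / maxet for i in range(n)]      # assumed credit rule
    rng = random.Random(seed)
    uv = [rng.randint(1, 10) for _ in range(n)]       # unique values in 1..10
    dv = 10 ** len(str(max(uv)))                      # 2-digit max -> 100, etc.
    uvc = [v / dv for v in uv]
    tcs = [cs[i] * uvc[i] for i in range(n)]          # TCS_i = CS_i * UVC_i
    css = sorted(range(n), key=lambda i: -tcs[i])     # descending TCS order

    ready = [0.0] * m
    mapping = {}
    for i in css:  # each task goes to the resource with minimum completion time
        j = min(range(m), key=lambda j: etc[i][j] + ready[j])
        ready[j] += etc[i][j]
        mapping[i] = j
    return mapping, max(ready)

etc = [[4.0, 6.0], [3.0, 2.0], [7.0, 5.0]]  # same toy ETC as the Min-min sketch
print(cstsa_schedule(etc))
```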
E. An Illustrative Example
Consider the following example for a grid system with ten tasks and three resources. The ETC matrix is given in Table 1.
The maximum execution time in the given ETC matrix is MAXET = 19.9, and the credit values are CV1 = 9.9, CV2 = 6.6, CV3 = 16.5 and CV4 = 23.1.
Credit Score (CSi) for each task ti is computed using Algorithm1 and the result is shown in Table2.
Table2 Credit Score for each Task

A Unique Value (UV) for each task is assigned in random in the range 1 to 10. Unique Value Credit
(UVC) for each task is computed using the Algorithm 2 and is shown in Table3.
Table3 Unique Value Credit for each Task

19

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL,


ELECTRONICS AND COMPUTATIONAL INTELLIGENCE (ICAEECI16)
The total credit score for each task ti is computed using the formula,

TCSi = CSi * UVCi
and the result is shown in Table 4.

The tasks to be scheduled are ordered in the Credit Score Set CSS in the descending order of TCSi.

CSS={T6,T3,T8,T2,T5,T9,T7,T10,T4,T1}
Now, the tasks are scheduled to the resource with the minimum completion time. The makespan obtained is 43.96 sec.
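As an illustration of this assignment step, the sketch below walks the tasks in CSS order and greedily maps each one to the resource with the minimum completion time (ready time plus expected execution time). The function name and inputs are ours, since the paper's own CSTSA pseudocode is not reproduced here.

def cstsa_assign(order, et, m):
    """Greedy assignment: walk tasks in descending total-credit-score
    order and give each to the resource with the minimum completion
    time (ready time + expected execution time)."""
    ready = [0.0] * m
    schedule = {}
    for i in order:
        j = min(range(m), key=lambda r: ready[r] + et[i][r])
        schedule[i] = j
        ready[j] += et[i][j]          # resource j is now busy longer
    return schedule, max(ready)       # the second value is the makespan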
The order in which the tasks are scheduled, and the makespan obtained for the Min-min algorithm and the proposed Credit Score Tasks Scheduling Algorithm, are shown in Table 5.
Table 5 Comparison between the Min-min Algorithm and the Credit Score Tasks Scheduling Algorithm in makespan and task schedule order.

F. Credit Score Tasks Scheduling Algorithm (CSTSA):

RESULTS AND DISCUSSION


This section presents the experimental results computed for the benchmark model by Braun et al. [1,4,7].
Benchmark Descriptions
To evaluate the proposed algorithm, the benchmark model instances are divided into twelve different types of ETC matrices. The size of each ETC matrix is 512*16, where 512 represents the number of tasks and 16 represents the number of resources. The twelve combinations of ETC matrices are based on three metrics: task heterogeneity, resource heterogeneity, and consistency. For each of the twelve types of ETC matrix, the results were averaged over 100 different ETC matrices of the same type. The benchmark instances are labelled as u-x-yyzz.k, where
u - uniform distribution used in generating the ETC matrices
x - consistency (c - consistent, i - inconsistent, s - semi-consistent or partially consistent)
An ETC matrix is said to be consistent if, whenever a resource rj executes any task ti faster than resource rk, then rj executes all tasks faster than rk.
An ETC matrix is said to be inconsistent if a resource rj executes some tasks faster and some tasks slower than resource rk.
Semi-consistent ETC matrices are matrices that include a consistent sub-matrix.
Task heterogeneity is the amount of variation in the execution time of the tasks in the meta-task for a given resource.
yy - task heterogeneity (hi - high, lo - low)
Resource heterogeneity is the amount of variation in the execution time of a given task among all the resources.
zz - resource heterogeneity (hi - high, lo - low)
The twelve combinations of ETC matrices comprise three groups of four instances each. The first, second and third groups correspond to consistent, inconsistent and semi-consistent ETC matrices, each of them having high and low combinations of task and resource heterogeneity.

B. Evaluation Parameters

Makespan

Makespan is the most important optimization criterion for grid scheduling. It is calculated as makespan = max(TCTij).
Table 1 shows the 12 different types of instances in the first column, the makespan values obtained by Min-min in the second column, and by CSTSA in the third column. The graphical representation of Table 1 in Figure 1 shows that CSTSA provides a better makespan than the Min-min heuristic scheduling algorithm.


Figure 1: Comparison based on makespan

Tables 2, 3, 4 and 5 show the comparison of the makespan values obtained by Min-min and CSTSA for all four instances: High Task High Resource, High Task Low Resource, Low Task High Resource, and Low Task Low Resource heterogeneity. The four instances are presented for consistent, inconsistent, and semi-consistent (partially consistent) heterogeneous computing systems. Figures 2, 3, 4 and 5 show the graphical representation of all four instances for the three different consistencies.

Figure 2: Comparison based on makespan for High Task High Resource Heterogeneity

Table 3: Comparison based on makespan (in sec)


Figure 5: Comparison based on makespan for Low Task Low Resource Heterogeneity

IV CONCLUSION AND FUTURE WORK


A grid environment can accommodate users with large computational tasks. Scheduling the tasks to the appropriate resources to achieve the minimum completion time of the tasks is one of the challenging scenarios in a grid computing environment. The current work emphasizes selecting the order in which tasks are to be scheduled to the resources to achieve reduced makespan. Based on the experimental study using 12 different types of ETC matrices with various characteristics such as task heterogeneity, resource heterogeneity, and consistency, the Credit Score Tasks Scheduling Algorithm significantly outperformed the Min-min heuristic scheduling algorithm in achieving reduced makespan. Because of its robust performance, the Credit Score Tasks Scheduling Algorithm is a viable solution for the static scheduling problem in heterogeneous grid environments.
REFERENCES
[1] T. Braun, H. Siegel, N. Beck, L. Boloni, M. Maheshwaran, A. Reuther, J. Robertson, M. Theys, B. Yao, D. Hensgen, and R. Freund, A Comparison Study of Static Mapping Heuristics for a Class of Meta-tasks on Heterogeneous Computing Systems, In 8th IEEE Heterogeneous Computing Workshop (HCW'99), pp. 15-29, 1999.

[2] I. Foster and C. Kesselman, The Grid: Blueprint for a Future Computing Infrastructure, Morgan Kaufmann Publishers, USA, 1998.

[3] E.U. Munir, J. Li, and S. Shi, QoS Sufferage Heuristic for Independent Task Scheduling in Grid, Information Technology Journal 6(8), pp. 1166-1170, 2007.

[4] T.D. Braun, H.J. Siegel, and N. Beck, A Taxonomy for Describing Matching and Scheduling Heuristics for Mixed-machine Heterogeneous Computing Systems, IEEE Workshop on Advances in Parallel and Distributed Systems, West Lafayette, pp. 330-335, 1998.

[5] R. Armstrong, D. Hensgen, and T. Kidd, The Relative Performance of Various Mapping Algorithms is Independent of Sizable Variances in Run-time Predictions, In 7th IEEE Heterogeneous Computing Workshop (HCW'98), pp. 79-87, 1998.

[6] R.F. Freund and H.J. Siegel, Heterogeneous Processing, IEEE Computer, 26(6), pp. 13-17, 1993.

[7] T.D. Braun, H.J. Siegel, and N. Beck, A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems, Journal of Parallel and Distributed Computing 61, pp. 810-837, 2001.

[8] R.F. Freund and M. Gherrity, Scheduling Resources in Multi-user Heterogeneous Computing Environment with SmartNet, In Proceedings of the 7th IEEE HCW, 1998.

[9] G.K. Kamalam and V. Murali Bhaskaran, A New Heuristic Approach: Min-Mean Algorithm for Scheduling Meta-Tasks on Heterogeneous Computing Systems, IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 1, pp. 24-31, 2010.

[10] G.K. Kamalam and V. Murali Bhaskaran, An Improved Min-Mean Heuristic Scheduling Algorithm for Mapping Independent Tasks on Heterogeneous Computing Environment, International Journal of Computational Cognition, Vol. 8, No. 4, pp. 85-91, 2010.
[11] G.K. Kamalam and V. Murali Bhaskaran, New Enhanced Heuristic Min-Mean Scheduling Algorithm for Scheduling Meta-Tasks on Heterogeneous Grid Environment, European Journal of Scientific Research, Vol. 70, No. 3, pp. 423-430, 2012.
[12] H.Baghban, A.M. Rahmani, A Heuristic on Job Scheduling in Grid Computing Environment, In
Proceedings of the seventh IEEE International Conference on Grid and Cooperative Computing, pp.
141-146, 2008.


Power Quality Improvement for a Wind Farm Connected Grid Incorporating UPFC Controller
Prof. V. Sharmila Deve, Dr. K. Keerthivasan,
Dr. K. Gheetha and Anupama. B
Abstract - Wind generated power is always fluctuating due to its time varying nature, causing stability issues when it is incorporated into the power system network. In this paper, a control circuit for the Unified Power Flow Controller (UPFC) is proposed and developed to address power quality issues in wind energy conversion systems. A three phase fault involving ground is simulated using MATLAB and tested on an IEEE 5 bus system with a wind powered generator. When the fault is simulated at the terminals of the WECS, the PCC voltage reduces to a very low value. Results show that the voltage at the PCC, real power and reactive power are improved by the incorporation of the UPFC. The proposed controller design of the UPFC is also validated for a real time wind powered generator connected grid.
Index Terms - FACTS, UPFC, power quality, real power, reactive power.

I. INTRODUCTION
In recent years, wind energy has become one of the most important and promising sources of renewable energy. However, the incorporation of a large amount of wind energy in the power network results in fluctuating real power injection and varying reactive power absorption, which lead to voltage fluctuations and affect the stability and power quality of the system. Flexible AC Transmission System (FACTS) devices can give a solution for the variations created in the power system by such renewable resources and help to improve its stability, power transfer capability and control of power flow. FACTS controllers provide the necessary dynamic reactive power support and voltage regulation at the Point of Common Coupling. Here the Unified Power Flow Controller is chosen for power quality improvement because the UPFC allows simultaneous control of all three parameters of the power system, that is, the line impedance, voltage magnitude and power angle. It is primarily used for independent control of real and reactive power in transmission lines for flexible, fast, reliable and economic operation.
In WECS, the most commonly used generators are wound rotor induction generators. Induction generators draw reactive power from the main power grid and hence might result in voltage drops at the PCC. Moreover, the input power to these induction machines is variable in nature and hence the output voltages fluctuate unacceptably.
Much research has been done on FACTS devices, discussing controllers such as the Static Var Compensator and the STATic synchronous COMpensator to improve the voltage ride-through of induction generators [1]. The article [2] gives an approach based on Differential Evolution for optimal placement and parameter setting of the UPFC for improving power system security. Research on control design to improve the dynamic performance of a wind turbine induction generator unit is presented in [3], and how FACTS devices can be used for power transfer capability improvement using a fuzzy controller is explained in [4]. Many authors have discussed power quality improvement in WECS, voltage regulation, reactive power support and transient stability improvement [5-8].
In this paper, a UPFC control scheme is used with the grid connected wind energy generation system for power quality improvement and is simulated using MATLAB/SIMULINK. When a three phase to ground fault occurs, the voltage at the WECS terminals drops and thus the generated active power falls. After fault clearance, the reactive power consumption increases, resulting in reduced voltages at the PCC. Test case 1 considered here is an IEEE 5 bus system and case 2 is a real time grid system. The results show that a UPFC connected at the terminals of the WECS improves the voltage at the PCC as well as the real and reactive power, which in turn improves the power quality.
II. POWER QUALITY ISSUES
Perfect power quality means the voltage is continuous and sinusoidal, having a constant amplitude and frequency. It is described in terms of voltage, frequency, and interruptions. Grid connected wind turbines do affect the power quality. Power quality depends upon the interaction between the grid
and the wind turbines. There are two types of loads, linear and nonlinear. Motors, heaters and incandescent lamps are examples of linear loads that consume a current proportional to the voltage. A nonlinear load uses high-speed electronic power switching devices to convert the AC supply voltage to a constant DC voltage used by its internal circuits; during conversion, harmonic currents are generated on the power grid. Electronic lighting ballasts, ferromagnetic devices, adjustable speed drives, DC motor drives and arcing equipment are examples of nonlinear loads. The production of harmonic currents at the PCC causes several adverse effects such as line voltage distortion at the PCC, equipment overheating, failure of sensitive electronic equipment and flickering of fluorescent lights.
Voltage unbalance: According to the electricity board, the variation in the steady state voltage is in the range of +5% to -5% at the wind turbine terminals in wind farms [9]. Too low voltages can cause the relay protection to trip the wind turbines. Voltage flicker refers to dynamic variations in the network. Flicker produced during continuous operation is caused by power fluctuations, which mainly emanate from variations in the wind speed, the tower shadow effect and mechanical properties of the wind turbine. Flicker due to switching operations arises from the start and shut down of the wind turbines [10][11]. Frequency range: According to electricity boards and manufacturers, the grid frequency in India can vary from 47 to 51.5 Hz. Most of the time, the frequency is below the rated 50 Hz. For wind turbines with induction generators directly connected to the micro grid, frequency variation will be very considerable. Frequency variation is directly affected by the power production of the wind mill. Harmonics: Grid connected wind turbines interfaced through power converters emit harmonic currents that create voltage distortion. Power electronic loads, rectifiers and inverters are the sources that produce harmonics. These harmonics affect the wind turbine generators, leading to generator overheating, low power factor and increased generator vibration. Transients are of very short duration and vary greatly in magnitude. When a transient occurs, voltages of thousands of volts can be injected into the electrical system, causing problems for equipment down the line [10][11].

III FACTS CONTROLLERS FOR POWER QUALITY IMPROVEMENT
FACTS devices are based on power electronic controllers used in the transmission line for better utilization of the existing transmission system, with increased transmission capacity and system reliability. FACTS devices increase dynamic and transient grid stability, and these controllers respond quickly when properly tuned. FACTS devices are mostly used to regulate the voltage and the power flow through the lines. The UPFC is a flexible controller and is used in this research work for power quality improvement in the WECS connected to the grid.

IV PROPOSED SYSTEM CONFIGURATION

All three parameters of line power flow (line impedance, voltage and phase angle) can be simultaneously controlled by the Unified Power Flow Controller. The UPFC combines the features of two devices: the STAtic synchronous COMpensator (STATCOM) and the Static Synchronous Series Compensator (SSSC) [12]. These are two Voltage Source Converters connected respectively in shunt with the transmission line through a shunt transformer and in series with the transmission line through a series transformer, connected to each other by a common DC link including a storage capacitor. Filters are connected across the capacitor to prevent the flow of harmonic currents generated due to switching. At the output of the converters, the transformers are used to give isolation, modify voltage/current levels and prevent the DC capacitor from being shorted by the operation of the various switches. Insulated Gate Bipolar Transistors (IGBTs) with anti-parallel diodes are the power electronic devices used for the shunt and series converters. The shunt inverter is used for voltage regulation at the point of connection, injecting an opportune reactive power flow into the line and balancing the real power flow exchanged between the series inverter and the transmission line. The series inverter can be used to control the real and reactive line power flow by inserting an opportune voltage with controllable magnitude and phase in series with the transmission line. Thereby, the UPFC can fulfil the functions of reactive shunt compensation, active and reactive series compensation and phase shifting.

V. CONTROL OF PROPOSED SYSTEM

Fig.1 shunt controller of UPFC

Fig.2 Series controller of UPFC

In order to control the bus voltage, the sending-end voltage (Vs) is measured instantly and subtracted from its reference value (Vs_ref), which gives the error. This error signal is given as input to a PI block [13]. The output of the PI controller gives the magnitude of the injected shunt voltage. Similarly, the DC link capacitor voltage (Vdc) is also measured and subtracted from its reference value (Vdc_ref) to get the error; this error signal is given as input to a PI block to obtain the angle. The Pulse Width Modulation (PWM) technique is used to generate the pulses for the IGBTs: the reference signal is compared with a carrier (triangle) signal and the outputs of the comparators are given to the converter switches as firing signals.
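As an illustration of this control structure (not the authors' MATLAB/SIMULINK model), the Python sketch below implements the two PI blocks in discrete time. The gains, sampling step and wiring are placeholder assumptions, not tuned values from the paper.

class PI:
    """Discrete PI block: u = Kp*e + Ki * integral of e dt."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.acc = 0.0
    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

# Placeholder gains and step size -- not values from the paper.
mag_ctrl = PI(kp=0.5, ki=20.0, dt=1e-4)   # |V_inj| from the (Vs_ref - Vs) error
ang_ctrl = PI(kp=0.1, ki=5.0, dt=1e-4)    # angle from the (Vdc_ref - Vdc) error

def shunt_reference(vs, vs_ref, vdc, vdc_ref):
    """One control step: returns the magnitude and angle of the injected
    shunt voltage, to be compared with a triangle carrier (PWM) to fire
    the IGBTs."""
    return mag_ctrl.step(vs_ref - vs), ang_ctrl.step(vdc_ref - vdc)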

VI. DESIGN OF PROPOSED SYSTEM

In case 1 the UPFC performance has been tested in an IEEE 5 bus system for power quality
improvement.
TEST CASE 1: IEEE 5 BUS SYSTEM
In this test system, shown in Fig.3, buses 1 and 2 are generator buses. Here bus 1 is considered an IG based wind farm and buses 3, 4, 5 are load buses (PQ buses). The base case has been taken as 11 kV and 18 MW.

Fig.3 Single line diagram of IEEE 5 bus system

A three phase to ground fault is applied near the WECS from 1 sec to 2 sec. After connecting the UPFC across the PCC of the WECS and bus 1, improvement has been observed in the PCC voltage, real power and reactive power. The simulation results for the PCC voltage, real power and reactive power with and without UPFC are shown in Fig.7, Fig.8, and Fig.9 respectively.
CASE 2: GRID CONNECTED REAL TIME WECS
The load values considered here are the real time load demands of a wind farm connected to an 11 kV bus. The UPFC performance has been tested here for power quality improvement. In this system a three phase to ground fault is applied near the WECS from 1 sec to 2 sec and the performance of the UPFC is tested; the results show that the real power, reactive power and voltage at the PCC have improved. Fig.4 shows the single line diagram of the WECS connected to the real time load.

Fig.4 Grid connected WECS

VII SIMULATION CIRCUITS WITH RESULTS

Fig.5 Simulation diagram of IEEE 5 bus system

Fig.6 Simulation diagram of grid connected WECS

CASE1: IEEE 5 BUS SIMULATION RESULTS


Fig.7 PCC voltage with and without UPFC



Fig.8 Real power with and without UPFC


Fig.9 Reactive power with and without UPFC

CASE 2: SIMULATION RESULTS OF GRID CONNECTED WECS


Fig.10 PCC voltage with and without UPFC



Fig.11 Real power with and without UPFC


Fig.12 Reactive power with and without UPFC

VIII DISCUSSION ON RESULTS
TEST CASE 1: It is found that during the fault the PCC voltage without the UPFC reduces to a very low value, and after clearance it settles at 5350 volts. As soon as the UPFC is connected, the PCC voltage increases to 8700 volts. It can also be observed that when the fault occurs the real power settles at 9 MW without the UPFC; with the incorporation of the UPFC it increases to 13.8 MW. The reactive power settles at 27 MVAR without the UPFC, and after connecting the UPFC it reduces to 21 MVAR.
TEST CASE 2: It is found that during the fault the PCC voltage without the UPFC reduces to a very low value, and after clearance it settles at 5300 volts. As soon as the UPFC is connected, the PCC voltage increases to 8700 volts. It can also be observed that during the fault the real power settles at 9.7 MW without the UPFC, and after incorporating the UPFC it increases to 14 MW. The reactive power settles at 30 MVAR without the UPFC, and after incorporating the UPFC it reduces to 21 MVAR.
IX FFT ANALYSIS
FFT ANALYSIS FOR IEEE 5 BUS SYSTEM
The power quality analysis is done on the IEEE 5 bus system and also on the induction generator based grid connected wind power generating system using the FFT tool of the powergui.
The Total Harmonic Distortion in the FFT analysis of the PCC voltage without the UPFC controller for the IEEE 5 bus system is 8.53%, as shown in Figure 13, and that of the PCC voltage with the UPFC controller is 5.03%, as shown in Figure 14.

Fig.13 FFT analysis of PCC voltage without UPFC

Fig.14 FFT analysis of PCC voltage with UPFC

FFT ANALYSIS FOR GRID CONNECTED WIND FARM


Fig.15 FFT analysis of PCC voltage without UPFC

The Total Harmonic Distortion in the FFT analysis of the PCC voltage for the grid connected wind farm without the UPFC controller is 6.72%, shown in Figure 15, and with the UPFC controller it is reduced to 2.01%, shown in Figure 16. From the figures it can be observed that the UPFC controller helps to mitigate the harmonic distortion in the transmission line.

Fig.16 FFT analysis of PCC voltage with UPFC
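For readers reproducing this analysis outside the powergui tool, the following Python sketch shows one common way to estimate THD from an FFT of the sampled PCC voltage. The 50 Hz fundamental and the number of harmonics summed are assumptions for illustration, not settings quoted from the paper.

import numpy as np

def thd_percent(v, fs, f0=50.0, n_harmonics=20):
    """THD = 100 * sqrt(sum of harmonic magnitudes squared) / fundamental,
    reading spectral magnitudes at integer multiples of f0 from an FFT
    of the sampled voltage v (sampling rate fs in Hz)."""
    spectrum = np.abs(np.fft.rfft(v))
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    mag = lambda f: spectrum[np.argmin(np.abs(freqs - f))]  # nearest FFT bin
    harmonics = [mag(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / mag(f0)

# A 50 Hz wave with a 5th harmonic at 5% should report roughly 5% THD.
t = np.arange(0, 0.2, 1.0 / 10000.0)
v = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 250 * t)
print(thd_percent(v, fs=10000.0))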

X CONCLUSION
The performance of the proposed method has been simulated for an IEEE 5 bus system and a real time wind farm connected grid. The UPFC is connected at the PCC to compensate the voltage sag created by the fault. It is observed that the real power flow increases and the reactive power absorption decreases after fault clearance by incorporating the UPFC at the PCC. The Total Harmonic Distortion is also reduced using the proposed UPFC controller. Therefore, it is concluded that the proposed UPFC control results in power quality improvement of a grid with a wind farm.
REFERENCES
[1] Saad-Saoud Z, Jenkins N, The application of advanced static VAR compensators to wind
farms, IEEE colloquium on power electronics for renewable energy, London, June 1997.
[2] Shaheen H.I, Rashed G.I and Cheng S.J, Optimal location and parameter setting of UPFC for enhancing power system security based on Differential Evolution algorithm, Int. J. Electrical Power and Energy Systems, vol. 33, pp. 94-105, 2011.

[3] Ezzeldin S.A. and Xu Wilson, Control design and dynamic performance analysis of a wind turbine induction generator unit, IEEE Transactions on Energy Conversion, 2000, vol. 15, p. 91-96.

[4] Shameem Ahmad, Fadi M. Albatsh, Saad Mekhilef and Hazlie Mokhlis, Fuzzy based controller for dynamic Unified Power Flow Controller to enhance power transfer capability, Energy Conversion and Management, vol. 79, 2014, p. 652-665.
[5] Ahmed Tarek, Noro Osamu, Hiraki Eiji and Nakaoka Mutsuo, Terminal voltage regulation characteristics by static compensator for a three-phase self excited induction generator, IEEE Transactions on Industry Applications, 2004, vol. 40, p. 978-988.
[6] Chompoo-inwai C, Yingvivatanapong C, Methaprayoon K and Lee Wei-Jen, Reactive power compensation techniques to improve the ride-through of induction generators during disturbance, IEEE Transactions on Industry Applications, 2005, vol. 41, p. 666-672.
[7] Saad-Saoud Z, Lisboa ML, Ekanayake JB, Jenkins N and Strbac G, Application of STATCOMs to wind farms, In IEE Proceedings on Generation, Transmission and Distribution, 1998, vol. 145, p. 511-516.

[8] Molinas M, Vazquez S, Takaku T, Carrasco JM, Shimada R and Undeland T, Improvement of transient stability margin in power systems with integrated wind generation using a STATCOM: an experimental verification, In Proceedings of the Future Power Systems Conference, Amsterdam, November 2005.

[9] K.C. Divya and P.S. Nagendra Rao, Effect of Grid Voltage and Frequency Variations on the Output of Wind Generators, Electric Power Components and Systems, Taylor & Francis, vol. 6, 2008, p. 602-614.

[10] Power quality issues, standards and guidelines, IEEE, Vol. 32, May 1996.
[11] J.J. Gutierrez, J. Ruiz, L. Leturiondo, and A. Lazkano, Flicker measurement system for wind turbine certification, IEEE Trans. Instrum. Meas., vol. 58, no. 2, p. 375-382.
[12] T.T. Nguyen and V.L. Nguyen, Dynamic Model of Unified Power Flow Controllers in load flow analysis,

[13] R. Jayashri and R.P. Kumudini Devi, Effect of tuned unified power flow controller to mitigate the rotor speed instability of fixed-speed wind turbines, Renewable Energy, vol. 34, 2009, p. 591-596.


Image Color Correction Using Mosaicking Application
P. Pavithra,
M.Tech Information Technology,
Vivekananda College of Engineering for Women,
Tiruchengode - 637205
Abstract - Photo mosaicking is a technique used to divide a photograph into equal sized rectangular sections, each of which is replaced with another image that matches the target photo. Color layout and edge histograms were previously considered to perform the mosaicking operations, which results in low pixel intensity values in low resolution images. This paper proposes filtering and convolution based processing for correcting the disparities in an image. Images are color segmented using a median filtering technique; the filtering operation is used to remove the disparities in the image. An inverse color gradient algorithm is used to determine the layer that needs mosaicking. Convolution and image mosaicking are performed on all regions of the image. The image can then be well scrutinized as a high resolution image.
Index Terms - Color correction, image mosaicking, color transfer, color palette mapping function

I. INTRODUCTION
Photo mosaicking is a concept in which a picture, usually a photograph, is divided into equal sized rectangular sections, each of which is replaced with another photograph that matches the target photo. Image mosaicking and similar variations such as image compositing and stitching have found a huge field of application, ranging from aerial or satellite imagery to medical imaging, street view maps, city 3D modelling, texture synthesis and stereo reconstruction, to name a few. In general, whenever merging two or more images of the same scene is required for evaluation or integration purposes, a mosaic is built. Two problems are involved in the computation of an image mosaic: the geometric and the photometric correspondence. An image mosaicking application requires both photometric and geometric registration between the images that compose the mosaic. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using the median filtering technique, and the local joint image histograms of each region are modelled as collections of truncated Gaussians using a maximum likelihood estimation procedure.
The geometric correspondence is usually referred to as image registration: the procedure of overlaying two or more images of the same scene taken at different times, possibly from different viewpoints and by different sensors. It should be noted that in most cases the alignment produced by a registration method is not accurate to the pixel level; hence, a pixel to pixel direct mapping of color is not a feasible solution. On the other hand, the photometric correspondence between images deals with the photometric alignment of the image capturing devices. The same object under the same lighting condition should be represented by the same color in two different images. However, even in a set of images taken from the same camera, the colors representing an object may differ from picture to picture. This poses a problem for the fusion of information from several images, so the problem of how to balance the color of one picture so that it matches the color of another must be tackled. This procedure of calibration and photometric alignment is referred to as color correction between images. This paper proposes a new color correction algorithm presenting several technical novelties compared to the state of the art: images are color segmented using a median filtering technique; the filtering operation is used to remove the disparities in the image; an inverse color gradient algorithm is used to determine the layer that needs mosaicking; and convolution and image mosaicking are performed on all regions of the image, after which the image can be well scrutinized as a high resolution image. A methodology is also given to expand the color palette mapping functions to the non-overlapping regions of the images. To the best of our knowledge, this paper also presents one of the most complete evaluations of color correction algorithms for image mosaicking published in the literature. An extensive comparison, which includes nine other approaches, two datasets and
two distinct evaluation metrics with over sixty image pairs, is presented. The proposed color correction algorithm achieves very high-quality results when compared to state of the art algorithms. The industrial applications include automatic 3D digital image metrology quality control (for example object alignment, offset displacement and 3D pose estimation).
2. EXISTING WORK
The overlapping portion of the target image undergoes a mean-shift based color segmentation
process. Then, color segmented regions are extracted using a region fusion algorithm. Each of the color
segmented regions is then mapped to a local joint image histogram. Then, a set of Gaussian functions is
fitted to the local joint image histogram using a Maximum Likelihood Estimation (MLE) process and
truncated Gaussian functions as models. These Gaussians are then used to compute local color palette
mapping functions. The next step is to expand the application of these functions from the overlapping area
of target image to the entire target image. Finally, the entire color corrected image is produced by applying
the color palette mapping functions to the target image.
3. PROPOSED APPROACH
This paper proposes filtering and convolution based processing for correcting the disparities in an image. Images are color segmented using a median filtering technique. The filtering operation is used to remove the disparities in the image; it preserves the edges of the image while removing the noise. Convolution based processing is used to convert the image into matrix format. An inverse color gradient algorithm is used to determine the layer that needs mosaicking. Image mosaicking is performed on all regions of the image. The image can then be well scrutinized as a high resolution image.

Fig 1: Image mosaicking architecture

3.1 Median Filtering Technique

In signal processing it is often desirable to carry out some kind of noise reduction on a picture or signal. The median filter is a widely used nonlinear digital filtering technique, often applied to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves the edges of the image while removing noise. The median is a nonlinear local filter whose output value is the middle element of a sorted array of pixel values from the filter window. Since the median value is robust to outliers, the filter is used for reducing impulse noise. At the boundaries of the signal or image, where there is no entry preceding the first value, the first value may be repeated (and likewise the last value) to fill the missing window entries, but there are other schemes with different properties that might be preferred in particular circumstances: handling the boundaries with or without cropping the signal or image afterwards, or fetching entries from other places in the signal (in images, entries from the far vertical or horizontal boundary might be selected).
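A minimal Python sketch of this window-median operation for a single-channel image is given below (a color image would be filtered channel by channel); it uses border replication, one of the boundary schemes mentioned above.

import numpy as np

def median_filter(image, k=3):
    """Replace each pixel with the median of its k*k window; borders
    are padded by replicating edge pixels."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

noisy = np.array([[10, 10, 10], [10, 255, 10], [10, 10, 10]])  # impulse noise
print(median_filter(noisy))    # the 255 spike is removed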
3.2 Convolution Based Processing

Convolution is an important operation in signal and image processing. Convolution operates on two signals (in 1D) or two images (in 2D): we can think of one as the input signal (or image), while the other, called the filter, acts on the input image to produce an output image (so convolution takes two images as input and produces a third as output). Convolution is an incredibly important concept in many areas of engineering and mathematics.
It is a general purpose filter effect for images: a matrix of integers applied to the image through a mathematical operation. It works by determining the value of each central pixel by adding the
weighted values of all its neighbours together. The output is a new, modified, filtered image. A convolution is done by multiplying a pixel's and its neighbouring pixels' color values by a matrix called a kernel. A kernel is a small matrix of numbers used in image convolutions; differently sized kernels containing different patterns of numbers produce different results under convolution. The dimension of a kernel is arbitrary, but 3x3 is frequently used. Typical kernel operations include smoothing, Gaussian blur, mean removal, sharpening, embossing, edge detection and custom operations. Here, convolution implements smoothing of the image so as to make the comparison more accurate.
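The Python sketch below applies a 3x3 kernel to a single-channel image in exactly this weighted-neighbour fashion, using zero padding at the borders; the box-blur and sharpen kernels shown are just two of the listed effects, with arbitrary example weights.

import numpy as np

def apply_kernel3x3(image, kernel):
    """Each output pixel is the weighted sum of the pixel and its eight
    neighbours, with zero padding at the borders."""
    h, w = image.shape
    padded = np.pad(image.astype(float), 1)
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

smooth = np.ones((3, 3)) / 9.0          # box-blur / smoothing kernel
sharpen = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]])        # a common sharpen kernel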
3.3 Inverse Color Gradient Algorithm

To determine the layer that needs mosaicking, and which part of the image can be used for mosaicking, we use the inverse color gradient algorithm. A dynamical system produces a sequence of values z0, z1, z2, ..., zn. Fractal images are created by assigning one of these sequences to each pixel in the image; the coloring algorithm is what interprets this sequence to produce a final color.
Typically, the coloring algorithm produces a single value for every pixel. Since color is a three-dimensional space, that one-dimensional value must be expanded to produce a color image. The common method is to create a palette, a sequence of 3D color values connected end to end; the coloring algorithm value is used as a position along this multi-segmented line (the gradient). If the last palette color is connected to the first, a closed segmented loop is formed, and any real value from the coloring algorithm can be mapped to a defined color in the gradient. This is similar to the pseudo-color renderings often used for infrared imaging. Gradients are normally linearly interpolated in RGB space (Red, Green, Blue), but they can also be interpolated in HSL space (Hue, Saturation, Lightness) and interpolated with spline curves instead of straight line segments.
The selection of the gradient is one of the most critical artistic choices in creating a high-quality fractal image. Color selection can emphasize one part of a fractal image while de-emphasizing others. In extreme cases, two images with the same fractal parameters but different color schemes will appear totally different.
Some coloring algorithms produce discrete values, while others produce continuous values. Discrete values produce visible stepping when used as a gradient position. Until recently this was not terribly important, as the restriction of 8-bit color displays introduced an element of color stepping on gradients anyway, and discrete coloring values were mapped to corresponding discrete colors in the gradient. With the introduction of inexpensive 24-bit displays, algorithms which produce continuous values are becoming more important, as they permit interpolating along the color gradient to any color precision desired.
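A sketch of this palette lookup is shown below: a coloring-algorithm value in [0, 1) is mapped onto a closed, linearly interpolated RGB gradient. The three control colors are arbitrary examples, not a palette from the paper.

def palette_color(t, palette):
    """Map a coloring-algorithm value t in [0, 1) to an RGB color by
    linear interpolation along a closed, multi-segment gradient."""
    n = len(palette)
    pos = (t % 1.0) * n                 # position along the closed loop
    i = int(pos)
    frac = pos - i
    c0, c1 = palette[i], palette[(i + 1) % n]   # last color wraps to the first
    return tuple(a + (b - a) * frac for a, b in zip(c0, c1))

gradient = [(0, 0, 64), (255, 128, 0), (255, 255, 255)]   # arbitrary control colors
print(palette_color(0.4, gradient))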
3.4 Image Mosaicking

Photo mosaicking is a concept in which a picture is divided into equal sized rectangular sections, each of which is replaced with another photograph that matches the target photo.
To the best of our knowledge, this paper also includes one of the most complete evaluations of color correction algorithms for image mosaicking published in the literature. An extensive comparison is presented, which includes other approaches, two datasets with a number of image pairs, and two distinct evaluation metrics. Image mosaicking and similar variations such as image compositing have found a vast field of applications, ranging from satellite or aerial imagery to medical imaging, street view maps, city super-resolution, texture synthesis and stereo reconstruction, to name a few. In general, whenever merging two or more images of the same scene is required for comparison or integration purposes, a mosaic is built. Two problems are involved in the computation of an image mosaic: the geometric and the photometric correspondence.
4. CONCLUSION
This work proposes a novel color correction algorithm. Images are color segmented and extracted using a median filtering technique. Each segmented region is used to build a local color palette mapping function with convolution based processing. An inverse color gradient algorithm is used to determine the layer that needs mosaicking. Finally, by using an extension of the color palette mapping functions to the whole picture, it is possible to make mosaics where no color transitions are noticeable.
For a proper assessment of the performance of the proposed algorithm, ten other color correction algorithms were evaluated (#2 through #11), along with three alternatives to the proposed approach (#12b through #12d). Each of the algorithms was applied to two datasets, with a combined total of 63 image pairs. The proposed approach outperforms all other algorithms on most of the image pairs in the datasets, considering the PSNR and S-CIELAB evaluation metrics. Not only has it obtained some of the best average
scores, but it also proves to be more consistent and robust. Results have shown that the proposed approach achieves very high-quality results even if no color segmentation preprocessing step is used. Results also show improved effectiveness of the color correction algorithm. Finally, we show that the RGB and lαβ color spaces achieve similar color correction performance.
REFERENCES
[1] A. Abadpour and S. Kasaei, A fast and efficient fuzzy color transfer method, in Proc. 4th IEEE Int. Symp. Signal Process. Inf. Technol., Dec. 2004, pp. 491-494.
[2] M. Ben-Ezra, A. Zomet, and S. Nayar, Video super-resolution using controlled subpixel detector shifts, IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 977-987, Jun. 2005.
[3] D. Comaniciu and P. Meer, Mean shift: A robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603-619, May 2002.
[4] J. S. Duncan and N. Ayache, Medical image analysis: Progress over two decades and the challenges ahead, IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 1, pp. 85-106, Jan. 2000.
[5] H. S. Faridul, J. Stauder, J. Kervec, and A. Tremeau, Approximate cross channel color mapping from sparse color correspondences, in Proc. IEEE Int. Conf. Comput. Vis. Workshops (ICCVW), Dec. 2013, pp. 860-867.
[6] U. Fecker, M. Barkowsky, and A. Kaup, Histogram-based prefiltering for luminance and chrominance compensation of multiview video, IEEE Trans. Circuits Syst. Video Technol., vol. 18, no. 9, pp. 1258-1267, Sep. 2008.
[7] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski, Non-rigid dense correspondence with applications for image enhancement, in Proc. ACM SIGGRAPH, 2011, pp. 70:1-70:10.
[8] C.-H. Hsu, Z.-W. Chen, and C.-C. Chiang, Region-based color correction of images, in Proc. Int. Conf. Inf. Technol. Appl., Jul. 2005, pp. 710-715.
[9] J. Jia and C.-K. Tang, Tensor voting for image correction by global and local intensity alignment, IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 1, pp. 36-50, Jan. 2005.
[10] G. Lee and C. Scott, EM algorithms for multivariate Gaussian mixture models with truncated and censored data, Comput. Statist. Data Anal., vol. 56, no. 9, pp. 2816-2829, Sep. 2012.
[11] V. Lempitsky and D. Ivanov, Seamless mosaicing of image-based texture maps, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2007, pp. 1-6.
[12] A. Levin, A. Zomet, S. Peleg, and Y. Weiss, Seamless image stitching in the gradient domain, in Proc. Eur. Conf. Comput. Vis., May 2003, pp. 377-389.
[13] P. Meer and B. Georgescu, Edge detection with embedded confidence, IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 12, pp. 1351-1365, Dec. 2001.
[14] B. Sajadi, M. Lazarov, and A. Majumder, ADICT: Accurate direct and inverse color transformation, in Proc. Eur. Conf. Comput. Vis., Sep. 2010, pp. 72-86.
[15] P. Soille, Morphological image compositing, IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 673-683, May 2006.


Threshold Based Efficient Data Transmission in Hybrid Wireless Networks
R. Mohana,
M.Tech Information Technology,
Vivekananda College of Engineering for Women,
Tiruchengode - 637205
Abstract - The Distributed Three-hop Routing protocol (DTR) is used for data transmission in hybrid wireless networks. DTR splits data into segments and transmits the segments in a distributed manner, using at most two hops in ad-hoc transmission mode and one hop in cellular transmission mode. However, the selection of trusted nodes for data transmission is complicated in DTR, which creates security issues. This paper proposes a TEEN APTEEN SPEED (TAS) protocol for trust node selection. The TAS protocol allocates a threshold value to each node in the network. Based on the threshold value, a trusted node is selected for efficient data transmission in the hybrid wireless network. The threshold value also maintains security in the network so that unauthorized spoofing nodes cannot enter it. Furthermore, this paper implements an overhearing technique in which the sending node shares the content with one or more other nodes before data transmission, so that a failed node can be discovered and replaced.
Index Terms - Hybrid wireless networks, threshold value, trust node, overhearing.

I. INTRODUCTION
Hybrid wireless networks combine mobile ad-hoc networks and infrastructure wireless networks, and are expected to be an enhanced network structure for the next generation network. Depending on the environment conditions, a node can choose base station transmission mode or mobile ad-hoc transmission mode. A mobile ad-hoc network is an infrastructure-less network: the devices can move in any direction and the links between devices can change frequently. In this network, data is transmitted from source to destination in a multi-hop manner through intermediate nodes. In an infrastructure wireless network (e.g. a cellular network), each device communicates with other devices through base stations. Each cell in a cellular network has a base station, and these base stations are connected via wire, fiber or wirelessly through switching centers.
If a region has no communication infrastructure, or the existing infrastructure is difficult or inconvenient to use, the nodes of a hybrid wireless network may still be able to communicate through the construction of an ad-hoc network. In such a network, each mobile node operates as a host and also as a router, forwarding packets for other mobile nodes that may not be within direct wireless transmission range. Each node participates in both ad-hoc routing and infrastructure routing; for this, the Distributed Three-hop Routing protocol is used, which allows discovering a three-hop path to any other node through the network. The first two hops use ad-hoc networking, sometimes called infrastructure-less networking, since the mobile nodes in the network dynamically create routes among themselves to form their own network. The third hop is created in infrastructure networking. Most Wi-Fi networks function in infrastructure mode: devices communicate through a single access point, which is generally the wireless router. For example, consider two laptops placed next to each other, each connected to the same wireless network. Even though the two laptops are next to each other, they are not communicating directly in an infrastructure network. Possible uses of hybrid wireless networks include students using laptops to participate in an interactive lecture, business associates sharing information during a meeting, and soldiers or emergency personnel communicating situation-awareness information during disaster relief and coordinating efforts after a hurricane or earthquake.
A spread code is generally used for secured data transmission in wireless communication, together with measures of the quality of wireless connections. In wired networks, the existence of a wired path between the sender and receiver determines the correct reception of a message, but in wireless networks path loss is a major problem. A wireless communication network has to take many environmental parameters into account to report background noise and the interfering strength of other simultaneous transmissions; the SINR attempts to capture this aspect. So the TAS protocol is implemented to maintain the details about the sender, the receiver and the communication medium in the network. This is implemented
through the overhearing concept. TAS implements grouping of nodes depending on the threshold value so that communication becomes easier. In overhearing, the data is transferred to many nearby nodes in a cluster. A cluster is a group of nodes containing a cluster head and a gateway. The basic idea is to separately learn unknown and possibly random mobility parameters and to group mobile nodes with similar mobility patterns into the same cluster. The nodes in a cluster can then interchangeably share their resources for load balancing and overhead reduction, aiming to achieve scalable and efficient routing.
In the TAS protocol, a secured code called the threshold value is used. The nodal contact probabilities are updated with the help of the threshold value and converge to the true contact probabilities. Subsequently, a set of functions is devised to form clusters and choose gateway nodes based on nodal contact probabilities. Finally, gateway nodes exchange the network information and perform routing. The results show that this achieves a higher delivery ratio and considerably lower overhead and end-to-end delay compared to the non-clustering counterpart.
2. EXISTING WORK
The base stations (BSs) are connected by means of a wired backbone, so there are no power or bandwidth constraints on transmissions among BSs. The intermediate nodes are relay nodes that function as gateways connecting an infrastructure wireless network and a mobile ad hoc network. DTR aims to shift the routing burden from the ad hoc network to the infrastructure network by taking advantage of the widespread base stations in a hybrid wireless network. Rather than using one multi-hop path to forward a message to one BS, DTR uses at most two hops to relay the segments of a message to different BSs in a distributed manner, and relies on the BSs to combine the segments. When a source node wants to convey a message stream to a destination node, it partitions the message stream into a number of partial streams called segments and spreads each segment to a neighbor node. Upon receiving a segment from the source node, a neighbor node decides between direct transmission and relay transmission based on the QoS requirement of the application. The neighbor nodes forward these segments in a distributed manner to nearby BSs. Relying on the infrastructure network routing, the BSs further transmit the segments to the BS where the destination node resides.
The final BS reorganizes the segments into the original order and forwards them to the destination. It uses the cellular IP transmission method to deliver segments to the destination if the destination moves to another BS during segment transmission. DTR works at the Internet layer: it receives packets from the TCP layer and routes them to the destination node, where DTR forwards the packets back to the TCP layer. The data routing process in DTR can be divided into two processes: uplink from a source node to the first BS and downlink from the final BS to the data's destination. In the uplink process, one hop is used to forward the segments of a message in a distributed manner and another hop is used to find a high-capacity forwarder for high performance routing. As a result, DTR limits the path length of uplink routing to two hops in order to avoid the problems of long-path multi-hop routing in ad-hoc networks. Specifically, in the uplink routing, a source node divides its message stream into a number of segments, then transmits the segments to its neighbor nodes. The neighbor nodes forward the segments to BSs, which will forward them to the BS where the destination resides. In this work, throughput and routing speed are taken as the QoS requirements. The bandwidth/queue metric reflects node capacity in throughput and fast data forwarding: a larger bandwidth/queue value means higher throughput and message forwarding speed, and vice versa. When selecting neighbors for data forwarding, a node needs the capacity information of its neighbors, and a selected neighbor should have sufficient storage space for a segment. To find the capacity and storage space of its neighbors, each node periodically exchanges its current information with its neighbors. If a node's capacity or storage space changes, it again sends its current information to the segment forwarder. After that, the segment forwarder will select the highest capacity nodes among its neighbors based on the updated information. That is, after a neighbor node receives a segment from the source, it uses either direct transmission or relay transmission: if the capacity of each of its neighbors is no greater than its own, the relay node utilizes direct transmission; if not, it uses relay transmission. In direct transmission, the relay node passes the segment to a BS if it is in a BS's region; otherwise, it stores the segment while moving until it enters a BS's region. In relay transmission, the relay node chooses its highest-capacity neighbor as the second relay node based on the QoS requirement. The second relay node will use direct transmission to forward the segment directly to a BS. As a result, the number of transmission hops in the ad-hoc network component is confined to no more than two. The small number of hops helps to increase the capacity of the network and reduce channel contention in ad-hoc transmission. The purpose of the second hop selection
is to find a higher capacity node as the message forwarder in order to improve the performance of the QoS
requirement.
If a source node has the highest capacity in its region, the segments will be forwarded back to the source node according to the DTR protocol. The source node then forwards the segments to the BS directly due to the three-hop limit. This case occurs only when the source node is the highest capacity node within its two-hop neighborhood. The data transmission rate of the ad hoc interface is more than 10 times faster than that of cellular interfaces such as 3G and GSM; thus, the transmission delay for sending the data back and forth in the ad-hoc transmission is negligible in the total routing latency. After a BS receives a segment, it needs to forward the segment to the BS where the destination node resides (i.e., the destination BS). However, the destination BS recorded in the home BS may not be the most up-to-date destination BS, since destination mobile nodes switch between the coverage regions of different BSs during data transmission. For instance, data is transmitted to BS Bi that has the data's destination, but the destination has moved to the range of BS Bj before the data arrives at BS Bi. To deal with this problem, the Cellular IP protocol is used for tracking node locations. With this protocol, a BS has a home agent and a foreign agent. The foreign agent keeps track of mobile nodes moving into the ranges of other BSs. The home agent intercepts incoming segments, reconstructs the original data, and re-routes it to the foreign agent, which then forwards the data to the destination mobile node. After the destination BS receives the segments of a message, it rearranges the segments into the original message and then sends it to the destination mobile node. DTR specifies a segment structure format for rearranging the message. Each segment contains eight fields: (1) source node IP address; (2) destination node IP address; (3) message sequence number; (4) segment sequence number; (5) QoS indication number; (6) data; (7) length of the data; and (8) checksum.
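For illustration, these eight fields translate directly into a record type such as the following Python sketch; the field names are our own, not identifiers defined by DTR.

from dataclasses import dataclass

@dataclass
class Segment:
    """The eight DTR segment fields listed above (names are ours)."""
    source_ip: str        # (1) source node IP address
    destination_ip: str   # (2) destination node IP address
    message_seq: int      # (3) message sequence number
    segment_seq: int      # (4) segment sequence number, used to reorder
    qos_indication: int   # (5) QoS indication number
    data: bytes           # (6) data
    length: int           # (7) length of the data
    checksum: int         # (8) checksum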
3. PROPOSED WORK
3.1 Establishing the Network
The first step of network establishment is forming the cluster. A cluster is a group of similar nodes formed in order to make data transmission easier; each cluster has a cluster head, a gateway and other nodes. The first requirement in a wireless medium is to discover the available routes and establish them before transmitting. The network consists of n nodes, of which two nodes are the source and destination while the others are used for data transmission. The path selection for data transmission is based on the availability of the nodes in the area, using the Ad-hoc On-demand Distance Vector (AODV) routing algorithm, in which routes are created on demand as needed.

Fig: Data transmission process

3.2 Threshold Distribution

Threshold value distribution is done using the TEEN, APTEEN and SPEED protocols. Based on the threshold value, a trusted node can be selected and malicious nodes can be ignored.
3.2.1 Threshold-sensitive Energy Efficient sensor Network protocol (TEEN)
TEEN is a reactive protocol proposed for time-critical applications. The main objective of this technique is to generate a threshold value for each node in the network. After generating the threshold value, the nodes are arranged in a hierarchical clustering scheme in which some nodes act as first and second level cluster heads. After the cluster heads are formed, the nodes get the data for transmission. Once the data is received, the
cluster head broadcasts the data to its cluster members.
3.2.2 Adaptive Threshold-sensitive Energy Efficient sensor Network protocol (APTEEN)
APTEEN is a hybrid routing protocol proposed for both time-periodic data collection and critical events. Its main objective is to maintain statistical information. In the APTEEN technique, the threshold value of each node in a cluster is communicated to the other clusters; each cluster has its own APTEEN values.
3.2.3. SPEED Protocol
SPEED is a stateless protocol which provides real-time communication by maintaining a desired delivery speed across the network. The SPEED protocol relies on geographic location information. In this protocol, whenever a source node transmits a packet, the next-hop neighbor is selected using Stateless Non-deterministic Geographic Forwarding (SNGF). SNGF identifies a node as the next-hop neighbor if it belongs to the adjacent set of nodes, lies within the range of the destination area, and offers a delivery speed larger than a certain desired speed.
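A minimal sketch of the SNGF selection rule under stated assumptions: neighbor positions are known 2-D points, per-neighbor delay estimates are available, and "speed" means distance progress toward the destination per unit delay. The bookkeeping shown is illustrative, not the protocol's exact packet format.

import math
import random

def sngf_next_hop(my_pos, dest_pos, neighbors, delay, set_speed):
    # neighbors: {node_id: (x, y)}; delay: {node_id: estimated delay}
    d_self = math.dist(my_pos, dest_pos)
    candidates = []
    for node, pos in neighbors.items():
        progress = d_self - math.dist(pos, dest_pos)  # toward destination?
        if progress > 0 and progress / delay[node] >= set_speed:
            candidates.append(node)
    # Nondeterministic choice among qualifying neighbors spreads the load.
    return random.choice(candidates) if candidates else None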
3.3 Overhearing Technique
Path selection, maintenance, and data transmission are consecutive processes which happen within a split second in real-time transmission. Hence the path allocated beforehand is used for data transmission, and the data is transferred through this highlighted path. However, the transmission path may sometimes fail, and in that case a second path must be selected for data transmission, which takes additional time. To deal with this, overhearing is used. Overhearing is a concept in which the sending node allocates data to more than one node in the network, so that if a node failure occurs, the failed node can be substituted by another alive node.
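A toy sketch of this failover idea, assuming the sender keeps an ordered list of candidate next hops that have already overheard the data (names are illustrative assumptions):

def pick_next_hop(candidates, is_alive):
    # candidates: next-hop ids ordered by preference (primary path first);
    # because backups overheard the data, switching needs no re-send
    # from the source.
    for node in candidates:
        if is_alive(node):
            return node        # first alive candidate carries the data
    return None                # all candidates failed; rediscover a route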
3.4 Three hop Routing
Three hops are used for data transmission in the network: two hops in the mobile ad hoc network and one hop in the infrastructure network. This combination improves reliability. In this technique, the network is silent until a connection is needed. When a route is requested, the other nodes forward the request message and record the node from which they heard it, creating a cascade of temporary routes back to the requesting node. When a node receives such a message, it sends a reply backwards through a temporary route to the requesting node. The requesting node then begins using the route with the least number of hops through other nodes. Idle entries in the routing tables are recycled after a time.
4. CONCLUSION
The Distributed Three-hop Routing protocol integrates the features of infrastructure and ad hoc networks in the data transmission process. In Distributed Three-hop Routing, a source node divides a message stream into segments and transmits them to its mobile neighbors, which further forward the segments to their destination via an infrastructure network. Distributed Three-hop Routing limits the routing path length to three and always arranges for high-capacity nodes to forward data, producing significantly lower overhead by eliminating route discovery and maintenance. The TAS protocol implemented in this work distributes a threshold value to each and every node in the network for the selection of trusted nodes. In addition, the overhearing technique is applied to find and replace failed nodes in the network. The scheme has the characteristics of short path length, short-distance transmission, and balanced load distribution, provides high routing reliability with high efficiency, and also includes a congestion control algorithm which can avoid load congestion at BSs in the case of unbalanced traffic distributions. As a result, data transmission in the hybrid wireless network is highly secure and more efficient.
REFERENCES
[1] H. Shen, Z. Li, and C. Qiu, "A distributed three-hop routing protocol to increase the capacity of hybrid wireless networks," IEEE Transactions on Mobile Computing, 2015.
[2] B. Bengfort, W. Zhang, and X. Du, "Efficient resource allocation in hybrid wireless networks," in Proc. of WCNC, 2011.
[3] L. M. Feeney, B. Cetin, D. Hollos, M. Kubisch, S. Mengesha, and H. Karl, "Multi-rate relaying for performance improvement in IEEE 802.11 WLANs," in Proc. of WWIC, 2007.
[4] X. J. Li, B. C. Seet, and P. H. J. Chong, "Multi-hop cellular networks: Technology and economics," Computer Networks, 2008.
[5] K. Akkarajitsakul, E. Hossain, and D. Niyato, "Cooperative packet delivery in hybrid wireless mobile networks: A coalitional game approach," IEEE Trans. Mobile Computing, 2013.
[6] P. Thulasiraman and X. Shen, "Interference aware resource allocation for hybrid hierarchical wireless networks," Computer Networks, 2010.
[7] L. B. Koralov and Y. G. Sinai, Theory of Probability and Random Processes, Berlin/New York: Springer, 2007.
[8] D. M. Shila, Y. Cheng, and T. Anjali, "Throughput and delay analysis of hybrid wireless networks with multi-hop uplinks," in Proc. of INFOCOM, 2011.
[9] T. Liu, M. Rong, H. Shi, D. Yu, Y. Xue, and E. Schulz, "Reuse partitioning in fixed two-hop cellular relaying network," in Proc. of WCNC, 2006.
[10] C. Wang, X. Li, C. Jiang, S. Tang, and Y. Liu, "Multicast throughput for hybrid wireless networks under Gaussian channels model," IEEE Trans. Mobile Computing, 2011.


Secure Data Storage in Cloud Using Code Regeneration and Public Audition
S. Pavithra, M.Tech Information Technology,
Vivekananda College of Engineering for Women,
Tiruchengode-637205
Abstract— Data integrity maintenance is the major objective in cloud storage. It includes auditing by a third-party auditor (TPA) to detect unauthorized access. The data of the users is stored in public and private areas of the cloud, so that only the public cloud data is accessed by users while the private cloud remains more secure. Once any unauthorized modification is made, the original data in the private cloud is retrieved from the cloud server and returned to the user. For every piece of data stored in the cloud, a hash value is generated using the Merkle Hash Tree technique, so any modification of the content changes the hash value of the document as well. The proxy also performs signature delegation by generating a private and public key for every user using the OAEP algorithm, so that security is maintained. This scenario is implemented in a multi-owner environment in which one document is accessed by user groups. In this context, the access limits should be properly maintained so that no user of another group is allowed to modify a particular group's data. Any modification made to such data will be identified by the proxy and the user will be revoked.
Index Terms— Cloud storage, code regeneration, public audition, dynamic multi-owner.

I. INTRODUCTION
Cloud computing is recognized as an alternative to traditional information technology due to its intrinsic resource sharing and low maintenance characteristics. In cloud computing, cloud service providers (CSPs), such as Amazon, are able to deliver various services to cloud users with the assistance of powerful data centers. By shifting local data management systems into cloud servers, users may enjoy high-quality services and save significant investments in their limited infrastructures. One of the most essential services offered by cloud providers is data storage. Consider a practical application: a company allows its staff in the same group or department to store and share files in the cloud. By utilizing the cloud, the staff can be completely released from troublesome local data storage and maintenance. However, this also poses a significant risk to the confidentiality of the stored files. Specifically, the cloud servers managed by cloud providers are not fully trusted by users, while the data files stored in the cloud may be confidential and sensitive, such as business plans. The primary solution to preserve data privacy is to encrypt the data files and then upload the encrypted data into the cloud. Unfortunately, designing an efficient and secure data-sharing scheme for groups in the cloud is not an easy task due to the following challenging issues.
First of all, identity privacy is one of the most significant obstacles for the wide deployment of cloud computing. Without the guarantee of identity privacy, users may be unwilling to join cloud computing systems, because their real identities can easily be disclosed to cloud providers and attackers. On the other hand, unconditional identity privacy may incur abuse of privacy: for example, a misbehaving staff member could deceive others in the company by sharing false files without being traceable. Therefore, traceability, which enables the TPA to expose the real identity of a user, is also highly desirable.
Second, it is highly recommended that any member of a group should be able to fully enjoy the data storing and sharing services provided by the cloud, which is defined as the multiple-owner manner. Compared with the single-owner manner, where only the group manager can store and modify data in the cloud, the multiple-owner manner is more flexible in practical applications. More concretely, each user in the group is able not only to read data but also to modify his or her part of the data in the entire data file shared by the company.
Last but not least, groups are normally dynamic in practice, e.g., new staff join and current employees are revoked in the company. The changes of membership make secure data sharing extremely problematic. On one hand, anonymity makes it challenging for newly granted users to learn the content of data files stored before their participation, because it is not possible for new users to contact the anonymous data owners and obtain the corresponding decryption keys. On the other hand, an
efficient membership revocation mechanism that does not require updating the secret keys of the remaining users is also desired, to minimize the complexity of key management. Many security schemes for data sharing on untrusted servers have been proposed. In these approaches, data owners store encrypted data files on untrusted storage and distribute the corresponding decryption keys only to authorized users. Thus, unauthorized users as well as the storage servers cannot learn the content of the data files because they have no knowledge of the decryption keys.
However, the complexity of user joining and revocation in these schemes increases linearly with the number of data owners and the number of revoked users, respectively. By setting the group with a single attribute, a secure provenance scheme based on the ciphertext-policy attribute-based encryption technique has been proposed, which allows any member of a group to share data with others; however, the issue of user revocation is not addressed in that scheme. A scalable and fine-grained data access control scheme for cloud computing, based on the key-policy attribute-based encryption technique with a proxy server, has also been presented. Unfortunately, the single-owner manner hinders the adoption of such schemes in cases where all users are granted rights to store and share data. Hence, we implement a group-based data owner system.
2. BACKGROUND
Previous auditing schemes were implemented only in a single-owner environment, and the owner had to be always available online. The bilinear pairing map technique is used for comparing the data and finding unauthorized changes, but no block-level verification is allowed. The CPOR (Compact Proofs of Retrievability) algorithm is implemented for retrieving the data from the cloud. Without the owner's permission, the original data cannot be retrieved and regenerated to replace the modified data. This takes a considerable amount of time and also increases the burden on the data owner.
3. SECURE DATA STORAGE IN CLOUD
To protect data integrity and save the data owner's computation resources as well as online burden, a secure data storage scheme for the cloud using code regeneration and public audition is proposed for the dynamic multi-owner environment, in which data integrity checking and renewal are implemented by an auditor and a semi-trusted proxy separately. This scheme is the first to allow such secure data storage in the cloud. The contents are masked during the initial phase to avoid leakage of the original data. The method is lightweight and does not introduce any burden on the TPA or proxy server. The proxy server releases data owners from the online burden of renewing corrupted blocks. Unauthorized actions performed by any group member can be detected, and the member revoked, by the proxy.
To make the scenario easier to follow, the technique is explained with an example: The staff (i.e., cloud users) first generate their public and private keys, and then hand over the authenticator regeneration to a proxy (a cluster or powerful workstation provided by the company) by sharing a partial private key. After producing encoded blocks and authenticators, the staff upload and distribute them to the cloud servers. Since the staff will frequently be offline, the company employs a trusted third party (the TPA) to interact with the cloud and perform periodic verification of the staff's data blocks in a sampling mode. Once some data corruption is detected, the proxy is informed and acts on behalf of the staff to regenerate the data blocks as well as the corresponding authenticators in a secure manner. A group of staff members can work on the same project and be placed in one group to access and modify its files.
3.1 Regeneration model

Fig 1: Regeneration System model


The system model for secure data storage is shown in Fig. 1, which involves four entities: the group of data owners, who store their data in the cloud; the cloud, which is managed by the cloud service provider and provides storage services with considerable computational resources; the third-party auditor (TPA), who has the knowledge and capability to carry out public audits on the coded data in the cloud and whose audit result is impartial for both data owners and cloud servers; and a semi-trusted proxy agent, which acts on behalf of the data owners to restore data blocks during the repair procedure. Notice that the data owner is restricted in computational and storage resources compared to the cloud and may go offline after the data upload procedure. The proxy, which would always be online, is supposed to be much more powerful than the data owner but less powerful than the cloud servers in terms of computation and memory capacity. To save resources as well as the online burden, the data owners resort to the TPA for integrity verification and delegate the reparation to the proxy.
3.2 User Logs
Group members are a set of registered users that will
1. Store their private data in the cloud server, and
2. Share them with others in the group.
This module maintains the users' details. The group membership changes dynamically, due to staff resignation and new employee participation in the company. Group members have ownership rights to change the files in their group. All users in a group can view the files uploaded in their group and can also modify them. Each group also has a private key and a public key: the public key is used for viewing documents in the cloud, whereas the private key provides modification rights to a user.
3.3 User and Data maintenance
The registered users and data are maintained using a cloud server. A local cloud, which provides priced abundant storage services, is created in this module. The users can upload their data into the cloud, and the cloud storage can be made secure. The cloud is not fully trusted by users, since the CSPs are very likely to be outside the cloud users' trusted domain. The cloud server is assumed to be honest but curious: it will not maliciously delete or modify user data, due to the protection of the data auditing schemes, but it will try to learn the content of the stored data and the identities of cloud users. This essentially means that the owner (client) of the data moves its data to a third-party cloud storage server which is supposed to, presumably for a fee, faithfully store the data and provide it back to the owner whenever required.
The cloud server supports a secure multi-owner data sharing scheme called MONA, which allows any user in the group to securely share data with others via the cloud. The scheme is able to support dynamic groups efficiently; specifically, newly granted users can directly decrypt data files uploaded before their participation without contacting the data owners, as long as they remain within the group.
3.4 Authentication and Signature Generation
This module performs the following functions:
1. Signature Generation,
2. Signature Verification,
3. Content Regeneration.
A proxy agent acts on behalf of the data owner to regenerate authenticators and data blocks on the servers during the repair procedure. Notice that the data owner is restricted in computational and storage resources compared to the other entities and may go offline after the data upload procedure. The proxy, which would always be online, is supposed to be much more powerful than the data owner but less powerful than the cloud servers in terms of computation and memory capacity. To save resources as well as the online burden potentially brought by the periodic auditing and accidental repairing, the data owners resort to the TPA for integrity verification and delegate the reparation to the proxy. Considering that the data owner cannot always stay online in practice, in order to keep the storage available and verifiable after a malicious corruption, a semi-trusted proxy is introduced into the system model and given the privilege to handle the reparation of the coded blocks and authenticators. Signatures are generated using OAEP-based key delegation, which provides a unique private and public key pair for each group registered in the cloud, so users can access only the documents of their own group. A user can view another group's documents only using that group's private key; if he modifies another group's content, he will be revoked by the cloud server.
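A sketch of per-group key generation and OAEP-padded encryption using the Python `cryptography` library follows; the paper names OAEP-based key delegation but gives no code, so the RSA key size and hash choices here are illustrative assumptions.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# One key pair per group; the public key lets members view, the private
# key (held by the group) grants modification rights, as described above.
group_private_key = rsa.generate_private_key(public_exponent=65537,
                                             key_size=2048)
group_public_key = group_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = group_public_key.encrypt(b"shared document key", oaep)
plaintext = group_private_key.decrypt(ciphertext, oaep)
assert plaintext == b"shared document key"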
3.5 User Revocation and file regeneration
User revocation is performed by the proxy via a publicly available revocation list (RL), based on which group members can encrypt their data files and guarantee privacy against the revoked users. No unauthorized access to documents is allowed in the cloud storage, so modification rights to data are granted only to the group's own users; members of other groups cannot modify the content. If any user hacks the private key of another group and tries to modify its data, this is detected by the cloud server and the user's account is revoked, so that the user can never log in again. This function is performed by the cloud server. Moreover, if content is modified by an unauthorized user, it is rolled back to its original state by the cloud server.
4. CONCLUSION
A public auditing scheme for regenerating-code-based cloud storage has been proposed, in which the data owners are privileged to delegate data validity checking to the TPA while keeping the original data private from the TPA. Since data owners cannot always stay online, a semi-trusted proxy is introduced into the system model and given the privilege to handle the reparation of the coded blocks and authenticators, keeping the storage available and verifiable after a malicious corruption. Multiple users can handle the files, as defined by their groups. The scheme is provably secure and highly efficient.
Beyond text data, the regeneration system is planned to be extended to audio and video data by generating transformations of the data and then comparing the data for modifications; techniques such as the DWT and WSQ algorithms can be used for this. Designing collusion-resistant proxy re-signature schemes that also support public auditing (i.e., blockless verifiability and non-malleability) remains open. Essentially, since collusion-resistant proxy re-signature schemes generally have two levels of signatures, where the two levels are in different forms and need to be verified differently, achieving blockless verifiability on both levels and verifying them together in a public auditing mechanism is challenging; this will be considered in future work.
REFERENCES
[1] B. Chen, R. Curtmola, G. Ateniese, and R. Burns, "Remote data checking for network coding-based distributed storage systems," in Proc. 2010 ACM Workshop on Cloud Computing Security, ACM, 2010, pp. 31-42.
[2] H. Chen and P. Lee, "Enabling data integrity protection in regenerating-coding-based cloud storage: Theory and implementation," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 2, pp. 407-416, Feb. 2014.
[3] K. Yang and X. Jia, "An efficient and secure dynamic auditing protocol for data storage in cloud computing," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 9, pp. 1717-1726, 2013.
[4] Y. Zhu, H. Hu, G.-J. Ahn, and M. Yu, "Cooperative provable data possession for integrity verification in multicloud storage," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 12, pp. 2231-2244, 2012.
[5] A. G. Dimakis, K. Ramchandran, Y. Wu, and C. Suh, "A survey on network codes for distributed storage," Proceedings of the IEEE, vol. 99, no. 3, pp. 476-489, 2011.
[6] H. Shacham and B. Waters, "Compact proofs of retrievability," in Advances in Cryptology - ASIACRYPT 2008, Springer, 2008, pp. 90-107.
[7] Y. Hu, H. C. Chen, P. P. Lee, and Y. Tang, "NCCloud: Applying network coding for the storage repair in a cloud-of-clouds," in USENIX FAST, 2012.
[8] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-preserving public auditing for data storage security in cloud computing," in Proc. IEEE INFOCOM, 2010, pp. 1-9.
[9] G. Ateniese, R. Di Pietro, L. V. Mancini, and G. Tsudik, "Scalable and efficient provable data possession," in Proc. 4th International Conference on Security and Privacy in Communication Networks, ACM, 2008, p. 9.
[10] S. Goldwasser, S. Micali, and R. Rivest, "A digital signature scheme secure against adaptive chosen message attacks," SIAM Journal on Computing, vol. 17, no. 2, pp. 281-308, 1988.

Avoid Collision and Broadcasting Delay in Multi-Hop CR Ad Hoc Network Using Selective Broadcasting
D. Sandhiya, B. M. Brinda
M.Tech, Dept. of Information Technology, Vivekanandha College of Engineering for Women, Tiruchengode
sandhiyadurairaj20@gmail.com
Assistant Professor, Dept. of Information Technology, Vivekanandha College of Engineering for Women, Tiruchengode
bmbrinda@gmail.com
Abstract— Cognitive networks enable efficient sharing of the radio spectrum. Control signals used to set up a communication are broadcast to the neighbors on their particular channels of operation. This paper addresses the broadcasting challenge in multi-hop CR ad hoc networks under a practical scenario with collision avoidance. Exchanging control information is a critical problem in cognitive radio networks. We deal with selective broadcasting in a multi-hop cognitive radio network, in which control information is broadcast over a pre-selected set of channels. We introduce the ideas of neighbor graphs and minimal neighbor graphs to obtain the necessary set of channels for broadcast. Selective broadcasting reduces the delay in disseminating control information and yet assures successful transmission of the information to all neighbors. It is also confirmed that selective broadcasting reduces redundancy in control information and hence reduces network traffic.
Index Terms— Broadcasting, neighbor graphs, minimal neighbor graphs.

I. INTRODUCTION
The concept of cognitive networks was introduced to enhance the effectiveness of spectrum utilization. The basic idea of cognitive networks is to allow other users to utilize the spectrum allocated to licensed users (primary users) when it is not in use by them. These other users, who are opportunistic users of the spectrum, are called secondary users. Cognitive radio [1] technology enables secondary users to dynamically sense the spectrum for spectrum holes and use them for their communication. A group of such self-sufficient cognitive users communicating with each other in a multi-hop manner forms a multi-hop cognitive radio network (MHCRN). Since the vacant spectrum is shared among a group of independent users, there should be a way to control and manage access to the spectrum. This can be achieved using centralized control or a cooperative distributed approach. In a centralized design, a single entity, called the spectrum manager, controls the use of the spectrum by secondary users [2]. The spectrum manager gathers the information about free channels either by sensing its complete domain or by integrating the information collected by the secondary users in their respective local areas. These users transmit information to the spectrum manager through a dedicated control channel. This approach is not feasible for dynamic multi-hop networks. Moreover, a direct attack such as a Denial of Service (DoS) attack [3] on the spectrum administrator would debilitate the network. Thus, a distributed approach is chosen over centralized control. In a distributed approach, there is no central administrator. As a result, all users should jointly sense and share the free channels. The information sensed by a user should be shared with other users in the network to enable certain necessary tasks like route discovery in an MHCRN. Such control information is broadcast to a node's neighbors, as in a traditional network. Since in a cognitive setting each node has a set of accessible channels, a node receives a message only if the message was sent on the channel to which the node was listening. So, to make sure that a message is successfully sent to all neighbors of a node, it has to be broadcast on every channel. This is called complete broadcasting of information. In a cognitive setting, the number of channels is potentially large, so broadcasting on every channel causes a large delay in transmitting the control information. Another solution would be to choose one channel from among the free channels for control signal exchange. However, the probability that a channel is common to all the cognitive users is small [4]. As a result, several of the nodes may not be reachable using a single channel. So, it is necessary to transmit the control information on more than one channel to make sure that every neighbor receives a copy [5]. With the rise in the number of nodes in the system, it is likely that the nodes are scattered over a large set of channels. As a result, the cost and delay of communications over all
these channels increase. A simple yet efficient solution would be to identify a small subset of channels which covers all the neighbors of a node, and then use this set of channels for exchanging the control information. This concept of transmitting the control signals over a selected group of channels, as an alternative to flooding over all channels, is called selective broadcasting and forms the basic idea of this paper. Neighbor graphs and minimal neighbor graphs are introduced to find the minimal set of channels over which to transmit the control signals.
II. EXISTING SYSTEM
Broadcast is an important operation in ad hoc networks, especially in distributed multi-hop multi-channel networks. In CR ad hoc networks, different SUs may obtain different sets of accessible channels. This non-uniform channel availability imposes special design challenges for broadcasting in CR ad hoc networks. A fully distributed broadcast protocol for multi-hop CR ad hoc networks has therefore been introduced. In this protocol, control information exchange among nodes, such as channel accessibility and routing information, is critical for the realization of most networking protocols in an ad hoc network. In a cognitive network, each node has a set of available channels; a node receives a message only if the message was sent on the channel to which the node was listening. So, to make sure that a message is successfully sent to all neighbors of a node, it has to be broadcast on every channel. In a cognitive environment, the number of channels is potentially large, so broadcasting on each channel causes a large delay in transmitting the control information. Problems in the existing system: 1) broadcasting delay is high; 2) redundancy occurs; 3) control information is sent to all nodes; 4) congestion is high; 5) network traffic is high.
III. PROPOSED SYSTEM
Broadcasting control information over all channels will cause a large delay in setting up the communication; thus, exchanging control information is a main problem in cognitive radio networks. In our proposed work, we deal with selective broadcasting in a multi-hop cognitive radio network, in which control information is transmitted over a pre-selected set of channels. We establish the concepts of neighbor graphs and minimal neighbor graphs to derive the essential set of channels for transmission. A neighbor graph of a node represents its neighbors and the channels over which they can communicate. A minimal neighbor graph of a node represents its neighbors and the minimum set of channels through which it can reach all its neighbors. Advantages of the proposed system: 1) control information is transmitted over a pre-selected set of channels; 2) network traffic is reduced; 3) low broadcasting delay; 4) less congestion and contention; 5) no common control channel is required; 6) redundancy is reduced.

Figure 1. Architecture diagram

IV. SELECTIVE BROADCASTING
In an MHCRN, each node has a set of channels available when it enters the network. In order to become part of the network and start communicating with other nodes, it first has to know its neighbors and their channel information. Also, it has to let other nodes know of its presence and its available channel information. So it broadcasts such information over all channels to make sure that all neighbors obtain the message. Similarly, when a node wants to start a communication, it should exchange certain control information useful, for example, in route discovery. However, a cognitive network environment is dynamic due to the primary users' traffic: the number of available channels at each node keeps changing with time and location. To keep all nodes updated, the changed information has to be transmitted over all channels as quickly as possible. So, for successful and efficient coordination, fast dissemination of control traffic
between neighboring users is required, and minimal delay is an important factor in promptly disseminating control information. Hence, the goal is to decrease the broadcast delay of each node.
Now, consider that a node has M available channels, and let Tb be the minimum time required to broadcast a control message. Then the total broadcast delay is M × Tb. So, in order to have a lower broadcast delay we need to reduce M. The value of Tb is dictated by the particular hardware used and hence is fixed. M can be reduced by discovering the minimum number of channels, M', over which to broadcast while still making sure that all nodes obtain the message. For example, with M = 10 available channels and Tb = 5 ms, complete broadcasting incurs a 50 ms delay, whereas broadcasting over M' = 4 selected channels takes only 20 ms. Thus, communicating over carefully selected M' channels instead of blindly broadcasting over all M (available) channels is called selective broadcasting. Finding the minimum number of channels M' is accomplished by using neighbor graphs and deriving the minimal neighbor graphs. Before explaining the idea of the neighbor graph and minimal neighbor graph, it is essential to understand the state of the network when selective broadcasting occurs and the difference between multicasting and selective broadcasting.
A. State of the Network
When a node enters the network for the first time, it has no information about its neighbors. So, initially, it has to broadcast over all the feasible channels to reach its neighbors. This is called the initial state of the network. From then on, it can begin broadcasting selectively. The network's steady state is reached when all nodes know their neighbors and the channel information of each neighbor. Since selective broadcasting starts in the steady state, all nodes are assumed to be in steady state during the rest of the discussion.
B. Multicasting and Selective Broadcasting
Broadcasting is the nature of wireless communication. As a result, multicasting and selective broadcasting might appear similar, but they differ in the basic idea itself. Multicasting is used to send a message to a specific group of nodes on a particular channel. In a multichannel environment where the nodes are listening on different channels, selective broadcasting is an essential way to transmit a message to all of a node's neighbors: it uses a selected set of channels to transmit the information instead of broadcasting on all the channels.
V. NEIGHBOR GRAPH AND MINIMAL NEIGHBOR GRAPH FORMATION
In this section, the ideas of the neighbor graph and minimal neighbor graph are introduced and their construction is explained. A neighbor graph of a node represents its neighbors and the channels over which they can communicate. A minimal neighbor graph of a node represents its neighbors and the minimum set of channels through which it can reach all its neighbors. The construction of both graphs is explained below.
A. Construction of Neighbor Graph
Each node maintains a neighbor graph. In a neighbor graph, each user is represented as a node and each channel is represented by an edge. Let graph G denote the neighbor graph, with N and C representing the set of nodes and the set of all possible channels, respectively. An edge is added between a pair of nodes if they can communicate through a channel, so a pair of nodes can have two edges if they can use two different frequencies (channels). For example, if nodes A and B have two channels to communicate with each other, then this is represented as shown in Fig. 2a: A and B can communicate through channels 1 and 2, hence nodes A and B are connected by two edges.

Figure 2. a) Nodes A and B linked by 2 edges. b) Representation of node A with 6 neighbors


Consider a graph with 7 nodes, A to G, and 4 different channels as shown in Fig. 2b. Node A is the source node. It has 6 neighboring nodes, B through G. The edges signify the channels through which A can communicate with its neighbors. For example, nodes A and D can communicate through channels 1 and 2; that is, they are neighbors of each other on channels 1 and 2. This graph is therefore called the neighbor graph of node A, and similarly every node maintains its own neighbor graph.
B. Construction of Minimal Neighbor Graph
To decrease the number of broadcasts, the minimum number of channels through which a node can reach all its neighbors has to be chosen; the minimal neighbor graph contains this set of channels. Let DC be a set whose elements represent the degree of each channel in the neighbor graph, so DCi represents the number of edges corresponding to channel Ci. For example, the set DC of the graph in Fig. 2b is DC = {3, 3, 1, 2}. To build the minimal neighbor graph, the channel with the highest degree in DC is chosen. All edges corresponding to this channel, as well as all nodes other than the source node that are attached to these edges in the neighbor graph, are removed. This channel is added to a set called the Essential Channel Set (ECS) which, as the name implies, is the set of channels required to reach all the neighboring nodes. ECS is initially a null set; as the edges are removed, the corresponding channel is added to ECS.
For example, consider the stepwise formation of the minimal neighbor graph and the ECS for the neighbor graph shown in Fig. 2b. ECS is initially empty. Since channel 1 has the highest degree in DC, the edges corresponding to channel 1 are removed in the initial step; nodes B, C and D are removed from the graph and channel 1 is added to ECS. The sets DC and ECS are then updated for the next step. This process continues until only the source node is left, at which point ECS contains all the necessary channels. The minimal neighbor graph is formed by removing from the original neighbor graph all the edges which do not correspond to the channels in ECS. The final minimal neighbor graph is shown in Fig. 3. Since ECS is constructed by adding only the required channels from C, ECS is a subset of C.

Figure 3. Final minimal neighbor graph of Fig. 2b.
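The greedy ECS construction just described is easy to express in code. The sketch below assumes the neighbor graph is given as a mapping from neighbor to the set of channels it shares with the source; the data layout is an illustrative choice.

def essential_channel_set(neighbor_graph):
    remaining = dict(neighbor_graph)   # neighbors not yet covered
    ecs = []                           # Essential Channel Set
    while remaining:
        # Degree of each channel = number of edges (neighbors) it covers.
        degree = {}
        for channels in remaining.values():
            for c in channels:
                degree[c] = degree.get(c, 0) + 1
        best = max(degree, key=degree.get)   # highest-degree channel
        ecs.append(best)
        # Remove all neighbors reachable through the chosen channel.
        remaining = {n: ch for n, ch in remaining.items() if best not in ch}
    return ecs

# Neighbor graph of node A from Fig. 2b (DC = {3, 3, 1, 2}).
graph_a = {'B': {1}, 'C': {1, 3}, 'D': {1, 2}, 'E': {2}, 'F': {2, 4}, 'G': {4}}
print(essential_channel_set(graph_a))   # channel 1 first removes B, C, D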

VI. PERFORMANCE EVALUATION
In this section the performance of selective broadcasting is compared with complete broadcasting by studying the delay in broadcasting control information and the redundancy of the received packets. The simulation setup used in all these experiments is as follows. For each experiment, a network area of 1000 m × 1000 m is considered. The number of nodes is varied from 1 to 100, and all nodes are deployed randomly in the network. Each node is assigned a random set of channels varying from 0 to 10 channels, and the transmission range is set to 250 m. Each data point in the graphs is an average of 100 runs. Before looking at the performance of the proposed idea, two observations are made that help in understanding the simulation results. Fig. 4 shows the plot of channel spread as a function of the number of nodes, where channel spread is defined as the union of all the channels covered by the neighbors of a node.

Figure 4. Plot of channel spread with respect to the number of nodes for a set of 10 channels.
A. Broadcast Delay
In this part, the transmission delays of selective broadcast and complete broadcast are compared. Broadcast delay is defined as the total time taken by a node to successfully broadcast one control message to all its neighbors. Each point in the graph is the average delay of all nodes in the network. The minimum time to broadcast on a channel is assumed to be 5 msec.
In selective broadcasting, the delay in disseminating the control information to all neighbors of a node is lower than that of complete broadcast. In selective broadcasting, the delay increases with the number of nodes because, as the number of nodes increases, the nodes are spread over an increased number of channels; as a result, a node may have to transmit over a larger number of channels. In complete broadcasting, a node transmits over all its available channels; since these channels are assigned randomly to each node, the average number of channels at each node is almost constant.
The average delay increases linearly with the number of channels in the case of complete broadcast, because a node transmits on all its available channels. On the other hand, in selective broadcasting, the rate of increase in delay is small. This is because, as the number of channels increases, the number of neighboring nodes covered by each channel also increases; as a result, the minimum channel set required to cover all the neighbors remains nearly constant, keeping the delay nearly constant.
B. Redundancy
Redundancy is defined as the total number of additional copies of a message received by all nodes in the network when all of them transmit control messages once. It is observed that the number of redundant messages increases with the number of nodes in both cases, and the curves are similar in shape. This implies that the difference in redundancies is not a function of the number of nodes. The average M to M' ratio was found to be 2.5. This indicates that the reduced total redundancy is due to the reduction of the channel set in selective broadcast; it has been verified that redundancy is reduced by a factor of (M/M').
The rate of increase of redundancy is lower in selective broadcast when compared to complete broadcast. In complete broadcast, the number of redundant messages at each node is equal to the number of channels it has in common with the sender. Therefore, with an increase in the number of channels, the redundant messages increase approximately linearly, whereas in selective broadcast the increase is small due to the selection of the minimum channel set. In this section, it has been demonstrated that selective broadcasting provides lower transmission delay and redundancy. It should be noted that, due to the decrease in redundancy of messages, there will be less congestion in the network and hence there is potential for improvement in throughput by using selective broadcasting.
VII. CONCLUSION
In this paper the concept of selective broadcasting in MHCRNs is introduced. A minimum set of channels, called the Essential Channel Set (ECS), is derived using the neighbor graph and minimal neighbor graph. This set contains the minimum number of channels which cover all neighbors of a node; transmitting over this selected set of channels is called selective broadcasting. Compared with complete broadcast, or flooding, selective broadcasting performs better with an increase in the number of nodes and channels. It has also been shown that redundancy in the network is reduced by a factor of (M/M'). As a result there is potential for improvement in overall network throughput.
REFERENCES
[1] J. Mitola and G. Q. Maguire, "Cognitive radio: Making software radios more personal," IEEE Personal Communications, vol. 6, no. 4, pp. 13-18, 1999.
[2] C. Xin, B. Xie, and C.-C. Shen, "A novel layered graph model for topology formation and routing in dynamic spectrum access networks," in Proc. IEEE DySPAN 2005, Nov. 2005, pp. 308-317.
[3] K. Bian and J.-M. Park, "MAC-layer misbehaviors in multi-hop cognitive radio networks," in 2006 US-Korea Conference on Science, Technology and Entrepreneurship (UKC2006), Aug. 2006.
[4] J. Zhao, H. Zheng, and G.-H. Yang, "Distributed coordination in dynamic spectrum allocation networks," in Proc. IEEE DySPAN 2005, Nov. 2005, pp. 259-268.
[5] G. Resta, P. Santi, and J. Simon, "Analysis of multi-hop emergency message propagation in vehicular ad hoc networks," in Proc. 8th ACM Int. Symp. Mobile Ad Hoc Netw. Comput., 2007, pp. 140-149.
[6] I. Chlamtac and S. Kutten, "On broadcasting in radio networks - problem analysis and protocol design," IEEE Trans. Commun., no. 12, pp. 1240-1246, Dec. 1985.
[7] S.-Y. Ni, Y.-C. Tseng, Y.-S. Chen, and J.-P. Sheu, "The broadcast storm problem in a mobile ad hoc network," in Proc. 5th Annu. ACM/IEEE Int. Conf. Mobile Comput. Netw., 1999, pp. 151-162.
[8] J. Wu and F. Dai, "Broadcasting in ad hoc networks based on self-pruning," in Proc. IEEE Conf. Comput. Commun., 2003, pp. 2240-2250.
[9] J. Qadir, A. Misra, and C. T. Chou, "Minimum latency broadcasting in multi-radio multi-channel multi-rate wireless meshes," in Proc. IEEE CS 3rd Annu. Sens. Ad Hoc Commun. Netw., 2006, vol. 1, pp. 80-89.
[10] A. Qayyum, L. Viennot, and A. Laouiti, "Multipoint relaying for flooding broadcast messages in mobile wireless networks," in Proc. 35th Annu. Hawaii Int. Conf. Syst. Sci., 2002, pp. 3866-3875.
[11] Y. Song and J. Xie, "QB2IC: A QoS-based broadcast protocol under blind information for multi-hop cognitive radio ad hoc networks," IEEE Trans. Veh. Technol., vol. 63, no. 3, pp. 1453-1466, Mar. 2014.
[12] L. Lazos, S. Liu, and M. Krunz, "Spectrum opportunity-based control channel assignment in cognitive radio networks," in Proc. IEEE 6th Annu. Commun. Soc. Conf. Sens., Mesh Ad Hoc Commun. Netw., 2009, pp. 1-9.
[13] J. Zhao, H. Zheng, and G. Yang, "Spectrum sharing through distributed coordination in dynamic spectrum access networks," Wireless Commun. Mobile Comput., vol. 7, pp. 1061-1075, Nov. 2007.
[14] K. Bian, J.-M. Park, and R. Chen, "Control channel establishment in cognitive radio networks using channel hopping," IEEE J. Sel. Areas Commun., vol. 29, no. 4, pp. 689-703, Apr. 2011.
[15] Y. Song and J. Xie, "ProSpect: A proactive spectrum handoff framework for cognitive radio ad hoc networks without common control channel," IEEE Trans. Mobile Comput., vol. 11, no. 7, pp. 1127-1139, Jul. 2012.


QoS-Aware Spectrum Sharing for Multi-Channel Vehicular Network
G. Madhubala, R. Sangeetha
Dept. of Information Technology, Vivekanandha College of Engineering for Women, Tiruchengode
madhug149910@gmail.com
Assistant Professor, Dept. of Information Technology, Vivekanandha College of Engineering for Women, Tiruchengode
Sgee89@gmail.com
Abstract— We consider QoS-aware spectrum sharing in cognitive wireless networks where secondary users are allowed to access the spectrum owned by a primary network provider. The interference from secondary users to primary users is forced to be below a tolerable limit. Also, the signal-to-interference-plus-noise ratio (SINR) of each secondary user is maintained above a required level for QoS protection. When the network load is high, admission control needs to be performed to satisfy both the QoS and interference constraints. We propose an admission control algorithm which is performed jointly with power control such that the QoS needs of all admitted secondary users are satisfied while keeping the interference to primary users below the tolerable limit. When all secondary users can be supported at their minimum rates, we allow them to increase their transmission rates and share the spectrum in a fair manner. We formulate the joint power/rate allocation with the max-min fairness principle as an optimization problem and show how to transform it into a convex optimization problem so that its globally optimal solution can be obtained. Numerical results show that the proposed admission control algorithm achieves performance very close to the optimal solution.
Index Terms— Cognitive radio, admission control algorithm.

I. INTRODUCTION
Wireless access in vehicular environments (WAVE) is defined to support applications for intelligent transportation systems (ITSs), including safety and disaster services, automatic toll collection, traffic management, and commercial transactions among vehicles. It specifies the architecture and management functions to allow secure vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communication. In order to facilitate ITS applications, local area networks are established to consist of two types of major architectural components, i.e., the onboard units (OBUs) in vehicles and the roadside units (RSUs) installed in the road infrastructure, which are denoted as stations in this paper. Specifically, the IEEE 1609.4 standard is intended to enhance the IEEE 802.11p medium access control (MAC) protocol for multi-channel operations. The WAVE system is intended to operate on a 75 MHz band in the licensed 5.9 GHz ITS band. The operating band is divided into seven channels, including one control channel (CCH) and six service channels (SCHs), each with 10 MHz bandwidth. The deployment of ever more high-speed wireless applications requires exponential growth in spectrum demand. However, it has been reported that the current utilization of allocated spectrum can be as low as 15%. Thus, there is an increasing interest in devising well-planned methods for spectrum management and sharing, encouraged by both industry and the FCC. This motivates exploiting the spectrum opportunities in space, time, and frequency while protecting users of the primary network holder from excessive interference due to opportunistic spectrum access. In fact, it is required that an interference limit corresponding to an interference temperature level be maintained at the receiving points of the primary network. The key challenge in cognitive radio networks is how to construct spectrum access/sharing schemes such that users of the primary network (called primary users in the sequel) are protected from excessive interference due to secondary spectrum access and the QoS performance of secondary users is guaranteed. In this paper, we present a spectrum sharing framework for cognitive CDMA wireless networks with explicit interference protection for primary users and QoS constraints for secondary users. Secondary users have minimum transmission rates with required QoS performance and maximum power constraints. When the network load is high, an admission control algorithm is proposed to guarantee the QoS constraints for secondary users and the interference constraints for primary users. When all secondary users can be supported, we present a joint rate and power allocation solution with QoS and interference constraints. A prioritized optimal
channel allocation (POCA) protocol is adopted to improve the system throughput of non-safety data delivery while guaranteeing the transmission reliability of safety-related messages. By adopting the concepts in CR networks [14], [15], the primary provider (PP) and secondary provider (SP) are defined to represent the stations that deliver safety messages and non-safety information, respectively. On the other hand, the primary user (PU) and secondary user (SU) are defined to indicate the stations that receive messages from a PP and an SP, respectively. In order to provide privilege for safety information, PPs are designed to possess a smaller contention window (CW) size compared to that of SPs, so more opportunities for channel access will be available to PPs in order to effectively deliver their safety messages. PPs are more inclined to switch to both CHs 172 and 184 in the SCH interval to deliver their safety-related messages, while SPs will have relatively more opportunity to use the other four SCHs.
II. EXISTING SYSTEM
Existing research works have been proposed based on the IEEE 802.11p/1609 standards. A self-organizing time division multiple access (STDMA) scheme was proposed to ensure successful transmission of time-critical traffic between the vehicles. A carrier sense multiple access (CSMA) based protocol for multi-channel networks has also been proposed, in which a separate control channel is utilized to eliminate interference between the control and data messages: all RTS and CTS packets are transmitted on the control channel, and the optimal channel for each user is selected based on the signal-to-interference-plus-noise ratio (SINR) to exchange data messages. However, without the consideration of different priorities among stations, the delivery of safety-related messages cannot be protected and guaranteed. In POCA-D, for a distributed CR network, SPs are not allowed to negotiate with each other; in POCA-C, for a centralized CR network, SPs can negotiate with each other by sending control messages at the beginning of the SCH interval. Drawbacks of the existing system: 1) throughput is low; 2) delay is high; 3) quality of service is low.
Considering either distributed or centralized networks, the proposed POCA schemes can be distinguished into the distributed POCA (POCA-D) and centralized POCA (POCA-C) protocols. A distributed network system is considered in the POCA-D scheme with knowledge of the PPs' distribution probability. An optimal channel-hopping sequence can be obtained based on dynamic programming (DP) in order to achieve maximum aggregate throughput for SPs under the quality-of-service (QoS) constraint of PPs. On the other hand, the POCA-C scheme is proposed for centralized networks, where an optimal channel allocation for SPs is derived by means of linear programming based on the number of PPs on each channel in every SCH interval. With the adoption of the proposed POCA schemes, an optimal load balance can be achieved between the probability of channel availability and channel utilization, while the transmission opportunities for safety-related messages can also be preserved. Note that the proposed POCA-D and POCA-C schemes can be utilized to investigate the effects of different network scenarios. Performance validation and comparison of both protocols will be evaluated via simulations.
Multi-Channel Operation
Coordinated universal time (UTC) is adopted by all stations as the synchronization scheme for sync intervals. The stations toggle to the CCH in every CCH interval to either listen to or transmit advertising messages, and potentially switch to one of the SCHs during the SCH interval for data transmission. During the CCH interval, safety-related messages can be broadcast on the CCH by the providers, and these messages are expected to be received by all stations. On the other hand, if a provider intends to deliver non-safety information to some of the users, the provider will broadcast the WAVE services advertisement (WSA) frame on the CCH. The WSA frame mainly contains two fields, namely the SCH that the provider plans to switch to and the intended MAC address of the user for data transmission. In order to facilitate a two-way handshaking process, the WSA response (WSAR) frame is defined in this paper; it will be issued by the corresponding user to acknowledge the reception of the WSA frame if the user agrees to receive data from the provider. After the provider has received the WSAR frame, both the provider and the user will switch to the corresponding SCH recorded in the WSA frame in the following SCH interval. During the SCH interval, the channel access method for a provider is based on the carrier sense multiple access with collision avoidance (CSMA/CA) scheme, and a data transmission is completed by means of the RTS/CTS/DATA/ACK four-way handshaking mechanism. Furthermore, there can be multiple providers that intend to compete both for announcements on the CCH during the CCH interval and for the utilization of the six SCHs during the SCH interval. The random backoff scheme is adopted to mitigate potential collisions between the WSA frames on the CCH as well as collisions between the RTS frames on the SCHs. Note that even after a successful data delivery, each provider can further conduct multiple channel contentions within an SCH interval if it has additional data to be transmitted.
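A schematic sketch of this WSA/WSAR negotiation follows; the frame fields shown are simplified assumptions that capture only the two fields the text mentions, not the full IEEE 1609.4 frame format.

from dataclasses import dataclass

@dataclass
class WSA:                 # WAVE services advertisement (simplified)
    provider_id: str
    target_mac: str        # intended user's MAC address
    sch: int               # service channel the provider will switch to

def on_wsa_received(wsa, my_mac, willing_to_receive):
    # A user acknowledges with a WSAR only if the WSA targets it and it
    # agrees to receive; both sides then tune to wsa.sch in the SCH interval.
    if wsa.target_mac == my_mac and willing_to_receive:
        return ("WSAR", wsa.provider_id, wsa.sch)
    return None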
III. PROPOSED SYSTEM
When the network load is high, an admission control algorithm is proposed to guarantee the QoS constraints for secondary users and the interference constraints for primary users. When all secondary users can be supported, we present a joint rate and power allocation solution with QoS and interference constraints. In this paper, we consider the spectrum sharing problem among unlicensed users (secondary users) and licensed users (primary users). The entities we work with are communication links, each of which is a pair of users communicating with each other. We refer to communication links belonging to secondary networks as secondary links. We also consider the interference constraints at the receiving nodes of the primary network, which are referred to as primary receiving points in the sequel. We assume that each primary receiving point can accept a maximum interference level. The admission control problem arises when the network load is high and all secondary links transmit at their minimum rates. Advantages of the proposed system: 1) high quality of service; 2) interference avoided; 3) high network performance; 4) separate power allocation for channels.

Figure: Architecture

IV. ALGORITHM USED
Admission Control Algorithm
An admission control algorithm is performed together with power control such that the QoS requirements of all admitted secondary users are satisfied while keeping the interference to primary users below the tolerable limit. It is intended for high network load conditions. If all secondary users can be supported at their minimum rates, we allow them to increase their transmission rates and share the spectrum in a fair manner. The secondary links requesting access to the band licensed to the primary network have QoS requirements.
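As a rough illustration of this step, the sketch below implements a greedy admission loop in Python. Everything here is an assumption for the example: the channel-gain matrix G, the per-link SINR targets gamma, the power cap, the single primary receiving point, and the fixed-point power iteration that stands in for the paper's power control.

import numpy as np

def admission_control(G, gamma, noise, p_max, g_primary, I_max, iters=200):
    """Greedy admission sketch: run fixed-point power control for the
    admitted set; if QoS (target SINR) or the primary-receiver
    interference cap cannot be met, drop the worst link and retry."""
    admitted = list(range(len(gamma)))
    while admitted:
        idx = np.array(admitted)
        d = np.diag(G)[idx]                     # direct link gains
        p = np.full(len(idx), 1e-3)
        for _ in range(iters):                  # power-control iteration
            interf = G[np.ix_(idx, idx)] @ p - d * p + noise
            p = np.minimum(gamma[idx] * interf / d, p_max)
        sinr = d * p / (G[np.ix_(idx, idx)] @ p - d * p + noise)
        if (sinr >= gamma[idx] * 0.999).all() and g_primary[idx] @ p <= I_max:
            return admitted, p
        admitted.pop(int(np.argmin(sinr / gamma[idx])))  # remove worst link
    return [], None

rng = np.random.default_rng(0)
G = rng.uniform(0.01, 0.1, (4, 4)) + np.diag(rng.uniform(1.0, 2.0, 4))
print(admission_control(G, gamma=np.full(4, 3.0), noise=1e-3,
                        p_max=1.0, g_primary=np.full(4, 0.05), I_max=0.05))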
V. EXPERIMENTAL RESULT
When the network load is small, all secondary links can be admitted into the network and can increase their transmission rates above the minimum values. In essence, we wish to solve an optimization problem whose decision variables are the transmission rates and the powers P. We transform this problem into a convex optimization problem for which a globally optimal solution can be obtained. We would like to note that joint rate and power allocation for cellular CDMA networks has been an active research topic over the last several years; we refer the readers to the references therein for existing literature on the problem. However, this work is one of the first papers which adapt the problem to the ad hoc network setting. Here, the objective is to minimize the maximum service time on different transmission links. In this paper, we proceed one
step further by solving the joint rate and power allocation problem in the spectrum sharing context, where the total interference at the primary receiving point must be smaller than the tolerable limit. We assume that the transmission rate of a secondary link can be adjusted within an allowable range between its minimum and maximum values. Also, the power of a secondary link is constrained to be smaller than a maximum limit. When the network load is small, all requesting secondary links can be supported at their minimum transmission rates while fulfilling both QoS and interference constraints. If this is the case, secondary links can increase their transmission rates above the minimum values and share the spectrum in a fair manner. For fairness, we adopt the max-min criterion, which aims to maximize the transmission rate of the secondary link with the minimum transmission rate.
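The max-min step can be sketched numerically as well: bisect on a common SINR target (a stand-in for a common transmission rate) and keep the highest target for which power control stays feasible. This is only an iterative approximation under assumed inputs, not the paper's convex formulation.

import numpy as np

def feasible(G, g_primary, gamma, noise, p_max, I_max, iters=300):
    """Can every link reach the common SINR target gamma under the
    per-link power cap and the primary-receiver interference cap?"""
    d = np.diag(G)
    p = np.full(len(G), 1e-3)
    for _ in range(iters):
        interf = G @ p - d * p + noise
        p = np.minimum(gamma * interf / d, p_max)
    sinr = d * p / (G @ p - d * p + noise)
    return (sinr >= gamma * 0.999).all() and g_primary @ p <= I_max

def max_min_sinr(G, g_primary, noise, p_max, I_max, lo=0.1, hi=100.0, steps=30):
    """Bisect on the common SINR target to approximate the max-min point."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if feasible(G, g_primary, mid, noise, p_max, I_max) else (lo, mid)
    return lo

G = np.array([[1.0, 0.05], [0.04, 1.2]])
print(max_min_sinr(G, g_primary=np.array([0.05, 0.05]),
                   noise=1e-3, p_max=1.0, I_max=0.05))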
VI. CONCLUSION
This paper presented a solution approach to the spectrum sharing problem in cognitive wireless networks. In particular, an admission control algorithm has been proposed which aims to remove the least number of secondary links so that both the QoS constraints, in terms of the desired SINR for admitted links, and the interference constraints for primary links are satisfied. We have also formulated the joint rate and power allocation problem for the secondary links as an optimization problem with both QoS and interference constraints. Several interesting impacts of system, QoS, and interference-constraint parameters on network performance have also been examined.
REFERENCES
[1] R. A. Uzcategui and G. Acosta-Marum, "WAVE: A tutorial," IEEE Commun. Mag., vol. 47, no. 5, pp. 126-133, May 2009.
[2] IEEE P1609.4/D6.0, Draft Standard for Wireless Access in Vehicular Environments (WAVE) - Multi-Channel Operation, 2010.
[3] IEEE P802.11p/D7.0, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications - Amendment 7: Wireless Access in Vehicular Environments, 2009.
[4] S. Eichler, "Performance evaluation of the IEEE 802.11p WAVE communication standard," in Proc. IEEE 66th Veh. Technol. Conf., Oct. 2007, pp. 2199-2203.
[5] Y. Wang, A. Ahmed, B. Krishnamachari, and K. Psounis, "IEEE 802.11p performance evaluation and protocol enhancement," in Proc. IEEE Int. Conf. Veh. Electron. Safety, Sep. 2008, pp. 317-322.
[6] N. Choi, S. Choi, Y. Seok, T. Kwon, and Y. Choi, "A solicitation-based IEEE 802.11p MAC protocol for roadside to vehicular networks," in Proc. Mobile Netw. Veh. Environ., May 2007, pp. 91-96.
[7] K. Bilstrup, E. Uhlemann, E. Strom, and U. Bilstrup, "Evaluation of the IEEE 802.11p MAC method for vehicle-to-vehicle communication," in Proc. IEEE 68th Veh. Technol. Conf., Sep. 2008, pp. 1-5.
[8] S. Wang, C. Chou, K. Liu, T. Ho, W. Hung, C. Huang, M. Hsu, H. Chen, and C. Lin, "Improving the channel utilization of IEEE 802.11p/1609 networks," in Proc. IEEE Wireless Commun. Netw. Conf., Apr. 2009, pp. 1-6.
[9] C.-M. Lee, J.-S. Lin, Y.-P. Hsu, and K.-T. Feng, "Design and analysis of optimal channel-hopping sequence for cognitive radio networks," in Proc. IEEE Wireless Commun. Netw. Conf., Apr. 2010, pp. 1-6.
[10] J. So and N. H. Vaidya, "Multi-channel MAC for ad hoc networks: Handling multi-channel hidden terminals using a single transceiver," in Proc. 5th ACM Int. Symp. Mobile Ad Hoc Netw. Comput., May 2004, pp. 222-233.
[11] N. Jain, S. Das, and A. Nasipuri, "A multichannel CSMA MAC protocol with receiver-based channel selection for multihop wireless networks," in Proc. 10th Int. Conf. Comput. Commun. Netw., Oct. 2001, pp. 432-439.
[12] T. Shu, S. Cui, and M. Krunz, "Medium access control for multi-channel parallel transmission in cognitive radio networks," in Proc. IEEE Global Telecommun. Conf., Dec. 2006, pp. 1-5.
[13] C. Han, M. Dianati, R. Tafazolli, and R. Kernchen, "Throughput analysis of the IEEE 802.11p enhanced distributed channel access function in vehicular environment," in Proc. IEEE 72nd Veh. Technol. Conf. Fall, 2010, pp. 1-5.
[14] S. Haykin, "Cognitive radio: Brain-empowered wireless communications," IEEE J. Select. Areas Commun., vol. 23, no. 2, pp. 201-220, Feb. 2005.
[15] I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, "Next generation/dynamic spectrum access/cognitive radio wireless networks: A survey," Comput. Netw., vol. 50, pp. 2127-2159, 2006.


A Secure Message Exchange and Anti-jamming Mechanism in MANET
S.Sevvanthi, G.Arulkumaran
M.Tech, Dept. of Information Technology
Vivekanandha College Of Engineering For Women, Tiruchengode
sevvanthis@gmail.com
Assistant Professor, Dept. of Information Technology
Vivekanandha College Of Engineering For Women, Tiruchengode
erarulkumaran@gmail.com
Abstract - Secure neighbor discovery is a fundamental process in MANETs deployed in hostile environments. It refers to the process by which nodes exchange messages to discover and authenticate each other. It is vulnerable to the jamming attack, in which the adversary intentionally transmits signals to prevent neighboring nodes from exchanging messages. Existing anti-jamming communication relies on JR-SND, a jamming-resilient secure neighbor discovery scheme for MANETs based on random spread-code pre-distribution and Direct Sequence Spread Spectrum (DSSS). The existing scheme prevents jamming through a DSSS-based anti-jamming mechanism, introduces a secure message exchange mechanism, and prevents collisions during packet transmission. However, it lacks a way to detect selfish and malicious nodes in the network. We therefore enhance the work by detecting selfish nodes using a Watchdog and the Neighbor Coverage-based Probabilistic Rebroadcast Protocol (NCPR).
Keywords - Watchdog, Neighbor Coverage-based Probabilistic Rebroadcast Protocol (NCPR)

I. INTRODUCTION
Mobile ad hoc networks (MANETs) are a promising area based on the ability of self-configuring mobile devices to connect into a wireless network without using any infrastructure. Because MANETs are mobile, they use wireless connections to connect to various networks. Wherever group effort is required, MANETs play a major role in wireless communication and provide effective communication. Secure neighbor discovery is a fundamental functionality in mobile ad hoc networks (MANETs) deployed in hostile environments. It refers to the process by which nodes exchange messages to discover and authenticate each other [2]. As the basis of other network functionalities such as medium access control and routing, secure neighbor discovery needs to be performed often due to node mobility. Direct Sequence Spread Spectrum (DSSS) is one of the most common forms of spread spectrum techniques [15]. In classic spread spectrum techniques, senders and receivers need to pre-distribute a secret key, with which they can generate spread codes for communication. If a jammer knows the secret key, the adversary can easily jam the communication using the spread codes used by the sender. There have been a few recent attempts to remove the circular dependency of jamming-resistant communication on pre-shared keys, such as JR-SND [1]. Many existing protocols in MANETs work properly only against single-node attacks; they cannot provide protection against multiple malicious nodes working in collusion with one another. Since packet transmission in MANETs depends heavily on mutual trust and cooperation among the nodes in the network, determining the trust of an individual node before actually forwarding a packet to it becomes essential for successful packet transmission.
In this paper we propose a watchdog timer and NCPR, which can help in detecting malicious behavior of some nodes in the network. NCPR is used to decrease routing overhead based on neighbor coverage knowledge and rebroadcast probability. The excess of route requests is reduced using methods such as the neighbor coverage-based probabilistic (NCPR) method, which improves end-to-end delay and packet delivery ratio. The node which has sufficient power to send the packet is recognized by using a good-neighbor-node detection method; NCPR provides an optimal solution for finding good nodes. The performance metrics used in classifying nodes are transmission range and power of the node, signal strength, packet forwarding capacity, and relative location of the node.
Our main contributions are summarized as follows.
1. We identify selfish nodes in MANETs as a related problem that cannot be addressed by existing
anti-jamming techniques.
2. We propose an NCPR and Watchdog scheme to detect node misbehaviour.
The rest of this paper is structured as follows. Section II discusses the related work. Section III introduces the proposed scheme. Section IV illustrates the implementation of the proposed system. Section V presents the performance evaluation, and Section VI concludes this paper.
II. RELATED WORK
Several schemes have been proposed to enable two nodes to establish a secret spread code (or key) under the jamming attack. The schemes proposed in [1] are all based on DSSS and a publicly known spread-code set and are thus vulnerable to the DoS attack. In [7], the authors proposed Mobile Secure Neighbor Discovery (MSND), which offers a measure of security against wormholes by allowing participating mobile nodes to securely determine whether they are neighbors. In [3], the authors proposed an enhanced security scheme against the jamming attack with the AOMDV routing protocol [4]. The jamming attacker delivers a huge amount of unauthorized packets into the network, and as a result the network becomes congested. The proposed scheme identifies the jamming attacker and blocks its activities by identifying the unauthorized packets in the network. The multipath routing protocol AOMDV is used to improve network performance, but there is a case in which the jamming condition occurs naturally and is not caused intentionally by an attacker [5]. In the presence of an attacker, the security method always provides a secure path, and through multipath routing the possibility of secure routing is enhanced. The schemes proposed in [1], [6], and [15] are all based on DSSS. DSSS is a modulation technique widely used in code division multiple access (CDMA) systems, e.g., IS-95. In a DSSS system, the sender spreads the data signal by multiplying it by an independent noise signal known as a spread code, which is a pseudorandom sequence of 1 and -1 values at a frequency much higher than that of the original signal. The energy of the original signal is thus spread into a much broader band. The receiver can recover the original signal by multiplying the received signal by a synchronized version of the same spread code, a procedure known as de-spreading. To transmit a message, the sender first transforms the message into an NRZ sequence by replacing each bit 0 with -1 and then multiplies each bit of the message by a spread code to get the spread message, also known as the chip sequence [1]. In [4], the authors proposed a method that uses a passive ad hoc identity method and key distribution. Detection can be done by a single node, or multiple trusted nodes can cooperate to improve the accuracy of detection. As noted in [4], Sybil attacks pose a great threat to decentralized systems such as peer-to-peer networks and geographic routing protocols. More recently, [9] proposed an anti-jamming scheme that exploits the interference cancellation and transmit precoding capabilities of MIMO technology.
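The spreading and de-spreading operations described above take only a few lines to demonstrate; the code length, noise level, and bit pattern below are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(1)

def spread(bits, code):
    """Map bits {0,1} to NRZ {-1,+1} and multiply each bit by the spread code."""
    nrz = 2 * np.asarray(bits) - 1
    return (nrz[:, None] * code).ravel()          # chip sequence

def despread(chips, code):
    """Correlate each chip block with the synchronized code and threshold."""
    blocks = chips.reshape(-1, code.size)
    return ((blocks @ code) > 0).astype(int)

code = rng.choice([-1, 1], size=31)               # pseudorandom +/-1 spread code
bits = [1, 0, 1, 1, 0]
noisy = spread(bits, code) + rng.normal(0, 1.0, size=len(bits) * code.size)
print(despread(noisy, code))                      # recovers [1 0 1 1 0] with high probability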
III. PROPOSED SCHEME
This system proposes a scheme combining a watchdog with a rebroadcast delay. The rebroadcast delay is used to determine the forwarding order: the node which has more common neighbors with the previous node has the lower delay. If this node rebroadcasts a packet, then many common neighbors will learn this fact. The rebroadcast delay therefore enables the information that a node has transmitted the packet to spread to many neighbors. This is performed using the Neighbor Coverage-based Probabilistic Rebroadcast Protocol, based on the neighbor knowledge method. The watchdog is used to detect selfish nodes during packet transmission.
IV. IMPLEMENTATION
A. Watchdog
Watchdogs are a well-known mechanism to detect attacks from selfish nodes in the network. A way to decrease the overall detection time of selfish nodes is the collaborative watchdog. Collaborative contact means that both nodes coordinate: if one of them has one or more positive detections, it can transmit this information to other nodes. Nodes can directly communicate with each other if a contact occurs (that is, if they are within communication range). Supporting this coordination is a cost-intensive activity for nodes; thus, in the real world, nodes may behave selfishly and be unwilling to forward packets for others. Selfishness means that nodes decline to forward other nodes' packets in order to save their own resources. Detecting such nodes quickly and accurately is therefore essential for the overall performance of the network. Previous works have shown that watchdogs are appropriate mechanisms to detect misbehaving and selfish nodes.
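A minimal sketch of such a collaborative watchdog is shown below; the drop-ratio threshold and minimum sample count are illustrative parameters, not values from the cited works.

from collections import defaultdict

class Watchdog:
    """Minimal watchdog sketch: a node overhears whether a neighbour
    actually forwards packets it was asked to relay, and flags the
    neighbour as selfish once the drop ratio crosses a threshold."""
    def __init__(self, threshold=0.5, min_samples=10):
        self.sent = defaultdict(int)       # packets handed to each neighbour
        self.forwarded = defaultdict(int)  # packets overheard being relayed
        self.threshold = threshold
        self.min_samples = min_samples

    def observe(self, neighbour, relayed):
        self.sent[neighbour] += 1
        if relayed:
            self.forwarded[neighbour] += 1

    def is_selfish(self, neighbour):
        n = self.sent[neighbour]
        if n < self.min_samples:
            return False                   # not enough evidence yet
        return self.forwarded[neighbour] / n < self.threshold

    def share(self, other):
        """Collaborative step: merge another node's observations to
        shorten the overall detection time."""
        for nb, n in other.sent.items():
            self.sent[nb] += n
            self.forwarded[nb] += other.forwarded[nb]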
B. NCPR (Neighbor Coverage-Based Probabilistic Rebroadcast Protocol)
The proposed scheme is a neighbor coverage-based probabilistic rebroadcast protocol that can be used to decrease routing overhead based on neighbor coverage knowledge and rebroadcast probability. The overhead of route requests is reduced using methods such as the neighbor coverage-based probabilistic (NCPR) method, which improves end-to-end delay and packet delivery ratio. The node which has sufficient power to transmit the packet is identified by using a good-neighbor-node detection method. This method provides the best solution for finding good nodes. The performance metrics used in categorizing nodes are transmission range, power of the node, packet forwarding capacity, and relative position of the node.
Selfishness of a node is categorized as one of two types:
1. Full selfishness, and
2. Partial selfishness.
Algorithm Description
The formal description of the Neighbour Coverage-based Probabilistic Rebroadcast (NCPR) scheme for decreasing routing overhead in route discovery is given below.
Definitions: RREQv: RREQ packet received from node v. Rv.id: the unique identifier of RREQv. U(u, x): uncovered neighbours set of node u for the RREQ whose id is x. N(u): neighbour set of node u. Timer(u, x): timer of node u for the RREQ packet whose id is x. (Note that, in the actual implementation of the NCPR protocol, every distinct RREQ needs a UCN set and a Timer.)

When node ni receives an RREQ packet from its previous node s, it can use the neighbor list in the RREQ packet to calculate how many of its neighbors have not been covered by the RREQ packet from s. If node ni has more neighbors uncovered by the RREQ packet from s, it means that if node ni rebroadcasts the RREQ packet, the packet can reach additional neighbor nodes. In the algorithm, N(s) and N(ni) are the neighbor sets of nodes s and ni, respectively, where s is the node which sends an RREQ packet to node ni. When a neighbor receives an RREQ packet, it can calculate the rebroadcast delay Td(ni) according to the neighbor list in the RREQ packet and its own neighbor list, where Tp(ni) is the delay ratio of node ni, MaxDelay is a small constant delay, and |.| denotes the number of elements in a set. When node s sends an RREQ packet, all its neighbors ni, i = 1, 2, ..., |N(s)|, receive and process the RREQ packet. If node ni receives a duplicate RREQ packet from a neighbor nj, it knows how many of its neighbors have been covered by the RREQ packet from nj, and adjusts its UCN set according to the neighbor list in the RREQ packet from nj. After
adjusting U(ni), the RREQ packet received from nj is discarded. When the rebroadcast-delay timer of node ni expires, the node obtains the final UCN set. Note that if a node does not sense any duplicate RREQ packets from its neighborhood, its UCN set is not changed and remains the initial UCN set. We define the additional coverage ratio Ra(ni) of node ni: as Ra becomes larger, more nodes will be covered by this rebroadcast and more nodes need to receive and process the RREQ packet, so the rebroadcast probability should be set higher. We can use 5.1774 log n as the connectivity metric of the network, and we let Fc(ni) denote the ratio of the number of nodes that need to receive the RREQ packet to the total number of neighbors of node ni. For the probability of network connectivity approaching 1, we have the heuristic formula |N(ni)| . Fc(ni) ≥ 5.1774 log n. Combining the additional coverage ratio and the connectivity factor, we obtain the rebroadcast probability Pre(ni) of node ni; if Pre(ni) is greater than 1, we set Pre(ni) to 1. Note that the calculated rebroadcast probability Pre(ni) may be greater than 1, but this does not affect the behavior of the protocol; it just shows that the local density of the node is so low that the node must forward the RREQ packet. Node ni then rebroadcasts the RREQ packet received from s with probability Pre(ni).
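The quantities above fit together as in the following toy rendering of the NCPR computation (the neighbour sets, node count, and MaxDelay value are invented for the example).

import math

def rebroadcast_decision(n_s, n_i, ucn, total_nodes, max_delay=0.01):
    """Sketch of the NCPR quantities for node ni receiving an RREQ from s:
    rebroadcast delay Td, additional coverage ratio Ra, connectivity
    factor Fc, and rebroadcast probability Pre (clamped to 1)."""
    common = len(n_s & n_i)
    tp = 1 - common / len(n_s)             # delay ratio: more common neighbours -> lower delay
    td = max_delay * tp                    # rebroadcast delay Td(ni)
    ra = len(ucn) / len(n_i)               # additional coverage ratio Ra(ni)
    nc = 5.1774 * math.log(total_nodes)    # connectivity metric
    fc = nc / len(n_i)                     # connectivity factor Fc(ni)
    pre = min(1.0, fc * ra)                # rebroadcast probability Pre(ni)
    return td, pre

# toy example: neighbour sets as Python sets of node ids
n_s, n_i = {1, 2, 3, 4}, {3, 4, 5, 6, 7}
ucn = n_i - n_s                            # initial uncovered-neighbour set
print(rebroadcast_decision(n_s, n_i, ucn, total_nodes=100))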
V. PERFORMANCE EVALUATION
We perform several tests using the ns-2 simulator. In order to do this, we implemented a specific watchdog module for this simulator, available at http://safewireless.sourceforge.net/. Using this simulator allows us to test networks with a large number of nodes, varying the number of attackers and their mobility. Figure 1 shows a preliminary study of how the percentage of malicious nodes and the total number of nodes in the scenario affect the probability that an attack is performed on a traffic flow. As we can see, not only does the percentage of attackers affect the probability of finding an attack in a test; the total number of nodes in the scenario affects it as well. Afterwards, we implemented the watchdog mechanism for this simulator and performed several tests varying the mobility of the nodes and the number of attacks to assess the effectiveness of the watchdog.

Figure 1: Probability of an attack when varying the number of nodes and the percentage of malicious nodes

Figure 2: Attacks detected by the watchdog

Figure 2 shows the results obtained with different parameters. We can see that mobility clearly affects the number of attacks detected: it decreases when mobility is increased. With a mobility of 1 m/s, nearly 100% of the attacks are detected.
VI. CONCLUSION
In this paper, we propose a watchdog mechanism to detect selfish nodes based on NCPR. It refers
to the process by which nodes exchange messages to discover and authenticate each other. The results show that a collaborative watchdog can reduce the overall detection time of a selfish node, thus improving the performance of the network by excluding these selfish nodes from the routing path. Using this technique, the local watchdog can generate a positive (or negative) detection when a node is acting selfishly.
REFERENCES
[1] R. Zhang, Y. Zhang, and X. Huang, "JR-SND: Jamming-resilient secure neighbor discovery in mobile ad-hoc networks," in Proc. IEEE ICDCS '11, Minneapolis, Minnesota, June 2011.
[2] P. Papadimitratos, M. Poturalski, P. Schaller, P. Lafourcade, D. Basin, S. Capkun, and J.-P. Hubaux, "Secure neighborhood discovery: A fundamental element for mobile ad hoc networking," IEEE Commun. Mag., vol. 46, no. 2, pp. 132-139, February 2008.
[3] Priyanka Sharma and Anil Suryawanshi, "Enhanced security scheme against jamming attack in mobile ad hoc network," in Proc. IEEE Int. Conf. Advances in Engineering & Technology Research (ICAETR-2014), August 2014.
[4] P. Kavitha, C. Keerthana, V. Niroja, and V. Vivekanandhan, "Mobile-id based Sybil attack detection on the mobile ad hoc network," International Journal of Communication and Computer Technologies, vol. 02, no. 02, March 2014.
[5] Umme-Rani Syed, Arif Iqbal Umar, and Fahad Khurshid, "Avoidance of black-hole affected routes in AODV-based MANET," in Proc. Int. Conf. Open Source Systems and Technologies (ICOSST), 2014.
[6] Q. Wang, P. Xu, K. Ren, and X.-Y. Li, "Towards optimal adaptive UFH-based anti-jamming wireless communication," IEEE J. Select. Areas Commun., vol. 30, no. 1, pp. 16-30, 2012.
[7] R. Stoleru, H. Wu, and H. Chenji, "Secure neighbor discovery in mobile ad hoc networks," in Proc. IEEE Int. Conf. Mobile Ad-Hoc and Sensor Systems, 2011.
[8] Marcin Poturalski, Panos Papadimitratos, and Jean-Pierre Hubaux, "Formal analysis of secure neighbor discovery in wireless networks," IEEE Transactions on Dependable and Secure Computing, 2013.
[9] Qiben Yan, Huacheng Zeng, Tingting Jiang, Ming Li, Wenjing Lou, and Y. Thomas Hou, "MIMO-based jamming resilient communication in wireless networks," in Proc. IEEE Conf. Computer Communications, 2014.
[10] Liang Xiao, Huaiyu Dai, and Peng Ning, "Jamming-resistant collaborative broadcast using uncoordinated frequency hopping," IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, February 2012.
[11] Chengzhi Li, Huaiyu Dai, Liang Xiao, and Peng Ning, "Communication efficiency of anti-jamming broadcast in large-scale multi-channel wireless networks," IEEE Transactions on Signal Processing, vol. 60, no. 10, October 2012.
[12] Reshma Lill Mathew and P. Petchimuthu, "Detecting selfish nodes in MANETs using collaborative watchdogs," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, no. 3, March 2013.
[13] Lahari P. and Pradeep S., "A neighbor coverage-based probabilistic rebroadcast for reducing routing overhead in mobile ad hoc networks," International Journal of Computer Science and Information Technologies, vol. 5, no. 2, 2014.
[14] Shengli Zhou, Georgios B. Giannakis, and Ananthram Swami, "Digital multi-carrier spread spectrum versus direct sequence spread spectrum for resistance to jamming and multipath," IEEE Transactions on Communications, vol. 50, no. 4, April 2002.


A Novel Approach for Secure Group Data Sharing and Forwarding in Cloud Environment
N.Vishnudevi1, P.E.Prem2, M.Madlinasha3
1 PG Student [IT], Dept. of IT, Vivekanandha College of Engg for Women, Tiruchengode, Tamilnadu, India
2 Assistant Professor, Dept. of IT, Vivekanandha College of Engg for Women, Tiruchengode, Tamilnadu, India
3 Assistant Professor, Dept. of IT, Vivekanandha College of Engg for Women, Tiruchengode, Tamilnadu, India

Abstract - Off-site data storage is a function of the cloud that relieves customers from focusing on data storage infrastructure. Outsourcing data to third-party administrative control, however, raises security problems: data leakage may occur due to attacks by other users in the cloud, and compromise of data by the cloud service provider is yet another problem, so a high level of security is needed. This paper provides high security through Data Security for Cloud Environment with Semi-Trusted Third Party (DaSCE), for secure group data sharing and forwarding. It provides key management, access control, and file assured deletion. DaSCE uses Shamir's threshold scheme to generate and handle the keys. We use multiple key managers, each holding one share of the key; multiple key managers avoid a single point of failure for the cryptographic keys. We implement and evaluate a working model of DaSCE, measuring performance based on the time consumed by various operations, and then analyze the working of DaSCE using High Level Petri nets. The outcome shows that DaSCE can be efficiently used to secure outsourced data through key management, access control, and file assured deletion.
Index Terms - Cloud Computing, High Level Petri Nets, file assured deletion, key management, Shamir scheme

I. INTRODUCTION
Cloud computing is packaged within a new infrastructure paradigm that offers improved scalability, elasticity, startup time, reduced costs, and just-in-time availability of resources. Cloud computing has emerged as a way of delegating the management of hardware and software assets located at a third-party facility provider. On-demand access to computing resources relieves customers from building and maintaining complex infrastructures. Cloud computing offers every computing component as a utility, such as software, platform, and infrastructure. The cost savings in infrastructure and maintenance, together with flexibility, make cloud computing attractive for organizations and individual customers. Despite these benefits, cloud computing faces certain challenges and issues that hinder widespread adoption of the cloud; for instance, security, performance, and quality are often mentioned. Security and privacy concerns when using cloud computing services are similar to those of traditional non-cloud services, but the apprehensions are amplified by external control over executive resources and the potential for mismanagement of those resources. Transitioning to public cloud computing involves a transfer of responsibility and control over the information system to the cloud provider, which causes the user to focus on data security during transmission, processing, and storage when moving data to the cloud, and requires a certain level of trust. Multiple users, separated through virtual machines, share resources as well as storage space. Multi-tenancy and virtualization generate risks and undermine the confidence of users in adopting the cloud model.
Given the security concerns of outsourcing data to public clouds, we work on the development of a data security technique capable of addressing the critical issues: a data security scheme that uses key manager servers for the cryptographic keys. Shamir's (k, n) threshold scheme is used for the management of keys, using any k shares out of n to reconstruct the key. Access to keys and data is ensured through a policy file. The client generates random symmetric keys for encryption and integrity functions. The symmetric keys are protected by the public key, after which all symmetric keys are deleted from the client. The encrypted data and keys are uploaded to the cloud. For downloading the data, the client presents a policy file to the cloud, downloads the encrypted data and keys, and decrypts them. FADE is a lightweight, scalable method that assures the deletion of files from the cloud when requested by the user. During our examination, FADE fell short on issues of key security and authentication of the participating parties. Based on the issues identified with FADE, we develop the scheme further and name it Data Security for
Cloud Environment with Semi-Trusted Third Party (DaSCE) for group data sharing and forwarding. To counter the man-in-the-middle attack, we add steps to the session key establishment process, which raises the security level and prevents a malicious user from carrying out the attack at a modest performance cost. The results of our verification analysis show that DaSCE is more secure than FADE when a man-in-the-middle attack is introduced. DaSCE protects data outsourced to the cloud using a combination of symmetric and asymmetric encryption. DaSCE ensures data confidentiality at the cloud as long as the data is in use by the client; it also assures that deleted data becomes unrecoverable after the client deletes it from the cloud. Access control to both data and keys is enforced through policy files and mutual authentication between the client, the key managers, and the cloud. Digital signatures and a variation of Diffie-Hellman are used for mutual authentication of the parties. Successful authentication and session key establishment result in asymmetric keys that are used in subsequent cryptographic operations. The integrity of data is ensured using symmetric keys and a Message Authentication Code (MAC), and the symmetric keys are secured with asymmetric keys generated by the third-party key managers.
II. RELATED WORKS
One line of work provides security to cloud data in terms of a number of services, such as integrity, freshness, and availability. The authors use a gateway application in the organization to handle the integrity and freshness checks for the data. The Iris file system is designed to transfer an organization's internal file system to the cloud [3]. A Merkle tree is used by the gateway, which ensures freshness and integrity of data by inserting file blocks, MACs, and file version numbers at various levels of the tree. The gateway application maintains the cryptographic keys for confidentiality needs.
Another proposed monitoring and auditing model audits the cloud environment to ensure data freshness, data retrievability, and resilience against disk failures. The method depends on the encryption employed by users for data confidentiality [7]; data cannot be fully protected against the service provider.
Other authors presented a cryptographic file system that provides confidentiality and integrity services to the outsourced data, using a hash-based MAC tree for the aforesaid services. Block-wise encryption is used for the construction of the MAC tree. The file system at the client side interacts with the file system of the server and outsources the encrypted blocks [4]. Encrypted file blocks and cryptographic metadata are stored separately; the presence of cryptographic metadata on the storage side can be a prospective threat.
III. EXISTING METHOD
The existing method provides security for the data as well as key maintenance in separate servers, offers an efficient method for data storage in the cloud environment, and authenticates the data owners and users requesting to download a file.
1. Cryptographic File System based on Hashed MAC:
Block-wise encryption is used for the construction of a MAC tree. The file system at the client side interacts with the file system of the server and outsources the encrypted blocks. Encrypted file blocks and cryptographic metadata are stored separately.
2. Cloud Storage System Based on Secure Erasure Code:
The system uses threshold key servers for storing a user's keys generated by a system manager. The user encrypts the data divided into blocks and stores every block on randomly selected multiple servers. The system also provides data forwarding by allowing any of the users to forward data without downloading it.

Fig. 1. Architecture of DaSCE


IV. PROPOSED WORK
DaSCE uses Shamir's k-out-of-n threshold scheme to manage the keys, which makes key generation straightforward. It uses multiple key managers, each holding one share of the key; multiple key managers avoid a single point of failure for the cryptographic keys. The FADE protocol provides privacy, security, integrity, access control, and guaranteed deletion for outsourced data. FADE uses both symmetric and asymmetric keys, and the client checks the integrity of the data. Symmetric decryption is performed by the user to retrieve the original data. DaSCE improves the key management and authentication processes and supports secure group data sharing and forwarding. The performance of DaSCE is fast in terms of the time consumed during file upload and file download. Authentication is fully provided for all data located in the public cloud, at relatively low cost. Users select their own number of key managers, all of which are needed to produce the key. The data owner generates the symmetric key value S, which is split into shares; each key manager stores its key pair with its own identity and public key.
V. ALGORITHM
Digital Signature: The hash value of a message, when encrypted with the private key of its owner, is the owner's digital signature on that electronic document. As the public key of the signer is known, anybody can verify the message and the digital signature. Digital signatures provide accuracy, reliability, and non-repudiation for e-documents, allowing the Internet to be used safely and securely.
Diffie-Hellman: The Diffie-Hellman key exchange technique allows two parties that have no previous knowledge of each other to jointly establish a shared secret key over an insecure channel. This key agreement underlies various authenticated protocols and also provides strong protection in Transport Layer Security's ephemeral modes. It was followed shortly by RSA.
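The exchange itself is a few lines of modular arithmetic; the sketch below uses a toy group (a 127-bit Mersenne prime) purely for readability, where a real deployment would use a standardized group of 2048 bits or more.

import secrets

# Toy Diffie-Hellman sketch (illustrative parameters only).
p = 2**127 - 1                      # a Mersenne prime used as the modulus
g = 5                               # public base

a = secrets.randbelow(p - 2) + 1    # client's secret exponent
b = secrets.randbelow(p - 2) + 1    # key manager's secret exponent
A = pow(g, a, p)                    # sent over the insecure channel
B = pow(g, b, p)

shared_client = pow(B, a, p)        # client derives the session key material
shared_km = pow(A, b, p)            # key manager derives the same value
assert shared_client == shared_km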
HMAC: A hash function produces a fingerprint of a file, message, or data, protecting the integrity of the message by creating a small fixed-size block that depends on both the message and a key. Checking that the received MAC matches a recomputed one gives assurance that the communication is unchanged and comes from the claimed sender.
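Python's standard library provides this primitive directly; the key and message below are placeholders.

import hmac, hashlib

key = b"shared-integrity-key"                 # hypothetical symmetric key
message = b"encrypted file block"

tag = hmac.new(key, message, hashlib.sha256).digest()   # fixed-size MAC

# The receiver recomputes the tag; a match confirms the message is
# unchanged and was produced by a holder of the key.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print(ok)  # True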
FADE:
(a) File upload:
To establish the session key, publicly reachable parameters are used: a primitive root g and a large prime number p. The client generates a random number and calculates the session key with the key manager; the file is uploaded based on modular (mod) operations. The file upload process involves multiple key managers, and the main threat to the file upload process is the man-in-the-middle attack.
Fig. 2. File Upload

(b) File download:
The client sends a request to download the file and the encrypted keys from the cloud. The client checks the integrity of the file through the HMAC. Then the client generates a secret number, calculates the required value, and sends it to the KM for decryption. The KM sends back attributes based on the policy Pi. The client extracts the key from the received message, which in turn is used to decrypt F.

Fig. 3. File Download


(c) Policy Revocation:
The client sends a request to the key manager to revoke the policy of the file. The key manager provides a random number and sends it to the client for decryption. The authentic client decrypts r, calculates the hash value, and sends it back to the key manager. After key verification, the KM revokes the policy file and acknowledges the request.

Fig. 4. Policy Revocation

(d) Policy Renewal:
If policy file Pi needs to be renewed as Pj, the client downloads the file and sends it to the key manager; it is encrypted with Si and afterwards re-protected under Pj. The KM sends the new public key parameters (ej, nj) to the client, and the renewal is completed by the key manager.

Fig. 5. Policy Renewal
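Putting upload, download, and revocation together, the following toy sketch mimics the assured-deletion idea: the data key K is wrapped with the policy key, so once the key manager discards the policy key the ciphertext becomes unrecoverable. The SHA-256 keystream and the XOR wrapping stand in for real ciphers and are for illustration only.

import hashlib, secrets

def keystream_xor(key, data):
    """Toy counter-mode keystream from SHA-256 (illustration only; a real
    system would use AES-GCM or similar)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# File upload: encrypt the file with a random data key K, then protect K
# with the policy key held by the key manager.
K = secrets.token_bytes(32)
policy_key = secrets.token_bytes(32)
ciphertext = keystream_xor(K, b"group data to share")
wrapped_K = bytes(a ^ b for a, b in zip(K, policy_key))

# File download: unwrap K with the policy key, then decrypt.
K2 = bytes(a ^ b for a, b in zip(wrapped_K, policy_key))
print(keystream_xor(K2, ciphertext))

# Policy revocation (assured deletion): once the key manager discards
# policy_key, wrapped_K can no longer be unwrapped and the file is
# logically unrecoverable.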

VI. MODULE DESCRIPTION
(1) Key manager setup:
For efficient storage of the different keys used to encrypt the files stored in the cloud, key managers are needed. All key managers are authenticated by the cloud service provider. Each manager has its own identity and generates its share of the key used to encrypt the file.
(2) Keys for file encryption:
Users select their own number of key managers, all of which are needed to produce the key. The data owner generates the symmetric key value S, which is split into shares; each key manager stores its key pair with its own identity and public key.
(3) Shamir's Strategy:
Shamir's strategy helps to reconstruct the symmetric key S by obtaining enough shares of the key from the specified key managers. Here k represents the number of key managers required and n represents the number of shares of the symmetric key distributed over all key managers (KMs). The client breaks the symmetric key S into n shares (s1, s2, ..., sn), encrypts the i-th share with the public key of the i-th KM, and uploads all shares of S to the cloud. To recover the key, the client downloads all shares, selects k KMs at random, sends the i-th share of S to its KM, receives back the decrypted i-th share, and reconstructs S from the k shares according to Shamir's strategy.

Fig. 6. Shamir's Strategy
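A compact sketch of Shamir's (k, n) scheme over a prime field is given below; the field modulus and the example secret are arbitrary illustrative choices.

import secrets

P = 2**127 - 1   # prime field modulus (must exceed the secret)

def split(secret, n, k):
    """Split `secret` into n shares; any k shares reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, n=5, k=3)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice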


VII. CONCLUSION
We proposed the DaSCE protocol, a cloud storage security system that provides key management, access control, and file assured deletion for group data sharing and forwarding. Assured deletion mainly depends upon the encrypted policy file that is uploaded: on revocation of policies, the access keys are deleted by the KMs, which halts access to the data, so the files are logically deleted from the cloud. Key management mainly uses the (k, n) threshold secret-sharing mechanism to accomplish this task. We analyzed and modeled FADE through the operations of file upload and file download, and several issues in FADE were highlighted. DaSCE improves the key authentication and management processes. The performance and efficiency of DaSCE were evaluated based on the time consumed during file upload and download. The results show that the DaSCE protocol can be practically used in clouds for the security of outsourced information. The fact that DaSCE does not require any protocol- or implementation-level changes at the cloud makes it a highly practical methodology for the cloud.
REFERENCES
[1] C. de Morais Cordeiro, H. Gossain, and P. Agrawal, "Multicast over wireless mobile ad hoc networks: Present and future directions," IEEE Netw., vol. 17, no. 1, pp. 52-59, Jan./Feb. 2003.
[2] W. Liao and M.-Y. Jiang, "Family ACK tree (FAT): Supporting reliable multicast in mobile ad hoc networks," IEEE Trans. Veh. Technol., vol. 52, no. 6, pp. 1675-1685, Nov. 2003.
[3] G. Ateniese, M. Steiner, and G. Tsudik, "New multi-party authentication services and key agreement protocols," IEEE J. Sel. Areas Commun., vol. 18, no. 4, pp. 628-639, Apr. 2000.
[4] M. Steiner, G. Tsudik, and M. Waidner, "Key agreement in dynamic peer groups," IEEE Trans. Parallel Distrib. Syst., vol. 11, no. 8, pp. 769-780, Aug. 2000.
[5] Y. Kim, A. Perrig, and G. Tsudik, "Simple and fault-tolerant key agreement for dynamic collaborative groups," in Proc. ACM Conf. Comput. Commun. Security, Nov. 2000, pp. 235-244; Z. Wan, J. Liu, and R. H. Deng, "HASBE: A hierarchical attribute-based solution for flexible and scalable access control in cloud computing," IEEE Trans. Inf. Forensics Security, vol. 7, no. 2, pp. 743-754, Apr. 2012.
[6] M. Li, S. Yu, Y. Zheng, K. Ren, and W. Lou, "Scalable and secure sharing of personal health records in cloud computing using attribute-based encryption," IEEE Trans. Parallel Distrib. Syst., vol. 24, no. 1, pp. 131-143, Jan. 2013.
[7] J. Li, X. Huang, J. Li, X. Chen, and Y. Xiang, "Securely outsourcing attribute-based encryption with checkability," IEEE Trans. Parallel Distrib. Syst., vol. 20, no. 8, pp. 2201-2210, Aug. 2014.
[8] P. Barralon, N. Vuillerme, and N. Noury, "Walk detection with a kinematic sensor: Frequency and wavelet comparison," in Proc. IEEE EMBS '06, 2006, pp. 1711-1714.
[9] H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, and R. R. Choudhury, "No need to war-drive: Unsupervised indoor localization," in Proc. 10th ACM MobiSys, 2012, pp. 197-210.


Survey of PDA Technique With Flying Capacitor For Buck Boost Converter
Kiruthika S, PG Scholar, M.Kumarasamy College of Engineering

Abstract - The buck-boost converter needs a high-efficiency control strategy for improving the transients in the output voltage. A sophisticated control technique can regulate the output voltage for an input voltage that is higher than, lower than, or the same as the output voltage. There are several available solutions to these problems. The technique introduced here is unique of its kind from the point of view of the ripple content in the output voltage and the reliability of the control strategy. The best approach involves a tradeoff among cost, efficiency, and output noise or ripple. The main objective of this work is to build a positive buck-boost regulator that automatically transits from one mode to the other. The method introduced in this work is a combination of buck, boost, and buck-boost modes. Basic analytical studies have been made and are presented. In the buck-boost method, instead of an immediate transition from buck to boost mode, intermediate combination modes consisting of several buck cycles followed by several boost cycles are utilized to spread out the voltage transients. This is unique of its kind from the point of view of improving the efficiency and ripple content in the output voltage. Theoretical considerations are presented, and simulation results are shown to support the proposed theory.
Keywords: dc-dc converter, charge pump, buck converter
I. INTRODUCTION

A very widespread power-handling problem, especially for portable applications powered by batteries, such as cellular phones, personal digital assistants (PDAs), wireless and digital subscriber line (DSL) modems, and digital cameras, is to provide a regulated non-inverting output voltage from a variable input battery voltage. The battery voltage, when charged or discharged, can be greater than, equal to, or less than the output voltage. For such small-scale applications, it is essential to regulate the output voltage of the converter with high precision and performance. For that reason, a tradeoff among cost, efficiency, and output transients should be considered. A common power-handling issue for space-constrained applications powered by a battery is regulation of the output voltage in the midrange of a varying input battery voltage. Some common examples are a 3.3 V output with a 3-4.2 V Li-cell input, a 5 V output with a 3.6-6 V four-cell alkaline input, or a 12 V output with an 8-15 V lead-acid battery input.
This paper describes a new method for minimizing the transients at the output of a DC-DC converter required for low-power portable electronic applications.

Fig 1: Buck, boost, buck-boost converter


The transient problem is serious for power supplies needing an output voltage in the midrange of the input voltage: the maximum transient arises when the input voltage becomes roughly equal to the output voltage. Various techniques have been used to solve the problem of transients. However, some methods have drawbacks such as comparatively higher transients or inferior efficiency because of the longer switching operation. This paper describes a few methods already used to tackle the transient problem and points out their demerits. A new combination method, which combines buck and boost modes during the transition mode, is described in this paper to diminish the transients at the output of the converter when the input voltage is near the output voltage. Exact equations have been put forward to support the proposed idea of transient minimization. Simulation results have been added to make a comparative analysis of the transient response of this method against the other methods.
II. SURVEY

Hybrid Buck-Boost Feedforward and Reduced Average Inductor Current Techniques in Fast Line Transient and High-Efficiency Buck-Boost Converter
A buck-boost converter with a novel control method was introduced in this paper. Its advantages include reduced switching losses, through using only the minimum number of switches during each cycle, and decreased conduction losses in the power switches due to the reduced average inductor current technique. The efficiency is effectively improved. A novel mode detector can choose the proper operating mode to get a regulated output, and thus enhanced control accuracy is guaranteed throughout the mode transition. Besides, the hybrid buck-boost feedforward method is integrated in this converter to reduce the voltage variation at the output of the error amplifier. As a consequence, a fast line transient response can be achieved with a small dropout voltage at the output. Experimental results show that the output voltage is regulated over the whole battery life, and the output transition is very smooth during mode transition under the proposed control system.

Power-Tracking Embedded Buck-Boost Converter with Fast Dynamic Voltage Scaling for the SoC System
Current-mode control is utilized to attain fast transient response, improved line rejection capability, and on-chip compensation at the same time. In addition, the output voltage ripple is minimized, which matters particularly for noise-sensitive blocks such as the RF circuits in the SoC system. Using a high switching frequency in the dc-dc converter reduces the output voltage ripple and reduces the value of the off-chip inductor and the printed circuit board area. The proposed high-switching-frequency current-mode PTE-BB converter with F-DVS function was fabricated in a 0.25-um CMOS process. To efficiently power the SoC system, the F-DVS function rapidly adjusts the output voltage to meet different power requests. In addition, when a small voltage difference exists between VBAT and VOUT, the use of the PCC and VCC methods in the buck and boost operations, respectively, can eliminate the system instability caused by switching noise to ensure stable voltage regulation.

A Near-Optimum Dynamic Voltage Scaling (DVS) in 65-nm Energy-Efficient Power Management with Frequency-Based Control (FBC) for SoC System
An energy-efficient SIDO power module with a frequency-based hybrid control loop is proposed to achieve near-optimum DVS operation in an SoC system. The power loop controlling the SIDO power module is combined with a PLL, which simultaneously realizes the proper supply voltage and processor frequency. Near-optimum DVS operation is achieved by the system processor, which overcomes PVT-caused distortion to obtain the near-optimum supply voltage. In addition, the FBC circuits help to achieve the hybrid control scheme in accordance with the demanded operating frequency; therefore, DVS and DFS operation can be achieved at the same time. Experimental results show correct near-optimum DVS operation and a proper energy delivery scheme in the SIDO power module, as well as the requested operating frequency. The proposed power management unit, fabricated in 65-nm technology, occupies a 1.12 mm2 silicon area and achieves a power reduction of up to 33%.

A 90-240 MHz Hysteretic Controlled DC-DC Buck Converter with Digital Phase Locked Loop Synchronization
A bang-bang D-PLL frequency-locking technique is presented for ultra-high-frequency hysteretic-controlled dc-dc buck converters. The D-PLL locks the converter operating frequency to a clock reference to eliminate the dependence of the switching frequency on the output conversion voltage. A voltage- or digitally-controlled delay element inserted within the hysteretic control loop is used to control the converter's switching frequency. An equivalent linear system representation is employed to model the hysteretic loop as a voltage-controlled or digitally controlled oscillator. Linear system analysis shows that when the oscillator is placed inside a PLL, the stability of the system is primarily determined by the PLL parameters and is weakly dependent on the dc-dc converter parameters; thus the stability of the PLL is ensured for wide frequency locking of the hysteretic dc-dc buck converter. Both analog and digital PLL designs, as well as linear and binary bang-bang loops, are feasible. Moreover, the proposed synchronization scheme can implement the hysteretic control loop as a fixed-frequency PWM controller or an auto-selectable-frequency PFM controller. By adjusting the D-PLL division ratio N, the converter switching frequency can be scaled by binary-weighted multiples of the fundamental to reduce switching losses and improve light-load efficiency while maintaining a predictable spectrum at the output.

Digital Combination of Buck and Boost Converters to Control a Positive Buck-Boost Converter
The described control technique can regulate the output voltage for an input voltage that is higher than, lower than, or the same as the output voltage. There are several available solutions to these problems, but all have their disadvantages. The method introduced here is unique of its kind from the point of view of the ripple content in the output voltage and the reliability of the control approach. The best approach involves a tradeoff among cost, efficiency, and output noise or ripple. The design is a positive buck-boost regulator that automatically transits from one mode to the other, combining buck, boost, and buck-boost modes. Basic analytical studies have been made and are presented. In the proposed method, instead of an immediate transition from buck to boost mode, intermediate mixture modes consisting of several buck cycles followed by several boost cycles are utilized to spread out the voltage transients. This is unique of its kind from the point of view of improving the efficiency and ripple content in the output voltage. Theoretical considerations are available.
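The mode-selection logic implied by this combination scheme can be sketched as follows. The hysteresis band and the ideal continuous-conduction duty-cycle relations (D = Vout/Vin for buck, D = 1 - Vin/Vout for boost) are textbook approximations, not the paper's exact controller.

def select_mode(v_in, v_out, band=0.1):
    """Mode selector sketch for a non-inverting buck-boost stage: pure buck
    when the input is well above the output, pure boost when well below,
    and an interleaved buck/boost combination region in between to spread
    out the transition transients."""
    if v_in > v_out * (1 + band):
        return "buck", v_out / v_in            # buck duty cycle D = Vout/Vin
    if v_in < v_out * (1 - band):
        return "boost", 1 - v_in / v_out       # boost duty cycle D = 1 - Vin/Vout
    return "combination", None                 # alternate buck and boost cycles

for v_in in (4.2, 3.4, 3.3, 3.2, 2.7):
    print(v_in, select_mode(v_in, v_out=3.3))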

A Voltage-Mode DC-DC Buck Converter with Fast Output Voltage-Tracking Speed and Wide Output Voltage Range
This paper presents a high-switching-frequency, wide-output-range DC-DC buck converter with a novel compensated error amplifier. The converter has been fabricated in a standard 0.35-um CMOS process. The DC-DC converter has good stability when operated over the wide output range. The converter achieves a high output voltage-tracking speed of 8.8 us/V for up-tracking and 6 us/V for down-tracking. Besides, the recovery times are less than 8 us for both load step-up and step-down. The converter is therefore suitable for wide output ranges, especially where fast voltage-tracking speed is required.

Reduction of Equivalent Series Inductor Effect in Delay-Ripple Reshaped Constant On-Time Control for Buck Converter with Multilayer Ceramic Capacitors
The DZC-NME technique is proposed in this paper to overcome the small ESR value and large ESL effect in the COT buck converter. Even though an MLCC is used as the output capacitor without conventional ESR compensation, the DZC technique can still increase system stability, since the compensator contributes phase lead similar to a PD controller. Besides, the differential structure benefits the noise margin, decreasing jitter and EMI effects. On the other hand, the NME technique eliminates the effect of ESL to enhance noise immunity. Furthermore, using a reliable on-time timer with an improved linear function, the near-constant switching frequency, which is adjusted to accommodate a variable input voltage, can further ensure system stability. Because an MLCC with an extremely small RESR value is used for general applications, the output ripple can be greatly reduced, and the switching power loss can be decreased compared with the large RESR used to compensate conventional ripple-based control. Experimental results verify the correct and effective operation of the DZC and NME techniques in the strict case of a small RESR of 1 mΩ and a large VESL of 40 mV. Without sacrificing the inherent advantages of COT control, the DZC-NME technique for MLCC applications can ensure a low ripple of 10 mV and a high efficiency of 91%.


Single-Inductor Multiple-Output DC-DC Converters
The design of single-inductor multiple-output DC-DC converters is important for future low-power portable systems. The key issues and some possible solutions have been described in this work. The provided examples demonstrate the feasibility and the limits of various approaches. Because of the increasing diffusion of complex portable systems, the area is expected to see great development in the near future.
III. CONCLUSION

An extremely stable flying-capacitor buck-boost converter applying a novel pseudo-current dynamic acceleration method is described in detail. The circuit design of the proposed converter is simple and implemented in a commercial CMOS manufacturing process. The input voltage is 5 V, the output voltage range is 2-3 V, and the operating frequency is 1 MHz; the boost ratio of the positive output voltage is 2D, and the power conversion efficiency reaches 80%. The proposed converter operates in the buck/boost mode by merely changing the duty cycle, improving the transient response time.
REFERENCES
[1] P.-C. Huang, W.-Q. Wu, H.-H. Ho, and K.-H. Chen, "Hybrid buck-boost feedforward and reduced average inductor current techniques in fast line transient and high-efficiency buck-boost converter," IEEE Trans. Power Electron., vol. 25, no. 3, pp. 719-730, Mar. 2010.
[2] Y.-H. Lee et al., "Power-tracking embedded buck-boost converter with fast dynamic voltage scaling for the SoC system," IEEE Trans. Power Electron., vol. 27, no. 3, pp. 1271-1282, Mar. 2012.
[3] W.-C. Chen et al., "Reduction of equivalent series inductor effect in delay-ripple reshaped constant on-time control for buck converter with multi-layer ceramic capacitors," in Proc. IEEE ECCE, Sep. 2012, pp. 755-758.
[4] A. Chakraborty, A. Khaligh, A. Emadi, and A. Pfaelzer, "Digital combination of buck and boost converters to control a positive buck-boost converter," in Proc. IEEE Power Electron. Spec. Conf., Jun. 2006, vol. 1, pp. 1-6.
[5] Yong-Xiao Liu, Jin-Bin Zhao, and Ke-Qing Qu, "Fast transient buck converter using a hysteresis PWM controller," Journal of Power Electronics, vol. 13, no. 6, pp. 991-999, November 2013.
[6] Yu-Huei Lee, Chao-Chang Chiu, and Ke-Horng Chen, "A near-optimum dynamic voltage scaling (DVS) in 65-nm energy-efficient power management with frequency-based control (FBC) for SoC system," IEEE Journal of Solid-State Circuits, vol. 47, no. 11, November 2012.
[7] Yang Miao, Zhang Baixue, Cao Yun, Sun Fengfeng, and Sun Weifeng, "A voltage-mode DC-DC buck converter with fast output voltage-tracking speed and wide output voltage range," Journal of Semiconductors, vol. 35, no. 5, May 2014.
[8] Pengfei Li, Deepak Bhatia, Lin Xue, and Rizwan Bashirullah, "A 90-240 MHz hysteretic controlled DC-DC buck converter with digital phase locked loop synchronization," IEEE Journal of Solid-State Circuits, vol. 46, no. 9, September 2011.


An Efficient Algorithm for Compression and Storage of Two-Tone Image
S.Selvam1, Dr.S.Thabasu Kannan2, R.Ganesh3
1 Research Scholar, Research & Development Centre, Dept. of Computer Science, Bharathiar University, Coimbatore, Tamilnadu, India
2 Principal, Pannai College of Engg & Tech, Sivagangai 630 561, Tamilnadu, India
3 Asst. Prof., Department of Computer Science, NMSSVN College, Madurai

Abstract - Image compression refers to representing an image with as few bits as possible while preserving the level of quality and intelligibility required for a particular application. The present work aims at developing an efficient algorithm for compression and storage of two-tone images.
In this paper, an efficient coding technique is proposed, termed line-skipping coding, for two-tone images. The technique exploits the 2-D correlation present in the image. This new algorithm is devised to reduce memory storage by fifty to seventy-five percent.
Keywords - 2-D correlation, 1LSC, DCT, RLC.

I. INTRODUCTION
The need for compression and electronic storage of two-tone images such as line diagrams, weather maps and printed documents has been increasing rapidly. It has endless applications, ranging from the preservation of old manuscripts to the paperless office and the electronic library. For text such as English and Arabic, good-quality optical character readers are available and provide good compression, but they accept only limited fonts and styles of characters. Also, for line diagrams and for text whose OCRs are not readily available, the material has to be treated as an image. Electronic storage of such images requires a very large amount of memory. To reduce the memory requirement, and hence the cost of storage, efficient coding techniques are used. A large number of coding techniques have been proposed and studied by different researchers. These techniques are broadly classified into two categories: lossless and lossy.
Lossless techniques do not introduce any distortion; from the coded bit stream, an exact replica of the digitized original image can be reconstructed. Lossy techniques introduce some distortion into the reconstructed image while achieving high compression and retaining image usability.
A scan line of a two-tone image consists of runs of black pixels separated by runs of white pixels. Spatially close pixels are significantly correlated, and source coding techniques exploit this correlation either along a single scan line or across many scan lines. The simplest and most commonly used technique is run-length coding, which exploits the correlation along a single scan line to code the runs of black or white pixels. More complex techniques exploit the correlation across many scan lines to give better compression; however, this is achieved at the cost of increased system complexity.
In this paper, a very simple and efficient coding technique, termed skip-line coding, is proposed for two-tone images. The technique exploits the 2-D correlation present in the image. It is based on the assumption that if there exists a very high degree of correlation between successive scan lines, then there is no need to code each of them: only one of them need be coded and the others may be skipped. While decoding, skipped lines are taken to be identical to the previous line. This reduces the storage requirement significantly. The performance of this technique is compared with run-length coding. The paper also considers Discrete Cosine Transform based compression, so as to provide a comparative study of its performance.
2. Existing System:
Image data compression is one of the major areas of research in image processing. Several algorithms have been designed for the compression of images. Two methods, which together constitute the existing system, are explained here:
i) Block Truncation Coding
ii) Run Length Coding
i) Block Truncation Coding
In the block truncation coding method, the image to be compressed is divided into many disjoint blocks. For each block the mean and standard deviation are computed, and each pixel is coded as either zero or one depending on whether its value lies below or above the block mean. The recovered image is compared with the original image and the error rate per pixel is calculated.
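To make the block truncation step concrete, the following C sketch (ours, not the authors'; the 4 x 4 block size is an assumption, since the paper does not fix one) computes the block statistics and the one-bit-per-pixel map described above.

#include <math.h>
#include <stdint.h>

#define BTC_N 4  /* block side length; assumed, the paper does not fix it */

/* Encode one BTC_N x BTC_N block: emit its mean, its standard deviation,
   and a 1-bit-per-pixel map (1 = pixel >= mean, 0 = pixel < mean). */
void btc_encode_block(const uint8_t px[BTC_N][BTC_N],
                      double *mean, double *stddev,
                      uint8_t bitmap[BTC_N][BTC_N])
{
    double sum = 0.0, sq = 0.0;
    int n = BTC_N * BTC_N;
    for (int i = 0; i < BTC_N; i++)
        for (int j = 0; j < BTC_N; j++) {
            sum += px[i][j];
            sq  += (double)px[i][j] * px[i][j];
        }
    *mean   = sum / n;
    *stddev = sqrt(sq / n - (*mean) * (*mean));
    for (int i = 0; i < BTC_N; i++)
        for (int j = 0; j < BTC_N; j++)
            bitmap[i][j] = (px[i][j] >= *mean) ? 1 : 0;
}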
ii) Run Length Coding:
Run-length coding (RLC) is the simplest and most popular technique for coding the runs of black and white pixels. It exploits the correlation along a single scan line. For storage, coding is done from top to bottom in order to achieve higher compression. The runs can be coded by fixed-length code words. Since coding is done from top to bottom, the maximum run length will be equal to the file size in case the whole document is white or black. The runs are therefore divided into two categories: (i) runs having length less than 255, and (ii) runs having length equal to or greater than 255.
Runs of length 0 to 254 are coded as a single byte; runs greater than 254 are coded as three bytes. The first byte, FF, indicates that the coded run is greater than or equal to 255, the second byte gives the factor by which the run being coded exceeds 255, and the third represents the remainder. Three bytes used in this way can represent a maximum length of 65535, so the coding scheme is not restricted to A4-size documents but can be used for other standard document types. Since black and white runs are coded by the same codes, there is no way to tell the tone of the pixels from the codes alone; therefore one bit of information is provided to synchronize the color of the runs. The synchronizing bit is provided only at the beginning, and every alternate run will be of the same color.
The decoded image is obtained by following the reverse of the coding process. Since no information is lost in the coding process, the reconstructed image is an exact replica of the original image.
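A minimal C sketch of the escape-byte run encoding just described (the function name and the choice of 255 as the divisor are our reading of the text, not the authors' code):

#include <stdint.h>
#include <stddef.h>

/* Emit one run length into `out`: lengths 0..254 take one byte; longer
   runs take three bytes (0xFF marker, factor, remainder).  Dividing by
   255 is our reading of "the factor by which the run exceeds 255". */
size_t rlc_emit_run(unsigned run, uint8_t *out)
{
    if (run < 255) {
        out[0] = (uint8_t)run;
        return 1;
    }
    out[0] = 0xFF;                  /* marker: run >= 255 */
    out[1] = (uint8_t)(run / 255);  /* factor (quotient)  */
    out[2] = (uint8_t)(run % 255);  /* remainder          */
    return 3;
}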
3. Proposed System:
In an image, a very high degree of 2-D correlation is present, and an efficient coding technique must exploit this correlation to give much higher compression. A number of such techniques exist. They provide maximum compression only if the correlation between successive lines is perfect, i.e., successive lines are similar. Similar lines will have similar codes, so considerable storage can be saved by coding only one of them and skipping the other similar lines. The proposed technique is based on this assumption. The algorithm is termed the Line Skipping Coding method.
3.1 Line Skipping Coding Method:
In this coding scheme, it is assumed that the vertical correlation between successive scan lines is perfect, so that they will have the same code. First, it is assumed that two successive lines are similar; coding is then performed on only one of them and the next line is skipped. This is termed One Line Skipping Coding (1LSC). Here only half of the total scan lines need be coded; the runs of black and white pixels are coded using the run-length coding technique described earlier.
Next, it is assumed that the vertical correlation is such that three successive lines are similar; coding is performed on only one of them and the next two consecutive lines are skipped. This is termed Two Line Skipping Coding (2LSC). Here only one third of the total scan lines need be run-length coded.
Finally, coding is performed over one line while skipping three consecutive lines; this is termed Three Line Skipping Coding (3LSC). Here only one fourth of the total scan lines are run-length coded. It is clear that the compression achieved by the proposed technique will be much higher than that of run-length coding (RLC), but it introduces distortion in the reproduced document if the line-to-line correlation is poor. However, the distortion will not be visible for documents having high line-to-line correlation. This process is shown in Figure 1.

Figure 1: Process of the Line Skipping Coding Method
3.2 Algorithm for the proposed method:
1. Input the image (gray or color image).
2. Apply the m-connectivity relationship in the image.
3. Convert the image into a binary image.
4. Convert the binary image into an approximated image using the DCT transformation.
5. Get the color of the first pixel.
6. Compute the run lengths.
7. Compute One Line Skipping Coding (1LSC), Two Line Skipping Coding (2LSC) and Three Line Skipping Coding (3LSC).
8. Open and display the compressed file for reconstruction.
9. Produce the reconstructed image matrix.
10. Display the reconstructed image.
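Steps 6 and 7 can be sketched in C as follows (ours, not the paper's code; encode_line and decode_line stand for the run-length coder of Section 2 and are only declared here). One line in every k+1 is coded, and the decoder replicates it into the skipped positions:

#include <stdint.h>
#include <string.h>

void encode_line(const uint8_t *line, int cols);  /* run-length coder, assumed   */
void decode_line(uint8_t *line, int cols);        /* run-length decoder, assumed */

/* k = 1, 2 or 3 gives 1LSC, 2LSC or 3LSC: only every (k+1)-th scan line
   is run-length coded; the rest are skipped. */
void lsc_encode(const uint8_t *img, int rows, int cols, int k)
{
    for (int r = 0; r < rows; r += k + 1)
        encode_line(&img[r * cols], cols);
}

/* The decoder copies each coded line into the k skipped lines below it. */
void lsc_decode(uint8_t *img, int rows, int cols, int k)
{
    for (int r = 0; r < rows; r += k + 1) {
        decode_line(&img[r * cols], cols);
        for (int s = 1; s <= k && r + s < rows; s++)
            memcpy(&img[(r + s) * cols], &img[r * cols], (size_t)cols);
    }
}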
3.3 DCT Baseline Mode Coding:
The compression algorithm used in the baseline sequential mode first partitions the original image into non-overlapping blocks of 8 X 8 pixels, as shown in Figure 2.

Figure 2: Block diagram of JPEG Encoder

Each block of 8 X 8 pixels is then transformed using the DCT into an array of 8 X 8 coefficients, as illustrated by equation (1) given below. The first coefficient (0,0) of every block is called the DC coefficient, while the rest of the coefficients are called AC coefficients. Most of the visually significant information is concentrated in the DC and low-frequency AC coefficients, as shown in Figure 3.

Figure 3: Block diagram for AC & DC coefficient


In other words, the higher-frequency coefficients contain relatively less crucial detail. All the coefficients are quantized using a uniform mid-step quantizer and rounded to the nearest integer, as expressed by

$$C(u,v) = \left\lfloor \frac{F(u,v) + Q(u,v)/2}{Q(u,v)} \right\rfloor$$

where Q(u,v) is the quantization step size for coefficient (u,v) and C(u,v) is the rounded value of the quantized coefficient.

Each application can have its own quantization tables, which are usually designed to provide the best possible reconstructed image quality. In order to obtain good subjective image quality, the DC and the low-frequency AC coefficients are quantized using small step sizes. Hence there is a trade-off between the quantization step size (i.e., image quality) and the compression achieved: the smaller the step size, the better the image quality and the smaller the compression ratio. The correlation between the DC coefficients of adjacent blocks is exploited using DPCM to achieve further compression.
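As a small illustration of the rounding rule above (a sketch under our naming, not the paper's code):

#include <math.h>

/* Uniform mid-step quantisation of one DCT coefficient, matching the
   equation above; Q is the table entry for position (u,v). */
long quantize(double F, double Q)   { return (long)floor((F + Q / 2.0) / Q); }
double dequantize(long C, double Q) { return (double)C * Q; }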
The AC coefficients of each block are reordered into a 1-D sequence using the zigzag scan shown in Figure 4.


Figure 4: 1-D sequence using Zigzag scan
The scheme generates long runs of zero-valued coefficients (corresponding to the high-frequency AC coefficients) in most images. The zigzag-ordered coefficients are then run-length and Huffman coded, i.e., a code is stored or transmitted for each DC coefficient and each non-zero AC coefficient, indicating its magnitude and position in the zigzag order.

Finally, the image blocks are raster scanned to generate the image bit stream.

The image is reconstructed by performing the decompression operations in the reverse order. Each block of 8 X 8 pixels is transformed back to the spatial domain using the inverse discrete cosine transform (IDCT) corresponding to equation (1).
$$F(u,v) = \frac{1}{4}\, C(u)\, C(v) \sum_{i=0}^{7}\sum_{j=0}^{7} f(i,j)\, \cos\!\left[\frac{(2i+1)u\pi}{16}\right] \cos\!\left[\frac{(2j+1)v\pi}{16}\right] \qquad (1)$$

where f(i,j) is the (i,j)th pixel in the reconstructed image block.
The baseline sequential algorithm is used to reconstruct the image in its original size at a specific image
quality (SNR resolution).
4. Performance Evaluation

We test the Line Skipping Coding method and compare the new system's results with the run-length coding method. The quality of the image after decompression is measured quantitatively using the signal-to-noise ratio.

The proposed method gives higher compression than run-length coding and Discrete Cosine Transform coding. It offers a maximum compression of 98% to 99% at the cost of relative degradation in the output.

Table 1 below gives the performance of all the compression algorithms in terms of compression ratio and signal-to-noise ratio.

Table 1: Comparison of the Line Skipping Coding Method with other algorithms

Figure 5: Output of the saturn.raw file

Figure 6: Output of the moon.raw file


4.1 Result:
Results are presented for 120 X 120 images. The images were taken in gray-level form and converted to two-tone images where they were not in two-tone format originally. The algorithms for run-length coding (RLC), One Line Skipping Coding with RLC, Two Line Skipping Coding with RLC, Three Line Skipping Coding with RLC, and Discrete Cosine Transform based compression have been coded in C, and the results obtained from implementing them are shown.
5. Conclusion:

In this work a novel method of image compression has been presented. The method exploits the usefulness of the run-length coding technique, and variations of run-length coding are presented by introducing the concept of the Line Skipping Coding method. A technique based on the DCT has also been implemented and presented. The proposed work has been shown to be superior to the standardized Discrete Cosine based method and to simple run-length coding. Further work would involve compressing images with bit planes after skipping lines, as well as color and multispectral image coding.
Hierarchical Structure of Geospatial Field Data Using Enhanced RTree

Kumutha Priya M, Brinda B M, Udhaya Chandrika A
Student, M.Tech, Department of Information Technology, Vivekananda College of Engineering for Women, Namakkal, India. Email: priyakums3@gmail.com
Assistant Professor, Department of Information Technology, Vivekananda College of Engineering for Women, Namakkal, India. Email: bmbrinda@gmail.com
Assistant Professor, Department of Information Technology, Vivekananda College of Engineering for Women, Namakkal, India. Email: udhayaa11@gmail.com
Abstract: Geoscience data include remote sensing images, climate model simulations and so on. Such spatial-temporal data have become increasingly multidimensional and enormous, and are constantly being updated. As a result, the integrated maintenance of these data is becoming a challenge. A blocked, effective hierarchically structured version within the split-and-merge hypothesis is presented for the compressed storage, continuous updating and querying of multidimensional geospatial field data. The original multidimensional geospatial field data are split into small blocks according to their spatial-temporal references. The blocks are then characterized and compressed as hierarchical structures for updating and querying, and are combined into a single hierarchical tree. Through the use of a buffered binary tree data structure and correspondingly optimized operation algorithms, the original data can be continuously compressed, appended, and queried. In comparison with conventional systems, the new approach is shown to retain the features of the original data with much lower storage expense and faster computational performance. The outcome implies an efficient structure for the integrated storage, presentation and computation of multidimensional geospatial field data.

Keywords: Geospatial data, Spatial-Temporal Reference, HTR, RTree.

I. INTRODUCTION
Data observation and model simulation are developing rapidly in the geosciences, and the data from these systems have high dimensionality and huge volumes. Bulk observations of existing attributes/variables are successively produced by large-scale observation systems. These data are compressed for storage, and newly arrived data must be continuously compressed and appended to the present data, so that they are integrated with the existing data as a whole. This update procedure should be completed in a short time and be continually applicable to the next piece of fresh data. The compression and storage must preserve the reliability of the spatial-temporal reference (STR) of the data, balancing data accuracy and compression performance while improving index and query analysis. The explosion of both data volume and dimensionality makes storage, management, querying and processing daunting for existing solutions. Conventional methods make use of data indexes to speed up querying and storage; as the dimensionality grows, the data segmentation and the accompanying data structures become complex and inefficient. Big-data or data-intensive computing solutions use parallel data I/O and computation to speed up data access and updating. On the other hand, large computers and complex computation architectures are required to provide the necessary I/O bandwidth and computation power. This situation becomes worse when continuous data compression, appending and updating are necessary. Within the current data versioning and analysis framework, neither the conventional methods nor the big-data or data-intensive computing solutions are suited to dynamic data appending and updating. Hence, finding alternative data structures that fit the essential storage architecture may be difficult. Current solutions for continuous data processing need different data structures in the management, query and analysis stages, and data must undergo numerous difficult processing steps before reaching the final stage. The regular data transfer between different data structures slows down the processing throughput.
Tensors are a vital tool for multidimensional data processing and analysis; they arise in data-intensive applications. Computationally-oriented researchers classify multidimensional arrays as tensor-structured datasets. Tensor decomposition, tensor-based PDE solving and signal mining can then be applied for pattern mining, high-dimensional data analysis and prediction. Yet many of these tools are built for particular computations and offer limited functionality, and in many cases the curse of dimensionality and null-space problems are present.
These problems call for a new data structure and algorithms that support data organization, compressed storage, data appending and querying. The successive appending and arbitrary access of multidimensional geospatial field data require the data to be stored in every dimensional configuration in parallel, as blocks, rather than stored as a whole. In this manner, immediate and secure data organization, efficient and compressed data storage, and uninterrupted data appending can be applied to every block separately. A hierarchical tensor decomposition based on the split-and-merge concept is developed for continuously compressing and appending multidimensional geospatial field data. Our intention is to propose a hierarchical data structure to reformulate and store huge volumes of geospatial field data, and to develop techniques for data storage, querying and computational support by means of this data structure.
3. RELATED WORK
Hierarchical tensor representation (HTR) for spatial data is a recent trend in geospatial data management. Many approaches are already in use for the compression, updating and querying of multidimensional spatial field data, each using its own representation of spatial data. Our algorithm falls into the HTR category.

Fig. 1. System architecture

4. IMPLEMENTATION OF MODULES
4.1 Geospatial Data (Spatial Index RTree)
A familiar way to look for objects based on their spatial position is location-based search, for instance finding all restaurants within 5 km of the current location, or all colleges within the zip code 651101. A spatial object can be represented by an object id, a minimal bounding rectangle (MBR), and other attributes, so the space can be represented by a collection of spatial objects. A query can then be represented as an additional rectangle; it asks for the spatial objects whose MBRs overlap the query rectangle.
The RTree is a spatial indexing method that, given a query rectangle, quickly locates the matching spatial objects. The concept is related to the BTree: spatial objects that are close to each other are grouped, forming a tree whose intermediary nodes contain nearby objects. Since the MBR of a parent node contains all the MBRs of its children, objects are close together if their parent's MBR is minimized.

Fig. 2. Location search.
4.2 Geospatial Search
Begin from the root and check each child MBR to see whether it overlaps the query MBR. Skip the entire subtree if there is no overlap; otherwise, recurse into each overlapping child. Unlike other tree algorithms, which traverse down a single path, this search needs to traverse down multiple paths when overlaps occur. Hence the tree should be structured to minimize overlap as far as possible, which means minimizing the sum of the MBR areas along each path (from the root to the leaf).
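A minimal C sketch of this pruned descent, under assumed types and a fan-out of 8 (both ours, not the paper's):

#include <stdbool.h>

/* Minimal MBR and node types, for illustration only. */
typedef struct { double xmin, ymin, xmax, ymax; } MBR;

typedef struct RNode {
    bool leaf;
    int count;
    MBR box[8];              /* children's MBRs (fan-out of 8 assumed) */
    struct RNode *child[8];  /* subtrees; unused in leaves             */
    long oid[8];             /* object ids stored in leaves            */
} RNode;

static bool overlaps(const MBR *a, const MBR *b)
{
    return a->xmin <= b->xmax && b->xmin <= a->xmax &&
           a->ymin <= b->ymax && b->ymin <= a->ymax;
}

/* Descend every child whose MBR overlaps the query rectangle; report
   matching object ids through the callback. */
void rtree_search(const RNode *n, const MBR *q, void (*hit)(long))
{
    for (int i = 0; i < n->count; i++) {
        if (!overlaps(&n->box[i], q)) continue;  /* prune this subtree */
        if (n->leaf) hit(n->oid[i]);
        else rtree_search(n->child[i], q, hit);
    }
}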
4.3 Geospatial Insert

To insert a new spatial object, start from the root node and pick the child node whose MBR will be enlarged least if the new spatial object is added; stride along this path until reaching a leaf node. If the leaf node has space, insert the object into the leaf node and update the MBR of the leaf node as well as those of all its parents. If not, divide the leaf node into two: construct a new leaf node, copy part of the content of the original leaf node into it, and then add the newly created leaf node to the parent of the original leaf node. If the parent has no space left, the parent is split as well. If the splitting goes all the way to the root, the original root is split and a new root is created.
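Reusing the MBR and RNode types from the search sketch above, the least-enlargement choice can be sketched as follows (again an illustration, not the paper's implementation):

static double area(const MBR *m)
{
    return (m->xmax - m->xmin) * (m->ymax - m->ymin);
}

/* Area of the smallest rectangle covering both m and r. */
static double enlarged_area(const MBR *m, const MBR *r)
{
    MBR u = { m->xmin < r->xmin ? m->xmin : r->xmin,
              m->ymin < r->ymin ? m->ymin : r->ymin,
              m->xmax > r->xmax ? m->xmax : r->xmax,
              m->ymax > r->ymax ? m->ymax : r->ymax };
    return area(&u);
}

/* Pick the child whose MBR grows least if rectangle r is added. */
int choose_subtree(const RNode *n, const MBR *r)
{
    int best = 0;
    double best_grow = enlarged_area(&n->box[0], r) - area(&n->box[0]);
    for (int i = 1; i < n->count; i++) {
        double grow = enlarged_area(&n->box[i], r) - area(&n->box[i]);
        if (grow < best_grow) { best_grow = grow; best = i; }
    }
    return best;
}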
4.4 Geospatial Delete
To delete a spatial object, first look for the leaf node containing the data. Remove the entry from the leaf node's content and update its MBR, along with the parent MBRs, all the way to the root. If the leaf node is left with fewer than m entries, the node is condensed by deleting the leaf node: the leaf node is removed from its parent and the parents are updated. A parent is in turn removed from its own parent if it is left with fewer than m entries. At this point the whole subtree marked for deletion is separated from the RTree; since not all of its nodes are invalid, the children that are still valid (but removed from the tree) are reinserted, adding all valid entries back into the tree. Finally, if the root node is found to contain only one child, the original root is discarded and its child becomes the new root.
4.5 Geospatial Update
An update occurs when an existing spatial object changes from its original dimensions. The economical way is to modify the spatial node's MBR without altering the RTree. A better but more expensive way is to delete the node, transform its MBR, and then insert it back into the RTree.
5. CONCLUSION
Data examination and model replication are developing quickly in the geosciences, and the resulting data have huge volumes and high dimensionality. A fresh computational tool and data-intensive scalable design that can sustain integrated storage, querying and complex scrutiny of such immense multidimensional datasets will be crucial. Tensors are a natural means of representing multidimensional field data, but they are mathematical representations that are complicated to analyze. In this paper, a tree representation that can support updating, compression, querying and analysis over substantial multidimensional geospatial field data was proposed. Through the split-and-merge concept, the RTree achieves a balance among data precision, memory occupation and running time. Continuous data appending and compression allows the data dimension to be controlled at meticulous levels without losing the exactness of the data representation. Since the computational efficiency is very high and the memory cost is low even for large data, our process has the potential to process huge amounts of data on a single PC. The structure provides a successful composition for integrated storage: a blocked data-separation mechanism splits the huge tensors into small blocks and compresses them for storage, while retrieval of the stored data is done by the RTree, which improves the performance of retrieval and updating.
6. FUTURE WORK
Ongoing work includes: 1) resolving the strategy for finding the best block splitting and rank selection in accordance with the data distribution, and 2) improving the search and update methods by using the enhanced RTree. An individual researcher cannot easily keep up with the literature in this domain; to our knowledge this is the first method using HTR, and this setup promises to be an efficient one. The tree could be extended in several directions, so that its efficiency can be improved in future.
Effective and Energy Efficient Neighbor Discovery Protocols in MANET

R.Fuji1, G.Arul Kumaran2 and S.Fowjiya3
1PG Scholar, 2Assistant Professor, 3Assistant Professor
1, 2, 3Dept. of Information Technology, Vivekananda College of Engineering for Women, Tiruchengode-637205

Abstract: Neighbor discovery is an essential step in wireless ad hoc networks. In this paper, we design and examine several routing protocols for neighbor discovery in MANETs. Performance is investigated in both the symmetric and the asymmetric case. Dynamic source routing is used to detect the shortest path among all neighbor nodes. To determine the energy efficiency of a node, a new protocol commonly known as the energy-efficient Zigbee routing protocol is used. We propose a sleep scheduling algorithm to determine the sleep and active states of the nodes. By using these protocols, the effectiveness and energy efficiency of the nodes are established for the asynchronous case in MANETs. The performance of the network is increased up to 90% in both the symmetric and asymmetric cases.

Index terms: asynchronization, energy efficiency, MANET, performance, routing protocols

I. INTRODUCTION
A mobile ad-hoc network is a collection of autonomous mobile nodes that can communicate with each other through radio waves. It has many free or autonomous nodes, often composed of mobile devices or other mobile pieces, that can organize themselves in various ways and operate without strict top-down network administration. A mobile ad-hoc network (MANET) is a network of mobile routers coupled by wireless links, the union of which forms an arbitrary topology. The routers are free to move arbitrarily and organize themselves in an unsystematic manner, so the network's wireless topology may change rapidly and unpredictably. In a MANET, the performance of the network is based on node characteristics such as effectiveness, energy efficiency and transmission speed; the performance of the network is high if the nodes in the network satisfy these characteristics.
MANET characteristics: A MANET has an autonomous behavior in which each node present in the network acts as both host and router. During data transmission, if the destination node is out of range, multi-hop routing is used. Operation in a MANET is distributed, nodes can join or leave the network at any time, and the topology is dynamic.
Routing protocols: Generally, a routing protocol is defined as a set of rules which regulates the transmission of packets from source to destination. These characteristics are maintained by different routing protocols. In MANETs, different types of protocols are used to find the shortest path, the status of a node, and the energy condition of a node.
II. NEIGHBOR DISCOVERY PROTOCOL
Although central servers can be employed, the potential of proximity-based applications is better exploited by providing the capability of discovering nearby mobile devices in the wireless communication locality. There are several reasons: users can enjoy the ease of local neighbor discovery at any time, even when the central service is unavailable for unexpected reasons; a single neighbor discovery protocol can benefit various applications by providing more flexibility than the centralized approach; and while communications between a central server and different mobile nodes may induce problems such as unnecessary transmission cost, congestion, and unpredictable response delay, searching for nearby mobile devices locally is entirely free of charge. A distributed neighbor discovery protocol for mobile wireless networks is therefore urgently needed in practice. Usually, there are three challenges in designing such a neighbor discovery protocol.
Neighbor discovery is nontrivial for several reasons. It needs to deal with collisions: ideally, a neighbor discovery algorithm should minimize the probability of collisions and, therefore, the time to discover neighbors. In many realistic settings, nodes have no knowledge of the number of neighbors, which makes coping with collisions even harder. When nodes do not have access to a global clock, they have to operate asynchronously and still be able to discover their neighbors efficiently. In asynchronous systems, nodes can potentially initiate neighbor discovery at different times and, therefore, may miss each other's transmissions. Furthermore, when the number of neighbors is unknown, nodes do not know when or how to terminate the neighbor discovery process. To evaluate the performance of our designs in one-to-one and group scenarios, we not only conduct comprehensive simulations but also sample them on a testbed. The evaluation results show that Diff-Codes drastically decrease the discovery latency in both the median case and the worst case.
III. RELATED WORK
In [1], in the setting of a single-hop wireless network of n nodes, an ALOHA-like neighbor discovery algorithm is used to handle collisions. In [2], the WiFi interface is identified as a primary energy consumer in mobile devices, and idle listening (IL) as the leading source of energy consumption in WiFi; most existing protocols, such as the 802.11 power-saving mode (PSM), try to reduce the time spent in IL by sleep scheduling. In [3], motivated by the increasing prevalence of multi-packet reception (MPR) technologies, neighbor discovery is studied in MPR networks that permit multiple packets to be received successfully at a receiver. In [4], efficiency is addressed, as the approach permits a near order-of-magnitude reduction of the coordination time, together with robustness, because the codes are short and easily detectable even at low SINR and even while a neighbor is transmitting data.
IV. PROPOSED SCHEME
New classes of neighbor discovery protocols are used to improve performance in the symmetric and asymmetric cases. The energy consumption of the nodes is reduced, to increase the lifetime of the network, by using the energy-efficient Zigbee routing protocol. Using the sleep scheduling algorithm, all the neighbor nodes in the network are kept in a sleep state, except the transmitting nodes, to save energy.
V. PROTOCOL OVERVIEW
Dynamic source routing protocol: DSR uses the source routing concept; the data packets carry the source route in the packet header. DSR uses a route discovery procedure to send data packets from a sender to a receiver node: when the sender does not already know a route, it uses route discovery to actively establish one. DSR works by flooding the network with route request packets. Route request packets are received by all neighbor nodes, which continue the flooding process by retransmitting the route request packets until the destination is reached. That node replies to the route request with a route reply packet that is routed back to the original source node. Source routing thus uses route request and route reply packets: the route request builds up the path traversed across the network, and the source caches the backward route from the reply packets for future use. If any link on a source route is broken, a route error packet is sent to notify the source node. In general, the operation of a dynamic routing protocol can be described as follows: the router transmits and receives routing messages on its interfaces; it shares routing messages and information with other routers that are using the same routing protocol; routers exchange routing information to gain knowledge about remote networks; and when a router detects a topology alteration, the routing protocol can advertise this alteration to other routers.
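As an illustration of the route record carried in DSR route requests (field names and sizes are ours, not part of the protocol text above):

#include <stdint.h>

#define MAX_HOPS 16  /* route record capacity; our choice for the sketch */

/* Route request accumulating the traversed path, as described above. */
typedef struct {
    uint16_t src, dst, req_id;
    uint8_t  hop_count;
    uint16_t route[MAX_HOPS];  /* node ids collected hop by hop */
} RouteRequest;

/* A forwarding node appends itself before rebroadcasting; returns 0 when
   the record is full and the request should be dropped. */
int rreq_append(RouteRequest *rq, uint16_t self)
{
    if (rq->hop_count >= MAX_HOPS) return 0;
    rq->route[rq->hop_count++] = self;
    return 1;
}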
Energy-efficient Zigbee routing protocol: Zigbee routing protocols are concerned not only with reducing the total energy consumption of the route but also with taking full advantage of the lifetime of each node so as to extend the lifetime of the network. The major principle of an energy-efficient algorithm is to keep the network functioning for as long as possible. In MANETs, energy is consumed in three node states: transmitting, receiving and sleeping. Nodes consume far more energy while transmitting than in the sleep state; sleep state means the nodes are inactive, neither transmitting nor receiving any signals. More energy can be saved by keeping more nodes in the sleep state, as sketched below.
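A minimal energy-accounting sketch of the three radio states (the per-state power figures are assumed for illustration only, not measurements from this paper):

/* Radio states named above; assumed power draws in mW (TX, RX, SLEEP). */
typedef enum { STATE_TX, STATE_RX, STATE_SLEEP } RadioState;

static const double power_mw[] = { 1350.0, 900.0, 15.0 };

/* Energy in millijoules consumed by staying `secs` seconds in state `s`;
   keeping idle nodes in STATE_SLEEP is what preserves the energy budget. */
double energy_mj(RadioState s, double secs)
{
    return power_mw[s] * secs;
}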
VI. ALGORITHM OVERVIEW
Sleep scheduling algorithm: Every node has a unique id, and the links between neighboring nodes are symmetric and bidirectional. It is also assumed that the clocks of the sensor nodes in the WSN are coordinated, so that nodes can wake up almost at the same time.
The objectives of the sleep scheduling are as follows: nearly all nodes must be in sleep mode most of the time, so that the energy consumed by each node is reduced; the energy consumption of all the nodes remains balanced; the load carried by each node should be equal, so that no node is overused; and the time required to transmit data from the sender node to the receiver node should be as short as possible. A minimal schedule satisfying the first objective is sketched below.
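A minimal duty-cycle sketch consistent with these objectives (the slot, period and active-count parameters are ours):

/* Synchronised duty cycle: every node is awake for `active` slots out of
   each `period` slots.  Because clocks are coordinated, neighbours wake
   in the same slots and can still reach one another. */
int is_awake(unsigned slot, unsigned period, unsigned active)
{
    return (slot % period) < active;
}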
VII. PERFORMANCE EVALUATION

Fig 1: Energy consumption in symmetric and asymmetric case

Fig 3: Performance level in symmetric and asymmetric case

Compared with the existing system, the energy consumption is reduced from 60% to 30%; similarly, the performance level is increased up to 90%.
VIII. CONCLUSION
In MANETs, energy consumption and performance are the main challenges. Here, efficient routing protocols are used to improve the performance. Energy consumption is decreased, so the lifetime of the network is improved, and the performance level is increased up to 90%. The performance gain is achieved in both the symmetric and asymmetric cases.
IX. FUTURE WORK
We have identified the deviation-path problem and the traffic-concentration problem of ZTR. These are basic problems of general tree routing protocols, which degrade the overall network performance. To overcome these problems, we suggest STR, which uses the neighbor table originally defined in the ZigBee standard. In STR, each node can locate the best next-hop node based on the remaining tree hops to the destination. The analyses show that the one-hop neighbor information in STR reduces the traffic load concentrated on the tree links as well as providing an efficient routing path.
REFERENCES
[1] S. Vasudevan, M. Adler, D. Goeckel, and D. Towsley, "Efficient algorithms for neighbor discovery in wireless networks," IEEE/ACM Trans. Netw., vol. 21, no. 1, pp. 69-83, Feb. 2013.
[2] X. Zhang and K. G. Shin, "E-MiLi: Energy-minimizing idle listening in wireless networks," IEEE Trans. Mobile Comput., vol. 11, no. 9, pp. 1441-1454, Sep. 2012.
[3] W. Zeng et al., "Neighbor discovery in wireless networks with multi-packet reception," in Proc. MobiHoc, 2011, Art. no. 3.
[4] E. Magistretti, O. Gurewitz, and E. W. Knightly, "802.11ec: Collision avoidance without control messages," in Proc. MobiCom, 2012, pp. 65-76.
[5] S. Vasudevan, D. F. Towsley, D. Goeckel, and R. Khalili, "Neighbor discovery in wireless networks and the coupon collector's problem," in Proc. MobiCom, 2009, pp. 181-192.
[6] R. Khalili, D. Goeckel, D. F. Towsley, and A. Swami, "Neighbor discovery with reception status feedback to transmitters," in Proc. IEEE INFOCOM, 2010, pp. 1-9.
[7] M. J. McGlynn and S. A. Borbash, "Birthday protocols for low energy deployment and flexible neighbor discovery in ad hoc wireless networks," in Proc. MobiHoc, 2001, pp. 137-145.
[8] S. Vasudevan, J. F. Kurose, and D. F. Towsley, "On neighbor discovery in wireless networks with directional antennas," in Proc. IEEE INFOCOM, 2005, vol. 4, pp. 2502-2512.
[9] N. Karowski, A. C. Viana, and A. Wolisz, "Optimized asynchronous multi-channel neighbor discovery," in Proc. IEEE INFOCOM, 2011, pp. 536-540.
[10] S. Bitan and T. Etzion, "Constructions for optimal constant weight cyclically permutable codes and difference families," IEEE Trans. Inf. Theory, vol. 41, no. 1, pp. 77-87, Jan. 1995.
Improvement of Power Quality Using PQ Theory Based Series Hybrid Active Power Filter

Dr.K.Sundararaju, M.E., Ph.D., Preetha Sukumar
Professor, Department of EEE, M.Kumarasamy College of Engineering, Karur, Tamilnadu. sunkrr@gmail.com
Student, Department of EEE, M.Kumarasamy College of Engineering, Karur, Tamilnadu. preethu.karur@gmail.com
Abstract: This paper investigates power quality improvement under unbalanced supply conditions. The current drawn is highly non-linear and contains harmonics. A shunt active filter eliminates only the current harmonics, not the voltage harmonics, whereas a series active filter provides harmonic isolation. The SHAPF is suitable for compensating the source voltage and reactive power and for reducing source current and voltage harmonics; it also eliminates series and parallel resonance. The control technique for the SHAPF is based on the generalized instantaneous reactive power theory: the real and reactive powers are converted into voltage components, and the reference voltages are then calculated in order to compensate the source current and load voltage harmonics. Simulations have been carried out on the MATLAB-Simulink platform and the results are presented.

Keywords: Active and passive filter; instantaneous reactive power theory; harmonics; MATLAB.

I. INTRODUCTION
Electrical energy is the most efficient form of energy, and modern society is heavily dependent on the electric supply; life is impossible without it. The quality of the electric power is very important for the efficient functioning of the power system components and the end-user equipment. The term power quality has become most important in the power sector, and both the electric power supply company and the end users are concerned about it. The electric power system is affected by various problems such as transients, noise and voltage sag/swell, which lead to the production of harmonics and affect the quality of power delivered to the end user [1]. Harmonics may exist in voltage or current waveforms as integral multiples of the fundamental frequency, and they do not contribute to active power delivery. The quality of power is affected when there is any deviation in the voltage, current or frequency.
The main effect of these problems is the production of harmonics. The presence of harmonics deteriorates the quality of power and may damage end-user equipment. Harmonics cause heating of underground cables and insulation failure, increase the losses, reduce the lifetime of the equipment, and so on. The most effective solution for improving the power quality is the use of filters to reduce harmonics. Different filter topologies exist in the literature: active, passive and hybrid.
The passive filter is used to compensate the current harmonics, while the voltage harmonics are compensated using the active filter. The active filter can regulate the voltage at the load but cannot reduce the current harmonics in the system [2-3]. The hybrid filter is the combination of an active filter and a passive filter. Among the various combinations, the series APF with a shunt-connected passive filter (SHAPF) is widely used. To overcome the problems of both passive and active power filters, Series Hybrid Active Power Filters (SHAPF) have been extensively used. They provide a cost-effective solution for non-linear load compensation. The performance of the SHAPF depends on a proper reference generation algorithm.
A variety of configurations and control strategies have been proposed to reduce the inverter capacity [4-6], and many approaches have been published. The instantaneous reactive power theory has had a great impact on harmonic isolation and on reference voltage generation. The instantaneous active and reactive powers each have an average component and an oscillating component.
This paper is organized as follows. First, the system configuration is presented in Section II. The generalized definitions of the instantaneous active, reactive and apparent power quantities are presented in Section III-A. The control strategy for the series active filter is presented in Section III-B. The simulation results
are given in Section IV, where the simulation study covers the compensation of current harmonics, voltage harmonics, reactive power and unbalanced supply voltage.
II. SYSTEM CONFIGURATION
Figure 1 shows the block diagram of the SHAPF. It consists of a shunt passive filter and a series active filter with a series transformer. This arrangement acts as a harmonic isolator and voltage harmonic compensator: the harmonic current is made to sink into the passive filter. The SHAPF eliminates series and parallel resonance, and the setup also reduces the need for precise tuning of the passive filter. The harmonics are eliminated by the passive filter and only the higher-order harmonics are eliminated by the series active filter, so the rating of the active filter needed is lower than that of conventional shunt active filters [7-9].
The series active filter compensates unbalanced voltage and harmonics simultaneously. The arrangement of the series active filter and shunt passive filter reduces the need for precise tuning of the passive filter and eliminates the possibility of series and parallel resonance. The ripple-filter inductor and capacitor are used to suppress the switching ripples generated by the high-frequency switching of the PWM inverter. The purpose of the coupling transformers is not only to isolate the PWM inverters from the source but also to match the voltage and current ratings of the PWM inverters with those of the power system.

Figure 1 Block Diagram of SHAPF

The turns ratio of the transformer should be high in order to reduce the amplitude of the inverter output and the voltage induced across the primary winding. Also, the selection of the transformer turns ratio affects the performance of the ripple filter connected at the output of the PWM inverter. The series active filter in this arrangement is controlled as an active impedance: it acts as a harmonic voltage source which offers zero impedance at the fundamental frequency and high impedance at all the harmonic frequencies of interest.
III. CONTROL SCHEME
A. Instantaneous Reactive Power Theory
The generalized theory of the instantaneous reactive power in three-phase circuits, also known as the instantaneous power theory or p-q theory, was given by Akagi, Kanazawa and Nabae in 1983. The control strategy presented in this section is capable of compensating the source current harmonics and balancing the load voltages. The theory deals with instantaneous power and is classified into two groups: the first is developed from the a-b-c phases onto three orthogonal axes, i.e., the p-q theory based on the a-b-c to α-β-0 transformation, while the second works directly on the a-b-c phases. The main merit of this theory is that it is valid in steady-state as well as transient operation, and it allows the active filter to be controlled in real time. A further advantage of this technique is that the calculation is simple, requiring only algebraic operations.
The p-q theory consists of an algebraic transformation (Clarke transformation) of the three-phase voltages and currents from the a-b-c coordinates to the α-β-0 coordinates, followed by the calculation of the p-q theory instantaneous power components [10-11]. The three-phase generic instantaneous line currents can be transformed onto the α-β-0 axes. On applying the α-β-0 transformation, the zero-sequence component can be separated and eliminated.
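Since the paper's own equation images did not reproduce here, the standard power-invariant Clarke transformation underlying the p-q theory is restated for reference (the same matrix is applied to the currents):

$$\begin{bmatrix} v_0 \\ v_\alpha \\ v_\beta \end{bmatrix} = \sqrt{\frac{2}{3}} \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}$$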
B. Control strategy
The control strategy plays a very important role in the performance of the system. The three instantaneous load currents and the three voltages are sensed and transformed from the a, b, c coordinates to the α, β, 0 coordinates by using the Clarke transformation.

The instantaneous real power and the instantaneous imaginary power each contain an average and an oscillating part. The average parts of the real and reactive power are denoted $\bar{p}$ and $\bar{q}$, and the oscillating parts are denoted $\tilde{p}$ and $\tilde{q}$, so that the real and imaginary powers can be written in terms of their average and oscillating components.
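For reference, the instantaneous real and imaginary powers in the α-β frame and their decomposition can be written as follows (the sign convention for q varies between formulations):

$$p = v_\alpha i_\alpha + v_\beta i_\beta = \bar{p} + \tilde{p}, \qquad q = v_\alpha i_\beta - v_\beta i_\alpha = \bar{q} + \tilde{q}$$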

The zero-sequence voltage is eliminated and only the α and β coordinates are considered. The α and β voltage components corresponding to the oscillating parts of the real and reactive power are calculated in (6).

The reference voltages required to compensate the harmonic voltage are calculated in (7); they are obtained by the inverse Clarke transformation.

The reference voltage is compared with the source voltage, and the result is given to a comparator whose output is used to drive the controller [6]. The inverters are operated according to the output of the comparator, with the SHAPF injecting voltages that follow the reference voltage. It can compensate the source voltage unbalance and the supply current harmonics simultaneously. The special features of this approach are its simplicity in separating the harmonic voltage components and its lower computational complexity compared with existing techniques.
IV. SIMULATION RESULTS
The control algorithm for the SHAPF is developed in the MATLAB/Simulink software environment to check the performance of the control strategy in improving the system behavior. The simulation is carried out under three conditions:
Without Filter
With Passive Filter
With Active and Passive Filter
The proposed control strategy is simulated with a non-linear balanced load and the performance of the system is analyzed. The system data are given in Table 1.

Table 1 System parameter

Figure 2 Load voltage without any filter

Figure 3 Load current without any filter

The load voltage and load current obtained when the system is in open loop without any filter are shown in Figure 2 and Figure 3. The load voltage and load current contain many harmonics, which must be eliminated; the harmonics generated are eliminated with the help of filters.
The FFT analysis is carried out for the system without any filter. The calculated THD value is 34.64%, which is high, as shown in Figure 4.

Figure 4 THD Analysis without any filter


The load voltage and load current obtained when the system is operated with the passive filter are shown in Figure 5 and Figure 6. The load voltage and load current obtained with the passive filter contain fewer harmonics.

Figure 5 Load voltage with passive filter

Figure 6 Load current with passive filter

Figure 7 THD Analysis with passive filter

The FFT analysis is carried out, and the THD value is calculated for the system with the passive filter, as shown in Figure 7. The value is 3.14%, which is low compared with the system without a filter.
The load current and load voltage obtained when the system has both the active and the passive filter are shown in Figure 8 and Figure 9. The harmonic content in the load voltage and load current obtained with the series-connected active filter and shunt passive filter is comparatively lower than in the other cases.

Figure 8 Load voltage with SHAPF

Figure 9 Load current with SHAPF

The FFT analysis is carried out, and the THD value is calculated for the system with the SHAPF, as shown in Figure 10. The value is 0.24%, which is lower than with the passive filter.

Figure 10 THD Analysis with SHAPF

Table 2 gives the THD values for the system (a) without filter, (b) with passive filter, and (c) with active and passive filter; these THD values are obtained with the RL load. From the table, the THD value for the system with both the active and the passive filter is very much lower than for the system with no filter or with the passive filter alone.

Table 2: Comparison of the THD values

The voltage and current harmonics produced in the system are eliminated with the active and passive filters: the active filter is connected in series and the passive filter in parallel to obtain the necessary output.
V. CONCLUSION
The demand for electric power is increasing at an exponential rate, and at the same time the quality of the power delivered has become the most prominent issue in the power sector. Thus, reducing harmonics and improving the power factor of the system is of utmost importance. In this project a solution for improving electric power quality by the use of an active power filter is discussed. Most of the loads connected to the system are non-linear, and these are the major source of harmonics in the system. A hybrid power filter with a series-connected APF and a shunt-connected passive filter is used. The simulation is also carried out with an unbalanced load, and it is found that the APF improves the system behavior by reducing the harmonics. Therefore, it is concluded that the hybrid filter consisting of a series APF and a shunt passive filter is a feasible and economic solution for improving the power quality in an electric power system.
REFERENCES
[1] W. E. Reid, "Power quality issues - standards and guidelines," IEEE Trans. Ind. Appl., vol. 32, no. 3, pp. 625-632, 1996.
[2] F. Z. Peng and D. J. Adams, "Harmonics sources and filtering approaches," in Proc. Industry Applications Conf., vol. 1, Oct. 1999.
[3] J. C. Das, "Passive filters - potentialities and limitations," IEEE Trans. Ind. Appl., vol. 40, no. 1, Jan. 2004.
[4] P. Salmeron and S. P. Litran, "A control strategy for hybrid power filter to compensate four-wires three-phase systems," IEEE Trans. Power Electron., vol. 25, no. 7, pp. 1923-1931, Jul. 2010.
[5] J. Tian, Q. Chen and B. Xie, "Series hybrid active power filter based on controllable harmonic impedance," IET Power Electron., vol. 5, no. 1, pp. 142-148, 2012.
[6] H. Akagi, Y. Kanazawa, and A. Nabae, Instantaneous Power Theory and Application to Power Conditioning. IEEE Press, Wiley-Interscience, John Wiley & Sons, Inc.
[7] R. S. Herrera and P. Salmeron, "Instantaneous reactive power theory: A comparative evaluation of different formulations," IEEE Trans. Power Del., vol. 22, no. 1, pp. 595-604, Jan. 2007.
[8] M. F. Shousha, S. A. Zaid and O. A. Mahgoub, "Better performance for shunt active power filters," in Proc. Clean Electrical Power (ICCEP), IEEE, June 2011.
[9] H. Fujita and H. Akagi, "A practical approach to harmonic compensation in power systems - series connection of passive and active filters," IEEE Trans. Ind. Appl., vol. 27, Nov.-Dec. 1991.
[10] S. Bhattacharya, P.-T. Cheng, and D. M. Divan, "Hybrid solutions for improving passive filter performance in high power applications," IEEE Trans. Ind. Appl., vol. 33, no. 3, pp. 732-747, 1997.
[11] M. A. Mulla, R. Chudamani, and A. Chowdhury, "A novel control scheme for series hybrid active power filter for mitigating source voltage unbalance and current harmonics," in Proc. Seventh Int. Conf. on Industrial and Information Systems (ICIIS-2012), Indian Institute of Technology Madras, Chennai, India, 06-09 August 2012.

Classification and Identification of Fault Location and Direction by Directional Relaying Schemes Using ANN

Shoba. S, Dr. T. Guna Sekar
Applied Electronics, Department of EEE, Kongu Engineering College, Perundurai, Erode. (shobasvds@gmail.com)
Assistant Professor, Department of EEE, Kongu Engineering College, Perundurai, Erode. (gunas.27@gmail.com)
Abstract: This paper provides an approach for classifying and locating the various faults in a transmission line and also for estimating their direction by using an artificial neural network. The major drawback of the conventional method is that it fails to adapt to the dynamic conditions of the power system. This method uses fundamental voltage and current signals measured at one end of the transmission line. The algorithm accurately classifies and locates the various types of faults, such as single line to ground, double line to ground and three-phase faults, and also finds the direction of the fault, whether forward or reverse. By using an ANN the performance has been improved for different system parameters and conditions. The simulation of fault detection, classification and location is carried out using MATLAB/SIMULINK.
Index terms: Artificial Neural Network, Fault classification, location, direction.

I. INTRODUCTION
Transmission lines are a vital part of the electrical system, as they provide the path to transfer power between generation and load. The electrical power system suffers unexpected failures in transmission lines from various causes: natural events, physical accidents, equipment failure and mal-operation all generate faults, and hence the protection of transmission lines is an important element of the power system. Any fault that is not detected and isolated quickly can cascade into a system-wide disturbance, forcing the interconnected system to operate close to its limits. Initially a decision-tree-based method was used for classifying faults in a single transmission line [1], with the voltage and current values obtained from both ends of the line. The Support Vector Machine has been used for classifying and locating faults, but it suffers from various problems, such as increased steady-state current and voltage inversions, behaves non-linearly under fault conditions, and is not accurate [2][3]. Directional relays based on negative- or zero-sequence components or compensated post-fault voltages are most commonly used in [4]; these relays have the drawbacks of an inability to respond to all types of faults and a slow operating time. The major drawback of the conventional method is that it fails to adapt to the dynamic conditions of the power system.
This paper demonstrates how Artificial Neural Networks (ANNs) can be used as an alternative to the conventional approach for identifying, classifying and locating various types of faults, such as single line to ground, double line to ground and three-phase faults. ANNs provide a viable alternative because they can handle most situations arising from dynamic conditions. The training patterns to be absorbed by the ANN were generated using voltage and current samples for different faults at various locations along the transmission line.
The direction of a fault on a transmission line can be determined from the phase angles of the instantaneous voltage and current phasors, but this does not determine the fault location. A directional relaying algorithm based on the phase angles between positive-sequence components of fault voltages and currents was developed for various types of fault in [5], but it does not identify the faulty phase or the distance to the fault point. Several papers use superimposed components, high-frequency signals or the wavelet transform for the detection and classification of various types of faults [6-10].
II. POWER SYSTEM NETWORK
The system considered is composed of a 220 kV transmission line of 80-120 km length connected to a source at one end and a load at the other end. Various types of faults, namely AG, BG, CG, ABG, BCG, ACG and ABC, are considered and their locations are found. The method analyzes the faults, classifies them and locates them using an Artificial Neural Network (ANN).
Therefore, the faults are classified and located accurately. The block diagram of the proposed method is shown in the figure below.

Figure 1 Block diagram

III. ANN BASED NETWORK


The Artificial Neural Network (ANN) has emerged as a relaying tool for the protection of power systems. It has been widely used in protective relaying schemes for transmission line protection, and it can be an effective technique for predicting faults when it is provided with the characteristics of fault currents and the corresponding past decisions as outputs. The ANN provides a viable alternative because it can handle most situations that are not defined sufficiently for a deterministic algorithm to execute; such an approach has excellent noise immunity and robustness, and the decisions made by an ANN are not seriously affected by the source impedance, fault resistance or prevailing system conditions. The basic steps used to implement the neural network for fault detection, classification and direction estimation are described below. The most important aspects of the ANN are:
Network Architecture
Training method
Checking of network behaviour
ANN fault locator.
A. Selection of Inputs and Outputs
One factor in determining the size and structure for the network is the number of inputs and outputs.
If the numbers of inputs are low, the network will be small. But the input data to characterize the problem
must be used. The magnitudes of the fundamental components (50 Hz) of three phase samples of post
fault voltages and currents measured at the relay location three phase have been selected as input to neural
network. Various faults are injected and corresponding voltage and current values are noted. These values
are used for training ANN and it is used to detect the type of fault, identifies the faulted phase and estimates
its direction whether it is forward or reverse fault. The proposed scheme allows the protection engineers
to increase the protection of transmission line length as compared to earlier techniques [1]. Further the
estimation of accurate fault location helps to reduce the fault clearing time. The total numbers of inputs to
the neural network for classifying the fault are 6.
The basic task of the detection, classification and direction estimation network is to determine the fault type along with the faulted phase and the direction. The outputs corresponding to the three phases, ground and the direction of the fault (7 in total) were considered as the outputs provided by the network.
Table 1 Output Values

B. Learning Rule
The back-propagation learning rule is used in many practical applications. It is used to adjust the weights and biases of the network in order to minimize the network error; this is done by continuously changing the values of the network weights and biases with respect to the error. Plain back-propagation is slow because it requires small learning rates for stable learning. Various training techniques were applied to the network architectures, and it was concluded that the most suitable training method is the Levenberg-Marquardt optimization technique.
C. Network Architecture
After deciding the inputs and outputs of the network, the number of layers and the neurons per layer were considered. Various networks with different numbers of neurons in the hidden layer of the ANN-based fault detector, classifier and direction estimator were considered. The neural network structure for fault classification is given in the figure below: the input layer has 6 neurons, there are 2 hidden layers, and the output layer has 7 neurons to classify the fault.

Figure 2. Architecture of neural network

D. Training the Network

To train the network, a suitable number of relevant cases must be selected so that the network can learn the fundamental characteristics and, once training is completed, provide correct outputs in new situations. Each type of fault (AG, BG, CG, ABG, BCG, ACG and ABC) at different fault locations between 80 and 120 km of line length, with fault resistances of 0 and 100 ohms and fault inception angles of 0 and 90 degrees, has been simulated as shown below in Table 2.

The data sets were sampled at equally spaced points throughout the original data. Both networks were trained using the Levenberg-Marquardt algorithm in the neural network toolbox of MATLAB [12]. This learning strategy converges to the desired output.
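The paper trains the networks with the Levenberg-Marquardt algorithm in the MATLAB neural network toolbox. As a rough illustration of the same 6-input, 7-output multilabel set-up, the sketch below uses scikit-learn's MLPClassifier; since scikit-learn offers no Levenberg-Marquardt solver, the batch L-BFGS solver stands in, and the training data shown are placeholders.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder training data: each row holds the fundamental-frequency
# magnitudes |Va|, |Vb|, |Vc|, |Ia|, |Ib|, |Ic| measured at the relay.
X_train = np.random.rand(500, 6)
# Seven binary targets per sample: phases A, B, C, ground and direction.
y_train = np.random.randint(0, 2, (500, 7))

# L-BFGS stands in for the Levenberg-Marquardt training used in MATLAB.
net = MLPClassifier(hidden_layer_sizes=(10, 10),   # two hidden layers
                    activation="tanh",
                    solver="lbfgs",
                    max_iter=2000)
net.fit(X_train, y_train)

# Classify one new post-fault measurement vector.
print(net.predict(np.random.rand(1, 6)))   # e.g. [[0 1 1 1 0 1 0]]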
IV. NETWORK SIMULATION
A fault is defined as a short circuit, an open circuit or an external disturbance that occurs in the power system. The fault must be classified correctly in order to obtain an accurate result.
The circuit for classification is shown in Figure 3. In this circuit, the fault is selected and the location is given at compile time. The neural network should classify and locate the fault accurately based on its training.


Figure 3 Circuit for classification and location

In Figure 3, the fault is selected and the location is entered manually at compile time in order to classify and locate it. The fault selected in Figure 3 is BCG and the location is given as 112 km. The neural network classifies and locates the fault based on its training. The output is shown in Figure 4, and the fault is classified accurately: the classified fault is BCG and the located distance is 112.4624 km. Therefore the error obtained is 0.4624 km.

Figure 4 Classified and located circuit

The performance plot is shown in Figure 5, where the best validation performance is 0.072101 at epoch 5. In the plot, the blue line is for training, the red line for testing and the green line for validation.

Figure 5 Performance plot

Table 3 provides the location values for the various types of faults and their distances. The neural network should locate the exact position based on its training.

Figure 6 shows the voltage waveform for a BG fault; the voltage magnitude at phase B is reduced. The next graph provides the RMS voltage value.

Figure 6 Voltage waveform

Figure 7 shows the current waveform; the magnitude of phase B is increased due to the fault.

Figure 7 Current Waveform

CONCLUSION
By using an ANN, the fault classification, location and direction have been estimated. The faults have been classified accurately for all fault types. The fault location has an error that could be reduced with more training iterations on the fault location values. The computational complexity is high due to the large training data, the parameter selection and the long training time. The proposed method is analyzed for a simple system, and in future this model can be applied to complex power system networks. To minimize the computational complexity and improve the efficiency of the system, Fuzzy Logic schemes can be implemented.
REFERENCES
[1] Jamehbozorg, A., Shahrtash, S.M., "A decision-tree-based method for fault classification in single-circuit transmission lines", IEEE Trans. Power Deliv., 2010, 25, (4), pp. 2190-2196.
[2] Parikh, U.B., Das, B., Maheshwari, R., "Fault classification technique for series compensated transmission line using support vector machine", Int. J. Electr. Power Energy Syst., 2010, 32, (6), pp. 629-636.
[3] Ekciki, S., "Support vector machines for classification and locating of fault in transmission line", Appl. Soft Comput., 2012, 12, pp. 1650-1658.
[4] Duan, J.D., Zhang, B.H., Luo, S.B., Zhou, Y., "Transient-based ultra-high-speed directional protection using wavelet transforms for EHV transmission lines", IEEE/PES Transmission and Distribution Conf. and Exhibition: Asia and Pacific, 2005, pp. 1-6.
[5] O. A. S. Youssef, "New algorithm to phase selection based on wavelet transforms", IEEE Trans. Power Del., Vol. 17, pp. 908-914, Oct. 2002.
[6] Xinzhou Dong, Wei Kong, and Tao Cui, "Fault Classification and Faulted-Phase Selection Based on the Initial Current Traveling Wave", IEEE Transactions on Power Delivery, Vol. 24, No. 2, pp. 552-559, 2009.
[7] Dong, X., Dong, X., Zhang, Y., Guo, X., Ge, Y., "Directional protective relaying based on polarity comparison of traveling wave by using wavelet transform", Autom. Electr. Power Syst., 2000, 7, pp. 11-15.
[8] A. Apostolov, D. Tholomier, S. Sambasivan, and R. Richards, "Protection of double circuit transmission lines", in Proc. 60th Annu. Conf. Protective Relay Engineers, pp. 85-101, 2007.
[9] D. Das, N. K. Singh, and A. K. Sinha, "A comparison of Fourier transform and wavelet transform methods for detection and classification of faults on transmission lines", presented at the IEEE Power India Conf., India, 2006.
[10] S. M. Brahma, "New fault-location method for a single multiterminal transmission line using synchronized phasor measurements", IEEE Transactions on Power Delivery, Vol. 21, No. 3, pp. 1148-1153, July 2006.
[11] S. M. Brahma, "Fault location scheme for a multi-terminal transmission line using synchronized voltage measurements", IEEE Transactions on Power Delivery, Vol. 20, No. 2, pp. 1325-1331, April 2005.
[12] H. Demuth and M. Beale, Neural Network Toolbox - For Use with MATLAB, 2000.


Complex Wavelet Transform Based Cardiac Arrhythmia Classification
A. Kalai Selvi(1), M. Sasireka(2), Dr. A. Senthilkumar(3), Dr. S. Maheswari(4)
1 PG Scholar, Department of Electrical and Electronics Engineering, Kongu Engineering College, Erode - 638052, India. Email: kalai.art10@gmail.com
2 Assistant Professor, Department of Electronics and Instrumentation Engineering, Kongu Engineering College, Erode - 638052, India. Email: Sasirek_28@yahoo.com
3 Professor & Head, Department of Electrical and Electronics Engineering, Dr. Mahalingam College of Engineering and Technology, Pollachi - 642001, India. Email: hod_eee@drmcet.ac.in
4 Assistant Professor, Department of Electrical and Electronics Engineering, Kongu Engineering College, Erode - 638052, India. Email: Maheswari_bsb@yahoo.com

Abstract: The ElectroCardioGram (ECG) wave reveals the electrical activity of the cardiac system. Small changes in the amplitude and duration of the ECG signal cannot be assessed precisely by the human eye; hence there is a need for a computer-aided diagnosis system. In the proposed method, a dual tree complex wavelet transform based feature extraction approach is used for the classification of cardiac arrhythmias. The feature set consists of complex wavelet coefficients extracted from the fourth and fifth scales of the DTCWT decomposition of a QRS complex signal, in association with four other features, AC power, kurtosis, skewness and energy, extracted from the QRS complex signal. A Support Vector Machine (SVM) is used to classify the ECG beats. The empirical results reveal that the DWT- and DTCWT-based feature extraction techniques classify the ECG beats of the MIT-BIH Arrhythmia database.
Index terms: Discrete Wavelet Transform (DWT), Dual Tree Complex Wavelet Transform (DTCWT), Electro Cardio Gram (ECG), Support Vector Machine (SVM).

I. INTRODUCTION
The analysis of the ECG has been extensively used for diagnosing many cardiac diseases. Arrhythmias commonly occur due to an abnormal heartbeat. These cardiac diseases can be diagnosed non-invasively using the ECG signal. Computer-aided heart arrhythmia identification and classification can play a significant role in the management of cardiovascular diseases. An important step toward the detection of arrhythmia is the classification of heartbeats: the rhythm of the ECG signal can then be determined by knowing the classification of consecutive heartbeats in the signal. Hence, there is a need for a computer-aided diagnosis system that can accomplish higher recognition accuracy. Numerous techniques have been applied to analyse and classify ECG beats.
In [1], the authors classified PVC beats from normal and other abnormal beats by using wavelet-transformed ECG waves with timing data as features and an ANN as the classifier; an overall accuracy of 95.16% is achieved by this technique. In [2], PCA is used as a tool for the classification of five types of ECG beats (N, LBBB, RBBB, PVC and APC), and a comparative study is performed on three feature extraction methodologies (principal components of segmented ECG beats, principal components of the error signals of a linear prediction model, and principal components of DWT coefficients). In [3], an accuracy of 94.64% is accomplished using the approximation wavelet coefficients of the ECG signal in conjunction with three timing features and an RBF neural network as the classifier; here, classification was performed on five types of cardiac beats (N, LBBB, RBBB, PVC and APC). In [4], the authors used particle swarm optimization and a radial basis function neural network (RBFNN) for classifying six types of ECG beats. In [5], an experimental pilot study is performed to examine the effect of a pulsed electromagnetic field (PEMF) at extremely low frequency (ELF) on photoplethysmographic (PPG), electrocardiographic (ECG) and electroencephalographic (EEG) activity using the discrete wavelet transform. In [6], the author proposed electroencephalography (EEG) seizure detection using DTCWT-Fourier features and a neural network as the classifier; these features accomplish perfect classification rates (100%) for the EEG database from the University of Bonn. In [7], the authors classified the five types of ECG beats recommended by the Association for the Advancement of Medical Instrumentation (AAMI) standard, i.e. normal beat, ventricular ectopic beat (VEB), supraventricular ectopic beat (SVEB), fusion of normal and VEB, and unknown beat, using ECG morphology, heartbeat intervals and RR intervals as features with a classifier based on linear discriminants. In [8], the authors have shown the generalization capability of the Extreme Learning Machine (ELM) over the support vector machine (SVM) approach in the automatic classification of cardiac beats. In [9], the wavelet transform and a probabilistic neural network are used to classify six types of ECG beats; this technique displayed high classification accuracy, but the experiment was limited to very small sets of data. In [10], a combination of independent features and compressed ECG data is used as input to a multilayered perceptron network, and an accuracy of 88.3% is reported on 10 files of the MIT-BIH database. In this paper, we propose a novel technique for classifying cardiac beats using the complex wavelet coefficients of the 4th and 5th scales of the DTCWT decomposition in association with four features extracted from the QRS complex signal of each cardiac cycle. The Fourier transform (FT) of a signal provides poor time-frequency localization, and the short-time Fourier transform (STFT) analyses every spectral component uniformly. Wavelets analyse a non-stationary signal with a varying window size, thereby ensuring good time-frequency localization of the ECG signal. The dual tree complex wavelet transform adds approximate shift invariance and directionally selective filters while preserving the usual properties of perfect reconstruction and computational efficiency. The ability of the dual tree complex wavelet transform to provide shift invariance means that its coefficients can discriminate shifts of the input signal. An artificial neural network trained by the back-propagation algorithm can also be used to classify ECG beats into appropriate classes. A comparative study is performed on the two sets of features, and the experimental results indicate that the DTCWT-based features perform better than the DWT features.
II. ECG DATA
Data from the MIT-BIH arrhythmia database were used in this paper which includes recordings
of many common and aggressive arrhythmias along with examples of normal sinus rhythm. The database
contains 48 recordings, each containing two 30-min ECG lead signals. These records were sampled at 360
Hz and band pass filtered at 0.1-100 Hz [10].
III. PROPOSED FRAMEWORK
The suggested technique consists of three main stages: (i) pre-processing, (ii) feature extraction and
(iii) classification. The pre-processing level contains amplitude normalization and filtering of ECG signals.
The ECG signals are normalized to a mean of zero and standard deviation of unity, hence decreasing the
amplitude variance from file to file. The embedded noises in ECG signals are eliminated using a band pass
filter with a cut off frequency of 422 Hz. The pre-processed ECG signal is used in next level for extracting
significant features. The features extracted using DTCWT technique is applied as input to an SVM classifier
which maps the feature vectors to the respective class labels. The proposed block diagram as shown in Fig I.

Fig.I.Block diagram of the proposed technique.
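A minimal sketch of the pre-processing stage described above, using SciPy; the exact pass band is an assumption, since the cut-off figure in the source text appears garbled.

import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_ecg(x, fs=360.0, lowcut=4.0, highcut=22.0, order=3):
    """Normalize an ECG record and band-pass filter it.
    fs = 360 Hz matches the MIT-BIH sampling rate; the pass band is an
    assumption, as the cut-off figure in the source text is garbled."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()            # zero mean, unit std
    nyq = fs / 2.0
    b, a = butter(order, [lowcut / nyq, highcut / nyq], btype="band")
    return filtfilt(b, a, x)                # zero-phase filtering

# Usage: clean = preprocess_ecg(raw_record)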


IV. WAVELETS
A. Introduction
The wavelet transform has risen in recent years as a powerful time-frequency analysis tool. Both short-period, high-frequency and longer-period, lower-frequency information can be captured simultaneously using wavelets. Hence the method is particularly useful for the study of transients, aperiodicity and other non-stationary signal features where, through interrogation of the transform, small changes in signal morphology may be highlighted over the scales of interest. Another key merit of wavelet techniques is the variety of wavelet functions available, allowing the most appropriate one to be chosen for the signal under investigation.
B. Feature extraction using discrete wavelet transform
The analysis of the ECG signals was performed using the DWT. The selection of the appropriate wavelet and the number of decomposition levels is essential when analysing signals with the wavelet transform. The number of decomposition levels is chosen based on the frequency components of the signal: the levels are chosen such that the parts of the signal that correlate well with the frequencies required for classification are retained in the wavelet coefficients. In this work the number of decomposition levels was chosen to be 4; thus, the ECG signals were decomposed into the detail coefficients D1-D4 and one final approximation coefficient A4. The smoothing feature of the Daubechies wavelet of order 2 (db2) makes it well suited to detecting changes in the signals; therefore, the wavelet coefficients were computed using db2. The frequency bands correspond to the different levels of decomposition for the db2 wavelet with a sampling frequency of 256 Hz. The discrete wavelet coefficients were computed using the MATLAB software. Feature selection is an important component of designing a pattern classifier, since even the best classifier will perform badly if the features used as inputs are not selected well. The computed discrete wavelet coefficients provide a compact representation that shows the energy distribution of the signal in time and frequency. To reduce the dimensionality of the extracted feature vectors, statistics over the set of wavelet coefficients were used: from each sub-band four statistical parameters are computed, i.e. the maximum, minimum, mean and standard deviation of the wavelet coefficients. The three-level sub-band decomposition using the DWT technique is shown in Fig. II.

Fig.II.Three level sub-band decomposition using DWT technique
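A minimal sketch of this DWT feature extraction using the PyWavelets package (the paper itself uses MATLAB); the four-level db2 decomposition and the four statistics per sub-band follow the description above.

import numpy as np
import pywt

def dwt_features(beat, wavelet="db2", level=4):
    """Statistics over a db2 DWT of one QRS-centred beat. The text quotes
    five levels but lists D1-D4 and A4, so four levels are used here."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)   # [A4, D4, D3, D2, D1]
    feats = []
    for c in coeffs:
        # Four statistical parameters per sub-band, as described above.
        feats += [np.max(c), np.min(c), np.mean(c), np.std(c)]
    return np.array(feats)                              # 20 values

# Usage: feature_vector = dwt_features(qrs_segment)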

C. Feature extraction using dual tree complex wavelet transform

The dual tree complex wavelet transform has a multiresolution representation, just like the DWT. Two sets of real filters are used for generating the real part (TREE A) and the imaginary part (TREE B) of the complex wavelet transform. The feature extraction technique can be summarized as follows: (i) extract the QRS complex by taking 256 samples around the R-peak; (ii) decompose the QRS complex signal into five resolution scales by using the 1D DTCWT; (iii) choose the DTCWT features from the 4th and 5th scales and compute the absolute values of the real and imaginary (detail) coefficients from each scale; (iv) perform a 1D FFT on the selected features and take the logarithm of the Fourier spectrum. The shift-invariance property of the DTCWT and the FFT helps in classifying the ECG beats efficiently. In addition to the complex wavelet based features, four other features are extracted from the QRS complex of each cardiac cycle: (i) the AC power of the QRS complex signal; (ii) the kurtosis of the QRS complex signal; (iii) the skewness of the QRS complex signal; (iv) the energy of the QRS complex signal. These features are also classified using the SVM classifier. The dual tree CWT corresponding to three levels is shown in Fig. III.

Fig.III.Dual tree CWT corresponding to three levels
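A sketch of the DTCWT feature extraction steps listed above, using the open-source dtcwt Python package as a stand-in for the authors' MATLAB implementation; the package API and the feature-vector layout are assumptions.

import numpy as np
import dtcwt                       # pip install dtcwt
from scipy.stats import kurtosis, skew

def dtcwt_features(beat):
    """Feature vector for one 256-sample QRS segment: log-FFT of the
    4th- and 5th-scale DTCWT detail magnitudes, plus AC power, kurtosis,
    skewness and energy of the segment."""
    beat = np.asarray(beat, dtype=float)
    pyramid = dtcwt.Transform1d().forward(beat.reshape(-1, 1), nlevels=5)
    detail = np.concatenate([np.abs(pyramid.highpasses[3]).ravel(),
                             np.abs(pyramid.highpasses[4]).ravel()])
    log_spectrum = np.log(np.abs(np.fft.fft(detail)) + 1e-12)
    extras = [np.var(beat),            # AC power
              kurtosis(beat),
              skew(beat),
              np.sum(beat ** 2)]       # energy
    return np.concatenate([log_spectrum, extras])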

The ECG signals of the MIT-BIH database are sampled at a frequency of 360 samples per second; hence the frequency components in the ECG range between 0 and 180 Hz. In our work the wavelet coefficients are computed across the QRS complex, whose energy is maximum in the frequency range of 8-20 Hz. The number of decomposition levels is limited to 5, beyond which baseline wander dominates.
V. SUPPORT VECTOR MACHINE
The Support Vector Machine (SVM) was introduced by Vapnik. SVMs are a relatively new learning method used for binary classification; the basic idea is to find a hyperplane that separates the d-dimensional data perfectly into its two classes. The SVM is a supervised classification method: a set of known objects, called the training set, is given, and each object of the training set consists of a feature vector and an associated class value. Based on the training data, the learning algorithm extracts a decision function to classify unknown input data. The architecture of the SVM is shown in Fig. V.

Fig V Architecture of SVM

Consider examples (x_i, y_i), i = 1, ..., l, where each example has d inputs (x_i in R^d) and a class label taking one of two values (y_i in {-1, 1}). All hyperplanes in R^d are parameterized by a vector w and a constant b through the equation

w . x + b = 0,

where w is the vector orthogonal to the hyperplane. Given such a hyperplane (w, b) that separates the data, the function

f(x) = sign(w . x + b)

correctly classifies the training data. However, the hyperplane represented by (w, b) is equally expressed by all scaled pairs (lambda*w, lambda*b) for lambda in R^+. The canonical hyperplane is defined to be the one that separates the data from the hyperplane by a functional margin of at least 1, that is, we consider those (w, b) that satisfy

y_i (x_i . w + b) >= 1 for all i.

To obtain the geometric distance from the hyperplane to a data point, we must normalize by the magnitude of w. This distance is given by

d((w, b), x_i) = y_i (x_i . w + b) / ||w|| >= 1 / ||w||.

The hyperplane that maximizes the geometric distance to the closest data points is needed. This is
accomplished by minimizing ||w||.
The vectors closest to the boundary are called support vectors, and the distance between the support vectors and the hyperplane is called the margin. In order to detect the arrhythmias and reduce false positives, the feature vector is analysed by a classifier. SVMs are a useful technique for data classification: they are supervised learning models with associated learning algorithms that analyse data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts one of two possible classes for each given input. A brief strategy for ECG classification is as follows (see the sketch after this list):
1. Select the input data patterns.
2. Embed the input data into a multi-dimensional feature space by computing inner products of data patterns using a kernel function.
3. Seek linear relations among the data patterns in the feature space by trying to find a separating hyperplane with maximum support vector margins.
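A minimal sketch of this strategy with scikit-learn's SVC; the feature matrix and labels are placeholders standing in for the DTCWT feature vectors and beat classes.

import numpy as np
from sklearn.svm import SVC

# Placeholder data: one feature vector per heartbeat with integer labels.
X = np.random.rand(200, 24)
y = np.random.randint(0, 2, 200)

# An RBF kernel embeds the features in a higher-dimensional space, where
# a maximum-margin separating hyperplane is sought, as outlined above.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y)
print(clf.predict(X[:5]))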
VI. RESULTS
The classifier is evaluated using the parameters sensitivity and accuracy:
Sensitivity = (TP / (TP + FN)) * 100
Accuracy = ((TP + TN) / (TP + TN + FP + FN)) * 100
where TP stands for true positive, TN for true negative, FP for false positive and FN for false negative. The classification performance is shown in Table I.
Table I Classification performance
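For concreteness, the two evaluation formulas transcribe directly into Python; the example counts in the comment are hypothetical.

def sensitivity(tp, fn):
    """Sensitivity (recall) in percent: TP / (TP + FN) * 100."""
    return 100.0 * tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Overall accuracy in percent."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts: sensitivity(50, 50) -> 50.0,
# accuracy(50, 80, 20, 50) -> 65.0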

VII. CONCLUSION
In this paper, a technique is proposed for classifying ECG beats using a DTCWT-based feature set. Four features, AC power, kurtosis, skewness and energy, extracted from the QRS complex of each cardiac cycle, concatenated with the features extracted from the fourth and fifth decomposition levels of the DTCWT, are used as the total feature set. The SVM is used as the classifier because of its ability to learn and generalize, its smaller training set requirements, fast operation and ease of implementation. The major advantage of this network is that it finds the non-linear surfaces separating the underlying patterns, which is generally considered an improvement on conventional methods, and complex class-distributed features can be easily mapped by the classifier. The proposed method has shown a sensitivity of 50%, which indicates the potential of this technique as a model for computer-aided diagnosis of cardiac arrhythmias. The performance of the proposed method is compared with DWT-based statistical features, and it is seen that the proposed feature set achieves higher recognition accuracy than the DWT-based features. The proposed methodology can be used in telemedicine applications, arrhythmia monitoring systems, cardiac pacemakers, remote patient monitoring and intensive care units.
REFERENCES
[1] Inan OT, Giovangrandi L, Kovacs GT, "Robust neural network based classification of premature ventricular contractions using wavelet transform and timing interval features", IEEE Trans Biomed Eng 2006; 53(12):2507-15.
[2] Korurek M, Dogan B, "ECG beat classification using particle swarm optimization and radial basis function neural network", Expert Syst Appl 2010; 37:7563-9.
[3] Yu SN, Chen YH, "Electrocardiogram beat classification based on wavelet transformation and probabilistic neural network", Pattern Recogn 2007; 28:1142-50.
[4] Thomas M, Das MK, Ari S, "Classification of cardiac arrhythmias based on dual tree complex wavelet transform", in Proceedings of the IEEE International Conference on Communication and Signal Processing (ICCSP), 2014.
[5] Ubeyli ED, "Statistics over features of ECG signals", Expert Syst Appl 2009; 36:8758-67.
[6] Kadambe S, Srinivasan P, "Adaptive wavelets for signal classification and compression", Int J Electron Commun (AEU) 2006; 60:45-55.
[7] Martis RJ, Acharya UR, Ray AK, "Application of principal component analysis to ECG signals for automated diagnosis of cardiac health", Expert Syst Appl 2012; 39:11792-800.
[8] Chen G, "Automatic EEG seizure detection using dual-tree complex wavelet-Fourier features", Expert Syst Appl 2014; 41:2391-4.
[9] Hosseini HG, Reynolds KJ, Powers D, "A multi-stage neural network classifier for ECG events", in 23rd Int. Conf. IEEE EMBS, 2001, pp. 1672-5.
[10] Chazal PD, Dwyer MO, Reilly RB, "Automatic classification of heartbeats using ECG morphology and heartbeat interval features", IEEE Trans Biomed Eng 2004; 51(7).


QoS Guided Prominent Value Tasks Scheduling Algorithm in Computational Grid Environment
Dr. G. K. Kamalam(1), B. Anitha(2) and V. Gokulnathan(3)
1 M.E., Ph.D., Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.
2 M.E., Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.
3 B.Tech. II-ITA, Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.

Abstract: Grids enable large-scale coordinated and collaborative resource sharing. Grid resources are owned and managed by multiple organizations for solving scientific and engineering problems that require large amounts of computational resources. Scheduling tasks onto distributed heterogeneous grid resources belongs to the class of NP-Complete problems. Achieving high performance in a heterogeneous grid environment requires an efficient mapping of tasks to appropriate resources. The order in which tasks are scheduled onto resources is a critical criterion in scheduling, as it determines the resulting makespan. This paper proposes a heuristic scheduling technique, the QoS Guided Prominent Value Tasks Scheduling Algorithm, that determines the order in which tasks are to be scheduled to the appropriate resources so as to optimize the completion time of the tasks. The comparison study shows that the proposed algorithm achieves an efficient mapping of resources to tasks and provides overall optimal performance with reduced makespan. The experimental results reveal that the order of the mapping heuristic depends on parameters such as (a) the QoS value, (b) the Prominent Value and (c) the execution time of the tasks.
Index terms: Task Scheduling, Heterogeneous, QoS, NP-Complete

I. INTRODUCTION
An emerging trend in network technology has led to the interconnection of diverse sets of geographically distributed heterogeneous resources that support the execution of computationally intensive applications. High performance of grid applications can be achieved by an efficient scheduling strategy, and the key to achieving it is the efficient mapping of the meta-task to the available computational resources. The fundamental criterion for obtaining optimal task scheduling is a reduced makespan [3,9]. A meta-task is defined as a collection of independent, non-communicating tasks, and the makespan is the overall completion time of all the computational tasks. The problem of optimally mapping computational tasks to a diverse set of geographically distributed heterogeneous grid resources has been shown to be NP-Complete [2,4]. The grid scheduler needs to consider the task and QoS constraints to identify a better mapping between the tasks and the grid resources. The proposed QoS Guided Prominent Value Tasks Scheduling Algorithm classifies the tasks, based on their QoS requirements, into high QoS tasks and low QoS tasks; likewise, based on the task constraints, the grid resources are classified into high QoS provision resources and low QoS provision resources. The algorithm performs a better mapping between the tasks and the grid resources by computing a Prominent Value (PV) for each task. The tasks are ordered in the Prominent Value Set (PVS) from the minimum to the highest prominent value. The proposed algorithm achieves optimal scheduling with reduced makespan compared with the Min-min heuristic scheduling algorithm.
II.RELATED WORKS
The Opportunistic Load Balancing (OLB) algorithm does not consider the minimum execution time or minimum completion time of the tasks; it schedules the tasks in arbitrary order to the available grid resources, and the grid resources are also selected in an arbitrary order [5,6].
The Minimum Execution Time (MET) algorithm assigns each task to the resource with the minimum expected execution time for that task [1,7].
The Minimum Completion Time (MCT) algorithm assigns each task to the resource with the minimum completion time for that task. The disadvantage of this algorithm is that some tasks are not assigned the resource giving their minimum execution time [1,7].
The Min-min algorithm starts with the set U of all unmapped tasks. The minimum completion time of each task in U over all resources is calculated. Then the task with the overall minimum completion time is selected and scheduled on the corresponding resource. The newly allocated task is removed from U, and the process repeats until all tasks in U are mapped [5,6,8].
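A minimal Python sketch of the Min-min loop just described, operating on an ETC (expected time to compute) matrix; the None convention for unusable resources is an assumption added for the QoS setting discussed later.

def min_min(etc):
    """Min-min over an ETC matrix etc[task][resource]; a None entry marks
    a resource that cannot execute the task. Returns (assignment, makespan)."""
    unmapped = set(range(len(etc)))
    ready = [0.0] * len(etc[0])        # next free time of each resource
    assignment = {}
    while unmapped:
        # Minimum completion time of every unmapped task ...
        best = {t: min((ready[r] + etc[t][r], r)
                       for r in range(len(ready)) if etc[t][r] is not None)
                for t in unmapped}
        # ... then the task with the overall minimum completion time wins.
        task = min(unmapped, key=lambda t: best[t][0])
        completion, res = best[task]
        assignment[task] = res
        ready[res] = completion
        unmapped.remove(task)
    return assignment, max(ready)

# Usage: min_min([[3, 5], [4, None]]) -> ({0: 0, 1: 0}, 7.0)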
The Max-Min algorithm starts with the set U of all unmapped tasks. The set of minimum completion
time for each task in the set U is calculated. Then, the task with the overall maximum completion time is
selected and scheduled to the particular resource. The newly allocated task is removed from the set U and
the process repeats until all tasks in the set U are mapped [5,6].
Our previous work, the Min-mean heuristic scheduling algorithm, works in two phases. In the first phase, it starts with the set of all unmapped tasks, calculates the completion time of each task on each resource, and finds the minimum completion time of each task. From that group, it selects the task with the overall minimum completion time, allocates it to the appropriate resource, and removes it from the task set; this process repeats until all the tasks are mapped. The algorithm then calculates the total completion time of all the resources and the mean completion time. In the second phase, the mean of all the resources' completion times is taken; the resources whose completion time is greater than the mean value are selected, and the tasks allocated to those resources are reallocated to resources whose completion time is less than the mean value [10,11].
In QoS guided min-min heuristic, tasks are classified into the high QoS and low QoS tasks. High QoS
tasks are given the highest priority and are first scheduled using Min-min heuristic scheduling algorithm.
Low QoS tasks are also scheduled using Min-min heuristic scheduling algorithm [14].
In QoS priority grouping algorithm, the tasks are classified into n groups based on the number of
resources on which the tasks can execute. Tasks from each group are scheduled using sufferage algorithm
independently [13].
In QoS sufferage heuristic algorithm, tasks are grouped into two groups based on high QoS and
low QoS requirements. Tasks are scheduled based on the highest sufferage value assigned to the resource.
Sufferage value is the difference between the earliest completion time and the second earliest completion
time [3].
In the QoS based predictive max-min, min-min switcher algorithm, tasks are grouped into high QoS request tasks and low QoS request tasks. The algorithm schedules the tasks using the two conventional algorithms, Max-min and Min-min, switching between them based on the standard deviation of the minimum completion times of the unassigned tasks [15]. A task with a high QoS request can only be executed on a resource with high QoS provision [16].
III.MATERIALS AND METHODS
A meta-task is defined as a collection of independent tasks with no inter-task data dependencies. An application consists of n independent tasks and m heterogeneous resources. Finding an optimal order for mapping the meta-task to the set of heterogeneous resources in a grid environment is an NP-Complete problem [12]. An efficient scheduling algorithm is proposed that determines an optimal order for mapping the meta-task to the set of heterogeneous resources.
In the proposed QoS Guided Prominent Value Tasks Scheduling Algorithm, the tasks of the meta-task are grouped into two groups, high QoS and low QoS. Tasks with high QoS requirements can only be executed on resources with high QoS provision. High QoS tasks are assigned low QoS values, while low QoS tasks are assigned high QoS values based on their execution times; the low QoS tasks with the highest execution times are given the highest priority.
The proposed algorithm uses the Prominent Value of each task as the scheduling criterion: the order in which the tasks are scheduled is based on the Prominent Value. The tasks are ordered from the minimum
Prominent Value to the highest Prominent Value. The ordered tasks are scheduled to the resources with the
minimum completion time of the task. The common objective function of the task scheduling algorithm
is the makespan. Makespan is defined as the total time required for executing the meta-task. The proposed
algorithm provides an optimal schedule with reduced makespan.
A. Notations and Definitions
The notations and definitions used in this paper are shown in Table 1.

The pseudocode for finding the Credit Point of each task is given below:

B. QoS Value
The resources may not have the capability to execute all the tasks, due to their low QoS provision. Tasks that can be executed on only one resource, or on a few resources, are grouped into high QoS tasks; tasks that can be executed on all resources are grouped into low QoS tasks.
A task that can be executed on only one resource is given the QoS value 1, a task that can be executed on only two resources is given the QoS value 2, and so on. The tasks that can be executed on all resources are given QoS values based on their execution times: the task with the maximum execution time is given the highest QoS value, and so on.
The pseudocode for finding the QoS Credit Value is shown below. The value dv is determined as follows: if the highest QoS value assigned to a task is a two-digit number, dv = 100; if it is a three-digit number, dv = 1000; and so on.
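Since the original pseudocode did not survive in the text, the sketch below gives one plausible reading of the QoS value assignment and of dv, under the assumptions stated in the comments; it is illustrative, not the authors' exact procedure.

def qos_values(etc):
    """One plausible reading: a task runnable on k < m resources gets QoS
    value k; tasks runnable on all m resources get the remaining larger
    values, the longest task receiving the highest value."""
    m = len(etc[0])
    runnable = {t: [r for r in range(m) if etc[t][r] is not None]
                for t in range(len(etc))}
    qos = {t: len(res) for t, res in runnable.items() if len(res) < m}
    low = sorted((t for t, res in runnable.items() if len(res) == m),
                 key=lambda t: max(etc[t]), reverse=True)   # longest first
    value = max(qos.values(), default=0) + len(low)
    for t in low:
        qos[t] = value
        value -= 1
    return qos

def dv_for(qos):
    """dv = 100 for a two-digit highest QoS value, 1000 for three digits."""
    return 10 ** len(str(max(qos.values())))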
C.QoS Guided Prominent Value Tasks Scheduling Algorithm

A simple example is given below to illustrate the execution of the proposed QoS Guided Prominent Value Tasks Scheduling Algorithm and to compare its efficiency with the existing Min-min heuristic scheduling algorithm. Table 1 shows the execution times of 9 tasks on 5 resources; an entry X in the table denotes that the resource does not have the capability to execute that particular task due to its low QoS provision.

The maximum value in the given ETC matrix is METC = 17.3. The threshold values are TV1 = 17.3/2 = 8.7, TV2 = 17.3/3 = 5.8, TV3 = 14.5 and TV4 = 23.2. The Credit Point of each task is computed and is shown in Table 2.
Table 2 Credit Point for each Task

The task t1 can be executed on only one resource, R5, so t1 is a high QoS task and is given the low QoS value 1. Next, the task t2 can be executed on two resources, R4 and R5, so t2 is given the QoS value 2, and so on. The tasks t8 and t9 are low QoS tasks, since they can be executed on all resources, and are given high QoS values. The task t9 has the maximum execution time and is given the high priority; the credit for task t9 is 5 and for task t8 is 6. The QoS value and QoS Credit Value of each task are computed and shown in Table 3, together with the Prominent Value of each task ti.

The tasks are ordered in the Prominent Value Set (PVS) in ascending order of PVi:
PVS = {t1, t2, t4, t3, t5, t8, t9, t6, t7}
The high QoS tasks are scheduled to the resources that have high QoS provision, and the low QoS tasks are scheduled to the resources that have low QoS provision. The tasks are scheduled in the order
specified in the task set PVS. The makespans obtained for the Min-min algorithm, the QoS sufferage algorithm and the proposed QoS Guided Prominent Value Tasks Scheduling Algorithm are shown in Table 4.

Table 4 A comparison between the existing and proposed algorithms in makespan and task schedule order.
IV.SIMULATION AND RESULTS
The proposed approach is evaluated with user-defined numbers of resources and tasks. The execution times of all the tasks on all the resources are generated using the ETC matrix, a benchmark model designed by Braun et al. [1,4,7]; the rows of the ETC matrix represent the execution time of each task on all given resources.
Figure 1 shows the experimental results corresponding to ETC matrices of 50 tasks * 5 resources, 100 tasks * 10 resources, 150 tasks * 10 resources, 200 tasks * 10 resources and 250 tasks * 10 resources. They indicate that the proposed QoS Guided Prominent Value Tasks Scheduling Algorithm performs well and outperforms the Min-min heuristic scheduling algorithm, giving a reduced makespan in all five cases.

Figure 1: Comparison based on makespan for five different cases

V.CONCLUSION
Task scheduling is an NP-Complete problem in a distributed grid environment. This paper proposed a novel heuristic scheduling strategy that considers the QoS factor in scheduling tasks onto resources. The proposed QoS Guided Prominent Value Tasks Scheduling Algorithm and the Min-min heuristic scheduling algorithm are examined using the benchmark simulation model of Braun et al. [1,4,7]. The presented experimental results show that the proposed heuristic scheduling strategy yields a significant improvement in performance in terms of reduced makespan and outperforms the Min-min heuristic scheduling algorithm.
REFERENCES
[1] T. Braun, H. Siegel, N. Beck, L. Boloni, M. Maheshwaran, A. Reuther, J. Robertson, M. Theys, B. Yao, D. Hensgen, and R. Freund, "A Comparison Study of Static Mapping Heuristics for a Class of Meta-tasks on Heterogeneous Computing Systems", in 8th IEEE Heterogeneous Computing Workshop (HCW'99), pp. 15-29, 1999.
[2] I. Foster and C. Kesselman, The Grid: Blueprint for a Future Computing Infrastructure, Morgan Kaufmann Publishers, USA, 1998.
[3] E. U. Munir, J. Li, and S. Shi, "QoS Sufferage Heuristic for Independent Task Scheduling in Grid", Information Technology Journal 6(8), pp. 1166-1170, 2007.
[4] T. D. Braun, H. J. Siegel, N. Beck, "A Taxonomy for Describing Matching and Scheduling Heuristics for Mixed-machine Heterogeneous Computing Systems", IEEE Workshop on Advances in Parallel and Distributed Systems, West Lafayette, pp. 330-335, 1998.
[5] R. Armstrong, D. Hensgen, and T. Kidd, "The Relative Performance of Various Mapping Algorithms is Independent of Sizable Variances in Run-time Predictions", in 7th IEEE Heterogeneous Computing Workshop (HCW'98), pp. 79-87, 1998.
[6] R. F. Freund and H. J. Siegel, "Heterogeneous Processing", IEEE Computer, 26(6), pp. 13-17, 1993.
[7] T. D. Braun, H. J. Siegel, and N. Beck, "A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems", Journal of Parallel and Distributed Computing 61, pp. 810-837, 2001.
[8] R. F. Freund and M. Gherrity, "Scheduling Resources in Multi-user Heterogeneous Computing Environment with SmartNet", in Proceedings of the 7th IEEE HCW, 1998.
[9] G. K. Kamalam and V. Murali Bhaskaran, "A New Heuristic Approach: Min-Mean Algorithm for Scheduling Meta-Tasks on Heterogeneous Computing Systems", IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 1, pp. 24-31, 2010.
[10] G. K. Kamalam and V. Murali Bhaskaran, "An Improved Min-Mean Heuristic Scheduling Algorithm for Mapping Independent Tasks on Heterogeneous Computing Environment", International Journal of Computational Cognition, Vol. 8, No. 4, pp. 85-91, 2010.
[11] G. K. Kamalam and V. Murali Bhaskaran, "New Enhanced Heuristic Min-Mean Scheduling Algorithm for Scheduling Meta-Tasks on Heterogeneous Grid Environment", European Journal of Scientific Research, Vol. 70, No. 3, pp. 423-430, 2012.
[12] H. Baghban, A. M. Rahmani, "A Heuristic on Job Scheduling in Grid Computing Environment", in Proceedings of the Seventh IEEE International Conference on Grid and Cooperative Computing, pp. 141-146, 2008.
[13] F. Dong, J. Luo, L. Gao, and L. Ge, "A Grid Task Scheduling Algorithm based on QoS Priority Grouping", in Proceedings of the 5th International Conference on Grid and Cooperative Computing, pp. 58-61, 2006.
[14] X. He, X. H. Sun, and G. V. Laszewski, "QoS Guided min-min heuristic for grid task scheduling", Journal of Computer Science and Technology (Special Issue on Grid Computing), pp. 442-451, 2003.
[15] M. Singh and P. K. Suri, "Analysis of service, challenges and performance of a grid", International Journal of Computer Science and Network Security, pp. 84-88, 2007.
[16] L. He and S. A. Jarvis, "Dynamic Scheduling of parallel jobs with QoS demands in multiclusters and grids", in Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing, pp. 402-409, 2004.


Authenticated Hand Sign Recognition for Human Computer Interaction
S. Sharaenya(#1) and C. S. Manikandababu(*2)
# Student, ECE, Sri Ramakrishna Engineering College, Coimbatore, India.
* Assistant Professor (Sl. Grade), ECE, Sri Ramakrishna Engineering College, Coimbatore, India.
Abstract: Hand sign recognition has garnered attention in recent years due to its diverse applications. It aids Human Computer Interaction (HCI), which transforms the way machines are operated. Providing reliable machine authentication in HCI is crucial to allow secure access to data. The main aim of the project is to identify the authorized user by knuckle pattern identification, which is discriminative. The features of the knuckle pattern are extracted accurately by using the SURF algorithm. The hand sign of the authorized user is then captured to perform the analogous action. The contour of the hand sign is extracted from the pre-processed image and compared with the hand signs stored in a database, and an efficient matching algorithm is employed to find the closest sign to the input posture.
Index terms: Hand Gesture, Human Computer Interaction, Image Processing, Object Detection.

I. INTRODUCTION
Recognition of sign languages is one of the major concerns of the international deaf community. However, contrary to popular belief, sign language is not universal: wherever communities of deaf people exist, sign languages develop, but as with spoken languages, they vary from region to region. There is no unique way in which such recognition can be formalized; every country has its own interpretation. Sign languages are not based on the spoken language of the country of origin, and in fact their complex spatial grammars are markedly different. Sign language recognition is a multidisciplinary research area involving pattern recognition, computer vision, natural language processing and psychology. It is a comprehensive problem because of the complexity of the visual analysis of hand gestures and the highly structured nature of sign languages. A functioning sign language recognition system can provide an opportunity for a mute person to communicate with non-signing people without the need for an interpreter; it can be used to generate speech or text, making the mute person more independent. Unfortunately, there has not been any system with these capabilities so far: all research to date has been limited to small-scale systems capable of recognizing only a minimal subset of a full sign language. The most complicated part of dynamic hand gesture recognition is sign language recognition, as both local and global motions of the hand carry necessary information in addition to temporal information. In order to recognize even the simplest hand gesture, the hand must first be detected in the image. Once the hand is detected, a complete hand gesture recognition system must be able to extract the hand shape, the hand motion and the spatial position of the hand. Moreover, the hand movement for a particular sign follows certain temporal properties.
Hand gestures are a type of communication that is multifaceted in a number of ways. They provide an attractive alternative to the cumbersome interface devices used for human-computer interaction (HCI); thus, integrating the use of hands into HCI would be of great benefit to users. Smart environments have recently become popular as a means of improving our quality of life, and gesture recognition capabilities implemented in embedded systems are very beneficial in such environments for providing various apparatuses with efficient HCI. Real-time processing is an essential requirement for using hand signs in HCI. Since real-time recognition incurs very high computational costs, a powerful full-specification PC is usually necessary to implement recognition systems in software; however, such systems are physically large and consume a large amount of power, which is not suitable for embedded systems. Improvements in field programmable gate arrays (FPGAs) have driven a huge increase in their use in space-, weight- and power-constrained embedded computing systems. Implementation on FPGAs raises the possibility of achieving portable systems that can recognize hand gestures without bulky PCs while decreasing the response time, thanks to their computing power. Hand gestures are generally either hand postures or dynamic hand gestures: hand postures are static hand poses without any movements, whereas dynamic hand gestures are defined as dynamic movements, i.e. a sequence of hand postures. This paper focuses on hand posture recognition and proposes a new real-time hand sign recognition system that integrates all processing tasks, including an image capture circuit with an on-board camera. Human Computer Interaction (HCI) is a sophisticated way of interaction between humans and computers that makes computers more amenable to the user's needs.
HCI provides a great advantage to people with disabilities by visually recognizing the actions
performed by them. It is necessary to provide authenticated HCI because it is very important to prevent
identity theft. Authentication of a person by means of biometrics is more desirable. Among the various
biometric characteristics, the texture pattern in Finger-Knuckle Print (FKP) is more unique and can serve
as a distinctive biometric identifier. It has contact-less image acquisition and ensures better security of the
system. This provides effortless and secure access to the data. Hand sign recognition using a webcam provides more functionality than common computer input devices. The hand sign database is created according to the demands of the user, and the action performed by the user is recognized by the system. Hand sign recognition is done by comparing the images stored in the database with the current input sign. Finally, according to the user requirements, the corresponding action takes place in the system.
Finger knuckle print (FKP) has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user flexibility in positioning fingers also leads to a certain degree of pose variation in the collected query FKP images. The widely used Gabor filtering based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can greatly reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score-level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce the false rejections without greatly increasing the false acceptances. Experimental results on the benchmark PolyU FKP database
show that the proposed method significantly improves the FKP verification accuracy.
II. REVIEW OF KNUCKLE-BASED HAND SIGN ALGORITHMS
The system in [1] transformed preprocessed data of the detected hand into a fuzzy hand-posture feature model by using fuzzy neural networks and, based on this model, determined the actual hand posture by applying fuzzy inference. Fuzzy features such as the distance between the fingertips, the bendiness of each finger, and the angle between the finger joints and the plane of the palm of the given hand were calculated. The correct identification rate is on average above 96%, with the limitation of requiring a homogeneous background. A further limitation of this method is that it uses a fixed input resolution and is not implemented for real-time performance [1]. Detection and tracking of bare hands in a cluttered background was addressed in [10]. For every frame captured from a webcam, the hand was detected, then key points were extracted for every small image containing the detected hand gesture and fed into the cluster model to map them into a bag-of-words vector, which was then fed into a multiclass SVM classifier to recognize the hand gesture. The system achieved satisfactory real-time performance regardless of the frame resolution as well as a high classification accuracy of 96.23%. The important factors that affected the accuracy of the system are the quality of the webcam in the training and testing stages, the number of training images, and the number of clusters chosen to build the cluster model. The system can be made suitable for different input resolutions [10]. A SOM-Hebb classifier, which employed both neuron culling and perturbed training data, together with a novel video processing architecture, was proposed in [7]. The whole recognition algorithm was implemented in hardware using an FPGA connected to an on-board CMOS camera. The system was designed to recognize 24 ASL hand signs, and its accuracy was tested by simulations and experiments. The proposed hand sign recognition system could carry out recognition at a speed of 60 fps and 60 recognitions per second with a recognition accuracy of 97.1% [7]. The flexibility in positioning of fingers leads to a certain degree of pose variation in the collected query FKP images. The Gabor filtering based competitive coding scheme is sensitive to such variations and results in many false rejections. To alleviate this problem, the query samples were reconstructed with a dictionary consisting of samples in the gallery set [3]. The reconstructed FKP image could reduce the matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. To enhance the FKP matching process, a reconstruction-based matching scheme was proposed to reduce the enlarged matching distance. An adaptive binary fusion (ABF) rule was proposed to make the final decision by fusing the matching scores before and after
reconstruction. The ABF ensured a good reduction of false rejections without greatly increasing the false acceptances, which leads to much lower equal error rates than state-of-the-art methods. One of the advantages of FKP verification lies in its user friendliness in data collection. This method significantly improves the FKP verification accuracy; however, it has a high algorithmic complexity [3]. A SIFT hardware accelerator, the fastest used so far to implement the SIFT algorithm, was presented in [5]. It has two hardware components, one for key-point identification and the other for feature descriptor generation. The key-point identification part has three stages: 1) Gaussian filtering and DoG image computation; 2) key-point detection; and 3) gradient magnitude and orientation computation. Stage 1 was designed to build the Gaussian-filtered images and difference-of-Gaussian images, which are used as inputs of stage 2 and stage 3. Stage 2 received DoG images to identify the key points with stability checking. Stage 3 was used to compute the gradient magnitude and orientation of each pixel in the Gaussian images. The feature descriptor generation part received the locations (x and y coordinates) of the key points from stage 2 and the related gradient histograms from stage 3. The processing time of the key-point identification was only 3.4 ms for one video graphics array (VGA) image. One of the major contributions of that work is an architecture that used rotating SRAM banks as a segment buffer supporting sliding-window operation. The accuracy can be determined by comparing the results produced by the SIFT accelerator to those obtained from a SIFT software implementation. It has a few disadvantages: it requires a large internal memory and complex hardware logic, and it has a high computational complexity [5]. The system in [9] consisted of two modules: a hand gesture
detection module and hand gesture recognition module. The detection module could accurately locate the
hand regions with a blue rectangle; this is mainly based on the Viola-Jones method. In the recognition module,
the Hu invariant moments feature vectors of the detected hand gesture were extracted and a Support Vector
Machines (SVMs) classifier was trained for final recognition[9]. Adaptive skin colour detection and motion
detection were combined to generate hand location hypotheses. Histograms of oriented gradients (HOG)
were used to describe hand regions. Extracted HOG features were projected into low dimensional subspace
using PCA-LDA. In the subspace, a nearest neighbor classifier was applied to recognize gestures.
Experimental results showed that the proposed method attained a detection rate of up to 91% and could process in real time. Its disadvantages are that it is highly sensitive to illumination, not reliable, and requires expensive computation to achieve scale invariance [6]. The concept of object-based video abstraction was used for
segmenting the frames into video object planes (VOPs), as used in MPEG-4, where hand is considered as a
video object (VO). The redundant frames of the gesture video sequence were discarded and the key hand poses (key video object planes (VOPs) in the MPEG-4 domain) were used for summarization and recognition. The palm and finger motions were represented in terms of the shape change of the hand, while the global hand motion was represented in terms of the motion trajectory in 2D space. The proposed gesture representation could handle different gestures consisting of different numbers of states. The key-frame based gesture representation was the summarization of the gesture with a finite number of gesture states. In the case of global hand motion, the trajectory estimated from the corresponding VOPs carried spatial information regarding the hand positions in the gesture sequence, which was utilized successfully in the gesture classification stage. However, the results obtained by VOP extraction based on edge change detection suffered from boundary inaccuracy. Change detection methods do not support a moving background, and as the number of objects in the image increases, the boundary inaccuracy of change detection methods increases. Douglas Chai et al.
(2002) have described a fast, reliable, and effective algorithm that exploits the spatial distribution
characteristics of human skin color. The face location was identified by performing region segmentation with the use of a skin color map. Color analysis and the different color space models were studied. The three color space models studied were RGB, HSV and YCbCr; these are some, but certainly not all, of the color space models available in image processing. The YCbCr color space was effective because the chrominance information can be used for modeling human skin in this color space, avoiding the extra computation required in conversion. The HSV color space was compatible with human color perception, and the hue and saturation components were reported to be sufficient for discriminating color information for modeling skin color. However, this color space was not suitable for video coding. A normalized RGB color space was
employed to minimize the dependence on the luminance values. Therefore, it is important to choose the
appropriate color space for modeling human skin color. The factors that need to be considered are application
and effectiveness. The intended purpose of the face segmentation will usually determine which color space
to use and at the same time, it is essential that an effective and robust skin color model can be derived from
the given color space [4]. Identification and verification are commonly done by passwords, PIN numbers, etc., which
are easily cracked by others. Biometrics is a powerful and unique tool based on the anatomical and behavioral characteristics of human beings used to prove their authenticity. Reference [8] proposes a novel biometric recognition methodology named the finger knuckle print (FKP). It focuses on the extraction of FKP features using the Scale Invariant Feature Transform (SIFT); the key points derived from the FKP are clustered using the K-Means algorithm. The centroid of K-Means is stored in the database and is compared with the query FKP K-Means centroid value to prove recognition and authentication. The comparison is based on the XOR operation. Hence this work provides a novel recognition method for authentication; experiments are performed on the PolyU FKP database to evaluate the proposed FKP recognition method. Biometrics is defined as the measurement of human body characteristics such as the fingerprint, eye, retina, voice pattern, iris and hand geometry. It is a powerful and unique tool based on the anatomical and behavioral characteristics of human beings. The anatomical characteristics most used for security applications are the fingerprint, iris, face and palm print. Apart from anatomical characteristics, behavioral characteristics like voice, signature and gait are also used to recognize the user. Most biometric systems presently used in real-time applications typically use a single biometric characteristic to authenticate a user, so authentication plays a major part in secure communication. Currently, passwords and smartcards are used as the authentication tools for verifying the authorized user. However, passwords are easily cracked by dictionary attacks, and smart cards can be stolen, after which it is impossible to check who the authorized user is. Biometrics is thus a remedy for these problems. This paper discusses the new biometric identifier named the finger knuckle print. Much research is being carried out on this new emerging biometric because it is also a unique characteristic, like the fingerprint, iris, etc., that can prove the genuine user. The finger knuckle print recognition system consists of data acquisition, ROI extraction, feature extraction, coding and matching processes. The features of the FKP are extracted by Gabor filtering of the cropped region of interest, and the Gabor filter features are matched to recognize the user. The average values of global and local orientation features of the finger back surface (FKP) were analyzed and matched with the Fourier transform and Gabor filter respectively. Zhang et al. proposed score-level fusion for FKP performed with phase congruency, local features and local phase features. A novel approach extracted invariant features as key points for object recognition: using the Hough transform, the local information of the FKP was accessed using the Scale Invariant Feature Transform (SIFT), and the Speeded Up Robust Features (SURF) algorithm was used to obtain key points from scale- and rotation-invariant features, which were matched to prove user authentication. Ajay et al. proposed a new method of triangulation, which is used to authenticate hand vein images of the finger knuckle surface of all fingers [8].
III. PROPOSED ALGORITHM
In the proposed system, the authentication of the person is done by capturing the knuckle print by
using the webcam and matching it with the pattern stored in the database. Further use of the computer is allowed only if the patterns match. Hand signs are then captured by the webcam and processed to perform the corresponding action. We broadly classify our project into two modules: knuckle print authentication and hand sign recognition. The overall block diagram of knuckle print authentication and hand sign recognition is shown in Figure 3.1.

Fig 3.1 Knuckle Print Authentication and Hand sign Recognition


The knuckle pattern is captured using the web camera. The features of the captured knuckle pattern are extracted and compared with the pre-loaded images in the database. If the captured knuckle pattern does not match any image in the database, "invalid user" is displayed and further access to the system is denied. If the captured knuckle pattern matches any one of the images in the database, the user is authenticated and allowed to communicate with the system using hand signs. In the hand sign recognition system, the hand posture is captured using the web camera. The features of the hand sign image are extracted and matched with the pre-loaded signs in the database, and the corresponding action is performed depending on the user input. The texture pattern in the finger-knuckle print (FKP)
is highly unique and thus can serve as a distinctive biometric identifier. The use of FKP to recognize subjects was first exploited in work where the FKP is represented by a curvature-based shape index. The FKP is transformed using the Fourier transform, and band-limited phase-only correlation (BLPOC) is employed to determine the similarity between two FKP images. The Gabor filter based competitive code (CompCode) has been used to extract local features, and two FKP images are matched with the help of these local features. Features extracted from the FKP with the help of principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA) are used for matching. However, these subspace analysis methods generally fail to extract the distinctive line and junction features from the FKP images. Features such as orientation and magnitude information (ImCompCode & MagCode) extracted by the Gabor filter have been considered for comparing two FKP images. Global and local features and their combination are used to represent FKP images for recognition: the Fourier coefficients of the image have been taken as the global features, while orientation information from the Gabor filters is considered as local features, and a weighted sum rule is used to fuse the local and global matching scores. Local features induced by the phase congruency (PC) model have also been used, with orientation and phase information used in addition to phase congruency, and score-level fusion is used to fuse these features to improve the FKP recognition accuracy. However, to the best of the authors' knowledge, there does not exist any technique to index the features of FKP images to minimize the search space. The problem of an FKP based identification system is to find the top t best matches for a given query from the database of FKP images. One traditional way to find the top t best matches of the query image is to search all FKP images in the database; the process of retrieving these images from the database is found to be computationally expensive. For a large database, there is a need to design an efficient indexing technique to reduce the cost of searching, because an indexing technique performs better than clustering and classification techniques. The following characteristics are to be taken into consideration while indexing features of FKP images.
- The number of features extracted from an image is not fixed.
- The number of extracted features is generally large.
- The number of features of two images of the same subject obtained at two different instants of time may not be the same.
- There may be partial occlusion in the image.
- A query image may be rotated and translated with respect to the corresponding image in the database.
It is expected that any indexing technique should support all the above issues efficiently. Geometric hashing is the indexing technique found to be most suitable for indexing the features of FKP images, as it can handle these issues efficiently: (i) it can index a variable number of features of high dimensionality (generally >500 features per image); (ii) it can index the features of all images in the database in a single hash table efficiently; (iii) for any new image, extracted features can be added to the hash table without modifying the existing hash table; and (iv) it can handle rotation and partial occlusion. However, it involves high memory and computational cost, as each feature is redundantly inserted into the hash table to handle rotation and occlusion. In this paper, geometric hashing is boosted in such a way that it inserts each feature exactly once, which reduces both memory and computational cost significantly without compromising the recognition accuracy, while still handling rotation and partial occlusion efficiently.
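To make the single-insertion idea concrete, the following is a minimal sketch of a geometric-hashing style index in Python; the basis choice (the first two key points) and the bin size are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np
from collections import defaultdict

def hash_key(point, basis_origin, basis_point, bin_size=0.25):
    """Express a point in the frame defined by a basis pair and quantize it,
    giving rotation/scale invariance of the key."""
    d = np.asarray(basis_point, float) - np.asarray(basis_origin, float)
    scale, angle = np.linalg.norm(d), np.arctan2(d[1], d[0])
    rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                    [np.sin(-angle),  np.cos(-angle)]])
    local = rot @ ((np.asarray(point, float) - basis_origin) / scale)
    return tuple(np.floor(local / bin_size).astype(int))

def build_hash_table(images):
    """images: {image_id: list of (x, y) key-point locations}.
    One fixed basis per image, so each feature is inserted exactly once."""
    table = defaultdict(list)
    for image_id, points in images.items():
        origin, second = points[0], points[1]
        for p in points[2:]:
            table[hash_key(p, origin, second)].append(image_id)
    return table

def query(table, points):
    """Vote for database images that share quantized feature geometry."""
    votes = defaultdict(int)
    origin, second = points[0], points[1]
    for p in points[2:]:
        for image_id in table.get(hash_key(p, origin, second), []):
            votes[image_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```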
In this knuckle print authentication, the knuckle pattern is captured using a camera; the captured image then undergoes the following stages.
i. Image Pre-processing
a. Image Resize
The captured image is converted into a standard size (256x256). It is necessary to resize the image because not all cameras have the same resolution.
b. Grayscale Conversion
Grayscale images are black-and-white images. In photography, a grayscale digital image is an image
in which the value of each pixel is a single sample, that is, it carries only intensity information.
c. Median Filtering
The median filter is a nonlinear digital filtering technique often used to remove noise. Noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.
d. Histogram Equalization
This method usually increases the contrast of many images, especially when the image data are represented by close contrast values. Through this adjustment, the intensities are better distributed over the histogram, allowing areas of lower contrast to gain a higher contrast. The following steps are performed to obtain histogram equalization:
1. The frequency of each pixel value is obtained in matrix form.
2. The probability of occurrence of each pixel value is calculated:
probability of pixel value = frequency of pixel value / number of pixels
3. The cumulative histogram is accumulated:
cumulative histogram of pixel 1 = a
cumulative histogram of pixel 2 = a + frequency of pixel 2 = b
cumulative histogram of pixel 3 = b + frequency of pixel 3 = c
4. The cumulative distribution probability (cdf) of each pixel value is found using:
cdf of pixel 1 = a / number of pixels
5. The final value of each pixel is calculated by multiplying its cdf by (number of bins - 1), e.g. final value of pixel 1 = (number of bins - 1) x cdf of pixel 1.
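As an illustration of the whole pre-processing chain above, a minimal OpenCV sketch could look as follows; the 3x3 median kernel and file-based input are assumptions for the example, not details from the paper.

```python
import cv2

def preprocess_knuckle(path):
    """Pre-processing chain: resize, grayscale, median filter, hist. equalization."""
    image = cv2.imread(path)                        # BGR image from disk
    image = cv2.resize(image, (256, 256))           # standard 256x256 size
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # keep only intensity
    denoised = cv2.medianBlur(gray, 3)              # 3x3 median filter removes noise
    equalized = cv2.equalizeHist(denoised)          # CDF-based contrast stretching
    return equalized
```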
ii. Feature Extraction
The features of the pre-processed image are extracted using the Scale-Invariant Feature Transform (SIFT) algorithm. SIFT is a computer vision algorithm to detect and describe local features in images. The SIFT key points of objects are extracted from reference images and stored in the database. An object is recognized in a new image by individually comparing each feature from the new image to this database and finding matching features. From the full set of matches, the subset of key points that agree on the object and its features is identified in the new image to filter out good matches. SIFT image features provide a set of features of an object that are not affected by many of the complications experienced in other methods, such as object scaling and rotation. The four major steps in SIFT are scale-space extrema detection, key-point localization, orientation assignment and key-point descriptor generation. The main idea behind the detection of scale-space extrema is to identify stable features from texture that remain invariant to changes in scale and viewpoint. This technique is implemented efficiently by using a difference-of-Gaussian (DoG) function to identify potential key-points. DoG images are used to detect key-points with the help of local maxima and minima across different scales. Each pixel in a DoG image is compared with its eight neighbours in the same scale and its nine neighbours in each of the neighbouring scales. The pixel is selected as a candidate key-point if it is either a local maximum or a local minimum. An orientation is assigned to each key-point to achieve invariance to image rotation; to determine the key-point orientation, a gradient orientation histogram is computed in the neighbourhood of the key-point. The feature descriptor is computed as a set of orientation histograms on pixel neighbourhoods. Each histogram contains eight bins, and each descriptor contains a 4x4 array of histograms around the key-point. This generates a SIFT feature descriptor D of length 128.
a. Scale-space peak selection
A difference-of-Gaussian (DoG) function is used to identify potential interest points that are invariant to scale and orientation; the DoG is mainly used to detect edges in the image. Each sample point is compared with its eight closest neighbours in the same image and its nine neighbours in each of the scales above and below. This is done to detect the locations of all local maxima and minima, i.e., the peaks of the DoG images across different scales.
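The 26-neighbour comparison described above can be expressed directly; this is a minimal NumPy sketch, assuming the three DoG layers are given as 2-D arrays and that (y, x) is not on the image border. A tie at the centre value counts as an extremum here, which a real implementation would refine.

```python
import numpy as np

def is_scale_space_extremum(dog_below, dog_mid, dog_above, y, x):
    """Check whether dog_mid[y, x] is a peak among its 26 neighbours:
    8 in the same scale plus 9 in each neighbouring scale."""
    cube = np.stack([dog_below[y-1:y+2, x-1:x+2],
                     dog_mid[y-1:y+2, x-1:x+2],
                     dog_above[y-1:y+2, x-1:x+2]])
    centre = dog_mid[y, x]
    return centre == cube.max() or centre == cube.min()
```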
b. Key-point localization
The points that have low contrast or that are poorly localized along the edges are eliminated because
they are sensitive to noise. Key-points are selected based on measures of their stability.
c. Orientation Assignment
By assigning a consistent orientation to each key-point based on local image properties, the key-point descriptor can be represented relative to this orientation and therefore achieves invariance to image rotation. For each image sample L(x, y), the gradient magnitude m(x, y) and orientation theta(x, y) are pre-computed using pixel differences:

m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )
theta(x, y) = tan^-1( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )
d. The local image descriptor
A key-point descriptor is created by first computing the gradient magnitude and orientation at each image sample point; these samples are then weighted by a Gaussian window and accumulated into orientation histograms. This allows for wider local position shifts. While allowing an object to be recognized in a larger image, SIFT image features also allow objects in multiple images of the same location, taken from different positions within the environment, to be recognized. SIFT features are also very resilient to the effects of noise in the image.
iii.Matching
The features of the captured knuckle pattern are compared with the pre-loaded images in the database. If the captured knuckle pattern does not match any image in the database, "invalid user" is displayed. If the captured knuckle pattern matches, the user is authenticated and the system moves to its second module, i.e., the hand sign recognition system.
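A sketch of how the SIFT extraction and database comparison could be wired together with OpenCV (version 4.4 or later, where cv2.SIFT_create is available) is shown below; the 0.75 ratio-test threshold and the vote-count acceptance rule are illustrative assumptions, not values from the paper.

```python
import cv2

sift = cv2.SIFT_create()

def sift_descriptors(gray_image):
    """Detect SIFT key points and compute their 128-dimensional descriptors."""
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return descriptors

def match_against_database(query_desc, database):
    """database: {user_name: template descriptors}. Returns the best match."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    best_user, best_score = None, 0
    for user, template_desc in database.items():
        matches = matcher.knnMatch(query_desc, template_desc, k=2)
        # Lowe's ratio test keeps only distinctive correspondences.
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_user, best_score = user, len(good)
    return best_user, best_score
```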
a. Gesture Image Identification
Gesture image identification is tested in the same way as knuckle identification, and an algorithm is applied to obtain good robustness in feature extraction, image quality and verification. The algorithm used here is SURF, one of the best algorithms for improving image quality handling, and it performs better than the SIFT algorithm.
b. SURF Algorithm
SURF (Speeded Up Robust Features), like SIFT, describes a key-point detector and descriptor. It makes use of the Hessian matrix for the detection of key-points. For a pixel P(x, y) of an image I, the Hessian matrix H(P, sigma) at scale sigma can be defined as

H(P, sigma) = | L_xx(P, sigma)  L_xy(P, sigma) |
              | L_yx(P, sigma)  L_yy(P, sigma) |

where L_xx(P, sigma), L_xy(P, sigma), L_yx(P, sigma) and L_yy(P, sigma) are the convolutions of the Gaussian second-order derivatives with the image I at pixel P. Key-points are found by using a so-called Fast-Hessian detector based on an approximation of the Hessian matrix at a given image point. The responses to Haar wavelets are used for orientation assignment before the key-point descriptor is formed from the wavelet responses in a certain surrounding of the key-point. The descriptor vector has a length of 64 floating point numbers but can be extended to a length of 128. As this did not significantly improve the results in our experiments but rather increased the computational costs, all results refer to the standard descriptor length of 64.

To speed up the computation, the second-order Gaussian derivatives in the Hessian matrix are approximated using box filters. To detect key-points at different scales, a scale-space representation of the image is obtained by convolving it with the box filters. The scale space is analyzed by up-scaling the filter size. A non-maximum suppression in a 3x3x3 neighbourhood is applied to localize the key-points over scale. A rectangular window around each detected key-point is used to compute its descriptor, termed the key-point descriptor. The window is split into 4x4 sub-regions, and for each sub-region Haar wavelet responses are extracted.
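Under the assumption that an opencv-contrib build is available (SURF is patented and lives in the non-free xfeatures2d module), the SURF detector described above can be sketched as:

```python
import cv2

# The Fast-Hessian threshold of 400 is a common illustrative default.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
surf.setExtended(False)  # keep the standard 64-float descriptor, as in the text

def surf_descriptors(gray_image):
    """Detect SURF key points with the Fast-Hessian detector and describe them."""
    keypoints, descriptors = surf.detectAndCompute(gray_image, None)
    return keypoints, descriptors
```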
IV. RESULTS AND DISCUSSIONS
a. Introduction
The CRR values of the 256x256 knuckle image were evaluated using MATLAB by implementing the image resize technique. From the values, we show that the proposed SURF algorithm has better quality in identifying the knuckle image for more secure authentication.

b. Pre-Processing Result
Fig 4.1 shows the pre-processing, which includes grayscale conversion, histogram equalization and median filtering for a clearly visible output: the given input is converted into a grayscale image, histogram equalization is computed, and the median filtering technique is then applied.
c. Database Creation Result

Fig 4.2 shows the database creation: the given image is stored in the database and is later compared with the input image from the web camera attached to the system.
Fig 4.3 shows the detected output after the input image is compared with the database: the given input image is stored with the user name, and when the input image matches, the detected user name is shown.


Fig 4.4 shows that when there is any change in the direction of the knuckle, the input image is not matched with the database and an error report is produced: "cannot detect the knuckle print, show me the knuckle print image".

Fig 4.5 shows the SURF and SIFT algorithms: once the input image is pre-processed and the detection is verified, the proposed SURF and SIFT algorithms are applied and compared to show that SURF is the faster and more efficient algorithm.
V. CONCLUSION
The novel SIFT algorithm is compared with the new algorithm, which requires several stages of pre-processing. We found that all considered approximate transforms perform very close to the ideal SURF. However, the proposed transform possesses a lower computational complexity and is faster than all other approximations under consideration. In terms of security identification, knuckle authentication is a challenging process in terms of image quality improvement. Hence the newly proposed transform is the best algorithm for knuckle identification in terms of image selection and also in terms of image quality metrics such as CRR and hit rate. The optimum threshold values for the SIFT and SURF feature extractors to achieve the maximum CRR of 96.36% and 99.69% respectively are found to be 0.2 and 0.09, with a sampling step of 4. Future work includes the implementation of the hand sign technique in a VLSI hardware kit and the approximation of versions of various hand signs to provide a better authentication prototype.
REFERENCES
[1] Annamaria R., Balazs Tusor and Varkonyi-Koczy (2011), "Human-Computer Interaction for Smart Environment Applications Using Fuzzy Hand Posture and Gesture Models", IEEE Transactions on Instrumentation and Measurement, vol. 60, pp. 5-15.
[2] David G. Lowe (2004), "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, pp. 91-110.
[3] David Zhang, Guangwei Gao, Jian Yang, Lei Zhang and Lin Zhang (2013), "Reconstruction Based Finger-Knuckle-Print Verification With Score Level Adaptive Binary Fusion", IEEE Transactions on Image Processing, vol. 22, pp. 12-25.
[4] Douglas Chai and King N. Ngan (2002), "Face Segmentation Using Skin-Color Map in Videophone Applications", IEEE Transactions on Circuits and Systems for Video Technology, vol. 09, issue no. 04.
[5] Feng-Cheng Huang, Ji-Wei Ker, Shi-Yu Huang and Yung-Chang Chen (2012), "High-Performance SIFT Hardware Accelerator for Real-Time Image Feature Extraction", IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, pp. 3-15.
[6] Hanqing Lu, Jian Cheng, Kongqiao Wang and Yikai Fang (2007), "A Real-Time Hand Gesture Recognition Method", IEEE International Conference on Multimedia and Expo, vol. 11, pp. 995-998.
[7] Hiroomi Hikawa and Keishi Kaida (2015), "Novel FPGA Implementation of Hand Sign Recognition System with SOM-Hebb Classifier", IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, issue no. 01.
[8] Kannan S. and Muthukumar A. (2013), "Finger knuckle print recognition with SIFT and k-means algorithm", ICTACT Journal on Image and Video Processing, vol. 03, issue no. 03.
[9] Liu Yun and Zhang Peng (2009), "An Automatic Hand Gesture Recognition System Based on Viola-Jones Method and SVMs", IEEE Transactions on Computer Science and Engineering, vol. 02, pp. 72-76.
[10] Nasser H. Dardas and Nicolas D. Georganas (2011), "Real-Time Hand Gesture Detection and Recognition Using Bag-of-Features and Support Vector Machine Techniques", IEEE Transactions on Instrumentation and Measurement, vol. 60, pp. 11-27.
[11] http://fourier.eng.hmc.edu/e161/lectures/gradient/node9.html


Genetically Optimized UPFC Controller for Improving Transient Stability
S.Sharaenya#1 and C.S.Manikandababu*2
#Student, Electrical and Electronics Engineering, M.Kumarasamy College of Engineering, Karur, Tamilnadu
*Professor, Electrical and Electronics Engineering, M.Kumarasamy College of Engineering, Karur, Tamilnadu

Abstract - An electrical power system is a complex network consisting of numerous generators, transformers, transmission lines and a variety of loads. If the power demand equals the power generated, the system is stable. Transient stability is a complicated problem for power transmission lines in maintaining synchronism. Sometimes the total power losses, cascading outages of transmission lines, etc., increase due to line loadability, and the system can collapse. To avoid this, the power system is analyzed with various optimization methods. The New Voltage Stability Index is a method used to determine the critical line among the transmission lines. FACTS devices can play an important role in improving the transient stability of the power system. The UPFC is one of the FACTS controllers that can control both the real and reactive power of transmission lines. The UPFC needs to be placed at the optimal location in the system, and this effective control strategy is achieved by a genetic algorithm. A PID controller is employed for controlling the UPFC, and the results are obtained by placing the UPFC at the optimal location on the transmission lines. The transient stability of a seven-bus system is studied under an external disturbance in the MATLAB/SIMULINK environment.
Index Terms - Flexible AC Transmission System (FACTS), Unified Power Flow Controller (UPFC), New Voltage Stability Index (NVSI), Genetic Algorithm (GA)

I. INTRODUCTION
An electric power system is a network of electrical components that is used to supply and transfer
the power to the load to satisfy the required demand. The transmission lines get overloaded when the demand on the lines increases, due to which the system becomes complex, and this leads to serious stability issues. The most important stability issue is transient stability [1]. The stability problems of the power system can be effectively addressed by the use of FACTS devices. The FACTS device is employed here in order to transfer the exceeding power from the generator to the system.
FACTS devices are the most versatile devices [2] used for controlling reactive and real power in the transmission line for economic and flexible operation. The main objectives of FACTS devices are:
- Increases power transfer capability
- Controls power flow in specified routes
- Realizes overall system optimization control

FACTS devices include the Static Var Compensator (SVC), Static Synchronous Compensator (STATCOM), Thyristor Controlled Series Capacitor (TCSC), Thyristor Controlled Phase Shifter (TCPS), Static Synchronous Series Compensator (SSSC) and Unified Power Flow Controller (UPFC). The UPFC is used for voltage control applications; it helps to maintain a bus voltage at a desired value during load variations, and it can be made to generate or absorb reactive power by adjusting the firing angle. The major problem with FACTS controllers is identifying [2] the location for installation and the amount of voltage and phase angle to be injected. Stochastic algorithms can be used for locating the FACTS devices on the transmission lines. There are several stochastic algorithms, such as Genetic Algorithms, Differential Evolution, Tabu Search, Simulated Annealing, Ant Colony Optimization, Particle Swarm Optimization and the Bees Algorithm, each with its own advantages. Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and the Bees Algorithm (BA) are a few efficient and well-known stochastic algorithms.
II. UPFC OPERATION AND MATHEMATICAL MODEL
The Unified Power Flow Controller is a combination of parallel and series branches, each consisting of a separate transformer, a power electronic converter capable of turning on and off with semiconductor devices, and a DC circuit [5].

Fig no 1: Principal Configuration of UPFC Model

The DC circuit allows the active power exchange between the shunt and series transformers to control the phase shift of the series voltage; this setup is shown in Figure 1. The DC link in the UPFC is used to filter the ripple content. Fig 2 shows the UPFC equivalent circuit; the parameters are connected between bus i and bus j, and the voltages and angles at buses i and j are V_i, V_j, theta_i and theta_j respectively. The main advantage of the UPFC is its ability to control the active and reactive power flows in the transmission line.

Fig no 2: UPFC Equivalent Circuit

The real power (P_ij) and reactive power (Q_ij) flowing from bus i to bus j, and the real power (P_ji) and reactive power (Q_ji) flowing from bus j to bus i, can be written in the usual load-flow form (equations (1)-(4)). The operating constraint of the UPFC (the active power exchange via the DC link) is

P_E = P_sh + P_ser = 0    ------------- (5)

where P_E is the active power exchange, P_sh is the real power of the shunt transformer and P_ser is the real power of the series transformer. The value of P_sh is found from Re(V_sh I_sh^*) and the value of P_ser is found from Re(-V_ser I_ser^*). The UPFC absorbs and generates reactive power by adjusting the firing angle, so in the operating constraint of the UPFC the sum of the real powers of the series and shunt transformers is made equal to zero.
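As a numerical illustration of constraint (5), the two converter powers can be computed from complex phasors; the per-unit phasor values below are made-up examples, not data from the paper.

```python
import numpy as np

# Hypothetical converter voltage and current phasors (per unit).
V_sh, I_sh = 1.02 * np.exp(1j * 0.05), 0.30 * np.exp(-1j * 0.40)
V_ser, I_ser = 0.10 * np.exp(1j * 1.20), 0.95 * np.exp(1j * 0.10)

P_sh = (V_sh * np.conj(I_sh)).real       # real power of the shunt transformer
P_ser = (-V_ser * np.conj(I_ser)).real   # real power of the series transformer

# The DC link enforces P_E = P_sh + P_ser = 0; a controller drives this to zero.
P_E = P_sh + P_ser
print(f"P_sh={P_sh:.4f}, P_ser={P_ser:.4f}, P_E={P_E:.4f}")
```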
III. LOCATION CRITERIA OF UPFC
The location of FACTS devices is based on several criteria, which include sensitivity-based approaches, artificial intelligence methods, the point of voltage collapse method, frequency response, stability indices and control theories [6]. FACTS devices are placed in the power network for different reasons, and their locations can be determined by applying different techniques. The New Voltage Stability Index is the technique proposed here to locate the device.
A. NEW VOLTAGE STABILITY INDEX
Figure 3 shows a line model consisting of two buses, bus 1 and bus 2. The apparent powers at bus 1 and bus 2 are P_1 + jQ_1 and P_2 + jQ_2, and the current I passes through the reactance X of the transmission line. The NVSI can be derived mathematically from Fig. 3 as follows.

Fig No 3: Line Model

Taking suffix i as the sending-end bus and j as the receiving-end bus, the NVSI can be defined as

NVSI = 2X sqrt(P_2^2 + Q_2^2) / (2 Q_2 X - V_1^2)    ------------- (13)

where NVSI is the New Voltage Stability Index, V_1 is the voltage magnitude of bus 1, and P_2 and Q_2 are the real and reactive power of bus 2. If equation (13) is equal to 1.00, the respective line is said to be maintained stable in the transmission network.
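A direct implementation of equation (13) might look as follows; the per-unit line data in the example are invented for illustration.

```python
import math

def nvsi(x_line, p_recv, q_recv, v_send):
    """New Voltage Stability Index of one line (all values in per unit).
    x_line: line reactance, p_recv/q_recv: receiving-end real and reactive
    power, v_send: sending-end voltage magnitude. The magnitude of the
    index is what gets compared against the 1.00 limit."""
    return abs(2 * x_line * math.hypot(p_recv, q_recv)
               / (2 * q_recv * x_line - v_send ** 2))

# Hypothetical per-unit line data: (X, P_2, Q_2, V_1) for each line.
lines = {"1-2": (0.10, 0.9, 0.4, 1.02), "2-3": (0.08, 0.5, 0.2, 1.01)}
critical_line = max(lines, key=lambda name: nvsi(*lines[name]))
print(critical_line, {name: round(nvsi(*data), 3) for name, data in lines.items()})
```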
IV. GENETIC ALGORITHM
The genetic algorithm is inspired by the evolutionary theories proposed by Darwin in the 19th century [9]. The genetic algorithm is used to solve complex optimization problems. It is a global search technique based on the mechanisms of natural selection and genetics, and it can search several possible solutions simultaneously. The GA starts with the random generation of an initial population. Figure 4 shows the flow chart of the genetic algorithm. The first step is to initialize the variables that need to be taken into account. The optimal solution is found through the fitness function, which is used to find the best solution suited to the system; if it is not satisfied, the algorithm goes to the selection process, where thousands of solutions are produced. Reproduction is the core of the evolution process, where generational variations of a population are determined by genetic crossover and by random mutations that may occur. Reproduction is the process of mixing genetic material from the parents, and it generates a quicker evolution compared to the one that would result if all descendants contained a copy of their parents. A possible solution of the problem is represented as a chromosome, a bit string codified with 0s and 1s. Individuals are evaluated through a function which measures their ability to solve the problem. The new population evolves based on random operators using reproduction, mutation and crossover, and the evolution exits the cycle when the stop criterion is reached. The main application of the genetic algorithm here is to tune the controller parameters viewed as an optimization problem [10-12], and this is done with the defined objective function and time-domain simulation. The genetic algorithm works on chromosomes, which are encoded versions of the potential solution parameters rather than the parameters themselves. It overcomes problems that occur in analytical solutions. Figure 4 represents the flow chart of the genetic algorithm.
ALGORITHM
Step 1: Start the algorithm to find an effective solution.
Step 2: Initialize the required parameters, such as population size, number of populations and number of generations.
Step 3: From the initialized parameters, the possible solutions are generated; these consist of thousands of solutions for the estimated optimization problem.
Step 4: At T = 0, check whether the optimum solution for the problem has been found. The optimum solution is selected from the possible solutions generated through the fitness function.
Step 5: If the optimum solution is found, go to stop; otherwise go to the next step.
Step 6: The selection process is carried out, selecting the individuals best suited to the problem.
Step 7: The selected individuals reproduce various chromosomes, called child chromosomes.
Step 8: The child chromosomes are combined with the parent chromosomes, and from the various child chromosomes a suitable solution is selected. Each chromosome consists of various solutions.
Step 9: The duration of time T is increased by 1, which leads to T+1.
Step 10: The optimum solution is again checked against the problem and the same process is repeated until a solution is found.
Step 11: Stop the algorithm when the optimum solution is found.
A compact sketch of these steps is given below.
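This sketch uses real-valued chromosomes (the controller gains K_p, K_i, K_d) instead of the bit strings described above, and a toy quadratic fitness function standing in for the time-domain simulation; both are simplifying assumptions for illustration only.

```python
import random

def fitness(gains):
    """Toy stand-in for the simulated rotor speed deviation objective;
    the real fitness in the paper comes from MATLAB/SIMULINK runs."""
    kp, ki, kd = gains
    return -((kp - 2.0) ** 2 + (ki - 0.5) ** 2 + (kd - 0.1) ** 2)

def genetic_algorithm(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.uniform(0, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection: rank by fitness
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)           # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:    # random mutation
                child[random.randrange(3)] += random.gauss(0, 0.2)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(genetic_algorithm())  # best (K_p, K_i, K_d) found
```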
Tuning the controller parameters can be viewed as an optimization problem in the power system, because a good setting of the controller will yield good performance. The genetic algorithm is the most widely used algorithm for this. Some of the traditional tuning methods do not guarantee optimal parameters, and in most cases the tuned parameters need improvement through trial and error, whereas in the genetic algorithm the tuning process depends on the optimality concept through the objective function and the time-domain simulation. In the present study, the genetic algorithm is employed for the optimal tuning of the UPFC controller. Table 1 shows the specified parameters for the GA.

Fig no 4: Flow Chart of Genetic Algorithm


Table 1: Parameters used in the genetic algorithm

V. UPFC PARAMETERS FOR OPTIMIZATION
This section explains the UPFC parameters that need to be considered during optimization.
1. The series voltage source and angle (V_ser, theta_ser) and the shunt voltage source and angle (V_sh, theta_sh) of the UPFC are considered as the variables. These variables are adjusted as per the operation.
2. The main idea of these UPFC variables is to optimize indirectly by adjusting the active and reactive power on the line and also the bus voltage magnitude of the specified line in the system.
The main aim of the optimization is to determine the critical line where instability exists in the transmission lines.
VI. DESIGN OF DAMPING AND INTERNAL UPFC CONTROLLER PARAMETERS
The UPFC has two internal controllers, the bus voltage controller and the DC voltage controller, which are used for controlling the internal parameters of the UPFC. Here a PID controller is used for the UPFC control problems. The power system stabilizer is provided to improve the damping of power system oscillations [8, 13]. The feedback is controlled by a proportional-integral-derivative (PID) controller, which continuously calculates an error value, defined as the difference between a measured process variable and the desired set point. The output of the PID controller, equal to the input of the plant, is represented in the time domain as:

u(t) = K_p e(t) + K_i ∫ e(t) dt + K_d de(t)/dt    ---------- (14)

where e is the tracking error, u is the control signal, K_p is the proportional gain, K_i is the integral gain and K_d is the derivative gain. Equation (14) shows that the PID controller is a combination of P, I and D actions: K_p e(t) is the proportional term, K_i ∫ e(t) dt is the integral term, and K_d de(t)/dt is the derivative term.
The transfer function of the PID controller for the UPFC is found by taking the Laplace transform of equation (14):

U(s) = (K_p + K_i/s + K_d s) E(s)    ------------- (15)

U(s)/E(s) = K_p + K_i/s + K_d s    ------------- (16)

The PID terms are co-dependent, with the derivative term updated from the integral term as supplied data. By using the PID controller the following advantages exist in the system:

- Fast operation
- Low offset
- Low overshoot
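A discrete-time realization of equation (14) can be sketched as follows; the sample time and gain values are placeholders, not tuned values from the paper.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement                    # tracking error e(t)
        self.integral += error * self.dt                  # integral term
        derivative = (error - self.prev_error) / self.dt  # derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a measured bus voltage toward 1.0 p.u.
controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
control_signal = controller.update(setpoint=1.0, measurement=0.96)
```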
VII. SYSTEM DESCRIPTION
A three-phase, seven-bus system is taken into account to show the performance of the UPFC on the behaviour of the power system. Two buses in the system are used as generator buses, while the remaining five buses are used as load buses. In the Simulink model, a three-phase winding transformer is used for delivering the power to the load. One block in the model is used for each bus in order to view the real and reactive power of the lines.
VIII. SIMULINK MODEL
The simulated model of the three-phase seven-bus system is used; the model is simulated and the corresponding results for voltage magnitude, phase angle, and real and reactive power are shown in the figures below. Figure 5 shows the Simulink model of the seven-bus system.

Fig no 5: Simulink model of seven bus system

In the Simulink model each bus in the system consists of a phase-locked loop, a phase sequence analyzer, and three-phase instantaneous active and reactive power blocks. The phase-locked loop (PLL) is used to synchronize on a set of variable-frequency signals. The terminator is an important block used for terminating the output signal within the simulation time. An RLC series branch is used, and the two generator buses are used as the sources of the system.

Fig no 6: real and reactive power of source bus

Fig no 7: magnitude and phase angle of source bus

Fig no 8: real and reactive power across load bus



Fig no 9: magnitude and phase angle of load bus

Fig no 10: frequency comparison of buses in system

Fig no 11: system without UPFC

Fig no 12: bus system with UPFC (when external disturbance applied)

Figures 6 and 7 show the real and reactive power and the magnitude and phase angle of the source bus, whereas the remaining five buses are used as load buses, for which the corresponding graphs are shown in Figures 8 and 9. The frequencies of these buses are compared with each other, as shown in Figure 10. Firstly the bus system is operated without any fault; then, in order to evaluate the improvement of transient stability, an external disturbance is applied to the system. The system is simulated for a duration of 5 seconds. The genetic algorithm is an important technique used for solving the optimization problem. The UPFC detects the disturbance that occurs on the system and resolves it, due to which the transient stability of the system improves. The external disturbance exists in the system from 0.1 second to 0.6 second. The fault is cleared after 2 seconds and the stability of the system is restored at around 3 seconds, remaining stable for the rest of the operation. The optimal location of the UPFC is determined by the New Voltage Stability Index, which is maintained at 1.00 to keep the system stable. Compared to existing systems, it is clear that the UPFC plays a very important role in the improvement of transient stability: when a TCSC controller is used it takes 6 seconds to stabilize the system, with the fault cleared around 3 seconds, and when a fuzzy controller is used it takes 5 to 6 seconds to reach stability. So this method is more efficient when compared to the other methods.
IX. CONCLUSION
This paper presents the modelling and optimization of a UPFC controller for improving transient stability. A simple transfer function for the UPFC controller is developed and the various parameters are optimally tuned. The minimization of the rotor speed deviation under the external disturbance is formulated as the optimization problem, and the optimal location of the UPFC controller is found by the genetic algorithm. The simulation results confirm the efficiency of the proposed algorithm and the optimal location of the FACTS device. This algorithm can search several possible solutions simultaneously, is well suited for transient stability, and is practically easy to implement in a power system. The PID controller helped in improving damping and enhancing the transient stability of the system.
REFERENCES
[1] R. K. Ahuja and M. Chankaya, "Transient Stability Analysis of Power System with UPFC Using PSAT", International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 12, Dec. 2012.
[2] N. G. Hingorani and L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems, Piscataway: IEEE Press, 1999.
[3] K. Kawabe and A. Yokoyama, "Transient Stability Improvement by UPFC Using Estimated Sensitivities Based on Thevenin's Theorem and DC Power Flow Method", Journal of International Council on Electrical Engineering, vol. 2, no. 3, 2012, pp. 257-263.
[4] O. P. Dwivedi, J. G. Singh and S. N. Singh, "Simulation and Analysis of Unified Power Flow Controller Using SIMULINK", National Power System Conference (NPSC), 2004.
[5] K. R. Padiyar and A. M. Kulkarni, "Control Design and Simulation of Unified Power Flow Controller", IEEE Transactions on Power Delivery, vol. 13, no. 4, October 1998.
[6] M. Noroozian, L. Angquist, M. Ghandari and G. Anderson, "Use of UPFC for optimal power flow control", IEEE Transactions on Power Systems, vol. 12, no. 4, 1997, pp. 1629-1634.
[7] Ch. Rambabu, Y. P. Obulesu and Ch. Saibabu, "Improvement of voltage profile and reduction of power system losses by using multi-type FACTS devices", International Journal of Computer Applications, vol. 13, no. 2, January 2011.
[8] N. Tambey and M. L. Kothari, "Damping of power system oscillations with unified power flow controller (UPFC)", IEE Proceedings, vol. 150, no. 2, March 2003.
[9] T. K. Mok, H. Liu, Y. Ni, F. F. Wu and R. Hui, "Tuning the fuzzy damping controller for UPFC through genetic algorithm with comparison to the gradient descent training", Electric Power Systems Research, vol. 27, pp. 275-283, 2005.
[10] S. Mishra, P. K. Dash, P. K. Hota and M. Tripathy, "Genetically optimized neuro-fuzzy IPFC for damping modal oscillations of power system", IEEE Transactions on Power Systems, vol. 17, pp. 1140-1147, 2002.
[11] C. Houck, J. Joines and M. Kay, "A genetic algorithm for function optimization: A MATLAB implementation", NCSU-IE, TR 9509, 1995.
[12] S. Panda and R. N. Patel, "Optimal location of shunt FACTS controllers for transient stability improvement employing genetic algorithm", Electric Power Components and Systems, vol. 35, no. 2, pp. 189-203, 2007.
[13] M. A. Abido, "Analysis and assessment of STATCOM-based damping stabilizers for power system stability enhancement", Electric Power Systems Research, vol. 73, pp. 177-185, 2005.
[14] Yong Chang and Zheng Xu, "A novel SVC supplementary controller based on wide area signals", Electric Power Systems Research, article in press.
[15] P. W. Sauer and M. A. Pai, Power System Dynamics and Stability, Pearson Education (Singapore) Pte. Ltd., 2002.
[16] H. Saadat, Power System Analysis, Tata McGraw-Hill, New Delhi, 2002.
[17] P. Kundur, Power System Stability and Control, New York: McGraw-Hill, 1994.


Parallel Combination of Hybrid Filter with Distorted Source Voltage
J.Priyanka devi1, J.Praveen daniel2
Dept. of Electrical and Electronics Engineering, M.Kumarasamy College of Engineering, Karur, Tamilnadu
1priyanka.jey26@gmail.com, 2dani_07@rediffmail.com
Abstract - The use of non-linear devices leads to the injection of harmonics into the power system. A hybrid filter comprising a shunt active filter and shunt passive filters, tuned for the 5th and 7th harmonics, is used for compensating the harmonics in the power system. Active power filters can perform one or more of the functions required to compensate power systems and improve power quality. The control algorithm used here is simple and effectively reduces the harmonics in the system. Non-linear loads in computer applications and other electronic devices lead to the injection of harmonics and reactive power into the power system, so a harmonic filter is important for filtering harmonics in the power system. The shunt active and shunt passive filters are designed for compensating the 5th and 7th harmonics. The hybrid filter performance was verified through MATLAB/SIMULINK with a source voltage containing a 3rd harmonic component. Thus the hybrid filter provides effective harmonic and reactive power compensation.
KEYWORDS - Hybrid power filter, non-ideal mains voltage

I. INTRODUCTION
Power electronic devices are used for AC power control in the power system. AC power flows in industrial and domestic applications in the form of adjustable speed drives (ASDs), furnaces, computer appliances, etc. The harmonic injection and reactive power cause disturbance to the customer and interference in communication lines, with low system efficiency and poor power factor. Many researchers have provided solutions for harmonic and reactive power compensation [1], and specific limits on current harmonics and voltage notches have been imposed. Passive filters are used for eliminating lower-order harmonics, and capacitors are used for compensating the reactive power demand in the system. They have some drawbacks, such as fixed compensation and resonance problems, and the fundamental-frequency reactive power may affect the system voltage regulation. The increased harmonic pollution has led to the development of active filters. The active filter rating depends on the harmonics and reactive power to be compensated. Generally, active filters require a high current rating and a higher bandwidth, which does not constitute a cost-effective solution for harmonic mitigation.
A hybrid filter, as a combination of active and passive filters, provides effective harmonic and reactive power compensation, overcoming the technical disadvantages of purely active or passive filters [2]. A hybrid filter with a shunt combination of active and passive filters was proposed in [3]. The hybrid filter topology with a parallel combination of active and passive filters is used for reducing the bandwidth requirement of the active filter.
Many control strategies, starting from instantaneous reactive power compensation, have evolved since the inception of shunt active filters. One control strategy based on the DC link voltage was discussed in [4], [5]. In this method the shunt active filter compensates the load-side harmonics and reactive power, thereby making the load linear. Therefore the supply-side distortions are imposed on the line current. Even though this method meets the reactive power requirement of the load, when supply voltage distortions occur they are imposed on the line current as well, so the line current remains non-sinusoidal even after compensation.
In this paper the instantaneous reactive power algorithm has been used for the shunt active filter with some modification, so that it also effectively compensates the harmonics caused by source side distortion. The proposed method overcomes this drawback by preprocessing the supply voltage using Park's transformation, so that only the fundamental positive-sequence component of the source voltage is used for the reference current calculation. The control block diagram and operational principles are discussed below. The proposed method can effectively compensate the harmonics and reactive power even when the source voltage is unbalanced.
II. INSTANTANEOUS REACTIVE POWER ALGORITHM
Most active filters are designed based on the instantaneous reactive power algorithm. The instantaneous reactive power algorithm is derived from the dq0 transformation, which allows the compensation currents to be calculated in a two-axis system [6]. In this method, the voltage and load current are first transformed into a two-axis representation. The instantaneous real and reactive power consumed by the loads is calculated in this representation. After compensation, the post-compensation current in the two-axis system is inversely transformed back to the three-phase system of the grid; from this the reference signals of the compensation current are obtained. These reference signals and the detected power converter output currents are sent to a current controller to generate the pulse width modulation.

The generated pulse width modulation (PWM) signals are required for the operation of the control circuits. The control strategy proposed here makes the compensated line current sinusoidal and balanced. Therefore the objective includes a sinusoidal reference current calculation and a current control technique for generating the switching pulses of the VSI such that the line current is sinusoidal and balanced. The phase and frequency of the desired line current are obtained from the supply voltage, and the magnitude of the reference line current is obtained by regulating the DC bus voltage of the VSI. The DC-link capacitance of the VSI is used as an energy storage element in the system. For a lossless active filter in the steady state, the real power from the supply should equal the real power demanded by the load, and no additional real power passes through the power converter into the capacitor. Therefore the averaged DC-capacitor voltage can be maintained at the reference voltage level. For a balanced line current under an unbalanced source voltage, one approach is to use one phase of the source voltage as the phase reference along with 120° phase shifters. By this method the harmonics present in the source voltage are reflected in the reference line current. Therefore, a modified algorithm that preprocesses the source voltage template is proposed to make the compensated line current sinusoidal.
The source voltages are transformed into the d-q reference frame using Park's transformation. After the transformation, the nth-order positive-sequence component becomes the (n-1)th-order component and the nth-order negative-sequence component becomes the (n+1)th-order component in the d-q reference frame. The fundamental component of the source voltage becomes a DC component in the d-q reference frame, which can be extracted using a low-pass filter. This filtered DC value, after inverse transformation into three-phase components, is used as the unit templates for the reference current calculation. Thus this modification filters out the effect of source side distortion in the line current.
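To illustrate this preprocessing step, the following Python sketch (our own illustration, not the paper's code; the Butterworth low-pass filter, cutoff value and all names are assumptions) extracts the fundamental voltage templates via Park's transformation and a low-pass filter:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def fundamental_template(va, vb, vc, theta, fs, fc=10.0):
        # Park transformation (abc -> dq): the fundamental maps to DC in this frame.
        # theta is the grid angle (e.g. from a PLL); fs is the sampling rate in Hz.
        vd = (2/3) * (va*np.cos(theta) + vb*np.cos(theta - 2*np.pi/3)
                      + vc*np.cos(theta + 2*np.pi/3))
        vq = -(2/3) * (va*np.sin(theta) + vb*np.sin(theta - 2*np.pi/3)
                       + vc*np.sin(theta + 2*np.pi/3))
        # A low-pass filter keeps only the DC term, i.e. the fundamental component.
        b, a = butter(2, fc / (fs / 2))
        vd_dc, vq_dc = filtfilt(b, a, vd), filtfilt(b, a, vq)
        # Inverse Park transformation (dq -> abc), now free of source distortion.
        ua = vd_dc*np.cos(theta) - vq_dc*np.sin(theta)
        ub = vd_dc*np.cos(theta - 2*np.pi/3) - vq_dc*np.sin(theta - 2*np.pi/3)
        uc = vd_dc*np.cos(theta + 2*np.pi/3) - vq_dc*np.sin(theta + 2*np.pi/3)
        m = np.sqrt(vd_dc**2 + vq_dc**2)  # amplitude, divided out for unit templates
        return ua/m, ub/m, uc/m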
In order to drive the line currents to trace the reference currents, an effective current control technique has to be used for generating the switching pulses of the VSI. Hysteresis control is implemented here for this purpose. In this control the line currents are sensed and compared with the reference currents. The error in each phase is sent to the hysteresis controller, where a switching pulse is generated for the upper switch of the VSI if the error is less than the lower hysteresis band, and a switching pulse is generated for the lower switch of the VSI if the error is more than the upper hysteresis band.
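A minimal Python sketch of this hysteresis decision, following the switch convention just described (the function and band names are illustrative, not from the paper):

    def hysteresis_step(i_ref, i_meas, band, state):
        # One hysteresis comparison for a single phase leg. Returns 1 to gate
        # the upper switch, 0 to gate the lower switch; inside the band the
        # previous switch state is held.
        error = i_meas - i_ref
        if error < -band:      # error below the lower band: turn on upper switch
            return 1
        if error > band:       # error above the upper band: turn on lower switch
            return 0
        return state           # within the band: keep the last state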

Fig1.1 Block diagram for control strategy

III. SYSTEM CONFIGURATION

Fig1.2 Block diagram for parallel combination of shunt active and passive filters

Fig. 1.2 shows a three-phase source and a non-linear load, where the shunt passive filter and shunt active filter are connected in shunt with the line. The passive filter provides cost-effective mitigation of harmonics and reactive power drawn from the supply. The active filter can effectively compensate the harmonics and can meet the reactive power demand. The hybrid filter with a parallel combination of active and passive filters reduces the bandwidth requirement of the active filter. The active filter in this topology is a voltage source inverter (VSI) with a DC-link capacitance (Cdc) at the DC side and a filter inductor (Lc) to attenuate the ripple of the converter current caused by the switching of the converter. Two single-tuned passive filters tuned to the 5th and 7th harmonics, along with a high-pass passive filter tuned to the 11th harmonic, are used with the active filter for making the compensated line current sinusoidal and balanced.

Fig1.3 Block diagram for controller

Fig. 1.3 shows the controller block diagram. The three-phase supply quantities from the grid are converted using the dq0 transformation. By eliminating the 0 term we obtain the d-q components alone; comparing the actual values with the set d-q reference values gives the d-q error values. The PID controller output depends on this error value. The dq0 quantities are then converted back to abc values and given as input to the pulse generator; by varying the amplitude, six pulses are generated and given as input to the three-phase inverter.
IV. SIMULATION RESULTS


Fig 1.4 simulation result for parallel combination of hybrid filter

Fig. 1.4 shows the simulation results for phase-a of the source voltage (Vs), load current (Iload), line current (Is) and compensating current (Ic). A three-phase circuit breaker is implemented in the circuit design. The circuit breaker is used in series with the three-phase element to be switched; its opening and closing times can be controlled either from an external Simulink signal or from an internal control timer. If it is set in external control mode, the control signal connected to the input must be either 0 or 1: 0 to open the breaker and 1 to close it. If the three-phase breaker block is set in internal control mode, the switching times are specified in the dialog box of the block. The three individual breakers are controlled with the same signal. When the external switching time mode is selected, a Simulink logical signal is used to control the breaker operation. Here the switch is in the open state and the switching time is 0.1 s; before 0.1 s the simulation shows the result without the hybrid filter. The active filter parameters are Lc = 3.35 mH, Rc = 0.4 Ω, DC-link capacitance Cdc = 2200 μF and DC-link reference voltage Vdc,ref = 680 V for the parallel hybrid filter.

Fig 1.5 FFT analysis for hybrid filter in parallel combination



Fig. 1.4 shows the simulation results for the parallel combination of passive and active filters. The source voltage (Vs) has a THD of 25.23% and the load current (Iload) has a THD of 30.65%. The line current (Is) after compensation has a THD of 11.41%. The peak value of the supply current is found to be less than the peak value of the load current, which shows that the supply current carries only the active component of the load current and the active component of the compensating current. The DC-link voltage of the VSI is maintained at 680 V. The passive filter partly compensates for the 5th, 7th and 11th harmonics. Since this passive filter is connected to a supply having a 3rd harmonic component, it draws a 3rd harmonic component, adding extra burden to the active filter. The compensating current from the active filter has a fundamental of 0.7887 A with a THD of 478.87%, which shows that the active filter operates at a reduced rating.
To drive the line currents to trace the reference currents, an effective current control technique has to be used for generating the switching pulses of the VSI, and hysteresis control is implemented for this purpose. In this control the line currents are sensed and compared with the reference currents. The preprocessing modification therefore filters out the effect of source side distortions in the line current.
The hybrid filter topology is simulated using MATLAB/SIMULINK and the results are compared under a non-ideal supply voltage with a 0.1 p.u. 3rd-harmonic negative-sequence component in the source voltage. The rms value of the fundamental component is 230 V. The various design values of the passive filter are shown in Table 2.
V. CONCLUSION
The results show the use of the hybrid filter topology for harmonic and reactive power compensation. The hybrid filter topology with a parallel combination of active and passive filters reduces the load distortion bandwidth to be compensated by the active filter, thereby lowering the bandwidth required of the active filter. The passive filter performance may be affected by changes in the system parameters, and this topology is most beneficial when the source voltage is sinusoidal, as the performance of passive filters improves when they are connected to a purely sinusoidal supply, which further reduces the burden on the active filter. The active filter performance is often degraded by distorted and unbalanced mains voltages. In this paper, a new algorithm has been proposed to improve the active filter performance under non-ideal mains voltages. The control strategy used here is simple and effectively compensates the load-generated harmonics and nullifies the effect of source voltage harmonics in the line.
REFERENCES
[1] Bhim Singh, Kamal Al-Haddad and Ambrish Chandra, "A Review of Active Filters for Power Quality Improvement," IEEE, vol. 46, no. 5, Oct. 1999.
[2] Bhim Singh and Vishal Verma, "An Indirect Current Control of Hybrid Power Filter for Varying Loads," IEEE, vol. 21, no. 1, Jan. 2006.
[3] Adil M. Al-Zamil and David A. Torrey, "A Passive Series, Active Shunt Filter for High Power Applications," IEEE, vol. 16, no. 1, Jan. 2001.
[4] Shyh-Jier Huang and Jinn-Chang Wu, "A Control Algorithm for Three-Phase Three-Wired Active Power Filters Under Nonideal Mains Voltages," IEEE, vol. 14, no. 4, July 1999.
[5] Shailendra Kumar Jain, Pramod Agarwal and H. O. Gupta, "A Control Algorithm for Compensation of Customer-Generated Harmonics and Reactive Power," IEEE, vol. 19, no. 1, Jan. 2004.
[6] M. I. M. Montero, E. R. Cadaval and F. B. Gonzalez, "Comparison of Control Strategies for Shunt Active Power Filters in Three-Phase Four-Wire Systems," IEEE, vol. 18, no. 3, Jan. 2005.
[7] R. Arseneau, G. T. Heydt and M. J. Kemper, "Application of IEEE standard 519-1992 harmonic limits for revenue billing meters," IEEE Trans. Power Delivery, vol. 12, pp. 346-353, Jan. 1997.
[8] H. Akagi and S. Atoh, "Control strategy of active power filter using multiple voltage-source PWM converters," IEEE Trans. Ind. Applicat., vol. IA-22, pp. 460-465, May/June 1986.
[9] T. Furuhashi, S. Okuma and Y. Uchikawa, "A study on the theory of instantaneous reactive power," IEEE Trans. Ind. Electron., vol. 37, pp. 86-90, Feb. 1990.
[10] M. Aredes, J. Hafner and K. Heumann, "Three-phase four-wire shunt active filter control strategies," IEEE Trans. Power Electron., vol. 12, pp. 311-318, Mar. 1997.


Harmonic Mitigation In Doubly Fed Induction Generator for Wind Conversion Systems By Using Integrated Active Filter Capabilities
B. Rajesh Kumar, R. Jayaprakash
Dept. of Electrical and Electronics Engineering
M.Kumarasamy College of Engineering, Karur, Tamil Nadu, India
rajeshkumarb.eee@mkce.ac.in, rjai21192@gmail.com
Abstract: This paper presents the control of a WECS (Wind Energy Conversion System) equipped with a DFIG (Doubly Fed Induction Generator) for maximum power generation and power quality improvement simultaneously. The proposed control algorithm is applied to a DFIG whose stator is directly connected to the grid and whose rotor is connected to the grid through a back-to-back AC-DC-AC PWM (Pulse Width Modulation) converter. The RSC (Rotor Side Converter) is controlled in such a way as to extract maximum power over a wide range of wind speeds. The GSC (Grid Side Converter) is controlled in order to filter the harmonic currents of a nonlinear load coupled at the PCC (Point of Common Coupling) and to ensure a smooth DC bus voltage. Simulation results show that the wind turbine can operate at its optimum energy for a wide range of wind speeds and that power quality improvement is achieved.
Key words: Variable speed DFIG, MPPT, wind energy, power quality, active filtering, GSC.

NOMENCLATURE

I. INTRODUCTION
With the increase in population and industrialization, the energy demand has increased significantly. During the last decade, to reduce the pollution problem, much effort has been focused on the development of environmentally friendly sources of energy such as wind and solar. The WECS (wind energy conversion system) equipped with a DFIG (Doubly Fed Induction Generator) has received increasing attention due to its noticeable advantages over other WT (Wind Turbine) systems. Compared to the fixed speed wind turbine, the variable speed wind turbine can provide decoupled control of the active and reactive power of the generator and improve power quality. Due to the widespread integration of nonlinear loads, such as power electronic converters and large alternating current drives, which are polluting sources for the grid, the modern WECS is controlled not only to produce active power for the customers but also to improve the power quality and support the grid during any kind of fault [2].
One can take advantage of the power electronic converters to provide some of the ancillary services (reactive power absorption or injection to achieve voltage control, harmonic current compensation). Recently, many groups of researchers have addressed the issue of making use of the WECS converters, connected between the generator and the grid, to improve grid power quality and achieve harmonic current mitigation. In Ref. [5], Singh et al. have studied grid synchronization with harmonics and reactive power compensation capability of a permanent magnet synchronous generator-based variable speed wind energy conversion system. In that work [6], the GSC (Grid Side Converter) is actively controlled to feed generated power as well as to supply the harmonics and reactive power demanded by the non-linear load at the PCC (Point of Common Coupling). Gaillard et al. controlled the RSC (Rotor Side Converter) for reactive power compensation and active filtering capability of a WECS equipped with a DFIG without any over-rating.
This paper presents the control of a WECS, equipped with a doubly fed induction generator, for maximum power generation and power quality improvement simultaneously. A speed PI controller is used for MPPT (Maximum Power Point Tracking) and ensures maximum power generation. This control strategy is applied to the rotor side converter by using a stator flux oriented strategy and an optimal speed reference estimated from the wind speed. In addition, a hysteresis controller is used to control the GSC (Grid Side Converter) by using the voltage-oriented control strategy in order to ensure a smooth DC voltage and compensate the harmonic currents of a non-linear load connected at the PCC. The feasibility and effectiveness of these control strategies, in terms of active power production and active filtering, have been tested by simulation.
II. SYSTEM CONFIGURATION AND OPERATING PRINCIPLE
Fig. 1 shows a schematic diagram of the proposed DFIG based WECS with integrated active filter capabilities. In the DFIG, the stator is directly connected to the grid as shown in Fig. 1. Two back-to-back connected Voltage Source Converters (VSCs) are placed between the rotor and the grid. Nonlinear loads are connected at the PCC as shown in Fig. 1. The proposed DFIG works as an active filter in addition to generating active power like a normal DFIG. Harmonics generated by the nonlinear load connected at the PCC distort the PCC voltage. These nonlinear load harmonic currents are mitigated by GSC control [5], so that the stator and grid currents are harmonic free. The RSC is controlled for achieving MPPT and also for obtaining unity power factor at the stator side by using a voltage-oriented reference frame. The synchronous reference frame (SRF) control method is used for extracting the fundamental component of the load currents for the GSC control.
III. DESIGN OF DFIG BASED WECS
The selection of the ratings of the VSCs and the DC link voltage is very important for the successful operation of the WECS, given the ratings of the DFIG and DC machine used. In this section, the detailed design of the VSCs and the DC link voltage is discussed for the experimental system used in the laboratory.
A. Selection of DC Link Voltage
Normally, the DC link voltage of a VSC must be greater than twice the peak of the maximum phase voltage. The selection of the DC link voltage depends on both the rotor voltage and the PCC voltage. Considering the rotor side, the rotor voltage is slip times the stator voltage. The DFIG used in this prototype has a stator to rotor turns ratio of 2:1 [2]. Normally, the DFIG operating slip is 0.3, so the rotor voltage is always less than the PCC voltage. Hence the design criterion for the selection of the DC link voltage can be based on the PCC voltage alone. Considering the GSC side, the PCC line voltage (vab) is 230
V as the machine is connected in delta mode.
So the DC link voltage is estimated as [1]

    Vdc = 2√2 vab / (√3 m)        (1)

Here vab is the line voltage at the PCC and m is the modulation index; the maximum modulation index is selected as 1 for the linear range. The value of the DC link voltage (Vdc) from (1) is estimated as 375 V. Hence, it is selected as 375 V.
B. Selection of VSC Rating
The DFIG draws a lagging Volt-Ampere Reactive (VAR) for its excitation to build the rated air gap voltage. It is calculated from the machine parameters that a lagging VAR of 2 kVAR is needed when it is running as a motor. In the DFIG case, the operating speed range is 0.7 p.u. to 1.3 p.u., so the maximum slip (smax) is 0.3. For making unity power factor at the stator side, a reactive power of 600 VAR (smax*Qs = 0.3*2 kVAR) is needed from the rotor side (Qrmax). The maximum rotor active power is smax*P. The power rating of the DFIG is 5 kW, so the maximum rotor active power (Prmax) is 1.5 kW (0.3*5 kW = 1.5 kW). So the rating of
the VSC used as RSC, Srated, is given as

    Srated = √(Prmax² + Qrmax²)        (2)

Thus the kVA rating of the RSC, Srated, from (2) is calculated as 1.615 kVA.


C. Design of Interfacing Inductor
The design of the interfacing inductor between the GSC and the PCC depends upon the allowable GSC current limit (igscpp), the DC link voltage and the switching frequency of the GSC. The maximum possible GSC line currents are used for the calculation. The maximum line current depends upon the maximum power and the line voltage at the GSC. The maximum possible power in the GSC is the slip power; in this case, the slip power is 1.5 kW. The line voltage (VL) at the GSC is 230 V (the machine is connected in delta mode). So the line current is obtained as 1.5 kW/(√3 × 230 V) = 3.765 A.
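The following short Python sketch (added here for illustration; the variable names are ours, not the paper's) reproduces these design calculations for the DC link voltage, the RSC rating and the GSC line current:

    import math

    V_ab = 230.0          # PCC line voltage (V), delta-connected machine
    m = 1.0               # maximum modulation index (linear range)
    V_dc = 2 * math.sqrt(2) * V_ab / (math.sqrt(3) * m)   # Eq. (1): ~375.6 V

    s_max, Q_s, P = 0.3, 2000.0, 5000.0  # max slip, magnetizing VAR, DFIG rating (W)
    Q_rmax = s_max * Q_s                  # 600 VAR needed from the rotor side
    P_rmax = s_max * P                    # 1.5 kW maximum rotor active power
    S_rated = math.sqrt(P_rmax**2 + Q_rmax**2) / 1000.0   # Eq. (2): ~1.615 kVA

    I_line = P_rmax / (math.sqrt(3) * V_ab)               # GSC line current: ~3.765 A
    print(f"Vdc = {V_dc:.0f} V, Srated = {S_rated:.3f} kVA, Iline = {I_line:.3f} A")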

Out of all variable speed wind turbines, Doubly Fed Induction Generators (DFIGs) are preferred because of their low cost. The other advantages of the DFIG are higher energy output, lower converter rating and better utilization of the generator. DFIGs also provide good damping performance for a weak grid. Independent control of active and reactive power is achieved by the decoupled vector control algorithm presented in Ref. [2]. The dynamic performance of the proposed DFIG is also demonstrated for varying wind speeds and changes in unbalanced nonlinear loads at the PCC. The vector control of this system considers a peak ripple current of 25% of the rated GSC current; the interfacing inductor between the PCC and the GSC is selected as 4 mH.
V. CONTROL STRATEGY
Control algorithms for both the GSC and the RSC are presented in this section. The complete control schematic is given in Fig. 3. The control algorithm for emulating the wind turbine characteristics using a DC machine and a Type A chopper is given below.
1. Turbine modeling
The mechanical power captured by the turbine from the wind is given by the following expression:

    Pm = (1/2) ρ S v³ Cp(λ, β)        (4)

Here ρ is the air density (1.225 kg/m³), S is the swept area of the wind wheel (m²), v is the wind speed (m/s), Cp(λ, β) is the power coefficient of the turbine, λ is the tip speed ratio and β is the pitch angle. The tip speed ratio is given by the following equation:

    λ = Ωt R / v        (5)

Here R is the radius of the turbine (m) and Ωt is the turbine speed (rad/s). The turbine blade can capture the maximum of the wind power.
Fig. 2 shows the curve of the power coefficient versus λ for a constant value of the pitch angle β. In the case of a variable speed system, one can let Ωt change with the variation of the wind speed v in order to maintain λ at its optimal value, so that the turbine blade captures the maximum of the wind power. This variable speed turbine design can control reactive power independently of the real power within a wide range of output levels. In this study, it was assumed that the wind turbines have a power factor range from 0.95 lagging to 0.95 leading.
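As an illustration of Eqs. (4) and (5), here is a minimal Python sketch of the turbine model (the radius and the Cp curve are placeholders of our own; the paper's actual Cp(λ, β) characteristic of Fig. 2 is not reproduced):

    import numpy as np

    RHO = 1.225      # air density (kg/m^3)
    R = 2.0          # turbine radius (m), an illustrative value, not from the paper

    def tip_speed_ratio(omega_t, v):
        # Eq. (5): lambda = Omega_t * R / v
        return omega_t * R / v

    def mechanical_power(v, omega_t, cp_curve):
        # Eq. (4): Pm = 0.5 * rho * S * v^3 * Cp(lambda)
        S = np.pi * R**2                 # swept area (m^2)
        lam = tip_speed_ratio(omega_t, v)
        return 0.5 * RHO * S * v**3 * cp_curve(lam)

    # Placeholder Cp curve peaking near the paper's Cp_max = 0.4993
    cp = lambda lam: 0.4993 * np.exp(-0.5 * (lam - 8.0)**2)
    print(mechanical_power(v=10.0, omega_t=40.0, cp_curve=cp))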

Fig. 1. Proposed System Configuration

Fig. 2 Power coefficient versus the tip speed

2. Doubly Fed Induction Generator


Fig. 1 shows the DFIG based wind energy conversion system with integrated active filter capabilities. Wind turbines use a doubly-fed induction generator (DFIG) consisting of a wound rotor induction generator and an AC/DC/AC IGBT-based PWM converter [2]. The stator winding is connected directly to the 50 Hz grid while the rotor is fed at variable frequency through the AC/DC/AC converter. The DFIG technology allows extracting maximum energy from the wind at low wind speeds by optimizing the turbine speed, while minimizing mechanical stresses on the turbine during gusts of wind. The electromagnetic torque is expressed by:

Here J is the total inertia, Ωg is the generator speed, f is the total mechanical damping coefficient and p is the number of pole pairs. Rs and Rr are the stator and rotor phase resistances.
The optimum turbine speed produces the maximum mechanical power. A further advantage of this technology is the ability of the power electronic converters to generate or absorb reactive power, thus eliminating the need for installing capacitor banks as in the case of the squirrel-cage induction generator.

Fig. 3. Control Algorithm of the proposed WECS

3. Control of Rotor Side Converter


The main purpose of the RSC is to extract maximum power with independent control of active and reactive power. Here, the RSC is controlled in the voltage-oriented reference frame, so the active and reactive powers are controlled by controlling the direct and quadrature axis rotor currents (idr and iqr) respectively. The direct axis reference rotor current (9) is selected such that maximum power is extracted for a particular wind speed (8). This can be achieved by running the DFIG at the optimal rotor speed for a particular wind speed.

Here ωs and ωr are respectively the synchronous angular speed of the generator and the angular speed of the rotor; Ls and Lr are respectively the stator and rotor inductances and M is the magnetizing inductance; Ωt is the turbine speed. The speed error (er) is obtained by subtracting the sensed speed (ωr) from the reference speed (ωr*). kpd and kid are the proportional and integral constants of the speed controller. er(k) and er(k-1) are the speed errors at the kth and (k-1)th instants, and idr*(k) and idr*(k-1) are the direct axis rotor reference currents at the kth and (k-1)th instants. The reference rotor speed (ωr*) is estimated by optimal tip speed ratio control for a particular wind speed.
The tuning of the PI controllers used in both the RSC and the GSC is achieved using the Ziegler-Nichols method. Initially the integral gain is set to zero and the proportional gain is increased until the response starts oscillating, with a period Tu at an ultimate gain Ku. The proportional gain is then taken as 0.45 Ku and the integral gain as 1.2 Ku/Tu.
The slip angle (θslip) is calculated as θslip = θs - θr. Here θs is calculated from the PLL for aligning the rotor currents with the voltage axis, and the rotor position (θr) is obtained with an encoder.
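A minimal Python sketch of the incremental PI speed controller described above, with Ziegler-Nichols gains (Ku, Tu and the tuning values are assumptions of this illustration, as are all names):

    class IncrementalPI:
        # Incremental PI: u(k) = u(k-1) + kp*[e(k) - e(k-1)] + ki*Ts*e(k)
        def __init__(self, Ku, Tu, Ts):
            self.kp = 0.45 * Ku        # Ziegler-Nichols PI: Kp = 0.45 Ku
            self.ki = 1.2 * Ku / Tu    # Ziegler-Nichols PI: Ki = 1.2 Ku / Tu
            self.Ts = Ts               # controller sampling period (s)
            self.u_prev = self.e_prev = 0.0

        def step(self, ref, meas):
            e = ref - meas             # speed error er(k) = wr* - wr
            u = self.u_prev + self.kp * (e - self.e_prev) + self.ki * self.Ts * e
            self.u_prev, self.e_prev = u, e
            return u                   # direct-axis rotor reference current idr*(k)

    speed_ctrl = IncrementalPI(Ku=4.0, Tu=0.05, Ts=1e-3)   # assumed tuning values
    idr_ref = speed_ctrl.step(ref=157.0, meas=150.0)        # rad/s, illustrative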
4. Control of Grid Side Converter
The control block diagram of this converter is presented in Fig. 3. The grid side converter uses direct field-oriented vector control; no position sensors are required. The orientation of the reference frame is done along the supply voltage vector to obtain decoupled control over active and reactive power. The grid side converter (GSC) can provide reactive power, and the reactive power can be controlled independently while active power is being generated [6]. The GSC is also controlled for mitigating the harmonics produced by the nonlinear loads. Here an indirect current control is applied to the grid currents to make them sinusoidal and balanced. These
grid currents are calculated by subtracting the load currents from the summation of the stator currents and the GSC currents (11). The active power component of the GSC current is obtained by processing the DC link voltage error (vdce) between the reference and estimated DC link voltages (Vdc* and Vdc) through a PI controller as

    igsc*(k) = igsc*(k-1) + kpdc{vdce(k) - vdce(k-1)} + kidc vdce(k)

Here kpdc and kidc are the proportional and integral gains of the DC link voltage controller, vdce(k) and vdce(k-1) are the DC link voltage errors at the kth and (k-1)th instants, and igsc*(k) and igsc*(k-1) are the active power components of the GSC current at the kth and (k-1)th instants.
The grid phase voltages can be expressed as follows:

The instantaneous load currents (ilabc) and the phase angle obtained from the EPLL are used for converting the load currents into the synchronously rotating d-q frame (ild). In the synchronously rotating frame, fundamental-frequency currents are converted into DC quantities and all other harmonics are converted into non-DC quantities with a frequency shift of 50 Hz. Using the Park transformation, Eq. (12) can be expressed as follows:

The active and reactive powers exchanged between the grid and the GSC are given by the following equations:

    Pg = (3/2)(vdg idg + vqg iqg),   Qg = (3/2)(vqg idg - vdg iqg)        (15)

If the d-axis is aligned with the stator voltage, one can write vdg = us and vqg = 0. Hence, the active and reactive power expressions are easily simplified as follows:

    Pg = (3/2) us idg,   Qg = -(3/2) us iqg
The DC capacitor voltage Vdc is controlled by the current idg in the voltage vector-oriented reference
frame. Thus, a reference current idgref is derived from the DC link voltage error ec and its variation by tuning the FLC controller, as shown in Fig. 3. To control the reactive power (Qg) to a desired value (Qgref), a command current iqgref is derived from Eq. (). After a dq-abc transformation of these reference currents, hysteresis modulation may then be implemented as shown in Fig. 3.
5. Active Filtering
There are various methods to identify the harmonic currents of a nonlinear load. The most classical methods are the instantaneous power theory (p-q or d-q) and the synchronous detection method [6]. Practically, an SPBF (selective pass band filter) or LPF (low pass filter) is used to extract the harmonic current components. Frequency domain compensation, which is based on Fourier analysis, is not widely used because it requires more real-time processing power. In this case, the instantaneous power theory is used. To be compensated by the GSC, the resulting d-q reference harmonic currents (ildh, ilqh) must be subtracted from the reference currents (idgref, iqgref) as shown in Fig. 3.
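A small Python sketch of this SRF-based harmonic extraction (our own naming, assuming the fundamental is removed with a simple low-pass filter as described above):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def harmonic_reference(il_d, il_q, fs, fc=25.0):
        # Split synchronously rotating frame load currents into fundamental (DC)
        # and harmonic (non-DC) parts; the harmonic part is what the GSC injects.
        # fs: sampling rate (Hz); fc: LPF cutoff (Hz), an illustrative value.
        b, a = butter(2, fc / (fs / 2))
        il_d_dc = filtfilt(b, a, il_d)        # DC term = fundamental component
        il_q_dc = filtfilt(b, a, il_q)
        ild_h = il_d - il_d_dc                # non-DC terms = load harmonics
        ilq_h = il_q - il_q_dc
        return ild_h, ilq_h                   # subtract from (idgref, iqgref)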
6. Maximum Active Power Generation by the MPPT Strategy
In this section, the system is controlled to track its maximum power operating point. Fig. 3 shows the responses for a wind speed in ramp form; the graphs correspond, in order of appearance, to (1) the wind speed and (2) the generator speed and its reference. As can be seen from the plots, the generator rotor speed is controlled according to the MPPT strategy, and the power coefficient is kept around its optimum Cpmax = 0.4993. The stator active power varies according to the MPPT strategy and a unity power factor is ensured at the stator side. Moreover, the reactive power is maintained at zero. The zoom of a stator voltage and the corresponding current shows that the DFIG delivers active power to the grid (Fig. 6).
SIMULATION RESULTS
1. WIND SPEED

Fig.4.Wind Speed

Fig. 4 shows the variation of the wind speed with time. From this graph it can be concluded that the wind speed does not remain constant but varies with time.
WIND SPEED =1400
2. MPPT power generation
Fig. 5 shows that maximum constant power can be extracted from the wind power plant using the MPPT algorithm. MPPT can be implemented by varying the pitch angle and by yawing.

Fig.5. MPPT Power Generation


3. GRID CURRENT (iG)
Fig. 6 shows that a constant grid current can be maintained in the grid.

Fig.6 Grid Current (iG)

5. STATOR SIDE
Fig. 8 below shows the amplitude and waveform of the induced stator side voltages and currents.

Fig.8 Stator Side

6. ROTOR SIDE
Fig. 9 below shows that the currents induced in the rotor are not pure and contain some harmonics. The converters are used to eliminate the harmonics and maintain the voltage; the rotor side can be connected to the grid only by maintaining the rotor side voltage constant.

Fig.9 Rotor Side

CONCLUSION
The GSC control algorithm of the proposed DFIG has been modified for supplying the harmonics and reactive power of the local loads. In the proposed DFIG, the reactive power for the induction machine is supplied from the RSC and the load reactive power is supplied from the GSC. The decoupled control of both active and reactive powers has been achieved by RSC control. The proposed DFIG has also been verified under the wind turbine stalling condition for compensating the harmonics and reactive power of local loads. The proposed DFIG based WECS with an integrated active filter has been simulated in the MATLAB/Simulink environment and the simulation results are verified. The steady state performance of the proposed DFIG has been demonstrated for a given wind speed. The dynamic performance of the proposed GSC control algorithm has also been verified for variations in wind speed and for local nonlinear loads.
REFERENCES
[1] H. Polinder, S. W. H. de Haan, J. G. Slootweg and M. R. Dubois, "Basic operation principles and electrical conversion systems of wind turbines," EPE J., vol. 15, no. 4, pp. 43-50, Dec. 2005.
[2] S. Muller, M. Deicke and R. W. De Doncker, "Doubly fed induction generator systems for wind turbines," IEEE Ind. Appl. Magazine, vol. 8, no. 3, pp. 26-33, May/June 2002.
[3] A. Gaillard, P. Poure and S. Saadate, "Active filtering capability of WECS with DFIG for grid power quality improvement," in Proc. IEEE Int. Symp. Ind. Electron., Jun. 30, 2008, pp. 2365-2370.
[4] Z. Boutoubat, L. Mokrani and M. Machmoum, "Control of a wind energy conversion system equipped by a DFIG for active power generation and power quality improvement," Renewable Energy, vol. 50, pp. 378-386, Feb. 2013.
[5] A. Gaillard, P. Poure, S. Saadate and M. Machmoum, "Variable speed DFIG wind energy system for power generation and harmonic mitigation," Renewable Energy, vol. 34, no. 6, pp. 1545-1553, 2009.
[6] E. Tremblay, A. Chandra and P. J. Lagace, "Grid-side converter control of DFIG wind turbines to enhance power quality of distribution network," in Proc. IEEE PES General Meeting, 2006, p. 6.
[7] E. Tremblay, S. Atayde and A. Chandra, "Direct power control of a DFIG-based WECS with active filter capabilities," in Proc. 2009 IEEE Electrical Power & Energy Conference (EPEC), 22-23 Oct. 2009, pp. 1-6.
[8] R. Datta, "Rotor side control of grid-connected wound rotor induction machine and its application to wind power generation," Ph.D. dissertation, Dept. Electr. Eng., Indian Inst. Sci., Bangalore, India, 2000.
[9] B. Rabelo and W. Hofmann, "Control of an optimized power flow in wind power plants with doubly-fed induction generators," in Proc. 34th Annu. Power Electronics Specialists Conf.

DWT Based Audio Watermarking Using Energy Comparison

K. Thamizhazhakan, Dr. S. Maheswari
PG Scholar, Department of EEE, Kongu Engineering College, Perundurai, Erode
tamilecesec@gmail.com
Assistant Professor, Department of EEE, Kongu Engineering College, Perundurai, Erode
maheswari_bsb@yahoo.com
Abstract: An audio zero-watermarking scheme based on the energy relationship between adjacent audio sections is proposed. The host audio signal is divided into a number of sections, and a one-level Discrete Wavelet Transform is applied to each audio data block. The relational value array is then generated by comparing the energies of neighbouring approximation coefficients. The watermark bits are XORed with the relational value array to produce the secret key during the embedding process. In the extraction scheme the watermark bits are extracted by the reverse of the embedding process. Experimental results confirm that the Discrete Wavelet Transform (DWT) based audio watermarking offers high robustness against various attacks such as additive noise and resampling.
Index terms: Audio zero-watermark, Discrete Wavelet Transform (DWT), Energy comparison.

I. INTRODUCTION
Watermarking is a technique through which information is carried without degrading the quality of the original signal. A key is used to increase the security, so that unauthorized users cannot manipulate or extract the data. Watermarking technology is now helpful for protecting copyrights. There are two types of watermarks: visible watermarks and invisible (transparent) watermarks, which cannot be perceived by the human sensory system. Based on the embedding domain, watermarking systems can be classified into spatial domain and transform domain systems [6].
Audio watermarking is a technology for hiding information in an audio file without the information being apparent to the listener and without affecting the quality of the audio signal [9]. A spatial domain watermarking system directly alters the main data elements in an image to hide the watermark data, whereas a transform domain watermarking system alters the transforms of the data elements to hide the watermark data; the latter has proved to be more robust than spatial domain watermarking [8]. Some complexities are present in the execution process. To overcome those problems, an audio signal decomposition method called the Discrete Wavelet Transform (DWT) is used in our method [1]. In [2], one watermark was derived from low-frequency DWT coefficients, and the other was constructed from DWT coefficients of a log-polar mapping of the host image. The watermark is embedded with respect to the audio with the help of a secret key, and the watermarked signal passes through a channel subject to several attacks such as noise addition and resampling [4]. The same secret key is applied after the attacks to recover the original watermark image.

Figure 1. Watermark system model


Watermarking should be designed to provide high robustness against various attacks such as digital-to-analog and analog-to-digital conversion, noise addition, filtering, time scale modification, echo addition and sample rate conversion. The watermarked signal should not lose the quality of the original signal; this property is called imperceptibility. In time-domain approaches, watermarking is employed directly on the original samples of the audio signal. The transformation techniques include the discrete cosine transform and the discrete wavelet transform [3]; in transformation based approaches the embedding is done on the samples of the host signal after they are transformed. Based on the application domain, watermarks are classified into source-based watermarks and destination-based watermarks. Source-based watermarks are desirable for authentication only. In destination-based watermarks, each distributed copy gets a unique watermark identified by the particular buyer only [7].
II. WATERMARK EMBEDDING
The embedding process involves several steps, explained in detail as follows [10]: segmentation of the audio signal, followed by DWT based decomposition of the segmented frames of the audio signal. The block diagram of watermark embedding is shown in Figure 2.
Step 1: The audio signal is divided into a number of frames. The number of samples in each frame of the segmented audio signal is the same for the purpose of watermark embedding.

Figure 2.Flow chart for watermark embedding

Figure 3.Host audio signal (1)



Figure 4.Host audio signal (2)


Step 2: Each section is called an audio data block. Each section has N samples.

Figure 5.Segmented Frames

Step 3: A one-level Discrete Wavelet Transform (DWT) is applied to the samples in each section. The signal is divided into approximation and detail coefficients, and the approximation coefficients of the one-level DWT are obtained.
Step 4: The energy of the approximation coefficients of each section is calculated as

    S(i) = Σk |Yi(k)|²        (1)

Figure 6.1D Discrete Wavelet Transform (DWT)

The energy S(i) is the sum of the squared absolute values of the approximation coefficients Yi(k) of section i.
Step 5: Compare the energies of adjacent sections. If the energy S(i) is greater than the next energy value S(i+1), the condition is TRUE and the relational value is 1. If S(i) is less than S(i+1), the condition is FALSE and the relational value is 0. The relational values obtained between each pair of adjacent sections form the relational value array.
Step 6: An exclusive OR operation is performed between the binary watermark image pixels and the relational value array to obtain a key. With this key the watermark can be extracted; the key is sent to the extracting side.

Figure 7.Watermark image

Figure 8.Key
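A compact Python sketch of the embedding and extraction logic described above, using the PyWavelets library (the 'db1' wavelet, the block length and all names are our assumptions; the paper does not specify them):

    import numpy as np
    import pywt

    def relational_array(audio, N, n_bits):
        # Steps 1-5: segment into N-sample blocks, one-level DWT, energy of the
        # approximation coefficients, and pairwise energy comparison.
        blocks = audio[: (len(audio) // N) * N].reshape(-1, N)
        cA = np.array([pywt.dwt(b, 'db1')[0] for b in blocks])
        S = np.sum(np.abs(cA) ** 2, axis=1)                # Eq. (1)
        return (S[:-1] > S[1:]).astype(np.uint8)[:n_bits]  # 1 if S(i) > S(i+1)

    def embed(audio, wm_bits, N=1024):
        # Step 6: XOR the watermark bits with the relational array -> secret key.
        return relational_array(audio, N, len(wm_bits)) ^ wm_bits

    def extract(audio, key, N=1024):
        # Extraction: recompute the relational array and XOR it with the key.
        return relational_array(audio, N, len(key)) ^ key

Because nothing is written into the audio itself (a zero-watermark), extraction needs only the key and the possibly attacked audio signal.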

III. WATERMARK EXTRACTION
The watermark extraction is the reverse process of watermark embedding: the watermarked audio signal is processed and finally the watermark image is extracted.
Step 1: The audio signal is divided into a number of sections.
Step 2: Each section is called an audio data block. Each section has N samples.
Step 3: A one-level Discrete Wavelet Transform (DWT) is applied to every segmented frame, and the approximation coefficients of the one-level DWT are obtained.
Step 4: The energy of the approximation coefficients is calculated.

Figure 9.Flow chart for extraction method

Step 5: Compare the energies of adjacent sections. If the energy S(i) is greater than the next energy value S(i+1), the condition is TRUE and the relational value is 1. If S(i) is less than
the next energy value S(i+1), the condition is FALSE and the relational value is 0. The relational values obtained between each pair of adjacent sections form the relational value array.
Step 6: An exclusive OR operation is performed between the relational value array and the key to recover the original watermark image on the extraction side.

Figure 10.Recovered image

IV. RESULTS AND DISCUSSIONS


An audio signal sampled at 44.1 kHz is taken for the simulation process. The 5 x 6 binary watermark image is embedded into the audio signal to obtain a secret key. In the extraction method, the relational value array computed from the audio signal is XORed with the generated key to recover the original watermark image. The simulation results are analyzed by calculating the Signal to Noise Ratio (SNR), Bit Error Rate (BER) and Normalized Cross Correlation (NC) values for different audio signals.
The Signal to Noise Ratio (SNR) is a specification that measures the level of the audio signal compared to the level of noise present in the signal. It is an important sound level measurement used in describing the capabilities and qualities of many electronic sound components. Here it is calculated between the original and watermarked audio signals.
Table I.SNR values for different audio signals

The bit error rate (BER) is the number of bit errors per unit time. The bit error ratio is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage. The bit error ratio can be considered an approximate estimate of the bit error probability; this estimate is accurate for a long time interval and a high number of bit errors.
Table II. BER and NC values of different audio signals against various attacks.
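For reference, a short Python sketch of the three metrics reported above (our own rendering of the standard definitions, not code from the paper):

    import numpy as np

    def snr_db(original, watermarked):
        # SNR in dB between the original and watermarked audio signals.
        noise = original - watermarked
        return 10 * np.log10(np.sum(original**2) / np.sum(noise**2))

    def ber(bits_sent, bits_recovered):
        # Bit error ratio: errored bits over total transferred bits.
        return np.mean(bits_sent != bits_recovered)

    def nc(w_in, w_out):
        # Normalized cross correlation between embedded and extracted watermarks.
        return np.sum(w_in * w_out) / np.sqrt(np.sum(w_in**2) * np.sum(w_out**2))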

V. CONCLUSION
In our proposed method, watermark bits are embedded with respect to an audio signal by means of its DWT coefficients. Watermarks are embedded using the selected higher energy subsections. The proposed scheme does not require any additional information to recover the watermark from the test signal. High robustness is ensured by the proposed scheme together with the use of the selected higher energy regions and DWT based watermark embedding. This method achieves SNR values ranging from 14 dB to 20 dB for different watermarked audio signals and offers strong robustness against several kinds of attacks such as noise addition and resampling.
REFERENCES
[1] L. Liang and S. Qi, "A new SVD-DWT composite watermarking," in Proc. IEEE Int. Conf. on Signal Processing, Beijing, China, 2006, pp. 365-372.
[2] S. Wu, J. Huang, D. Huang and Y. Q. Shi, "Efficiently Self-Synchronized Audio Watermarking for Assured Audio Data Transmission," IEEE Trans. Broadcast., vol. 51, no. 1, pp. 69-76, 2005.
[3] V. K. Bhat, I. Sengupta and A. Das, "An Adaptive Audio Watermarking Based on the Singular Value Decomposition in the Wavelet Domain," Digital Signal Processing, vol. 20, no. 6, pp. 1547-1558, 2010.
[4] J. Huang, Y. Wang and Y. Q. Shi, "A blind audio watermarking algorithm with self-synchronization," in Proc. IEEE Int. Symp. Circuits and Systems, vol. 3, 2002, pp. 627-630.
[5] Q. Wen, T.-F. Sun and S.-X. Wang, "Concept and application of zero-watermark," Tien Tzu Hsueh Pao/Acta Electronica Sinica, vol. 31, no. 2, pp. 214-216, 2003.
[6] P. K. Dhar and T. Shimamura, "Entropy-based audio watermarking using singular value decomposition and log-polar transformation," in Proc. 2013 IEEE 56th Int. Midwest Symp. on Circuits and Systems (MWSCAS), 2013.
[7] Yang Yu, Lei Min, Cheng Mingzhi, Liu Bohuai, Lin Guoyuan and Xiao Da, "An Audio Zero-Watermark Scheme Based on Energy Comparing," China Communications (Information Security), 2014.
[8] Wang X. and Peng H., "Audio Watermarking Approach Based on Energy Relation in Wavelet Domain," Journal of Xihua University (Natural Science), vol. 28, no. 3, 2009.
[9] D. Kirovski and S. Malvar, "Robust Spread Spectrum Audio Watermarking," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP'01), pp. 1345-1348, 2001.
[10] X.-Y. Wang and H. Zhao, "A novel synchronization invariant audio watermarking scheme based on DWT and DCT," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4835-4840, 2006.

Self-Propelled Safety System Using CAN Protocol


M. Santhosh Kumar, Dr. C. R. Balamurugan
PG Scholar, ME Embedded System Technologies, Arunai Engineering College, Tiruvannamalai, Tamilnadu, India
santhoshmurugan92@gmail.com
Associate Professor/EEE, Arunai Engineering College, Tiruvannamalai, Tamilnadu, India
crbalain2010@gmail.com
Abstract: Transportation by vehicles is very important in day-to-day life. Existing transportation methods have several drawbacks: short circuits due to faults in the wiring; glare accidents caused by the headlight illumination of oncoming vehicles during night driving; gas leakages; faults due to increase in temperature near the engine; sound pollution caused by loud horn volumes; inaccurate fuel level monitoring; and unequal wheel pressure, which may lead to accidents. The proposed system has two modules, Master and Slave, which communicate through CAN (Controller Area Network). The simulation has been developed using the Proteus simulation tool. The proposed system is a cost-effective and reliable safety system with a small hardware size.
Index terms: Controller Area Network, Automotive safety, Fuel level monitoring, Wheel pressure, Adaptive headlights, Automatic dim/bright, Temperature, Gas leakage prevention.

I. INTRODUCTION
The need for vehicles increases every day, and people like safe and comfortable travel with low cost investment. The proposed system includes seven safety measures, addressing the most common causes of road accidents during day and night time driving. The main motivation of this proposed system is to reduce driving accidents for automotive vehicles.
The measures included in this proposed system are:
To reduce night time driving accidents due to oncoming headlight illumination by means of AFHAS (Automatic Front Headlight Adjustment System), because most accidents arise during night time driving.
To reduce short circuit faults in the vehicle wiring connections.
Gas leakage detection and prevention.
Monitoring the temperature in the automotive engine location.
Automatically adjusting the horn sound for the respective surroundings.
To display the accurate fuel level.
Monitoring the wheel pressure.
II. LITERATURE SURVEY
Postolache [1] made a survey showing that CAN and TIA-485 are two of the most used standards in field bus systems. While CAN ISO IS-11898 includes complete data link layer specifications on top of its physical layer, TIA-485 only addresses the physical layer of the 7-layer OSI model. Jianqun Wang et al. [2] discussed an autonomous dynamic synchronization method for CAN communication based on a queue of frame IDs to realize a quasi-synchronous communication result. The purpose of the method is to set up a quasi-synchronous principle in which every CPU sends information according to an agreed sequence, so that the possibility of collisions is reduced and the situation in which CPUs send data in no particular order is avoided; thus the flow of effective data in communication and the real-time ability of data in the network are heightened. At the same time, each CPU is assigned a given sending queue to reduce the loss of frames due to arbitration conflicts, which greatly improves synchronous acquisition. Hyeryun Lee et al. [3] demonstrated that automobiles on the streets are very vulnerable to cyber attacks conducted by injecting harmful packets into the popular car network, CAN, via a wireless communication channel, Bluetooth. In contrast to some previous works that showed the vulnerability of automobiles, based on an in-depth analysis they showed that it was possible to attack automobiles by only injecting packets fuzzed in random manners. Donghyuk Jang et al. [4] described the end-to-end channel modeling of CAN communication using a transmission matrix and verified it through experiments. By multiplication of each transmission matrix, the total transmission matrix and the transfer function of the system are obtained. Based on the proposed method, the channel frequency response can be used for deciding the suggested tap length and bandwidth without real measurements. Jaromir Skuta and Jiri Kulhanek [5] focused on the verification of communication algorithms and possibilities with the use of a PCAN-LIN interface; the high level controller was a PC with a USB-CAN interface. The literature survey reveals a few recent papers on the CAN protocol.
III. PROPOSED WORK
In this proposed work, the seven measures are divided into two modules: a master module and a slave module. The master module, shown in Fig. 1, has an LPG gas leakage sensor with an exhaust gas control unit, a temperature monitoring unit and the automatic front headlight adjustment system. The slave module, also shown in Fig. 1, has a digital fuel level sensor, a short-circuit identification unit, a wheel pressure monitoring sensor and a Radio Frequency transmitter and receiver for automatic horn volume adjustment. These two modules are designed on double layer PCBs using Surface Mount Device hardware. The proposed system comprises both hardware and software. The master and slave modules communicate through the vehicle communication bus, the Controller Area Network protocol.

Fig 1. Block Diagram

A. Master Module
The master module has a temperature monitoring unit, the automatic front headlight adjustment system, and an LPG gas leakage sensor with exhaust gas control, all connected to a PIC16F877A microcontroller. It has five I/O (Input/Output) ports for connecting the sensor modules, which are connected to the PIC16F877A through the I/O ports as shown in Fig. 2.
The sensors used are:
1. Temperature Sensor LM35
2. Light Sensors LDR and BH1750FVI
3. Gas leakage sensor MQ6
4. Accelerometer ADXL335
Each sensor is connected to an I/O port on the PIC microcontroller. The temperature sensor LM35 is used to monitor the temperature of the engine. When the heat of the engine exceeds a certain level, the LM35 detects the increase in temperature and transmits a signal to the PIC microcontroller indicating an abnormal engine temperature condition. The PIC microcontroller sends the information to LCD1 and enables LED D1 to alert the user.
The Automatic Front Headlight Adjustment System mainly reduces the glare effect during night time driving. The LDR is used to detect the light intensity of the opposite vehicle, and the system lowers the intensity of the vehicle's headlight, which lets the driver drive safely. The accelerometer is used to adjust the headlight according to the steering wheel position.
The gas leakage sensor MQ6 detects LPG and other gas leakages and sends the information to the PIC microcontroller, which drives the motor connected to the vehicle window to exhaust the harmful gas outside; it also activates the warning signal through LED D2, and the LCD displays "Gas leakage detected".
B. Slave Module
The slave module has four sensors connected to the slave PIC16F877A microcontroller. The sensors used are:
1. Fuel level sensor LLS 20160
2. Current Sensor ACS712
3. RFID (Radio Frequency IDentification)
4. Wheel Pressure Sensor TPMS (Tire Pressure Monitoring System)
Analog sensors are normally used for fuel level monitoring, but they display the fuel level inaccurately. To overcome this, a digital fuel level sensor is used. This method displays the exact value of the available fuel digitally, which lets the user know the fuel level in the tank. If the fuel level decreases, LED D3 alerts the user to low fuel. The fuel level is displayed on LCD2.
A short circuit in the electrical connections is detected by using the ACS712 current sensor, which continuously monitors the current flow in the wires. If the current level crosses a certain threshold, it sends a warning signal to the PIC microcontroller; LED D4 warns the user and the current flow information is displayed on LCD2 as shown in Fig. 2.
When the horn volume is raised rapidly, it causes disturbance to the surroundings, particularly in hospital and school zone areas. The volume level can be reduced in these areas to reduce sound pollution and disturbance. The RFID is used to sense the zone of the particular area and automatically adjusts the volume of the horn. The slave module recognizes whether the vehicle is driving in a general zone, a school zone or a hospital zone, and the area is displayed.
The wheel pressure is an important factor for increasing vehicle performance and fuel efficiency. Low wheel pressure may lead to an increase in wheel temperature and cause accidents. The TPMS provides information about the pressure in each wheel, and the user can easily monitor the wheel pressure through LCD2.
C. CAN Protocol Implementation
The two modules, Master and Slave, operate separately and are interfaced with the help of the CAN protocol. This provides monitoring and control of each module from a single monitoring system. The CAN implementation uses the CAN bus, the CAN transceiver MCP2551 and the CAN controller MCP2515. The CAN controller provides the connection to the modules connected to it and supports a speed of 1 Mb/s. The CAN transceiver MCP2551 provides the interface between the CAN controller and the CAN bus. It is a high-speed CAN transceiver that acts as a conduit between the physical bus and the CAN protocol controller, providing differential transmit and receive capability for the CAN protocol controller, and it is fully compliant with the ISO 11898 standard. This CAN transceiver supports a 1 Mbps speed, and up to 112 CAN nodes can be connected.

Fig 2 CAN Protocol Module Circuit Diagram

It is used to transmit as well as receive signals. The two modules are connected to the CAN bus through the CAN controller and CAN transceiver of the Master and Slave modules. The monitored information of the slave module is transmitted over the CAN bus to the master module. The monitored information of the master module, along with that of the slave module, is passed to the UART (Universal Asynchronous Receiver Transmitter). The UART protocol is a point-to-point communication and transmits one data item at a time; the UART link has one master and one slave end. The output of both modules is transmitted through the UART protocol on the master module to the virtual terminal. The implementation of the CAN protocol along with the two modules is shown in Fig. 2.
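To illustrate the master/slave exchange over CAN, here is a hedged Python sketch using the python-can package (a host-side illustration only; the paper's implementation runs on PIC16F877A with MCP2515/MCP2551, and all arbitration IDs and payload layouts below are our assumptions):

    import can

    # Assumed arbitration IDs for the two modules (illustrative, not from the paper)
    SLAVE_STATUS_ID = 0x101   # fuel level, current, zone, wheel pressure
    MASTER_STATUS_ID = 0x100  # temperature, light level, gas leakage flag

    bus = can.interface.Bus(channel='can0', bustype='socketcan')  # 1 Mb/s bus assumed

    def send_slave_status(fuel_pct, current_dA, zone, pressure_psi):
        # Pack the slave readings into one 4-byte CAN frame and send it.
        msg = can.Message(arbitration_id=SLAVE_STATUS_ID,
                          data=[fuel_pct, current_dA, zone, pressure_psi],
                          is_extended_id=False)
        bus.send(msg)

    def poll_master():
        # On the master side: receive a frame and forward it to the dashboard.
        msg = bus.recv(timeout=1.0)
        if msg is not None and msg.arbitration_id == SLAVE_STATUS_ID:
            fuel, current, zone, pressure = msg.data[:4]
            print(f"fuel={fuel}% I={current/10:.1f}A zone={zone} P={pressure}psi")

    send_slave_status(fuel_pct=62, current_dA=48, zone=1, pressure_psi=32)
    poll_master()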
IV. SIMULATION OUTPUT
The output of each sensor is monitored through the LCD display. The sensors operate in two ways
during normal and abnormal conditions.
A.Master during Normal condition

Temperature

Fig 3 Temperature during Normal

Front Light Adjustment

Fig 4 Luminance during Short Range

Gas level

Fig 5 Normal Gas Level

B.Slave during Abnormal Condition

Fuel Level

Fig 6 Abnormal Fuel Level

Current Sensor

Fig 7 Short Circuit

RFID Sensor

Fig 8 School Zone



Fig9 Hospital Zone

Wheel Pressure Sensor

Fig10 Wheel Pressure Abnormal

C. CAN Protocol Implementation


The output of each module is connected to the CAN protocol through SPI on the Master module. The output is transmitted to a virtual terminal placed on the vehicle dashboard to monitor both modules' data on a single display window, and the sensors can be controlled through the terminal.

Fig 11 CAN Protocol Virtual Terminal

V. CONCLUSION
The proposed system has been designed with two modules, master and slave, which take the required actions against night-time driving accidents caused by the glare of headlight luminance, providing clear vision for the vehicle driver; short-circuit fault-line detection; gas leakage detection and prevention; engine-area temperature monitoring by analog and digital sensors; horn volume adjustment through RFID to reduce noise pollution and disturbance to the surroundings; and wheel pressure monitoring. Communication between the master and slave modules through the Controller Area Network serial communication protocol has been implemented; it carries out the required actions and displays the values on the dashboard for driver assistance. The proposed system achieves an active safety system at low cost.
VI. FUTURE WORK
In future, the proposed system can be extended to monitor additional functions such as anti-skid braking, automated manual transmission, gearbox control, traction control and door control.


Control Analysis of STATCOM under Power System Faults

1Dr.K.Sundararaju, 2T.Rajesh
1 Head of the Department, Dept. of Electrical and Electronics Engineering,
2 Dept. of Electrical and Electronics Engineering,
1,2 M.Kumarasamy College of Engineering, Karur, Tamil Nadu, India.

ABSTRACT - Voltage source converter based static synchronous compensators (STATCOMs) are used in transmission and distribution lines for voltage regulation and reactive power compensation. Nowadays angle-controlled STATCOMs have been deployed in the utilities to improve the output voltage waveform quality with lower losses compared to PWM STATCOMs. Even though the angle-controlled STATCOM has many advantages, its operation suffers when unbalanced and fault conditions occur in the transmission and distribution lines. This paper presents a dual angle control strategy for the STATCOM to overcome the drawbacks of the conventional angle-controlled and PWM-controlled STATCOMs. The proposal does not completely change the design of the conventional angle-controlled STATCOM; instead it only adds an AC oscillation component (θac) to the DC output (θdc) of the conventional angle controller, making it dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM.
Index terms - Dual angle control (DAC), hysteresis controller, STATCOM.
I. INTRODUCTION

Many devices are used in power systems for voltage regulation, reactive power compensation and power factor correction [1]. The voltage source converter (VSC) based STATCOM is one of the most widely used devices in large transmission and distribution systems for voltage regulation and reactive power compensation. Nowadays angle-controlled STATCOMs have been deployed in the utilities to improve the output voltage waveform quality with lower losses compared to PWM STATCOMs. The first commercially implemented installation was the 100-MVAr STATCOM at the TVA Sullivan substation, followed by the New York Power Authority installation at the Marcy substation in New York State [13], [16]. The 150-MVA STATCOM at the Laredo and Brownsville substations in Texas, the 160-MVA STATCOM at the Inez substation in eastern Kentucky, the 43-MVA PG&E Santa Cruz STATCOM and the 40-MVA KEPCO (Korea Electric Power Corporation) STATCOM at the Kangjin substation in South Korea are a few examples of commercially implemented and operating angle-controlled STATCOMs worldwide.

Even though the angle-controlled STATCOM has many advantages compared to other STATCOMs, its operation suffers from overcurrent and possible saturation of the interfacing transformers caused by the negative sequence that arises during unbalanced and fault conditions in transmission and distribution lines [4]. This paper presents a dual angle control strategy for the STATCOM to overcome the drawbacks of the conventional angle-controlled and PWM-controlled STATCOMs [2]. The proposal does not completely change the design of the conventional angle-controlled STATCOM; instead it only adds an AC oscillation component (θac) to the DC output (θdc) of the conventional angle controller, making it dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM. The angle-controlled STATCOM has fewer degrees of freedom than the PWM STATCOM, but it is widely used because its output voltage waveform quality is higher than that of the PWM STATCOM.

This paper presents a new control structure for high-power angle-controlled STATCOMs. Here the only control input of the angle-controlled STATCOM is θ, the phase difference between the VSC and ac bus instantaneous voltage vectors. In the proposed control structure, θ is split into two parts, θdc and θac. The DC part θdc, which is the final output of the conventional angle controller, is in charge of controlling the positive-sequence VSC output voltage. The oscillating part θac controls the dc-link voltage oscillations. The proposed STATCOM has the capability to operate under fault conditions and to ride through the faults and unbalances that occur in the transmission and distribution lines.
In this paper, we have implemented a new control structure in the STATCOM which has the ability to ride through disturbances such as sag and swell and other disturbances which appear in power systems. The analysis of the proposed control structure is carried out with MATLAB simulations and the experimental results are satisfactory.
II. CONVENTIONAL STATCOMS UNDER NORMAL AND SYSTEM FAULT CONDITIONS
Voltage source converters, the basic building blocks of FACTS devices, can be divided into two types based on their control methods [17]. The first is the PWM or vector-controlled STATCOM and the other is the angle-controlled STATCOM. In a PWM-based STATCOM, the final output voltage of the converter is increased or decreased by controlling the amplitude of the firing pulses given to the voltage source converter. These inverters can be uneconomical because the switching losses associated with the VSC are very high.

Fig.1. Control structure of vector controlled STATCOM

The second type is the angle-controlled STATCOM. Here, by shifting the output voltage angle of the STATCOM with respect to the line voltage angle for a particular time, the inverter can provide both inductive and capacitive reactive power.

Fig.2. Control structure of angle controlled STATCOM

By steering θ in the positive or negative direction and thereby varying the dc-link voltage, the final output voltage of the voltage source converter (VSC) can be increased or decreased [2]. Here the ratio between the dc and ac voltages of the STATCOM is kept constant. If the output voltage of the STATCOM is greater than the line voltage, it injects reactive power into the line; if it is lower than the line voltage, it absorbs reactive power from the line. Throughout this paper the performance of the proposed control structure is demonstrated by MATLAB simulations.
III. ANGLE CONTROLLED STATCOM UNDER UNBALANCED CONDITIONS
The VSC is the basic building block of all conventional and angle-controlled STATCOMs. Therefore, studying methods to improve the performance of the VSC under unbalanced and fault conditions is important and practical. Many methods have been proposed in the literature for improving the performance of voltage source converters, but not all of them are applicable to the angle-controlled STATCOM with its single control input, the angle (θ).
Only a few methods have been proposed in the literature for the angle-controlled STATCOM under ac fault conditions. The paper [8] calculates the dc-link capacitance that minimizes the negative-sequence current flow on the STATCOM tie line. It shows that, by choosing a particular value for the dc-link capacitor, the tie line with its inductor becomes an open circuit for the negative-sequence current under positive-sequence angle control. In another paper, a hysteresis controller is used in addition to the conventional angle controller: the VSC detects overcurrent and applies hysteresis switching to control its phase currents. Each VSC has its own overcurrent limit, which should not be exceeded in normal or fault conditions.

Fig.3. Equivalent circuit of a VSC connected to an AC system

This scheme protects the switches and limits the STATCOM current under fault conditions. However, dc-link voltage oscillations occur in this method and can cause the STATCOM to trip. The injection of poor-quality voltage and current waveforms into a faulted power system produces undesirable stress on power system components [7].
IV. ANALYSIS OF STATCOM UNDER UNBALANCED OPERATING CONDITIONS
In this method a set of unbalanced three-phase phasors is split into symmetrical positive-sequence and negative-sequence components and a zero-sequence component. The line currents in the three phases of the system are represented by equations (1)-(4) below.
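The equations themselves are not legible in this copy of the proceedings. A standard symmetrical-component decomposition consistent with the surrounding description, with I_p, I_n, φ_p, φ_n as the (assumed) positive- and negative-sequence magnitudes and phases, would be:

$$i_a = I_p \sin(\omega t + \phi_p) + I_n \sin(\omega t + \phi_n) + i_0 \tag{1}$$
$$i_b = I_p \sin(\omega t - 120^{\circ} + \phi_p) + I_n \sin(\omega t + 120^{\circ} + \phi_n) + i_0 \tag{2}$$
$$i_c = I_p \sin(\omega t + 120^{\circ} + \phi_p) + I_n \sin(\omega t - 120^{\circ} + \phi_n) + i_0 \tag{3}$$
$$i_0 = \tfrac{1}{3}\,(i_a + i_b + i_c) \tag{4}$$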

In the angle-controlled STATCOM the single control input angle θ must be applied identically to all three phases of the inverter. Here the zero-sequence components can be neglected because there is no path for neutral current flow in the three-phase line. The switching function of an angle-controlled STATCOM should always be symmetric. The switching functions for the three phases a, b and c are represented by equations (5)-(7) below.
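These equations are likewise missing from this copy; the standard fundamental-frequency switching functions for an angle-controlled converter, using the K and θ defined just below, take the form:

$$SW_a = K \sin(\omega t + \theta) \tag{5}$$
$$SW_b = K \sin(\omega t - 120^{\circ} + \theta) \tag{6}$$
$$SW_c = K \sin(\omega t + 120^{\circ} + \theta) \tag{7}$$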

where θ is the angle by which the inverter voltage leads or lags the line voltage vector and K is the factor of the inverter relating the dc-side voltage to the phase-to-neutral voltage at the ac-side terminals. The inverter terminal fundamental voltages are given by equations (8)-(10) below.
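Multiplying each switching function by the dc-link voltage V_dc gives the usual fundamental terminal voltages, which is presumably what the original equations (8)-(10) stated:

$$e_a = K V_{dc} \sin(\omega t + \theta) \tag{8}$$
$$e_b = K V_{dc} \sin(\omega t - 120^{\circ} + \theta) \tag{9}$$
$$e_c = K V_{dc} \sin(\omega t + 120^{\circ} + \theta) \tag{10}$$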

Basically, the unbalanced system can be analysed by postulating a set of negative-sequence voltage sources connected in series with the STATCOM tie line. The main idea of the dual angle control strategy is to generate a fundamental negative-sequence voltage vector at the VSC output terminals to attenuate the effect of the negative-sequence bus voltage. The generated negative-sequence voltage minimizes the negative-sequence current produced in the STATCOM under fault conditions. A third-harmonic voltage is produced at the VSC output terminals because of the interaction between the second-harmonic oscillations of the dc-link voltage and the switching function. This third-harmonic voltage is of positive sequence, with phases a, b and c 120° apart. The negative-sequence current produced under unbalanced ac system conditions generates second-harmonic oscillations on the dc-link voltage, which are reflected as a third-harmonic voltage at the VSC output terminals together with a fundamental negative-sequence voltage. As with the fundamental negative-sequence voltage, the dc-link voltage oscillations determine the amplitude of the second-harmonic voltage [3]. Hence, by controlling the second-harmonic oscillations on the dc-link voltage, the negative-sequence current can be reduced; a decreased negative-sequence current reduces the dc-link voltage second harmonic, which in turn reduces the third-harmonic voltage and current on the STATCOM tie line [12]. The control analysis of the STATCOM under fault conditions is carried out in MATLAB.
V. PROPOSED CONTROL STRUCTURE DEVELOPMENT
As discussed in the previous section, the STATCOM voltage and current during unbalanced conditions are calculated by connecting a set of negative-sequence voltage sources in series with the STATCOM tie line, as shown in Fig. 4.

Fig.4. Equivalent circuit of STATCOM with series negative sequence voltage source

Assume a second-harmonic oscillation on the dc-link voltage of the form
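A natural form for this assumption, with V_2 and φ_2 as the (assumed) ripple amplitude and phase, is:

$$v_{dc}(t) = V_{dc} + V_2 \sin(2\omega t + \phi_2)$$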


Then the reflected negative-sequence voltages at the STATCOM terminals of phases a, b and c are calculated by equations (11)-(13) below.
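The original expressions are illegible; multiplying the switching functions (5)-(7) by the ripple term of the assumed dc-link voltage and keeping the fundamental-frequency component yields a negative-sequence set of the following form (a reconstruction under the assumptions above, not necessarily the authors' exact equations):

$$e_a^- = \tfrac{1}{2} K V_2 \cos(\omega t + \phi_2 - \theta) \tag{11}$$
$$e_b^- = \tfrac{1}{2} K V_2 \cos(\omega t + 120^{\circ} + \phi_2 - \theta) \tag{12}$$
$$e_c^- = \tfrac{1}{2} K V_2 \cos(\omega t - 120^{\circ} + \phi_2 - \theta) \tag{13}$$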


The derivatives of the STATCOM tie-line negative-sequence currents with respect to time are calculated by equations (14)-(16) below.
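For a tie line modelled as a series R-L branch between the VSC terminal and the bus (an assumption of this sketch, with v_sa⁻, v_sb⁻, v_sc⁻ denoting the negative-sequence bus voltages), the current dynamics take the form:

$$\frac{di_a^-}{dt} = \frac{1}{L}\left(e_a^- - v_{sa}^- - R\, i_a^-\right) \tag{14}$$
$$\frac{di_b^-}{dt} = \frac{1}{L}\left(e_b^- - v_{sb}^- - R\, i_b^-\right) \tag{15}$$
$$\frac{di_c^-}{dt} = \frac{1}{L}\left(e_c^- - v_{sc}^- - R\, i_c^-\right) \tag{16}$$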

The transformation from the abc frame to the negative synchronous frame is defined as
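The matrix itself is missing from this copy; one common definition of the negative synchronous (reverse-rotating) frame transformation, applicable to any three-phase quantity f, is:

$$\begin{bmatrix} f_d^- \\ f_q^- \end{bmatrix} = \frac{2}{3} \begin{bmatrix} \cos\omega t & \cos(\omega t + 120^{\circ}) & \cos(\omega t - 120^{\circ}) \\ \sin\omega t & \sin(\omega t + 120^{\circ}) & \sin(\omega t - 120^{\circ}) \end{bmatrix} \begin{bmatrix} f_a \\ f_b \\ f_c \end{bmatrix}$$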

In the proposed structure, the angle θ is divided into two parts, θdc and θac. The angle θdc is the output of the positive-sequence controller and θac is the output of the negative-sequence controller. θac is a second-harmonic oscillation which generates a negative-sequence voltage vector at the VSC output terminals to attenuate the effect of the negative-sequence bus voltage under fault conditions. θac should be properly filtered; otherwise it leads to higher-order harmonics on the ac side.

Fig.5. Three-bus AC system model

VI. EXPERIMENTAL RESULTS

Fig.6. Voltage waveform of the grid (without STATCOM)

Here the voltage suddenly decreases in a particular time interval due to a sudden change in the load. When the load connected to the system does not remain constant, the current and voltage of the line do not remain constant either. During a fault, the current and voltage of the grid do not remain constant, so the STATCOM can be used to maintain the voltage. Voltage is an important protection parameter, since overvoltage can damage the insulation of transmission and protection devices.

Fig.7. Current waveform of the grid (without STATCOM)

Here, a reduction in the voltage amplitude is observed because of the sudden change in the load value; voltage and current behave inversely in normal power systems. The sudden increase of load is achieved by connecting an additional load to the grid by means of a switch. By giving a time sequence to the switch, the load can be connected to and disconnected from the grid automatically for a particular time interval.

Fig.8. Voltage waveform of the grid (with STATCOM)

Fig.9. Current waveform of the grid (with STATCOM)


Here, the voltage is maintained at a constant value owing to the reactive power compensation by the STATCOM, which supplies current at a leading angle to the line voltage. The STATCOM is connected to the grid by means of a switch.
VII. CONCLUSION

This paper proposed a new control structure to improve the performance of the conventional angle-controlled STATCOM under unbalanced and fault conditions occurring on the transmission line. The method does not completely redesign the structure of the STATCOM; instead it only adds θac oscillations to the output of the conventional angle controller. The θac oscillations generate a negative-sequence voltage at the VSC output terminals to attenuate the effect of the negative-sequence bus voltage generated at the line terminals during fault conditions.
REFERENCES
[1] C. Schauder and H. Mehta, "Vector analysis and control of advanced static VAR compensators," Proc. Inst. Elect. Eng.-C, Jul. 1993, vol. 140, pp. 299-306.
[2] H. Song and K. Nam, "Dual current control scheme for PWM converter under unbalanced input voltage conditions," IEEE Trans. Ind. Electron., vol. 46, no. 5, pp. 953-959, Oct. 1999.
[3] A. Yazdani and R. Iravani, "A unified dynamic model and control for the voltage-sourced converter under unbalanced grid conditions," IEEE Trans. Power Del., vol. 21, no. 6, pp. 1620-1629, Jul. 2006.
[4] Z. Xi and S. Bhattacharya, "STATCOM operation strategy with saturable transformer under three-phase power system fault," in Proc. IEEE Ind. Electron. Soc., 2007, pp. 1720-1725.
[5] M. Guan and Z. Xu, "Modeling and control of a modular multilevel converter-based HVDC system under unbalanced grid conditions," IEEE Trans. Power Electron., vol. 27, no. 12, pp. 4858-4867, Dec. 2012.
[6] Z. Yao, P. Kesimpar, V. Donescu, N. Uchevin and V. Rajagopalan, "Nonlinear control for STATCOM based on differential algebra," 29th Annual IEEE Power Electronics Conference, Fukuoka, vol. 1, pp. 329-334, 1998.
[7] F. Liu, S. Mei, Q. Lu, Y. Ni, F. F. Wu and A. Yokoyama, "The nonlinear internal control of STATCOM: theory and application," International Journal of Electrical Power & Energy Systems, vol. 25, pp. 421-430, 2003.
[8] M. E. Adzic, S. U. Grabic and V. A. Katic, "Analysis and control design of STATCOM in distribution network voltage control mode," Sixth International Symposium Nikola Tesla, Belgrade, vol. 1, pp. 1-4, 2006.
[9] N. G. Hingorani and L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems. Piscataway, NJ: IEEE Press, 1999.
[10] P. Rao et al., "STATCOM control for power system voltage control applications," IEEE Trans. Power Del., vol. 15, no. 4, pp. 1311-1317, Oct. 2000.
[11] D. Soto and R. Pena, "Nonlinear control strategies for cascaded multilevel STATCOMs," IEEE Trans. Power Del., vol. 19, no. 4, pp. 1919-1927, Oct. 2004.
[12] T. Aziz, M. J. Hossain, T. K. Saha and N. Mithulananthan, "VAR planning with tuning of STATCOM in a DG integrated industrial system," IEEE Trans. Power Del., vol. 28, no. 2, Apr. 2013.
[13] S. Bhattacharya, B. Fardenesh and B. Sherpling, "Convertible static compensator: voltage source converter based FACTS application in the New York 345 kV transmission system," presented at the 5th Int. Power Electron. Conf., Niigata, Japan, Apr. 2005.
[14] P. N. Enjeti and S. A. Choudhury, "A new control strategy to improve the performance of a PWM AC to DC converter under unbalanced operating conditions," IEEE Trans. Power Electron., vol. 8, no. 4, pp. 493-500, Oct. 1993.
[15] P. W. Lehn and R. Iravani, "Experimental evaluation of STATCOM closed loop dynamics," IEEE Trans. Power Del., vol. 13, no. 4, pp. 1378-1384, Oct. 1998.
[16] J. Sun, L. Hopkins, B. Sherpling, B. Fardanesh, M. Graham, M. Parisi, S. MacDonald, S. Bhattachary, S. Berkowitz and A. Edris, "Operating characteristics of the convertible static compensator on the 345 kV network," in Proc. IEEE PES Power Syst. Conf. Expo., 2004, vol. 2, pp. 73273.

Automatic Road Distress Detection and Characterization System

S.Keerthana, PG Scholar, Mr.C.Kannan
ME-Embedded System Technologies
Arunai Engineering College, Tiruvannamalai, Tamilnadu, India,
keerthanashanmu@gmail.com
Associate Professor/EEE
Arunai Engineering College, Tiruvannamalai, Tamilnadu, India,
kannanc305@gmail.com
Abstract - Maintenance must be performed before defects develop on a distressed road, as they can cause serious harm to valuable human life. The existing system gives a detailed description of the mobile laser scanning system (RIEGL VMX-450) that can be used. The RIEGL VMX-450 mobile laser scanning system offers extremely high measurement rates, providing dense, accurate and feature-rich data even at high driving speeds. This paper presents a study of the literature about crack and distress detection in road pavements, which has been a constant field of research in pavement management. Conventionally, humans were engaged to detect distress and cracks in the pavements using report sheets for their assessment, but this process was time-consuming and costly. We propose a new image database retrieval method based on shape information, used to detect road pavement crack defects from video, to characterize the identified cracks according to their type, and to obtain accurate road crack detection results using a shape-based image retrieval algorithm. SBIR will provide valuable information on the condition of a road network.
Index terms - mobile laser scanning system (MLS), road pavement, shape based image retrieval (SBIR), crack characterization, road quality maintenance.

I. INTRODUCTION
Cracking is one of the most common and important types of asphalt pavement distress. In many countries, considerable resources are spent to reduce errors and increase the performance of automatic evaluation of road quality. Generally, cracking distress can be divided into three main types: longitudinal, transversal, and alligator cracking.

Fig (a): Longitudinal Crack

Fig (b): Transversal Crack


Fig (c): Alligator Crack

Traditionally, data about pavement cracking has been gathered by human inspectors collecting data manually through visual surveys. Manual surveys are time-consuming, costly, and involve a fair amount of subjectivity. There are also dangerous risks to the surveying personnel due to the high speed of public traffic. The main aim of the proposed system is to reduce shadowing, eliminate white lane markings, and increase the image resolution and recognition rate by using the shape-based image retrieval algorithm.
II. LITERATURE SURVEY
Haiyan Guan et al. proposed a survey of literature about road feature extraction, giving a detailed description of a mobile laser scanning (MLS) system (RIEGL VMX-450) for transportation-related applications; the proposed RoadSEA detects road curbs from a set of profiles that are sliced along vehicle trajectory data. Haiyan Guan et al. noted that the assessment of pavement cracks is one of the essential tasks for road maintenance; the proposed ITVCrack comprises preprocessing, involving the separation of road points from non-road points using vehicle trajectory data, and the generation of a geo-referenced feature (GRF) image from the road points. Henrique Oliveira et al. propose a fully integrated system for the automatic detection and characterization of cracks in flexible road pavement surfaces: the first task addressed, crack detection, is based on a learning-from-samples paradigm, where a subset of the available image database is automatically selected and used for unsupervised training of the system; the second task deals with crack type characterization, for which another classification system is constructed to characterize the connected components of the detected cracks.
Haiyan Guan, Jonathan Li et al. present the development and implementation aspects of an automated object extraction strategy for rapid and accurate road marking inventory; the proposed road marking extraction method is based on 2-D geo-referenced feature (GRF) images, which are interpolated from 3-D road surface points through a modified inverse distance weighted (IDW) interpolation. Wei Chen et al. observe that data visualization is an efficient means to represent distributions and structures of datasets and to reveal hidden patterns in the data; their paper introduces the basic concept and pipeline of traffic data visualization, provides an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical, and categorical properties of traffic data. M. Salman et al. plan to enhance the existing algorithm by analyzing connected components and introducing further post-processing techniques; a novel approach to automatically distinguish cracks in digital pavement images is proposed in their paper. Wei Na et al. provide an approach for the automatic classification of pavement surface images: first, image enhancement is performed by a mathematical morphological operator; secondly, pavement image segmentation is performed to separate the cracks from the background.
Tien Sy Nguyen et al. present a new measure which simultaneously takes into account brightness and connectivity in the segmentation step for crack detection on road pavement images; their method considers all characteristics of cracks without restricting crack orientations and forms. Y.H. Tseng et al. focus on developing strategies for executing inspection tasks using robots. They developed three strategies: the first is random walk; the second is random walk with map recording; the third adds vision capability to the robot. The three proposed strategies have a higher possibility of revisiting distresses, which makes the results more reliable. Haiyan Guan et al. present an automated approach to the detection and extraction of road markings from mobile laser scanning (MLS) point clouds by taking advantage of multiple data features; a test dataset collected by a RIEGL VMX-450 MLS system was used for validation of the proposed method, and the experimental results demonstrated that the method performs well in extracting road markings from a large volume of MLS data.
III. PROPOSED ALGORITHM
A. SHAPE BASED IMAGE RETRIEVAL
Retrieval efficiency and accuracy are two important issues in designing a content-based database retrieval system. We propose a new image database retrieval method based on shape information. This system achieves both the desired efficiency and accuracy using a two-stage hierarchy: in the first stage, simple and easily computable statistical shape features are used to quickly browse through the database and generate a moderate number of plausible retrievals; in the second stage, the outputs from the first stage are screened using a deformable template matching process to discard spurious matches. This demonstrates the need for developing shape features that better capture the human perceptual similarity of shapes.
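The paper does not give the algorithm in code; the following minimal Python/OpenCV sketch illustrates the two-stage idea under stated assumptions. Cheap Hu-moment statistics prune the database in stage one, and a finer contour match (cv2.matchShapes standing in here for the deformable template step) rescores the survivors; the (name, contour) database layout and the keep count are illustrative.

```python
# A minimal sketch of the two-stage shape-based retrieval hierarchy.
import cv2
import numpy as np

def hu_features(contour):
    """Cheap statistical shape features: log-scaled Hu moments."""
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def retrieve(query, database, keep=20):
    """database: list of (name, contour). Returns (score, name) matches."""
    q = hu_features(query)
    # Stage 1: quick browse -- rank by distance in Hu-moment space.
    coarse = sorted(database, key=lambda e: np.linalg.norm(hu_features(e[1]) - q))
    plausible = coarse[:keep]
    # Stage 2: screen plausible retrievals with a finer contour match.
    scored = [(cv2.matchShapes(query, c, cv2.CONTOURS_MATCH_I1, 0.0), name)
              for name, c in plausible]
    return sorted(scored)
```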

Fig: Block Diagram of Pavement Crack Detection

B. VIDEO FRAME EXTRACTION

In image processing, the intensity levels are analyzed frame by frame. The various frames of the video are extracted and analyzed using MATLAB code; an equivalent sketch is given after the preprocessing subsection below.
C. PREPROCESSING
In preprocessing, the images obtained for crack analysis are prepared for further processing. Acquired images may be of different dimensions, so an image-resizing algorithm is applied to convert them into square images. The resizing algorithm uses various interpolation techniques to obtain an image of the desired dimensions: it adds the specified number of rows and columns to the given image and scales them to the required size, and it computes the required number of rows and columns to preserve the aspect ratio of the image if they are not specified. Preprocessing also uses a median filter. The median filter is a nonlinear digital filtering technique, often used to remove noise; it is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.
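A minimal Python/OpenCV sketch of the frame extraction and preprocessing steps just described (the paper itself uses MATLAB); the input file name, target size and filter aperture are placeholder assumptions:

```python
# Extract video frames, resize to a square image, and denoise with a
# median filter, as described above.
import cv2

cap = cv2.VideoCapture('pavement_survey.avi')   # hypothetical input video
frames = []
while True:
    ok, frame = cap.read()                      # grab the next video frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    square = cv2.resize(gray, (512, 512), interpolation=cv2.INTER_AREA)
    clean = cv2.medianBlur(square, 5)           # edge-preserving noise removal
    frames.append(clean)
cap.release()
```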

D. EDGE DETECTOR
Edge detection is the name for a set of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in a 1-D signal is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction. The Sobel edge detector is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computation. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high-frequency variations in the image.
The operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives: one for horizontal changes and one for vertical. If we define A as the source image, and Gx and Gy as two images which at each point contain the horizontal and vertical derivative approximations, the computations are as follows:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A \qquad G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * A$$

where * denotes the 2-dimensional convolution operation. Since the Sobel kernels can be decomposed as the products of an averaging and a differentiation kernel, they compute the gradient with smoothing. For example, the horizontal kernel

$$\begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & +1 \end{bmatrix}$$

can be read as smoothing vertically while differentiating horizontally.

The x-coordinate is defined as increasing in the rightward direction and the y-coordinate as increasing in the downward direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude using

$$G = \sqrt{G_x^2 + G_y^2}$$

Using this information, we can also calculate the gradient's direction:

$$\Theta = \operatorname{atan2}(G_y, G_x)$$

For example, Θ is 0 for a vertical edge which is lighter on the right side.
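For reference, the same quantities can be computed with OpenCV's built-in Sobel kernels; this short Python sketch mirrors the definitions above:

```python
# Compute the Sobel gradient magnitude G and direction Theta of a
# grayscale image, matching the equations above.
import cv2
import numpy as np

def sobel_edges(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal derivative Gx
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical derivative Gy
    magnitude = np.hypot(gx, gy)                      # G = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)                    # Theta
    return magnitude, direction
```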
IV. SIMULATION RESULT

Fig (a): Input video frame



Fig (b): Image smoothening

Fig (c): Image saturation

Fig (d): Image segmentation



Fig (e): Image segmentation results

Fig (f): Global Cracks Detected

V. CONCLUSION
Based on the detected road surface data, automated road marking and pavement crack detection using the Shape Based Image Retrieval (SBIR) algorithm is implemented. Road distress such as potholes and pop-outs can cause serious harm to valuable human life. Developing a project like this will reduce maintenance cost and create a better road network for people to use, with neat and safe journeys.
REFERENCES
[1] Haiyan Guan, Jonathan Li, Yongtao Yu, Michael Chapman, and Cheng Wang, (2015) "Automated Road Information Extraction From Mobile Laser Scanning Data," IEEE Transactions on Intelligent Transportation Systems, Feb, Vol. 16, Issue 1, Page No: 194-205.
[2] Haiyan Guan, Jonathan Li, Yongtao Yu, Michael Chapman, Cheng Wang, and Ruifang Zhai (2015) "Iterative Tensor Voting for Pavement Crack Extraction Using Mobile Laser Scanning Data," IEEE Transactions on Geoscience and Remote Sensing, March, Vol. 53, No. 3, Page No: 1527-1537.
[3] Haiyan Guan, Jonathan Li, Yongtao Yu, Zheng Ji, (2015) "Using Mobile LiDAR Data for Rapidly Updating Road Markings," IEEE Transactions on Intelligent Transportation Systems, March, Vol. 18, No. 1, Page No: 125-137.
[4] Wei Chen, Fangzhou Guo, and Fei-Yue Wang, Fellow, (2015) "A Survey of Traffic Data Visualization," IEEE Transactions on Intelligent Transportation Systems, March, Vol. 14, No. 1, Page No: 1-15.
[5] Henrique Oliveira and Paulo Lobato Correia, (2013) "Automatic Road Crack Detection and Characterization," IEEE Transactions on Intelligent Transportation Systems, March, Vol. 14, No. 1, Page No: 155-168.
[6] M. Salman, S. Mathavan, K. Kamal, M. Rahman, (2013) "Pavement Crack Detection Using the Gabor Filter," IEEE Annual Conference on Intelligent Transportation Systems, Oct, Vol. 6-9, No. 1, Page No: 2039-2044.
[7] Haiyan Guan, Jonathan Li, Yongtao Yu, Cheng Wang, (2013) "Rapid Update of Road Surface Databases Using Mobile LiDAR: Road Markings," Fifth International Conference on Geo-Information Technologies for Natural Disaster Management, Page No: 124-129.
[8] Wei Na and Wang Tao, (2012) "Proximal Support Vector Machine Based Pavement Image Classification," IEEE Fifth International Conference on Advanced Computational Intelligence (ICACI), Oct, Vol. 18-20, No. 1, Page No: 686-688.
[9] Tien Sy Nguyen, Stéphane Bégot, Florent Duculty, Manuel Avila, (2011) "A New Method for Crack Detection on Pavement Surface Images," IEEE International Conference on Image Processing, March, Vol. 50, No. 1, Page No: 1069-1072.
[10] Y.H. Tseng, S.C. Kang, Y.S. Su, C.H. Lee, J.R. Chang, (2010) "Strategies for Autonomous Robots to Inspect Pavement Distress," IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct, Vol. 18-22, No. 1, Page No: 1196-1201.
[11] Zhao Liu, Daxue Liu, Tongtong Chen, Chongyang Wei, (2013) "Curb Detection Using 2D Range Data in a Campus Environment," Seventh International Conference on Image and Graphics, Feb, Vol. 16, Issue 1, Page No: 291-296.


Circle Based Path Planning Algorithm for Mobile Anchor Trajectory Localization in Wireless Sensor Networks - A Review

D.Poovaradevi, PG Scholar, Mrs.S.Priyadharsini
ME Embedded System Technologies
Arunai Engineering College, Tiruvannamalai, Tamilnadu, India
Poovaradevi.ece@gmail.com
Assistant Professor/EEE, Arunai Engineering College
Tiruvannamalai, Tamilnadu, India
priyamshanmugam@gmail.com
Abstract - Many applications require a high degree of sensor nodes identifying their locations in a wireless sensor network. Location information is gathered from manual setting or GPS devices. Since manual setting requires a huge cost of human time and GPS requires expensive devices, both approaches are not applicable for large-scale WSNs. A mobile anchor node is used to find the positions of location-unknown nodes. An optimal path planning mechanism is evaluated to minimize the localization time and to increase the accuracy. In the proposed system a circle-based path planning mechanism is implemented, because it covers the four corners of the sensing field by increasing the diameter of the concentric circles. With the sensor node located at the center of such a circle, a single mobile anchor node moves through the sensing field to determine the localization and also to detect hacker nodes in the WSN. The performance of the proposed system is evaluated through a series of simulations in the NS-2 environment.
Index terms - Wireless Sensor Network (WSN), localization, mobile anchor node, circle based path planning mechanism.

I. INTRODUCTION
A Wireless Sensor Network (WSN) is a multi-hop wireless network consisting of a large number of small, low-cost, low-power sensor nodes which perform the intended monitoring functions in a target area. A sensor node senses data and forwards it over a wireless medium to a remote data collection device. Localization, determining where a given node is physically located, is a critical requirement and is necessary for data correlation. Location information is gathered from manual setting or GPS devices. Manual setting requires a huge cost of human time, while GPS requires expensive devices and does not work in indoor or underwater environments; both approaches are therefore not applicable for large-scale WSNs.
Localization methods can be classified according to the type of information used. The first is range-based localization, where locations are calculated from node-to-node distance estimates or inter-node angles. The second is range-free localization, in which locations are determined by radio connectivity constraints. Both methods can be complex and expensive because they may require infrared, X-ray or ultrasound techniques to calculate distances.
In this paper we propose a circle-based path planning mechanism to determine the localization of each node, because it covers the four corners of the sensing field by increasing the diameter of the concentric circles, and also to detect hacker nodes in groups of wireless sensor networks.
II. LITERATURE SURVEY
A predefined trajectory algorithm [1] was proposed to achieve accurate and low-cost sensor localization and to minimize the localization error of the sensor nodes using a single mobile anchor node; an obstacle-resistant trajectory was also developed to handle obstacles in the sensing field. A route planning mechanism for MANETs [2] was developed using a single mobile anchor node to determine the locations of sensor nodes and also to detect black-hole attacks such as denial-of-service (DoS) attacks, in which a malicious node drops the incoming packets between source and destination, by improving the security of each node. That algorithm mainly focuses on determining and improving the security of the routing protocol Ad-hoc On-demand Distance Vector (AODV), and the paper presents better results for end-to-end delay, packet delivery ratio and throughput.
However, in this approach each node requires the behaviour of the broadcasting node for trusting the receiving nodes. Particle Swarm Optimization (PSO) [3] uses a single mobile anchor node to determine the location of individual sensor nodes; it is a population-based stochastic technique. The PSO algorithm shares many similarities with the evolutionary computation technique called the Genetic Algorithm (GA), but it has no evolution operators such as crossover and mutation. It has potential solutions called particles, and at each time step it changes the velocity of each particle toward its best location. The simulation results of the PSO algorithm are analysed on three metrics: localization error, percentage of localized nodes and chord length. However, the accuracy of the localization results depends on the length of the chord. The Scan algorithm [4] determines the locations of sensor nodes using beacon points with more than two mobile anchor nodes, to increase the percentage of localized sensors and to minimize the localization error. The scan method is a static path planning scheme; the unknown nodes use omni-directional antennas while the mobile anchor nodes use directional antennas to receive feedback messages from the unknown nodes, and a virtual force is calculated to determine the locations of the sensor nodes. A superior path planning mechanism [5] for mobile-beacon-assisted localization is based on the Z-curve. Mobile beacon points are used to decrease the time required for localization, increase the accuracy of the positions and increase the coverage; the Z-curve mechanism finds the shortest path to be followed by the mobile beacons while broadcasting their beacon messages to the unknown sensor nodes, and an obstacle-resistant trajectory is also proposed to handle obstacles in the sensing field.
An energy-efficient localization strategy using particle swarm optimization (PSO) [6] determines the location of every sensor. Static sensors are deployed in the geographical area for data gathering, and the position of the mobile sink, which relocates to improve the performance of the overall network, is estimated with respect to neighbouring nodes: a few sensors with known positions are deployed in the area, and using their information the mobile sink tries to determine the locations of unknown nodes by PSO. This energy-efficient localization method reduces error accumulation but increases the implementation cost. A localization algorithm [7] using a mobile anchor node based on a regular hexagon in two-dimensional WSNs determines the locations of unknown nodes by trilateration; it can achieve a high localization ratio and high location accuracy when the communication range is not smaller than the trajectory resolution. The performance of this localization algorithm depends upon the communication range, broadcast interval, movement trajectory and path length; several mobile anchor nodes are proposed to reduce the localization time and improve the localization accuracy. A path planning mechanism [8] determines the locations of sensor nodes and minimizes the localization error; an obstacle-resistant trajectory is also developed to handle obstacles in the sensing field, since any obstacle can obstruct the radio connectivity between the anchor node and a sensor node. A single mobile anchor is used because multiple anchor nodes cause beacon collisions in the sensing field.
Fuzzy-logic-based localization [9] for multiple mobile anchors uses a fuzzy grid prediction scheme in simulation, with the hardware implemented on an iRobot mobile host. The authors formulate mobile node localization, fuzzy multilateration and a fuzzy grid prediction scheme for node localization in noisy environments; the distances between the mobile node and the anchor nodes are fuzzified to obtain the fuzzy location. Optimization algorithms [10] were developed to solve the flip ambiguity problem in localization and to localize blind nodes in WSNs. Simulated annealing (SA), particle swarm optimization and the genetic algorithm are used to solve the flip ambiguity problem, to achieve accurate and unique localization and to save energy: GA is used to localize blind nodes, SA is used to refine the localization, and PSO is used to solve the flip ambiguity problem. Node localization [11] is implemented in environmental applications such as disaster relief, forest fire tracking and target tracking; it is developed to report the origin of events, assist group querying of sensors, support routing and answer questions on network coverage, and it depends upon factors such as anchor density, node density, computation and communication cost, and localization accuracy. A novel iterative multilateral localization algorithm [12] based on a time-round mechanism and anchor node triangles reduces error accumulation and location error, preventing the abnormal phenomena caused by the trilateration problem by limiting the number of neighbouring beacon points used in each time round. It achieves high localization accuracy even with large range errors, is applicable to the RSSI range-based technique, and uses two events: a periodic timer trigger event and a receiving-location-data-packet event. Mobile Anchor Positioning (MAP) [13] was made for WSNs to determine locations by using the Global Positioning
System (GPS). It is a range-free localization method which uses the beacon points of the mobile anchor and the location packets of neighbouring nodes to determine node locations. The anchor node is equipped with GPS and broadcasts its coordinates to the sensor nodes as it moves through the network, and the sensor nodes collect enough beacons to calculate their locations. Increasing the number of mobile anchor nodes increases the percentage of localized nodes but also increases the execution time of localization. Three novel subspace approaches [14] were analysed for cooperative node localization in fully connected networks, to reduce the biases and mean square error of the sensor position estimates. The full-set and minimum-set subspace algorithms are used for centralized processing, while the distributed subspace algorithm takes the available distance information and produces a solution with no iteration. The drawback of these algorithms is that they work only on connectivity information, which can only find shortest-path distances between all pairs of nodes, and their theoretical performance is unavailable. A distributed node localization algorithm [15] called MB-IPF was also surveyed: each mobile anchor node is equipped with GPS and moves around the sensing field based on the Gauss-Markov mobility model, and an unknown node estimates its position in a fully connected mode based on the received mobile beacons, in three phases: information sampling, initial point estimation and self-localization. This literature survey covers a few recent papers on path planning mechanisms.
III. LOCALIZATION TECHNIQUES
Localization is an essential requirement for determining where a sensor node is physically located in many WSN applications such as forest fire tracking, target tracking and military surveillance.
SYSTEM ARCHITECTURE:

Fig 3.1 system architecture

The system architecture consists of mobile sinks differentiated by different colors. The mobile sink is used to collect the data from the sensing field. Localization methods are classified into two types: 1) range-based localization and 2) range-free localization. Range-based algorithms use point-to-point distance calculation and angle estimation to compute the position of a sensor node, based on parameters such as Received Signal Strength Indicator (RSSI), Time of Arrival (TOA), Time Difference of Arrival (TDOA) and Angle of Arrival (AOA). Range-free localization is further classified into two types: local techniques and hop-based techniques. In the local technique each mobile anchor node is equipped with GPS to determine the locations of unknown nodes; in the hop-based technique, Distance Vector (DV) routing is used to find positions from landmark announcements. A localization method is characterized by the following factors.
Accuracy: In many WSN applications accuracy is essential for determining locations; for example, in military applications the sensor network is deployed for intrusion detection.
Power: Power is important for computation. Each sensor node has limited power, supplied by a battery.
Cost: Cost is a critical requirement in localization; many localization algorithms aim for a low development cost.
Static Nodes: Static nodes are deployed in the environment with computation and sensing capability to sense and forward data from source to destination.
Mobile Nodes: Mobile anchor nodes are also deployed in the wireless environment and are equipped
with a Global Positioning System (GPS) to determine the positions of sensor nodes. A mobile node consumes more power than a static node. The mobile anchor node serves as a reference node for determining the locations of unknown nodes.

Fig 3.2 Localization Concept

In the localization method, with the sensor node fixed at the centre of its communication circle, a single mobile anchor moves through the sensing field broadcasting beacon messages; the sensor node selects appropriate positions of the anchor node, called beacon points, to construct chords of its communication circle. Three beacon points are required to construct the communication circle, and the accuracy of localization depends upon the chord lengths. In this way the location of an unknown node is determined using the mobile anchor node.
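Since the node sits at the centre of the circle through its beacon points, the position estimate reduces to a circumcenter computation. The following self-contained Python sketch shows that geometry; the function name and sample points are illustrative, and real schemes must additionally handle ranging noise:

```python
# Estimate a node's position as the circumcenter of three beacon points
# heard on the mobile anchor's trajectory (all lie on the node's
# communication circle).
import numpy as np

def circumcenter(p1, p2, p3):
    """Return the center of the circle through three 2-D beacon points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("beacon points are collinear; no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

# Example: three beacon points on a unit circle around the origin.
print(circumcenter((0, 1), (1, 0), (-1, 0)))   # -> [0. 0.]
```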
IV. PATH PLANNING ALGORITHM
Path planning can be classified as static or dynamic. Static path planning designs the movement trajectory before execution. Dynamic path planning is proposed for real-time distributions of unknown nodes in a given ROI and consists of a Breadth-First (BRF) algorithm and a Backtracking Greedy (BTG) algorithm. Static path planning can follow the SCAN, HILBERT or S-CURVE method. As the SCAN method is a straight-line method, it cannot guarantee that the length of each chord exceeds a certain threshold. The HILBERT method is a curve method which cannot guarantee that a sensor node obtains three or more beacons to form a communication circle, and with the S-CURVE method it is difficult to ensure that each sensor node can form valid chords. The proposed CIRCLE method guarantees coverage of the four corners of the sensing field by increasing the diameter of the communication circles, as sketched below.
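To make the trajectory concrete, the short Python sketch below generates waypoints on concentric circles centred in a square field, growing the radius until the circles reach the corners. The field size, radial step and sample count are assumptions for illustration, not parameters from the reviewed papers:

```python
# Illustrative CIRCLE-style trajectory: concentric circles of growing
# diameter so that beacons also reach the four corners of the field.
import numpy as np

def circle_trajectory(side, r_step, points_per_circle=36):
    """Waypoints on concentric circles centred in a side x side field."""
    centre = np.array([side / 2.0, side / 2.0])
    waypoints = []
    r = r_step
    r_max = side / np.sqrt(2.0)      # radius reaching the field corners
    while r <= r_max:
        for k in range(points_per_circle):
            ang = 2.0 * np.pi * k / points_per_circle
            waypoints.append(centre + r * np.array([np.cos(ang), np.sin(ang)]))
        r += r_step
    # Waypoints falling outside the field would be clipped in practice.
    return waypoints

path = circle_trajectory(side=100.0, r_step=12.5)
```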

Fig 4.1 circle based path planning


In the circle method the total path length is obtained from the following quantities:

L => total path length in the sensing field;
R-X => the distance between two vertical segments of the mobile anchor trajectory;
R => radius of the mobile anchor's communication circle;
X => set in the range 0 < X ≤ R.
A) DATA FLOW DIAGRAM

Fig 4.2 Data flow diagram

The data flow diagram shows the steps to determine the localization using the path planning mechanism.
V. SIMULATION RESULT
The simulation is implemented on the NS-2 platform, with IEEE 802.11 used as the MAC layer in our simulation experiments. The simulation uses different parameters whose values are shown in the table.

Fig 4.3 circle based path planning



Throughput

Throughput Graph:

Packet delivery ratio

PDR Graph:

Packet Drop

Packet Drop Graph:

VI. CONCLUSION
In this paper we propose a circle-based path planning algorithm to determine the location of every sensor node and to minimize the localization time. The sensor node is located at the center of its communication circle, and a single mobile anchor node moves through the sensing field to perform the localization. The simulation shows better results for packet drop, packet delivery ratio (PDR) and throughput. The proposed algorithm can be used in many WSN applications such as flood detection, forest fire detection and military surveillance, and a sniffer is used to detect hacker nodes.
The future scope is to reduce the localization error caused by obstacles in the sensing field.
REFERENCES
[1] S. Divya and P. Purusothaman, "Predefined Trajectory Algorithm for Mobile Anchor Based Localization in Wireless Sensor Network," International Journal of Computer Science and Network Security, 2015, Vol 1, pp. 71-76.
[2] Prof. Shubhangini Ugale, Ms. Poonam B. Kshirsagar, Mr. Ashwin W. Motikar, "Route Planning Algorithm for Localization in Wireless Sensor Network," International Journal of Advanced Research in Computer and Communication Engineering, 2015, Vol 4, Issue 6, pp. 526-528.
[3] P. Sangeetha and B. Srinivasan, "Mobile Anchor Based Localization Using PSO and Path Planning Algorithm in Wireless Sensor Networks," International Journal of Innovative Research and Advanced Studies, 2015, Vol 2, Issue 2, pp. 5-10.
[4] Anupnam Kumar, "Localization Using Beacon Points in Wireless Sensor Network," IJSEC, 2015, pp. 1259-1261.
[5] Javad Rezazadeh, Marjan Moradi, Abdul Samad Ismail, Eryk Dutkiewicz, "Superior Path Planning Mechanism for Mobile Beacon-Assisted Localization in Wireless Sensor Networks," IEEE Sensors Journal, 2014, pp. 1-13.
[6] Ms. Prerana Shrivastava, Dr. S. B. Pokle, Dr. S. S. Dorle, "An Energy Efficient Localization Strategy Using Particle Swarm Optimization in Wireless Sensor Networks," International Journal of Advanced Engineering and Global Technology, 2014, Vol 02, Issue 19, pp. 17-22.
[7] Guangjie Han, Chenyu Zhang, Jaime Lloret, Joel J. P. C. Rodrigues, "A Mobile Beacon Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks," The Scientific World Journal, 2014, pp. 1-12.
[8] Chia-Ho Ou, Wei-Lun He, "Path Planning Algorithm for Mobile Anchor-Based Localization in Wireless Sensor Networks," IEEE Sensors Journal, 2013, Vol 13, No 02, pp. 466-475.
[9] Harsha Chenji, Radu Stoleru, "Toward Accurate Mobile Sensor Network Localization," IEEE Transactions on Mobile Computing, 2013, Vol 12, No. 6, pp. 1094-1106.
[10] Mansoor-ul-haque, Farrukh Aslam Khan, Mohsin Iftikhar, "Optimized Energy-efficient Iterative Distributed Localization," IEEE International Conference on Systems, Man, and Cybernetics, 2013, pp. 1407-1512.
[11] P. K. Singh, Bharat Tripathi, Narendra Pal Singh, "Node Localization in Wireless Sensor Networks," International Journal of Computer Science and Information Technologies, 2011, Vol. 2(6), pp. 2568-2572.
[12] Zhang Shaoping, Li Guohui, Wei Wei, Yang Bing, "A Novel Iterative Multilateral Localization Algorithm for Wireless Sensor Networks," Journal of Networks, 2010, Vol. 5, No. 1, pp. 112-119.
[13] W.-H. Liao, Y.-C. Lee, S. P. Kedia, "Mobile Anchor Positioning for Wireless Sensor Networks," The Institution of Engineering and Technology, 2010, Vol. 5, Issue 7, pp. 914-921.
[14] Frankie K. W. Chan, H. C. So, W.-K. Ma, "A Novel Subspace Approach for Cooperative Localization in Wireless Sensor Networks Using Range Measurements," IEEE Transactions on Signal Processing, 2009, Vol. 57, No. 1, pp. 260-269.
[15] Kuang Xing-hong, Shao Hui-he, "Distributed Localization Using Mobile Beacons in Wireless Sensor Networks," The Journal of China Universities of Posts and Telecommunications, 2007, Vol. 14, pp. 7-12.
180

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL,


ELECTRONICS AND COMPUTATIONAL INTELLIGENCE (ICAEECI16)

Collision Free Packet Transmission for Localization
P.Saranya, V.Saravanan
ME-Embedded System Technologies,
Department of Electrical & Electronics Engineering, Arunai Engineering College,
Tiruvannamalai, Tamilnadu, India. saranyapalaniece@gmail.com
Associate Professor,
Department of Electrical & Electronics Engineering, Arunai Engineering College,
Tiruvannamalai, Tamilnadu, India. vsaranaec@yahoo.co.in
Abstract: Localization is a difficult task in an underwater acoustic sensor network (UASN) because it requires multiple packet exchanges. The medium access control (MAC) determines how sensor nodes share the channel for packet exchange between anchor nodes and target nodes, and different tasks require different MAC protocols to obtain the maximum network efficiency. The existing method uses a single anchor node and finds the location of a single node at a time, but this produces delay during communication and needs more time to find the locations of all nodes. To overcome these issues, an efficient MAC protocol is designed which employs multiple anchor nodes for localization. It reduces the communication delay by using a low-complexity algorithm and a location-aided routing protocol.
Index terms: medium access control (MAC), anchor node, dynamic multi-channel packet scheduling, underwater acoustic sensor node, location-aided routing protocol, low-complexity algorithm

I. INTRODUCTION
In recent years, many researchers have been investigating underwater acoustic sensor networks (UWASNs), which are useful for oceanic environment monitoring, oceanic geographic data collection, offshore exploration, assisted navigation, disaster prevention, and tactical surveillance. The UWASN channel has many specific characteristics, such as long propagation delay, low available bandwidth, multi-path transmission, and Doppler spread, which make it different from terrestrial wireless networks.

Fig 1: 2D Architecture of Underwater Acoustic Sensor Network [2]

The common communication architecture for an underwater wireless sensor network is shown in Figure 1. Sensor nodes in the network may also communicate with a surface station and with autonomous underwater vehicles deployed in the underwater sensor network; the location of each sensor node must be determined to interpret the sensed data. Radio-frequency communication does not work well underwater, and the well-known use of GPS is restricted to surface nodes. Hence, the packet exchange between underwater nodes and surface nodes needed for localization must be carried out using acoustic communication. UASN acoustic channels have unique characteristics: long propagation delay, limited bandwidth, and multipath interference. Localization schemes in a UASN should fulfill the following desirable qualities: high accuracy, fast transmission, wide communication coverage between nodes, and low cost. The nodes are organized into groups, and each group has an anchor node that forwards data transmissions to the nearest surface station. At the second level, microwave communication is used to relay the information from the surface nodes to the land-based data collection centre. Once the data reaches the coastal collection centre, it can then be relayed to an international collection centre via satellite communication (Fig. 1). Major challenges that follow this model include limited power, optimization of energy harvesting techniques, increased bit error rates (BER), and low signal-to-noise ratio (SNR) in the case of low-power nodes. Efficient routing techniques can play a vital role in controlling the power consumption by employing low-power multi-hop transmission, provided that the propagation delay does not exceed the required limits.
This paper aims to improve the network efficiency; multiple anchor nodes are used to find the locations of multiple nodes. Low-complexity localization algorithms and location-aided routing protocols have been introduced and analyzed in the literature, and they differ considerably from the ones studied for terrestrial wireless sensor networks. Hamid Ramezani and Geert Leus [1] introduced the concept of dynamic multi-channel packet scheduling, in which the network splits the existing channel into several sub-channels to reduce the scheduling time, and minimized the duration of the localization task using two low-complexity algorithms; however, most of the underwater sensor nodes are not covered for localization, since only a single anchor node is located at a time. Hamid Ramezani et al. [2] analyzed two classes of packet scheduling, collision-free and collision-tolerant, in an underwater acoustic sensor network, intended to minimize the localization time. In collision-free packet transmission over a fully connected single-hop network, each anchor transmits immediately after receiving the previous anchor's packet, and an ordering of the packet sequence minimizes the localization time, but only for a single-hop network; collision-tolerant schemes can control collisions so that each anchor node's transmission succeeds in a single-hop network. H. Ramezani et al. [3] analyzed localization in an underwater wireless network with fixed nodes, tracking a mobile target node using time-of-flight (ToF) measurements in an underwater environment with an isogradient sound speed profile. Rahman Zandi et al. [4] designed a simple range-based localization scheme in which the received message power is compared with that of the sent message at the sensor nodes; some messages are lost due to the irregular communication range in the sensor network. M. C. Domingo [5] designed a magnetic induction (MI) technique which reduces the path loss and moderately extends the communication range in sea water; however, when the acoustic channel is severely degraded, the magnetic induction technique is more complex. In fresh water it achieves better performance, and the range has been extended to hundreds of meters. B. Gulbahar and O. B. Akan [6] analyzed underwater magneto-inductive channels in terms of basic communication metrics: signal-to-noise ratio (SNR), bit error rate, connectivity, and communication bandwidth. For 3D networks covering sea depths of hundreds of meters and areas of a few km, fully connected networks with communication bandwidths extending from a few to tens of kHz are obtained, and high SNR over networking areas reaching a few km in the deep sea is achieved by forming a fully connected and power-efficient multi-coil network. The dependence of the grid network's performance on inter-coil distance, coil radius, wire diameter, and capacitance is also explored. Ameer P M and Lillykutty Jacob [7] proposed a modified version of the stochastic proximity embedding localization algorithm, which exhibits good performance in underwater sensor networks and uses the ranges between all nodes with unknown locations; however, the unknown-location nodes cannot determine the localization error and measurement error. Wouter van Kleunen et al. [8] proposed a MAC protocol designed for time synchronisation, localization, and scheduled communication in small clusters of underwater nodes. It addresses these three aspects with two phases: a coordination phase, which measures the propagation delay between nodes, and a communication phase, in which the nodes communicate according to a schedule. Together, the two phases can perform relative positioning of the nodes, a viable approach underwater, but the scheme does not handle unknown nodes in large cluster networks. J. P. Kim et al. [9] designed a MAC protocol that enables many sensor nodes in large-scale networks to share the limited channel resource, an indispensable component for maximizing the localization coverage and speed while minimizing the communication costs; this is achieved with a MAC protocol that requires no node coordination, and the emphasis is on the impact of the MAC protocol in a two-dimensional network with fixed nodes. P. Nicopolitidis et al. [10] proposed an adaptive push system for acoustic dissemination of data to underwater clients, which adapts the broadcast schedule to the a priori unknown needs of the clients and efficiently combats the high latency of the underwater acoustic wireless environment; the adaptive push system is not affected by the broadcast server. Z. Peng et al. [11] introduced a contention-based medium access control protocol with parallel reservation (COPE-MAC) for underwater acoustic networks. COPE-MAC is designed around parallel reservation and cyber carrier sensing: the parallel reservation method improves the communication efficiency, and the cyber carrier sensing technique detects and avoids collisions by mapping the physical channel, so the overall system performance, in terms of system throughput and channel utilization, is improved. C.-C. Hsu et al. [12] proposed a TDMA (Time Division Multiple Access) based MAC scheduling protocol offering energy savings and throughput improvement compared to Spatial-Temporal MAC; however, this type of protocol cannot provide a distributed MAC scheduling scheme that adapts dynamically to traffic and topology changes. M. T. Isik and O. B. Akan [13] analyzed Three-Dimensional Underwater Localization (3DUL), a 3D localization algorithm for underwater acoustic sensor networks; it is a distributed, iterative, and dynamic solution to the underwater acoustic sensor network localization problem that exploits only three anchor nodes at the surface of the water. K. Kredo et al. [14] analyzed the staggered TDMA underwater MAC protocol, which increases the performance of traditional TDMA by exploiting the propagation delay to schedule overlapping transmissions; this synchronized protocol performs well in the underwater environment.
II. NETWORK ARCHITECTURE

Fig 2: Network Architecture

In the network architecture shown in Fig. 2, the information generated at sensor nodes is transmitted hop-by-hop to the sink in a many-to-one pattern. As packets move closer to the sink, packet collisions increase. Because of the long propagation delay and the low available bandwidth in UWASNs, existing contention-based MAC protocols with a handshake mechanism are not appropriate due to their high reservation cost, and existing schedule-based MAC protocols are not appropriate due to the long slot time. The T-Lohi protocol and the ordered carrier sense multiple access (CSMA) protocol work well in single-hop underwater acoustic networks by discarding the handshake mechanism, but they cannot obtain high performance in multi-hop networks. This paper proposes a modified low-complexity algorithm that produces high performance with multiple anchors in multi-hop networks.
A. Localization Basics
Localization is one of the most important technologies and a difficult task in many applications, especially underwater wireless sensor networks. Localization algorithms are classified into three categories based on sensor node mobility: stationary localization algorithms, mobile localization algorithms, and hybrid localization algorithms. Three kinds of sensor nodes are used in an underwater acoustic sensor network: anchor nodes, unknown nodes, and reference nodes. Unknown nodes are responsible for sensing environmental data; anchor nodes are responsible for localizing the unknown nodes; and reference nodes consist of localized unknown nodes and the initial anchor nodes. Currently, many localization algorithms have been proposed for underwater acoustic sensor networks. Researchers classify localization algorithms into two categories, distributed and centralized, based on where the location of an unknown node is determined. In a distributed localization algorithm, each underwater sensor node senses the unknown node, collects the localization information, and then runs a location estimation algorithm individually. In a centralized localization algorithm, the location of each unknown node is estimated by a base station or a sink node.
B. Packet Scheduling
Packet scheduling in an underwater sensor network can be classified into two types: the collision-free scheme and the collision-tolerant scheme.
Collision-free packet scheduling: Collision-free localization packet scheduling is analyzed for a fully connected network built around the anchor nodes. Each anchor node transmits a packet immediately after receiving the previous anchor's packet, and there exists an optimal ordering sequence which minimizes the localization time. Here, a fusion center is required to know the positions of all anchors; if a packet is lost, the subsequent anchor will not know when to transmit its own packet. If an anchor node does not receive a packet from the previous anchor, it waits for a predefined time (counting from the starting time of the localization process) and then continues the transmission, as shown in Fig. 3 (a timing sketch of this behaviour follows the figure).

Fig 3: Collision free transmission
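As an illustration of this transmit-after-receive rule, the following Python fragment sketches how the transmit times of an anchor chain could be computed. It is one reading of the scheme described above, not the authors' implementation: the anchor positions, sound speed, packet duration, and fallback times are all illustrative assumptions.

# Minimal sketch of collision-free scheduling: each anchor transmits as soon
# as it hears the previous anchor's packet, or at a predefined fallback time
# if that packet is lost. All numeric values are illustrative assumptions.

SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in water
PACKET_TIME = 0.1      # s, duration of one localization packet

def propagation_delay(a, b):
    """Straight-line acoustic propagation delay between two 2D positions."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / SOUND_SPEED

def schedule(anchors, fallback, lost=frozenset()):
    """Return transmit times for an ordered list of anchor positions.

    fallback[i] is the predefined time (from the start of localization)
    at which anchor i transmits if the previous packet never arrives.
    """
    tx_times = [0.0]                      # the first anchor starts the round
    for i in range(1, len(anchors)):
        if i - 1 in lost:                 # previous packet lost: use fallback
            tx_times.append(fallback[i])
        else:                             # transmit right after reception
            arrival = (tx_times[i - 1] + PACKET_TIME
                       + propagation_delay(anchors[i - 1], anchors[i]))
            tx_times.append(arrival)
    return tx_times

anchors = [(0, 0), (600, 0), (600, 600), (0, 600)]
fallback = [2.0 * i for i in range(len(anchors))]
print(schedule(anchors, fallback))            # no losses
print(schedule(anchors, fallback, lost={1}))  # anchor 1's packet lost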

Collision-tolerant packet scheduling: In collision-tolerant packet scheduling, anchor nodes work independently and transmit randomly during the localization period, avoiding any coordination among the anchor nodes. Packets transmitted from different anchors may therefore collide with each other, yet reception can still succeed, as shown in Fig. 4.

Fig 4: Collision tolerant packet transmission

III. LOCATION AIDED ROUTING PROTOCOL


Location-aided routing uses location information to reduce the number of nodes to which a route request is propagated. By limiting the search for a new route to a smaller request zone of the network, the Location-Aided Routing (LAR) protocols achieve a significant reduction in the number of routing messages. In the energy-efficient location-aided routing protocol (EELAR), a wireless base station is used, and the network's circular area centered at the base station is divided into six equal sub-areas. At route discovery, instead of flooding control packets to the whole network area, they are flooded only to the sub-area of the destination mobile node; the base station stores the locations of the mobile nodes in a position table. Simulations using NS-2 show that the EELAR protocol improves the control packet overhead and the delivery ratio. LAR utilizes the location information of mobile nodes with the goal of decreasing routing-related overhead in mobile ad hoc networks; a request-zone sketch follows.
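As a rough illustration of how location information prunes the route-request flood, the sketch below checks whether a node lies inside a rectangular request zone covering the source and the destination's expected zone. This is one common LAR formulation under assumed coordinates, not the exact EELAR six-sub-area scheme.

# Minimal sketch of a LAR-style request zone: the smallest axis-aligned
# rectangle containing the source and the destination's expected circle.
# Only nodes inside this rectangle forward the route request.
# Coordinates and the expected-zone radius are illustrative assumptions.

def in_request_zone(node, source, dest_last_known, radius):
    (sx, sy), (dx, dy) = source, dest_last_known
    xmin, xmax = min(sx, dx - radius), max(sx, dx + radius)
    ymin, ymax = min(sy, dy - radius), max(sy, dy + radius)
    return xmin <= node[0] <= xmax and ymin <= node[1] <= ymax

source = (0.0, 0.0)
dest = (400.0, 300.0)     # destination's last known position
r = 50.0                  # how far it may have moved since then
print(in_request_zone((200.0, 150.0), source, dest, r))   # True: forwards
print(in_request_zone((-100.0, 500.0), source, dest, r))  # False: drops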
IV. LOW COMPLEXITY ALGORITHM
The complexity of the optimal solution (without any heuristic approach) makes it impossible to use when the number of anchors is large. In this work, we propose two heuristic algorithms with a smaller complexity that can be adopted for practical applications. The proposed low-complexity algorithm has been evaluated in terms of network efficiency; multiple anchor nodes are used to find multiple node locations. It improves the throughput and reduces the communication delay.
The suboptimal algorithm is based on a greedy approach. In the initial phase, the waiting times of the transmitting nodes are set to zero, and each node will transmit in the first sub-channel. Once the waiting time of an anchor node is fixed, it is removed from the scheduling task. Based on this waiting time, the collision-risk neighbors of the selected anchor are detected and their corresponding waiting times are modified; a node in sleep or idle mode can be eliminated, so that no collisions will occur in the network. It may happen that two or more anchors have the same minimal waiting time; in this case, we select the one with the lowest index (a sketch of this greedy assignment follows Figure 5). The localization process of the low-complexity algorithm, shown in Figure 5, works as follows: if multiple anchor nodes have been created, the localization process continues and the delay is reduced; otherwise a node is created and the MAC protocol is applied. The process of creating new anchor nodes is continued until the result is displayed.

Fig 5: Flow diagram of localization process
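A minimal sketch of the greedy assignment described above is given below. It is one reading of the procedure, not the authors' code: the conflict lists and the packet duration are illustrative assumptions.

# Minimal sketch of the greedy suboptimal scheduler: repeatedly fix the
# anchor with the smallest tentative waiting time (lowest index breaks
# ties), then push back the waiting times of its collision-risk neighbours
# so that no two conflicting anchors overlap on the same sub-channel.

PACKET_TIME = 1.0  # duration of one localization packet (time units)

def greedy_schedule(n_anchors, conflicts):
    """conflicts[i] = set of anchors whose packets would collide with i's."""
    waiting = {i: 0.0 for i in range(n_anchors)}  # tentative start times
    fixed = {}
    while waiting:
        # pick the anchor with minimal waiting time; lowest index on ties
        i = min(waiting, key=lambda a: (waiting[a], a))
        fixed[i] = waiting.pop(i)
        # delay every still-unscheduled collision-risk neighbour of i
        for j in conflicts[i]:
            if j in waiting:
                waiting[j] = max(waiting[j], fixed[i] + PACKET_TIME)
    return fixed

conflicts = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
print(greedy_schedule(4, conflicts))
# anchors 0 and 3, and likewise 1 and 2, can transmit at the same time
# without colliding, so the whole round fits in two packet slots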

V. SIMULATION RESULT AND DISCUSSIONS


We have implemented the decentralized network coding and low-complexity algorithms using a network simulator. A packet transmission takes exactly one time unit. We assume that a node can either send or receive, and that it cannot send or receive multiple packets at a time. Nodes have a nominal radio range of 250 m. Transmissions are broadcast and are received by all neighboring nodes. The MAC layer is idealized, with perfect collision avoidance. At each time unit, a schedule is created by randomly picking a node and scheduling its transmission if all of its neighbors are idle; this is repeated until no more nodes are eligible to transmit. The simulation area has a size of 1500 m × 1500 m. The number of nodes is 50, of which node 12 is selected as the source and node 25 as the destination. Each node has one information unit to send to all nodes. For the network coding, we use a dynamic field size per node and transport the data using the packet scheduling suggested for collision-free and collision-tolerant packet transmission.
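The per-time-unit schedule construction just described can be sketched as follows. This is a simplified model of the idealized MAC used in the simulation; the neighbour relation (the connectivity implied by the 250 m radio range) is an assumed input.

import random

# Minimal sketch of the idealized MAC: at each time unit, repeatedly pick a
# random node and let it transmit if it and all of its neighbours are idle,
# until no further node is eligible. neighbors[i] is an assumed input.

def build_schedule(nodes, neighbors):
    busy = set()          # nodes already transmitting or receiving this slot
    transmitters = []
    candidates = list(nodes)
    random.shuffle(candidates)
    for n in candidates:
        # a node may transmit only if it and all of its neighbours are idle
        if n not in busy and not (neighbors[n] & busy):
            transmitters.append(n)
            busy.add(n)
            busy |= neighbors[n]   # its neighbours now receive
    return transmitters

nodes = range(6)
neighbors = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(build_schedule(nodes, neighbors))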
All the simulations are performed under these assumptions. The proposed approach reduces the communication delay and improves the network efficiency and throughput, as shown in Figure 8. The comparative performance of the MAC protocols is shown in Figure 6, from which TDMA MAC shows higher performance than the other medium access protocols.
The TDMA MAC protocol reduces the delay between the anchor nodes and the target nodes, as shown in Figure 7 and Figure 8. With a total of 50 nodes transmitting data, the delay decreases roughly linearly as all nodes communicate, and the throughput changes periodically, decreasing linearly over certain periods, as shown in Figure 8.

Fig 6: Comparison Performances of three MAC protocols

Fig 7: Communication Performances of TDMA with delay

Fig 8: Communication performances of TDMA with throughput

VI. CONCLUSIONS
In this work, the problem of scheduling the localization packets of the anchors in an underwater sensor network is formulated. The existing method uses a single anchor node and finds the location of a single node at a time, but this produces delay during communication and needs more time to find the locations of all nodes. To overcome these issues, an efficient MAC protocol is designed to improve the network efficiency; multiple anchor nodes are used to find multiple node locations, which reduces the communication delay by using a low-complexity algorithm and a location-aided routing protocol. The proposed low-complexity algorithm minimizes the duration of the localization task. Moreover, we observed that the system adjusts the multiple anchor nodes dynamically and shows high performance compared with different MAC protocols such as TDMA MAC, BMAC and MAC-MAN (mobile anchor node or sink node).
VII. REFERENCES
[1] Hamid Ramezani and Geert Leus, "Localization Packet Scheduling for Underwater Acoustic Sensor Networks," IEEE Journal on Selected Areas in Communications, Vol. 33, No. 7, July 2015.
[2] Hamid Ramezani, Fatemeh Fazel, Milica Stojanovic and Geert Leus, "Packet Scheduling for Underwater Acoustic Sensor Network Localization," IEEE International Conference on Communications Workshops (ICC), pp. 108-113, 2014.
[3] H. Ramezani, H. J. Rad, and G. Leus, "Target localization and tracking for an isogradient sound speed profile," IEEE Transactions on Signal Processing, Vol. 61, No. 6, pp. 1434-1446, March 2013.
[4] Rahman Zandi, Mahmoud Kamarei, and Hadi Amiri, "Underwater Acoustic Sensor Network Localization Using Received Signals Power," IEEE, 2013.
[5] M. C. Domingo, "Magnetic induction for underwater wireless communication networks," IEEE Transactions on Antennas and Propagation, Vol. 60, No. 6, pp. 2929-2939, June 2012.
[6] B. Gulbahar and O. B. Akan, "A communication theoretical modeling and analysis of underwater magneto-inductive wireless channels," IEEE Transactions on Wireless Communications, Vol. 11, No. 9, pp. 3326-3334, September 2012.
[7] Ameer P M and Lillykutty Jacob, "Localization Using Stochastic Proximity Embedding for Underwater Acoustic Sensor Networks," IEEE, 2012.
[8] Wouter van Kleunen, Nirvana Meratnia, and Paul J. M. Havinga, "MDS-MAC: A Scheduled MAC for Localization, Time-Synchronisation and Communication in Underwater Acoustic Networks," IEEE 15th International Conference on Computational Science and Engineering, 2012.
[9] J. P. Kim, H.-P. Tan, and H.-S. Cho, "Impact of MAC on localization in large-scale seabed sensor networks," in Proc. IEEE AINA, Washington, DC, USA, pp. 391-396, 2011.
[10] P. Nicopolitidis, G. I. Papadimitriou, and A. S. Pomportsis, "Adaptive data broadcasting in underwater wireless networks," IEEE Journal of Oceanic Engineering, Vol. 35, No. 3, pp. 623-634, July 2010.
[11] Z. Peng, Y. Zhu, Z. Zhou, Z. Guo, and J.-H. Cui, "COPE-MAC: A contention-based medium access control protocol with parallel reservation for underwater acoustic networks," in Proc. OCEANS IEEE Sydney, pp. 1-10, 2010.
[12] C.-C. Hsu, K.-F. Lai, C.-F. Chou, and K. C.-J. Lin, "ST-MAC: Spatial-temporal MAC scheduling for underwater sensor networks," in Proc. IEEE INFOCOM, pp. 1827-1835, 2009.
[13] M. T. Isik and O. B. Akan, "A three dimensional localization algorithm for underwater acoustic sensor networks," IEEE Transactions on Wireless Communications, Vol. 8, No. 9, pp. 4457-4463, September 2009.
[14] K. Kredo, P. Djukic, and P. Mohapatra, "STUMP: Exploiting position diversity in the staggered TDMA underwater MAC protocol," in Proc. IEEE INFOCOM, pp. 2961-2965, 2009.


Measurement of Temperature Using Thermocouple and Software Signal Conditioning Using LabVIEW
P.Nandini1, Mr.C.S.ManikandaBabu2,
Student1, Department of ECE(PG)1
Associate Professor(Sl. Grade)2, Department of ECE(PG)2,
Sri Ramakrishna Engineering College, Coimbatore.
nandini.27.9@gmail.com1, manikandababu.shelvaraju@srec.ac.in2
Abstract Temperature is a measure of average kinetic energy of the particles in a sample of matter. The
measurement of temperature using the thermocouple includes the signal conditioning stages of reference
temperature sensor for Cold Junction Compensation, high amplification and linearization. Implementing
the signal conditioning stages in the FPGA based embedded hardware increases the data acquisition
rate. The proposed work includes measuring the temperature using the thermocouple & software signal
conditioning and identifying the errors in the amplification, ADC count and linearization. The potential
or voltage of the thermocouples varies non-linearly with change in temperature. The data acquisition
process is carried out by using NI 9211 Thermocouple module along with carrier NI 9162.
Keywords Cold Junction Compensation, High amplification, Linearization, Software signal
conditioning, ADC count, NI 9211, NI 9162.

I. INTRODUCTION
A thermocouple is a device made of two dissimilar conductors or semiconductors that contact each other at one or more points. A voltage is produced in the thermocouple when the temperature at one of the contact points differs from the temperature at another; this is known as the thermoelectric effect. It is a major type of temperature sensor used for measurement and control, and it converts a temperature gradient into electricity. Based on the Seebeck principle, thermocouples can measure only temperature differences, so they need a known reference temperature to yield absolute readings. The Seebeck effect describes the voltage or electromotive force (EMF) induced by the temperature gradient along the wire. The change in material EMF with respect to a change in temperature is called the Seebeck coefficient or thermoelectric sensitivity; this coefficient is usually a non-linear function of temperature. For small changes in temperature over the length of a conductor, the voltage is approximately linear, as represented by Equation (1), where ΔV is the change in voltage, S is the Seebeck coefficient, and ΔT is the change in temperature:
ΔV = S·ΔT (1)
Thermocouples require some form of temperature reference to compensate for the cold junction. The most common method is to measure the temperature at the reference junction with a direct-reading temperature sensor and then apply this cold-junction temperature measurement to the voltage reading to determine the temperature measured by the thermocouple. This process is called Cold-Junction Compensation (CJC). Because the purpose of CJC is to compensate for the known temperature of the cold junction, another, less common method is to force the junction from the thermocouple metal to copper metal to a known temperature, such as 0 °C, by submersing the junction in an ice bath and then connecting the copper wire from each junction to a voltage measurement device.
Data acquisition [2] is the process of measuring an electrical or physical phenomenon such as voltage, current, temperature, pressure, or sound with a computer. PC-based DAQ systems exploit the processing power, productivity, and display of industry-standard computers, providing a more powerful, flexible, and cost-effective measurement solution. When dealing with factors like high voltages, noisy environments, extremely high or low signals, or simultaneous signal measurement, signal conditioning is the most essential process for an effective data acquisition system. It maximizes the accuracy of a system, allows sensors to operate properly, and guarantees safety.
Static and dynamic temperature measurements [1] were performed earlier, and it was found that a time constant of about 0.01 s would be a good choice for use with the digital filter. That experiment was conducted in order to test the sensitivity of a J-type thermocouple and its dynamic response to a known step input. The sensitivity of the thermocouple was found by plotting its voltage against the temperature of the water it was submerged in, and was 9.8728 mV/°C. The time constant was found by quickly submerging the thermocouple in boiling water; the average time constant was 0.01464 seconds.
II. PROPOSED WORK

Fig.1.Block diagram of temperature measurement and signal conditioning of Thermocouple

Fig. 1 illustrates the process flow for measuring the thermocouple temperature. The hot-junction temperature, where the two dissimilar metals contact each other, is first acquired from the sensor using the DAQ device, and the reference cold-junction temperature is measured. This reference temperature will not be absolute zero degrees Celsius, so Cold Junction Compensation is performed in order to avoid signal conditioning errors. Formulas are used to convert the CJC voltage to a temperature value; the NIST standard sheet provides the coefficient values. The obtained temperature is checked for linearity, and high amplification is applied to get better results.
To determine the temperature at the thermocouple junction, we can start with Equation (2) below, where VMEAS is the voltage measured by the data acquisition device and VTC(TTC − Tref) is the Seebeck voltage created by the difference between TTC (the temperature at the thermocouple junction) and Tref (the temperature at the reference junction):
VMEAS = VTC(TTC − Tref) (2)
NIST thermocouple reference tables are generated as shown in Table 1, with the reference junction held at 0 °C.
Table 1. NIST standard table

We can rewrite Equation (2) as shown in Equation (3), where VTC(TTC) is the voltage measured by the thermocouple assuming a reference junction temperature of 0 °C, and VTC(Tref) is the voltage that would be generated by the same thermocouple at the current reference temperature assuming a reference junction of 0 °C:
VMEAS = VTC(TTC) − VTC(Tref) (3)
Rearranging gives Equation (4), in which the computed voltage of the thermocouple assumes a reference junction of 0 °C:
VTC(TTC) = VMEAS + VTC(Tref) (4)
Therefore, by measuring VMEAS and Tref, and knowing the voltage-to-temperature relationship of the thermocouple, we can determine the temperature at the primary junction of the thermocouple.
There are two techniques for implementing CJC when the reference junction is measured with a direct-reading sensor: hardware compensation and software compensation. A direct-reading sensor has an output that depends on the temperature of the measurement point. Semiconductor sensors, thermistors, or RTDs are commonly used to measure the reference-junction temperature. For example, several National Instruments thermocouple measurement devices include high-accuracy thermistors located near the screw terminals where the thermocouple wires are connected.
With hardware compensation, a variable voltage source is inserted into the circuit to cancel the
influence of the cold-junction temperature. The variable voltage source generates a compensation voltage
according to the ambient temperature that
allows the temperature to be computed assuming a constant value VTC (Tref ) in Equations (3) and
(4). With hardware compensation, we do not need to know the temperature at the data acquisition system
terminals when computing the temperature of the thermocouple. This simplifies the scaling equation.
The major disadvantage of hardware compensation is that each thermocouple type must have a separate
compensation circuit that can add the correct compensation voltage. This disadvantage results in additional
expense in the circuit. Hardware compensation is often less accurate than software compensation.
Alternatively, we can use software for CJC. After a direct-reading sensor measures the reference-junction temperature, software can add the appropriate voltage value to the measured voltage to compensate for the cold-junction temperature. Equation (3) states that the measured voltage, VMEAS, is equal to the difference between the voltages at the hot junction (thermocouple) and the cold junction.
Thermocouple output voltages are highly non-linear; the Seebeck coefficient can vary by a factor of three or more over the operating temperature range of some thermocouples. Therefore, we must either approximate the thermocouple voltage-versus-temperature curve using polynomials or use a look-up table. The polynomials are of the following form, where v is the thermocouple voltage in volts, T is the temperature in degrees Celsius, and a0 through an are coefficients that are specific to each thermocouple type. For voltage-to-temperature conversion:
T = a0 + a1·v + a2·v² + … + an·vⁿ
The temperature-to-voltage conversion during the cold-junction temperature measurement uses a polynomial of the same form, with its own coefficient set:
v = a0 + a1·T + a2·T² + … + an·Tⁿ
Initially, the cold-junction temperature of the thermocouple is measured and converted into the equivalent voltage. The reference-temperature-to-CJC-voltage conversion is given by a rational polynomial of the form
Vcj = V0 + (Tcj − T0)·(p1 + (Tcj − T0)·(p2 + (Tcj − T0)·(p3 + p4·(Tcj − T0)))) / (1 + (Tcj − T0)·(q1 + (Tcj − T0)·(q2 + q3·(Tcj − T0))))
where Tcj is the cold-junction temperature, Vcj is the computed cold-junction voltage, and T0, V0, pi and qi are coefficients. The coefficients are selected based on the thermocouple type using the NIST table.
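Putting the pieces together, software CJC amounts to converting the measured reference temperature to an equivalent voltage, adding it to the measured thermocouple voltage (Equation (4)), and converting the sum back to temperature with the inverse polynomial. The sketch below illustrates this flow; the two coefficient lists are truncated, hypothetical values, and a real design would use the full NIST coefficient sets for the selected thermocouple type.

# Minimal sketch of software cold-junction compensation (Equation (4)):
#   V_TC(T_TC) = V_MEAS + V_TC(T_ref), then invert to get T_TC.
# The coefficient lists are truncated, HYPOTHETICAL values for illustration
# only; real code would use the full NIST tables for the thermocouple type.

T_TO_V = [0.0, 5.04e-2, 3.05e-5]     # mV as a polynomial in deg C (hypothetical)
V_TO_T = [0.0, 1.98e1, -2.0e-1]      # deg C as a polynomial in mV (hypothetical)

def poly(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... using Horner's rule."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def compensated_temperature(v_meas_mv, t_ref_c):
    v_ref = poly(T_TO_V, t_ref_c)        # V_TC(T_ref): the CJC voltage
    v_tc = v_meas_mv + v_ref             # Equation (4)
    return poly(V_TO_T, v_tc)            # voltage-to-temperature conversion

# e.g. 2.30 mV measured with the terminal block at 25 deg C
print(round(compensated_temperature(2.30, 25.0), 2))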
To calculate the CJC voltage and CJC temperature, CJC voltage.vi is used as a sub-VI. The thermocouple type and CJC channel are given as controls. The DAQmx driver has predefined VIs for creating a channel, reading, and clearing a task. The sensor connected to channel 0 of the NI 9211 [3] acquires the input, which is read by DAQmx Read. The value read from the thermocouple is taken as the CJC temperature, and this CJC temperature is converted into a CJC voltage for dynamic analysis.
The reference temperature acquired from the ice bath is converted to the Cold Junction Compensation (CJC) voltage. The array represents the t0, v0, p and q coefficients, given as input to the MathScript node through an index array; the thermocouple types specified in the array are in order. The thermocouple type and reference temperature Tcj are also given as controls. The formula is entered in the MathScript node, and the CJC voltage is obtained as the output in millivolts. The error-in and error-out terminals are provided to bypass execution if any error occurs.
A sub-VI is developed with the thermocouple channel as a control signal; it produces the thermocouple voltage as output. The measured voltage is added to the CJC compensation voltage, and the effective voltage is converted into the equivalent temperature. DAQmx is used for creating the channel, configuring the sampling clock, starting the task, reading values, stopping the task, and clearing the VI. A For loop is used to read 30 values continuously and finally display a single output value: the array elements from the For loop are summed and divided by the number of samples to produce the final thermocouple voltage.
The obtained CJC millivolt value is converted into the thermocouple temperature. Here the control inputs are the thermocouple type, CJC voltage, and mV range, and the temperature in °C is obtained as the output from the MathScript node. The array represents the T0, v0, p and q coefficients, which are obtained from the NIST standard table.

The CJC-compensated voltage is passed through the filter circuit to remove noise signals and is then amplified with a gain of 31.25. The output voltage from the thermocouple is in the range of -80 mV to 80 mV; its offset is shifted to 0-160 mV and then amplified with a gain of 31.25 to reach the voltage range of 0-5 V:
Amplified voltage = 31.25 × CJC-compensated voltage
The ADS1240 ADC is used, with 24-bit resolution and a delta-sigma configuration. The amplified voltage is given to the ADC input, and the 0-5 V range is converted into a count of 0 to 2^24 − 1 = 16,777,215 using the formula below.
ADC count = (input voltage in mV / 5000) × 16,777,216
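The offset, gain, and count conversion above can be checked with a few lines; the gain and offset values follow the text, and the ADC is modelled only by the count formula.

# Minimal sketch of the analog scaling described above: shift the -80..+80 mV
# thermocouple range to 0..160 mV, amplify by 31.25 to 0..5 V, then map the
# result onto the 24-bit ADC count. The converter itself is not modelled.

GAIN = 31.25
OFFSET_MV = 80.0           # shifts -80..+80 mV up to 0..160 mV
FULL_SCALE_MV = 5000.0     # 5 V input span of the ADC
ADC_LEVELS = 2 ** 24       # 16,777,216 for a 24-bit converter

def adc_count(tc_voltage_mv):
    amplified_mv = GAIN * (tc_voltage_mv + OFFSET_MV)
    return int(amplified_mv / FULL_SCALE_MV * ADC_LEVELS)

print(adc_count(-80.0))    # 0          (bottom of the range)
print(adc_count(0.0))      # 8388608    (mid-scale)
print(adc_count(80.0))     # 16777216, which clips to the full-scale count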
Then the mV-to-temperature ADC count is coded. Thermocouple range.vi and mV to temperature.vi are used as sub-VIs. The controls are the thermocouple range, CJC voltage, thermocouple mV, and error-in status code; the indicators are the CJC voltage, ADC count, linearity range, temperature, ADC Vin, linearity region, and thermocouple measuring range. The linearity range is selected from the output mV of the thermocouple range. The string array contains the thermocouple range for the different types of thermocouple, and the measuring linearity region is determined by selecting the accurate range of the thermocouple from the obtained mV.
The mV range for every type of thermocouple is selected, and within a case structure the In Range and Coerce icon checks whether the thermocouple voltage lies within the limits or not. The mV range varies for every type of thermocouple. Based on the upper and lower limits in the In Range icon, the select terminal chooses the appropriate mV range for the given thermocouple type and voltage. An error-checking block is present to indicate if the voltage value is out of range.
III. RESULTS AND DISCUSSION
Static analysis is done for a particular true temperature value; variations in temperature can be obtained by running the VI repeatedly. Stable-temperature applications which require only slow changes can use static analysis, and linearity is the major parameter in static analysis. The physical channel and CJC channel are selected based on the connection of the thermocouple to the NI 9211 hardware. The sampling rate is set to 10, and 30 readings are obtained for a single table input. The thermocouple type and true temperature are given as inputs; after specifying all the inputs, Initiate is clicked. The thermocouple mV obtained is amplified, and the ADC count is calculated. The error percentage is calculated from the difference between the true temperature and the measured temperature. The static analysis of the thermocouple temperature measurement is shown in Fig. 2.

Fig.2. Front panel of Static analysis

The waveforms are plotted for true temperature versus thermocouple voltage, amplified voltage, ADC count, and measured temperature, as shown in Fig. 3. The waveforms imply that the static analysis produces a non-linear variation in temperature; thus non-linearity is obtained as the result of the static analysis.

Fig.3. Waveforms of Static analysis

The front panel of the dynamic analysis, shown in Fig. 4, explains the temperature calculation over a range of temperatures. The input parameters are initialized, and the starting and ending temperatures are specified. The samples are collected, and the process starts when the start temperature is reached, at which point the time-constant counter starts. The time constant is the time taken by the thermocouple to reach 63.2% of the temperature difference between the starting and ending temperatures. The gain is calculated using the formula ΔT/ΔmV.
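The 63.2% time-constant and gain computations can be expressed compactly, as sketched below; the sample arrays are hypothetical stand-ins for data acquired from the VI.

# Minimal sketch of the dynamic-analysis calculations: the time constant is
# the time at which the measured temperature first reaches 63.2% of the step
# between the start and end temperature, and gain = delta T / delta mV.
# The sample arrays are hypothetical stand-ins for acquired data.

def time_constant(times, temps, t_start, t_end):
    threshold = t_start + 0.632 * (t_end - t_start)
    for t, temp in zip(times, temps):
        if temp >= threshold:
            return t
    return None  # the step never reached the 63.2% level

def gain(t_start, t_end, v_start_mv, v_end_mv):
    return (t_end - t_start) / (v_end_mv - v_start_mv)  # deg C per mV

times = [0.0, 0.005, 0.010, 0.015, 0.020]
temps = [25.0, 52.0, 73.0, 82.0, 89.0]           # step from 25 to 100 deg C
print(time_constant(times, temps, 25.0, 100.0))  # 0.010 s
print(gain(25.0, 100.0, 1.28, 5.27))             # deg C per mV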

Fig.4. Front panel of Dynamic analysis

The waveforms for the dynamic analysis are shown in Fig. 5. Time versus measured temperature and measured thermocouple voltage are plotted. The gain is increased in the dynamic analysis.

Fig.5. Waveforms of Dynamic analysis

IV. CONCLUSION
The measurement of temperature using the thermocouple included the signal conditioning stages of a reference temperature sensor (for Cold Junction Compensation), high amplification, and linearization. The thermocouple voltage (mV) is converted into temperature (°C), which can be used in real-time applications with a fast data acquisition rate. The errors in the amplification, ADC count, and linearization are identified and compensated. In the static analysis, non-linearity is obtained for variations in temperature; in the dynamic analysis, the gain is increased. Comparison and evaluation of the performance of software- and hardware-based signal conditioning will be carried out and implemented in an FPGA in the future.
REFERENCES
[1] Hayden S. Hanzen and Joseph Eckroat, "Static and Dynamic Temperature Measurements," National Instruments, 2012.
[2] http://www.ni.com/data-acquisition/usb/
[3] http://www.ni.com/pdf/manuals/373466e.pdf
[4] http://www.ni.com/pdf/manuals/374014a.pdf
[5] www.ni.com


Comparative Analysis of 16:1 Multiplexer and 1:16 Demultiplexer using Different Logic Styles
K.Anitha, PG Scholar, R.Jayachitra
M.E-Embedded System Technologies
Arunai Engineering College
Thiruvannamalai, India, Anithanbj93@gmail.com
Assistant Professor, EEE
Arunai Engineering College, Thiruvannamalai, India
chitraanitha@yahoo.co.in
Abstract: Conventional CMOS is compared with transmission gate logic, pass transistor logic, and two adiabatic logic styles, namely Efficient Charge Recovery Logic (ECRL) and Improved Efficient Charge Recovery Logic (IECRL). A 16:1 multiplexer and a 1:16 demultiplexer using different low power techniques are designed, and the results are compared based on their minimum/maximum power consumption and transistor count. The proposed multiplexer and demultiplexer schematics are simulated using the MICROWIND2 and DSCH2 software.
Index Terms: Transmission gate logic, Pass transistor logic, Efficient Charge Recovery Logic, low power.

I. INTRODUCTION
During recent years, the main and most pressing issue in low power VLSI design has been energy and power dissipation. This is due to the increasing demand for portable systems and the need to limit power consumption in VLSI chips. In conventional CMOS circuits, the basic approaches used for reducing power consumption are reducing the supply voltage, decreasing node capacitances, and minimizing switching activity with efficient charge recovery logic. Adiabatic logic works on the principle of energy recovery and provides a way to reuse the energy stored in the load capacitors, rather than the conventional way of discharging the load capacitors to ground and wasting this energy. Power consumption is the major concern in low power VLSI design technology.
MOTIVATION
A. Need for low power design
The requirement for low power design has caused a large paradigm shift, where energy dissipation has become as essential a consideration as area and performance. Several factors have contributed to this trend. The need for low power devices has been increasing very quickly due to portable devices such as laptops and mobile phones, and battery-operated devices such as calculators and wrist watches; these products place great emphasis on minimizing power in order to maximize battery life. Another motive for low power is associated with high-end products, because the packaging and cooling of such high-performance, high-density, high-power chips are prohibitively expensive. A further consideration in low power design is related to the environment: as microelectronic products become part of everyday life, their energy needs will sharply increase. Therefore, reducing power consumption reduces the heat generated, and so reduces the cost required for extra cooling systems in homes and offices.
B. Multiplexer

Fig 1: Multiplexer
A multiplexer is a device which selects one of many input signals and forwards the selected input to the output. The block diagram of a multiplexer is shown in Figure 1. Multiplexers are mainly used to connect many sources or devices to a single destination or device. A multiplexer is also known as a data selector. Figure 2 shows the function of a multiplexer.

Fig 2: Function of Multiplexer

C. Demultiplexer

Fig 3: Demultiplexer

A demultiplexer is a device which has a single input and many outputs; it is used to connect a single source to multiple destinations. Figure 3 shows the block diagram of a demultiplexer, which performs the reverse operation of a multiplexer. The multiplexer and demultiplexer work together to perform the transmission and reception of data in a communication system, and both play an important role in communication systems. Figure 4 shows the function of a demultiplexer; a minimal behavioural model of both blocks is sketched after the figure.

Fig 4: Function of Demultiplexer
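The behavioural model below pins down the truth-table behaviour of the two blocks; it is only a functional reference, not a description of the DSCH2 schematics compared later in the paper.

# Minimal functional model of a 16:1 multiplexer and a 1:16 demultiplexer,
# used only as a behavioural reference for the circuits designed below.

def mux16(inputs, select):
    """Forward inputs[select] (select in 0..15) to the single output."""
    assert len(inputs) == 16 and 0 <= select < 16
    return inputs[select]

def demux16(data, select):
    """Route data to output line `select`; all other lines stay 0."""
    assert 0 <= select < 16
    return [data if line == select else 0 for line in range(16)]

inputs = [0] * 16
inputs[9] = 1
assert mux16(inputs, 9) == 1     # the selected line appears at the output
assert demux16(1, 9) == inputs   # the demultiplexer is the inverse routing
print(mux16(inputs, 9), demux16(1, 9))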

LOW POWER TECHNIQUES


Conventional CMOS

Fig 5: CMOS switching process

Conventional CMOS designs consume considerable energy during the switching process. The two major sources of power dissipation in digital CMOS circuits are dynamic power and static power. Dynamic power is related to changes of logic state, i.e. circuit switching activity, and includes the power dissipated in charging and discharging capacitances. Figure 5 shows the CMOS switching process.

Fig 6: CMOS NAND gate

During device switching, power dissipation primarily occurs in conventional CMOS circuits as shown in Figure 5. In CMOS logic, half of the power is dissipated in the PMOS network, and during switching events the stored energy is dissipated while the output load capacitor discharges. A CMOS NAND gate, consisting of 2 PMOS and 2 NMOS devices, is shown in Figure 6.
Transmission gate

The CMOS transmission gate (T-gate) is a useful circuit for both analog and digital applications. It acts as a switch that can operate up to VDD and down to VSS, utilizing the parallel connection of an NMOS and a PMOS transistor. When the transmission gate is on, it provides a low-resistance connection between its input and output terminals over the entire input voltage range. The symbol and truth table of the transmission gate are shown in Figure 7. A transmission gate is a kind of MUX structure.

Fig 7: Symbol and Truth Table of Transmission Gate

Efficient Charge Recovery Logic (ECRL)

An ECRL adiabatic logic block consists of two cross-coupled PMOS transistors and two N-functional blocks; both out and its complement are generated. Energy dissipation is reduced to a large extent in ECRL by performing the precharge and evaluation phases simultaneously. ECRL dissipates less energy than other adiabatic logics by eliminating the precharge diodes, using only two PMOS switches, and it provides full swing at the output. The basic structure of ECRL is similar to Differential Cascode Voltage Switch Logic (DCVSL) with differential signaling. Figure 8 shows the ECRL NAND gate. A major disadvantage of the ECRL circuit is the coupling effect: because the two outputs are connected by the PMOS latch, the two complementary outputs can interfere with each other.


Fig 8: ECRL NAND gate

Improved Efficient Charge Recovery Logic (IECRL)

IECRL consists of a pair of cross-coupled PMOS devices and two N-functional blocks. In IECRL, the delay is improved by adding a pair of cross-coupled NMOS devices to the ECRL design; this pair is the main advantage of IECRL and improves the performance of the ECRL logic. The basic structure of IECRL is similar to Modified Differential Cascode Voltage Switch Logic (MDCVSL) with differential signaling. Figure 9 shows the IECRL NAND gate. The performance of IECRL is better than that of ECRL, even though its transistor count is higher.

Fig 9: IECRL NAND gate

DESIGN AND IMPLEMENTATION


Multiplexers and demultiplexers are used in many applications, such as communication systems, many-to-many switches, ALU design, and parallel-to-serial converters. A 16:1 multiplexer and a 1:16 demultiplexer are designed using transmission gate logic, ECRL, and IECRL, which show reduced power dissipation compared to conventional CMOS logic. The proposed circuits and layouts for the combinational circuits have been designed with the Microwind2 tool and the DSCH2 software.
DSCH2 and Microwind2 are user-friendly PC tools for the design and simulation of CMOS integrated circuits. The schematic diagram of each proposed circuit is designed in the DSCH software. Using DSCH2, a Verilog file is generated for the schematic diagram of each logic operation. By compiling this Verilog file in Microwind2, the CMOS layout of the schematic diagram is generated; this layout is simulated in Microwind2 to observe the power dissipation of the circuit.
16:1 Multiplexer

Figures (10), (11), (12) and (13) show the 16:1 multiplexer designed using conventional CMOS logic, transmission gate logic, ECRL, and IECRL, respectively. The circuits using TGL, ECRL, and IECRL are compared with the conventional CMOS based 16:1 multiplexer in terms of power dissipation.


Fig 10: 16:1 Multiplexer using Conventional CMOS logic

Fig 11: 16:1 Multiplexer using TGL

Fig 12: 16:1 Multiplexer using ECRL


Fig 13: 16:1 Multiplexer using IECRL

1:16 Demultiplexer

Fig 14: 1:16 Demultiplexer using Conventional CMOS logic

Fig 15: 1:16 Demultiplexer using TGL



Fig 16: 1:16 Demultiplexer using ECRL

Fig 17: 1:16 Demultiplexer using IECRL

Figures (14), (15), (16) and (17) show the 1:16 demultiplexer designed using conventional CMOS logic, transmission gate logic, ECRL, and IECRL, respectively. The circuits using TGL, ECRL, and IECRL are compared with the conventional CMOS based 1:16 demultiplexer in terms of power dissipation.
Comparative analysis

The simulation results are compared, based on power dissipation and transistor count, against the conventional CMOS logic design.

Table 1: Comparison of 16:1 Multiplexer design using different low power techniques

Table 2: Comparison of 1:16 Demultiplexer design using different low power techniques
CONCLUSION
The proposed combinational circuits primarily focus on lowering the power dissipation. Logic for the 16:1 multiplexer and the 1:16 demultiplexer is designed, and the results indicate that the adiabatic designs have lower power dissipation than the conventional CMOS circuits. The power dissipation of conventional CMOS circuits is minimized through the adiabatic technique, which reduces power by returning the stored energy to the supply during switching. The proposed ECRL and IECRL circuits are compared with conventional CMOS logic and transmission gate logic for the 16:1 multiplexer and 1:16 demultiplexer. It is observed that the adiabatic technique is a good choice for low power applications; from the analysis, IECRL shows significant energy savings compared with conventional CMOS logic, transmission gate logic, and ECRL. The future scope of this work is that the proposed 16:1 multiplexers and 1:16 demultiplexers can be cascaded to construct multiplexers and demultiplexers with more input and output lines.

Design and Implementation of Low Power Floating Point Unit Using Reconfigurable Data Path: A Survey
Lathapriya V #1 and Sivagurunathan P T *2
# II M.E, VLSI Design, M.Kumaraswamy College of Engineering, Tamil Nadu, INDIA
* Assistant Professor, Department of ECE, M.Kumaraswamy College of Engineering, Tamil Nadu, INDIA

Abstract: Floating point arithmetic is widely used in many areas, especially scientific computation and signal processing. The main objectives of this paper are to reduce power consumption, increase the speed of execution, and implement a floating point multiplier using sequential processing on reconfigurable hardware. Floating Point (FP) addition, subtraction, and multiplication are widely used in a large set of scientific and signal processing computations. In addition, the proposed designs are compliant with the IEEE-754 format and handle overflow, underflow, rounding, and various exception conditions. The adder/subtractor and multiplier designs can achieve high accuracy with increased throughput. The approach is to provide high-accuracy reconfigurable adders and multipliers for floating point arithmetic, and to understand how to represent single and double precision floating point in a single architecture using quantum flux circuits for DSP applications.
Keywords: Floating point unit, Delay, High Throughput

I. INTRODUCTION
Floating point addition and multiplication are the most frequent floating point operations. Many scientific problems require floating point arithmetic with a high level of accuracy in their calculations. A floating point representation can simultaneously provide a large range of numbers and a high degree of precision. The IEEE 754 floating point standard is the most common floating point representation used in modern microprocessors.
Efficient use of the chip area and resources of an embedded system poses a great challenge when developing algorithms on embedded platforms for hard real-time applications such as digital signal processing and control systems. As a result, part of a modern microprocessor is often dedicated to hardware for floating point computation. Previously, silicon area constraints limited the complexity of the floating point unit (FPU); advances in integrated circuit fabrication technology have resulted in smaller feature sizes and areas, and it has therefore become possible to implement more sophisticated arithmetic algorithms to achieve higher FPU performance.
Recent advancements in the area of Field Programmable Gate Arrays (FPGAs) have provided many useful techniques and tools for the development of dedicated and reconfigurable hardware employing complex digital circuits at the chip level. Floating point addition and multiplication are the most widely used operations in DSP/math processors, robots, air traffic controllers, and digital computers; because of these growing applications, the main emphasis is on implementing the floating point multiplier effectively, so that it uses less chip area at a higher clock speed.
II. FORMATS
i. Fixed point Format
A value of a fixed-point data type is essentially an integer that is scaled by a specific factor determined by the type. For example, the value 1.23 can be represented as 1230 in a fixed-point data type with a scaling factor of 1/1000, and the value 1230000 can be represented as 1230 with a scaling factor of 1000. Unlike floating-point data types, the scaling factor is the same for all values of the same type and does not change during the entire computation.
ii. Floating point format
One of the ways to represent real numbers in binary is the floating point format.
There are two different interchange formats in the IEEE 754 standard: the binary interchange format and the decimal interchange format. The multiplication of floating point numbers involves a large dynamic range, which is useful in DSP applications.
The advantage of floating-point representation over fixed-point and integer representation is that it
can support a much wider range of values. For example, a fixed-point representation that has seven decimal
digits with two decimal places can represent the numbers 12345.67, 123.45, 1.23 and so on, whereas a
floating-point representation (such as the IEEE 754 decimal32 format) with seven decimal digits could in
addition represent 1.234567, 123456.7, 0.00001234567, 1234567000000000, and so on. The floating-point
format needs slightly more storage (to encode the position of the radix point), so when stored in the same
space, floating-point numbers achieve their greater range at the expense of precision.
The IEEE floating point standard defines both single precision and double precision formats. Multiplication is a core operation in many signal processing computations, and as such the efficient implementation of floating point multipliers is an important concern. This paper presents the FPGA implementation of a double precision floating point multiplier using the Quartus tool. The floating point number representation is based on scientific notation [4]. Scientific notation is just another way to represent very large or very small numbers in a compact form such that they can be easily used for computations.
A floating point number consists of three fields:
1. Sign (S): used to denote the sign of the number, i.e., 0 represents positive numbers and 1 represents negative numbers.
2. Significand or Mantissa (M): the part of a floating point number which represents the magnitude of the number.
3. Exponent (E): the part of the floating point number that represents the number of places that the decimal point (or binary point) is to be moved.
The number system is completely specified by specifying a suitable base b, significand (mantissa) M, and exponent E.
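To illustrate the three fields, the following Python sketch (standard library only) unpacks an IEEE-754 single precision value into its sign, exponent and mantissa bits; the field widths are those of the binary32 format.

import struct

def decode_binary32(x):
    """Split an IEEE-754 single precision float into its three fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]  # raw 32-bit pattern
    sign = bits >> 31               # 1 bit:  0 positive, 1 negative
    exponent = (bits >> 23) & 0xFF  # 8 bits: biased by 127
    mantissa = bits & 0x7FFFFF      # 23 bits: fractional part of significand
    return sign, exponent, mantissa

s, e, m = decode_binary32(-6.25)
# -6.25 = -1.5625 * 2**2, so e = 2 + 127 = 129 and m = 0.5625 * 2**23
print(s, e, m)  # 1 129 4718592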

Figure 1 FPU architecture


III. LITERATURE REVIEW
The following section surveys several FPU architecture designs in order to identify a better solution for low power operation of the LSRDP.
A. DESIGN AND IMPLEMENTATION OF SINGLE PRECISION PIPELINED FLOATING
POINT CO-PROCESSOR
Manisha Sangwan explained that floating point numbers are used in various applications such as medical imaging, radar, telecommunications, etc. The paper deals with the comparison of various arithmetic modules and the implementation of an optimized floating point ALU. A pipelined architecture is used in order to increase the performance, and the design achieves a 1.62 times higher operating frequency. The logic is designed using Verilog HDL. Synthesis is done on Encounter by Cadence after timing and logic simulation. A Carry Look Ahead (CLA) adder saves carry propagation time by generating and propagating the carry simultaneously in consecutive blocks, so the CLA adder is used for faster operation, and the Goldschmidt (GDM) algorithm is used for the division process. Different algorithms are available for multiplication, such as the Booth, modified Booth, Wallace, Baugh-Wooley and Braun multipliers. The issues with multipliers are speed and regular layout, so keeping both parameters in mind the modified Booth algorithm was chosen. It is a powerful algorithm for signed-number multiplication, which
treats both positive and negative numbers uniformly. Ultimately these individual blocks are combined to form a floating point ALU in a pipelined manner, minimizing power and increasing the operating frequency at the same time. These comparative analyses are done on both Cadence and Xilinx.
B. EMBEDDED COMPLEX FLOATING POINT HARDWARE ACCELERATOR
Amin Ghasemazar, Mehran Goli and Ali Afzali-Kusha note that a common solution to deal with the large execution delay and power consumption in modern processors is to use a co-processor that is capable of executing specific types of instructions in parallel with the main processor. In this paper, they present a new pipelined Multiple Input Multiple Output (MIMO) ALU that is accelerated to perform complex floating point operations including addition, subtraction, multiplication and division. The proposed accelerated complex floating point ALU (AALU) has significantly lower delay (in clock cycles) compared to an earlier work that has an Instruction Set Extension (ISE) based architecture. The hardware is implemented with a MIMO data path in a MIPS processor as well as in the NIOS II configurable and extensible processor using ISE. The AALU architecture can be sped up further by utilizing more architecture-level techniques such as parallel processing or pipelining.
C. CORRECTLY ROUNDED ARCHITECTURES FOR FLOATING-POINT MULTI-OPERAND ADDITION AND DOT-PRODUCT COMPUTATION
Yao Tao, Gao Deyuan and Fan Xiaoya present hardware architectures performing correctly rounded Floating-Point (FP) multi-operand addition and dot-product computation, both of which are widely used in various fields such as scientific computing, digital signal processing, and 3D graphics applications. A novel realignment method is proposed to solve two problems that may occur: catastrophic cancellation and multi-sticky bits. Implementation results show that the architectures produce correctly rounded results, whose errors are less than 0.5 ULP (Unit in the Last Place). Only one rounding operation is performed in both the proposed FP multi-operand adder and the dot-product computation unit. The Long-Adder architecture is a data-parallel architecture that has the same significand adder as the long-accumulator architecture. Correct rounding of FADDn and FDPn not only minimizes the maximum error of the result, but also makes the result deterministic, which improves the portability of software. As transistors become cheaper and cheaper, the cost of using hardware to accelerate complex but frequent operations is no longer unaffordable. The architectures can reduce the delay of multi-operand addition and dot-product computation compared with the network architecture.
D. A HIGH SPEED FLOATING POINT DOT PRODUCT UNIT
Akash Kumar Gupta implemented the dot-product unit in two approaches: the conventional parallel approach and the fused approach. In the fused approach the products need not be rounded and only the sum has to be rounded, so the precision is improved. The presented fused dot-product unit decreases the complexity of the hardware implementation, because the fused technique allows shared units, compared to the conventional parallel approach with discrete units. This provides a fused dot-product unit with reduced delay, reduced area and high accuracy, as only one rounding operation is performed. The fused primitives are faster and smaller than the parallel approaches and provide a slightly more accurate result: the fused dot-product unit is more accurate than the conventional parallel dot-product unit in the sense that only one rounding is performed instead of three roundings in the parallel approach.
E. LOW-POWER LEADING-ZERO COUNTING AND ANTICIPATION LOGIC FOR HIGH-SPEED FLOATING POINT UNITS
Giorgos Dimitrakopoulos presents a new leading-zero counter (or detector). New Boolean relations for the bits of the leading-zero count are derived that allow their computation to be performed using standard carry-lookahead techniques. The new circuits can be efficiently implemented either in static or in dynamic logic and require significantly less energy per operation compared to the already known architectures. An efficient technique for handling the error of the leading-zero anticipation logic is also presented. Normalization involves the use of a leading-zero counting (LZC), or detection, unit and a normalization shifter, while in almost all cases leading-zero anticipation (LZA) logic is also employed to speed up the computation. The goal is to clarify which part of the prediction circuit, the LZA logic or the LZC unit, is more critical in terms of energy and delay for the performance of the whole circuit. The adopted approach relies on a hybrid technique where dual-rail dynamic logic is used only in specific parts of the circuit, while the majority of the gates produce single-rail outputs. The derivation of
this technique is based on a simple observation. A novel technique for handling the possible error of the LZA logic was described that imposes the minimum overhead on the normalization shifter without introducing any further limitation.
F. A MULTIPLE-MODE FLOATING-POINT MULTIPLY-ADD FUSED UNIT FOR TRADING ACCURACY WITH POWER CONSUMPTION
Kun-Yi Wu, Chih-Yuan Liang, Kee-Khuan Yu and Shiann-Rong Kuang note the wide use of floating-point (FP) multiply and accumulate operations in multimedia and digital signal processing applications; many modern processors adopt an FP multiply-add fused (MAF) unit to achieve high performance, improve accuracy and reduce power consumption. Since FP arithmetic units usually occupy the major portion of a processor's area and power dissipation, they propose a multiple-mode FP multiply-add fused unit which utilizes iterative multiplication and truncated addition techniques to support seven operating modes with various error levels for low power applications. It can execute either one multiply-accumulate operation with three modes, one multiplication operation with two modes, or one addition operation with two modes. When compared to the traditional IEEE 754 single-precision FP MAF, the proposed unit has 4.5% less area and 23% longer delay to achieve multiple modes, which can sacrifice a little (< 1%) accuracy to save a large amount (> 33%) of power. Implementation results exhibited that the proposed unit can efficiently reduce about 33% and 43% of power consumption with 0.130% and 0.909% accuracy loss, respectively, for MAC operation.
G. UNIFIED ARCHITECTURE FOR DOUBLE/TWO-PARALLEL SINGLE PRECISION FLOATING POINT ADDER
Manish Kumar Jaiswal, Ray C. C. Cheung, M. Balakrishnan and Kolin Paul state that floating point (F.P.) addition is a core operation for a wide range of applications. This brief presents an area-efficient, dynamically configurable, multi-precision architecture for F.P. addition. Key components involved in the F.P. adder architecture, such as the comparator, swap, dynamic shifters, leading-one detector (LOD), mantissa adders/subtractors, and rounding circuit, have been redesigned to efficiently enable resource sharing for both precision operands with minimal multiplexing circuitry. Compared to a standalone DP adder together with two SP adders, the proposed unified architecture can reduce the hardware resources by 35%, with a minor delay overhead. They present an architecture for a floating point adder with on-the-fly dual precision support, with both normal and subnormal support and exceptional case handling. It supports double precision with dual single precision (DPdSP) adder computation.
H. SPLIT-PATH FUSED FLOATING POINT MULTIPLY ACCUMULATE (FPMAC)
Suresh Srinivasan, Ketan Bhudiya and Rajaraman Ramanarayanan explain that the floating point multiply-accumulate (FPMAC) unit is the backbone of modern processors and is a key circuit determining the frequency, power and area of microprocessors. The FPMAC unit is used extensively in contemporary client microprocessors, further proliferated by ISA support for instructions like AVX and SSE, and is also used extensively in server processors employed for engineering and scientific applications. In this work there are three key innovations to create a novel double precision FPMAC with the fewest ever gate stages in the timing-critical path: a) splitting near and far paths based on the exponent difference (d = Exy - Ez = {-2, -1, 0, 1} is the near path and the rest is the far path); b) early injection of the accumulate add for the near path into the Wallace tree, eliminating a 3:2 compressor from the near-path critical logic by exploiting the small alignment shifts in the near path and a sparse Wallace tree for 53-bit mantissa multiplication; c) a combined round and accumulate add that eliminates the completion adder from the multiplier, giving both timing and power benefits. By the premise of splitting, the design consumes less power for each operation, as only the logic required for each case is switching. Splitting the paths also provides tremendous opportunities for clock or power gating the unused portion (nearly 15-20%) of the logic gates purely based on the exponent difference signals, and even in the normal case it may lead to 15-20% fewer switching gates based on the near- or far-path operation. This innovation can significantly help microprocessor designs with fast timing, area and power convergence.
1 GHZ LEADING ZERO ANTICIPATOR USING INDEPENDENT SIGN-BIT DETERMINATION LOGIC
K. T. Lee and K. J. Nowka: Leading Zero Anticipators (LZAs) predict the position of the leading logical one by examining the mantissas of the adder and the addend in parallel with the addition. A Leading Zero Anticipator was fabricated in a 1 GHz PowerPC microprocessor. It initially computes both the positive and negative sum and shift amounts. In parallel, it computes the sign bit to select one in a final muxing stage. This organization enables the LZA to operate at measured frequencies of up to 1.0 GHz. This helps to speed up the normalization process in normalized
floating-point addition or fused multiplication-addition units by generating the proper normalization shift amounts. In conventional designs, either the smaller magnitude operand must be subtracted from the larger, or both a leading-zero anticipator and a leading-one anticipator are implemented and the adder sign bit selects the normalization shift amount. These solutions require either a magnitude compare or additional delay and load due to the multiplexer that depends on the adder sign bit. Furthermore, taking the sign bit from the adder generally requires a long wire, which produces extra signal delay and degrades signal integrity. It is thus desirable that the sign bit be determined independently without too much area and power overhead.
I. DELAY-OPTIMIZED IMPLEMENTATION OF IEEE FLOATING-POINT ADDITION
P. M. Seidel and G. Even examine the dual-path FP-addition algorithm, in which the common criterion for partitioning the computation into two paths has been the exponent difference. The exponent difference criterion is defined as follows: the near path is defined for small exponent differences (i.e., -1, 0, 1), and the far path is defined for the remaining cases. The adopted AMD algorithm is the fastest among all considered FP adder algorithms in the logic level analysis. The FP-adder implementation corresponding to the SUN patent uses a special path selection condition that simplifies the near path: the near path deals only with effective subtractions, and no rounding is required. The delay of the FP-adder algorithm is estimated using the Logical Effort model, and the delay-optimal implementation of the FP addition algorithm is found by considering optimized gate sizing and driver insertion. For comparisons, the delay estimations from this model are scaled to units of FO4 inverter delays (inverter delay with a fanout of four). The algorithm is a two-stage pipeline partitioned into two parallel paths called the R-path and the N-path. A parallel-prefix adder is used to compute the sum and the incremented sum of the significands. The FP-adder design achieves a low latency by combining various optimization techniques such as: a nonstandard separation into two paths, a simple rounding algorithm, unification of rounding cases for addition and subtraction, sign-magnitude computation of a difference based on one's complement subtraction, compound adders, and fast circuits for approximate counting of leading zeros from borrow-save representation.
IV. COMPARISON TABLE
[Table: Floating point arithmetic delay]
[Table: Floating point arithmetic power (mW)]

V. CONCLUSION
Floating point calculations consume more power and occupy a larger area due to their high dynamic range. Using fused floating point techniques, the area and power have been reduced; for that reason the fused FMA and FAS concepts are used in the radix-2 FFT butterfly calculation. Based on the data flow analysis, the proposed fused floating-point adder can be split into three pipeline stages, which increases the throughput. The proposed floating point arithmetic unit can be achieved by adopting the intermediate gating threshold voltage method.
VI. REFERENCES
[1] Jongwook Sohn and Earl E. Swartzlander, Jr., "A Fused Floating-Point Three-Term Adder," IEEE Transactions, 2014.
[2] Alexandre F. Tenca, "Multi-Operand Floating-Point Addition," IEEE International Symposium, 2009.
[3] Anant G. Kulkarni, Dr. Manoj Jha and Dr. M. F. Qureshi, "Design and Simulation of Eight Point FFT Using VHDL," IJISET, Vol. 1, Issue 3, 2014.
[4] Anant G. Kulkarni, Dr. M. F. Qureshi and Dr. Manoj Jha, "Discrete Fourier Transform: Approach To Signal Processing," IEEE, Vol. 3, Issue 10, 2013.
[5] Anant G. Kulkarni and Sudha Nair, "Design and Implementation of Frequency Analyser using VHDL," International Journal of Electronics Engineering, Volume 1, Number 2, pp. 265-268, July-December 2009.
[6] Bhaskar J., "A VHDL Primer," Pearson Prentice Hall Publication, Third Edition, 2007.
[7] Chih-Yuan Liang, Kee-Khuan Yu and Shiann-Rong Kuang, "Multiple-Mode Floating-Point Multiply-Add Fused Unit For Trading Accuracy With Power Consumption," IEEE, pp. 429-435, 2012.
[8] Claudio Brunelli, Fabio Garzia and Jari Nurmi, "A coarse-grain reconfigurable architecture for multimedia applications featuring subword computation capabilities," J Real-Time Image Proc, 2008.
[9] Daumas, M. and Matula, D. W., "Design of a Fast Validated Dot Product Operation," 11th Symposium on Computer Arithmetic, Proceedings, pp. 62-69, 29 Jun-2 Jul 1993.
[10] David Elam and Cesar Iovescu, "A Block Floating Point Implementation for an N-Point FFT on the TMS320C55x DSP," Application Report SPRA948, pp. 1-11, Sep. 2003.


A Survey on Han-Carlson Adder with Efficient Adders

Kaarthik K, PG Scholar, and Dr. C. Vivek, Associate Professor,
Dept. of Electronics and Communication Engineering,
M. Kumarasamy College of Engineering, Karur, Tamilnadu.
kaarthikmkce@gmail.com, vivekc.phd@gmail.com
Abstract— A regular CSLA uses dual Ripple Carry Adders to perform the addition operation. The modified CSLA (M-CSLA) uses a BEC circuit instead of one RCA, which reduces the area further, so that the total gate count is reduced. From the architecture of the Han-Carlson adder it is observed that there is a possibility of reducing the delays further in the partial addition components. In this research, we modify the CSLA with Han-Carlson adders to reduce the propagation delay between gates. Our proposed adders are tree-structure based and are preferred to speed up binary additions. This work estimates that the performance of the proposed design will be better in terms of logic and route delay. The experimental results will show that the performance of the HC-based parallel prefix adder is faster and more area efficient compared to the conventional modified CSLA.
Keywords— Carry Select Adder (CSLA), Carry Look-Ahead Adder (CLA), Ripple Carry Adder (RCA), Han Carlson (HC), Binary to Excess Code (BEC)

I. INTRODUCTION
Addition is a fundamental operation for any digital system, digital signal processing (DSP) system or control system. A fast and accurate operation of a digital system is greatly influenced by the performance of its resident adders. Adders are also a very important component in digital systems because of their extensive use in basic digital operations such as subtraction, multiplication and division. Hence, improving the performance of the digital adder would greatly advance the execution of binary operations inside a circuit containing such blocks. The performance of a digital circuit block is gauged by analyzing its power dissipation, layout area and operating speed.
The Carry Select Adder (CSA) provides a compromise between the small area but longer delay of the Ripple Carry Adder (RCA) and the large area with short delay of the Carry Look-Ahead Adder (CLA) [1]. In mobile electronics, reducing area and power consumption are key factors in increasing portability and battery life. Even in servers and desktop computers, power consumption is a major design constraint. The design of area- and power-efficient high-speed data path logic systems is one of the most substantial areas of research in VLSI system design. In digital adders, the speed of addition is limited by the time required to propagate a carry through the adder. The sum for each bit position in an elementary adder is generated sequentially only after the previous bit position has been summed and a carry propagated into the next position [3]. Among different types of adders, the CSA is intermediate regarding speed and area [2].
VLSI integer adders find applications in Arithmetic and Logic Units (ALUs), microprocessors and memory addressing units. The speed of the adder frequently decides the minimum clock time in a microprocessor. The motivation for a parallel prefix adder is that it is primarily fast in comparison with ripple carry adders. Parallel Prefix Adders (PPA) are a family of adders derived from the common carry look-ahead adders.
These adders are well suited for wide word lengths. PPA circuits use a tree network to reduce the latency to O(log2 n), where n represents the number of bits. A three stage process is generally involved in the construction of a PPA. The first step involves the creation of the generate and propagate signals for all the input operand bits.

The second step involves the generation of the carry signals. In parallel prefix adders, the dot operator and the semi-dot operator are introduced. The dot operator is defined by equation (4) and the semi-dot operator by equation (5):

(g, p) • (g', p') = (g + p·g', p·p') (4)
(g, p) ◦ (g', p') = g + p·g' (5)

In the above equations, the operator is applied on two pairs of bits (g, p) and (g', p'). These bits represent the generate and propagate signals used by the addition. The output of the dot operator is a new pair of bits which is once again combined using a dot operator or semi-dot operator with another pair of bits. This procedural use of the dot operator and semi-dot operator creates a prefix tree network which ultimately ends in the generation of all carry signals. In the final step, the sum bits of the adder are generated from the propagate signals of the operand bits and the preceding stage carry bit using an XOR gate. The semi-dot operator is used as the last computation node in each column of the prefix graph structures, where it is essential to compute only the generate term, whose value is the carry generated from that bit to the succeeding bit.
II. REQUIREMENTS
A. Design
We propose a high speed Carry Select Adder obtained by replacing the Ripple Carry Adder with a parallel prefix adder. Adders are the basic building blocks in digital integrated circuit designs. The Ripple Carry Adder (RCA) is usually preferred for the addition of two multi-bit numbers, as RCAs offer the fastest design time among all types of adders. However, RCAs are the slowest adders, as every full adder must wait until the carry is generated by the previous full adder. On the other hand, Carry Look-Ahead (CLA) adders are faster, but they require more area. The Carry Select Adder is a compromise between the RCA and CLA in terms of area and delay. The CSLA is designed by using dual RCAs; due to this arrangement both the area and the delay are of concern, and it is clear that there is scope for reducing the delay in such an arrangement. In this research, we have implemented the CSLA with parallel prefix adders.
Parallel prefix adders are tree based structures and are preferred to speed up binary additions. This process estimates the performance of the proposed design in terms of logic and route delay. The experimental results show that the performance of the CSLA with a parallel prefix adder is fast and area efficient compared to the conventional modified CSLA.
B. Functionality
In addition to the final deadline, each section of the project was given separate deadlines to ensure
each design group was making sufficient progress throughout the semester. The first deadline required us
to turn in the ADD, OR, PASS A, 8:1 MUX functions, as well as an arbitrary function that we chose on
our own, and the second design review required the ADD, SUB, SHIFT, ALU, in/out connectivity, and
registers working. Since we had already finished those parts previously, the final report does not cover those
individual components, but it does require that our ALU be able to complete each function and demonstrate
its correctness.
The total list of functions that our ALU must complete is listed in Table 1.
Table 1. Required ALU functions

C. Metric
A single full-adder is capable of adding two one-bit numbers and an input carry. In order to add binary numbers of more than one bit, additional full-adders must be employed. An n-bit parallel adder can be constructed using a number of full adder circuits connected in parallel.
The parallel adder is a ripple carry adder in which the carry output of each full-adder stage is connected to the carry input of the next higher-order stage. Therefore the sum and carry outputs of any stage cannot be produced until the input carry occurs; this leads to a time delay in the addition process, known as the carry propagation delay.
The ripple carry adder is constructed by cascading full adder (FA) blocks in series. One full adder is responsible for the addition of two binary digits at any stage of the ripple carry. The carry-out of one stage is fed directly to the carry-in of the next stage. Even though this is a simple adder and can be used to add numbers of unrestricted bit length, it is not very efficient when large bit numbers are used. One of the most serious drawbacks of this adder is that the delay increases linearly with the bit length.
One method of speeding up the process by eliminating the inter-stage carry delay is called carry look-ahead addition. This method utilizes logic gates to look at the lower order bits of the augend and addend to see if a higher-order carry is to be generated.
One half-adder or one full-adder is fine for adding two binary numbers with a length of one bit each, but when the computer needs to add two binary numbers of a longer length, there are several ways of doing this. The fastest way is to use the parallel binary adder, which uses one half-adder along with one or more full adders.
The total number of adders needed depends on the length of the larger of the two binary numbers to be added. For example, if we need to add the binary numbers 1011 and 1, we would need four adders in total, because the length of the larger number is four. Keeping this in mind, here is a demonstration of how a four-bit parallel binary adder works, using 1101 and 1011 as the two numbers to add:

Fig 1. Ripple-Carry Adder

When the computer adds, it adds from right to left, just as we do by hand. Fig. 1 shows, step by step, what happens in the parallel binary adder.
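The same step-by-step behaviour can be modelled directly in software; the Python sketch below (illustrative, not the project's transistor-level design) chains one full-adder evaluation per bit position, with the carry-out of each stage feeding the next.

def full_adder(a, b, cin):
    """One full adder: returns (sum bit, carry out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first), the carry rippling left."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 1101 + 1011 from the example above (written LSB first)
s, c = ripple_carry_add([1, 0, 1, 1], [1, 1, 0, 1])
print(s[::-1], c)   # [1, 0, 0, 0] 1  ->  11000, i.e. 13 + 11 = 24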
D. Specification
For each of the three metrics, it has been specifically stated how it will be evaluated. Active power is measured for one computation per cycle at the highest frequency achievable by the design for a specific series of inputs, which PICo will supply at the second design review. The delay is the worst case access delay. The area is the sum of the widths of the transistors used in the design.
In addition, our design is assumed to interface with pads that connect to the outside world, with all inputs valid 0.5 FO4 delays before the rising edge of the clock and held for 1 FO4 delay after the rising edge of the clock. Therefore, we assumed that the clock is an ideal signal driven through a static CMOS buffer.
III. DESIGN
During our design process, we encountered several design decisions that we had to make to reduce our overall metric. We made our original designs for each sub-circuit when they were due for the design reviews; however, we did not take metric decisions into account then.
When it came time to reduce the overall delay, power, and cost of our design, we went to each individual sub-circuit and evaluated how we could reduce the specifications for that sub-circuit, in an effort
to reduce the overall metric. For many parts of our processor we chose to use new designs, such as new adders, reduced the number of gates where we could, and sized our transistors proportionally in order to be the most efficient.
Images of our designs can be found attached as appendices to this document.
A. Adder

The most important decision in our digital signal processor was choosing the adder, as its delay would be significantly greater than that of any other function, and so it would be the determining factor in our maximum speed. In all cases, the adder was converted to an adder/subtractor by adding an inverter, a 2:1 multiplexer, and a select line. This line was set to 0 for add and 1 for subtract. This line also served as the carry-in for the entire adder, which gave a 2's complement version of B when subtract was selected.
1) Ripple Carry Adder: Originally we used a ripple-carry adder, which gave us a large delay of 11 ns but was simple to implement by chaining together full adders. The full adders were designed using the mirror adder pattern, as in Fig. 3, rather than the full static CMOS design. The large delay was a result of each full adder having to wait for the carry bit to be calculated by the previous full adder; as a result, a 16-bit number would take a long time to fully calculate. Because of the large delay, we searched for faster adders to increase speed, as in Fig. 2.

Fig 2. Ripple-Carry Adder

Fig 3. Mirror Full Adder

2) Carry Look-Ahead Adder: Carry look-ahead adders were designed to reduce the overall computational time by using propagate and generate signals for each bit position, based on whether a carry is propagated through to the next bit.
The carry look-ahead adder (CLA) solves the carry delay problem by calculating the carry signals in advance, based on the input signals. It is based on the fact that a carry signal will be generated in two cases: (1) when both bits ai and bi are 1, or (2) when one of the two bits is 1 and the carry-in is 1.
Computing the carries in advance, instead of adding from the least significant bit and propagating the carry bit ahead, reduces the overall delay, but the CLA is more complex than the ripple-carry adder and also uses more transistors overall. Due to its complexity and size, as well as the possibility of a Manchester adder (see Fig. 4), we decided not to use a carry look-ahead adder.

Fig 4. Carry Look Ahead Adder
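In software, the two carry-generation cases read as follows; this Python sketch of the principle (not the project's circuit) computes every carry from the generate and propagate terms instead of rippling it.

def cla_carries(a_bits, b_bits, cin=0):
    """Carry-lookahead: c[i+1] = g[i] OR (p[i] AND c[i]), computed up front."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # case (1): both bits are 1
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # case (2): exactly one bit is 1
    c = [cin]
    for i in range(len(a_bits)):
        # expands to g[i] | p[i]g[i-1] | ... -- two gate levels in hardware
        c.append(g[i] | (p[i] & c[i]))
    return p, c

p, c = cla_carries([1, 0, 1, 1], [1, 1, 0, 1])   # 1101 + 1011, LSB first
print([pi ^ ci for pi, ci in zip(p, c)][::-1], c[-1])   # [1, 0, 0, 0] 1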

3) Carry Select Adder: In the carry select adder there are two full adder chains, each of which takes a different preset carry-in bit. The sums and carry-out bits that are produced are then selected by the carry-out from the previous stage.
One of the earliest logarithmic time adder designs is based on the conditional-sum addition algorithm. In this scheme, blocks of bits are added in two ways: assuming an incoming carry of 0 or of 1, with the correct outputs selected later as the block's true carry-in becomes known. This is one of the speed-up techniques used to reduce the latency of carry propagation seen in the ripple-carry adder.
Basically, the adder will compute the sum both with and without a carry from the previous stage, and will then use a multiplexer to determine which sum is the correct one, depending on whether or not there was a carry. The basic design of the carry select adder is shown in Fig. 5.

Fig 5. Carry Select Adder
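A behavioural Python sketch of the selection idea (the block width and helper names are invented for illustration): each block is added twice, once per assumed carry-in, and the arriving carry selects the precomputed result, just as the multiplexer does in hardware.

def add_block(a, b, cin, width):
    """Add two small integers with a given carry-in; return (sum, carry out)."""
    total = a + b + cin
    return total & ((1 << width) - 1), total >> width

def carry_select_add(a, b, n=16, block=4):
    """Carry-select: per block, precompute both answers, then mux on the carry."""
    result, carry = 0, 0
    for i in range(0, n, block):
        mask = (1 << block) - 1
        a_blk, b_blk = (a >> i) & mask, (b >> i) & mask
        sum0, c0 = add_block(a_blk, b_blk, 0, block)    # assume carry-in = 0
        sum1, c1 = add_block(a_blk, b_blk, 1, block)    # assume carry-in = 1
        s, carry = (sum1, c1) if carry else (sum0, c0)  # the 2:1 mux stage
        result |= s << i
    return result, carry

print(hex(carry_select_add(0xBEEF, 0x1234)[0]))  # 0xd123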

The carry select adder was more efficient than the ripple-carry adder, with a delay of 8 ns. It was harder to implement, but since we had already finished our 2:1 multiplexers, it wasn't too difficult. The major problem with the carry select adder was its size, well over double that of the ripple carry adder. This was too large a price to pay in cost to get only a 3 ns reduction in delay.
4) Manchester Carry Adder: The Manchester carry chain is a variation of the carry look-ahead adder that uses shared transistor logic to lower the overall transistor count. The Manchester carry adder consists of cascaded Manchester carry chains, broken down in order to reduce the number of series propagate transistors, resulting in a great reduction in delay as the number of transistors in series is reduced. As with the carry look-ahead adder, it was too complex to be used in this design; it is shown in Fig. 6.

Fig 6. Manchester Carry Adder


B. Transmission Gates
Some sub-circuits of our design also make use of transmission and pass gates, depending on the situation. We decided to use pass gates in certain instances because they reduced the overall number of transistors required, which meant less power and cost. There was also a slight reduction in delay when we used pass gates in most cases, though we could only use these gates when the next input was buffered, in order to restore the signal to its full strength; a transmission gate is shown in Fig. 7.

Fig 7. Transmission Gate

C. 8:1 Multiplexer
For our 8:1 multiplexer, we originally used four 2:1 multiplexers combined with a 4:1 multiplexer, which worked well. Our 4:1 multiplexers were created out of two 2:1 multiplexers, combined with another 2:1 multiplexer to select between all 4 inputs, as shown in Fig. 8.

Fig 8. 4:1 Multiplexer

However, we realized that we could implement the same function using two 4:1 multiplexers combined with a 2:1 multiplexer to meet the requirement as well, and this also offered a better delay with fewer transistors.
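The same decomposition can be expressed functionally; the Python sketch below (names invented for illustration) builds the 8:1 selection from two 4:1 selections and one final 2:1 selection, mirroring the gate-level structure.

def mux2(a, b, sel):
    """2:1 multiplexer: sel = 0 passes a, sel = 1 passes b."""
    return b if sel else a

def mux4(inputs, s1, s0):
    """4:1 multiplexer built from three 2:1 multiplexers."""
    return mux2(mux2(inputs[0], inputs[1], s0),
                mux2(inputs[2], inputs[3], s0), s1)

def mux8(inputs, s2, s1, s0):
    """8:1 multiplexer: two 4:1 muxes selected by the high select bit."""
    return mux2(mux4(inputs[0:4], s1, s0), mux4(inputs[4:8], s1, s0), s2)

data = list('ABCDEFGH')
print(mux8(data, 1, 1, 0))   # select lines 110 -> index 6 -> 'G'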
D. Parallel Prefix Adder
The Parallel Prefix Adder (PPA) is very useful in today's world of technology because of its implementation in Very Large Scale Integration (VLSI) chips. VLSI chips rely heavily on fast and reliable arithmetic computation, which can be provided by PPAs. There are many types of PPA, such as Brent-Kung, Kogge-Stone, Ladner-Fischer, Han-Carlson and Knowles.
For the purpose of this research, only the Han-Carlson adder will be compared against the other types of adders.

Fig 9: PPA Structured Diagram

The design file has to be analyzed, synthesized and compiled before it can be simulated (the structure is shown in Fig. 9). Simulation results in this project come in the form of a Register Transfer Level (RTL) diagram, a functional vector waveform outcome and a classic timing analysis. The RTL design can be obtained by using the RTL viewer based on the netlist viewer. Functional vector waveform outcomes are produced by selecting random bit values and adding them up to produce the sum and carry bits. The timing analysis can be obtained by viewing the
summary of the classic timing analysis after compiling the whole project. The simulations are done by using these functions.
Simulation analysis is prepared by viewing the results from the simulated VHDL source code. Analysis of the simulation is performed once the desired simulation outcome is obtained. The simulation results show the classic timing analysis, the RTL schematic diagram and the vector waveform outcome of the simulated designs. The analysis of the PPA is conducted by viewing the time delay produced by the Han-Carlson adders in performing bit additions, as displayed in Fig. 11.
Finally, the PPA comparison is made once all six simulation results are analyzed. The Han-Carlson adder and the various efficient adders are compared at this stage within each bit-width category. The comparisons are based on computational speed, also known as propagation delay, and on area (cost); the adder structure is shown in Fig. 10.

Fig 10. Han Carlson adder

Fig 11. System design Flow chart

IV. SURVEY RESULTS


We tested all of our components by simulation in ModelSim 6.2; the area, power and delay of the various adders were compared and the outputs obtained in the form of charts. In the obtained output, the power distribution of the Han-Carlson adder is higher when compared with all the other adders. The delay is also a major term to be considered in the execution of results: if the delay is higher, the processing speed will be perceived as low, and even an otherwise efficient adder with a higher delay will not be chosen by anybody for their work. Delay is thus a major factor, and in the Han-Carlson adder it is lower when compared with the other types of adders, which is a major advantage for our adder.
The area of an IC chip is decided by the number of gates used to obtain the expected result. In the Han-Carlson adder the number of gates used is smaller, and if the number of gates is reduced, the chip area is also reduced. A special technique called the folding transformation technique can be used, which is highly useful for obtaining the result with a smaller number of gates, whereas for the other types of adders this technique is not applicable. So area-wise also the Han-Carlson adder is highly efficient and produces the results without speculation; see the performance analysis in Fig. 12.

Fig 12. Performance Analysis of Adders

V. ACKNOWLEDGEMENTS
Our thanks to M. Kumarasamy College of Engineering for offering us the opportunity to do this wonderful project, and to Dr. V. Kavitha for her guidance in doing the survey.
VI. REFERENCES
[1] Bender, Ryan (April 17, 2000). A Simulator for Digital Circuits. Massachusetts Institute of Technology. Retrieved on April 28, 2008 from http://mitpress.mit.edu/sicp/full_text/sicp/book/node64.html
[2] Alan, Elay (2007). Hierarchical Schematics and Simulation Within Cadence. University of California at Berkeley. Retrieved on April 28, 2008 from http://bwrc.eecs.berkeley.edu/Classes/ICDesign/EE141_f07/CadenceLabs/hierarchy/hierarchy.htm
[3] Lin, Charles (2003). Half Adders, Full Adders, Ripple Carry Adders. University of Maryland. Retrieved April 28, 2008 from http://www.cs.umd.edu/class/sum2003/cmsc311/Note/Comb/adder.html
[4] Mlynek, D. Design of VLSI Systems. EPFL. Retrieved on April 28, 2008 from http://lsiwww.epfl.ch/LSI2001/teaching/webcourse/ch06/ch06.html
[5] Lie, Sean (2002). Carry Select Adder Details. Retrieved April 28, 2008 from http://www.slie.ca/projects/6.371/webpage/cryseladderdetails.html

A Data Flow Test Suite Minimization and Prioritization in Regression Mode

M. Vanathi 1 and J. Jayanthi 2
Computer Science and Engineering,
Sona College of Technology
Sathya.vanathi11@gmail.com
Abstract— The objective of regression testing is to verify that the new changes (additions/deletions) incorporated are implemented correctly. The next step is optimizing the number of test cases, making the suite effective and efficient. Here dataflow testing in regression mode is considered for verification and validation. The huge number of test cases must be minimized and prioritized based on the proposed algorithm. The program to be tested must be represented using a control flow graph, and the du and dc paths must be identified. The recurrent definitions and usages must be identified, and the test cases will be prioritized in the proposed system using the JUnit tool.
Keywords— Test Minimization, Test Prioritization, Control Flow Graph, Data Flow Techniques.

I. INTRODUCTION
Software testing is an essential phase in software engineering which is used to detect errors as early as possible, to ensure that changes to existing software do not break the software, and to determine the quality of the software product. The main myth is that good programmers write code without bugs.
The phases in a tester's mental life can be categorised into 5 phases. They are:
Phase 0 (Debugging Oriented)
Phase 1 (Demonstration Oriented)
Phase 2 (Destruction Oriented)
Phase 3 (Evaluation Oriented)
Phase 4 (Prevention Oriented)
A test case is a step by step description of the actions that we perform during testing. A test suite is a collection of test cases; it is the way in which we group test cases, for example based on a module-wise structure.
The basic idea of test prioritization is to order test cases based on some criteria; test cases can also be prioritized based on another set of criteria such as impact of failure, cost to fix, etc. Test cases can be prioritized using methods like the cosine methodology, greedy algorithms, prioritization metrics and efficiency measures.
The goals of prioritization are:
To increase the rate of fault detection.
To increase the coverage of code.
To increase the confidence in the reliability of the system.
The test case minimization technique is used to find and remove redundant test cases from the test suite. Test cases become redundant when their input/output relation is no longer meaningful due to changes in the program, or when their structure is no longer in conformity with software coverage. It is yet another method for selecting tests for regression testing.
Regression testing is used to verify that changes work correctly and meet the specified requirements. It is executed after defect fixes in the software or its environment. Whenever defects are fixed, a set of test cases needs to be rerun/retested to verify whether the defect fixes affect anything else. Rerunning or retesting all test cases in the test suite may require an unacceptable amount of time; minimizing the test cases overcomes this difficulty.
Data flow testing is based on selecting paths through the program's control flow in order to explore sequences of events related to the status of data objects or variables. It focuses on the points at which a variable receives a value and the points at which that value is used. It annotates each link with symbols like d, k, u, c, p, or sequences of symbols like dd, du, ddd, etc., that denote sequences of operations. The data object states and usages are Defined (d), Killed (k) and Used (u).
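As a toy illustration of these annotations (the event encoding is invented, not from the paper), the Python sketch below scans a straight-line sequence of define/use/kill events for one variable and lists the definition-use pairs.

# Toy du-pair extraction for a single variable on one path.
# Each event is 'd' (defined), 'u' (used) or 'k' (killed); a du-pair is a
# definition together with a use it reaches before being redefined or killed.
def du_pairs(events):
    pairs, last_def = [], None
    for i, e in enumerate(events):
        if e == 'd':
            last_def = i            # a new definition shadows the previous one
        elif e == 'u' and last_def is not None:
            pairs.append((last_def, i))
        elif e == 'k':
            last_def = None         # killed: nothing left to pair with
    return pairs

# d at 0 reaches the uses at 1 and 2; the redefinition at 3 reaches the use at 4
print(du_pairs(['d', 'u', 'u', 'd', 'u', 'k']))   # [(0, 1), (0, 2), (3, 4)]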
The JUnit framework encourages developers to write test cases and then to rerun all of these test cases whenever they modify their code. As the size of the code grows, rerun-all regression testing strategies can become excessively expensive.
II. RELATED WORK
This section consists of a survey of test case prioritization and test case minimization techniques.
2.1 Minimize Test Case generation using Control Structure Method
Swathi J. N. and Sangeetha S. note that software testing is a critical activity in any industrial strength software development process. As the software grows in size, its complexity increases and testing becomes more difficult. Since generating test cases manually is more error-prone, they propose automatic generation of test cases using the control structure method. This tool aims to achieve coverage of a given structural code, including statement coverage, decision coverage, path coverage and branch coverage analysis, and it also helps developers and testers to measure the effectiveness of the generated test cases using a metric called the Test Effectiveness Ratio.
To ensure that all statements have been executed at least once, the cyclomatic complexity provides the required number of tests. The complexity can be computed in any one of three ways:
The number of regions of the flow graph G corresponds to the cyclomatic complexity V(G).
V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
V(G) = P + 1, where P is the number of predicate nodes.
The Test Effectiveness Ratio (TER) is calculated by dividing the number of statements exercised by the test case by the total number of statements in the source code.
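Both measures are direct to compute; the Python sketch below (the counts are made up for illustration) evaluates V(G) from the edge and node counts and the TER from coverage counts.

def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected flow graph."""
    return edges - nodes + 2

def test_effectiveness_ratio(statements_exercised, total_statements):
    """TER = statements exercised by the test case / total statements."""
    return statements_exercised / total_statements

# Illustrative flow graph: 9 edges, 7 nodes -> V(G) = 4, so at least 4 tests
print(cyclomatic_complexity(edges=9, nodes=7))   # 4
print(test_effectiveness_ratio(45, 60))          # 0.75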
2.2 Generating Minimal Test Cases for Regression Testing
Sapna P. G. and Arunkumar Balakrishnan propose an approach that generates test cases from the specification (UML diagrams); a set of terminals is given as input to a Steiner tree algorithm, and the minimal test cases to check the functionality are obtained. In the Steiner minimal tree problem, the vertices are divided into two parts: terminal and non-terminal. The changed nodes are defined as terminal nodes to ensure their inclusion in the test set. A lot of work is available on generating regression test cases with both white box and black box strategies. A minimal set of test cases is generated as an indicator of the effectiveness of the change. Initial results show that the method is applicable for quick testing to ensure that the basic functionality works correctly.
The UML activity diagram is developed using two types of nodes: action and control nodes. The action nodes consist of Activity, CallBehaviorAction, SendSignal and AcceptEvent. The control nodes consist of Initial, Final, Flow-Final, Decision, Merge, Fork and Join. This diagram is used to generate test cases.
The activity diagram is converted into a control flow graph. The weight for each edge is calculated using a measure based on the incoming and outgoing dependencies, where (ni)in is the number of incoming dependencies of node ni and (nj)out is the number of outgoing dependencies of node nj.
2.3 Minimizing Test Cases By Generating Requirement Using Mathematical Equation
Mamta Santosh and Rajvir Singh propose the idea of listing the testing requirements for a test suite and finding the set of test cases that satisfies the testing requirements by means of a requirement matrix, a prioritization process over the control flow graph, fault-exposing-potential values and test case-requirement matrix formation. Test suite minimization techniques are used to remove redundant and obsolete test cases from the test suite. The test case minimization approach can be considered an optimization problem. A genetic algorithm can be used for the minimization, as it is robust and provides optimized results. For applying a genetic algorithm, the test case-requirement relationships need to be transformed into a mathematical model expressed in the form of functions and parameters that optimize the model; the result is the minimized set of test cases.
Here, a heuristic-based approach selects test cases based upon the essential and 1-to-1 redundant strategies.
The fitness function is calculated by summing up the test case-requirement matrices. Execution time has been applied for optimization. This approach reduced the test suite size and covered all requirements.
2.4 Test Suite Minimization by Greedy Algorithm
Sriraman Tallam and Neelam Gupta observed that a test suite, once developed, is reused and updated frequently as the software evolves, which makes some of its test cases redundant. Due to the resource and time constraints on re-executing large test suites, it is important to develop techniques to minimize the available test suites by removing redundant test cases. Test suite minimization is NP-complete. They measured the extent of test suite reduction obtained by their algorithm and by prior heuristics for test suite minimization; their algorithm selected test suites of the same or smaller size than those selected by prior heuristics and had comparable time performance.
The concept analysis for test suite minimization consists of object implications, attribute implications and owner reduction, all of which preserve the optimality of the solution for the context table. A greedy heuristic is then applied if the context table is not empty.
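A minimal sketch of the greedy step, assuming the usual set-cover formulation (each test mapped to the set of requirements it covers); this is the classical greedy heuristic rather than Tallam and Gupta's exact procedure.

def greedy_minimize(suite):
    """suite: dict test_name -> set of requirements covered by that test."""
    uncovered = set().union(*suite.values())
    selected = []
    while uncovered:
        # pick the test that covers the most still-uncovered requirements
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        if not suite[best] & uncovered:
            break                  # the remaining tests add nothing new
        selected.append(best)
        uncovered -= suite[best]
    return selected

suite = {'t1': {'r1', 'r2'}, 't2': {'r2', 'r3', 'r4'}, 't3': {'r1', 'r4'}}
print(greedy_minimize(suite))   # ['t2', 't1'] covers r1..r4; t3 is redundant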
2.5 Five Techniques of Regression Testing
Ghinwa Baradhi and Nashat Mansour compared five regression testing techniques: slicing, incremental, firewall, genetic and simulated annealing algorithms. The comparison of these techniques is based on qualitative and quantitative criteria: execution time, number of selected retests, type of testing, type of approach, level of testing, precision, inclusiveness and user parameters. The comparison shows that the different algorithms suit different requirements of regression testing.
The results tend to indicate that the incremental algorithm has more favourable properties than the other four algorithms. The assessment is based on the following considerations:
Medium size modules are more important for the assessment, since they are more realistic.
Since the test cases were manually developed, it was not possible to run experiments that were statistically highly sound, especially for execution time.
The execution time assessment is based on comparing the algorithms with each other.
To choose the minimum number of test cases and to perform fast regression testing, the selection should be done genetically.
III. PROPOSED SYSTEM
Test suite minimization and prioritization is the process of avoiding redundant and unwanted test cases. Until now, minimization of test cases has been done by applying several techniques like genetic algorithms, Steiner tree algorithms, decision trees, etc.; in the proposed system, test case minimization is done by data flow testing techniques in regression mode.

Fig: 3.1 System architecture of proposed system

Fig 3.1 describes the system architecture of the proposed system. The input is given as code and the output is in the form of generated dataflow test cases. The processes of the data flow testing techniques are used here as well.
First, the Java code is parsed in two steps: split the input into tokens and then find the hierarchical structure of the input. Then the code is converted into a control flow graph (CFG).
Test input data generation is done along two kinds of paths: executable and infeasible paths. Test cases are generated by graph traversal. The test cases are written in JAVA and tested as JUnit test cases.
JUnit testing and prioritization
JUnit test cases are Java classes that contain one or more test methods and are grouped into test suites.

Fig 3.2 JUnit test suite structure

JUnit test classes that contain one or more test methods can be run individually, or a collection of JUnit test cases can be run as a unit. JUnit is designed so that tests run as sequences of tests invoked through test suites that invoke test classes.
The JUnit framework allowed us to achieve the following four objectives:
Treat each test case class as a single test case.
Reorder the test case classes to produce a prioritized order.
Treat individual test methods within test cases for prioritization.
Reorder the test methods to produce a prioritized order.


All test cases are run and executed individually with the JUnit test runner in the desired order. After the execution, the results can be viewed using Clover views. There are four Clover views:
Coverage Explorer
Test Run Explorer
Clover Dashboard
Test Contribution
The reports are also obtained using the Clover views. Using these, the data flow testing techniques are tested.
IV. CONCLUSION
The proposed tool has been designed and developed in Java. The purpose of this tool is to assist the tester in testing the code in an efficient manner. The test cases are generated and tested, and a report about the test cases is provided. In this tool all tests are exercised. The JUnit tool helps us to see the report in a graphical way, which is helpful for easy understanding.
V. REFERENCES
[1] Adam Kurpisz and Samuli Leppanen, "On the sum of the squares hierarchy with knapsack covering inequality," arXiv:1407.1746v1 [cs.DS], 2014.
[2] Amrita Jyoti, Yogesh Kumar and D. Pandey, "Recent priority algorithm in regression testing," International Journal of Information Technology and Knowledge Management, Volume 2, No. 2, pp. 391-394, 2010.
[3] Alex Kinneer and Hyunsook Do, "Empirical studies of test case prioritization in a JUnit testing environment," no. 2, pp. 1-12, 2007.
[4] Bang Ye Wu, "A simple approximation algorithm for the internal Steiner minimum tree," CoRR, abs/1307.3822, 2013.
[5] Baradhi, G. and Mansour, N., "A Comparative Study of Five Regression Testing Algorithms," Proceedings of the IEEE International Symposium on Software Testing and Analysis, pp. 143-152, 1997.
[6] Jyoti and Kamna Solanki, "A Comparative Study of Five Regression Testing Techniques: A Survey," IISN 2277-8616, IJSTR, Issue 8, Aug 2014.
[7] R. Beena and S. Sarala, "Code Coverage based Test Case Selection and Prioritization," International Journal of Software Engineering & Applications, Vol. 4, No. 6, Nov 2013.
[8] Sapna P. G. and Hrushikesha Mohanty, "Prioritization of scenarios based on UML activity diagrams," 1st International Conference on Computational Intelligence, pp. 271-276, IEEE Computer Society, 2009.

Maximization of Co-ordination Gain in Uplink System Using the Dynamic Cell Clustering Algorithm

Ms. M. Keerthana and Ms. K. Kavitha
Final Year, Electronics and Communication Engineering,
M. Kumarasamy College of Engineering, Karur.
Keerthimahi95@gmail.com, replymekavi@gmail.com
Abstract— A dynamic clustering algorithm changes the composition of clusters periodically. We consider two well-known dynamic clustering algorithms in this paper: the full-search clustering algorithm (FSCA) and the greedy-search clustering algorithm (GSCA). The coordination gain is considered as a new parameter to be maximized in the coordinated communication system. Simulation results show that the MAX-CG clustering algorithm improves the average user rate and the edge user rate, and the IW clustering algorithm improves the edge users' rate while reducing the complexity to only half of that of the existing algorithm.

I. INTRODUCTION
In the uplink communication system, the base station (BS) receives low-intensity signals from cell-edge users and signals from users at the edge of adjacent cells simultaneously. In the downlink communication system, the user receives signals from the BS in its own cell and signals from the BSs of the adjacent cells with similar power. The received signals from other cells act as interference and cause performance degradation. In this case, both the capacity and the data rate are reduced by the inter-cell interference (ICI) [1]. In the past, the fractional frequency reuse (FFR) scheme, which is a simple ICI reduction technique, was used to achieve the required performance in interference-limited environments. Since the FFR scheme increases the performance at the cell edge but degrades the overall cell throughput, a coordinated system was proposed to overcome this weakness. Techniques for ICI mitigation and performance enhancement by sharing the full channel state information (CSI) and transmit data were also studied in [2]. However, these techniques are difficult to implement in a practical communication system because of the large amount of information to be shared between BSs. Instead of the impractical scenario that requires full CSI and transmit-data sharing across the whole network, a clustering algorithm has been applied to practical communication systems by configuring the cluster so that full CSI is shared only between a limited number of cells. Clustering algorithms are classified into two types: static and dynamic. A dynamic clustering algorithm to avoid ICI was developed whose objective is that the overall network suffers minimum performance degradation while also improving the performance of the cell-edge user. A clustering algorithm for sum-rate maximization using greedy search was proposed to improve the sum rate without guaranteeing the cell-edge user's data rate. However, when the size of the whole network is large, the complexity of the algorithm increases rapidly; if the complexity is large, the processing speed cannot adapt to the changes of the channels [3]. The purpose of coordinated communication is to minimize the inter-cell interference to the cell-edge users and to improve their performance. When the clusters are not properly configured, the performance of the cell-edge users is further degraded. Even though the existing algorithm improves the overall data rate, it does not consider the goal of coordinated communication: improving the performance of cell-edge users.
EXISTING SYSTEM
In the uplink communication system, the base station (BS) receives low-intensity signals from cell-edge users and signals from users at the edge of adjacent cells simultaneously. In the downlink communication system, the user receives signals from the BS in its own cell and signals from the BSs of the adjacent cells with similar power. In the past, the fractional frequency reuse (FFR) scheme, which is a simple ICI reduction
technique, has been used to achieve the required performance in interference-limited environments. In the static clustering algorithm, the construction of clusters is fixed, so clusters are composed of a limited number of adjacent cells. The advantage of the static clustering algorithm is that BSs share the CSI and received data with other BSs belonging to the same cluster without a central controller [4]. Hence, the complexity is low and the clustering algorithm causes no CSI sharing between clusters.
PROPOSED SYSTEM
The proposed dynamic clustering algorithms improve the cell-edge user data rate and reduce the complexity at the same time. They have lower complexity than the existing algorithms and improve the cell-edge users' performance without increasing the complexity. First, the maximization of coordination gain (MAX-CG) clustering algorithm is proposed. The MAX-CG clustering algorithm maximizes the coordination gain between the coordinated communication system and the single-cell communication system. Its performance is close to that of the optimal clustering algorithm, the full-search clustering algorithm. This effect mainly comes from the use of the coordination gain, which highlights the benefit of coordinated communication [5]. The coordination gain is increased only if the benefit of the BS coordination is large. Second, we develop the interference weight (IW) clustering algorithm, which reduces complexity and improves both the average user rate and the 5% edge user rate.
MAX-CG ALGORITHM
We define a new parameter called the coordination gain of data rate, which is the rate difference between coordinated communication and single-cell communication. The most important objective of BS coordination is to improve the cell-edge users' performance. As explained in Section III, the static clustering algorithm and the GSCA do not guarantee the cell-edge users' performance: the former does not reflect the change of the channel environment, while the latter considers only sum-rate maximization. Since the GSCA tries to maximize the sum rate, the cluster is composed of the cells with many cell-center users, so the cluster that is formed last has low coordination gain. To combat these problems, we use the coordination gain to make clusters that maximize the performance gain [6]. The coordination gain is the rate difference between C(C) and C(NC), where C(C) is the sum rate of users in cluster G and C(NC) is the sum rate of users in cluster G on the basis that those users do not coordinate.
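As a rough illustration of this selection rule (not the paper's implementation), the sketch below picks, among candidate clusters, the one whose coordination gain C(C) - C(NC) is largest; the two rate functions are placeholders for the computations defined in the paper.

```java
import java.util.*;

public class MaxCgSketch {
    // Placeholder rate models; the paper computes these from the shared CSI.
    static double sumRateCoordinated(Set<Integer> cluster)   { return 1.5 * cluster.size(); }
    static double sumRateUncoordinated(Set<Integer> cluster) { return 1.0 * cluster.size(); }

    // Coordination gain G = C(C) - C(NC).
    static double coordinationGain(Set<Integer> cluster) {
        return sumRateCoordinated(cluster) - sumRateUncoordinated(cluster);
    }

    public static void main(String[] args) {
        List<Set<Integer>> candidates = List.of(Set.of(1, 2), Set.of(1, 3), Set.of(2, 3, 4));
        Set<Integer> best = Collections.max(candidates,
                Comparator.comparingDouble(MaxCgSketch::coordinationGain));
        System.out.println("Cluster with maximum coordination gain: " + best);
    }
}
```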
IW CLUSTERING ALGORITHM
In this section, we propose an algorithm to supplement the MAX-CG clustering algorithm. In comparison with the GSCA, the MAX-CG clustering algorithm improves both the sum rate and the weak users' rate, and it approaches the performance of the FSCA. Nevertheless, the complexity of the MAX-CG clustering algorithm is higher than that of the GSCA because it calculates the sum rate of all combinations. Therefore, we propose the interference weight (IW) clustering algorithm to reduce the complexity of clustering without performance loss. We consider the pairwise relationship between two cells for clustering. In this approach, we do not need to form all combinations of BSs, so we can reduce the complexity of the clustering algorithm. Even though we narrow the scope down to the two-cell case, the coordination gain is difficult to compute without further simplification. Fortunately, in the high-SINR regime the optimization problem can be simplified [7]. Note that the high-SINR regime for user b in cell i assumes SINR >> 1, which implies that the transmit power of the user is larger than 0 (turning off the transmit power never leads to the high-SINR regime). Since we assume that each user uses the same transmit power P, the high-SINR assumption holds in this system for practical coordinated systems.
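The pairwise idea can be sketched as follows: compute an interference weight w(i,j) for every cell pair and greedily group the pair with the largest weight, so no enumeration over all BS combinations is needed. The weight values below are invented for illustration; the paper derives them from the high-SINR simplification.

```java
public class IwClusterSketch {
    public static void main(String[] args) {
        int n = 4;
        double[][] w = {          // assumed symmetric pairwise interference weights
                {0, 5, 1, 2},
                {5, 0, 3, 1},
                {1, 3, 0, 4},
                {2, 1, 4, 0}};
        boolean[] used = new boolean[n];
        while (true) {
            int bi = -1, bj = -1;
            double best = -1;
            for (int i = 0; i < n; i++)          // scan only unpaired cells
                for (int j = i + 1; j < n; j++)
                    if (!used[i] && !used[j] && w[i][j] > best) {
                        best = w[i][j]; bi = i; bj = j;
                    }
            if (bi < 0) break;                   // no pair left to form
            used[bi] = used[bj] = true;
            System.out.println("Cluster {" + bi + ", " + bj + "}, weight = " + best);
        }
    }
}
```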
SEECH PROTOCOL
Energy efficiency is important for wireless sensor networks in smart spaces and extreme environments. Clustering is the basis of many communication protocols and plays a key role in energy saving in hierarchical wireless sensor networks. In most dynamic clustering algorithms, a cluster head simultaneously serves as a relay node to transmit its cluster's data packets to the data sink; as a result, each node takes on the cluster-head role in as many relay processes during the network lifetime. In our view, this is inefficient from an energy-efficiency perspective because, in most cases, a node is, due to its position in the network, comparatively better suited to work as a cluster head and/or a relay. A new distributed, scalable, energy-efficient algorithm for this purpose, SEECH, is described in [8].
RESULT
We proposed novel dynamic cell clustering algorithms for maximizing the coordination gain in the uplink coordinated system. The MAX-CG clustering algorithm maximizes the coordination gain and improves the average user rate. Simulation and analytical results show that the complexity of the MAX-CG clustering algorithm is much less than that of the FSCA. The IW clustering algorithm reduces the complexity of the MAX-CG clustering algorithm and uses the IW to supplement the disadvantage of the GSCA. Thus, the IW clustering algorithm improves the performance and simplifies the clustering. The IW-weak clustering algorithm improves the 5% edge user rate without loss of the average user rate. Therefore, the proposed clustering algorithms are simple and efficient, and suitable for practical coordinated systems.
REFERENCES
[1] S. H. Ali and V. C. M. Leung, "Dynamic frequency allocation in fractional frequency reused OFDMA networks," IEEE Trans. Wireless Commun., vol. 8, no. 8, Aug. 2009.
[2] H. Zhang and H. Dai, "Cochannel interference mitigation and cooperative processing in downlink multicell multiuser MIMO networks," EURASIP J. Wireless Commun. Netw., July 2004.
[3] A. Papadogiannis, D. Gesbert and E. Hardouin, "A dynamic clustering approach in wireless networks with multi-cell cooperative processing," in Proc. IEEE ICC, Apr. 2010.
[4] S. Kaviani and W. A. Krzymien, "Sum rate maximization of MIMO broadcast channels with coordination of base stations," in Proc. IEEE WCNC, 2008.
[5] J. Zhang, R. Chen, J. G. Andrews, A. Ghosh and R. W. Heath, "Networked MIMO with clustered linear precoding," IEEE Trans. Wireless Commun., vol. 8, no. 4, Apr. 2009.
[6] A. Papadogiannis, D. Gesbert and E. Hardouin, "A dynamic clustering approach in wireless networks with multi-cell cooperative processing," in Proc. IEEE ICC, 2008.
[7] B. O. Lee, H. W. Je, I. Sohn, O. Shin and K. B. Lee, "Interference aware decentralized precoding for multicell MIMO TDD systems," in Proc. IEEE GLOBECOM, 2008.
[8] "SEECH: Secure and Energy Efficient Centralized Routing Protocol for Hierarchical WSN," International Journal of Engineering Research and Development, e-ISSN 2278-067X, p-ISSN 2278-800X, www.ijerd.com, vol. 2, Aug. 2012.


SRF Theory Based Reduced Rating Dynamic Voltage Restorer for Voltage Sag Mitigation
S. Sunandha, M. Lincy Luciana and K. Sarasvathi
M.E., EEE, M.Kumarasamy College of Engineering, Karur, India.

Abstract - With the restructuring of power systems and the shifting trend towards distributed and dispersed generation, the issue of power quality is taking on newer dimensions. The present research identifies the prominent concerns in this area and recommends measures that can enhance the quality of power. Voltage sag is a common and undesirable power quality phenomenon in distribution systems that puts sensitive loads at risk. An effective solution to mitigate this phenomenon is to use dynamic voltage restorers (DVRs) and consequently protect sensitive loads. In addition, different voltage injection schemes for DVRs are explored to inject minimum energy for a given apparent power of the DVR. The performance of the proposed DVR is examined with different control strategies: conventional Proportional-Integral (PI) control and Synchronous Reference Frame (SRF) theory based PI control. The proposed reduced-rating DVR with SRF theory based PI controller offers an economic solution for voltage sag mitigation. Simulations are carried out in MATLAB/Simulink to analyze the proposed method.
Index Terms - Dynamic Voltage Restorer, Voltage Sag, PI Controller, SRF theory based PI Controller.

I. INTRODUCTION
Power quality and reliability in the distribution system have been attracting increasing interest in modern times and have become an area of concern for modern industrial and commercial applications. The introduction of sophisticated manufacturing systems, industrial drives and precision electronic equipment demands greater quality and reliability of power supply in distribution networks than ever before. Power quality problems encompass a wide range of phenomena: voltage sag/swell, flicker, harmonic distortion, impulse transients and interruptions are a prominent few [1]. These disturbances are responsible for problems ranging from malfunctions or errors to plant shutdown and loss of manufacturing capability. Among the power quality problems, voltage sag is the most frequently occurring one, and is therefore the most important power quality problem in the power distribution system.
Voltage sag, or voltage dip, is defined by IEEE 1159 as a decrease in the RMS voltage level to 10%-90% of nominal at the power frequency, for durations of half a cycle to one minute. The IEC (International Electro-technical Commission) terminology for voltage sag is "dip": a sudden reduction of the voltage at a point in the electrical system, followed by voltage recovery after a short period, from a cycle to a few seconds. Voltage sags are usually associated with system faults, but they can also be generated by energization of heavy loads or starting of large motors, which can draw 6 to 10 times their full-load current during starting. Two types of voltage sag can occur on any transmission line: balanced and unbalanced, also known as symmetrical and asymmetrical voltage sag respectively. Most faults that occur on power systems are not balanced three-phase faults but unbalanced faults. In the analysis of a power system under fault conditions, it is necessary to distinguish between the types of fault to ensure the best possible results in the analysis.

- Unsymmetrical voltage sag: single-phase voltage sag, two-phase voltage sag
- Symmetrical voltage sag: three-phase voltage sag
Since the quality of power is strictly related to the economic consequences associated with the equipment, it should be evaluated from the customer's point of view. The need for solutions dedicated to single customers with highly sensitive loads is great, since a fast response of voltage regulation is required. Further, the characteristics of voltage sags need to be synthesized in both domestic and industrial distributions.
In order to meet these challenges, a device is needed that is capable of injecting minimum energy so as to regulate the load voltage at its predetermined value. Traditional mitigating methods include tap-changing transformers and uninterrupted power supplies; these methods are costly and not fast enough to mitigate voltage sag, so the use of custom power devices is considered the most proficient method. The term custom power pertains to the use of power electronic controllers in a distribution system. There are three custom power devices: the distribution STATCOM (D-STATCOM), the Dynamic Voltage Restorer (DVR) and the Unified Power Quality Conditioner (UPQC). The DVR is one of the prominent devices for compensating the power quality problems associated with voltage sags/swells [2]-[5]. It can provide an effective solution to mitigate voltage sag by establishing the appropriate predetermined voltage level required by the loads [6]. It has recently been used as the active solution for voltage sag mitigation in modern industrial applications [7]. A DVR could maintain the load voltage unchanged during any kind of fault if its energy storage capability were infinite; because of the energy limitations of the DVR's storage, it is necessary to minimize the energy injected by the DVR [8]-[10].
In this paper, a Dynamic Voltage Restorer (DVR) is designed and implemented with the proposed compensation strategy, which is capable of compensating power quality problems associated with voltage sags/swells with minimum energy while maintaining a prescribed level of supply voltage. The simulation of the proposed DVR is accomplished using MATLAB/Simulink, and the performance of the proposed DVR is tested for different supply disturbances under various operating conditions.
II. DYNAMIC VOLTAGE RESTORER
The schematic diagram of a DVR-connected distribution system is shown in Fig. 1. The DVR is a solid-state inverter that injects a series voltage with controlled magnitude and phase angle to restore the quality of the voltage to the pre-specified value and avoid load tripping. The function of the DVR is to inject the missing voltage in order to protect the load voltage from any disturbance due to sudden distortion of the source voltage. The DC side of the DVR is connected to an energy source or an energy storage device, while its AC side is connected to the distribution feeder by a three-phase interfacing injection transformer. Since the DVR is a series-connected device, the source current is the same as the load current. The DVR injects voltage in series with the line such that the load voltage is maintained at the sinusoidal nominal value [11]. It is normally installed in a distribution system between the supply and the critical load feeder at the point of common coupling (PCC).

Fig. 1. Basic Circuit of DVR Connected System

A. OPERATING PRINCIPLE of DVR


The schematic diagram of a self-supported DVR is shown in Fig. 2. Three-phase source voltages (Vsa, Vsb and Vsc) are connected to the three-phase critical load through series impedances (Za, Zb, Zc) and an injection transformer in each phase. The terminal voltages (Vta, Vtb, Vtc) carry power quality problems, and the DVR injects compensating voltages (VCa, VCb, VCc) through an injection transformer to obtain
undistorted and balanced load voltages (VLa, VLb, VLc). The DVR is implemented using a three-leg voltage source inverter with IGBTs along with a DC capacitor (Cdc). A ripple filter (Lr, Cr) is used to filter the switching ripple in the injected voltage. The considered load, sensitive to power quality problems, is a three-phase balanced lagging-power-factor load. A self-supported DVR does not need any active power during steady state because the voltage injected is in quadrature with the feeder current.

Fig.2. Schematic Diagram of Capacitor Supported DVR

Fig.3. Phasor Diagram for Voltage Sag

The DVR operation for the compensation of sag in the supply voltage is shown in Fig. 3. Before the sag, the load voltage and current are represented as VL(presag) and Isa, as shown in Fig. 3. After the sag event, the terminal voltage (Vta) becomes lower in magnitude and lags the presag voltage by some angle. The DVR injects a compensating voltage (VCa) to maintain the load voltage (VL) at the rated magnitude. VCa has two components, VCad and VCaq. The voltage in phase with the current (VCad) is required to regulate the DC bus voltage and to meet the power loss in the VSI of the DVR and the injection transformer [5]. The voltage in quadrature with the current (VCaq) is required to regulate the load voltage (VL) at constant magnitude.
III. CONTROL OF DVR
The efficiency of the DVR depends on the performance of the control technique involved in the switching of the inverter. Hence, different control techniques, namely the PI controller and the SRF theory based PI controller, are used here. Based on a comparison of the performance of these controllers in controlling the switching of the PWM inverter, the optimum controller that improves the performance of the DVR is suggested.
A. PI CONTROLLER
A PI controller's output signal is directly proportional to a linear combination of the measured actuating error signal and its integral over time. A proportional-integral (PI) controller, shown in Fig. 4, drives the plant to be controlled with a weighted sum of the error (the difference between the actual sensed output and the desired set-point) and the integral of that value. An advantage of a proportional-plus-integral controller is that its integral term causes the steady-state
error to be zero for a step input. The PI controller's input is the actuating signal, which is the difference between Vref and Vin. The output of the controller block is an angle that introduces additional phase lag/lead in the three-phase voltages. The output of the error detector is Vref - Vin, where Vref equals 1 p.u. voltage and Vin is the voltage in p.u. at the load terminals.
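A minimal discrete-time sketch of this PI action is shown below; the gains and time step are illustrative only, not the tuned values used in the simulation.

```java
public class PiController {
    private final double kp, ki, dt;
    private double integral = 0.0;

    public PiController(double kp, double ki, double dt) {
        this.kp = kp; this.ki = ki; this.dt = dt;
    }

    // Returns the controller output (e.g., a phase-angle command)
    // for the actuating error Vref - Vin.
    public double update(double vRef, double vIn) {
        double error = vRef - vIn;          // actuating error signal
        integral += error * dt;             // accumulate the integral term
        return kp * error + ki * integral;  // weighted sum of P and I terms
    }

    public static void main(String[] args) {
        PiController pi = new PiController(0.5, 0.35, 1e-4);
        System.out.println(pi.update(1.0, 0.9)); // 1 p.u. reference, sagged input
    }
}
```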

Fig.4. PI Controller

B. SRF THEORY BASED PI CONTROLLER

Fig.5. SRF Control Algorithm for DVR

Fig. 5 shows the SRF control algorithm, which is able to detect different types of power quality problems without error and introduces the appropriate voltage component to instantly correct any deformity in the terminal voltage, keeping the load voltage balanced and constant at the nominal value [12], [13]. This is a closed-loop system that needs the DC link voltage of the DVR and the amplitude of the load voltage to produce the direct-axis and quadrature-axis voltages. When the load voltage falls 10% below its reference value, an error signal is generated by the DVR controller to produce the PWM waveform for the six-pulse IGBT device.
SRF theory is used for the control of the DVR. The voltages at the PCC are transformed to the rotating reference frame using the abc-dq0 conversion as:
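The equation itself is not reproduced in this copy; the standard form of the abc-dq0 (Park) transformation it refers to, with theta the PLL angle, is:

\[
\begin{bmatrix} v_d \\ v_q \\ v_0 \end{bmatrix}
= \frac{2}{3}
\begin{bmatrix}
\cos\theta & \cos\!\left(\theta - \tfrac{2\pi}{3}\right) & \cos\!\left(\theta + \tfrac{2\pi}{3}\right) \\
-\sin\theta & -\sin\!\left(\theta - \tfrac{2\pi}{3}\right) & -\sin\!\left(\theta + \tfrac{2\pi}{3}\right) \\
\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2}
\end{bmatrix}
\begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}
\]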

The harmonics and the oscillatory components are excluded using low-pass filters. The components of the voltages in the d-axis and q-axis are:
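The decomposition intended here is presumably the usual separation of each axis voltage into its DC (average) and oscillatory parts, where the barred terms are the DC components retained by the low-pass filters:

\[
v_d = \bar{v}_d + \tilde{v}_d, \qquad v_q = \bar{v}_q + \tilde{v}_q
\]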

The SRF-based method is also used to acquire the direct-axis and quadrature-axis components of the load voltage. The load voltages in the three phases are converted to the d-q-0 frame using Park's transformation. A three-phase PLL is used to synchronize these signals with the terminal voltages. The d-q components are passed through low-pass filters to extract the DC components Vd and Vq. In order to maintain the DC bus voltage of the DVR, the error between the reference DC bus voltage and the sensed DC bus voltage is given to a PI controller whose output is considered the loss component of voltage and is added to the DC component of Vd to generate Vd*. The reference d-axis load voltage is therefore:
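The missing equation is presumably the sum of the filtered DC value and the PI-derived loss component:

\[
v_d^* = \bar{v}_d + v_{loss}
\]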

Similarly, a second PI controller is used to regulate the amplitude of the load voltage. The amplitude of the load voltage at the point of common coupling is calculated from the AC voltages (VLa, VLb, VLc) as:
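The amplitude computation referred to is, in its standard form:

\[
V_L = \sqrt{\tfrac{2}{3}\left(v_{La}^2 + v_{Lb}^2 + v_{Lc}^2\right)}
\]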

The amplitude of the load voltage (VL) is compared with the reference amplitude (VL*), and the output of the PI controller is considered the reactive component of voltage (Vqr) for regulation of the load voltage; it is added to the DC component of Vq to generate Vq*. The reference q-axis load voltage is therefore:
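Analogously to the d-axis, the missing equation is presumably:

\[
v_q^* = \bar{v}_q + v_{qr}
\]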

The resultant voltages (Vd*, Vq*, V0) are then transformed back into the a-b-c frame using the inverse Park's transformation:
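The standard inverse Park's transformation this refers to is:

\[
\begin{bmatrix} v_a^* \\ v_b^* \\ v_c^* \end{bmatrix}
=
\begin{bmatrix}
\cos\theta & -\sin\theta & 1 \\
\cos\!\left(\theta - \tfrac{2\pi}{3}\right) & -\sin\!\left(\theta - \tfrac{2\pi}{3}\right) & 1 \\
\cos\!\left(\theta + \tfrac{2\pi}{3}\right) & -\sin\!\left(\theta + \tfrac{2\pi}{3}\right) & 1
\end{bmatrix}
\begin{bmatrix} v_d^* \\ v_q^* \\ v_0 \end{bmatrix}
\]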

The reference load voltages and the sensed load voltages are used in the PWM generator to generate gate pulses for the switches.
IV. PROPOSED COMPENSATION STRATEGY
In a three-phase distribution system, when a voltage sag occurs, the fuel-cell-based DVR needs to provide the essential voltage to compensate it. The voltage Vinj is injected such that the load voltage Vload is constant in magnitude and undistorted, even though the supply voltage Vs is not constant in magnitude or is distorted.
Fig. 6 shows the phasor diagram for different voltage injection schemes of the DVR. VL(presag) is the voltage across the critical load prior to the voltage sag. During the voltage sag, the load voltage is reduced to VL(sag) with some phase-lag angle. The DVR then needs to provide a voltage such that the load voltage magnitude is maintained at the pre-sag condition. Based on the phase angle of the load voltage, the voltage injected by the DVR can be realized in four ways. Vinj1 represents the voltage injected by the DVR in phase with VL(sag). With the injection of Vinj2, the load voltage magnitude is retained, but it leads VL(sag) by a small angle. With Vinj3, the load voltage holds the same phase as in the pre-sag condition.


Fig. 6. Phasor diagram for voltage injection schemes

Vinj4 is the condition where the injected voltage is in quadrature with the current, and this injection involves no active power. On assessment of these four voltage injection schemes, the injection of Vinj1 achieves the minimum possible rating of the converter. The sinusoidal signal is phase-modulated by means of the computed angle, and the modulated three-phase voltages are given by:
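The equation is not reproduced in this copy; the phase-modulated set is commonly written (with delta the controller output angle and Vm the peak magnitude, both assumed symbols here) as:

\[
v_a = V_m \sin(\omega t + \delta), \quad
v_b = V_m \sin\!\left(\omega t - \tfrac{2\pi}{3} + \delta\right), \quad
v_c = V_m \sin\!\left(\omega t + \tfrac{2\pi}{3} + \delta\right)
\]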

V. MODELING AND SIMULATION


A voltage sag is created in a 415 V, three-phase distribution system with the help of a three-phase-to-ground fault, and a DVR is modeled to mitigate the sag event and simulated using MATLAB, as shown in Fig. 7. The control algorithm used is SRF theory along with a PI controller. The proposed compensation technique is also implemented to reduce the rating of the DVR. The equivalent load considered is a 10 kVA, 0.8-pf lagging linear load. The parameters of the considered system for the simulation study are given in the Appendix.
VI. PERFORMANCE OF THE PROPOSED SYSTEM
The performance of the DVR is validated for supply voltage disturbances such as balanced and unbalanced voltage sag. Unbalanced voltage sag is created in the supply voltage for a duration of 0.2 s to 0.4 s using single-line-to-ground and double-line-to-ground faults, as shown in Fig. 8 (a) and (b) respectively. Balanced voltage sag is created in the supply voltage for a duration of 0.2 s to 0.4 s using a three-phase fault, as shown in Fig. 8 (c). For voltage sag, a condition of temporary reduction in supply voltage, the DVR injects an equal positive voltage component in all three phases, in phase with the supply voltage, to correct it, as shown in Fig. 9 (a), (b) and (c) respectively. The load voltage is kept sinusoidal by the proper compensation voltage injected by the DVR, as shown in Fig. 10.
To implement the reduced-rating DVR, it is necessary to find the optimal voltage injection angle. The magnitude of the voltage injected by the DVR for mitigating the same kind of sag event with different angles of injection is observed. The volt-ampere ratings of the DVR for the four voltage injection schemes are given in Table I. In Scheme 1, the DVR voltage is injected in phase with the sag voltage. In Scheme 2, the DVR voltage is injected at a small angle of 30 degrees, and in Scheme 3 at an angle of 45 degrees. The injection of voltage in quadrature with the line current comes under Scheme 4.
On comparison of the four voltage injection schemes, when the DVR injects voltage in phase with the sag voltage (Scheme 1), the output power supplied by the DVR is minimized, as shown pictorially in Fig. 11 (a) and (b) for the PI and SRF theory based PI controllers respectively.

Fig. 7. Simulink Model of DVR Connected System
Fig. 8 (a). Simulation of Single Phase Voltage Sag
Fig. 8 (b). Simulation of Two Phase Voltage Sag
Fig. 8 (c). Simulation of Three Phase Voltage Sag
Fig. 9 (a). DVR Output for Compensating Single Phase Sag
Fig. 9 (b). DVR Output for Compensating Two Phase Sag
Fig. 9 (c). DVR Output for Compensating Three Phase Sag

Fig. 10. Simulation of Compensated Load Voltage
Fig. 11 (a). Comparison of voltage injection schemes for PI controller
Fig. 11 (b). Comparison of voltage injection schemes for SRF theory based PI controller
VII. CONCLUSION
The operation of the DVR has been demonstrated with a conventional PI controller and an SRF theory based PI controller, using various voltage injection schemes, for both balanced and unbalanced voltage sag compensation in MATLAB/Simulink. From the simulation analysis, the DVR based on Synchronous Reference Frame theory is able to detect different types of power quality problems without error and injects the appropriate voltage component to immediately correct any abnormality in the terminal voltage, keeping it balanced and constant at the nominal value. Simulation and experimental results show that the DVR with the proposed compensation strategy successfully protects the sensitive load against symmetrical and unsymmetrical voltage sag. A comparison of the performance of the DVR with different voltage injection schemes has been performed. Based on the performance analysis, it is concluded that voltage injection in phase with the sag voltage reduces the power injected by the DVR. This results in a minimum rating of the DVR, which makes it more economical with optimum size. This work can also be extended to other power quality problems.
APPENDIX
AC Line Voltage: 415 V, 50 Hz
Line Impedance: Ls = 3.0 mH, Rs = 0.01 Ω
Linear Loads: 10 kVA, 0.80 pf lag
Ripple Filter: Cf = 10 µF, Rf = 4.8 Ω
DC Voltage of DVR: 300 V
DC Bus Voltage PI Controller: Kp1 = 0.5, Ki1 = 0.35
AC Load Voltage PI Controller: Kp2 = 0.5, Ki2 = 0.35
PWM Switching Frequency: 10 kHz
Series Transformer: 10 kVA, 200 V/300 V

REFERENCES
[1] M. H. J. Bollen, Understanding Power Quality Problems: Voltage Sags and Interruptions. New York, NY, USA: IEEE Press, 2000.
[2] A. Ghosh and G. Ledwich, "Compensation of distribution system voltage using DVR," IEEE Trans. Power Del., vol. 17, no. 4, pp. 1030-1036, Oct. 2002.
[3] H. Igarashi and H. Akagi, "System configurations and operating performance of a dynamic voltage restorer," IEEE Trans. Ind. Appl., vol. 123-D, no. 9, pp. 1021-1028, 2003.
[4] A. Ghosh, A. K. Jindal and A. Joshi, "Design of capacitor supported dynamic voltage restorer for unbalanced and distorted loads," IEEE Trans. Power Del., vol. 19, no. 1, pp. 405-413, Jan. 2004.
[5] J. G. Nielsen, M. Newman, H. Nielsen and F. Blaabjerg, "Control and testing of a dynamic voltage restorer (DVR) at medium voltage level," IEEE Trans. Power Electron., vol. 19, no. 3, pp. 806-813, May 2004.
[6] J. A. Martinez and J. M. Arnedo, "Voltage sag studies in distribution networks - part I: system modeling," IEEE Trans. Power Del., vol. 21, no. 3, pp. 338-345, July 2006.
[7] M. R. Banaei, S. H. Hosseini, S. Khanmohamadi and G. B. Gharehpetian, "Verification of a new energy control strategy for dynamic voltage restorer by simulation," Simul. Model. Pract. Theory, vol. 14, no. 2, pp. 112-125, Feb. 2006.
[8] F. M. Mahdianpoor, R. A. Hooshmand and M. Atae, "A new approach to multifunctional dynamic voltage restorer implementation for emergency control in distribution systems," IEEE Trans. Power Del., vol. 25, no. 4, pp. 882-890, Apr. 2011.
[9] M. Moradlou and H. R. Karshenas, "Design strategy for optimum rating selection of interline DVR," IEEE Trans. Power Del., vol. 26, no. 1, pp. 242-249, Jan. 2011.
[10] A. K. Sadigh and K. M. Smedley, "Review of voltage compensation methods in dynamic voltage restorer (DVR)," in Proc. IEEE Power and Energy Society General Meeting, July 2012, pp. 1-8.
[11] Pradip Kumar Saha, Sujay Sarkar, Surojit Sarkar and Gautam Kumar Panda, "Dynamic voltage restorer for power quality improvement," International Journal of Engineering and Computer Science, vol. 2, no. 1, Jan. 2013.
[12] D. P. Kothari, Pychadathil Jayaprakash, Bhim Singh and Ambrish Chandra, "Control of reduced rating dynamic voltage restorer with a battery energy storage system," IEEE Trans. Power Del., vol. 50, no. 2, Mar./Apr. 2014.
[13] Himadri Ghosh, Pradip Kumar Saha and Goutam Kumar Panda, "Design and simulation of a novel self-supported dynamic voltage restorer for power quality improvement," Int. J. Elect. Power Energy Syst., vol. 3, no. 6, ISSN 2229-5518, June 2012.


Performance Comparison of ANN, FLC and PI Controller Based Shunt Active Power Filter for Load Compensation
Lincy Luciana. M, Sunandha. S and Sarasvathi. K
Assistant Professor, Department of EEE, M.Kumarasamy College of Engineering, Karur

Abstract - To reduce power quality issues, it is important to eliminate harmonics in power systems. Harmonic elimination through a Shunt Active Power Filter (SAPF) provides higher efficiency when compared with other filters. Non-model-based controllers have been designed for the control of a SAPF to reduce the distortion created by non-linear loads. The Artificial Neural Network (ANN) is becoming an attractive technique in many control applications due to its parallel operation and high learning capability. In this paper, a Least Mean Square (LMS) based ADALINE ANN is proposed to regulate the DC bus voltage (Vdc) so as to eliminate harmonics and achieve load compensation in the system. The Simulink model of the proposed system is developed using the MATLAB/SIMULINK tool. The performance of the ANN controller is compared with an FLC and a conventional PI controller. The proposed method offers a better dynamic response and efficient control under varying load conditions.
Index Terms - Power Quality, Harmonic Elimination, Shunt Active Power Filter, Neural Network Controller.

I. INTRODUCTION
Many domestic and industrial non-linear loads are power electronic switching devices, such as televisions, personal computers, business and office equipment (copiers, printers), and industrial equipment such as Programmable Logic Controllers (PLCs), Adjustable Speed Drives (ASDs), rectifiers, inverters and CNC tools. Power quality issues like interruptions, voltage sag, swell, harmonics, noise and switching transients occur in the power system and introduce serious power pollution on the utility side. Among these power quality issues, harmonics are the major contributor to pollution of the power grid. Traditionally, passive LC filters have been used to avoid these effects [1]. Resonance, fixed compensation and huge size are the problems arising with passive filters. These problems are overcome by the introduction of active filters, which can address more than one harmonic at a time.
Among the active filters, the SAPF is a power electronic converter that is connected in parallel and cancels the reactive and harmonic currents due to the non-linear load [2]. Ideally, the SAPF generates the reactive and harmonic current needed to compensate the non-linear loads on the supply line. The SAPF is a Voltage Source Inverter (VSI) with a DC-side capacitor (Cdc), used to generate the filter current (if) that is injected into the utility power grid. This cancels the harmonic components drawn by the non-linear load and keeps the utility line current (is) sinusoidal [3]. It has the advantage of carrying the compensation current plus a small amount of active fundamental current supplied to compensate for system losses.
The Vdc is conventionally regulated using a PI controller, which improves the system performance effectively. Several techniques are available to generate the switching current for the APF [4]-[6]. Bhim Singh et al. proposed a PI control algorithm for a single-phase SAPF [7]. In the PI control strategy, the reference current is calculated by sensing only the line currents [8]. The PI controller requires an accurate linear mathematical model, and it fails to perform satisfactorily under non-linearity, load disturbances and parameter variations [9].
Conventional control requires mathematical analysis of the system, so soft computing is an alternative solution for controlling the APF. Soft computing is a technology for extracting information from the process signal by using expert knowledge. In order to enhance the performance of the SAPF, genetic algorithms, the bacterial foraging technique, particle swarm optimization, Ant Colony Optimization (ACO), fuzzy logic controllers and ANN techniques have been employed. The SAPF is optimized by the bacterial foraging (BF) technique for load compensation in [4] and by ACO in [5]. The APF is controlled by ANN technology in [10]-[11]. An adaptive neural network compensation algorithm is used to compensate harmonics and reactive power for the PQ and DQ strategies [14]-[15]. The Takagi-Sugeno FLC and Mamdani FLC are compared in [16]. FLCs with different membership functions are compared in [17]-[18]. The conventional PI, FLC and ANFIS are compared in [12]-[13] based on the PQ strategy.

Fig.1 Basic principle of SAPF

In this paper, an ANN is used for controlling the SAPF. The performance indices considered are the percentage peak overshoot (%Mp), the DC link voltage settling time (Vdc_Ts) and the Total Harmonic Distortion (%THD). The proposed ANN controller offers an improved dynamic response compared with the FLC and the conventional PI controller.
REFERENCE SOURCE CURRENT ESTIMATION METHOD
Due to the non-linear load, harmonic distortion occurs in the supply system and in other loads connected to the same supply. Hence, the SAPF is connected across the main supply system at the Point of Common Coupling (PCC). Fig. 1 shows the basic principle of the SAPF. It controls and cancels the current harmonics on the utility side by supplying a compensating current that makes the source current in phase with the source voltage [3].
From Fig.1, the instantaneous current is given by Eq. (1):
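Eq. (1) is not reproduced in this copy; consistent with Fig. 1, the instantaneous current balance at the PCC is presumably

\[
i_s(t) = i_L(t) - i_c(t)
\]

where i_s is the source current, i_L the load current and i_c the filter (compensating) current.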

There are also some switching losses in the PWM converter and so the utility must supply a small
overhead for the capacitor leakage and converter switching losses accumulated with real power of the load.

Fig.2. Schematic diagram of ANN controller based SAPF

The total peak current supplied by the source (Isp) is given by Eq. (7), where Isl is the peak value of the loss current.
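Eq. (7) is not reproduced here; in the SAPF literature this relation is commonly written as

\[
I_{sp} = I_{s1} + I_{sl}
\]

where I_{s1} is taken here, as an assumption, to denote the peak value of the fundamental active component of the load current.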
If the active filter provides the total reactive and harmonic power, the source current will be in phase with the utility voltage and purely sinusoidal. At this time, the active filter must provide the compensating current given in Eq. (8):
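Under this condition, Eq. (8) presumably takes the usual form:

\[
i_c(t) = i_L(t) - i_s(t)
\]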

Thus, for accurate and instantaneous compensation of reactive and harmonic power, it is necessary to estimate the fundamental component of the load current, is(t), as the reference current. The peak value of the reference current can be estimated by controlling the DC-side capacitor voltage. Ideal compensation requires the mains current to be sinusoidal and in phase with the source voltage, irrespective of the nature of the load current. The desired source currents after compensation can be given as in Eq. (9):
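Eq. (9), in its standard form for a balanced three-phase system, is presumably:

\[
i_{sa}^*(t) = I_{sp}\sin(\omega t), \quad
i_{sb}^*(t) = I_{sp}\sin(\omega t - 120^\circ), \quad
i_{sc}^*(t) = I_{sp}\sin(\omega t + 120^\circ)
\]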

where Isp is the amplitude of the desired source current, and the phase angle can be obtained from the source voltages.
Hence, Isp needs to be determined. The peak value of the reference current is estimated by regulating the Cdc voltage of the PWM converter. The capacitor voltage is compared with a reference value, and the error is processed by the ANN controller. The output of the ANN controller is taken as the amplitude of the desired source current, and the reference currents are obtained by multiplying this peak value with unit sine vectors in phase with the source voltages. The detailed schematic diagram of the ANN-controller-based SAPF is shown in Fig. 2. The modified Space Vector Pulse Width Modulation (SVPWM) current control scheme [19] is used to generate the switching pulses of the SAPF. In this paper, the performance of the proposed ANN controller is compared with the FLC and the conventional PI controller of the SAPF.
DESIGN OF ANN CONTROLLER
An ANN is implemented to control the Cdc voltage based on processing of the Vdc error i(n), and it is used to improve the dynamics of the SAPF. An ANN consists of a large number of strongly connected elements. An artificial neuron represents the biological neuron concept coded in a computer program. The artificial neuron model is shown in Fig. 3.
Inputs i(n) enter the processing element from the left. The first step is to multiply each of these inputs by its respective weighting factor w(j). These weighted inputs are then fed into the summing function Σ(w(j)·i(n)), and the information flows to the output through a transfer function, which may be a threshold, sigmoid, tangential, Gaussian, hyperbolic, linear or pure linear function [20].

Fig.3.Artificial Neuron model

Adaline Based Control Algorithm
The basic concept of the proposed ANN is the LMS algorithm: the ADALINE is trained to track the unit vector templates so as to maintain minimum error. The initial weight is set to zero, while the learning rate, the coefficient of convergence whose value lies between 0 and 1, is set to 0.001 in the LMS algorithm. The LMS based ADALINE control algorithm is shown in Fig. 4. The initial output pattern is compared with the current output, and the weights are updated using the LMS algorithm until the error becomes small.

Fig.4.LMS based ADALINE control algorithm

The amplitude of the desired source current estimated by the ANN is given in Eq. (10):
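Eq. (10) is not reproduced here; given the quantities listed below, the ADALINE output and LMS update it describes are presumably of the standard form

\[
Y = \sum_j w(j)\, i(j), \qquad w(j) \leftarrow w(j) + \eta\, e(n)\, i(n)
\]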

where Y is the amplitude of the desired source current, η is the learning rate, e(n) is the error between the output and the target value, i(n) denotes the input values and w(j) the weights of the ADALINE network.
The ADALINE used to extract the amplitude of the desired source current is shown in Fig. 5. The input of the ANN block, i(n), is the error signal obtained by comparing the capacitor voltage with the reference value. The desired peak value of the source current is estimated using the LMS based ADALINE ANN. The reference currents are estimated by multiplying this desired peak value with unit sine vectors in phase with the source voltages. The modified SVPWM current control scheme generates the switching pulses of the SAPF by comparing the actual source current with the desired source current estimated by the ANN.
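A toy single-weight version of this LMS update, with the 0.001 learning rate stated above and invented sample data, illustrates the mechanism:

```java
public class AdalineLms {
    public static void main(String[] args) {
        // Invented samples of a unit template; target amplitude to track.
        double[] x = {0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5};
        double target = 0.8;   // desired amplitude
        double w = 0.0;        // initial weight set to zero, as in the text
        double eta = 0.001;    // learning rate, as stated in the text
        for (int epoch = 0; epoch < 20000; epoch++) {
            for (double in : x) {
                double y = w * in;          // ADALINE output
                double e = target * in - y; // error against the target waveform
                w += eta * e * in;          // LMS weight update
            }
        }
        System.out.println("Estimated amplitude weight ~ " + w); // approaches 0.8
    }
}
```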
DESIGN OF FUZZY LOGIC CONTROLLER
In order to keep the Cdc voltage constant [20], the active power flowing into the filter needs to be controlled; if it can be controlled, the losses inside the filter get compensated and the Vdc can be maintained at the desired value. The FLC is implemented to control the Cdc voltage based on processing of the Vdc error e(t) and its derivative Δe(t), and it is used to improve the dynamics of the SAPF. The input variables are given in Eqs. (12) and (13):
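Given the description above, Eqs. (12) and (13) are presumably the error and its discrete derivative:

\[
e(t) = V_{dc}^{ref} - V_{dc}(t), \qquad \Delta e(t) = e(t) - e(t-1)
\]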

Fig.5. ADALINE is used to extract the amplitude of desired source current

An FLC consists of four stages:
(i) Fuzzification
(ii) Knowledge base
(iii) Inference mechanisms
(iv) Defuzzification
The knowledge base is composed of a data base and a rule base, and is designed to obtain a good dynamic response under uncertainty in process parameters and external disturbances. The data base, consisting of input and output membership functions, provides the information for suitable fuzzification operations, the inference technique and defuzzification.
The inference technique uses a collection of linguistic rules to transform the input conditions into a fuzzified output. Finally, defuzzification transforms the fuzzy outputs into control signals [12]. Fig. 6 shows a block diagram of the FLC for DC voltage control of the SAPF.
A. Design of Control Rules
In designing the control rules of the FLC, the formulation of the rule set plays a key role in improving the system performance. In the case of fuzzy-logic-based DC voltage control, the capacitor voltage deviation e(t) and its derivative Δe(t) are considered as the inputs of the FLC, and the requirement for voltage regulation is taken as the output. The input and output variables are transformed into linguistic variables as given in [17].
In this case, the knowledge of the system's behavior is put in the form of rules of inference. The rule table contains 49 rules. To convert crisp variables into fuzzy sets, seven fuzzy sets are chosen: NL (Negative Large), NM (Negative Medium), NS (Negative Small), ZE (Zero), PS (Positive Small), PM (Positive Medium) and PL (Positive Large). Normalized triangular membership functions are used for the input and output variables, as shown in Fig. 7.
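A normalized triangular membership function of the kind used here can be sketched as follows; the breakpoints in the example are illustrative, not the paper's exact partitions.

```java
public class TriangularMf {
    // Triangular MF with left foot a, peak b and right foot c.
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        // Membership degree of a normalized error of 0.2 in an assumed
        // "Positive Small" set centered at 0.33 on the interval [-1, 1].
        System.out.println(tri(0.2, 0.0, 0.33, 0.66));
    }
}
```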

Fig.6. FLC for DC voltage control



CONVENTIONAL PI CONTROLLER PARAMETERS
The performance of the proposed ANN controller of the SAPF is evaluated against a conventional PI controller. The SAPF system parameters are presented in Table II. The parameters of the conventional PI controller are designed based on the model in [2]; the proportional gain and integral gain are selected as 0.57 and 10.3 respectively.

VI. SIMULATION RESULTS AND DISCUSSION


The Simulink model was developed using the MATLAB SimPower Systems toolbox. The FLC and ANN are designed using the MATLAB fuzzy and neural toolboxes. The performance of the FLC and ANN is detailed below.
Dynamic Performance of SAPF
The balanced and sinusoidal three-phase input source voltages are considered, with a diode bridge rectifier as the load for the system. The Total Harmonic Distortion (THD) before the SAPF is 28.01%. The performance of the SAPF with the proposed FLC, the ANN controller and the conventional PI controller has been analyzed for the following three cases.
1. Switch-on response
The SAPF is switched on at t = 50 ms. The behavior of the DC capacitor voltage for the conventional PI controller and the FLC is given in Fig. 8. The FLC of the SAPF is characterized as follows:
(i) Seven fuzzy MFs for each input and output.
(ii) Fuzzification using a continuous universe of discourse.
(iii) Implication using Mamdani's min operator.
(iv) Aggregation using Mamdani's max operator.
(v) Defuzzification using the centroid-of-area method.
The performance of the SAPF in terms of Vdc_Ts, %Mp and %THD is listed in Table III. The supply voltage (Vsa), supply current (Isa), load current (ILa), filter current (Ica) and Vdc related to phase A for the conventional PI, ANN and FLC controllers are shown in Fig. 9 to Fig. 11 respectively.
The capacitor voltage settles at 6 ms with the ANN and at 10 ms with the FLC; the ANN settles much faster than the conventional PI method, which takes 310 ms. In all cases, the %THD is within the limit of the IEEE 519-1992 standard.

Fig. 7. Normalized triangular membership functions of FLC for: (a) input variable e(t), (b) input variable Δe(t), (c) output variable Iref
2. Transient response
To study the transient behavior, the load resistance is increased from 6.7 Ω to 10 Ω at t = 0.3 s. The supply voltage (Vsa), supply current (Isa), load current (ILa), filter current (Ica) and Vdc for the conventional PI, ANN and FLC controllers are shown in Fig. 12 to Fig. 14 respectively. The performance indices are listed in Table III.
Compared with the FLC and ANN, the rise or dip in Vdc is larger with the conventional PI controller and takes more cycles to settle down. In the conventional method, the %THD in the source current settles after 3-4 cycles. The FLC takes 10 ms and the ANN takes 6 ms to settle down at Vdc.
TABLE III
PERFORMANCE ANALYSES OF SAPF BASED ON CONVENTIONAL PI, ANN AND FLC CONTROLLERS


Fig. 9. Vsa, Isa, ILa, Ica and Vdc of conventional PI controller
Fig. 11. Vsa, Isa, ILa, Ica and Vdc of ANN

The comparison of settling time, %THD and peak overshoot for the conventional PI, FLC and ANN controllers is shown in Fig. 15 to Fig. 17. The ANN controller settles in 6 ms, faster than the FLC and conventional PI controllers, and the %THD is lower with the ANN than with the FLC and conventional PI controllers. However, the peak overshoot produced by the ANN is larger than with the FLC and conventional PI. Overall, the dynamic performance of the ANN is better than that of the FLC and the conventional PI controller; its settling time of 6 ms is much shorter than that of the other two controllers.
CONCLUSION
In this paper, non-model-based controllers are designed to achieve better utilization and reactive current compensation. Soft computing techniques were applied to control the switching of the SAPF. The LMS based ADALINE network is trained online to extract the fundamental load active current magnitude. The performance of the ANN-based SAPF controller, in terms of settling time and %THD, is better than that of the FLC and conventional PI controllers, and it is found to provide a much better response under dynamic conditions.
REFERENCES
[1] H. Akagi, "New trends in active filters for power conditioning," IEEE Trans. Ind. Appl., vol. 32, no. 6, pp. 1312-1322, 1996.
[2] S. Mishra and C. N. Bhende, "Bacterial foraging technique-based optimized APF for load compensation," IEEE Trans. Power Del., vol. 22, no. 1, pp. 457-465, 2007.
[3] A. Sakthivel, P. Vijayakumar and A. Senthilkumar, "Design of ant colony optimized shunt active power filter for load compensation," International Review of Electrical Engineering, vol. 9, no. 4, pp. 725-734, 2014.
[4] Bhim Singh, Kamal Al-Haddad and Ambrish Chandra, "A review of active filters for power quality improvement," IEEE Trans. Ind. Electron., vol. 46, no. 5, pp. 960-971, 1999.
[5] M. El-Habrouk, M. K. Darwish and P. Mehta, "Active power filters: a review," Proc. IEE Electric Power Applications, pp. 403-413, 2000.
[6] Zainal Salam, Tan Perng and Awang Jusoh, "Harmonics mitigation using active filter: a technical review," Elektrika, pp. 17-26, 2006.
[7] Bhim Singh, Ambrish Chandra and Kamal Al-Haddad, "An improved single phase active filter with optimum DC capacitor," in Proc. IEEE IECON 22nd International Conference, pp. 677-682, 1996.
[8] C. N. Bhende, S. Mishra and S. K. Jain, "TS-fuzzy-controlled active power filter for load compensation," IEEE Trans. Power Del., vol. 21, no. 3, pp. 1459-1465, 2006.
[9] S. K. Jain, P. Agrawal and H. O. Gupta, "Fuzzy logic controlled shunt active power filter for power quality improvement," IEE Proc. Electric Power Applications, vol. 149, no. 5, pp. 317-328, 2002.
[10] J. R. Vazquez and P. Salmeron, "Active power filter control using neural network technologies," IEE Proc. Electr. Power Appl., vol. 150, no. 2, pp. 139-145, 2003.
[11] Bhim Singh, Ambrish Chandra and Kamal Al-Haddad, "Computer-aided modeling and simulation of active power filters," Electrical Machines and Power Systems, pp. 1227-1241, 1999.
[12] Parmod Kumar and Alka Mahajan, "Soft computing techniques for the control of an active power filter," IEEE Trans. Power Del., vol. 24, no. 1, pp. 452-461, 2009.
[13] Brahmaiah Routhu and N. Arun, "PI, fuzzy and ANFIS control of 3-phase shunt active power filter," International Journal of Engineering and Technology, vol. 5, no. 3, pp. 2163-2171, 2013.
[14] Bhim Singh and Jitendra Solanki, "An implementation of an adaptive control algorithm for a three-phase SAPF," IEEE Trans. Ind. Electron., vol. 56, no. 8, pp. 2811-2820, 2009.
[15] Bhim Singh and Jayaprakash, "Implementation of neural network controlled three-leg VSC and transformer as three-phase four-wire DSTATCOM," vol. 47, no. 4, pp. 1892-1901, 2011.
[16] Fatiha Mekri, Benyounes Mazari and Mohammed Machmoum, "Control and optimisation of shunt active power filter parameters by fuzzy logic," Can. J. Elect. Comput. Eng., vol. 31, no. 3, pp. 127-134, 2006.
[17] Suresh Mikkili and A. K. Panda, "Real time implementation of PI and fuzzy logic controllers based shunt active filter control strategies for power quality improvement," Electrical Power and Energy Systems, vol. 43, pp. 1114-1126, 2012.
[18] K. Sundararaju and A. Nirmal Kumar, "Cascaded and feed forwarded control of multilevel converter based STATCOM for power system compensation," International Review on Modelling and Simulations (IREMOS), vol. 5, no. 2, pp. 609-615.
[19] Anup Kumar Panda and Suresh Mikkili, "FLC based shunt active filter (p-q and Id-Iq) control strategies for mitigation of harmonics with different fuzzy MFs using MATLAB and real-time digital simulator," Electrical Power and Energy Systems, vol. 47, pp. 313-336, 2013.
[20] Microsemi User Guide, "Space Vector Pulse Width Modulation Hardware Implementation," 2014.
[21] A. Dhuliya and U. S. Tiwary, "Introduction to artificial ANN," IEEE Transaction in Electronic Technology, pp. 36-62, 1995.


An Approach for Lossless Data Hiding Using LZW Codes
Sindhuja J (PG Scholar) and Anitha B (Assistant Professor)
Department of Information Technology, Kongu Engineering College

Abstract - Hiding a message in compression codes can reduce transmission costs and simultaneously make the transmission more secure. The high-performance data-hiding Lempel-Ziv-Welch (HPDH-LZW) scheme reversibly embeds data in LZW compression codes by modifying the value of the compression codes: the value of an LZW code either remains unchanged or is changed to its original value plus the LZW dictionary size, according to the data bit to be embedded. Compared to other information-hiding schemes based on LZW compression codes, the proposed scheme achieves better hiding capacity, by increasing the number of symbols available to hide secrets, and also achieves faster hiding and extracting speeds due to lower computation requirements.
Keywords - LZW, Steganography, Information hiding.

I. INTRODUCTION
With the rapid development of new Internet techniques, huge amounts of data are generated on the
Internet daily. With the extensive, worldwide use of the Internet, it is now necessary to encrypt sensitive data
before transmission to protect those data. Reversible data-hiding techniques can ensure that the receiver can
receive hidden messages and recover needed data without distortion. Reversible data-hiding has received
extensive attention since recoverable media are more useful when protecting the security and privacy of
sensitive information. For example, assume that the personal information of a patient is private information
and the patients X-ray images are used as cover media. It is very important to recover the X-ray images
without any loss of detail after retrieving the patients personal information. Currently, reversible datahiding schemes are applied in three domains, i.e., the spatial domain, the transformed domain and the
compression domain. In the spatial domain, the values of the pixels of the cover image are altered directly
to hide the data. In the transformed domain, the cover image is processed by a transform algorithm to
obtain the frequency coefficients. Then, the frequency coefficients are modified to hide the data. In the
compression domain, the compression code is altered to hide the data. LZW coding is a simple, well-known,
lossless compression algorithm that compresses and decompresses data by using a dictionary that is
automatically produced, so LZW coding eliminates the need to analyze the source file or transmit any
auxiliary information to the decoder.
The related DH-LZW scheme, based on the LZW algorithm, hides data by shrinking one character of one symbol. However, its hiding capacity is low because only a symbol whose length is greater than the threshold can hide secret data, and an embeddable symbol hides only one secret bit.
The HCDH-LZW scheme improves the performance of Shim, Ahn, and Jeon's method by shrinking the characters according to the length of the symbol used to hide the data, thereby achieving higher embedding capacity. The hiding capacity is higher because more symbols are available to hide secret bits and because one symbol can hide more than one secret bit. However, only symbols with lengths larger than the threshold can hide data, and repeated symbols increase the size of the dictionary, which, in turn, lowers the hiding speed. In addition, the extracting algorithm is very complicated, which increases the computation costs. Further, both schemes must transmit auxiliary information, namely the threshold value.
To overcome the shortcomings of these methods, the proposed data-hiding scheme is based on LZW codes and utilizes the relationship between the output compression codes and the size of the dictionary. The proposed scheme guarantees that the receiver can recover the source data and extract the hidden data without loss. In comparison to other schemes, our scheme achieves a much higher embedding capacity and lower computation costs.
The need for data hiding is such that the existence of the message is not known to anyone apart from the sender and the intended receiver. In plain data hiding, the receiver is able to recover only the hidden data and not the source data that is used as a cover medium.
Cover Medium
This is the medium in which the sender hides the data. It can be an image, text data, or any audio or video file. It is also called the source data.
Classification of cover medium
In the modern approach, depending on the nature of the cover object, cover media can be classified as follows:
Text cover medium
Hiding information in plain text can be done in many different ways. Many techniques involve modifying the layout of a text, applying rules such as using every nth character, or altering the amount of white space after lines or between words.
Image cover medium
Image steganography is a steganography technique that uses an image as the cover. It exploits the fact that the human visual system has low sensitivity to small changes in digital data, and it modifies the pixel values of the image to hide the data.
Audio cover medium
In audio steganography system, the cover medium is digital Audio. Secret messages are embedded
in digital sound. Some common methods used in audio steganography are LSB coding, parity coding, phase
coding, spread spectrum and echo hiding.
Video cover medium
Video files are generally a collection of images and sounds, so most of the techniques presented for images and audio can be applied to video files too. The great advantages of video are the large amount of data that can be hidden inside and the fact that it is a moving stream of images and sounds.
Reversible data hiding allows the receiver to recover the source data and extract the hidden data without any loss. Reversible data hiding schemes can be applied in three domains.
In the spatial domain, the values of the pixels of the cover image are altered directly to hide the
data. In the transformed domain, the cover image is processed by a transform algorithm to obtain the
frequency coefficients and then the frequency coefficients are modified to hide the data.
In the compression domain the compression code is altered to hide the data. The merit of using
reversible data hiding schemes in the compression domain is that such schemes can reduce transmission
costs and simultaneously secure the information that is transmitted. For example, assume that the personal information of a patient is private and the patient's X-ray images are used as cover media. The receiver should be able to recover both the X-ray image and the personal information of the patient that is hidden in the X-ray image without any loss. This is an example of a reversible data-hiding technique in the medical field.
Reversible data-hiding techniques are used in various applications such as the military, science and education, and digital image processing, across various domains. They are used to recover both the source data and the hidden data.
Compression is a reduction in the number of bits needed to represent data. Compressing data can
save storage capacity, speed file transfer, and decrease costs for storage hardware and network bandwidth.
The term data compression refers to the process of reducing the amount of data required to represent a given
quantity of information. The aim of data compression is to reduce redundancy in stored or communicated
data which increases data density. It can be applied in file storage, distributed systems and data transmission.
The main idea of data compression is to reduce the quantity of data to be sent or transmitted. Various algorithms are used in data compression to reduce the size of the file to be transmitted without degrading the original file. There are different types of data compression techniques that can be used for compressing text.
Compressing data can be a lossless or lossy process. Lossless compression enables the restoration
of a file to its original state, without the loss of a single bit of data, when the file is uncompressed. Lossless
compression is the typical approach with executables, as well as text and spreadsheet files, where the loss
of words or numbers would change the information. A simple characterization of data compression is that it involves transforming a string of characters in some representation into a new string that contains the same information but whose length is as small as possible.
Lossy compression permanently eliminates bits of data that are redundant, unimportant or imperceptible. Lossy compression is useful with graphics, audio, video and images, where the removal
of some data bits has little or no discernible effect on the representation of the content. Graphics image
compression can be lossy or lossless.
The main advantages of compression are a reduction in storage hardware, data transmission time and communication bandwidth, and the resulting cost savings. A compressed file requires less storage capacity than an uncompressed file, and the use of compression can lead to a significant decrease in expenses for disks and drives. A compressed file also requires less time for transfer, and it consumes less network bandwidth than an uncompressed file. Many data processing applications require storage of large volumes of data, and the number of such applications increases constantly with the growing use of computers.
2. Related works
In this section, the LZW compression, DH-LZW and HCDH-LZW schemes are described briefly.
2.1. The LZW algorithm
The LZW algorithm compresses the source file mainly by substituting fixed length codes for the
variable length sequential symbols of the source file. Since the dictionary initializes with ASCII values
from 0 to 255, the shortest code length is nine bits. The encoder reads the source data sequentially and keeps extending the current sequence as long as it exists in the dictionary; when the extended sequence is not in the dictionary, the code that corresponds to the previously matched sequence is output and the new sequence is placed into the next unused dictionary location. The longer the
symbols in the dictionary are, the higher the LZW compression ratio is. Since the decoder constructs the
same dictionary dynamically and automatically in the decoding phase, the sender does not have to send the
entire dictionary to the receiver. Then, the decoder recovers the source file by converting the LZW codes to
the corresponding symbols according to the dictionary.
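As a supporting illustration (not taken from the paper itself), a minimal Python sketch of the LZW encoder described above might look as follows; the dictionary starts with the 256 single-character ASCII symbols and grows as longer sequences are seen:

def lzw_compress(data):
    # dictionary initialised with the single-character ASCII symbols 0-255
    dictionary = {chr(i): i for i in range(256)}
    w, codes = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                            # keep extending the current match
        else:
            codes.append(dictionary[w])       # output the code of the longest match
            dictionary[wc] = len(dictionary)  # place the new sequence in the next slot
            w = c
    if w:
        codes.append(dictionary[w])           # flush the final match
    return codes

print(lzw_compress("sddsddssddsdsddsddsd"))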
2.2. The DH-LZW scheme
The main idea is to shrink one character of one symbol to hide the data. Shim, Ahn, and Jeon's scheme sets THD as a threshold to decide whether or not a symbol can be used to hide data. During the hiding phase, a symbol can be used to hide one secret bit only when the length of the symbol is larger than THD. The hiding strategy must match the parity of the symbol's length to the secret bit by shrinking the last character of the symbol. The parity bit of an odd number is 1, and the parity bit of an even number is 0.
So if the secret bit equals the parity bit of the symbol's length, the symbol does not shrink; otherwise, the last character of the symbol is shrunk, and the shrunken character is returned to the source file. Since the symbols' lengths increase gradually during the construction of a dictionary, the shrunken symbol already exists in the dictionary. The extracting phase just tests the parity bit of the symbol; there may be a secret bit whenever a new symbol is added to the dictionary.
If the symbol's length is larger than THD, the secret bit equals the parity bit of the previous symbol's length. If the symbol's length is equal to THD, there are two possibilities: either the symbol already existed in the dictionary, which means that the symbol hides one secret bit equal to the parity bit of the previous symbol's length, or the symbol does not hide data. The embedding decision is sketched below.
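A minimal Python sketch of this embedding decision, assuming an illustrative threshold THD = 3 (the paper leaves THD as a transmitted parameter):

def dh_lzw_embed(symbol, secret_bit, thd=3):
    # A symbol can hide one bit only when its length exceeds the threshold THD.
    if len(symbol) <= thd:
        return symbol, False                  # nothing hidden
    if len(symbol) % 2 == secret_bit:
        return symbol, True                   # parity already matches the secret bit
    return symbol[:-1], True                  # shrink the last character to flip parity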
2.3. The HCDH-LZW scheme
The main idea of the HCDH-LZW scheme is to shrink the symbol until it is as short as possible while hiding as much data as possible. If the current symbol's length is greater than 2, it can be used to hide data; the larger the symbol's length, the more data it can hide. This scheme still uses the LZW code as output. When the current symbol's length is greater than 2, the encoder can hide secret bits by shrinking the symbol. When the symbol's length equals 2, the previous symbol's length equals 1, and the symbol cannot be shrunk. The number of secret bits that an embeddable symbol can hide is the logarithm of its previous symbol's length, as illustrated below.
For example, if the previous symbol's length is 2 or 3, it can be used to hide 1 bit, and if the previous symbol's length is between 4 and 7, it can be used to hide 2 bits. The main idea of the extracting phase is to examine the following decoded characters to count the secret bits. For example, if the symbol combined with two characters of the following symbol is still in the dictionary, then the hidden secret bits are 10. In the extracting phase, if the secret value is 0, the symbol remains unchanged; otherwise, it is shrunk according to the secret value.
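A one-line reading of this counting rule in Python (an illustration of the rule only, not the full scheme):

import math

def embeddable_bits(prev_symbol_len):
    # floor(log2(length)): lengths 2-3 hide 1 bit, 4-7 hide 2 bits, 8-15 hide 3 bits, ...
    return int(math.log2(prev_symbol_len)) if prev_symbol_len >= 2 else 0

assert embeddable_bits(3) == 1 and embeddable_bits(5) == 2 and embeddable_bits(8) == 3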
3. The Proposed work
The main idea of the proposed scheme is to modify the value of the LZW codes to hide a secret
rather than modifying the content of the dictionary. Every embeddable symbol can be used to hide one
secret bit. Before a new symbol is added to the dictionary, the encoder modifies the value of the output
LZW code according to the secret value. If the secret bit is 0, the output is the original LZW code and if
the secret bit is 1, the output is the sum of the value of the LZW code and the current size of the dictionary.
The hiding algorithm works as follows: once a new symbol is added into the dictionary, that symbol can be used to hide a secret. As a result, the number of bits available for hiding secret information is equal to the number of new symbols. The data hiding phase modifies the value of the LZW compression code to hide secrets, except for the initial 256 symbols in the dictionary.
In the data extracting process, assume that the value of the current processing code is C and the current size of the dictionary is Size. If C is larger than Size, then the extracted secret bit is 1; otherwise, the secret bit is 0. If the extracted secret bit is 1, then the original LZW code is the difference between C and Size; if the extracted secret bit is 0, then the original LZW code is C. Both procedures are sketched below.
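A minimal Python sketch of both procedures as described above (the trailing flush code carries no secret bit, so a real implementation would also track the payload length; recovering the source text from the restored codes is left to a standard LZW decoder):

def lzw_hide(data, secret_bits):
    dictionary = {chr(i): i for i in range(256)}
    w, out, k = "", [], 0
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            code = dictionary[w]
            # one secret bit per new dictionary entry: bit 1 shifts the code
            # by the current dictionary size, bit 0 leaves it unchanged
            if k < len(secret_bits) and secret_bits[k] == "1":
                code += len(dictionary)
            k += 1
            out.append(code)
            dictionary[wc] = len(dictionary)
            w = c
    if w:
        out.append(dictionary[w])   # final flush, no bit embedded
    return out

def lzw_extract(codes):
    size, bits, plain = 256, [], []
    for c in codes:
        if c >= size:               # code was shifted: secret bit 1
            bits.append("1")
            plain.append(c - size)
        else:                       # unshifted code: secret bit 0
            bits.append("0")
            plain.append(c)
        size += 1                   # one new dictionary entry per processed code
    return plain, "".join(bits)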
In the example, the source file is "sddsddssddsdsddsddsd" and the secret file is "1001000100". In the following table, the first secret bit is 1, and since 256 symbols existed in the dictionary before the data hiding procedure, the output code is the value of the original code plus the current size of the dictionary, i.e., 371.
After generating the 257th item, since the secret bit is 0, the second output code remains unchanged as 100. In the extracting phase, the first LZW code is 371, which is larger than the current dictionary size of 256, so the hidden secret bit is 1 and the extracted symbol is "s". The second LZW code is 100, which is smaller than the current dictionary size of 257, so the secret bit is 0 and the extracted symbol is "d".
The proposed scheme increases the embedding capacity by increasing the number of embeddable symbols. The increased hiding and extracting speeds of the proposed scheme are the result of its simple computation. Moreover, the proposed scheme decreases the dictionary's size because the content of the dictionary is not modified during the data hiding phase.
This scheme is based on the LZW compression code but modifies the value of the LZW compression codes to embed secret data. The proposed scheme increases the number of symbols available to hide secrets and does not change the content of the dictionary. Since the maximum number of hidden bits in the proposed scheme is equal to the size of the dictionary, it achieves a much higher embedding capacity than HCDH-LZW. In addition, the proposed scheme achieves faster hiding and extracting speeds than HCDH-LZW. Also, the dictionary generated by our proposed scheme is much smaller than that of HCDH-LZW.
4. Performance analysis
Four text files are taken to analyze the performance of the proposed system. The size of each file
is measured in bytes.

Figure 6.1 Embedding and Extracting capacity

Figure 6.1 shows the comparison of embedding capacity between the existing system and the proposed system. The embedding capacity increases with the file size, and thus the data hiding speed increases in the high-performance lossless data hiding scheme.
5. Conclusion and Future work
In the proposed scheme, the value of the LZW compression code is modified to embed the secret data. The proposed scheme increases the number of symbols available to hide secrets and does not change the content of the dictionary. It achieves higher embedding capacity and faster hiding and extracting speeds than HCDH-LZW, and the dictionary generated is also much smaller than that of HCDH-LZW. From the results it can be observed that the proposed scheme works better than the existing system and achieves high embedding capacity. Applying this scheme with a more efficient version of the LZW algorithm is left as future work.
6. REFERENCES
[1] Chang C.C, Lee C.F, Chuang L.Y, (2009), Embedding secret binary message using locally adaptive data compression coding, International Journal of Computer Sciences and Engineering Systems, Vol.3, No.1, pp.55-61.
[2] Chang C.C, Lin C.Y, (2007), Reversible steganographic method using SMVQ approach based on declustering, Information Sciences, Vol.177, No.8, pp.1796-1805.
[3] Chang C.C, Lu T.C, (2006), Reversible index domain information hiding scheme based on side-match vector quantization, Journal of Systems and Software, Vol.79, No.8, pp.1120-1129.
[4] Chang C.C, Wu W.C, (2006), A steganographic method for hiding secret data using side match vector quantization, IEICE Transactions on Information and Systems, Vol.8, No.9, pp.2159-2167.
[5] Chen C.C, Chang C.C, (2010), High capacity reversible data hiding for LZW codes, In Proceedings of the Second International Conference on Computer Modeling and Simulation, No.1, pp.3-8.
[6] Chen W.J, Huang W.T, (2009), VQ indexes compression and information hiding using hybrid lossless index coding, Digital Signal Processing, Vol.19, No.3, pp.433-443.
[7] Jo M, Kim H.D, (2002), A digital image watermarking scheme based on vector quantization, IEICE Transactions on Information and Systems, Vol.85, No.6, pp.1054-1056.
[8] Lu Z.M, Wang J.X, Liu B.B, (2011), An improved lossless data hiding scheme based on image VQ index residual coding, Journal of Systems and Software, Vol.82, No.6, pp.1016-1024.
[9] Ma K, Zhang X, Yu N, Li F, (2013), Reversible data hiding in encrypted images by reserving room before encryption, IEEE Transactions on Information Forensics and Security, Vol.8, No.3, pp.553-562.
Improving the Non-functional Quality of Service (QoS) Attributes of Web Services for Tourism

S. Dharanya1, J. Jayanthi2
1Sona College of Technology, dharu0907@gmail.com
2Sona College of Technology, Computer Science and Engineering
Abstract - The Quality of Service (QoS) non-functional model is composed of four criteria as parameters for the quality of web services: Service Cost, Service Response Time, Service Availability and Service Reputation. By improving these non-functional QoS attributes, the performance of the web services can be improved. The performance of existing approaches is not satisfactory, since each is implemented with an improvement of either Service Response Time or Service Availability alone. The proposed model deals with improving both the Service Response Time (SRT) and the Service Availability (SA) for tourism.
Keywords - Service Response Time (SRT), Service Availability (SA), Service Cost (SC), Service Reputation (SR).

I. INTRODUCTION
QoS is defined as the ability of a web service to respond to expected invocations. A web service is a piece of software that is available over the internet and uses a standardized XML messaging system. Web services enable communication among various applications by using open standards such as HTML, XML, WSDL, and SOAP.
HTML (Hypertext Markup Language)
HTML is the collection of markup symbols or codes in a file for display on the WWW (World Wide Web). It is used to create visually engaging interfaces for web applications.
XML (Extensible Markup Language)
XML defines a set of rules for encoding documents in a format that is both human readable and machine readable, for example:
<?xml version="1.0" encoding="UTF-8"?>
WSDL (Web Services Description Language)
WSDL is an XML-based language used for describing the functionality offered by a web service. It provides a machine-readable description of how the service can be called, what data structures it returns and what parameters it expects.
SOAP (Simple Object Access Protocol)
SOAP is a protocol specification for exchanging structured information in the implementation of
web services in computer networks. It uses XML information set for its message format.
Related works
In [10], web services are described as a collection of software components and standards for next-generation technologies, integrated with a GIS application to produce an interactive interface for the travel and tourism domain. GIS (Geographic Information System) based technology incorporates common database operations such as query and statistical analysis with the unique visualization and geographic analysis benefits offered by maps. Quality of service (QoS) is a combination of several qualities or properties of a service, such as availability, response time, throughput and security [2].
The response time of a structured BPEL (Business Process Execution Language) process is analyzed in [11]. The sequence constructor corresponds to a sequential execution of elementary web services s1 to sn, and the mean response time of the sequence is the sum of the mean response times of its components, E(Tsequence) = E(Ts1) + ... + E(Tsn).
Methods for web services performance optimization, such as SOAP compression, are studied in [5]. If T, the time saved by SOAP compression, is positive, the compression improves the performance of the web service; if T is negative, it does not improve the performance.
The scalability of web services can be improved by reducing the time spent at the slow server [4]:
G = T(g) / T(1) (3)
Availability is a function A(t), which is the probability that the system is operational (i.e., delivers the correct service) at an instant of time t. This function quantifies the alternation between deliveries of correct and incorrect service. A system can fail to deliver a correct service for the following reasons [3]:
• the presence of faults, caused by system errors;
• the presence of an overloading condition, i.e. the server is so busy that it is not able to deliver a correct service.
The response time is the maximum time that elapses from the moment that a web service receives a SOAP request until it produces the corresponding SOAP response [8]. It is calculated as
R = T1 - T2 (4)
where T1 is the time at which the web service produces the SOAP response and T2 is the time at which the web service receives the SOAP request.
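As a supporting illustration of formula (4) (not from the paper itself), the round-trip response time of a service endpoint can be measured in Python as follows; an HTTP GET stands in for the SOAP exchange, and the URL is hypothetical:

import time
import urllib.request

def response_time(url):
    t2 = time.perf_counter()              # T2: moment the request is sent
    with urllib.request.urlopen(url) as resp:
        resp.read()                       # wait for the full response body
    t1 = time.perf_counter()              # T1: moment the response is received
    return t1 - t2                        # R = T1 - T2

# print(response_time("http://example.com/tourism-service"))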
The availability of the web service is the probability that the service is accessible. It is calculated as the percentage of time the service is up over the total observation time (as computed in the pseudo code of Section 4):
Availability = (Total uptime / Total time) x 100
In [9], the overall response time is measured using an automation testing tool; it comprises the server processing time, the data transfer time and the page rendering time:
Overall response time = Server processing time + Data transfer time + Page rendering time

Fig. 1 An approach to get the overall response time using an automation tool during a load test

Proposed System
The Quality of Service (QoS) non-functional model is composed of four criteria as parameters for the quality of web services: Service Cost, Service Response Time, Service Availability and Service Reputation. The proposed model deals with the improvement of Service Response Time (SRT) and Service Availability (SA). By improving these non-functional Quality of Service (QoS) attributes, the performance of the web services can be improved.

Fig: 2 Web Service Architecture

3.1 Web Services
Web Services are software components described via WSDL which are capable of being accessed
via standard network protocols such as SOAP over HTTP. Web services provided by different vendors have unpredictable characteristics. Different web services with different QoS (Quality of Service) requirements will compete for network and system resources (bandwidth, processing time). An enhanced QoS for web services will bring a competitive advantage to service providers.
3.2 Web Service Roles
Service provider (or) Publisher
Service Requestor (or) Consumer
Service Registry
3.2.1 Service Provider
This is the provider of the web service. The service provider implements the service and makes it
available on the internet or intranet.
3.2.2 Service Requester
The requestor utilizes an existing web service by opening a network connection and sending an XML request.
3.2.3 Service Registry
This is a logically centralized directory of services. The registry provides a central place where developers can publish a new service or find existing ones. It therefore serves as a centralized clearing house for companies and their services.
4. Pseudo Code
// Availability over a 30-day window
long Total_sec = 30*24*3600, Total_upsec, Total_downsec = 0;
float downtime_percent, uptime_percent;
int days, hours, mins, secs;   // observed downtime, read as input
if (days >= 0 && days < 30)
{
    if (hours >= 0 && hours < 24)
    {
        if (mins >= 0 && mins < 60)
        {
            if (secs >= 0 && secs < 60)
            {
                // total downtime in seconds
                Total_downsec = (days*24*3600) + (hours*3600) + (mins*60) + secs;
            }
        }
    }
}
Total_upsec = Total_sec - Total_downsec;
downtime_percent = ((float)Total_downsec / Total_sec) * 100;
uptime_percent = 100 - downtime_percent;   // availability in percent
5. Conclusion and Future work
Tourism is a major revenue-making domain globally. Tourism is a kind of travel for leisure, spiritual trips, and family or job purposes, typically for a short span of time, and the biggest problem is the difficulty of discovering the available tourism-domain resources (places to visit, hotels, pilgrim centres, airways, etc.). Web services play a vital role in the software industry. In this work the Service Availability (SA) is calculated using the pseudo code, and the proposed system improves the Service Response Time (SRT) using the JMeter tool and the Service Availability (SA) using the pseudo code. As future work, by combining the Service Response Time (SRT) and the Service Availability (SA), the performance could be improved further.
REFERENCES
[1] Amira Sobeih, Geographic Information Systems in Egypt, International Institute for Sustainable Development (IISD), ISBN 1-895536-77-4, 2005.
[2] Daniel A. Menasce, QoS issues in web services, IEEE Internet Computing, 1089-7801/02, 2002.
[3] D. Cotroneo, M. Gargiulo, S. Russo, G. Ventre, Improving the availability of web services, 2002.
[4] Daniel A. Menasce, Response time analysis of composite web services, IEEE Internet Computing, Vol.8, No.1, pp.90-92, 2004.
[5] Hou Zhai-wei, Zhai Haixia and Gao Guo-hong, A study on web services performance optimization, ISBN 978-952-5726-11-4, pp.184-188.
[6] Marwa M. Abdel-Fadeel, Samar K. Saad, Souad Omran, Opportunities and challenges of using Geographical Information System (GIS) in sustainable tourism development: the case of Egypt, 2013.
[7] M. Dakshayini, H.S. Guruprasad, An optimal model for priority based service scheduling policy for cloud computing environment, International Journal of Computer Applications (0975-8887), 2011.
[8] Marzieh Karimi, Faramarz Safi Esfahani and Nasim Noorafza, Improving response time of web service composition based on QoS properties, Indian Journal of Science and Technology, Vol.8 (16), 55122, July 2015.
[9] Rahul Sharma, An efficient approach to calculate overall response time during load testing, International Journal of Advanced Research in Computer Science and Software Engineering, Vol.5, Issue 8, August 2015, ISSN: 2277-128X.
[10] R. Sethuraman, T. Sasiprabha, A. Sandhya, An effective QoS based web service composition algorithm for integration of travel & tourism resources, Procedia Computer Science 48 (2015), pp.541-547.
[11] Serge Haddad, Lynda Mokdad, Samir Youcef, Response time analysis for composite web services, Vol.44, pp.1041-1045.
[12] Steffen Bleul, Thomas Weise and Kurt Geihs, The Web Service Challenge - a review on semantic web service composition, Electronic Communications of the EASST, Vol.X (2008), ISSN 1863-2122.
[13] Verka Jovanovic, Angelina Njegus, The application of GIS and its components in tourism, Yugoslav Journal of Operations Research, Vol.18 (2008), No.2, pp.261-272.
Solar Powered Auto Irrigation System

V. R. Balaji1 and M. Sudha2
1Assistant Professor, EEE, Kumaraguru College of Technology, Coimbatore, India
2PG Scholar, EEE, Kumaraguru College of Technology, Coimbatore, India

Abstract - The auto irrigation system uses a soil moisture sensor to detect the moisture level and a 4x4 keypad to select among various crops. When the moisture content of the soil is reduced, the sensor sends the detected value to the microcontroller, and the water pump is then turned ON automatically according to the moisture level. The main aim of this paper is to reduce the human intervention required of farmers and to use solar energy for irrigation. The entire system is controlled by a PIC microcontroller.
Index Terms - Auto irrigation, moisture sensor, water pump, PIC microcontroller.

I. INTRODUCTION
A proper method has to be implemented for the irrigation system because of the lack of rain and the scarcity of water in the soil. The agricultural field always needs and depends on the water level of the soil, but continuous extraction of water from the soil reduces its moisture level; to avoid this problem, a planned irrigation system should be followed. Improper use of water also leads to the wastage of a significant amount of water. For this purpose, an automatic plant irrigation system is designed using a moisture sensor and solar energy.
The proposed system derives power from sunlight through photovoltaic cells. Hence, the system does not depend on grid electricity; in this proposed model the irrigation pump is powered by sunlight energy. The circuit comprises soil moisture sensors inserted in the soil to sense whether the soil is wet or dry.
A PIC microcontroller is used to control the whole system. When the moisture level of the soil is low, the sensor detects the soil condition and gives a signal to the relay unit connected to the switch of the motor, which turns the motor ON in the dry condition and OFF in the wet condition. The moisture level of the soil is sensed by the sensor inserted into the soil, which signals to the microcontroller whether the land needs water or not. The signal from the sensor is received through the output of the comparator and is processed according to the instructions of the program stored in the microcontroller. When the soil is dry the motor is ON, and in the wet condition the motor is OFF. This ON/OFF condition of the motor is displayed on a 16x2 LCD.
A.PV cell
A photovoltaic cell is a system that converts light energy into electricity; photovoltaic cells are otherwise known as solar cells. They are used in both simple and complicated applications: the simplest systems of photovoltaic cells are the small calculators and wrist watches in everyday usage, while more complicated systems provide electricity for pumping water, powering communications equipment, lighting homes and running appliances. The PV cells that take sunlight and convert it into electricity are arranged as a small grid. Solar electric panels, more commonly referred to as photovoltaic, or PV, panels, convert sunlight into electricity, which is used to run appliances and electrical devices or is stored in batteries to be used later. Solar thermal panels are used commercially to heat water.
Solar collectors are the heart of most active solar thermal energy systems. The collector absorbs the sun's light energy and converts it into heat energy. This thermal energy is used to heat water for commercial and residential purposes and conserves electric power. Solar building technologies are useful for buildings that use more power to run many applications. Solar thermal collectors are the main component of active solar systems and are designed to meet the specific temperature requirements and climate conditions of the different end uses. Flat-plate collectors, evacuated-tube collectors, concentrating collectors and transpired air collectors are some types of collectors in a solar system. The proposed system uses the solar energy to turn ON the water pump; here the irrigation is maintained through the soil moisture sensor and solar energy. There are many plants that require a minimum level of moisture; if the required level of water is not provided, the plant will die, resulting in low production [2]. The soil moisture sensor makes it possible to irrigate each crop according to the moisture level it needs, so the crops are irrigated properly.
II. SYSTEM DESIGN
This system consists of a solar panel, which is the main source of energy and is given to the charge controller for extracting regulated power from the solar panel at different irradiation levels and also for maintaining the correct charging voltage and current in order to charge the battery and increase its life. Water conservation in the farm land is controlled using the microcontroller with the soil moisture sensor.

Fig 2.1 Irrigation system diagram

The boost converter is used for DC-to-DC conversion to improve the output power of the solar panel: when the solar panel receives a smaller amount of light, the boost converter still provides a higher voltage compared with its input voltage. A boost converter is a switch-mode power supply containing a diode and a transistor with one energy storage element (an inductor) and a capacitor; filters are used to reduce the output voltage ripple.

Fig: 2.2 Basic circuit of boost converter

When the switch is closed, current flows in the clockwise direction through the inductor, which stores some energy by generating a magnetic field. When the switch is opened, the current is reduced as the impedance is higher, and the magnetic field previously produced collapses to maintain the current flow towards the load. For this, the polarity is reversed (meaning the left side of the inductor is now negative). As a result, the two sources are in series, causing a higher voltage that charges the capacitor through the diode D.
The automatic irrigation system consists of a solar panel, boost converter, inverter, motor supply, soil moisture sensor, LCD display, 4x4 keypad, microcontroller and regulator. The soil moisture sensor is inserted into the soil to detect the level of moisture, and it indicates different moisture levels for different crops. In this system, crops like paddy, wheat and sugarcane can be irrigated; for the selection of crops, a 4x4 keypad is used. The next important part of the system is the solar panel, from which the power is drawn. The solar panel converts sunlight into electricity, and this converted electricity is sent to the boost converter and to the battery.
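As background (this relation is standard for an ideal boost converter and is not stated in the paper), the steady-state output voltage follows from the input voltage and the switching duty cycle D as V_out = V_in / (1 - D); a small Python sketch:

def boost_output_voltage(v_in, duty_cycle):
    # ideal, lossless boost converter: V_out = V_in / (1 - D)
    if not 0 <= duty_cycle < 1:
        raise ValueError("duty cycle must lie in [0, 1)")
    return v_in / (1.0 - duty_cycle)

print(boost_output_voltage(12.0, 0.5))   # a 12 V panel at 50% duty gives 24.0 V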
The regulator is used to regulate the power from the converter. The microcontroller needs a 5 V power supply, so the IC 7805 is used in the system. The power supply is also connected to the single-phase AC motor; a 12 V relay is connected to the motor to switch it ON/OFF.
III. PROPOSED SYSTEM
The proposed system uses a solar power panel to energize the system and a soil moisture sensor to sense the water level for crops. Solar power is the only source of power for the overall system: the 12 V supply from the solar panel is given to the boost converter circuit. The boost converter circuit has resistances R1, R2 (1 kΩ, 330 Ω) that are used to control the voltage from the solar panel; a 1N4007 diode (D1) acts as a voltage-controlled device, with a 100 µH inductance connected in series. Through the MOSFET device a PWM pulse is generated to increase the voltage stored in the 1000 µF capacitance with respect to the T/2 cycle.
The constant voltage from the boost converter is stored in a 12 V battery, and a 500 W inverter is used to convert 12 V DC to 230 V AC for the AC pump. The IC 7805 positive regulator is used to regulate the 12 V DC down to 5 V DC with the help of 1000 µF and 100 µF capacitors and a 330 Ω current-limiting resistor. The 5 V from the regulator is used to operate the PIC microcontroller, which acts as the control circuit for the overall process. It is a 40-pin IC with each pin connected for a respective operation; the soil moisture sensor is dipped in the soil to sense the humidity value. The soil humidity values for different crops are selected by a 4x4 matrix keypad, and the crop selection and respective humidity values are programmed in the PIC16F877A microcontroller. A signal from the microcontroller operates the 12 V relay to switch the motor pump on/off, and the water flow from the pump depends on the signal from the PIC microcontroller.
The system is controlled by the PIC microcontroller. When the soil moisture sensor senses a low soil moisture level, a signal is sent to the microcontroller, which then checks the conditions given in the program. The program stored in the microcontroller is different for different crops, since the humidity level needed to grow a crop varies from one crop to another; water is supplied according to the growth of the crop. The irrigation is automated with the soil moisture sensor and the relay unit. When the soil moisture level is low, a signal is sent to the relay to switch ON the motor, and when the soil is wet the motor is in the OFF condition; the relay gives the ON/OFF command to the motor. A sketch of this control logic follows.
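A minimal Python sketch of the control logic described above; the crop thresholds and the hardware interfaces read_moisture() and set_pump() are illustrative assumptions, since the paper implements the logic as PIC16F877A firmware:

import time

CROP_THRESHOLDS = {"paddy": 70, "wheat": 50, "sugarcane": 60}   # assumed values (percent)

def irrigation_loop(read_moisture, set_pump, crop):
    threshold = CROP_THRESHOLDS[crop]     # crop selected via the 4x4 keypad
    while True:
        moisture = read_moisture()        # reading from the soil moisture sensor
        set_pump(moisture < threshold)    # relay turns the motor ON when the soil is dry
        time.sleep(5)                     # polling interval (assumed)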
The entire system is powered by solar panel energy; when the system uses solar energy, electrical energy is conserved. The PIC microcontroller needs a 5 V supply and the motor needs a 230 V supply. The regulator is connected to the PIC microcontroller to regulate the power supply from the solar panel.
IV. HARDWARE AND SIMULATION RESULTS
This section presents the hardware and simulation of the system described above. The simulation consists of the PIC microcontroller connected with the LCD display, relay, 4x4 keypad, transistor and the power supply from the solar panel.

Fig 4.1 Simulation diagram

Fig 4.2 Output waveform with moisture level

A. Boost converter
This charge controller is suitable for charging flooded lead acid, gel cell or sealed lead acid (SLA) and absorbed glass mat type batteries. The boost converter charge controller keeps the solar panel current and voltage at the regulated power point while charging the battery, and helps to maintain a constant output from the solar panel to the battery.
B. Regulator
The regulator IC 7805 is used to convert the 12 V supply from the battery to the 5 V supply for the PIC16F877A microcontroller and the hygrometer soil moisture sensor.

Fig 4.3 Basic circuit of IC7805

C. 4X4 Keypad

Fig .4.4 4X4 Keypad

The MCP23X08 devices have several features that make them ideal for controlling a 4x4 matrix keypad. These features can be broken down into two main groups:
• the port's input and output characteristics;
• the interrupt-on-change feature, which is an important aspect of the key scan method used.
D. Hardware setup

Fig 4.5 Hardware with moisture sensor

Fig 4.6 Microcontroller and LCD

Fig 4.7 Relay unit

V. CONCLUSION
The proposed system is beneficial to the farmers when implemented, and it is also useful to the government, since solar panel energy offers a solution to the energy crisis problem. The sensor indicates when the soil needs water, and on this basis the automatic irrigation system operates. Various crops can also be irrigated with this system by pressing the corresponding button; according to the button pressed, the irrigation system detects the moisture level required by the crop. For example, the soil moisture content for wheat, paddy and sugarcane crops is detected and the crops are irrigated automatically. The automatic irrigation system optimizes the usage of water by reducing wastage and reduces human work. The energy needed for the water pump and the controlling system is given by the solar panel. Solar panels form a small grid that can produce excess energy, and using solar energy reduces the energy crisis problem.
The system requires minimal maintenance and attention because it is self-starting. To further enhance the daily pumping rates, tracking arrays can be implemented. This system demonstrates the
feasibility and application of using solar PV to provide energy for the pumping requirements of sprinkler irrigation. Even though this system requires more investment, it solves many irrigation problems in the long run.
VI. FUTURE WORK
In future, the automated irrigation system using linear programming can be made a real-time feedback control system. This control system monitors and controls all the activities of the drip irrigation system efficiently, and efficient water management gives more profit at less cost. Using this system, manpower and water can be saved, and productivity, and ultimately profit, can be improved. In future, with some modification, this system can also supply agricultural chemicals like sodium, ammonium, zinc and calcium to the field along with fertilizers by adding new sensors and valves.
REFERENCES
[1] Chaitali R. Fule and Pranjali K. Awachat, Design and implementation of real time irrigation system using a wireless sensor network, International Journal of Advance Research in Computer Science and Management Studies, Vol.2, Issue 1, January 2014.
[2] M. Lincy Luciana, B. Ramya and A. Srimathi, Automatic drip irrigation unit using PIC controller, International Journal of Latest Trends in Engineering and Technology, Vol.2, Issue 3, May 2013.
[3] H.T. Ingale and N.N. Kasat, Automated irrigation system, International Journal of Engineering Research and Development, Vol.4, Issue 11, November 2012.
[4] K. Prathyusha, M. Chaitanya Suman, Design of embedded systems for the automation of drip irrigation, International Journal of Application or Innovation in Engineering & Management, Vol.1, Issue 2, October 2012.
[5] Cuihong Liu, Wentao Ren, Benhua Zhang, Changyi Lv, The application of soil temperature measurement by LM35 temperature sensors, International Conference on Electronic and Mechanical Engineering and Information Technology (EMEIT), 2011.
[6] Andrew J. Skinner and Martin F. Lambert, An automatic soil pore-water salinity sensor based on a wetting-front detector, IEEE Sensors Journal, Vol.11, No.1, January 2011.
[7] Haley M. and M. D. Dukes, Evaluation of sensor-based residential irrigation water application, ASABE Annual International Meeting, Minneapolis, Minnesota, 2007.

AUTHOR PROFILE
V. R. Balaji obtained his B.E. degree from Sudharsan Engineering College, Anna University. He completed his M.E. at the Government College of Technology, Coimbatore, with a specialisation in Power Electronics and Drives. His area of interest is power quality management in the utility grid. Currently he is working as an Assistant Professor in the Department of EEE at KCT, Coimbatore.
M. Sudha obtained her B.E. degree from Vivekanandha Institute of Engineering and Technology for Women. She is pursuing her M.E. at Kumaraguru College of Technology, Coimbatore, with a specialisation in Embedded System Technologies.
Securing Internet Banking With A Two-Shares Visual Cryptography Secret Image

Aparnaa K.S., Santhi P.
Department of Computer Science and Engineering,
Paavai Engineering College, Anna University, Chennai, Tamil Nadu, India
ks_aparnaa@yahoo.in, santhipalanisamypec@paavai.edu.in
Abstract - Phishing is an illegal attempt to steal the sensitive personal information of any individual or organization without their consent. Such information leakage in internet banking is a huge threat to the enormously increasing number of online banking users. A new approach for identifying phishing websites and securing banking customers' personal data from phishing attackers is introduced. An original personal security image captcha is chosen individually by the client using self-preferred text, from which an image is created with the text overlapping a dummy background. The image is then split into two shares that are stored in separate databases: a server-level database and a client-level (intermediary) database that asks questions about the security text provided by the client during the security image creation. Since the security image is both a Captcha and a graphical password, we call it Captcha as graphical password (CaRP) and use it as a security key for users. The original image is recovered only if the client share CaRP1 is merged with the server share CaRP2, i.e. CaRP1 + CaRP2.

I. INTRODUCTION
A phishing attack is the attempt by an individual or a group to steal personal confidential information through fake or look-a-like websites of an existing authorized website. It is a form of online identity theft that aims to steal sensitive personal information such as online banking passwords and credit card information from users. There has been extensive press coverage of phishing attacks because such attacks have been escalating in number and sophistication along with the increase in online customers. To provide improved security against the leaking of confidential information, we need to switch over to an even more reliable protection scheme to ensure the safe networking of transactions. Bank customers are the favorite targets of those who carry out phishing attacks.
At present many bank customers use online transactions frequently, so the customer has a username and password to access the bank account. These are sensitive and confidential pieces of information; when they fall into the hands of phishing attackers, the information can be used to access the bank account and cause a huge loss to the customer. Unfortunately many people fall for these scams, and the victims lose their confidential information to the wrong hands.
1.1. OVERVIEW
Now-a-days, while online banking has been increasing rapidly, the phishing scams are, sadly, increasing at the same pace. The two most used methods of attack are:
(i) Email phishing
(ii) Website phishing
Email phishing involves sending a fake mail to the victim, posing as an established organization and requesting them to provide confidential information. This can be avoided by just being aware of one fact: no legitimate bank will include a form within an email that they send you. Website phishing is the process of creating a look-a-like fake website of an established organization and stealing the data from the user. This can be avoided by verifying whether the website you are at is a secured website or not. But verifying every time is not always possible even for an expert customer, and this gave rise to the development of reliable techniques to overcome website phishing.
Unlike email phishing, website phishing can claim a huge number of victims because it is hard for novice users to identify whether a site is the secured one. Even experts can become victims because of the increase in online purchases, whose payments are mostly done through pop-up windows.
1.2. EXISTING SYSTEM
The existing system makes use of a security image personally chosen by the client during the process of signing up. Then, on every log in, the user is requested to enter the username or user ID, and the next step takes the user to a new webpage to get the password, preventing phishing by displaying their security image, which is loaded from the server. This gives the user the surety that it is not a fake website but an authorized site. This system uses a two-step logging-in process, which forces any phishing attacker to perform two attacks to get the actual information of both username and password from the client.
1.2.1. Demerits of Existing System
a) Limited Security Images
The existing system has a limited set of security images from which the user has to choose one. The probability of guessing or finding the security image is high with this limited pool of images.
b) Two attacks to get all the Information
The phishing attacker first attacks the user to get the username, and simultaneously masquerades as the user to the server and steals the security image by making an incomplete transaction while displaying a temporary error to the user. The user can be asked again to type the same username, and this time the stolen security image can be displayed, so the victim falsely believes it is an authorized site. The attacker can then receive the password from the user. This stolen username and password may even be used by the attacker to perform unapproved transactions.
c) Single step heuristic based technique
Heuristic is the precise word for self-discovery, or a common level of sensing problems with some simple basic logic. The heuristic-based anti-phishing technique used here makes the user verify whether the website where they enter the password is a secured website or a phishing site by checking whether the security image is genuine. But it does not verify from the server end whether it is the actual client or an attacker who has stolen information from the client through a phishing website and impersonates the client. So, just a security image is not secure enough to stop this third-party eavesdropping.
1.3. PROPOSED SYSTEM
The system applies visual cryptography to the security image, which is considerably better in reliability. In order to achieve comparatively higher reliability, we prepare a security image for each client using their own choice of word. That image is then split up and used while the user ID and password are submitted, each entered on a separate web page, to perform the image verification. For novice users who cannot identify a secured HTTP connection (https://), this image display ensures that they know the website is not fake and that they are free from phishing attacks. The user is first asked to provide the username; after displaying the first share of the security image stored in the intermediary database, the user is asked to answer one question chosen randomly from the pool of questions about the security image text the user provided during signing up; after validating the answer against the security text values in the server database, the server discloses the complete security image by superimposing the first share received and the second share that it holds; looking at the complete image, the user is satisfied, and the server is also satisfied that no one is eavesdropping or impersonating. Finally, the user enters the password.
1.3.1. Merits of Proposed System
a) No limit on the possible Security Images
The proposed system creates security images from a text chosen by the user. The text is embedded in a black and white image with lower contrast and higher brightness such that the text is visible to the human eye. This technique removes the limited-security-image restriction of the existing system.
b) A fixed number of attacks cannot steal the information
The total number of attacks needed to completely impersonate someone is made indeterminate, because the security text details are known only to the client. Even if the attacker tries to eavesdrop, he cannot answer all the questions that are asked randomly at every log in.
In addition, even if the attacker steals the security image, the image contains both the user-provided text and the server-generated key, so the attacker cannot identify the user text needed to answer the questions.
c) Multi-step heuristic based technique
The heuristic of verifying that the website is not a phishing site is strengthened by the visual cryptography technique applied to the security image, which encodes the user text within an image through steganography.
2. PROBLEM DESCRIPTION
2.1. PHISHING ATTACKS
One definition of phishing is that it is a criminal activity using social engineering techniques to steal others' personal information without their knowledge. Phishing attackers attempt to fraudulently acquire sensitive information, such as security codes, passwords and credit card details, by masquerading as a trustworthy person, business or service provider in an electronic communication network. Another, more comprehensive definition of phishing states that it is the act of sending an email to a user falsely claiming to be an established legitimate enterprise in an attempt to scam the user into surrendering private information that will be used for identity theft, or of making a look-a-like webpage of an established legitimate institution and sending its link to the user.
The conduct of identity theft with this acquired sensitive information has also become a lot easier with the use of modern technology; identity theft can be described as a crime in which the impostor obtains key pieces of information, such as Social Security numbers and driver's license numbers, and uses them for his or her own gain. Online transactions are becoming so common these days that various attacks are made on them. Among those attacks, phishing is identified as a challenging security threat that allows different forms of information leakage, where attackers develop new innovative ideas that increase the victim count every second. So, the preventive mechanism should also be equally effective.

Fig.2.1. Activity comparison of Actual website with Phishing website

2.2. TECHNIQUES TO OVERCOME PHISHING ATTACKS


The proposed system introduces a new technique for detecting whether a site is the original one or a duplicate. In this system an image is formed from the user credentials and encrypted pixel by pixel into shares of a white pixel and a black pixel. Note that the choice of shares for a white and a black pixel is randomly determined (there are two choices available for each pixel). Neither share provides any clue about the original pixel, since different pixels in the secret image are encrypted using independent random choices. When the two shares are superimposed, the value of the original pixel P can be determined: if P is a black pixel, we get two black sub pixels; if it is a white pixel, we get one black sub pixel and one white sub pixel.
The concepts of image processing and an improved visual cryptography are used. Image processing is a technique of processing an input image to obtain as output either a processed form of the same image and/or modified characteristics of the input image under some criteria. Visual Cryptography (VC) is a method of encrypting a secret image into multiple shares, such that stacking a sufficient number of shares reveals the actual secret image.
3. DEVELOPING METHODOLOGY
We secure internet banking from phishing attackers using two major techniques: one is steganography and the other is the Visual Cryptography Sharing (VCS) scheme. Steganography is the art of writing hidden messages such that no one apart from the actual receiver knows of the existence of the message.
3.1. STEGANOGRAPHY
We use the steganography technique of Bit-Plane Complexity Segmentation (BPCS), which embeds data in the complex, noise-like regions of an image's bit planes. This technique was first proposed at the Kyushu Institute of Technology and allows a large amount of data to be embedded in images. We embed the security text provided by the user into an image with a background of a random black-and-white pixel combination.

Fig.3.1 Steganography with Bit-Plane Complexity Segmentation on Text

3.2. VISUAL CRYPTOGRAPHY SHARING SCHEME

The Visual Cryptography Sharing (VCS) scheme is a method of encrypting a visual form of secret and sensitive data into multiple shares, which on overlapping reveal the actual visual data. Based on the number of shares to be created, the following are some important types of threshold schemes:
• (2, 2) threshold scheme - the basic threshold scheme; it takes a secret message as input and encrypts it into two different shares that reveal the secret image when they are overlaid.
• (2, n) threshold VCS scheme - encrypts the secret image into n shares in such a way that when at least two of the shares are superimposed, the secret image is revealed. The user is prompted for n, the number of participants.
• (n, n) threshold VCS scheme - encrypts the secret image into n shares in such a way that only when all of the n shares are superimposed is the secret image revealed.
• (k, n) threshold VCS scheme - encrypts the secret image into n shares in such a way that when any group of at least k shares is superimposed, the secret image is revealed.
3.3. (2,2) THRESHOLD VCS SCHEME
In the (2, 2) threshold VCS scheme, each pixel P in the original image is encrypted into two sub pixels (pairs) called shares. The choice of the shares for a white or black pixel in the image is randomly determined (two choices are available for each pixel). Neither share provides any clue about the original pixel, because all pixels in the secret image are encrypted independently using random choices.

Table.3.1. Pixel sharing technique in proposed system using Visual Cryptography

If the two shares are superimposed (overlapped), the value of the original pixel P can be determined: if P is a black pixel, we get two black sub pixels; if it is a white pixel, we get one black sub pixel and one white sub pixel.
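A minimal sketch of this (2, 2) construction (class and method names are ours; each secret pixel expands to a 1×2 sub pixel pair, following the pattern of Table 3.1):

import java.awt.image.BufferedImage;
import java.util.Random;

// Minimal (2,2) visual cryptography sketch: every secret pixel becomes a
// 1x2 pair of sub pixels in each share. For a white secret pixel both
// shares get the same randomly chosen pair; for a black pixel they get
// complementary pairs, so stacking yields two black sub pixels.
public class VcsDemo {
    static final int WHITE = 0xFFFFFFFF, BLACK = 0xFF000000;

    static BufferedImage[] makeShares(BufferedImage secret, Random rnd) {
        int w = secret.getWidth(), h = secret.getHeight();
        BufferedImage s1 = new BufferedImage(2 * w, h, BufferedImage.TYPE_INT_RGB);
        BufferedImage s2 = new BufferedImage(2 * w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                boolean black = (secret.getRGB(x, y) & 0xFFFFFF) == 0;
                boolean leftBlack = rnd.nextBoolean();       // random pattern choice
                s1.setRGB(2 * x, y, leftBlack ? BLACK : WHITE);
                s1.setRGB(2 * x + 1, y, leftBlack ? WHITE : BLACK);
                // black secret pixel -> complementary pattern, white -> identical
                boolean l2 = black ? !leftBlack : leftBlack;
                s2.setRGB(2 * x, y, l2 ? BLACK : WHITE);
                s2.setRGB(2 * x + 1, y, l2 ? WHITE : BLACK);
            }
        return new BufferedImage[] { s1, s2 };
    }

    // Stacking transparencies is a pixelwise OR of black: black wins.
    static BufferedImage stack(BufferedImage a, BufferedImage b) {
        BufferedImage out = new BufferedImage(a.getWidth(), a.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < a.getHeight(); y++)
            for (int x = 0; x < a.getWidth(); x++) {
                boolean black = (a.getRGB(x, y) & 0xFFFFFF) == 0
                             || (b.getRGB(x, y) & 0xFFFFFF) == 0;
                out.setRGB(x, y, black ? BLACK : WHITE);
            }
        return out;
    }
}

Stacking is modeled as a pixelwise OR of black, matching the physical overlay of transparencies: a black secret pixel produces complementary pairs in the two shares and therefore two black sub pixels when stacked, while a white pixel produces identical pairs and therefore one black and one white sub pixel.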
4. USER INTERFACE REQUIREMENTS
The modules to be implemented in the interface are as follows:
A. Registration with Secret Code
B. Image Captcha Generation
C. Shares Creation (VCS)
D. Login Phase
4.1 REGISTRATION WITH SECRET CODE TEXT
In the registration phase, user details such as username (user ID), password, email ID, address, and a security text are requested from the user in order to secure the website and the user against phishing attackers. The key string can be a combination of alphabets and numbers to provide a more secure environment. This string is concatenated with a randomly generated unique key string in the server.

Fig.4.1. Security Text concatenated with Key String from Server

4.2 IMAGE CAPTCHA GENERATION


In this phase the key string is converted into an image using the Java classes BufferedImage and Graphics2D. The image dimension is 260×60 pixels; the text colour is black and the background colour is white, and the text font is set by the Font class in Java. After generation, the image is written into the user's key folder in the server database using the Image class.


Fig.4.2. Security Image created with the Text (Security text + Key String)
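A minimal sketch of this step using the BufferedImage and Graphics2D classes named above (the font settings, the output path, and the use of javax.imageio.ImageIO for writing the file are our assumptions):

import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Renders the concatenated key string into a 260x60 black-on-white image
// and writes it to the user's key folder. Folder and file names are
// illustrative placeholders.
public class CaptchaImageDemo {
    static BufferedImage render(String keyString) {
        BufferedImage img = new BufferedImage(260, 60, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);                         // white background
        g.fillRect(0, 0, 260, 60);
        g.setColor(Color.BLACK);                         // black text
        g.setFont(new Font("SansSerif", Font.BOLD, 22)); // font set via Font class
        g.drawString(keyString, 10, 38);
        g.dispose();
        return img;
    }

    public static void main(String[] args) throws IOException {
        // security text entered by the user + random key string from the server
        BufferedImage captcha = render("user1secret" + "X7Q9");
        ImageIO.write(captcha, "png", new File("userkeys/user1/captcha.png"));
    }
}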

4.3 SHARES CREATION (VCS)


The image captcha is divided into two shares such that one share is kept in the database created for the user by the server (the intermediary database) and the other share is kept in the server database. The user's share and the original image captcha are displayed to the user for later verification during the login phase. The image captcha is also stored in the actual database of the confidential website as sensitive, confidential data.

Fig.4.3. Creating two shares for the security image using Visual Cryptography

4.4 LOGIN PHASE


When the user logs in to use his account, he is first asked to enter only his username (user ID). The user then gets a webpage where his share is displayed from the intermediary server, and a question randomly chosen from a list of questions about his secret key string is asked, for example: "How many alphabets did you use in your secret key text?". If the user answers the secret question correctly, this share is sent to the server, where the user's share and the share stored in the website's database for that user are superimposed to produce the image captcha, which is displayed on the next webpage.
Here the end user can check whether the displayed image captcha matches the captcha created at the time of registration. The end user is then required to enter the password, using which he can log in to the website. Using the username and the image captcha generated by stacking the two shares, one can verify whether the website is a genuine/secure website or a phishing website, and whether it is secure enough for entering confidential data.

Fig.4.4. Steps involved in Log In process of the proposed system.
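A hypothetical sketch of the server-side check in this flow, reusing the stack method from the VCS sketch in Section 3.3; in the paper the final comparison of the reconstructed captcha is made visually by the user, so here the reconstruction is compared against a reference reconstruction assumed to be stored at registration time:

import java.awt.image.BufferedImage;

// Hypothetical server-side step: superimpose the user's share and the
// server's share, then compare the reconstruction with a reference
// reconstruction stored at registration.
public class LoginVerifier {
    static boolean captchaMatches(BufferedImage userShare, BufferedImage serverShare,
                                  BufferedImage reference) {
        BufferedImage stacked = VcsDemo.stack(userShare, serverShare); // Sec. 3.3 sketch
        if (stacked.getWidth() != reference.getWidth()
                || stacked.getHeight() != reference.getHeight()) return false;
        for (int y = 0; y < stacked.getHeight(); y++)
            for (int x = 0; x < stacked.getWidth(); x++)
                if (stacked.getRGB(x, y) != reference.getRGB(x, y)) return false;
        return true;   // the two shares recombine to the registered captcha
    }
}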


5. PERFORMANCE AND SIGNIFICANCE
The concept of steganography and the (2,2) threshold visual cryptography sharing scheme is used, and the intermediary database holds questions regarding the actual share value given as input by the user during sign-up. For phishing detection and prevention, we have used a new methodology to detect phishing websites; it prevents the leakage of passwords and other confidential information to phishing attackers through fake look-alike websites.
Since this methodology is based on the anti-phishing image captcha validation scheme using visual cryptography and steganography, its reliability is at an improved level. The merits and limitations of this methodology are listed below.
Phishing web pages are forged web pages created by malicious people to mimic the pages of real websites with the highest visual similarity in order to scam their victims. Victims of phishing web pages may expose sensitive information such as bank account details, passwords, credit card numbers, or other important information to the phishing web page owners.
5.1 MERITS OF THE PROPOSED METHODOLOGY
• It protects passwords and other confidential information from phishing websites with a 3-step login, unlike the single- and double-step logins already in practice.
• The second step, viewing share 1 and answering one question randomly chosen from a pool of questions about the secret image text provided by the user at sign-up time, increases reliability and security against masquerading attacks by phishing attackers.
• The third step of logging in, which displays to the user a superimposed image of share 1 and share 2 of the secret image, assures the user that the site is not a phishing website, so that the user can then provide his/her sensitive data.
• The method provides improved security by not letting an intruder log in to an account even when the intruder knows the username of a particular user, because the intruder is not aware of the secret image or of the whereabouts of the text.
• The proposed methodology is mainly useful in the growing fields of financial web portals, banking portals, and online shopping markets, where attacks by phishing websites would create big chaos; it makes them secure.
6. CONCLUSION
Phishing attacks are currently very common. Phishing is a global threat in which attackers capture and store users' confidential information, which they can then use to access the accounts of victims who are unaware that their information has been stolen. Phishing websites can be easily identified using our proposed "Securing Internet Banking using 2-Shares Visual Cryptography Secret Image" scheme.
The proposed methodology preserves the confidential information of users using a 3-step security system with the (2,2) threshold VCS scheme, which involves two shares of the security image. The final layer verifies whether the website is a genuine website or a phishing website through the display of the original secret image. A phishing website cannot display the security image captcha for a particular user (who wants to log in to the website), because the security image captcha is generated by stacking two shares, one held in the intermediary database and the other in the actual server database of the website. Here the server cross-validates the image captcha corresponding to the user by superimposing both shares of the image.
The individual user can verify that it is their captcha, and if it is correct he/she can be sure that the site is not a phishing website. So, using the image captcha technique with steganography, no machine-based user can crack the password or other confidential information of the users. As a third layer of security, it prevents intruder attacks on the user's account: using steganography with the visual cryptography technique, together with the answering of questions about the secret image, intruders cannot succeed with any stolen information. This method provides additional security by not letting an intruder log in to an account even if the intruder is aware of the account details of a particular user. The proposed methodology is significant in preventing the attacks of phishing websites in the fields of financial web portals, banking portals, and online shopping markets.


Robot Vehicle Controlled by Gestures


N.Prakash#1 and P.K.Dheesshma*2
#1 Assistant Professor, Department of EEE, Kumaraguru College of Technology, Coimbatore, Tamil Nadu
*2 PG Scholar, Department of EEE, Kumaraguru College of Technology, Coimbatore, Tamil Nadu

Abstract Gesture recognition enables humans to communicate with machines and interact naturally without any mechanical equipment. A lot of research has already been done in the field of gesture recognition using different mechanisms and algorithms, the majority of it using image processing techniques and methodologies. This work aims at designing a cost-effective, low-power device to control the locomotion of a robot using hand gestures, advancing the concept of the unmanned vehicle. The recognition rate of postures has considerable scope for improvement at the cost of system response time. The theme of the study is to design a robot vehicle which can be controlled by gestures.
Index Terms gesture recognition, image processing techniques, unmanned vehicle, robot vehicle

I. INTRODUCTION
Robots are spreading through various areas, and people are trying to control them more accurately and easily. Controlling a robotic gadget becomes quite complicated when it has to be done with a remote or many different switches, for example in military applications, industrial robotics, construction vehicles on the civil side, and medical applications for surgery. In these fields the operator may get confused by the switches and buttons themselves, so a new concept is introduced: to control the robot vehicle by the movement of the hand, which simultaneously controls the movement of the robot vehicle. Over the past few decades people have been finding easier ways to communicate with robots in order to enhance their contribution to our daily life. Humans and robots are combined in order to overcome new challenges. From the very early stages, one of the main objectives has been to control the robot smoothly and make humans feel comfortable. So rather than using the older method of controlling a robot by means of a remote or keyboard, it is better to control a robot with the help of hand gestures, because the hand gesture is a very natural way of communication for humans.
Hand gesture technology is being used in many fields nowadays and is becoming very popular in the robotics industry. Gesture recognition enables humans to communicate with machines (HMI) and interact naturally without any mechanical devices. The gestures of different organs of the body are used to control wheelchairs, and different intelligent mechanisms have been developed to control the intermediate mechanisms [4]. By including the concept of gesture recognition, it is possible to point a finger at a computer screen so that the cursor moves accordingly [1].

Fig 1 Accelerometer controlled robot vehicle
There are many varieties of hand gesture technology available nowadays, but one of the most popular forms is accelerometer-based hand gesture technology. Accelerometers have also found their way into
digital and handwritten character recognition based on gesture classification. Classifying 3-axis accelerometer signals finds applications requiring high precision and uses different methods. The accelerometer can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration due to motion, shock or vibration [2]. An accelerometer is able to capture human hand gestures; by sensing the gesture, the vehicle acts accordingly. There are many challenges associated with the accuracy and usefulness of gesture recognition software. Image-based gesture recognition faces limitations of the equipment used and image noise: images or video may not be consistent or occur in the same location, and items in the background or distinct features of the users may make recognition more complicated. The variety of implementations for image-based gesture recognition may also cause issues for the viability of the technology for general usage. When human gestures are captured by visual sensors, robust computer vision methods are also required to capture the gesture, for example hand tracking and hand posture recognition [3], or capturing movements of the head, facial expressions or gaze direction. Our main objective is to minimize the delay and make the response time faster, and at the same time to build a prototype that is less expensive than the traditional gesture-controlled robotic car.
This paper is organized as follows. Section II describes the detail concept of the Proposed System.
Section III presents the hardware setup of the proposed system. Section IV discusses the results related to
the proposed design.
II. PROPOSED SYSTEM
This paper describes the concept of a robot vehicle which can be helpful in civil and military work. The existing system is based on wired communication, a drawback that is overcome here by means of wireless technology. The robot vehicle takes its direction from the movement of the user's hand. The gesture sensor, affixed with the internal flex sensor, is placed on the transmitter side; it captures the user's movement and passes it to the microcontroller through an interfacing unit, which then transfers the information to the other side over a wireless communication protocol (ZigBee). A hypothetical sketch of this transmitter logic follows Fig 2.

Fig 2 Block diagram at the transmitter side
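A hypothetical Java sketch of the transmitter-side logic just described: a tilt/flex reading is quantized into a direction command which would then be sent as one byte over the ZigBee serial link (the sensor scaling, dead zone, and class names are our assumptions; on the actual ATmega328 this logic would be written in C against the ADC and UART):

// Hypothetical transmitter-side classification of a hand gesture into a
// drive command. Tilt values are assumed scaled to -100..100 per axis.
public class GestureTransmitter {
    enum Command { FORWARD, BACKWARD, LEFT, RIGHT, STOP }

    static Command classify(int tiltX, int tiltY) {
        final int DEAD_ZONE = 20;                 // ignore small hand tremor
        if (Math.abs(tiltX) < DEAD_ZONE && Math.abs(tiltY) < DEAD_ZONE)
            return Command.STOP;
        if (Math.abs(tiltY) >= Math.abs(tiltX))   // dominant axis decides
            return tiltY > 0 ? Command.FORWARD : Command.BACKWARD;
        return tiltX > 0 ? Command.RIGHT : Command.LEFT;
    }

    public static void main(String[] args) {
        System.out.println(classify(5, 60));   // FORWARD
        System.out.println(classify(-70, 10)); // LEFT
    }
}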

A wireless camera is affixed to the robot vehicle; it captures the movement of the vehicle, which is displayed on the computer at the transmitter side. Communication between the vehicle and the computer happens by means of radio frequency. At the receiving side, the robot performs all the actions made by the user, which finds application in civil and military domains.

Fig 3 Block diagram at the receiver side

The data transmitted at the transmitter side is received by the ZigBee module at the other end. The action performed by the user is sensed by the microcontroller on the receiver side and passed to the relay and the driver circuit, after which the robot performs the same action as the human. The action can be captured by the wireless camera and viewed on the TV at the transmitter side.
A. GESTURE SENSOR:
Gesture sensors, also called bend sensors, measure the amount of deflection caused by bending the sensor. There are various ways of sensing deflection, from strain gauges [4] to Hall-effect sensors. The three most common types of flexion sensors are conductive ink-based, fibre-optic, and conductive fabric/thread/polymer-based. Folding the sensor at one point to a prescribed angle is not the most effective use of the sensor, and bending the sensor at one point by more than 90° may permanently damage it. Instead, the sensor should be folded around a radius of curvature: the smaller the radius of curvature and the greater the length of the sensor involved in the deflection, the greater the resulting resistance.
B. EMBEDDED SYSTEM:
The ATmega328 is a single-chip microcontroller from Atmel belonging to the megaAVR series. This 8-bit AVR RISC-based microcontroller combines 32 KB of ISP flash memory with read-while-write capability, 1 KB of EEPROM, 2 KB of SRAM, 23 general-purpose I/O lines, and 32 general-purpose working registers, along with three flexible timer/counters with compare modes, internal as well as external interrupts, a serial programmable universal asynchronous receiver/transmitter (USART), a byte-oriented 2-wire serial interface, an SPI serial port, a 10-bit A/D converter, a programmable watchdog timer with an internal oscillator, and five software-selectable power-saving modes. We prefer it in this study mainly because it is cost effective and its communication can be done bidirectionally.
C. Wireless Transmission Protocol (Zigbee):
ZigBee is a standards-based wireless technology designed to address the unique needs of low-cost, low-power wireless sensor and control networks. ZigBee can be used almost anywhere, is easy to implement, and requires only a feasible amount of power to operate. In this paper communication happens by means of this wireless protocol; the existing system used only wired communication, which has many limitations that are overcome by wireless communication. ZigBee uses 128-bit keys to implement its security mechanisms. A key can be associated either with a network, being usable by both the ZigBee layers and the MAC sublayer, or with a link, in which case it can be acquired through pre-installation, agreement, or transport. Link keys are established based on a master key which controls link-key correspondence. Ultimately, at least the initial master key must be acquired through a secured

medium (transport or pre-installation), as the security of the whole network depends on it. Link and master keys are visible only to the application layer. Different services use different variations of the link key in order to avoid leaks and security risks.
D. DRIVER UNIT (L293D):
The driver circuit is a 16-pin DIP-package motor driver IC (IC6) having four input pins and four output pins. All four input pins are joined to the output pins of the decoder IC (IC5), and the four output pins are connected to the DC motors of the robot. Enable pins are used to enable the input/output pins on both sides of IC6. Motor driver circuits act as current amplifiers: they take a low-current control signal and provide a higher-current signal, which is used to drive the motors. Enable pins 1 and 9 (corresponding to the two motors) must be high for the motors to operate. When an input is high, the associated driver circuit is enabled; as a result, the outputs become active and work in phase with their inputs. Similarly, when the enable input is low, that driver is disabled, and its outputs are off and in the high-impedance state.
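A toy model of the enable/input behaviour described above (the class and method names are ours; null stands in for the high-impedance state):

// Toy model of one L293D driver channel: when the enable pin is high the
// output follows its input; when enable is low the output floats (high-Z).
public class L293dModel {
    static Boolean output(boolean enable, boolean input) {
        return enable ? Boolean.valueOf(input) : null;   // null = high impedance
    }

    public static void main(String[] args) {
        System.out.println(output(true, true));   // true  -> motor terminal high
        System.out.println(output(true, false));  // false -> motor terminal low
        System.out.println(output(false, true));  // null  -> high impedance
    }
}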
E. WIRELESS CAMERA:
Wireless technology is now applied to just about everything these days, and video surveillance benefits from it as well. A wireless camera includes a transmitter that sends video over the air (radio frequency) to a receiver instead of through a wire. Most wireless cameras are technically cordless devices, meaning that though they transmit a radio signal, they still need to be connected to a power source (battery); still, "wireless" is the commonly used industry term. Some cameras do have internal batteries, making them purely wireless, but battery lifetime is still a problem for professional or even semi-professional applications. These devices work on a simple principle: the camera contains a radio-frequency (RF) transmitter that transmits the camera's video, which is picked up by a receiver connected to a monitor or recording device. Some receivers have internal storage, while others must be connected to external storage devices.
III. HARDWARE SETUP
Controlling robotic widgets becomes quite hard and complicated when it has to be done with a remote or many distinct switches, mostly in military applications, industrial robotics, construction vehicles on the civil side, and medical applications for surgery. In these fields it is quite complicated to control the robot with a remote or switches, and the operator may get confused by the switches and buttons themselves, so a new concept is introduced: controlling the robot vehicle with the movement of the hand, which simultaneously controls the movement of the robot [5]. The prototype model below illustrates the robot vehicle.

Fig 4: Robot vehicle model

Based on the movement of the hand, the robot moves in the desired direction.

Fig 5:Transmitter side


On the transmitter side the movement is sensed and passed to the receiver side by means of the wireless protocol (ZigBee): discrete packets of information are transferred, and the data is delivered when a packet matches the address of the receiver side.

Fig 6: Robot model

Robot Moves in the Direction of the hand gesture of the user


IV. RESULT:

V. CONCLUSION
In any control application, controllability is always the main factor. In this study we tried to make an easier and simpler control system, focusing mainly on making a system that is cheap and reliable. There are many opportunities to build important projects based on this one. A video feed can be added to transmit footage wirelessly to our monitor; a robotic arm can be added to the system and also operated with the help of hand gestures; a gesture-controlled wheelchair can be made by following the same mechanism with a bigger, higher-torque motor; and this wireless gesture control system can also be helpful for controlling home appliances.
REFERENCES
[1] https://en.wikipedia.org/wiki/Gesture_recognition
[2] Analog Devices, ADXL335 datasheet. [Online]. Available: http://www.analog.com/static/imported-files/data_sheets/ADXL335.pdf [Accessed: Nov. 12, 2014].
[3] E. Miranda and M. Wanderley, New Digital Instruments: Control and Interaction Beyond the Keyboard, A-R Editions, Wisconsin, 2006.
[4] K. K. Kim, K. C. Kwak, and S. Y. Ch, "Gesture Analysis for Human-Robot Interaction," Proc. of the 8th Int. Conf. on Advanced Communication Technology, vol. 3, pp. 1824-1827, 2006.
[5] http://www.engineersgarage.com/contribution/accelerometer-based-hand-gesture-controlled-robot

Four Dimensional Coded Modulation with GMI for Future Optical Communication
P.Jayashree1, M.Jayasurya2, J.Keerthana3, K.Kasthoori4, R.Sivaranjini5
1, 2, 3, 4-UG Scholar, 5-Assistant Professor Dept of ECE,
M. Kumarasamy College of Engineering,
Karur.
Email-Id:jayashreesundharam@gmail.com1,
srsivaranjinir670@gmail.com5
Abstract We study coherent optical CM transceivers in which the receiver is based on a Generalized Mutual Information (GMI) structure. Coded modulation is the combination of forward error correction and multilevel constellations. Coherent optical communication systems result in a four-dimensional signal space, which naturally leads to 4D-CM transceivers. Constellations that transmit and receive information in each polarization and quadrature independently are compared with the best 4D constellations designed for uncoded transmission. Theoretical gains are as high as 4 dB, and are validated via numerical simulations of low-density parity-check codes. The GMI is the correct metric to study the performance of capacity-approaching CM transceivers, and it is useful in practice for communication purposes: with the GMI the transferred bits reach the receiver without data loss, the efficiency is higher, and the BER is reduced to a great extent.
Index Terms Bit interleaved coded modulation, Bit wise decoders, Channel capacity, Coded modulation, Fiber optic communications, Low density parity check codes, Non linear distortion.

I. INTRODUCTION
Incoherent fiber optic communication systems, both quadratures and both polarizations of the
electromagnetic field are used. This naturally results in a four-dimensional (4D) signal space. To meet the
demands for spectral efficiency, multiple bits should be encapsulated in each constellation symbol, resulting
in multilevel 4D constellations. To combat the decreased sensitivity caused by multilevel modulation,
forward error correction is used. The combination of FEC and multilevel constellations is known as coded
modulation. The most popular modulation [1] multilevel coding and bit interleaved coded modulation [3][5].

II. RELATED WORKS


A coding technique is described which improves error performance of synchronous data links
without sacrificing data rate or requiring more bandwidth. This is achieved by channel coding with
expanded sets of multilevel/phase signals in a manner which increases free Euclidean distance. Soft
maximum likelihood [11] decoding using the Viterbi algorithm is assumed. Following a discussion of channel capacity, simple hand-designed trellis codes are presented for 8-phase shift keying and 16 quadrature amplitude shift keying modulation. The simple codes achieve coding gains on the order of 3-4 dB. It is then shown that the codes can be interpreted as binary convolutional codes with a mapping of coded bits into channel signals, which we call "mapping by set partitioning". Based on a new distance measure between binary code sequences which efficiently lower-bounds the Euclidean distance between the corresponding channel signal sequences, a search procedure for more powerful codes is developed.
The use of trellis coded modulation [12]-[14] in combination with an outer block code is considered for next-generation 100 Gb/s optical transmission systems. When the concatenated code based on the two interleaved BCH codes is used as the outer code, the NCG is 9.7 dB. The impact of quantization on the performance of the concatenated TCM scheme with the two interleaved BCH [15] outer codes is evaluated, and it is shown that 4-bit quantization is sufficient to approach the infinite-precision performance to within 0.15 dB.
Performance assessment of MIMO BICM demodulators has been based on mutual information. Codes evaluated for PDM-QPSK 100 Gb/s optical systems are generally designed for binary input and binary output, and such systems typically employ a hard decision decoder. It is well known that soft decision decoding has the potential to improve the performance over hard decision decoding [16] when the forward error correction overhead is sufficiently high. For binary transmission, FEC coding requires raising the symbol rate and thus increases the transmitted signal bandwidth. The use of trellis coded modulation as a technique to combine coding and modulation has been considered in different forms. To achieve better performance, bit interleaved coded modulation schemes [17] based on binary low density parity check component codes have been proposed.
However, due to the high implementation complexity of the soft iterative decoding process of LDPC codes at speeds of 100 Gb/s, only component codes of moderate length are considered. This introduces a non-negligible loss in terms of net coding gain (NCG) at the bit error rates of interest for optical transmission systems. In this letter, the use of TCM [18] is considered as the inner code in a concatenated coding scheme. A soft decision inner decoder and a hard decision outer decoder are employed, thus allowing a reasonable implementation complexity. Two block codes specified in an ITU-T recommendation [19] are used as the outer code. The effect of quantization of the in-phase and quadrature components of the received signal samples after optical-electronic conversion is investigated.
The application of multilevel coded modulation for 16-ary constellations in coherent systems is studied. An MLCM system with Reed-Solomon component codes and a multi-stage decoder is considered, and a systematic numerical method for finding the set partitioning and optimal code rates is presented. Results of numerical simulations and experiments supporting this statement are presented.
For the cost-efficient hardware implementation of bandwidth-variable transceivers supporting several polarization-multiplexed and four-dimensional modulation formats, digital signal processing algorithms are required which operate format-transparently and consume few hardware resources. We discuss data-aided signal processing as one possible option, in particular with respect to carrier frequency recovery and channel estimation in combination with frequency-domain equalization.
The generalized mutual information is an achievable rate for bit interleaved coded modulation and is highly dependent on the binary labeling of the constellation [20]. The BICM-GMI, sometimes called the BICM capacity, can be evaluated numerically. This approach, however, becomes impractical when the number of constellation points or the constellation dimensionality grows, or when many different labelings are considered.
We focus on recent advances in four-dimensional modulation formats and in modulation-format-transparent, data-aided digital signal processing. It is argued that four-dimensional modulation formats present an attractive complement to conventional polarization-multiplexed formats in the context of bandwidth-variable transceivers, where they enable a smooth transition with respect to spectral efficiency while requiring marginal additional hardware effort. For the cost-efficient hardware implementation of bandwidth-variable transceivers supporting several polarization-multiplexed and four-dimensional modulation formats, digital signal processing algorithms are required which operate format-transparently and consume few hardware resources.
III. OUR PROPOSED FRAMEWORK

We consider a generic bit interleaved coded modulation system with an approximate demodulator, or log likelihood ratio (LLR) computer. Its performance was characterized by Caire et al. in terms of the capacity of an independent parallel channel model with binary inputs and continuous LLRs [5] as outputs, and by Martinez et al. in terms of the generalized mutual information (GMI), where the BICM decoder is viewed as a mismatched decoder. Although the GMI and the capacity of the parallel channel model coincide under optimal demodulation, they differ in general for an approximate demodulator. Multidimensional constellations optimized for uncoded systems were shown to give high MI and are thus good for ML decoders. With the GMI, the transferred bits reach the receiver without any loss of data; moreover, using the GMI the efficiency is higher and the BER is reduced.
In probability theory and information theory, the mutual information (formerly transinformation) of two random variables is a measure of the variables' mutual dependence. MI is the expected value of the pointwise mutual information. The most common unit of measurement of mutual information is the bit.
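For concreteness, the standard definitions assumed here (the source text does not reproduce them) are, in LaTeX notation, with X the channel input, Y the channel output, and B_1, ..., B_m the bits labeling a constellation symbol:

I(X;Y) = \mathbb{E}\left[ \log_2 \frac{p(X,Y)}{p(X)\,p(Y)} \right]

\mathrm{GMI}_{\mathrm{BICM}} = \sum_{k=1}^{m} I(B_k;\,Y)

The sum form of the GMI holds for independent, uniformly distributed bits decoded with bitwise (BICM) metrics; under an approximate demodulator the GMI is computed with the mismatched LLRs and can only be lower.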
The generalized mutual information is an achievable rate for bit interleaved coded modulation and is highly dependent on the binary labeling of the constellation. The BICM GMI [6]-[7], sometimes called the BICM capacity, can be evaluated numerically. This approach, however, becomes impractical when the number of constellation points and/or the constellation dimensionality grows, or when many different labelings are considered. A simple approximation for the BICM GMI based on the area theorem of the demapper's extrinsic information transfer (EXIT) function is therefore proposed. Numerical results show the proposed approximation gives good estimates of the BICM GMI for labelings with close-to-linear EXIT functions, which include labelings of common interest such as the natural binary code and the binary reflected Gray code. This approximation is used to optimize the binary labeling of the 32-APSK constellation defined in the S2 standard; gains of approximately 0.15 dB are obtained.
IV. PERFORMANCE EVALUATION
We consider constellations whose number of bits per dimension is an integer, due to their practical relevance. The examples studied have 1, 2, and 3 bits/dimension, corresponding respectively to 4, 8, and 12 bits/symbol, or M = 16, 256, and 4096 constellation points. The simulation results shown in Fig. 3.1 and Fig. 3.2 are from our base paper; improving the SNR and BER is still under study.

V. CONCLUSION
We studied the achievable rates of coherent optical CM transceivers in which the receiver is based on a bit-wise (BW) decoder. Multidimensional constellations optimized for uncoded systems give high MI and suit ML decoders, but they are not well suited for BW decoders. It was shown that the GMI is the correct metric to study the performance of capacity-approaching CM transceivers, and that the BER and SNR thereby improve to a great extent.
REFERENCES
[1] G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inf. Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.
[2] H. Imai and S. Hirakawa, "A new multilevel coding method using error-correcting codes," IEEE Trans. Inf. Theory, vol. IT-23, no. 3, pp. 371-377, May 1977.
[3] E. Zehavi, "8-PSK trellis codes for a Rayleigh channel," IEEE Trans. Commun., vol. 40, no. 3, pp. 873-884, May 1992.
[4] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 927-946, May 1998.
[5] A. Guillen i Fabregas, A. Martinez, and G. Caire, "Bit-interleaved coded modulation," Found. Trends Commun. Inform. Theory, vol. 5, no. 1-2, pp. 1-153, 2008.
[6] S. Benedetto, G. Olmo, and P. Poggiolini, "Trellis coded polarization shift keying modulation for digital optical communications," IEEE Trans. Commun., vol. 43, no. 2-4, pp. 1591-1602, Feb.-Apr. 1995.
[7] H. Bulow, G. Thielecke, and F. Buchali, "Optical trellis-coded modulation (oTCM)," in Proc. IEEE Opt. Fiber Commun. Conf., Los Angeles, CA, USA, Mar. 2004.
[8] H. Zhao, E. Agrell, and M. Karlsson, "Trellis-coded modulation in PSK and DPSK communications," in Proc. Eur. Conf. Opt. Commun., Cannes, France, Sep. 2006.
[9] M. S. Kumar, H. Yoon, and N. Park, "Performance evaluation of trellis code modulated oDQPSK using the KLSE method," IEEE Photon. Technol. Lett., vol. 19, no. 16, pp. 1245-1247, Aug. 2007.
[10] M. Magarini, R.-J. Essiambre, B. E. Basch, A. Ashikhmin, G. Kramer, and A. J. de Lind van Wijngaarden, "Concatenated coded modulation for optical communications systems," IEEE Photon. Technol. Lett., vol. 22, no. 16, pp. 1244-1246, Aug. 2010.
[11] I. B. Djordjevic and B. Vasic, "Multilevel coding in M-ary DPSK/differential QAM high-speed optical transmission with direct detection," J. Lightw. Technol., vol. 24, no. 1, pp. 420-428, Jan. 2006.
[13] C. Gong and X. Wang, "Multilevel LDPC-coded high-speed optical systems: Efficient hard decoding and code optimization," IEEE J. Quantum Electron., vol. 16, no. 5, pp. 1268-1279, Sep./Oct. 2010.
[14] L. Beygi, E. Agrell, P. Johannisson, and M. Karlsson, "A novel multilevel coded modulation scheme for fiber optical channel with nonlinear phase noise," in Proc. IEEE Global Telecomm. Conf., Miami, FL, USA, Dec. 2010.
[15] B. P. Smith and F. R. Kschischang, "A pragmatic coded modulation scheme for high-spectral-efficiency fiber-optic communications," J. Lightw. Technol., vol. 30, no. 13, pp. 2047-2053, Jul. 2012.
[16] R. Farhoudi and L. A. Rusch, "Multi-level coded modulation for 16-ary constellations in presence of phase noise," J. Lightw. Technol., vol. 32, no. 6, pp. 1159-1167, Mar. 2014.
[17] L. Beygi, E. Agrell, J. M. Kahn, and M. Karlsson, "Coded modulation for fiber-optic networks: Toward better tradeoff between signal processing complexity and optical transparent reach," IEEE Signal Process. Mag., vol. 31, no. 2, pp. 93-103, Mar. 2014.
[18] I. B. Djordjevic, S. Sankaranarayanan, S. K. Chilappagari, and B. Vasic, "Low-density parity-check codes for 40-Gb/s optical transmission systems," IEEE J. Quantum Electron., vol. 12, no. 4, pp. 555-562, Jul./Aug. 2006.
[18] H. Bulow and T. Rankl, "Soft coded modulation for sensitivity enhancement of coherent 100-Gbit/s transmission systems," in Proc. Opt. Fiber Commun. Conf., San Diego, CA, USA, Mar. 2009.
[19] D. S. Millar, T. Koike-Akino, R. Maher, D. Lavery, M. Paskov, K. Kojima, K. Parsons, B. C. Thomsen, S. J. Savory, and P. Bayvel, "Experimental demonstration of 24-dimensional extended Golay coded modulation with LDPC," in Proc. Opt. Fiber Commun. Conf., San Francisco, CA, USA, Mar. 2014.
[20] C. Hager, A. Graell i Amat, F. Brannstrom, A. Alvarado, and E. Agrell, "Improving soft FEC performance for higher-order modulations via optimized bit channel mappings," Opt. Exp., vol. 22, no. 12, pp. 14544-14558, Jun. 2014.


Measurement of Temperature using RTD and Software Signal


Conditioning using LabVIEW
C. Nandhini, M. Jagadeeswari
Abstract This paper aims at measuring temperature using a Resistance Temperature Detector (RTD) and its software signal conditioning stages using LabVIEW. The objectives of this paper include (i) measurement of temperature using the RTD, (ii) the signal conditioning stages of voltage/current excitation, amplification, and linearization, and (iii) the analysis of the static and dynamic resistance characteristics of the RTD. The 4-wire RTD provides a good interchangeable configuration and cancels out the lead resistance effectively compared to the other RTD wire configurations. In the proposed technique, it is observed that the best suited approach for static resistance-temperature measurement of the RTD is the rational polynomial equation, as it provides higher linearity than the other techniques, and the calibration curve exhibits good linearity between electrical resistance and temperature. The dynamic resistance-temperature measurement yields a high gain and increases the efficiency of the system. The data acquisition process is done using the NI 9219 module along with the Hi-Speed USB Carrier NI USB-9162. The PT100 RTD temperature sensor is used for temperature sensing, and the software signal conditioning is done using the LabVIEW 2014 tool.
Keywords Resistance Temperature Detector (RTD), Excitation, Amplification, Linearization, Calibration, data acquisition, static and dynamic resistance.

I. INTRODUCTION
Measurement of temperature plays a very important and pivotal part in the quality of the end product in many process industries. Almost all chemical processes and reactions are temperature dependent. Several types of temperature sensors are available in the market with varied degrees of accuracy. The RTD is one such sensor, which finds wide application in process industries because of its very high coefficient of resistivity and very stable operation over a considerable period of time.
Resistance Temperature Detectors (RTDs) operate on the principle of changes in the electrical resistance of pure metals and are characterized by a linear positive change in resistance with temperature. Edval J. P. Santos et al. predict that these transducers display a high linearity, observed to be better due to their noise immunity [3]. They are among the most precise temperature sensors available, with resolution and measurement uncertainties of 0.1 °C [1]. The data acquisition system is used to gather signals from the measurement sources, and LabVIEW is used to create the DAQ applications.
C.Nandhini, M.E-VLSI Design, Sri Ramakrishna Engineering College, Anna University (Chennai),

Coimbatore, India.

M.Jagadeeswari, Professor& HOD, M.E-VLSI Design, Sri Ramakrishna Engineering College, Anna

University (Chennai), Coimbatore, India

LabVIEW includes a set of VIs to configure, acquire data from, and send data to DAQ devices. Nasrin Afsarimanesh et al. proposed LabVIEW-based characterization and optimization of thermal sensors for a reliable high-speed solution [5]. Each DAQ device is designed for specific hardware, platforms, and operating systems. The DAQ process is done using the NI 9219 module along with the Hi-Speed USB Carrier NI USB-9162.
The optimization of real-world signals is done using signal conditioning stages, which vary widely in functionality depending on the sensor. For example, RTDs produce very low-voltage signals, which require voltage/current excitation, amplification, and linearization. Santhosh et al. proposed a technique that makes the output independent of the physical properties of the RTD and avoids the requirement of repeated calibration every time the RTD is replaced [7]. The RTD requires a very low excitation current to prevent self-heating. The PT100 RTD temperature sensor is used for temperature sensing, and the software signal conditioning is done using the LabVIEW 2014 tool.
The following are the advantages of using an RTD:
• A wide temperature range (-50 to 500 °C for thin-film and -200 to 850 °C for wire-wound elements)
• Long-term stability
• Simplicity of recalibration
• Accurate readings over relatively narrow temperature spans
• Good linearity
The rest of the paper is organized as follows. In Section II, we provide a brief description of RTD
Characteristics. In Section III, we introduce description of NI 9219 Hardware used. Proposed work is
described in Section IV. Section V deals with important Results and Discussion. Finally, we conclude this
paper in Section VI.
II. RESISTANCE TEMPERATURE DETECTOR
A platinum resistance temperature detector (RTD) is a device with a typical resistance of 100 Ω at 0 °C. It consists of a thin film of platinum on a plastic film [4]. Typical elements used for RTDs include nickel (Ni) and copper (Cu), but platinum (Pt) is by far the most common because of its wide temperature range, accuracy, and stability, as shown in Fig.1. Its resistance varies with temperature, and it can typically measure temperatures up to 850 °C [2]. The relationship between resistance and temperature is relatively linear.

Fig.1 Physical Architecture of RTD

A. Relationship between Resistance and Temperature


The temperature coefficient, called alpha (α), differs between RTD curves. Alpha is most commonly defined as shown in equation (1),

α = (R100 - R0) / (100 · R0)   (1)

where R100 and R0 are the RTD resistances at 100 °C and 0 °C, respectively. The Callendar-Van Dusen equation is commonly used to approximate the RTD curve, as in equation (2),

Rt = R0 [1 + A·t + B·t² + C·(t - 100)·t³]   (2)

where Rt is the resistance of the RTD at temperature t, R0 is the resistance of the RTD at 0 °C, A, B, and C are the Callendar-Van Dusen coefficients shown in Table I, t is the temperature in °C, and C = 0 for t ≥ 0 °C. For temperatures above 0 °C, equation (2) therefore reduces to a quadratic. If we pass a known current, Iex, through the RTD and measure the output voltage, V0, developed across the RTD, then Rt = V0/Iex and t can be estimated from the quadratic as in equation (3),

t = [-A + √(A² - 4B·(1 - V0/(Iex·R0)))] / (2B)   (3)

where V0 is the measured RTD voltage and Iex is the excitation current.
TABLE I
CALLENDAR-VAN DUSEN COEFFICIENTS CORRESPONDING TO COMMON RTDS

B. RTD Calibration
When using RTDs, the temperature is computed from the measured RTD resistance. Depending on the temperature range and accuracy required, we can use a simple linear fit, quadratic or cubic equations, or a rational polynomial function.
For measurements between 0 °C and 100 °C, a linear approximation can be used, as in equation (4),

t ≈ (Rt - R0) / (α · R0) = (Rt - 100) / 0.385   (4)

where R0 = 100 Ω and α = 0.00385. The average error of 0.38 °C over this interval can be minimized to 0.19 °C by shifting the equation a little, as in equation (5).
A quadratic fit provides much greater accuracy in the range of 0 °C to 200 °C, with an rms error over the range of only 0.014 °C and a maximum error of only 0.036 °C; the equation for a European standard 100 Ω RTD is shown in equation (6).
A cubic fit over the range of -100 °C to +600 °C provides an rms error of only 0.038 °C over the entire range, and 0.026 °C in the range of 0 °C to 400 °C, as shown in equation (7).
Fitting the RTD data over its full range (-200 to +850 °C) produces formula (8), a rational polynomial, for computing temperature from RTD resistance. Using the rational polynomial function results in an average absolute error of only 0.015 °C over the full temperature range.
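A minimal sketch of the conversion from measured resistance to temperature using the quadratic inversion of the Callendar-Van Dusen equation for t ≥ 0 °C (the class name is ours; A and B are the standard IEC 60751 platinum coefficients, assumed to match Table I):

// Sketch of PT100 resistance-to-temperature conversion via equation (3),
// valid for t >= 0 degC. A and B are the standard IEC 60751 coefficients.
public class Pt100Converter {
    static final double R0 = 100.0;      // resistance at 0 degC, in ohms
    static final double A = 3.9083e-3;
    static final double B = -5.775e-7;

    // Resistance is obtained as V0 / Iex in the excitation scheme above.
    static double resistanceToTemperature(double rt) {
        // Solve R0 * (1 + A*t + B*t^2) = rt for t (take the physical root).
        double disc = A * A - 4.0 * B * (1.0 - rt / R0);
        return (-A + Math.sqrt(disc)) / (2.0 * B);
    }

    public static void main(String[] args) {
        // 138.51 ohms is the nominal PT100 resistance at 100 degC.
        System.out.printf("t = %.3f degC%n", resistanceToTemperature(138.51));
    }
}

The same routine applies directly to the 4-wire measurement, since the resistance there is simply Rt = V0/Iex.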
C. Accuracy of Various Approximations
The average absolute errors for the above approximations to the Temperature vs. Resistance RTD
curve are summarized in Table II.
TABLE II
TEMPERATURE RANGE AND AVERAGE ERROR
OF VARIOUS APPROXIMATIONS


III. HARDWARE DESCRIPTION


The NI DAQ 9219 hardware has four 6-position spring-terminal connectors that provide connections for four analog input channels. The signal names and terminal assignments by mode are explained in reference [9]. In the 4-Wire Resistance and 4-Wire RTD modes, shown in Fig.3, the lead wire resistance does not affect the measurement, because a negligible amount of current flows across the HI and LO terminals due to the high input impedance of the ADC.

Fig.3. 4-Wire Resistance and 4-Wire RTD mode

The 3-Wire RTD mode, shown in Fig.4, compensates for lead wire resistance in hardware if all the lead wires have the same resistance. The NI 9219 applies a gain of 2x to the voltage across the negative lead wire, and the ADC uses this voltage as the negative reference to cancel the resistance error across the positive lead wire.

Fig.4. 3-wire RTD mode

The 2-Wire Resistance mode, shown in Fig.5, does not compensate for lead wire resistance.

Fig.5 2-wire RTD mode

IV. PROPOSED WORK
The proposed work includes temperature measurement, signal conditioning, and the analysis of the static and dynamic resistance characteristics of the RTD. The hardware setup of the proposed work is shown in Fig.6.

Fig.6 Hardware setup


The RTD is used to measure the resistance change corresponding to the temperature, and the acquisition of the resistance from the RTD is done using LabVIEW through the NI DAQ 9219, as shown in Fig.7.

Fig.7 Block Diagram for Acquiring Resistance

The acquired resistance is converted to temperature using the linear fit, cubic fit, quadratic fit, and rational polynomial equations, as shown in Fig.8.

Fig.8 Front Panel_resistance_to_temp

A. STATIC TEMPERATURE MEASUREMENT


The performance criterion for the measurement of quantities that remain constant, or vary only quite slowly, is called static measurement; the static characteristics of instruments are related to the steady-state response. The input parameters, such as the physical channel to which the RTD is connected and the type of RTD wire configuration, together with the number of readings, sampling rate, and actual temperature, are configured. To begin the execution, the actual temperature must be equal to the true temperature. The process is then initiated, and the corresponding status is indicated using the status indicator. After the required number of readings is taken, the average of the measured resistance, amplified voltage, ADC count, and the various approximations that convert resistance to temperature is calculated and tabulated. Finally, the waveform graph is plotted for the analysis of the static temperature measurement.
B. DYNAMIC TEMPERATURE MEASUREMENT
The set of criteria defined for the instruments, which are changes rapidly with time, is called .dynamic
characteristics. It is the relationship between the system input and output when the measured quantity is
varying rapidly. Dynamic characteristics is used to determine the necessary immersion time for constant
temperature, to compare the dynamic properties, so that the best one may be chosen, the true temperature
variation by correcting known indicated values, to describe the dynamics of a sensor for a closed loop
control system and to choose the type and optimum settings of corrector of dynamic errors.
The input parameters, such as the physical channel to which the RTD is connected and the RTD
wire configuration, together with the sampling rate and the start and end temperatures, are configured. The
start temp button is used to collect the data. Initially, the resistance is averaged over 30 samples, the
starting temperature is determined, and the data acquisition process is carried out until the RTD reaches
the end temperature. The number of samples collected, the 63.2% end temperature, resistance, actual
temperature, start and end resistance and temperature, elapsed time, time constant, and gain values are
computed. Finally, a waveform graph is plotted for the analysis of the dynamic characteristics of the
temperature measurement.
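A minimal sketch of how the time constant could be estimated from the 63.2% point of a recorded first-order step response (array names and the sampling-rate variable are our assumptions, not the paper's):

import numpy as np

def time_constant(temps, fs):
    # temps: numpy array of temperature samples during the step; fs: sampling rate in Hz
    t_start, t_end = temps[0], temps[-1]
    # A first-order response covers 63.2% of the total change after one time constant
    target = t_start + 0.632 * (t_end - t_start)
    idx = np.argmax(temps >= target) if t_end > t_start else np.argmax(temps <= target)
    return idx / fs   # elapsed time to the 63.2% point, in seconds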
V. RESULTS AND DISCUSSION
In this section, the static and dynamic temperature measurement VIs (virtual instruments) are developed and simulated
using LabVIEW.

Fig.9 Front Panel of Static Temperature Measurement

Fig.10 Front Panel-Waveform Graph of Static Temperature Measurement

Procedure for static temperature measurement

- The input parameters, such as the physical channel to which the RTD is connected and the RTD wire configuration, together with the number of readings, sampling rate, and actual temperature, are initialized as shown in Fig.9.
- The initiate button is then clicked to start data acquisition.
- The status of the execution is shown by the status indicator, and the readings are tabulated when the required number of samples has been collected.
- Finally, the analysis done button is clicked and the graph is plotted for the various approximation methods as shown in Fig.10.

The results are analyzed, and it is found that the rational polynomial function shows better accuracy
and higher linearity than the other approximation techniques.

Fig.11 Front Panel of Dynamic Temperature Measurement

Procedure for Dynamic Temperature Measurement

- The input parameters, such as the physical channel to which the RTD is connected and the RTD wire configuration, together with the sampling rate and the start and end temperatures, are configured as shown in Fig.11.
- The start temp button is clicked to collect the data, and the status indicator displays the current status of the execution.
- Initially, the resistance is averaged over 30 samples to find the average starting temperature, and data acquisition continues until the RTD reaches 63.2% of the end temperature.
- When the end temperature is reached, the end temp button is clicked.
- The averages of the start and end resistance and temperature, the time constant, and the gain values are then computed.
- The gain value is estimated to be positive, resulting in higher efficiency, and the waveform graph is plotted for the corresponding readings as in Fig.12.

Fig.12 Front Panel-Waveform Graph of Dynamic Temperature Measurement

Thus, from the results of the static and dynamic temperature measurements, it is observed that the
rational polynomial equation exhibits higher linearity than the other approximation techniques in the static
measurement, and that the dynamic temperature measurement yields a better gain value and hence improved
performance.
VI. CONCLUSION AND FUTURE WORK
The temperature was measured using the RTD, and the software signal-conditioning stages, which
include voltage/current excitation, amplification, and linearization, were implemented using the LabVIEW
2014 tool. The 4-wire RTD provides a good interchangeable configuration and cancels out the lead
resistance more effectively than the other RTD wire configurations. It is observed that the best-suited
approach for static temperature measurement with an RTD is the rational polynomial equation, as it
provides higher linearity than the other techniques. In dynamic temperature measurement, the software
signal conditioning yields higher gain and improved efficiency compared with the traditional method.
The future work is to implement the signal-conditioning stages of the RTD on embedded FPGA
hardware in order to obtain an increased data acquisition rate and to enhance the performance.
ACKNOWLEDGMENT
The authors would like to thank Sri Ramakrishna Engineering College for providing excellent
computing facilities and encouragement, and Innovative Invaders Technology for providing a great
opportunity for learning and professional development.
REFERENCES
[1] Bonnie C. Baker (2008), Precision Temperature Sensing With RTD Circuits, AN687, Microchip Technology Inc.
[2] M. Jagadeeswari and S. Kalaivani (2015), PLC & SCADA Based Effective Boiler Automation System for Thermal Power Plant, International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Vol. 4, Issue 4, pp. 1653-1657.
[3] Edval J. P. Santos and Isabela B. Vasconcelos (2008), RTD-based Smart Temperature Sensor: Process Development and Circuit Design, Proc. 26th International Conference on Microelectronics, Serbia.
[4] Jikwang Kim, Jongsung Kim, Younghwa Shin and Youngsoo Yoon (2001), A study on the fabrication of an RTD (resistance temperature detector) by using Pt thin film, Korean Journal of Chemical Engineering, Vol. 18, No. 1.
[5] Nasrin Afsarimanesh and Pathan Zaheer Ahmed (2013), LabVIEW Based Characterization and Optimization of Thermal Sensors, International Journal on Smart Sensing and Intelligent Systems, pp. 726-739.
[6] S. K. Sen (2011), An Improved Lead Compensation Technique for Four Wire Resistance Temperature Detectors, IEEE Transactions on Instrumentation and Measurement, Vol. 48, No. 5, pp. 903-905.
[7] Santhosh K. V. and B. K. Roy (2014), An Improved Intelligent Temperature Measurement by RTD using Optimal ANN, Proc. of the Intl. Conf. on Advances in Computer, Electronics and Electrical Engineering, pp. 82-86.


Single Image Haze Removal Using Change of Detail Prior Algorithm
D.Shabna1, Mr.C.S.ManikandaBabu2,
Student1, Associate Professor(Sl. Grade)2,
Department of ECE(PG VLSI DESIGN),
Sri Ramakrishna Engineering College, Coimbatore
Email : shabna653@gmail.com1
Abstract A reliable method for image dehazing is proposed using a haze removal algorithm. A
simple but effective prior, the change of detail (CoD) prior, is developed based on the multiple scattering
phenomenon in the propagation of light. Using the change of detail prior, the thickness of haze can be
estimated from a single input image and a high-quality image can be recovered effectively. The proposed
method achieves better results than several state-of-the-art methods, and it can be implemented very
quickly. The technique can handle both color and grayscale images.
Keywords Dehazing, defog, image restoration, depth restoration.

I. INTRODUCTION
Haze (mist, fog, and other atmospheric phenomena) is a major source of degradation in outdoor
images, weakening both colors and contrast; it results from light being absorbed and scattered by turbid
media such as particles and water droplets in the atmosphere during propagation. Moreover, most automatic
systems, which strongly depend on the quality of their input images, may fail to work normally when fed
degraded images. Image dehazing is therefore a challenging task in computer vision, and haze removal
is highly desired to improve the visual quality of such images. Early research used traditional image
processing techniques to remove haze from a single image. The dehazing effect of histogram-based methods
[2]-[4] is limited because global processing over the entire image can lose infrequently distributed pixel
intensities, and histogram modification is difficult to implement in real-time applications due to its large
computational and storage requirements. Later, researchers tried to improve the dehazing effect with
multiple images. In [6]-[8], polarization-based methods are used for dehazing with multiple images;
polarization-filtered images can remove the visual effects of haze, but this approach may fail in fog or very
dense haze. In [11], [12], Narasimhan et al. propose haze removal approaches using multiple images of the
same scene under different weather conditions. Conventional image enhancement techniques are not useful
in this setting, since the effects of weather must be modeled using atmospheric scattering principles that
are closely tied to scene depth. Tan [14] proposes a novel haze removal method based on a Markov Random
Field (MRF) that maximizes the local image contrast.
Tan's approach tends to produce over-saturated values and halo effects in the images. Fattal [19]
proposes a method to remove haze from color images based on independent component analysis (ICA);
this approach is time consuming, cannot be used for grayscale images, and has difficulty dealing with
dense haze. He et al. [5] propose the novel Dark Channel Prior (DCP): in most non-sky patches, at least one
color channel has some pixels whose intensities are very low and close to zero. Using this prior, the
thickness of the haze can be estimated and the original haze-free image retrieved.
The DCP method is simple and very effective in many cases and is also applicable to sky images.
Several improved algorithms [17], [18], [19] have been proposed to overcome the weakness of the DCP
approach, whose main drawback is that it may fail to recover the true scene radiance of distant objects,
which remain bluish. For effective haze-free images, He et al. [9] proposed guided image filtering, a fast,
non-approximate, linear-time edge-preserving smoothing operator that is effective and efficient in computer
vision applications. The main drawback is that guided/bilateral filters concentrate the blurring near strong
edges and introduce halos.
In this paper, a novel change of detail prior algorithm is proposed for single image dehazing. This
simple but effective prior can estimate the thickness of haze from a hazy image and recover the original
image. The remainder of this paper is organized as follows. Section 2 reviews the haze imaging model
widely used for image dehazing; Section 3 discusses the atmospheric scattering model and the estimation
of atmospheric luminance; Section 4 presents the proposed airlight estimation, a combination of smoothing
and sharpening filters; Section 5 presents and analyzes the experimental results; finally, Section 6
concludes the paper.
2. BACKGROUND
In this method, a commonly used model of the formation of a haze image is defined as

I(x) = L_0(x) T(x) + L_A (1 - T(x)),   (1)

where I is the observed luminance at pixel x, L_0 represents the haze-free luminance at the same pixel,
and L_A represents the atmospheric luminance. The optical transmission is defined as

T(x) = e^(-β d(x)),   (2)

where β is the scattering coefficient and d is the scene depth. In many methods it is assumed that the
particle size is large compared with the wavelength of light. The proposed method aims to recover the
intrinsic luminance L_0 from its hazy representation.
Recovering a haze-free image requires three steps:
- Estimate the atmospheric luminance L_A.
- Estimate the airlight A for each pixel in I.
- Calculate the per-pixel intrinsic luminance L_0.
As shown in Fig. 1, the haze removal steps for estimating the atmospheric model and the airlight are
discussed in the next two sections.
3. ATMOSPHERIC SCATTERING MODEL
As reported in previous work, the atmospheric luminance is considered to be constant within a single
image and to have relatively high intensity compared with the intrinsic luminance. He et al. [10] proposed
the dark channel prior and improved the estimation of atmospheric luminance: the top 0.1 percent brightest
pixels in the dark channel are first selected, as these are usually the most haze-opaque.

Fig 1 Dehazing Process

Among these pixels, the pixel with the highest intensity in the input image I is selected as the
atmospheric luminance. The dark channel is defined as

I_dark(x) = min_{y in Ω(x)} ( min_{c in {r,g,b}} I^c(y) ),

where I^c is a color channel of I and Ω(x) is a local patch centered at x. In this paper, an improved
version of He et al.'s method [5] is used to estimate L_A. We filter each color channel of the input image
with an N x N minimum filter using a moving window; the maximum value of each filtered color channel is
then taken as the corresponding component of the atmospheric luminance L_A. When dealing with grayscale
images, the filter is applied to the input directly and the maximum value is selected as L_A. This method
produces a similar result but performs more efficiently.
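A minimal sketch of this estimator (the window size n and the scipy-based implementation are our assumptions, not the paper's):

import numpy as np
from scipy.ndimage import minimum_filter

def estimate_atmospheric_luminance(img, n=15):
    # img: H x W x 3 color image (or H x W grayscale) scaled to [0, 1]
    if img.ndim == 2:
        return minimum_filter(img, size=n).max()
    # Min-filter each color channel with an N x N moving window, then take
    # the maximum of each filtered channel as that component of L_A
    la = np.empty(3)
    for c in range(3):
        la[c] = minimum_filter(img[:, :, c], size=n).max()
    return la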
4. ESTIMATE AIRLIGHT
This section focuses on the approach for estimating the airlight A. The airlight model quantifies how a
column of atmosphere acts as a light source by reflecting environmental illumination towards an observer.
In Eq. (1), the first term L_0 T is called the direct attenuation; the second term L_A (1 - T) indicates
the thickness of the haze. Hence the airlight is defined as

A = L_A (1 - T).   (3)
290

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL,


ELECTRONICS AND COMPUTATIONAL INTELLIGENCE (ICAEECI16)

The change of detail (CoD) prior is inspired by two common observations. First, an image is blurred
more by haze in local regions where the haze is thicker. Second, if we separately sharpen and smooth a
blurred image, producing images I_sharp and I_smooth respectively, the intensity difference between these
two images is negatively correlated with the blurring strength. Using equation (4), the thickness of the
haze can be estimated.
4.1 SHARPENING OPERATOR
Sharpness is essentially the contrast between different colors. The purpose of sharpening the image
is to enhance the details attenuated by scattering and make the image as close to the haze-free image
as possible. Image sharpening is performed in the gradient domain: image detail is represented by a 1-D
profile of gradient magnitude perpendicular to image edges, as shown in Fig. 2.

Fig 2 Gradient Profile Domain

The gradient profile is defined by starting from an edge pixel x_0 and tracing a path from p(x_0)
along the gradient directions (on both sides), pixel by pixel, until the gradient magnitude no longer decreases.
The prior knowledge of gradient profiles is learned from a large collection of natural images and is
called the gradient profile prior. A sub-pixel technique is used to trace the curve of the gradient profile.
4.2 SMOOTHING OPERATOR
Smoothing is performed with a Gaussian filter, which is used to smooth the clear image from
the input; the smoothing operator simulates the multiple scattering effect, which, as analyzed in previous
work, is a very complex process. The Gaussian filter uses a 2-D distribution as a point spread function
(PSF), defined as

G(x, y) = (1 / (2 π σ²)) exp(-(x² + y²) / (2 σ²)).

The Gaussian PSF does not produce halo effects in the images.
4.3 COMPARISON
After sharpening and smoothing the image, a stability criterion is needed to evaluate the difference
between the two results. PSNR is not suitable for dehazing because it is based on neighborhood pixels,
and the smoothing and sharpening filters have already used the neighborhood information, so PSNR would
be redundant. Since sharpening and smoothing are a pair of opposite operations, there is no need to
compare against the input image.
Since the airlight A is a component of the input image I and is negatively correlated with the difference
between the two filtered images, the difference is subtracted from the hazy image and multiplied by a
coefficient to obtain the airlight A.
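The exact form of the paper's Eq. (4) is not reproduced in the text, so the sketch below is only a plausible stand-in consistent with the description above; the operator choices, the coefficient k, and the clipping are all our assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_airlight(img, k=0.8, sigma=3.0):
    # img: grayscale image in [0, 1]
    i_smooth = gaussian_filter(img, sigma=sigma)       # simulate multiple scattering
    i_sharp = np.clip(2.0 * img - i_smooth, 0.0, 1.0)  # simple unsharp-mask sharpening
    detail_change = i_sharp - i_smooth                 # negatively correlated with haze
    # Subtract the difference from the hazy image and scale by a coefficient
    return np.clip(k * (img - detail_change), 0.0, 1.0)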
5. RESULTS AND DISCUSSION
In this section, the results of the change of detail prior algorithm are discussed. The execution is
carried out in the MATLAB 2013a tool. The PSNR of the output image is found to be better than that of the
input image.
5.1 SIMULATION RESULTS USING MATLAB

Fig 3 a) Input Image

Fig 3 b) Dehazed Image

5.2 ANALYSIS REPORT


Table 1 shows the analysis report in which the PSNR and MSE values of the existing technique are
compared with those of the proposed technique. The input hazy image contains more error; using the
visibility approach, the error has been minimized.
6. CONCLUSION AND FUTURE WORK
In this paper, we proposed a simple but effective prior, called the change of detail prior, for single
image dehazing. The algorithm is based on the multiple scattering phenomenon that makes the input image
blurry. When this prior is combined with the haze imaging model, single image dehazing becomes simple
and effective. Because the algorithm is based on local content rather than color, it can be applied to a large
variety of images, and it remains meaningful for color images in all applications. A common problem in
haze removal still remains to be solved: the scattering coefficient in the atmospheric scattering model
cannot be regarded as constant under all atmospheric conditions. To overcome this problem, more
sophisticated physical models can be taken into account.
REFERENCES
[1] Cai Z., Xie B., and Guo F. (2010), Improved single image dehazing using dark channel prior and multi-scale retinex, in Proc. Int. Conf. Intell. Syst. Design Eng. Appl.
[2] T. K. Kim, J. K. Paik, and B. S. Kang, Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering, IEEE Trans. Consum. Electron., vol. 44, no. 1, pp. 82-87, Feb. 1998.
[3] J. A. Stark, Adaptive image contrast enhancement using generalizations of histogram equalization, IEEE Trans. Image Process., vol. 9, no. 5, pp. 889-896, May 2000.
[4] J.-Y. Kim, L.-S. Kim, and S.-H. Hwang, An advanced contrast enhancement using partially overlapped sub-block histogram equalization, IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 4, pp. 475-484, Apr. 2001.
[5] He K., Sun J., and Tang X. (2011), Single image haze removal using dark channel prior, IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341-2353.
[6] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, Instant dehazing of images using polarization, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2001, pp. I-325-I-332.
[7] S. Shwartz, E. Namer, and Y. Y. Schechner, Blind haze separation, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 2, 2006, pp. 1984-1991.
[8] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, Polarization based vision through haze, Appl. Opt., vol. 42, no. 3, pp. 511-525, 2003.
[9] He K., Sun J., and Tang X. (2013), Guided image filtering, IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397-1409.
[10] Jiaming Mai, Ling Shao, and Qingsong Zhu (2015), A fast single image haze removal algorithm using color attenuation prior, IEEE Trans. Image Process., vol. 24, no. 11.
[11] S. G. Narasimhan and S. K. Nayar, Chromatic framework for vision in bad weather, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2000, pp. 598-605.
[12] S. K. Nayar and S. G. Narasimhan, Vision in bad weather, in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), vol. 2, Sep. 1999, pp. 820-827.
[13] Narasimhan S. G. and Nayar S. K. (2003), Interactive (de)weathering of an image using physical models, in Proc. IEEE Workshop Color Photometric Methods Comput. Vis., vol. 6.
[14] R. T. Tan, Visibility in bad weather from a single image, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2008, pp. 1-8.
[15] Narasimhan S. G. and Nayar S. K. (2003), Contrast restoration of weather degraded images, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713-724.
[16] S.-C. Pei and T.-Y. Lee, Nighttime haze removal using color transfer pre-processing and dark channel prior, in Proc. 19th IEEE Conf. Image Process. (ICIP), Sep./Oct. 2012, pp. 957-960.
[17] Gibson K. B., Vo D. T., and Nguyen T. Q. (2012), An investigation of dehazing effects on image and video coding, IEEE Trans. Image Process., vol. 12, no. 2, pp. 662-673.
[18] Yu J., Xiao C., and Li D. (2010), Physics-based fast single image fog removal, in Proc. IEEE 10th Int. Conf. Signal Process. (ICSP), Oct. 2010, pp. 1048.
[19] R. Fattal, Single image dehazing, ACM Trans. Graph., vol. 27, no. 3, p. 72, Aug. 2008.
[20] Tan R. T. (2008), Visibility in bad weather from a single image, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1-8.
[21] Tomasi C. and Manduchi R. (1998), Bilateral filtering for gray and color images, in Proc. 6th Int. Conf. Comput. Vis. (ICCV), pp. 839-846.
[22] R. Fattal (2009), Single image dehazing, ACM Trans. Graph., vol. 27, no. 3, p. 72.


Modelling of Hybrid Wind and Photovoltaic Energy System Using Cuk and SEPIC Converters
S. Mutharasu1 and R. Meena2
1 AP/EEE, Department of EEE, Vivekanandha College of Engineering for Women, Tiruchengode.
2 PG Student, Department of EEE, Vivekanandha College of Engineering for Women, Tiruchengode.

Abstract This project presents a new system configuration of the front-end rectifier stage for a hybrid
wind/photovoltaic energy system. As power demand increases, power failures also increase, so renewable
energy sources can be used to supply constant loads. Hybridizing solar and wind power sources provides
a realistic form of power generation. In this topology, both wind and solar energy sources are integrated
using a combination of Cuk and SEPIC converters. This configuration allows the two sources to supply
the load separately or simultaneously, depending on the availability of the energy sources. The fused
multi-input rectifier stage also allows Maximum Power Point Tracking (MPPT) to extract maximum power
from the sun when it is available; an incremental conductance algorithm is used for the PV system. The
average output voltage produced by the system is the sum of the inputs of the two converters. These
advantages make the proposed hybrid system highly efficient and reliable. Simulation results are given to
highlight the merits of the proposed circuit.
Index Terms SEPIC converter, Cuk converter, PV & wind source, MPPT

I. INTRODUCTION
Solar energy and wind energy are the two most commonly used renewable energy sources. Wind
energy has become the least expensive renewable energy technology in existence. Photovoltaic cells
convert the energy from sunlight into DC electricity. PVs offer added advantages over other renewable
energy sources in that they produce no noise and require practically no maintenance.
Hybridizing solar and wind power sources provides a realistic form of power generation: when one
source is unavailable or insufficient to meet the load demand, the other energy source can compensate
for the difference. Several hybrid wind/PV power systems with Maximum Power Point Tracking (MPPT)
control have been proposed earlier. They used separate DC/DC buck and buck-boost converters connected
in fusion in the rectifier stage to perform the MPPT control for each renewable energy source. Such systems
require passive input filters to remove the high-frequency current harmonics injected into the wind
turbine generator. The harmonic content in the generator current decreases its lifespan and increases the
power loss due to heating.
In this topology, both wind and solar energy sources are integrated using a combination of Cuk and
SEPIC converters, so that if one of them is unavailable, the other source can compensate for it. The fused
Cuk-SEPIC converters can eliminate the high-frequency current harmonics in the wind generator, which
removes the need for passive input filters in the system. These converters support step-up and step-down
operation for each renewable energy source, and they support both individual and simultaneous operation.
The solar energy source is the input to the Cuk converter and the wind energy source is the input to the
SEPIC converter. The average output voltage produced by the system is the sum of the inputs of the two
converters.
II. DC-DC CONVERTERS
DC-DC converters can be used as switching-mode regulators to convert an unregulated DC voltage
to a regulated DC output voltage. The regulation is normally achieved by PWM at a fixed frequency, and
the switching device is generally a BJT, MOSFET, or IGBT.
A. Cuk converter
The Cuk converter is a type of DC-DC converter whose output voltage magnitude can be either
greater than or less than the input voltage magnitude; it provides a negative (inverted) output voltage. This
converter always operates in continuous conduction mode. When M1 is turned on, the
diode D1 is reverse biased, the current in both L1 and L2 increases, and power is delivered to the load.
When M1 is turned off, D1 becomes forward biased and the capacitor C1 is recharged.

Figure1: Cuk converter

B. SEPIC converter
The single-ended primary-inductor converter (SEPIC) is a type of DC-DC converter that allows the
voltage at its output to be greater than, less than, or equal to that at its input. It is similar to a buck-boost
converter but supports both step-up and step-down operation, and the output polarity of the converter is
positive with respect to the common terminal.

Figure 2: SEPIC Converter

The capacitor C1 blocks any DC current path between the input and the output. The anode of the
diode D1 is connected to a defined potential. When the switch M1 is turned on, the input voltage, Vin
appears across the inductor L1 and the current IL1 increases. Energy is also stored in the inductor L2 as
soon as the voltage across the capacitor C1 appears across L2. The diode D1 is reverse biased during this
period. But when M1 turns off, D1 conducts. The energy stored in L1 and L2 is delivered to the output, and
C1 is recharged by L1 for the next period.
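As a brief aside, the ideal continuous-conduction-mode conversion ratios of the two converters (standard textbook relations, not stated in the paper) can be checked with a small sketch:

def cuk_vout(vin, d):
    # Ideal CCM Cuk converter: inverted output, |Vo| = D/(1-D) * Vin
    return -d / (1.0 - d) * vin

def sepic_vout(vin, d):
    # Ideal CCM SEPIC converter: non-inverted, Vo = D/(1-D) * Vin
    return d / (1.0 - d) * vin

# D < 0.5 steps down, D > 0.5 steps up
print(cuk_vout(12.0, 0.74), sepic_vout(12.0, 0.75))  # about -34 V and 36 V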
III. PROPOSED HYBRID SYSTEM
In order to eliminate the problems of stand-alone PV and wind systems and to meet the load demand,
the solution is to combine two or more renewable energy sources. The proposed input-side converter
topology with maximum power point tracking is therefore used to meet the load, whether grid-connected
or commercial. The implementation of the new converter topology eliminates the lower-order harmonics
present in the hybrid power system circuit.
A. BLOCK DIAGRAM

Figure 3: Block diagram of hybrid system

B. CIRCUIT DIAGRAM
The PV array is the input to the Cuk converter and the wind source is the input to the SEPIC converter.
The converters are fused by reconfiguring the two existing diodes from each converter and by sharing
the Cuk output inductor with the SEPIC converter. This configuration allows each converter to operate
normally and individually in the event that one source is unavailable. When only the wind source is available, the
circuit operates as a SEPIC converter. When only PV source is available, the circuit acts as a Cuk converter.

Figure 4: Converter topology for the hybrid system

C. MODES OF OPERATION OF THE CONVERTER TOPOLOGY


a. MODE 1: WHEN M2 IS ON AND M1 IS OFF (SEPIC OPERATION)
When M2 is on, the wind energy meets the load through SEPIC converter operation. The wind
turbine produces AC power, which is converted to DC power using the rectifier; the converted DC power
is stored in the battery and feeds the load. Normally the SEPIC converter is triggered at 50% duty cycle
to meet the load demand.

Figure 5: SEPIC operation alone

b. MODE 2: WHEN M1 IS ON AND M2 IS OFF (CUK OPERATION)

When M1 is on, the solar energy meets the load through Cuk converter operation. The solar array
produces DC power, which is stored in the battery and feeds the load. Normally the Cuk converter is
triggered at around 50% duty cycle by the maximum power point tracking controller to meet the load
demand.

Figure 6: Cuk operation alone
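The MPPT controller mentioned above uses incremental conductance, per the abstract. A minimal sketch of one controller iteration follows; the variable names, step size, and the duty-adjustment sign convention (which depends on the converter topology) are our assumptions:

def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005):
    # One incremental-conductance MPPT iteration (illustrative; assumes v > 0).
    # At the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V.
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        if di > 0.0:
            duty -= step            # perturb toward higher panel voltage
        elif di < 0.0:
            duty += step
    elif di / dv > -i / v:
        duty -= step                # left of the MPP: raise the voltage
    elif di / dv < -i / v:
        duty += step                # right of the MPP: lower the voltage
    return min(max(duty, 0.0), 0.95)  # keep the duty cycle in a safe range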

c. BOTH WIND AND PV SOURCES

If the turn-on duration of M1 is longer than that of M2, the converter operates in states I, III, and IV;
if the turn-on duration of M2 is longer than that of M1, the converter operates in states I, II, and IV. To
provide a better explanation, the inductor current waveforms of each switching state are given below,
assuming that d2 > d1; hence only states I, III, and IV are discussed in this example. In the following,
Ii,PV is the average input current from the PV source; Ii,W is the RMS input current after the rectifier
(wind case); and Idc is the average system output current. The key waveforms that illustrate the switching
states in this example are shown in Figure 7. The mathematical expression relating the total output voltage
to the two input sources is illustrated in the next section.

Figure 7(a): M1 ON, M2 ON

Figure 7(b): M1 ON, M2 OFF

Figure 7(c): M1 OFF, M2 ON

Figure 7(d): M1 OFF, M2 OFF

IV. SIMULATION RESULTS


Figure 8: Matlab simulation for Hybrid System



Figure 9: DC Output voltage Waveform of Solar Energy System in Boost mode

Figure 10: DC Output voltage Waveform of Wind Energy System in Boost mode

V. CONCLUSION
In this paper, a new multi-input Cuk-SEPIC rectifier stage for a hybrid wind/solar energy system has
been presented. It supports step-up/step-down operation for each renewable source, and both converters
are used efficiently to improve the system efficiency and voltage profile. Additional input filters are not
necessary to filter out high-frequency harmonics. MPPT can be realized for each source, and individual
and simultaneous operation is supported. An approach of varying complexity and current-sharing
performance has been proposed; the advantages of the parallel-connected power supply are low component
stress, increased reliability, ease of maintenance and repair, and good thermal management. The presence of
the current-sharing loop has been clearly shown to achieve good performance when paralleling these
converters. The input voltage of the Cuk converter is 12 V and its output voltage is 34 V; the SEPIC
converter input voltage is 12 V and its output voltage is 37 V. When the Cuk and SEPIC converters are
combined, the input voltage is 24 V and the output voltage is 42 V. A MATLAB/Simulink model has been
developed and compared with the parallel schemes.
VI. REFERENCES
[1] Divya Teja Reddy Challa and Raghavender Inguva (Nov 2012), An Inverter Fed with Combined Wind-Solar Energy System Cuk-SEPIC Converter, International Journal of Engineering Research and Technology (IJERT), Vol. 1, Issue 9.
[2] Arun and Teena Jacob (Aug 2012), Modelling of Hybrid Wind and Photovoltaic Energy System using New Converter Topology, Electrical and Electronics Engineering: An International Journal (EEEIJ), Vol. 1, No. 2.
[3] P. Chidambaram and N. Subramani (Mar 2012), Modelling and Simulation of Hybrid Wind-Solar Energy System using Cuk-SEPIC Fused Converter, International Journal of Communications and Engineering (IJCE), Vol. 03, No. 3.
[4] Carlos A. Canesin, Leonardo P. Sampaio, Luigi G. Jr., Guilherme A. e Melo and Moacyr A. G. de Brito (2012), Evaluation of the Main MPPT Techniques for Photovoltaic Applications, IEEE.
[5] D. Das and S. K. Pradhan (2011), Modelling and Simulation of PV Array with Boost Converter: An Open Loop Study, National Institute of Technology, Rourkela.
[6] A. Bakhshai et al. (July 2010), A Hybrid Wind Solar Energy System: A New Rectifier Stage Topology, IEEE Magazine.
[7] Chiang S. J., Hsin-Jang Shieh and Ming-Chen (November 2009), Modelling and Control of PV Charger System with SEPIC Converter, IEEE Transactions on Industrial Electronics, Vol. 56, No. 11.
[8] G. Adrian and D. C. Drago (2009), Modelling of Renewable Hybrid Energy Sources, Scientific Bulletin of the Petru Maior University of Tirgu Mures, Vol. 6.
[9] E. Koutroulis and K. Kalaitzakis (April 2006), Design of a Maximum Power Tracking System for Wind-Energy-Conversion Applications, IEEE Transactions on Industrial Electronics, Vol. 53.
[10] D. Das, R. Esmaili, D. Nichols and L. Xu (Nov. 2005), An Optimal Design of a Grid Connected Hybrid Wind/Photovoltaic/Fuel Cell System for Distributed Energy Production, in Proc. IEEE Industrial Electronics Conference, pp. 2499-2504.
[11] Shu-Hung et al. (May 2003), A Novel Maximum Power Point Tracking Technique for Solar Panels Using a SEPIC or Cuk Converter, IEEE Transactions on Power Electronics, Vol. 18.
[12] R. Billinton and R. Karki (November 2001), Capacity Expansion of Small Isolated Power Systems Using PV and Wind Energy, IEEE Transactions on Power Systems, Vol. 16.


Semicustom Implementation of Reversible Image Watermarking Using Reversible Contrast Mapping
S.Gayathri1, Mr.C.S.ManikandaBabu2

Student1, Department of ECE (PG),


Sri Ramakrishna Engineering College,
Coimbatore, India.
Associate Professor (Sl. Grade) 2,Department of ECE (PG),
Sri Ramakrishna Engineering College,
Coimbatore, India
Abstract In this paper, a reversible image watermarking method based on reversible contrast mapping
is proposed. The algorithm applies an integer transform to pairs of pixels, whose least significant bits are
used for data embedding. The transform is invertible even if the least significant bits of the transformed
pixels are lost during data embedding. Reversible contrast mapping offers a high embedding rate at low
visual distortion. The distortion caused by embedding should be kept small, so that the original and
watermarked images remain perceptually equivalent and the embedded data stays imperceptible on
visualization. Even such imperceptible distortion is not acceptable in medical and military images. This
has led to interest in reversible watermarking, where not only is the embedded watermark extracted, but
perfect restoration of the original signal from the watermarked image is also possible. Low computation
cost and ease of semicustom implementation make it attractive for real-time realization.

I. INTRODUCTION
Digital watermarking is expressed as a knowledge concealing technique that is developed for
functions like identification, copyright protection and classification of digital media content. During this
technique, a secret information referred to as watermark is embedded into the digital transmission content
with in the decoder, watermark information is extracted from the watermarked signal in an exceedingly loss
less manner though original signal can't be obtained back. In some
necessary applications like military notional, rhetorical law and medical notional, distortion within
the original signal might cause fatal results. For instance, a tiny low distortion in an exceedingly medical
image might interfere with the accuracy of document identification. Distortion issues which can arise in
an exceedingly applications will be fastened in a reversible watermarking technique.Reversible image
watermarking algorithms can be divided into five groups. Lossless compression based algorithms, difference
expansion based algorithms, and histogram shifting based algorithms, prediction error expansion based
algorithms and integer to integer transform based algorithms. Performance of a watermarking algorithm
is categorized into three parts. They are visual quality, payload capacity and computational complexity.
A hardware implementation can be designed on a field programmable gate array board or custom
integrated circuit. The difference between FPGA and custom IC implementation is a trade-offamong the
cost, power consumption and performance. Hardware implementation using FPGA has advantages of low
investmentcost, simpler design cycle, field programmability and desktop testing with medium processing
speed. On the other side, due to lower unit cost, full customcapability and from an integration point, custom
implementation application specific integrated circuit design may be more useful. During past years, FPGAs
wereselected primarily for lower speed, complexity, volume designs, but todays FPGAs can easily push
upto the 500 MHz performance barrier.
A literature survey was carried out over various papers in order to understand previously available
techniques together with their advantages and limitations; it also covers the papers that support the proposed
technique. There are many techniques available for reversible image watermarking; the reversible contrast
mapping method provides a way to embed and extract the watermark. The art of secretly hiding and
communicating information has gained immense importance in the last two decades due to advances in the
generation, storage, and communication of digital content.
Watermarking is one solution for tamper detection and protection of digital content. Watermarking can
damage the information present in the cover work, so at the receiving end the exact reconstruction of
the work may not be possible. In addition, certain applications may not tolerate even small distortions
in the cover work prior to downstream processing. In such applications, reversible watermarking is used
instead of other watermarking methods. Reversible watermarking of a digital image allows full extraction
of the watermark along with complete reconstruction of the cover work. In recent years, reversible
watermarking has gained popularity because of its increasing applications in important and sensitive areas
such as military information, health care, and law enforcement. Due to the rapid evolution of reversible
watermarking, an up-to-date survey of recent research in this field is highly desirable.
In 2001, Honsinger et al. introduced one of the first reversible watermarking methods, utilizing
modulo addition to achieve reversibility. Macq developed a reversible watermarking approach by
modifying the patchwork algorithm, also using modulo addition. Although these techniques are reversible,
their imperceptibility is not impressive: the watermarked images resulting from them suffer from salt-and-pepper
noise due to the use of modulo addition. A reversible watermarking technique without modulo
addition was then introduced, based on compressing the least significant bit plane of the cover image to
make space for the watermark to be embedded. Luo et al. [14] reported a reversible watermarking method
using an interpolation technique: the watermark bits are embedded in the unsampled pixels until no
unsampled pixel is left, after which the remaining watermark bits are inserted into the sampled pixels,
which are interpolated using nearby watermarked pixels. This scheme provides low embedding distortion
and low computational cost, resulting in good image quality and an efficient algorithm; however, pixels
with value 0 or 255 are excluded from embedding in order to prevent overflow/underflow. Correlation
between adjacent pixels is efficiently exploited by the interpolation process, the embedding capacity is
improved by an adaptive embedding algorithm, and a pixel selection process is utilized to obtain good
visual quality; the interpolation value is estimated from the enclosing pixels to obtain better prediction.
The structural similarity index is another metric that measures the similarity between different images.
II. REVERSIBLE CONTRAST MAPPING ALGORITHM
Let (x, y) be the values of a pixel pair in an image; the pixel intensity values are bounded within
[0, 255] for an 8-bit grayscale image. The forward integer transform for a pair of pixel values, the
standard reversible contrast mapping, is defined as

x' = 2x - y,   y' = 2y - x.

To prevent the overflow and underflow problems, the transformed pair is restricted to the sub-domain
0 <= x' <= 255 and 0 <= y' <= 255. The inverse transform,

x = ceil((2/3)x' + (1/3)y'),   y = ceil((1/3)x' + (2/3)y'),

can perfectly restore the pixel pair even if the least significant bits of the transformed pixels are lost,
except when both pixels of the pair are odd. The occurrence of odd pixel pairs is expected to be low
relative to the total occurrence of other combinations, so a large set of pixel pairs is available for data
embedding. Reversible contrast mapping therefore provides a high embedding bit rate at very low
mathematical complexity. High-payload embedding, however, introduces visible distortion, so distortion
control is needed to reduce perceived degradation. A straightforward way to control such distortion is to
transform a pair of pixel values only if the change does not exceed a predefined error or distortion threshold.
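A minimal Python sketch of this transform pair (a simplified illustration of standard reversible contrast mapping, ignoring the odd-pair bookkeeping and threshold control described above):

def rcm_forward(x, y):
    # Forward reversible contrast mapping on a pixel pair
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    # Exact inverse via integer ceiling division; the ceilings
    # absorb lost least significant bits of (xp, yp)
    x = -((-(2 * xp + yp)) // 3)   # ceil((2*x' + y') / 3)
    y = -((-(xp + 2 * yp)) // 3)   # ceil((x' + 2*y') / 3)
    return x, y

def transformable(x, y, lo=0, hi=255):
    # Keep only pairs whose transform stays inside the valid sub-domain
    xp, yp = rcm_forward(x, y)
    return lo <= xp <= hi and lo <= yp <= hi

assert rcm_inverse(*rcm_forward(10, 7)) == (10, 7)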
III. WATERMARK EMBEDDING
To realize the reversible contrast mapping algorithm, the original image is partitioned into
non-overlapping groups of pixel pairs, taken either horizontally or vertically, as with any space-filling
curve. Since the aim of this work is a semicustom implementation, the original image is partitioned into
8x8 or 32x32 non-overlapping blocks of pixels, and each block is then partitioned into pairs of pixel values.

Fig 1. Dataflow diagram of watermark embedding













The dataflow of watermark embedding (Fig 1) is: read the cover image; read the watermark image;
partition the image into 32x32 blocks and then into 32x16 blocks of pixel pairs; perform the mapping;
divide the pixel pairs into different sets and initialize the various index values. Set S1 contains all pixel
pairs that satisfy the difference-value condition; set S2 contains all expandable pixel pairs not in S1; set
S21 contains all pixel pairs whose difference value is less than the threshold; set S22 contains all pixel
pairs whose difference value is greater than the threshold; set S3 contains all changeable pixel pairs, and
embedding is performed in set S3.

Fig 2. Data path for step-1 of watermark embedding


Fig 3. Data path for step-2 of watermark embedding

Fig 4. Data path for step-3 of watermark embedding

Fig 2 indicates that the resultant value is left shifted by 1 bit with 1 padding; similarly, the watermark
bit is embedded into the LSB of y. Fig 3 indicates the step-2 operation of watermark embedding, where
the LSB of x is made 0 by two consecutive shift operations: the value of x is first right shifted by 1 bit to
discard its LSB, then a 1-bit left shift with 0 padding is performed to generate the final result. In a similar
way, watermark data is embedded into the LSB of y. Fig 4 indicates step-3, which is the simplest among
the three.
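A hedged sketch of the pair-embedding rule these datapaths implement (the flag/bit placement follows the description above; the helper name is ours):

def embed_pair(x, y, bit):
    # Embed one watermark bit into a transformable pixel pair
    xp, yp = 2 * x - y, 2 * y - x      # forward RCM
    xp |= 1                            # LSB of x' set to 1 flags a transformed pair
    yp = (yp & ~1) | (bit & 1)         # LSB of y' carries the watermark bit
    return xp, yp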
IV. WATERMARK EXTRACTION
The steps are similar to those of the watermark embedding process. The watermarked image is partitioned
into smaller blocks of size 8x8 or 32x32, and each block is again partitioned into pairs of pixels using the
same horizontal partitioning method as used in embedding. The marked image is thus partitioned again into
pairs of pixels, and one of the steps is then followed, subject to two different condition checks. The process
is repeated until all the pixel pairs are covered. The following steps represent the watermark extraction
process and the recovery of the original/host image.

Fig 5. Data flow diagram of watermark extraction

The dataflow of watermark extraction (Fig 5) is: load the stego image; partition it into 32x32 blocks;
divide the pixel pairs into different sets and initialize the various index values. Set S1 contains all pixel
pairs that satisfy the difference-value condition; set S2 contains all expandable pixel pairs not in S1; set
S21 contains all pixel pairs whose difference value is less than the threshold; set S22 contains all pixel
pairs whose difference value is greater than the threshold; set S3 contains all changeable pixel pairs in
which embedding was performed; set S4 contains all non-changeable pixel pairs. The size of the
watermarked image is determined in height and width, and the LSBs of the watermarked image are used
for recovery.

Fig 6. Data path for step-1 of watermark extraction

Fig 7. Data path for step-2 of watermark extraction



Fig 8. Data path for step-3 of watermark extraction

Fig 6 illustrates step 1 of watermark extraction: the input x1 is left shifted by 1 bit twice, added to
the value w, and divided by 3, and the output is x; the same process is repeated for y1. Fig 7 illustrates
step 2: the input x1 is right shifted by 1 bit and then left shifted by 1 bit with 1 padding; the same process
is repeated for y, and the LSB is also extracted. Fig 8 illustrates step 3: the input x1 is right shifted by
1 bit and left shifted by 1 bit with the extracted payload bit as padding, and the output is x; y1 is passed
through unchanged as y.
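A hedged sketch of the corresponding pair-extraction rule (a simplified RCM decode: the embedded LSBs are cleared and the ceiling-based inverse absorbs them; the helper name is ours):

def extract_pair(xp, yp):
    # Recover the embedded bit and the original pixel pair from a
    # transformed pair (flagged by LSB of x' == 1)
    bit = yp & 1                       # watermark bit from LSB of y'
    xp, yp = xp & ~1, yp & ~1          # clear embedded LSBs before inverting
    x = -((-(2 * xp + yp)) // 3)       # ceil((2*x' + y') / 3)
    y = -((-(xp + 2 * yp)) // 3)       # ceil((x' + 2*y') / 3)
    return bit, x, y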
V. HDL CODER
A hardware description language enables a precise, formal description of an electronic circuit that
allows for automated analysis, simulation, and simulated testing of the circuit. It also allows an HDL
program to be compiled into a lower-level specification of physical electronic components, such as the
set of masks used to create an integrated circuit. The HDL Workflow Advisor in HDL Coder
automatically converts MATLAB code from floating point to fixed point and generates synthesizable
VHDL and Verilog code. This capability lets one model the algorithm at a high level using abstract
MATLAB constructs and System objects, while providing options for generating HDL code that is
optimized for hardware implementation.
VI. RESULTS AND DISCUSSION

Fig 9. Input image

Fig 10. Secret Image

Fig 11. Watermarked image contains secret image



Fig 12. Stego image

Fig 13. Extracted watermark

Fig 14. Semicustom Layout

Fig 9 shows the input image, Fig 10 the secret image, Fig 11 the watermarked image containing
the secret image, Fig 12 the stego image, Fig 13 the extracted watermark, and Fig 14 the semicustom
layout. After generating the HDL code, a netlist is created; using the netlist, the semicustom layout is
implemented in Mentor Graphics.
VII. CONCLUSION
The pixel values marked by the rectangular block indicate a particular case in which those pixel values
are not considered for embedding operations, so their LSBs of x need to be stored as side (overhead)
information. For the rest of the pixel values, the corresponding LSB values of x need not be stored; this is
marked by X. The area of the semicustom layout is 2.85 nm.
REFERENCES
[1] B. Yang, M. Schmucker, W. Funk, C. Busch and S. Sun (2009), Integer DCT-based reversible watermarking for images using companding technique, in: E. J. Delp III, P. W. Wong (Eds.), Proceedings of the SPIE, Security, Steganography, and Watermarking of Multimedia Contents VI, pp. 405-415.
[2] C. W. Honsinger, P. W. Jones, M. Rabbani and J. C. Stoffel (2001), Lossless recovery of an original image containing embedded data, U.S. Patent No. 6,278,791.
[3] D. Zheng, Y. Liu, J. Zhao and A. El Saddik (2007), A survey of RST invariant image watermarking algorithms, ACM Comput. Surv. 39 (2).
[4] G. Xuan, C. Yang, Y. Zhen, Y. Q. Shi and Z. Ni (2005), Reversible data hiding using integer wavelet transform and companding technique, in: Lecture Notes in Computer Science, Digital Watermarking, vol. 3304, Springer, Berlin, Heidelberg, pp. 115-124.
[5] J. Fridrich, M. Goljan and R. Du (2002), Lossless data embedding - new paradigm in digital watermarking, EURASIP J. Appl. Signal Process., pp. 185-196.
[6] J.-M. Guo (2008), Watermarking in dithered halftone images with embeddable cells selection and inverse halftoning, Signal Process., pp. 1496-1510.
[7] J.-M. Guo and J.-J. Tsai (2012), Reversible data hiding in low complexity and high quality compression scheme, Digit. Signal Process., pp. 776-785.
[8] J. Feng, I. Lin, C. Tsai and Y. Chu (2006), Reversible watermarking: current status and key issues, Int. J. 2 (3), pp. 161-170.
[9] R. Caldelli, F. Filippini and R. Becarelli (2010), Reversible watermarking techniques: an overview and a classification, EURASIP J. Inform. Security, pp. 1-19.
[10] K. S. Kim, M. J. Lee, H. Y. Lee and H. K. Lee (2009), Reversible data hiding exploiting spatial correlation between sub-sampled images, Pattern Recognit. 42, pp. 3083-3096.
[11] X. Li, B. Yang and T. Zeng (2011), Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection, IEEE Trans. Image Process. 20, pp. 3524-3533.
[12] C. C. Lin and N. L. Hsueh (2008), A lossless data hiding scheme based on three-pixel block differences, Pattern Recognit. 41, pp. 1415-1425.
[13] C. C. Lin, S. P. Yang and N. L. Hsueh (2008), Lossless data hiding based on difference expansion without a location map, in: Congress on Image and Signal Processing, Vol. 2, pp. 8-12.
[14] L. Luo, Z. Chen, M. Chen, X. Zeng and Z. Xiong (2010), Reversible image watermarking using interpolation technique, IEEE Trans. Inf. Forensics Secur. 5, pp. 187-193.
[15] Z. Ni, Y. Q. Shi, N. Ansari and W. Su (2006), Reversible data hiding, IEEE Trans. Circuits Syst. Video Technol. 16, pp. 354-362.
[16] D. Thodi and J. Rodriguez.


Survey on Cloud Computing Security Issues


E.Dinesh#1 and Dr.S.M.Ramesh*2
#Assistant Professor/Research Scholar,
ECE, M.Kumarasamy College of Engineering,
Karur, India
* Associate Professor,
ECE, Bannari Amman Institute of Technology,
Sathyamangalam, India
Abstract Cloud computing has quickly become one of the most famous catchphrases in the IT world
due to its revolutionary model of computing as a service. It promises increased flexibility, scalability, and
reliability, while promising decreased operational and support costs. However, many potential cloud users
are hesitant to move to cloud computing on a large scale due to the unaddressed security issues present
in cloud computing. In this paper, we investigate the major security issues present in cloud computing
today, based on a framework of security subsystems adopted from cloud service providers. We present the
solutions proposed by other researchers and assess the strengths and weaknesses of those solutions. Although
considerable progress has been made, more research needs to be done to address the all-around security
concerns that exist within cloud computing. Security issues relating to consistency, multi-tenancy, and
federation must be addressed in more depth for cloud computing to overcome its security hurdles and
progress towards widespread adoption.
Index Terms CSP, Integral Solutions, NIST models, Scheme

I. INTRODUCTION
Cloud computing has become one of the hottest topics in the IT world today. Its model of
computing as a resource has changed the landscape of computing as we know it, and its promises of increased
flexibility, greater reliability, massive scalability, and decreased costs have attracted businesses and
individuals alike.
Cloud computing, as defined by NIST, is a model for enabling always-on, convenient, on-demand
network access to a shared pool of configurable computing resources that can be rapidly provisioned and
released with minimal management effort or service provider interaction [1]. It is a new model of providing
computing resources that utilizes existing technologies. At the heart of cloud computing is a datacenter that
uses virtualization to isolate instances of applications or services being hosted on the cloud. The datacenter
provides cloud users the ability to hire computing resources at a rate dependent on the datacenter services
being requested by the cloud user. Refer to the NIST definition of cloud computing [1] for the core tenets
of cloud computing.
In this paper, we refer to the organization providing the datacenter and related management services
as the cloud provider. We refer to the organization using the cloud to host applications as the cloud service
provider (CSP). Lastly, we refer to the individuals and/or organizations using the cloud services as the
cloud clients or cloud users.
NIST defines three main service models for cloud computing:
- Software as a Service (SaaS): The cloud provider provides the cloud consumer with the capability to use the provider's applications running on a cloud infrastructure [1].
- Platform as a Service (PaaS): The cloud provider provides the cloud consumer with the capability to develop and deploy applications on a cloud infrastructure using tools, runtimes, and services supported by the CSP [1].
- Infrastructure as a Service (IaaS): The cloud provider provides the cloud consumer with essentially a virtual machine. The cloud consumer has the ability to provision processing, storage, networks, etc., and to deploy and run arbitrary software supported by the operating system run by the virtual machine [1].
307

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL,


ELECTRONICS AND COMPUTATIONAL INTELLIGENCE (ICAEECI16)

Figure 1: Cloud Service Delivery Model

NIST also defines four deployment models for cloud computing: public, private, hybrid, and
community clouds.
- Private cloud: The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
- Community cloud: The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
- Public cloud: The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.
- Hybrid cloud: The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Figure 2: Cloud Deployment Models

One of the most appealing factors of cloud computing is its pay-as-you-go model of computing as
a resource. This new model of computing has allowed businesses and organizations in need of computing
power to purchase as many resources as they need without having to put forth a large capital investment in
IT infrastructure. Other advantages of cloud computing are massive scalability and increased flexibility
for a relatively constant price [2].
Despite the many advantages of cloud computing, many large enterprises are hesitant to adopt
cloud computing to replace their existing IT systems. In the Cloud Computing Services Survey done by the
308

INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL,


ELECTRONICS AND COMPUTATIONAL INTELLIGENCE (ICAEECI16)
IDC IT group in 2009, over 87% of those surveyed cited security as the number one issue preventing
adoption of the cloud [3]. For adoption of cloud computing to become more widespread, it is important that
the security issues with cloud computing be analyzed and addressed, and that proposed solutions be
implemented in existing cloud offerings.
II. SCHEME FOR ANALYZING SECURITY IN THE CLOUD
Beginning in the 1980s, governmental initiatives were established around the world to define
requirements for evaluating the effectiveness of security functionality built into computer systems. In 1996,
initiatives from the US, Europe, and Canada were combined into a document known as the Common
Criteria. The Common Criteria document was approved as a standard by the International Organization
for Standardization in 1999 and has opened the way for worldwide mutual recognition of product security
solutions [4].
The Common Criteria, however, serve primarily as a benchmark for security functionality in products [4]. For this reason, IBM consolidated and reclassified the criteria into five functional security subsystems. We have used these subsystems as the framework within which we assess the security issues present in cloud computing and evaluate the solutions proposed.
The five functional security subsystems defined by IBM are as follows:

Figure 3: Security Architecture Subsystems
a. Audit and Compliance: This subsystem addresses the data collection, analysis, and archival
requirements in meeting standards of proof for an IT environment. It captures, analyzes, reports,
archives, and retrieves records of events and conditions during the operation of the system [4].
b. Access Control: This subsystem enforces security policies by gating access to processes and
services within a computing solution via identification, authentication, and authorization [4]. In
the context of cloud computing, all of these mechanisms must also be considered from the view of
a federated access control system.
c. Flow Control: This subsystem enforces security policies by gating information flow and visibility
and ensuring information integrity within a computing solution [4].
d. Identity and Credential Management: This subsystem creates and manages identity and permission objects that describe access rights information across networks and among the subsystems, platforms, and processes in a computing solution [4]. It may be required to adhere to legal criteria for the creation and maintenance of credential objects.
e. Solution Integrity: This subsystem addresses the requirement for reliable and proper operation of
a computing solution [4].

III. INVESTIGATION OF ISSUES AND LATENT SOLUTIONS IN CLOUD COMPUTING SECURITY
A. Review and Agreement
Cloud computing raises issues regarding compliance with existing IT laws and regulations and with
the division of compliance responsibilities.
Compliance with laws and regulations
Regulations written for IT security require that an organization using IT solutions provide certain audit functionality. However, with cloud computing, organizations use services provided by a third party. Existing regulations do not take into account the audit responsibility of a third-party service provider [5].
The division of audit responsibilities required for regulatory compliance must be clearly delineated
in the contracts and service-level agreements (SLAs) between an organization and the cloud provider.
In order to comply with audit regulations, an organization defines security policies and implements
them using an appropriate infrastructure. The policies defined by an organization may impose more
stringent requirements than those imposed by regulations. It falls on the customer of the cloud services to
bridge any gap between the audit functionality provided by the CSP and the audit mechanisms required for
compliance [5].
The CSA states that the SLA between the cloud consumer and provider should include a Right to
Audit clause, which addresses audit rights as required by the cloud consumer to ensure compliance with
regulations and organization-specific security policies [5].
Even though a general approach involving legal agreements has been described by the CSA, no formal APIs or frameworks for the integration of multiple audit systems have been defined. Additionally, there are no specific standards or models that define the separation of responsibilities between the CSP and the cloud service consumer.
B. Admission control
Admission management is one of the toughest issues facing cloud computing security [5]. One of
the fundamental differences between traditional computing and cloud computing is the distributed nature
of cloud computing. Within cloud computing, access management must therefore be considered in a federated sense, where an identity and access management solution is utilized across multiple cloud services and potentially multiple CSPs.
Admission control can be separated into the following functions:
Authentication
An organization can utilize cloud services across multiple CSPs, and can use these services as
an extension of its internal, potentially non-cloud services. It is possible for different cloud services to
use different identity and credential providers, which are likely different from the providers used by the
organization for its internal applications. The credential management system used by the organization must
be consolidated or integrated with those used by the cloud services [5].
The CSA suggests authenticating users via the consumer's existing identity provider and using federation to establish trust with the CSP [5]. It also suggests using a user-centric authentication method, such as OpenID, to allow a single set of credentials to be used for multiple services [5].
Use of an existing identity provider or a user-centric authentication method reduces complexity and allows for the reuse of existing systems. If done using a standardized federation service, it also increases the potential for seamless authentication with multiple different types of cloud services.
The CSA states that, in general, CSPs and consumers should give preference to open standards, which provide greater transparency and hence the ability to more thoroughly evaluate the security of the approach taken.
Authorization
Requirements for user profiles and access control policies vary depending on whether the cloud user is a member of an organization, such as an enterprise, or an individual. Access control requirements include establishing trusted user profile and policy information, using it to control access within the cloud service, and doing this in an auditable way [5].
Once authentication is done, access to resources can be authorized locally within the CSP. Many of the authorization mechanisms used in traditional computing environments can be utilized in a cloud setting.
Federated sign-on
A federation is a group of two or more organizations that have agreed upon standards for operation
[6]. Federations allow multiple, disparate entities to be treated in the same way. In cloud computing,
federated sign-on plays a vital role in enabling organizations to authenticate their users of cloud services
using their chosen identity provider.
If an organization uses multiple cloud services, it could face the difficulty of having to authenticate multiple times during a single session for different cloud services. The Cloud Computing Use Cases Discussion Group suggests that the multiple sign-on problem can be solved by using a federated identity system. The federated identity system would have a trusted authority common to multiple CSPs, and would provide single or reduced sign-on through the common authority [7].
C. Flow control
Information flow control is central to interactions between the CSP and cloud consumer, since in
most cases, information is exchanged over the Internet, an unsecured and uncontrollable medium. Flow
control also deals with the security of data as it travels through the data lifecycle within the CSP: creation, storage, use, sharing, archiving, and destruction.
A cloud is shared by multiple service consumers, and by their very nature, cloud architectures are not static and must allow flexibility and change. Securing the flow of data between cloud service consumers and providers, and across the various components within a CSP, becomes challenging and requires extensions of the mechanisms used in today's more static environments.
Flow control can be separated into the following functions:
Secure exchange of data
Since most cloud services are accessed over the Internet, an unsecured domain, there is an utmost need to encrypt credentials while they are in transit [5]. Even within the cloud provider's internal network, encryption and secure communication are essential, as the information passes between countless disparate components through network domains with unknown security, and these network domains are shared with other organizations of unknown reputability.
Controls should be put in place at multiple levels of the network stack. At the application layer, Shiping Chen et al. [8] suggest using application-specific encryption techniques to ensure adequate security of the data for the particular application. At the transport layer, Xiao Zhang et al. [9] suggest using standard cryptographic protocols, such as SSL and TLS. At the network layer, Chen et al. [8] suggest using network-layer controls, such as VPN tunneling, to provide an easy-to-implement, secure connection with a CSP.
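As one concrete illustration of a transport-layer control, the short Python sketch below opens a certificate-verified TLS connection using only the standard library. The host name is a placeholder of ours, not an endpoint from the literature cited above.

import socket
import ssl

# Hypothetical CSP endpoint, used only for illustration.
HOST, PORT = "api.example-csp.com", 443

# create_default_context() enables certificate verification and
# hostname checking against the system trust store by default.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    # Wrap the TCP socket so all subsequent traffic is encrypted in transit.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls_sock.recv(1024))

A VPN tunnel at the network layer would protect the same exchange one layer lower, without requiring application changes.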
Data security lifecycle
The data security lifecycle tracks the phases through which data goes from creation to destruction. It
is composed of the six phases given below. Refer to [5] and [10] for descriptions of these phases.
Create phase: As soon as data is created, it can be tampered with. It could be improperly classified, or intruders could change its access rights, resulting in loss of control over the data [10]. The CSA suggests that organizations use data labeling and classification techniques, such as user tagging of data, to mitigate the improper classification of data [5].
Store phase: Because CSPs are third parties, the complete security of CSP systems is unknown, so data must be protected from unauthorized access, tampering by network intruders, and leakage [10]. Due to the multi-tenant nature of cloud computing, controls must be put in place to compensate for the additional security risks inherent in the commingling of data.
In order to prevent legal issues based on the physical location of data, the CSA suggests that the cloud consumer stipulate in the SLA its ability to know the geographical location of its data, and ensure that the SLA includes a clause requiring advance notification of situations in which storage may be seized or data may be subpoenaed [5].
Use and Share phases: During the use phase, which includes transmission between CSP and consumer and data processing, the confidentiality of sensitive data must be protected from mixing with the network traffic of other cloud consumers. If the data is shared between multiple users or organizations, the CSP must ensure data integrity and consistency. The CSP must also protect each of its cloud service consumers from malicious activities by its other consumers [10].
Archive phase: As with the store phase, data must be protected against unauthorized access by intruders and from malicious co-tenants of the cloud infrastructure. In addition, data backup and recovery schemes must be in place to prevent data loss or premature destruction [5]. For data in a live production database, the CSA suggests using at-rest encryption, with the CSP encrypting the data before storage [5]. For data that will be archived, it recommends that the cloud consumer perform the encryption locally before sending the data to the CSP, to reduce the ability of a malicious CSP or co-tenant to access archived data [5].
Destroy phase: Data persistence is the biggest challenge in the destroy phase. For data to be completely destroyed, it must be erased, rendered unrecoverable, and, as appropriate, physically discarded [5]. The CSA suggests a number of techniques for CSPs to ensure that data is completely destroyed, including disk wiping, physical destruction techniques such as degaussing, and crypto-shredding [5].
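The at-rest encryption and crypto-shredding recommendations can be illustrated with a minimal Python sketch using the third-party cryptography package; this is our own illustrative sketch, not a mechanism prescribed by the CSA.

from cryptography.fernet import Fernet

# Key generated and held by the cloud consumer, never by the CSP.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt locally before archiving; only ciphertext reaches the CSP.
archive_blob = cipher.encrypt(b"records to be archived")

# Normal retrieval: fetch the blob back and decrypt locally.
assert cipher.decrypt(archive_blob) == b"records to be archived"

# Crypto-shredding: destroying every copy of the key renders all
# archived ciphertext permanently unrecoverable, wherever it resides.
del key, cipher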
D. Identity/credentials (management)
Within cloud computing, identity and credential management entails the provisioning, deprovisioning, and management of identity objects, and the ability to define an identity provider that accepts a user's credentials (a user ID and password, a certificate, etc.) and returns a signed security token that identifies that user. Service providers that trust the identity provider can use that token to grant appropriate access to the user, even though the service provider has no knowledge of the user [7].
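The token flow described above can be sketched with a toy identity provider that signs assertions using a key shared with the relying service. Real deployments use standards such as SAML or OpenID Connect; this stdlib-only Python sketch (all names our own) only illustrates the trust relationship.

import base64, hashlib, hmac, json, time

SHARED_KEY = b"idp-and-service-shared-secret"  # illustrative key material

def issue_token(user_id):
    # Identity provider: sign a claim about the authenticated user.
    claim = json.dumps({"sub": user_id, "iat": int(time.time())}).encode()
    sig = hmac.new(SHARED_KEY, claim, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(claim).decode() + "." + base64.urlsafe_b64encode(sig).decode()

def verify_token(token):
    # Relying service: trust the claim only if the signature checks out.
    claim_b64, sig_b64 = token.split(".")
    claim = base64.urlsafe_b64decode(claim_b64)
    expected = hmac.new(SHARED_KEY, claim, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        raise ValueError("token was not issued by the trusted identity provider")
    return json.loads(claim)

print(verify_token(issue_token("alice"))["sub"])  # -> alice

Note that the service grants access based solely on the verified claim; it needs no account database of its own for the user.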
An organization may use multiple cloud services from multiple cloud providers. Identity must be
managed at all of these services, which may use different identity objects and identity management systems.
In addition, provisioning and deprovisioning of identities in an organization's IT system is traditionally done manually and infrequently. With cloud computing, access to services changes more rapidly than in a traditional IT application, so provisioning and deprovisioning of identities must be dynamic.
Federated identity management allows an organization to rapidly manage access to multiple cloud services from a single repository. An organization can maintain a mapping of master identity objects to the identities used by multiple applications within the organization's IT system. Cloud customers should modify or extend these repositories of identity data so that they encompass applications and processes in the cloud [5].
Currently, CSPs provide custom connectors for communication of identity and access control
objects. The capabilities currently provided by CSPs are inadequate for enterprise consumers. Custom
connectors unique to cloud providers increase management complexity, and are not flexible, dynamic,
scalable, or extensible [5].
Researchers at IBM Research China [11] suggest using a brokered trust model, where a third-party broker server is used to establish trust with the cloud service user. The business agreement between the CSP and the identity broker allows the CSP to place trust in the broker, allowing it to act as an agent for the CSP to establish trust with other parties, such as organizations using cloud services [11]. The organizations can then take advantage of their own identity federation services to relay credential information for authentication with the cloud service.
Such an approach reduces the CSP's cost of establishing multiple trust relationships with multiple service users. It also pushes complexity to the trust broker, which can support more forms of federated identities. From the consumer's perspective, if multiple CSPs utilize the same trust broker, trust with multiple different types of services can be established by establishing trust with a single trust broker.
E. Solution integrity
Within the realm of cloud computing, solution integrity refers to the ability of the cloud provider to
ensure the reliable and correct operation of the cloud system in support of meeting its legal obligations, e.g.,
SLAs, and any technical standards to which it conforms. This encompasses protecting data while it is on
the cloud premises, both cryptographically and physically; preventing intrusion and attack and responding
swiftly to attacks such that damage is limited; preventing faults and failures of the system and recovering
from them quickly to prevent extended periods of service outage; and protecting cloud tenants from the activities of other cloud tenants, both direct and indirect.
Incident response and remediation
Even though solutions are run by the cloud provider, cloud providers have an obligation to both their customers and regulators in the event of a breach or other incident. In the cloud environment, the cloud consumer must have enough information and visibility into the cloud provider's system to be able to provide reports to regulators and to its own customers.
The CSA suggests that cloud customers clearly define and indicate to cloud providers what they consider serious events and what they consider mere incidents [5]. For example, a cloud consumer may consider a data breach a serious incident, whereas an intrusion detection alert may just be an event that should be investigated.
Fault tolerance and failure recovery
For a CSP, one of the most devastating occurrences is an outage of service due to a failure of the cloud system. For example, Amazon's EC2 service went down in April 2011, taking with it a multitude of popular websites that use EC2 to host their services. Amazon Web Services suffered a huge blow from this outage. CSPs must ensure that zones of service are isolated to prevent mass outages, and must have rapid failure recovery mechanisms in place to counteract outages.
The CSA recommends that cloud customers inspect cloud provider disaster recovery and business continuity plans to ensure that they are sufficient for the cloud customer's fault tolerance requirements [5].
IV. CONCLUSIONS AND FUTURE WORK
Cloud computing is an extension of existing techniques for computing systems. As such, existing
security techniques can be applied within individual components of cloud computing. However, because of
the inherent features of cloud computing, such as resource pooling and multitenancy, rapid elasticity, broad
network access, and on-demand self-service, existing security techniques are not in themselves adequate to
deal with cloud security risks.
Cloud providers exist in the market today, so the cloud paradigm has already overcome its initial security hurdles and moved from theory into reality. However, current cloud providers offer highly proprietary solutions for dealing with security issues, while execution of a single business process may require the participation of multiple, interoperating providers and consumers. For cloud computing to be used on a wide scale and really deliver on its promised benefits of elasticity, scalability, flexibility, and economies of scale, the focus of security needs to shift towards devising techniques that enable federation of the security functions in use today.
Further, the federation should allow cloud consumers to commission and decommission services from various CSPs with flexibility and agility. Finally, interesting research problems will arise when we consider cloud computing security together with classical quality-of-service issues [12, 13] and distributed computing issues [14] in a network-wide scope, where cloud (storage) systems are implemented in a distributed manner.
Due to multitenancy, there is a need to logically isolate the data, computation, manageability, and auditability of users co-tenant on the same physical infrastructure, at the individual component level, across architectural layers, and across multiple providers. Hence, security mechanisms and approaches that enable the abovementioned isolation in a standardized way deserve more scrutiny in the future.
V. REFERENCES
[1] National Institute of Standards and Technology, "The NIST Definition of Cloud Computing," Sept. 2011.
[2] Armbrust, M. et al., "Above the Clouds: A Berkeley View of Cloud Computing," UC Berkeley EECS, Feb. 2009.
[3] Ramgovind, S.; Eloff, M.M.; Smith, E., "The Management of Security in Cloud Computing," Information Security for South Africa (ISSA), pp. 1-7, 2-4 Aug. 2010.
[4] IBM Corporation, Enterprise Security Architecture Using IBM Tivoli Security Solutions, Aug. 2007.
[5] Cloud Security Alliance, "Security Guidance for Critical Areas of Focus in Cloud Computing V2.1," 2009.
[6] "Federated identity management," Internet: http://en.wikipedia.org/wiki/Federated_identity_management, [Dec. 16, 2011].
[7] Cloud Computing Use Case Discussion Group, "Cloud Computing Use Cases Whitepaper v4.0," July 2010.
[8] Shiping Chen; Nepal, S.; Ren Liu, "Secure Connectivity for Intra-cloud and Inter-cloud Communication," Parallel Processing Workshops (ICPPW), 2011 40th International Conference on, pp. 154-159, 13-16 Sept. 2011.
[9] Xiao Zhang; Hong-tao Du; Jian-quan Chen; Yi Lin; Lei-jie Zeng, "Ensure Data Security in Cloud Storage," Network Computing and Information Security (NCIS), 2011 International Conference on, vol. 1, pp. 284-287, 14-15 May 2011.
[10] Xiaojun Yu; Qiaoyan Wen, "A View about Cloud Data Security from Data Life Cycle," Computational Intelligence and Software Engineering (CiSE), 2010 International Conference on, pp. 1-4, 10-12 Dec. 2010.
[11] He Yuan Huang; Bin Wang; Xiao Xi Liu; Jing Min Xu, "Identity Federation Broker for Service Cloud," Service Sciences (ICSS), 2010 International Conference on, pp. 115-120, 13-14 May 2010.
[12] Shigang Chen, Meongchul Song, Sartaj Sahni, "Two Techniques for Fast Computation of Constrained Shortest Paths," IEEE/ACM Transactions on Networking, vol. 16, no. 1, pp. 105-115, February 2008.
[13] King-Shan Lui, Klara Nahrstedt, Shigang Chen, "Hierarchical QoS Routing in Delay-Bandwidth Sensitive Networks," in Proc. of the IEEE Conference on Local Area Networks (LCN 2000), pp. 579-588, Tampa, FL, November 2000.
[14] Shigang Chen, Yi Deng, Paul Attie, Wei Sun, "Optimal Deadlock Detection in Distributed Systems Based on Locally Constructed Wait-for Graphs," in Proc. of the 16th IEEE International Conference on Distributed Computing Systems (ICDCS '96), Hong Kong, May 1996.

Minimizing Response Delay in Mobile Ad Hoc Networks using Dynamic Replication

Karthik K1, Suthahar P2
1 Student, M.E (CSE), M.KCE, Karur, E-mail: vangalkarthi@gmail.com
2 Assistant Professor, M.E (CSE), M.KCE, Karur, E-mail: psutha@engineer.com
Abstract - A mobile ad hoc network (MANET) is an active area of research in networking. Communication range and node mobility affect the efficiency of file querying. MANETs can employ a P2P file sharing mechanism, whose main advantages are that files can be shared without base stations and that server overload is avoided. File replication has the major advantage of enhancing file availability and reducing file querying delay. Current replication protocols have drawbacks: they consider only node storage and the allocation of resources to replicas. In this paper, we introduce a new Distributed File Replication technique that handles file dynamics, such as file addition and deletion, in a dynamic manner. The protocol achieves a lower average response delay at a lower cost than other replication protocols.
Keywords - Mobile Ad Hoc Network (MANET), File Replication, Peer-to-Peer, Query Delay.

I. INTRODUCTION
In mobile ad hoc networks (MANETs), the movement of nodes can partition the network, so that nodes in one partition cannot access data held by nodes in other partitions. File replication is an effective solution for improving file availability in distributed systems. By replicating a file at mobile nodes that are not the owner of the source file, file availability can be improved, because there are multiple replicas in the network and the probability of finding one copy of the file is higher. File replication can also reduce query delay, since mobile nodes can obtain the file from nearby replicas. However, most mobile nodes have only a limited amount of memory space, range, and power, and hence it is difficult for one node to collect and hold all files; these constraints and the independence of nodes in MANETs cause file unavailability for requesters. When a mobile node replicates only part of the files, there is a trade-off between query delay and file availability.
MANETs vary significantly from wired networks in network topology, network configuration, and network resources. Features of MANETs are dynamic topology due to host movement, network partitioning due to unreliable communication, and limited resources such as battery power and memory capacity [1, 2]. File sharing is one of the important functionalities to be supported in MANETs; without it, the performance and usefulness of a MANET are greatly reduced [3]. A typical example where file sharing is important is a conference in which several users share their presentations while discussing a particular issue; it is also applicable to defence applications, rescue operations, disaster management, etc. The method used for file sharing depends heavily on the features of the MANET [3]. Frequent network partitioning due to host movement or limited battery power reduces file availability in the network. To overcome file unavailability, replication techniques address these problems so that files are available at all times in the network.
File replication
File replication is a technique which improves file availability by creating copies of a file. Replication allows better file sharing and is a key approach for achieving high availability. File replication has been widely used to maximize file availability in distributed systems, and we apply this technique to MANETs. It helps to minimize the response time of access requests, to distribute the processing load of these requests over several servers, and to eliminate overload on the transmission paths to a single server. The replicas are accessed at varying times.

Fig. 1 File Replication in MANETs

BENEFITS OF FILE REPLICATION
In distributed systems, files are accessed from multiple locations, so it is beneficial to replicate them throughout the network.
A. Increased file availability
Multiple replicas of a file improve file availability and reliability in case of network failures.
B. Faster query response
Queries initiated at nodes where replicas are stored can be satisfied directly, without incurring network transmission delays from remote nodes.
C. Load sharing
The computational load of responding to queries can be distributed over a number of nodes in the network.
RESEARCH ISSUES RELATED TO FILE REPLICATION
A. Power consumption
Mobile nodes in a MANET run on battery power. If a node with little remaining power is replicated with many frequently accessed file items, it soon gets drained and cannot provide services any more. Thus the replication algorithm should place replicas on nodes that have sufficient power, by periodically checking the remaining battery power of each node.
B. Node mobility
In a MANET, hosts are mobile, which leads to a dynamic topology. Thus the replication technique has to support movement prediction, so that if a host is likely to move away from the network, its replicas are placed on other nodes that are expected to remain in the network for a particular period of time.
C. Resource availability
Nodes participating in a MANET are portable hand-held devices, so storage capacity is limited. Before sending a replica to a node, the technique has to determine whether the node has sufficient storage capacity to hold the replica files.
D. Real-time applications
MANET applications such as rescue and military operations are time-critical and may have both firm and soft real-time transactions. Therefore, the replication technique should be able to deliver correct information before processing deadlines expire, taking both firm and soft real-time transaction types into consideration in order to minimize the number of transactions missing their deadlines.
E. Network partitioning
Due to the frequent disconnection of mobile nodes, network partitioning occurs more often in MANET databases than in traditional databases. Network partitioning is a serious problem in a MANET when the server that contains a required file is isolated in a separate partition, reducing file accessibility to a large extent. Therefore, the replication technique should be able to determine the times at which network partitioning may occur and replicate file items accordingly.
B. Peer-to-Peer Replication Model
The peer-to-peer model removes the restrictions of the client-server model. Replica files can be transmitted between hosts without requiring all hosts to take part in the communication in the network. The peer-to-peer model is useful for mobile systems that have poor network connectivity, and the single point of failure is naturally eliminated.

Fig. 2 Peer-to-Peer File Replication

AN OVERVIEW OF EXISTING TECHNIQUES
Kang Chen [4] proposed a distributed file replication protocol named the Priority Competition and Split replication protocol (PCS), which realizes the optimal replication rule in a fully distributed manner. The authors analyzed the influence of replica distribution on the average querying delay under constrained available resources with two movement models, and then derived an optimal replication rule that allocates resources to file replicas with minimum average querying delay.
T. Hara, 2001 [6] proposed effective replica allocation methods in mobile ad hoc networks for improving file accessibility. The author proposed three replica allocation techniques that improve file accessibility by replicating files on mobile nodes: the Static Access Frequency (SAF) technique, the Dynamic Access Frequency and Neighbourhood (DAFN) technique, and the Dynamic Connectivity-based Grouping (DCG) technique. These techniques make the following assumptions: (i) each file item and each mobile node is assigned a separate identifier; (ii) every mobile node has finite storage space to store replica files; (iii) there are no update operations; and (iv) the access frequency of each file item, which is the number of times a particular mobile node accesses that file item in a unit time interval, is known and does not change. The decision of which file items are to be replicated on which mobile nodes is based on the file items' access frequencies, and these decisions are taken during a particular period of time, called the relocation period. In the SAF technique, a mobile host allocates replicas of the files with the highest access frequencies. In the DAFN technique, replicas are preliminarily allocated based on the SAF technique, and then replica duplication among neighbouring mobile nodes is eliminated. In the DCG technique, groups of mobile nodes are created, and replica files are shared within each group. The simulation results show that in most cases the DCG technique gives the highest accessibility, and the SAF technique produces the lowest traffic.
Yang Zhang et al., 2012 [5] describe that, in MANETs, nodes move freely, and link and node failures are common, leading to frequent network partitions. When a network partition occurs, mobile nodes in one partition are not able to access files replicated by nodes in other partitions, which significantly reduces the performance of file access. To solve this problem, file replication techniques are used.
V. Ramany and P. Bertok, 2008 [7] studied solutions for replicating location-dependent data in MANETs to handle unreliable network connections. Replication aims at improved accessibility, shorter response time, and fault tolerance. When a file is associated with one location in the subnetwork and valid only within a region around that location, the advantages of replication apply only within this region.
PROBLEM DEFINITION
Although many file replication protocols are available, their main problem is that they lack a rule for allocating limited resources to different files for replica creation so as to achieve the minimum global average querying delay, that is, global search efficiency optimization under limited resources. They simply consider storage as the resource for replicas, but neglect that a node's frequency of meeting other nodes also controls the availability of its files. Files on a node with a higher meeting ability have higher availability. So the problem is how to allocate the limited resources in the network to different files for replication, and how to create and delete replicas dynamically.
PROPOSED SOLUTION
In this section we propose a new distributed file replication protocol to minimize the average querying delay. The Priority Based Dynamic Replication (PBDR) technique adds and deletes replica files based on priority: if a file succeeds in the priority competition, a replica is added; otherwise replicas are deleted.

Algorithm 1: Pseudo-code for File_Adding in PBDR

i.FILE_ADD_REPLICA(k)            // node i tries to create a replica on node k
k.FILE_ADD_REPLICA(i)            // node k tries to create a replica on node i
Begin
  If |Fi| < MAX then             // check the available storage of the node
    count <- 0                   // initialize the attempt counter
    For each file f in the current node
      Files_priority_check(f)
      If node.test_file(f) == true then
        Node(i).FILE_ADD_REPLICA(f)
      Else
        count <- count + 1       // select another neighbour
End

Procedure Files_priority_check(f)
  While resource < f.size        // not enough free storage for file f
    If priority_level > Pj then
      Return file_addition()     // priority test successful
    Else
      Si <- Si - size(f)         // priority test fails; reclaim space
      Return false
End procedure
Algorithm 2: Pseudo-code for File_Deleting in PBDR

i.FILE_DEL_REPLICA(k)            // node i tries to delete replica files
k.FILE_DEL_REPLICA(i)            // node k tries to delete replica files
Begin
  While file f in current node
    Files_priority_check(f)
    If node.test_file(f) == true then
      Node(i).FILE_DEL_REPLICA(f)
    Else
      count <- count + 1         // select another neighbour
End

Procedure Files_priority_check(f)
  While resource < f.size
    If priority_level < Pj then
      Return file_deletion()     // priority test successful
    Else
      Si <- Si - size(f)         // priority test fails
      Return false
End procedure
DESIGN OF THE PBDR FILE REPLICATION TECHNIQUE
In PBDR, each node dynamically updates its meeting ability (Vi) and the average meeting ability of all hosts in the system (V). Replication is carried out among all the neighbour nodes. Each node also periodically calculates the priority Pj of each of its files. The popularity qj is calculated using R, where qj and R are the number of received requests for the file and the total number of queries generated in a unit time period, respectively. Note that R is a pre-defined system parameter. A replicating node should keep the average meeting ability of the replica nodes for file j around V. Node i first checks the meeting abilities of its neighbors and then chooses a neighbor node k that does not contain file j.
The protocol first chooses a neighbor node k and then checks whether it is the current node. A priority test is then conducted for the requested file j; if the test succeeds, ADD_replica is executed, otherwise DEL_replica is executed on each peer in the network.
Node k creates replicas for files in a top-down manner periodically. Algorithm 1 presents the pseudo-code for the PBDR file-addition process between two encountered nodes. In detail, suppose node i needs to replicate file j, which is at the top of the list.
The neighbor node repeats the above process until its available storage is no less than the size of file j. Next, the node fetches the file at the top of the list and repeats the process. If file j fails to be replicated after K attempts, the node stops launching competitions until the next period. If the selected neighbor's available storage is larger than the size of file j, it creates a replica for file j directly. Otherwise, a priority test takes place between the replica of file j and the replicas already on the neighbor node, based on their P values. The priority value of the new replica is set to half of the original file's. If file j is among the selected files, it fails the priority test and will not be replicated on the neighbor node. Otherwise, all selected files are removed and file j is replicated. If file j fails, node i makes another attempt for file j until the maximum number of attempts (K) is reached. The setting of K attempts ensures that each file can run the priority test against a sufficient subset of the replicas in the system. If node i fails to create a replica for file j after K attempts, then replicas on node i whose P values are smaller than that of file j are unlikely to win a priority test. Thus, at this moment, node i stops replicating files until the next round.
File replication stops when the communication session of the two involved nodes ends. Then each node continues the replication process for its files after excluding the disconnected node from its neighbor list. Since file popularity, k, and available system resources change as time goes on, each node continuously executes PBDR to dynamically handle these time-varying factors. Each node also periodically recalculates the popularity of its files (qj) to reflect changes in file popularity (due to changes in node querying patterns and rates) in different time periods. These periodic file popularity updates automatically handle file dynamism.
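The replica-addition behaviour described above can be condensed into a small Python sketch. The priority competition, the K-attempt limit, and the halving of a new replica's priority follow this section's description; the data structures and constants are our own illustrative scaffolding.

from dataclasses import dataclass, field

K_ATTEMPTS = 3  # maximum priority-test attempts per file (K)

@dataclass
class Replica:
    name: str
    size: int
    priority: float  # Pj, recomputed periodically from qj and Vi

@dataclass
class Node:
    capacity: int
    replicas: list = field(default_factory=list)

    def free_space(self):
        return self.capacity - sum(r.size for r in self.replicas)

    def try_add_replica(self, f):
        # Evict strictly lower-priority replicas until file f fits,
        # or reject f after K_ATTEMPTS failed competitions.
        for _ in range(K_ATTEMPTS):
            if self.free_space() >= f.size:
                # Store the replica at half the original file's priority.
                self.replicas.append(Replica(f.name, f.size, f.priority / 2))
                return True
            victims = [r for r in self.replicas if r.priority < f.priority]
            if not victims:
                return False  # f fails the priority test
            self.replicas.remove(min(victims, key=lambda r: r.priority))
        return False

node = Node(capacity=100)
node.try_add_replica(Replica("a", 60, priority=0.9))
print(node.try_add_replica(Replica("b", 80, priority=1.5)))  # evicts "a": True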

Fig. 3 File Replica Addition and Deletion Process in PBDR

PERFORMANCE
The performance of each technique was evaluated in simulation tests on NS-2. We examine the hit rates and average delays of the four protocols.
We used the following metrics in the experiments:
Hit Rate
The number of requests successfully served by either original files or replica files.
Average delay
The average completion time of all requests that finish execution. The delay is calculated from the throughput and the performance of the requests.
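For concreteness, the two metrics could be computed from a per-request log as in the following sketch (the log format is our own assumption):

# Each record: (served successfully?, response delay in seconds or None)
request_log = [(True, 0.8), (True, 1.4), (False, None), (True, 0.6)]

completed = [delay for ok, delay in request_log if ok]
hit_rate = len(completed) / len(request_log)     # fraction of requests served
average_delay = sum(completed) / len(completed)  # mean delay of completed requests

print(f"hit rate = {hit_rate:.2f}, average delay = {average_delay:.2f} s")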
Hit Rate
Fig. 4(a) shows the hit rates of the four methods in the simulation results. The hit rates follow PBDR > SAF > DAFN > DCG; PBDR achieves a higher hit rate than the other methods. Since PBDR operates in a fully distributed way, its performance differs slightly from the others. PBDR considers the intermittent connection properties of disconnected MANETs when placing replicas. DCG only considers temporarily connected groups for file replication, which are not stable in MANETs; therefore it has a low hit rate. Assigning resources to files randomly cannot create more replicas for popular files and would lead to the lowest hit rate. This result demonstrates the effectiveness of the proposed PBDR in improving overall file availability and the correctness of our analysis for MANETs.

Fig. 4(a) Hit Rate

Average Delay
Fig. 4(b) shows the average delays of the four methods in the simulation results. The average delays follow PBDR < SAF < DAFN < DCG, which is the reverse of the ordering of the four methods on hit rate shown in Fig. 4(a). This is because the average delay is inversely related to the overall file availability. PBDR has high file availability. SAF distributes every file to different nodes, while DCG only shares data among simultaneously connected neighbor nodes, and DAFN has low file availability since all files receive an equal amount of memory resources for replicas. PBDR has the minimum average delay in the simulation results.

Fig. 4(b) Average Delay

Replication Cost
Fig. 4(c) shows the replication costs of the four methods. PBDR has the lowest replication cost, and the costs follow PBDR < DAFN < DCG < SAF. In PBDR, nodes only need to contact the file server for the replica list, leading to the lowest cost. DCG also generates a high replication cost, since the network partitions and its members need to transfer a large number of files to remove duplicate replicas. In PBDR, a node tries at most K times to create a replica for each of its files, producing a much lower replication cost than SAF and DCG. This result demonstrates the high energy-efficiency of PBDR. Combining all of the above results, we conclude that PBDR has the highest overall file availability and efficiency compared to existing methods, and that PBDR is effective for file replication in MANETs.

Fig. 4(c) Replication Cost


Replica Distribution
Fig. 4(d) shows the proportion of resources allocated to replicas under each protocol in the simulation. We see that PBDR's distribution is very close to DAFN's, and the other two follow SAF and DCG. SAF also shows similarity to PBDR in replica distribution. However, the difference between PBDR and SAF is that PBDR assigns priorities to popular files and runs the priority test for files in the network, whereas DAFN gives even priority to all files. Since popular files are queried more frequently, SAF still leads to low performance in file replication.

Fig. 4(d) Replica Distribution

Therefore, resources are allocated more strictly under PBDR, leading to higher efficiency, while the other replication protocols incur higher replication costs. Among the other three methods, those that favor popular files show closer similarity to PBDR, and PBDR gives the best overall performance in MANETs. The storage limitations of file replication can be overcome by handling file dynamics, and distributing files among all the nodes in the distributed network, across the different partitions, yields better performance. This confirms the correctness of our theoretical analysis for MANETs.
CONCLUSION
In this paper, we analyze the problem of how to allocate limited resources for replication and how to manage those resources in MANETs. While previous protocols consider only storage and resources, we also consider file additions and deletions performed dynamically in peer-to-peer communication in distributed systems. The Priority Based Dynamic Replication (PBDR) technique efficiently adds and deletes file replicas and manages the replicas over particular time intervals. The NS-2 simulator was used to analyze the effectiveness of the PBDR technique. The hit rate is higher than that of previous protocols, the average query delay is reduced, and the replication cost is lower than that of previous protocols. In summary, the PBDR protocol minimizes the average response delay in MANETs.
REFERENCES
[1] C. Siva Ram Murthy and B.S. Manoj, Ad Hoc Wireless Networks, Pearson Education, Second Edition, India, 2001.
[2] Nitin Vaidya, "MANET Tutorial," INFOCOM, 2006.
[3] Lixin Wang, "File Sharing on a Mobile Ad Hoc Network," Master's Thesis, Department of Computer Science, University of Saskatchewan, Canada, 2003.
[4] Kang Chen, "Maximizing P2P File Access Availability in Mobile Ad Hoc Networks through Replication for Efficient File Sharing," IEEE Transactions on Computers, vol. 64, no. 4, April 2015.
[5] Yang Zhang et al., "Balancing the Trade-Offs between Query Delay and Data Availability in MANETs," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 4, pp. 643-650, 2012.
[6] T. Hara, "Effective replica allocation in ad hoc networks for improving data accessibility," IEEE INFOCOM, 2001.
[7] V. Ramany and P. Bertok, "Replication of location-dependent data in mobile ad hoc networks," Proc. ACM MobiDE, pp. 39-46, 2008.
[8] Q. Ren, M. Dunham, and V. Kumar, "Semantic caching and query processing," IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 1, pp. 192-210, 2003.
[9] F. Sailhan and V. Issarny, "Scalable service discovery for MANET," IEEE International Conference on Pervasive Computing and Communications, pp. 235-244, 2005.
[10] L. Yin and G. Cao, "Supporting cooperative caching in ad hoc networks," IEEE Transactions on Mobile Computing, vol. 5, no. 1, pp. 77-89, 2006.
[11] J. Cao, Y. Zhang, G. Cao, and L. Xie, "Data consistency for cooperative caching in mobile environments," IEEE Computer, vol. 40, no. 4, pp. 60-66, 2007.
[12] B. Tang, H. Gupta, and S. Das, "Benefit-based data caching in ad hoc networks," IEEE Transactions on Mobile Computing, vol. 7, no. 3, pp. 289-304, 2008.
[13] X. Zhuo, Q. Li, W. Gao, G. Cao, and Y. Dai, "Contact Duration Aware Data Replication in Delay Tolerant Networks," Proc. IEEE 19th Int'l Conf. Network Protocols (ICNP), 2011.
[14] X. Zhuo, Q. Li, G. Cao, Y. Dai, B.K. Szymanski, and T.L. Porta, "Social-Based Cooperative Caching in DTNs: A Contact Duration Aware Approach," Proc. IEEE Eighth Int'l Conf. Mobile Ad Hoc and Sensor Systems (MASS), 2011.
[15] Z. Li and H. Shen, "SEDUM: Exploiting Social Networks in Utility-Based Distributed Routing for DTNs," IEEE Transactions on Computers, vol. 62, no. 1, pp. 83-97, Jan. 2012.
[16] V. Gianuzzi, "Data Replication Effectiveness in Mobile Ad-Hoc Networks," Proc. ACM First Int'l Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), pp. 17-22, 2004.
[17] S. Chessa and P. Maestrini, "Dependable and Secure Data Storage and Retrieval in Mobile Wireless Networks," Proc. Int'l Conf. Dependable Systems and Networks (DSN), 2003.
[18] X. Chen, "Data Replication Approaches for Ad Hoc Wireless Networks Satisfying Time Constraints," Int'l J. Parallel, Emergent and Distributed Systems, vol. 22, no. 3, pp. 149-161, 2007.
[19] J. Broch, D.A. Maltz, D.B. Johnson, Y. Hu, and J.G. Jetcheva, "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," Proc. ACM MOBICOM, pp. 85-97, 1998.
[20] M. Musolesi and C. Mascolo, "Designing Mobility Models Based on Social Network Theory," ACM SIGMOBILE Mobile Computing and Comm. Rev., vol. 11, pp. 59-70, 2007.
[21] P. Costa, C. Mascolo, M. Musolesi, and G.P. Picco, "Socially-Aware Routing for Publish-Subscribe in Delay-Tolerant Mobile Ad Hoc Networks," IEEE J. Selected Areas in Comm., vol. 26, no. 5, pp. 748-760, June 2008.

Voltage Sag Mitigation in Photovoltaic System


B. Pavitra#1 and C. Santhana Lakshmi*2
# PG Scholar, EEE Department, Sona College of Technology, Salem, TamilNadu, India
* Assistant Professor (Sr.G), EEE Department, Sona College of Technology, Salem, TamilNadu, India
Abstract - This work presents the mitigation of voltage sag in a photovoltaic system using a Dynamic Voltage Restorer (DVR). Models of the photovoltaic (PV) system, the DVR, and the local grid are implemented, and the simulation results are presented. In order to extract the maximum power from the PV array and to increase efficiency, a Maximum Power Point Tracker (MPPT) based on the P&O algorithm is used. For the exchange of real and reactive power, a cascaded H-bridge inverter is used. The proposed system was modeled in MATLAB Simulink. The objective of the proposed system is to alleviate voltage sag.

NOMENCLATURE
PV - Photovoltaic system
MPPT - Maximum Power Point Tracking
DVR - Dynamic Voltage Restorer
SMC - Sliding Mode Controller
SDCS - Separate DC Sources
P&O - Perturb and Observe

Index Terms - Dynamic Voltage Restorer, H-bridge multilevel inverter, Photovoltaic system, sliding mode controller


I. INTRODUCTION
The energy market is moving towards renewable energies. Nowadays, photovoltaic (PV) generation is assuming increased importance because of its advantages, such as simplicity, no fuel cost, little maintenance, and lack of noise and wear due to the absence of moving parts. For the grid-connected type, the main problem is power quality, which includes voltage sag, voltage flicker, and transients [1].
The DVR is one of the custom power devices; it is similar to a series-type FACTS device. The significance of this device is that it protects sensitive loads from sags and deviations on the supply side by acting as a fast voltage booster to compensate for falls or rises in the supply voltage. This series device injects a compensating voltage to counteract harmonic voltages whenever there is a distortion in the source voltage [2]-[3].
Among the existing control methods for the DVR, the SMC technique offers great simplicity and robustness. The proposed controller improves disturbance rejection and uses an invariant control structure that modifies the system by switching the controlled variable according to the known system state, making the system locus move on a predefined sliding surface [4].
A photovoltaic system directly converts sunlight into electricity when exposed to solar radiation. Modeling and simulation of the PV array are presented in [5]. In order to extract the maximum output from the PV source, MPPT is used [7]. In this paper, PV power generation is used as the energy source for the DVR when a disturbance occurs. This can be done through various converter topologies. The general function of the multilevel inverter is to synthesize the desired voltage from several separate dc sources, which may be obtained from batteries or solar cells [15].
The objectives of this paper include:
1. Voltage mitigation for sag or swell.
2. Effective use of renewable energy resources.
3. Compensation of voltage disturbances using a Dynamic Voltage Restorer (DVR).
In section II the proposed system model is presented and discussed. The simulation results are discussed for different conditions in section III. The conclusion and future scope of work are given in section IV.
II. PROPOSED POWER SYSTEM MODELING
The proposed power system model comprises the PV system, DVR, SMC, energy storage devices, and a cascaded H-bridge multilevel converter, as shown in Fig. 6.
A. Modeling of PV cell
The photovoltaic cells convert the incident photons into electron-hole pairs. The photovoltaic module is the result of associating a group of PV cells in series and parallel, and it represents the conversion unit in this generation system. The relationship between the PV cell output current and terminal voltage according to the single-diode model is governed by equations (1), (2), and (3).
Practical modules are composed of several connected PV cells, which requires the inclusion of the additional parameters Rs and Rp, as given by (3), where Iph is the current generated by the incident light, ID is the diode current, I0 is the reverse saturation current, q is the electron charge, k is the Boltzmann constant, a is the ideality factor, T is the temperature, Rs is the series resistance, and Rp is the parallel resistance.
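The three equations did not survive extraction of this page; the standard single-diode relations consistent with the symbols defined above are, in our reconstruction:

I = I_{ph} - I_D    (1)

I_D = I_0 \left[ \exp\!\left( \frac{qV}{akT} \right) - 1 \right]    (2)

I = I_{ph} - I_0 \left[ \exp\!\left( \frac{q(V + R_s I)}{akT} \right) - 1 \right] - \frac{V + R_s I}{R_p}    (3)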

Fig.1. Equivalent circuit of PV array

A number of cells are connected to form a PV array. The equivalent circuit of the PV array is shown in Fig. 1, where Ns denotes the number of cells in series and Np the number of cells in parallel. Cells connected in series provide a greater output voltage, and cells connected in parallel provide greater output currents.
The I-V characteristic of a PV device depends not only on its internal characteristics but also on external influences such as temperature and irradiation, as expressed by equation (4), where G is the irradiation on the surface and Gn is the nominal irradiation.
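Equation (4) is likewise missing from the extracted text; in the standard formulation (see, e.g., [6]) the light-generated current scales with irradiance and temperature as

I_{ph} = \left( I_{ph,n} + K_I \,\Delta T \right) \frac{G}{G_n}    (4)

where I_{ph,n} is the light-generated current at nominal conditions, K_I is the short-circuit current temperature coefficient, and \Delta T = T - T_n; these additional symbols are our assumption, as only G and G_n are defined in the surviving text.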


B. MPPT algorithm
MPPT, or Maximum Power Point Tracking, is a technique used for extracting the maximum available power from a PV module under given conditions. The voltage at which a PV module produces maximum power is called the maximum power point (or peak power voltage). The Perturb and Observe (P&O) method is the most frequently used algorithm to track the maximum power, due to its simple structure and the few parameters it requires. This method finds the maximum power point of PV modules by iteratively perturbing, observing, and comparing the power generated by the PV modules [11].
The power-versus-voltage curve for the P&O algorithm in Fig. 2 shows the terminal voltage and output power generated by a PV module. It can be observed that, regardless of the magnitude of the solar irradiance and the terminal voltage of the PV modules, the maximum power point is obtained when the condition dP/dV = 0 is satisfied. The advantages of the P&O method are its simple structure, easy implementation, and the few parameters it requires.
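A minimal Python sketch of one P&O update step is given below, assuming measured module voltage and power from the previous and current iterations; the step size and function name are illustrative.

def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    # If the last perturbation increased power, keep moving in the same
    # direction; otherwise reverse, so the operating point oscillates
    # around the maximum power point where dP/dV = 0.
    dP = p - p_prev
    dV = v - v_prev
    if dP == 0:
        return v  # already at the maximum power point
    if (dP > 0) == (dV > 0):
        return v + step  # keep perturbing towards higher voltage
    return v - step  # reverse the perturbation

# Power fell after the last voltage increase, so the step is reversed:
print(perturb_and_observe(v=30.5, p=180.0, v_prev=30.0, p_prev=184.0))  # 30.0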
C. DC-DC converter
DC-DC converters can be used as switching-mode regulators to convert an unregulated dc voltage into a regulated dc output voltage. The regulation is normally achieved by pulse-width modulation at a fixed frequency, and the switching device is a BJT, MOSFET, or IGBT. Here a boost converter is used, so a voltage boost occurs across the load, causing the output voltage to be higher than the input voltage.
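In the ideal continuous-conduction case (a textbook relation, not derived in this paper), the boost converter output follows the duty cycle D as

V_o = \frac{V_{in}}{1 - D}, \qquad 0 \le D < 1,

so, for example, D = 0.5 doubles the input voltage; the MPPT algorithm adjusts D to hold the PV module at its maximum power point.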

Fig.2. Power versus Voltage curve for P&O algorithm

D. Sliding mode controller
Sliding mode control is a form of variable structure control. It is a nonlinear control that alters the dynamics of a nonlinear system by the application of a high-frequency switching signal. The main strength of sliding mode control is its robustness [4].
Sliding mode control is a nonlinear control methodology that uses a combination of a continuous equivalent control (ueq) and a discontinuous switching control (usw), which force the state trajectory to reach a predefined sliding surface/switching surface (s) in the phase plane and then keep it on the surface in the sliding mode until the desired state is reached. The equivalent control ensures that the operating point slides along the sliding surface until the error approaches zero. The closed-loop dynamics of the system are given by equation (5); the states tend to zero if the poles lie in the left half-plane, the overshoot condition does not occur, and the law acts as a state feedback control. The principle of sliding mode control is shown in Fig. 3.

\dot{X}(t) = (A + BK)X(t)    (5)

Fig.3. Principle of sliding mode controller


E. Multilevel inverter
A cascaded multilevel inverter consists of a series of H-bridge (single-phase, full-bridge) inverter units. The general function of this multilevel inverter is to synthesize a desired voltage from several separate dc sources (SDCSs), which may be obtained from batteries, fuel cells, or solar cells [11]. Fig. 4 shows the basic structure of a single-phase cascaded inverter with SDCSs. Each SDCS is connected to an H-bridge inverter. The switches are controlled to generate three discrete output voltage levels: Vdc, 0, and -Vdc. Each H-bridge unit generates a quasi-square waveform by phase-shifting the switching times of its positive and negative phase legs. Note that each switching device always conducts for 180 degrees (half a cycle) regardless of the pulse width of the quasi-square wave. This switching method makes the current stress of all the switching devices equal.
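The staircase synthesis can be sketched in a few lines of Python: with N cascaded bridges, each contributing -Vdc, 0, or +Vdc, the inverter approximates a sinusoidal reference with 2N+1 discrete levels. The reference amplitude and N below are illustrative values of ours.

import math

def cascaded_output(reference, n_bridges, vdc):
    # Quantize the reference onto the 2N+1 levels reachable by N bridges.
    level = round(reference / vdc)  # nearest whole number of Vdc steps
    level = max(-n_bridges, min(n_bridges, level))  # one step per bridge at most
    return level * vdc

N, VDC, AMP = 2, 100.0, 200.0
for deg in range(0, 91, 30):
    ref = AMP * math.sin(math.radians(deg))
    print(f"{deg:3d} deg: reference {ref:7.1f} V -> output {cascaded_output(ref, N, VDC):6.1f} V")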

Fig. 4 Basic structure of H- bridge inverter

F. Dynamic voltage restorer

Dynamic Voltage Restorer (DVR) is a solid state device that injects the voltage into the system
in order to regulate the load side voltage [13]. The basic components of the DVR as shown in fig.5
which comprises of Injection transformer,

Fig. 5 Basic Components of DVR

Harmonic filter, voltage source converter, Dc charging unit. The primary function is to boost
up the load side voltage in the event of disturbance in order to avoid the power disruptions to the load.
The difference between the pre sag voltage and the sag voltage is injected by the DVR by supplying
the real power from the energy storage element and the reactive power. During the normal operation
as there is no sag, DVR will not supply any voltage to the load.
The momentary amplitude of the three injected phase voltages is controlled. This means that
any differential voltage caused by the transient disturbances in the ac feeder will be compensated by
an equivalent voltage generated by the converter and injected at the medium voltage level through
the booster transformer.
III. SIMULATION RESULTS AND DISCUSSIONS
Simulation results are given in fig. 7 to fig. 11. Voltage disturbances are sensed and compared
with the desired voltage; based on the error signal, the modulating signal varies the PWM signal and
the necessary voltage is injected by the DVR.

Fig. 6 Overall block diagram of the proposed system

Fig. 7 PV output voltage waveform

Fig. 8 DC-DC Converter output waveform



Fig. 9 Multilevel inverter output waveform

Fig. 10 Load voltage without DVR waveform

Fig. 11 Load voltage with DVR waveform

Fig. 6 shows the overall block diagram of the system. Figs. 7 and 8 show the PV output and the
DC-DC converter output. The multilevel inverter output is shown in fig. 9. Figs. 10 and 11 show the
load output voltage before and after interfacing with the DVR, respectively.
IV. CONCLUSION
In this work, the voltage on the distribution side was improved when a disturbance occurred in the
load feeder by means of a DVR, which provides excellent compensation for voltage disturbances. The
simulation was carried out with a PV-interfaced multilevel inverter and DVR using MATLAB/SIMULINK
software.
The future scope of the present work is to increase the number of multilevel inverter stages to
ensure harmonic-free operation. The desired results for a nonlinear load can be obtained by means of a
fuzzy based controller to drive the power switches in the inverter.
REFERENCES
[1] A. Ghosh and G. Ledwich, Power Quality Enhancement Using Custom Power Devices, Kluwer Academic Publishers, 2002.
[2] Ahmed M. Massoud and Shehab Ahmed, "Evaluation of Multilevel cascaded type Dynamic Voltage Restorer employing discontinuous space vector modulation", IEEE Transactions on Industrial Electronics, Vol. 57, No. 7, July 2010.
[3] Benachaiba Chellali and Ferdi Brahim, "Voltage Quality Improvement Using DVR", Electrical Power Quality and Utilisation Journal, Vol. 14, No. 1, 2008.
[4] Jaume Miret, Jorge Luis Sosa and Miguel Castilla, "Sliding mode Input-Output Linearization Controller for the DC/DC ZVS resonant converter", IEEE Transactions on Industrial Electronics, Vol. 59, No. 3, pp. 1554-1564, March 2012.
[5] P. P. Dash and A. Yazdani, "A mathematical model and Performance evaluation for a single stage grid connected Photovoltaic (PV) system", International Journal of Emerging Electric Power Systems, Vol. 9, Issue 6, Article 5, 2008.
[6] Ernesto Ruppert Filho, Jonas Rafael Gazoli and Marcelo Gradella, "Comprehensive Approach to Modelling and Simulation of Photovoltaic Arrays", IEEE Transactions on Power Electronics, Vol. 24, No. 5, pp. 1198-1208, May 2009.
[7] P. L. Chapman and T. Esram, "Comparison of photovoltaic array maximum power point tracking techniques", IEEE Transactions on Energy Conversion, Vol. 22, No. 2, pp. 434-449, June 2007.
[8] Arindam Ghosh, Avinash Joshi and Rajesh Gupta, "Performance comparison of VSC based shunt and series compensators used for load voltage control in distribution systems", IEEE Transactions on Power Delivery, Vol. 26, No. 1, pp. 268-278, January 2011.
[9] Ritwik Majumder, "Reactive power compensation in single phase operation of Microgrid", IEEE Transactions on Industrial Electronics, Vol. 60, No. 4, pp. 1403-1416, April 2013.
[10] Bhim Singh and Sabha Raj Arya, "Adaptive theory based Improved linear Sinusoidal Tracer control algorithm for STATCOM", IEEE Transactions on Power Electronics, Vol. 28, No. 8, pp. 3768-3778, August 2013.
[11] Basim Alsayid and Samer Alsadi, "Maximum power point tracking simulation for Photovoltaic system using Perturb and Observe algorithm", International Journal of Innovation and Technology, Vol. 2, Issue 6, December 2012.
[12] Gaurag Sharma and Jay Patel, "Modeling and Simulation of solar photovoltaic module using Matlab/Simulink", International Journal of Research in Engineering and Technology, Vol. 02, Issue 03, March 2013.
[13] P. Boochiam and N. Mithulanathan, "Understanding of Dynamic Voltage Restorers through Matlab", Thammasat International Journal of Science and Technology, Vol. 11, No. 3, September 2006.
[14] M. Illindala, G. Venkataramanan and B. Wang, "Operation and control of a Dynamic voltage restorer using transformer coupled H-bridge converters", IEEE Transactions on Power Electronics, Vol. 21, No. 4, pp. 1053-1061, July 2006.
[15] Divya Subramanian and Rebiya Rasheed, "Cascaded Multilevel Inverter using the pulse width modulation technique", International Journal of Engineering and Innovative Technology, Vol. 3, Issue 1, July 2013.


Integral Controller for Load Frequency Control in Deregulated Environment
S. Ramyalakshmi#1 and V. Shanmugasundaram*2
#PG Scholar, EEE Department, Sona College of Technology, Salem, TamilNadu, India
*Assistant Professor, EEE Department, Sona College of Technology, Salem, TamilNadu, India

Abstract - This paper deals with the Automatic Generation Control (AGC) of a two area thermal-thermal
system in the restructured power system environment. The main objective of automatic generation
control is to regulate the power output of the electric generators within an area in response to changes
in the system frequency and tie line loading. In the present competitive electricity market, fast changes
in power consumption may cause frequency oscillations. The oscillation of the system frequency
may sustain and grow into a serious frequency stability problem if no adequate damping is available.
The concept of the DISCO Participation Matrix (DPM) is introduced and applied to the two area
thermal-thermal system. AGC in the restructured power system environment should be designed such
that a DISCO can contract individually with a GENCO for power. The DPM is presented to simulate
the GENCOs and DISCOs, and with it the dynamic responses are obtained to satisfy the AGC requirements.

NOMENCLATURE
LFC - Load Frequency Control
AGC - Automatic Generation Control
ACE - Area Control Error
CPF - Contract Participation Factor
DISCO - Distribution Company
GENCO - Generation Company
DPM - DISCO Participation Matrix
R - Governor speed regulation
B - Frequency bias factor
Ki - Integral gain
TG - Governor time constant
TP - Power system time constant
TT - Turbine time constant
KP - Power system gain constant
i - Area index (1, 2)
f - Nominal system frequency
ΔPtie - Change in tie line power (p.u. MW)
Schd - Scheduled
Act - Actual

Index Terms - AGC, DISCO, DISCO Participation Matrix, GENCO, Restructured Power System.

I. INTRODUCTION
Large scale power systems are normally composed of control areas or regions representing coherent
groups of generators. An interconnected power system is basically a large power system consisting of a
number of subsystems, or areas, connected by tie lines. The objective of a control strategy is to generate
and deliver power in an interconnected power system as economically and reliably as possible, while
maintaining the voltage and frequency within permissible limits. The Load Frequency Control (LFC)
loop controls the real power and frequency, while the Automatic Voltage Regulator (AVR) loop controls
the reactive power and voltage. With the growth of interconnected power systems, LFC has gained more
importance.
It is a primary goal of the AGC to control the tie line power flow at the scheduled value defined
by the contracts among various VIUs and to maintain generation equal to the local load, thus keeping
the frequencies of the control areas as close to the nominal value as possible during normal load
changes. In case of loss of generation in an area, the neighboring utility will come to its help. In the
classical AGC system, this balance is achieved by detecting the frequency and tie line power deviations
to generate the ACE (area control error) signal, which is in turn utilized in the integral feedback control
strategy for a two-area system. It should be noted that this is a linearized model of the AGC and is hence
based on the assumption that the frequency and tie line power deviations are small, as referred in [2].
In the restructured power system, the engineering aspects of planning and operation have to be
reformulated, with the essential ideas remaining the same. With the emergence of the distinct identities of
GENCOs, TRANSCOs, DISCOs and the ISO, many of the ancillary services of a vertically integrated
utility will have a different role to play and hence have to be modeled differently. In the new scenario,
a DISCO can contract individually with a GENCO for power, and these transactions are done under the
supervision of the ISO, as referred in [5].
The concept of the DISCO participation matrix (DPM) is utilized to ease the visualization and
implementation of the contracts. The information flow of the contracts is superimposed on the traditional
AGC, and the simulations reveal some interesting patterns. Trajectory sensitivities are helpful in
studying the parameters as well as in the optimization of the AGC parameters [4],[6].
The objectives of this paper include:
1. The frequencies of the various bus voltages are maintained at the scheduled frequency.
2. The tie line powers are maintained at the scheduled levels.
3. The total power is shared by all the generators economically.
4. The dynamic responses obtained should satisfy the requirements of AGC.
In section II, the linearized model of an interconnected two area system in the restructured power
system is presented and discussed. The mathematical formulation is provided in section III and the
simulation results are discussed in section IV. Conclusions and future scope are presented in
section V.
II. SYSTEM INVESTIGATED
A. Linearized model of an interconnected two area system
In a two area system, two single area systems are interconnected via a tie line. The interconnection
increases the overall system reliability: even if some generating units in one area fail, the
generating units in the other area can compensate to meet the load demand. The power flowing across
the tie line can be modeled using the DC load flow equation, as referred in [11]:

Ptie,12 = (δ1 - δ2) / X12

where X12 is the tie line reactance and δ1, δ2 are the voltage phase angles of the two areas. This tie
flow is a steady-state quantity. For purposes of analysis here, we perturb the above equation to obtain
deviations from nominal flow as a function of deviations in phase angle from nominal:

ΔPtie,12 = T12 (Δδ1 - Δδ2)

where T12 = 1/X12 is the synchronizing coefficient of the tie line.


The areas are connected by a single transmission line. The power flow over the transmission
line will appear as a positive load to one area and an equal but negative load to the other, or vice versa,
depending on the direction of flow. The direction of flow will be dictated by the relative phase angle
between the areas, which is determined by the relative speed-deviations in the areas.

Fig. 1 Block diagram of interconnected system

Fig. 1 represents the tie line power flow defined as going from area 1 to area 2. The flow
therefore appears as a load to area 1 and a power source (negative load) to area 2. If
one assumes that the mechanical powers are constant, the rotating masses and tie line exhibit damped
oscillatory characteristics known as synchronizing oscillations. It is quite important to analyze the
steady-state frequency deviation, tie-flow deviation and generator outputs for an interconnected
area after a load change occurs. The net tie flow is determined by the net change
in load and generation in each area.
B. Linearized model of an interconnected two area restructured power system
The traditional power system industry has a vertically integrated utility (VIU) structure. In the
restructured or deregulated environment, vertically integrated utilities no longer exist. The utilities no
longer own generation, transmission, and distribution together; instead, there are three different entities,
viz., GENCOs (generation companies), TRANSCOs (transmission companies) and DISCOs (distribution
companies). As there are several GENCOs and DISCOs in the deregulated structure, a DISCO has the
freedom to have a contract with any GENCO for a transaction of power. After deregulation, any DISCO
can demand power supply from any GENCO; there is no restriction on a DISCO purchasing electricity
from any GENCO. To capture the concept of this kind of contract, the DISCO participation matrix
(DPM) is presented [9].
A DISCO also has the freedom to make a contract with a GENCO in another control area; such
transactions are called bilateral transactions. All such transactions are completed under the supervision
of the independent system operator (ISO). The ISO controls various ancillary services, one of which is
AGC.
A DPM is a matrix with the number of rows equal to the number of GENCOs and the number
of columns equal to the number of DISCOs in the system [9]. Each entry in this matrix can be thought
of as the fraction of the total load contracted by a DISCO (column) towards a GENCO (row). The sum of
all the entries in a column of the DPM is unity [9].

Fig. 2 Two area system in restructured power system

The DPM may be defined as

        | cpf11  cpf12  ...  cpf1n |
DPM  =  | cpf21  cpf22  ...  cpf2n |
        |  ...    ...   ...   ...  |
        | cpfm1  cpfm2  ...  cpfmn |

where m is the number of GENCOs, n is the number of DISCOs, and cpfjd is the contract participation
factor of the jth GENCO in the load following of the dth DISCO. The ACE participation factors are
apf1 = 0.5, apf2 = 1 - apf1 = 0.5, apf3 = 0.5 and apf4 = 1 - apf3 = 0.5. The load is demanded only by
DISCO1 and DISCO2, as defined in [2],[14].
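As an illustration of how a DPM ties DISCO demands to GENCO set points (the matrix values below are hypothetical, not the paper's case study), the contracted generation change of each GENCO is the DPM times the vector of DISCO demands, and every column must sum to unity:

import numpy as np

# Hypothetical 4-GENCO x 4-DISCO participation matrix (columns sum to 1).
dpm = np.array([
    [0.5, 0.25, 0.0, 0.3],
    [0.2, 0.25, 0.0, 0.0],
    [0.0, 0.25, 1.0, 0.7],
    [0.3, 0.25, 0.0, 0.0],
])
assert np.allclose(dpm.sum(axis=0), 1.0)   # each DISCO's contracts total 100%

# Demands of DISCO1..DISCO4 in p.u. MW (only DISCO1 and DISCO2 load here).
d_disco = np.array([0.1, 0.1, 0.0, 0.0])

# Scheduled (contracted) generation change of each GENCO.
dp_genco = dpm @ d_disco
print(dp_genco)        # [0.075 0.045 0.025 0.055]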

III. PROBLEM FORMULATION


The objective of AGC is to establish primary frequency regulation, restore the frequency to
its nominal value as quickly as possible and minimize the tie line power flow oscillations between
neighboring control areas. In the present work, an Integral Square Error (ISE) criterion is used as the
objective function to be minimized. The Area Control Error of area i may be given as

ACEi = ΔPtie,i + Bi Δfi

where Bi is the frequency bias factor of area i.
Fig. 3 shows the block diagram of the two area thermal-thermal system with the Restructured
power system.
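Since the integral controller is tuned against the ISE criterion, a minimal sketch of that criterion follows; the decaying signals below are synthetic placeholders standing in for the Simulink outputs:

import numpy as np

# Integral Square Error over the simulated ACE trajectories of both areas.
t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]
ace1 = 0.2 * np.exp(-0.5 * t) * np.cos(3.0 * t)   # placeholder ACE of area 1
ace2 = 0.1 * np.exp(-0.4 * t) * np.cos(3.0 * t)   # placeholder ACE of area 2

ise = np.sum((ace1**2 + ace2**2) * dt)            # J = integral of (ACE1^2 + ACE2^2) dt
print(f"ISE = {ise:.5f}")   # the integral gain Ki is tuned to minimize this value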

Fig. 3 Linearized model of an interconnected system in Restructured power system

IV. SIMULATION RESULTS AND DISCUSSION


An interconnected power system is considered as being divided into control areas connected
by tie lines. In each control area, all generators are assumed to form a coherent group. Some
of the areas in the power system are considered to have load perturbations of the same magnitude.
The detailed block diagram of the interconnected power system is given in fig. 3.
Each area supplies its user pool, and tie-lines allow electric power to flow between areas.
Therefore, each area affects the others; that is, a load perturbation in one of the areas affects the output
frequencies of the other areas as well as the power flow on the tie-lines. Due to this, the control system
of each area needs information about the transient situation in all areas to bring the local frequency to
its steady state value. While the information about each area is found in its frequency, the information
about the other areas is in the perturbations of the tie line power flows.
In this paper, simulations of the two area power system are performed in Matlab/Simulink, with
and without deregulation. A step load perturbation of 0.2 p.u. was applied in area 1, and the frequency
oscillations and tie line power deviations were investigated before and after deregulation. The
investigations are carried out considering two cases.
Case 1:
Response of two area system before deregulation

Fig. 4 Frequency deviation in area 1



Fig. 5 Frequency deviation in area 2

Fig. 6 Power deviation in area 1

Fig. 7 Power deviation in area 2

Fig. 8 Change in tie line power


Case II:
Responses of the two area system after deregulation

Fig. 9 Frequency deviation in area 1

Fig. 10 Frequency deviation in area 2

Fig. 11 Power deviation in area 1

Fig. 12 Power deviation in area 2



Fig. 13 Responses after Deregulation

Figs. 4 - 8 show the response of the two area system before deregulation, in terms of Δf1, Δf2,
ΔP1, ΔP2 and ΔPtie of areas 1 and 2. Figs. 9 - 13 show the corresponding responses of the two area
system after deregulation.

V. APPENDIX

VI. CONCLUSION
AGC is important in the power system. The frequency and tie-line power deviation responses
were obtained for a 20% SLP. In this work, we compared the dynamic responses of frequency and
tie-line power before and after deregulation. The concepts of DISCO and GENCO are very useful
in the deregulated environment. The design of the Integral controller also plays an important role in
obtaining the results both before and after deregulation. The simulation results are satisfactory for the
two operating cases of AGC before and after deregulation.
The future scope of the present work includes the coordinated control of SMES and SSSC in
the deregulated environment. A PID controller can also be used instead of the Integral controller to
improve the dynamic response of the two area thermal-thermal power system. In future, other artificial
intelligence techniques can be applied for better results.
REFERENCES
[1] N. Jaleeli, L. S. Van Slyck, D. N. Ewart, L. H. Fink and A. G. Hoffmann, "Understanding automatic generation control", IEEE Trans. Power Syst., vol. 7, no. 3, pp. 1106-1122, 1992.
[2] V. Donde, M. A. Pai and I. A. Hiskens, "Simulation of Bilateral Contracts in an AGC System After Restructuring", IEEE Trans. Power Syst., vol. 4, no. 6, 2003.
[3] Praghnesh Bhatt, R. Roy and S. P. Ghoshal, "Optimized multiarea AGC simulation in restructured power system", Int. J. Electrical Power Energy Syst., 2010.
[4] B. H. Bakken and O. S. Grande, "Automatic Generation Control in a Deregulated Power System", IEEE Trans. Power Systems, vol. 13, no. 4, pp. 1401-1406, 1998.
[5] Vaibhav Donde, M. A. Pai and Ian A. Hiskens, "Simulation and Optimization in an AGC System after Deregulation", IEEE Trans. Power Systems, vol. 16, no. 3, 2001.
[6] A. Suresh Babu, Ch. Saibabu and S. Sivanagaraju, "Tuning of Integral Controller for Load Following of SMES and SSSC based Multi Area System under Deregulated Scenario", IOSR Journal, e-ISSN: 2278-1676, Volume 4, Issue 3, pp. 08-18, 2013.
[7] O. I. Elgerd, Electric Energy Systems Theory: An Introduction, McGraw Hill, 1982.
[8] R. D. Christie and A. Bose, "Load Frequency Control Issues In Power System Operation after Deregulation", IEEE Transactions on Power Systems, vol. 11, no. 3, pp. 1191-1200, August 1996.
[9] Vijay Rohilla, K. P. Singh Parmar and Sanju Saini, "Optimization of AGC parameters in the restructured power system environment using GA", ISSN: 2231-6604, Volume 3, Issue 2, pp. 30-40, 2012.
[10] D. P. Kothari and I. J. Nagrath, Power System Engineering, 2nd edition, TMH, New Delhi, 2010.
[11] P. Kundur, Power System Stability & Control, New York: McGraw-Hill, 1994, pp. 418-448.
[12] Barjeev Tyagi and S. C. Srivastava, "Automatic Generation Control Scheme based on Dynamic Participation of Generators in Competitive Electricity Markets", Fifteenth National Power Systems Conference (NPSC), IIT Bombay, December 2008.
[13] N. Bekhouche, "Automatic Generation Control Before and After Deregulation", in Proc. of the Thirty-Fourth Southeastern Symposium on System Theory, March 2002, pp. 321-323.
[14] Arun Kumar and Sirdeep Singh, "Automatic Generation Control Issues in Power System Operation after Deregulation: Review", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 4, Issue 5, 2014.
[15] Hadi Saadat, Power System Analysis, McGraw-Hill, 2002.


A Survey on Wavefront Sensorless Techniques for Free Space Optical Communication Systems
K. Goamthi1, T. Pasupathi2, N. Gayathri3 and S. Barakathu Nisha4
1,3,4 U.G. Student, Kings College of Engineering, Thanjavur, India
2 Assistant Professor, Kings College of Engineering, Thanjavur, India

Abstract - Free Space Optical Communication is a wireless optical technology in which a laser beam
travels through the atmospheric channel. Line-of-sight is maintained through the atmosphere between
the transmitter and receiver. When the laser beam propagates through the free space atmosphere,
it can be severely affected by atmospheric turbulence. Adaptive Optics is used to compensate the
atmospheric turbulence, thereby improving the quality of the optical system; in a conventional setup the
wavefront sensor (WFS) plays an important role in measuring the phase aberration. In this paper we
present an overview of different sensorless techniques that can be used in FSOC to compensate
atmospheric turbulence.
Keywords - FSOC, Adaptive Optics, deformable mirror, wavefront sensorless techniques.

I. INTRODUCTION
Free space optical communication is a technique that propagates light in free space, similar
to wireless radio transmission. Growing commercial deployment of FSOC has led to an increase in
research and development activities over the past few years. Currently, FSOC allows the transmission
of data at rates up to 2.5 Gbps. Unlike microwave and RF wireless communication, it is a secure,
licence free technique. The main concern which has to be considered in FSOC is the Line-Of-Sight
(LOS): line-of-sight must be maintained during transmission in order to achieve a good BER (Bit Error
Rate). The block diagram of the FSOC system is shown in figure 1.
The major advantages of the FSOC system are its high data rate [3], high security and licence
free operation. Compared to fiber optic communication, the cost of installation is lower. It is immune
to electromagnetic interference, and the beam is invisible and eye safe, so there are no health hazards.
It is most widely used in telecommunication and computer networking, cellular communication
backhaul, military and security applications and disaster recovery, among other emerging applications.

Fig 1.Block diagram of the FSOC system

The main limiting factor of the FSOC system is atmospheric turbulence, since the outdoor
environment that acts as the transmission medium depends on unpredictable weather conditions. Fog,
rain, dust and other dispersing particles in the atmosphere lead to beam dispersion, scattering, beam
attenuation, beam spreading and scintillation. This results in wave distortion and fluctuations in the
phase and amplitude of the wave.
To overcome the limiting factors of the external environment, various corrective techniques are
adopted. Some of the wavefront correction techniques used in sensorless adaptive optics systems
are:
(i) Modal-based Tabu Search (MBTS) algorithm [4].
(ii) Stochastic Parallel Gradient Descent (SPGD) [5].
(iii) Simulated Annealing (SA) [7].

II. CONVENTIONAL ADAPTIVE OPTICS


Adaptive optics is used to correct wavefront distortion in astronomical and optical
communication systems. The technique is implemented to recover the wavefront degraded by
atmospheric turbulence.
In a conventional adaptive optics system, the disturbance of the optical wave is detected
by the wavefront sensor. In this design, the laser beam that propagates in free space is compensated
with the actuators of the deformable mirror. The sensed information is decoded by the control
system, which monitors the disturbance and controls the actuators of the deformable mirror to
compensate the distorted wavefront. The design of the adaptive optics system is illustrated in
fig. 2.

Fig 2. Schematic diagram of adaptive optics system.

Adaptive optics has been applied in retinal imaging, astronomical imaging [1],
microscopy [12], vision science [11] and laser communication systems [13].
III. SENSORLESS ADAPTIVE OPTICS
For the last few years, a wavefront sensor such as the Shack-Hartmann sensor has been used for the
detection of the atmospheric aberration in the incoming laser beam, and the deformable mirror is the
adaptive element used to introduce an additional distortion that cancels the aberration
in the system. Because of the hardware complexity of the wavefront sensor in a conventional AO
system, we consider the wavefront sensorless approach.
The major concern of the sensorless AO system is to determine the DM shape that removes the
aberration in the laser beam. The control algorithm is designed to provide the relationship
between the second order atmospheric aberration and the far-field intensity. The main advantage of the
wavefront sensorless system is that the far-field intensity acts as the feedback signal.
Algorithms for wavefront sensorless adaptive optics systems are broadly classified into two
groups: stochastic and image based. The stochastic algorithms widely in use are the
genetic algorithm and ant colonies. The image based algorithms include sensorless modal correction,
low spatial frequencies, point spread function optimization, Optical Coherence Tomography (OCT)
and laser process optimization.
1. Stochastic algorithms for wavefront sensor-less correction:
A. Genetic Algorithm:
In the Genetic Algorithm, the solution of a problem is searched for by simulating the
evolutionary process. The algorithm selects the strongest elements, those that survive from the possible
solutions of the population, and they have the ability to reproduce to generate the new generation [10].
This algorithm helps solve a large class of problems without any reference or prior knowledge.
The major steps of the Genetic Algorithm are the selection function, the reproduction function
and population evaluation.
Step 1: In the selection process, the function may be either deterministic or probabilistic. In the
probabilistic case, the strongest elements have the best chance of being selected and reproducing the
next generation.
Step 2: The reproduction step produces the new generation from the old one.
Step 3: In the evaluation process, there are two operators: crossover and mutation.
Crossover: the process of mixing two parent genes, slightly modifying them to obtain the new
generation.
Mutation: in this process, the genes of the parents are modified randomly.
The above steps are iterated to find the strongest individual, i.e. the one with the best fitness,
from a large random population.
The application of the genetic algorithm in adaptive optics to laser focalization is
explained as follows. In an optical communication system, accurate laser beam alignment is not simple
to achieve. The intensity of the laser beam in the focal spot depends on the focal point, and the
misalignment can be eliminated with the adaptive optics system. The measured far-field signal, including
its harmonic distortions, is fed back to the genetic algorithm, which selects a better focal
point to obtain an undistorted laser beam.
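To make the loop concrete, here is a minimal, self-contained sketch of a genetic algorithm driving a deformable-mirror-like actuator vector to maximize a focal intensity metric. The quadratic metric and all parameters are invented for illustration; a real system would replace metric() with the measured photodetector signal.

import numpy as np

rng = np.random.default_rng(0)
N_ACT, POP, GENS = 8, 30, 60          # actuators, population size, generations
target = rng.uniform(-1, 1, N_ACT)    # unknown "best" mirror shape (stand-in for turbulence)

def metric(v):
    # Stand-in for the measured focal-spot intensity: peaks when v == target.
    return np.exp(-np.sum((v - target) ** 2))

pop = rng.uniform(-1, 1, (POP, N_ACT))          # initial random population
for _ in range(GENS):
    fit = np.array([metric(v) for v in pop])
    order = np.argsort(fit)[::-1]
    parents = pop[order[:POP // 2]]             # selection: keep the fittest half
    # Crossover: average random pairs of parents to create children.
    idx = rng.integers(0, len(parents), (POP - len(parents), 2))
    children = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])
    children += 0.05 * rng.standard_normal(children.shape)   # mutation
    pop = np.vstack([parents, children])

best = max(pop, key=metric)
print(f"best metric {metric(best):.4f} (1.0 = perfect correction)")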
B. Ant colonies:
Ant colony optimization is one of the most successful algorithms for finding a feasible path to
a destination with the desired outcome. In general, it mimics the behavior of ants in nature.
The following steps describe the ant colony optimization algorithm.
Step 1: Initially, the positions of the ants are set on the trail.
Step 2: Calculate the length of the path.
Step 3: Update the pheromone vector value.
Step 4: Change the positions of the ants on the trail.
An application of ant colony optimization in optical systems is quantum optics.

Fig 3: Ant Colonies Algorithm.


2. Image based algorithms:
Although stochastic algorithms have been implemented to optimize the important parameters
of optical communication, newer methods have been adopted to be more efficient. The modal based
description of the aberrations helps to overcome some of the drawbacks of the stochastic approach,
such as the need for preliminary knowledge of the algorithm terms and long run times. Image based
algorithms are effective both in laser optimization and in visual optics.
A. Devices for sensorless modal correction:
Electrostatic membrane deformable mirrors rely on the electrostatic pressure between an
actuator pad array and a thin metalized membrane [8]. The more actuators the mirror has and the better
the wavefront analysis, the more finely the mirror can be controlled. An optimization algorithm is used
with the deformable mirror, acquiring the deformation generated by each electrode.
B. Low Spatial Frequencies:
In an optical imaging system, the sharpness of the image depends on the quality of the wavefront.
The low spatial frequency content of an image can be used as the metric for the optimization. It is
computed from the acquisition of a series of images with predefined bias aberrations; the images carry
the information about the aberration to be corrected. The special feature of this metric is that the
image sharpness I({ai}) is a quadratic function of the Lukosz polynomial coefficients {ai} [8]:

I({ai}) ≈ I0 - γ Σi ai²

where I0 is the sharpness of the aberration-free image and γ a positive constant. This quadratic form
implies that each mode can be optimized independently: the measured metric values are interpolated
with a quadratic function to find the best correction for each aberration mode.
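The independent, mode-by-mode optimization can be sketched as follows: for each mode, measure the metric at three bias values, fit a parabola, and move to its maximum. The quadratic metric model below is synthetic, and the probe amplitude and gain values are assumptions:

import numpy as np

rng = np.random.default_rng(1)
N_MODES = 5
true_ab = rng.uniform(-0.5, 0.5, N_MODES)      # unknown aberration coefficients

def sharpness(corr):
    # Synthetic quadratic metric: I = I0 - gamma * sum((a_true + corr)^2)
    return 1.0 - 0.8 * np.sum((true_ab + corr) ** 2)

corr = np.zeros(N_MODES)
bias = 0.3                                      # probe amplitude for each mode
for m in range(N_MODES):
    probes = np.array([-bias, 0.0, +bias])
    vals = []
    for b in probes:
        trial = corr.copy(); trial[m] += b
        vals.append(sharpness(trial))
    # Fit I(b) = c2*b^2 + c1*b + c0 and step to the parabola's maximum.
    c2, c1, c0 = np.polyfit(probes, vals, 2)
    corr[m] += -c1 / (2 * c2)

print("residual aberration:", np.round(true_ab + corr, 6))   # ~0 for every mode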
C. Point spread function:
In a wavefront sensorless AO system, software analyses the images formed by the optical
system and retrieves the wavefront information from the image of a point source. The optimization
is continued until the given aberration is reduced. In this method the image can be much more difficult
to detect. Using a camera, light reflected from the retina acts as the point source, and the wavefront
aberration can be analysed. In visual optics, one application of PSF optimization is detecting the
aberration of the eye, which is relatively stable. The limitation of this method is its poor signal level;
the images can be improved by averaging over many acquisitions, and the aberration is corrected with
the deformable mirror.
D. Optical Coherence Tomography (OCT):
In this method, three-dimensional optical imaging of a sample is performed. The optimization
involves both axial and lateral resolution: the axial resolution is determined by the bandwidth
and wavelength of the light source, while the lateral resolution is determined by the imaging optics
and can be improved by the adaptive optics system. OCT has high potential for isotropic
resolution. An AO-OCT instrument can be divided into an interferometric subsystem and an adaptive
optics subsystem; wavefront sensing and correction are handled in the adaptive optics subsystem,
whose role is to estimate and correct the aberrations of the system. Sensorless correction can be
implemented in OCT by using a suitable deformable mirror.
E. Laser process optimization:
The laser source was an optical parametric amplifier with highly tunable mid-IR energy,
operating at a repetition rate of 10 Hz. In the experimental setup [8], the laser source is reflected by two
elements, i.e. plane mirrors (M1, M2) and a resistive MDM. An interaction chamber is
used for the generation of the laser harmonics: a krypton gas jet in the interaction chamber interacted
with the laser pulse, thereby producing the laser harmonics.


Fig 4: Schematic diagram for laser beam optimization.

A photomultiplier at the output of the monochromator detects the increase of the harmonic
signal and provides the measured photon flux as the feedback for the optimization.
IV. CONCLUSION
The main problems in FSOC links result from attenuation and fluctuation of the optical signal at
the receiver. In this paper we reviewed several sensorless adaptive optics techniques that attempt to
improve free space optical communication. Sensorless adaptive optics has great potential for finding
new applications in current and future technologies.
V. REFERENCES
[1] J. W. Hardy, Adaptive Optics for Astronomical Telescopes, Oxford University Press, ISBN-10: 0195090195, USA, 1993.
[2] E. Fedrigo, R. Muradore and D. Zilio, "High performance adaptive optics system with fine tip/tilt control", Control Engineering Practice 17, 122-135 (2009).
[3] R. J. Noll, "Zernike polynomials and atmospheric turbulence", J. Opt. Soc. Amer., vol. 66, pp. 207-211, 1976.
[4] H. Song, R. Fraanje, G. Schitter, H. Kroese, G. Vdovin and M. Verhaegen, "Modal-based aberration correction in a closed loop wavefront-sensor-less adaptive optics system", Optics Express, vol. 18, no. 23, pp. 24070-24084, 2010.
[5] Bing Dong, Dequing Ren and Xi Zhang, "Stochastic Parallel Gradient Descent Based Adaptive Optics used for a high contrast imaging coronagraph".
[6] Steffen Mauch et al., "Real-time spot detection and ordering for a Shack-Hartmann wavefront sensor with a low-cost FPGA", IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 10, October 2014.
[7] S. Bonora et al., "Devices and Techniques for Sensorless Adaptive Optics", INTECH publication: http://dx.doi.org/10.5772/53550.
[8] M. J. Booth, "Wavefront sensorless adaptive optics for large aberrations", Opt. Lett. 32(1), pp. 5-7, 2007.
[9] M. S. Zakynthinaki and Y. G. Saridakis, "Stochastic optimization for a tip/tilt adaptive correcting system", Computer Physics Communications, vol. 150, no. 3, pp. 275-292, 2003.
[10] M. A. Ealey and J. T. Trauger, "High-density deformable mirrors to enable coronographic planet detection", Proc. SPIE 5166, 172-179 (2004).
[11] N. Doble and D. R. Williams, "The application of MEMS technology for adaptive optics in vision science", IEEE J. Sel. Top. Quant. 10(3), 629-635 (2004).
[12] O. Azucena et al., "Adaptive optics wide-field microscopy using direct wavefront sensing", Opt. Lett. 36(6), 825-827 (2011).
[13] T. A. Planchon et al., "Adaptive wavefront correction on a 100-TW/10-Hz chirped pulse amplification laser and effect of residual wavefront on beam propagation", Opt. Commun. 252(4-6), 222-228 (2005).

Small Signal Stability Analysis of Power System Network Using Water Cycle Optimizer
Dr. R. Shivakumar#1 and M. Rangarajan*2
# Professor, EEE, SONA College of Technology, Salem, India
* PG Scholar, EEE, SONA College of Technology, Salem, India

Abstract - The low frequency electromechanical oscillations caused by swinging generator rotors are
inevitable in interconnected power systems. These oscillations limit the power transmission capability
of a network and sometimes even cause a loss of synchronism and an eventual breakdown of the entire
system, thus making the system unstable. A power system stabilizer is used to damp out these oscillations
and hence improve the stability of the system. In this project, a nature inspired Water cycle algorithm
based stabilizer design is carried out to mitigate the power system oscillation problem. The proposed
controller design is formulated as an optimization problem based on damping ratio and eigen value
analysis. The effectiveness of the proposed controller is tested by performing nonlinear time domain
simulations of the test power system model under various operating conditions and disturbances. The
system performance with the Water cycle algorithm is also compared with the conventional lead-lag
controller design.
Index Terms - Eigen value analysis, Low frequency oscillation, Multi machine infinite bus system,
Water Cycle optimization.


I. INTRODUCTION
Low frequency oscillations (0.1 to 2 Hz) that follow a disturbance in a power system, if not properly
damped, can drive the system to an unstable condition. A Power System Stabilizer (PSS) is one of the
most cost effective damping controllers to improve power system stability. The main objective of a PSS
is to add damping to the electromechanical oscillations by controlling the generator excitation through
an auxiliary signal. In recent years, several techniques based on modern control theory have been applied
to PSS design [1]-[3]. These include variable structure control, adaptive control and intelligent control.
Despite these techniques, power system researchers still prefer the conventional lead-lag controller
design. Conventional PSSs are designed using the theory of phase compensation in the frequency domain
and can provide effective damping performance only for a particular operating condition and set of
system parameters. Fuzzy logic and neural networks have also been implemented in damping controller
design, but these controllers suffer from the following drawbacks: there is no systematic procedure for
fuzzy controller design, and the membership functions of the controller are tuned subjectively, making
the design complex and time consuming; with respect to neural based controllers, it is more difficult to
understand the behavior of the neural network in implementation [5]. Recently, as an alternative to the
conventional and uncertainty-based methods, bio-inspired optimization techniques have been considered
powerful tools for obtaining optimal solutions to power system optimization problems. These techniques
include Evolutionary programming, Simulated annealing, Bacterial foraging, Harmony search algorithm,
Ant colony optimization, Genetic algorithm and the Water cycle algorithm. In this project, Water cycle
algorithm based PSS designs are implemented to optimize the power system stabilizer parameters for
multi-machine stability enhancement.
II. TEST SYSTEM MODEL
A. Test multi machine power system modeling
A three machine, nine bus power system model is taken for modeling and analysis. For
analysis and simulation, the Heffron-Phillips block diagram of the synchronous generator model was
used [3].

Fig.1. Three machine nine bus power system model.

ẋ = Ax + Bu  (1)
where x = vector of state variables, and
A, B = state matrix and input matrix, respectively.
B. Power system stabilizer structure
In this paper a dual input PSS is used. The two inputs to the dual-input PSS are Δω and ΔPe,
covering two frequency bands, a lower frequency band and a higher frequency band, unlike the
conventional single-input (Δω) PSS. The PSS3B is found to be the best one within the periphery of the
studied system model. This dual input PSS configuration is considered for the present work and its
block diagram representation is shown in Figure 2 [4].

Fig. 2. IEEE type PSS3B structure.

Hence Ks, T1 and T2 are the PSS parameters which should be computed using the CPSS and
optimally tuned using the Water cycle optimizer based PSS.
III. PROPOSED OPTIMIZATION CRITERION
To increase the damping over a wide range of operating conditions and configurations of the power
system, a robust tuning of the controllers must be implemented. The objective functions are represented
as

J1 = max(σi), i ∈ EMODE  (2)
J2 = min(ζi), i ∈ EMODE  (3)

where σi and ζi are the real part and the damping ratio of the ith eigenvalue, and EMODE in equations
(2) and (3) represents the set of electromechanical modes of oscillation. If the maximum real part
[max(σi)] of the eigenvalues is located in the right half of the s-plane, the system is unstable. The weakly
damped electromechanical mode has the minimum damping ratio [min(ζi)] among all the damping
ratios of the system. The objective is to minimize the objective function [J1] and maximize [J2]. This
involves shifting the real parts of the electromechanical eigenvalues to stable locations in the left half
of the complex s-plane, so that the damping ratio of the weakly damped electromechanical mode of
oscillations is enhanced and the system becomes more stable. The single machine Heffron-Phillips
generator model is extended to perform the modeling of the multimachine system.
Because of the interaction among the various generators in the multimachine system, the branches
and loops of the single machine generator model become multiplied. For instance, the constant K1 in
the single machine model becomes K1ij, i = 1,2,...,n; j = 1,2,...,n in the multimachine modeling.
In this work, n equals 3, the number of generators in the multimachine system considered. Similarly,
all the K constants (K1 to K6), the damping factor D, the inertia M and the state variables used in the
single machine model are generalized to the n-machine notation, as in [1],[3].

Fig. 3. Heffron-Phillips generator model

The power system stabilizer parameters (Ks, T1, T2) are taken as the decision variables of the
proposed optimization problem. The typical ranges for the optimized parameters are taken as [0.1-60]
for Ks, [0.2-1.5] for T1 and [0.02-0.15] for T2. The washout time constant Tw is taken as 10.0 s [20].
The damping ratio of the ith critical mode, with eigenvalue λi = σi ± jωi, is given by

ζi = -σi / √(σi² + ωi²)
The objective functions J1 and J2 in equations (2) and (3), along with the constraints in (4), (5) and (6),
form the proposed optimization criterion formulated in this paper to enhance the system stability.
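The two objectives can be evaluated directly from the eigenvalues of the state matrix. The sketch below uses an arbitrary stand-in A matrix (not the paper's three-machine model) to compute σi, ζi, J1 and J2:

import numpy as np

# Stand-in state matrix; the paper's A comes from the 3-machine Heffron-Phillips model.
A = np.array([[0.0,   377.0,  0.0],
              [-0.15, -0.05, -0.1],
              [0.05,  -0.02, -0.4]])

lam = np.linalg.eigvals(A)
osc = lam[np.abs(lam.imag) > 1e-6]           # oscillatory (electromechanical-like) modes
sigma, omega = osc.real, osc.imag
zeta = -sigma / np.sqrt(sigma**2 + omega**2) # damping ratio of each mode

J1 = sigma.max()    # to be minimized (pushed into the left half of the s-plane)
J2 = zeta.min()     # to be maximized (raise the weakest damping ratio)
print(f"J1 = {J1:.4f}, J2 = {J2:.4f}")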
IV. PROPOSED METAHEURISTIC OPTIMIZATION METHOD
The idea of the proposed Water Cycle Algorithm (WCA) is inspired by nature, based on the
observation of the water cycle and of how rivers and streams flow downhill towards the sea in the real
world. Evaporated water is carried into the atmosphere to form clouds, which then condense
in the colder atmosphere, releasing the water back to the earth in the form of rain or precipitation. This
process is called the hydrologic cycle [7].
The smallest river branches are the small streams where the rivers begin to form. These tiny
streams are called first-order streams. Wherever two first-order streams join, they make a
second-order stream; where two second-order streams join, a third-order stream is formed, and
so on until the rivers finally flow out into the sea. In the algorithm, the evaporation condition between
a river and the sea is checked as

|Xsea - Xriver| < dmax

where dmax is a small number (close to zero). Therefore, if the distance between a river and
sea is less than dmax, it indicates that the river has reached/joined the sea. In this situation, the
evaporation process is applied and, as seen in nature, after adequate evaporation the raining
(precipitation) process starts. A large value of dmax reduces the search intensity near the sea, while a
small value encourages it; dmax therefore controls the search intensity near the sea (the optimum
solution).
After evaporation, new raindrops are generated in the search space as

Xnew = LB + rand × (UB - LB)

where LB and UB are the lower and upper bounds defined by the given problem, respectively, and
rand is a uniform random number in [0, 1]. Again, the best newly formed raindrop is considered as a
river flowing to the sea; the rest of the new raindrops are assumed to form new streams which flow to
the rivers or may directly flow to the sea. In order to enhance the convergence rate and computational
performance of the algorithm for constrained problems, this raining process is used only for the streams
which directly flow to the sea. This aims to encourage the generation of streams flowing directly to the
sea, improving the exploration near the sea (the optimum solution) in the feasible region of constrained
problems [7],[8].
A. Constraint handling
In the search space, streams and rivers may violate either the problem specific constraints or
the limits of the design variables. In the current work, a modified feasible-based mechanism is used to
handle the problem specific constraints, based on the following four rules [17]:
Rule 1: Any feasible solution is preferred to any infeasible solution.
Rule 2: Infeasible solutions containing slight violations of the constraints (from 0.01 in the first
iteration to 0.001 in the last iteration) are considered as feasible solutions.
Rule 3: Between two feasible solutions, the one having the better objective function value is
preferred.
Rule 4: Between two infeasible solutions, the one having the smaller sum of constraint violations
is preferred.
B. The steps of WCA
Step 1: Choose the initial parameters of the WCA: Nsr, dmax, Npop, max_iteration.
Step 2: Generate a random initial population and form the initial streams (raindrops), rivers and
sea.
Step 3: Calculate the value (cost) of each raindrop.
Step 4: Determine the intensity of flow for the rivers and sea.
Step 5: The streams flow to the rivers.
Step 6: The rivers flow to the sea, which is the most downhill place.
Step 7: Exchange the position of a river with a stream which gives a better solution, as shown in Fig.
Step 8: Similarly to Step 7, if a river finds a better solution than the sea, the position of the river is
exchanged with the sea.
Step 9: Check the evaporation condition using the pseudocode.
Step 10: If the evaporation condition is satisfied, the raining process will occur.
Step 11: Reduce the value of dmax, which is a user defined parameter.
Step 12: Check the convergence criteria. If the stopping criterion is satisfied, the algorithm stops;
otherwise return to Step 5.
The Water cycle algorithm is easier to implement and it provides the global solution required
for parameter optimization in complex engineering problems. It provides an optimal solution for the
damping controller parameters, so that the system stability is enhanced to a greater extent possible.
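A compact, unconstrained sketch of the WCA loop described above is given below, written against a generic cost function (the PSS tuning problem would substitute the eigenvalue-based objective). The population sizes, guide assignment and dmax handling are simplified for illustration and are assumptions, not the exact scheme of [7],[8]:

import numpy as np

rng = np.random.default_rng(0)
DIM, NPOP, NSR, MAX_IT = 3, 30, 4, 200   # NSR = number of rivers + 1 sea
LB = np.array([0.1, 0.2, 0.02])          # e.g. lower bounds on Ks, T1, T2
UB = np.array([60.0, 1.5, 0.15])         # e.g. upper bounds on Ks, T1, T2

def cost(x):                                  # placeholder objective (sphere function);
    return np.sum((x - 0.5 * (LB + UB))**2)   # replace with the eigenvalue-based J

pop = LB + rng.random((NPOP, DIM)) * (UB - LB)
d_max = 1e-1
for it in range(MAX_IT):
    pop = pop[np.argsort([cost(x) for x in pop])]   # best raindrop becomes the sea
    sea, rivers, streams = pop[0], pop[1:NSR], pop[NSR:]
    # Streams flow towards rivers or the sea; rivers flow towards the sea.
    for i in range(len(streams)):
        guide = rivers[i % len(rivers)] if i % 2 else sea
        streams[i] = streams[i] + rng.random() * 2.0 * (guide - streams[i])
    for i in range(len(rivers)):
        rivers[i] = rivers[i] + rng.random() * 2.0 * (sea - rivers[i])
        # Evaporation: a river too close to the sea triggers raining (new raindrop).
        if np.linalg.norm(sea - rivers[i]) < d_max:
            rivers[i] = LB + rng.random(DIM) * (UB - LB)
    pop = np.clip(np.vstack([[sea], rivers, streams]), LB, UB)
    d_max -= d_max / MAX_IT                  # shrink the evaporation radius

best = min(pop, key=cost)
print("best parameters:", np.round(best, 4), "cost:", round(cost(best), 6))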
V. SIMULATION AND STABILITY ANALYSIS
MATLAB is used for the modeling and simulation. In this work, the power system
stabilizers are installed on the generators. The simulated responses of the system without PSS, in terms
of speed deviation and power angle, are shown in figs. 4 and 5, and the Conventional Power System
Stabilizer (CPSS) is compared with the WCA-PSS in figs. 6 and 7 respectively.


Fig.4. System without PSS in speed deviation

Fig. 5. System without PSS in power angle

The performance of the system is evaluated by considering different operating conditions.
The electromechanical modes and the damping ratios obtained for different conditions, both with and
without the proposed controllers in the system, are given in Table 1. When the Water cycle algorithm is
not incorporated in the PSS, it can be seen that some of the modes are poorly damped and, in some cases,
unstable. It is also clear that the system damping with the proposed Water cycle based tuned
PSS controller is significantly improved. Moreover, the electromechanical mode controllability via
the Water cycle algorithm based PSS controller is higher than without PSS and with the CPSS based
controller. Tables 2 and 3 show the percentage overshoot and settling time of the dynamic responses
for the speed deviation and power angle deviation; the settling time is smaller for the Water cycle based
PSS controller than for the other controllers.
The parameters of the damping controller are obtained using the Water cycle algorithm, which is
run several times before the optimal set of stabilizer parameters is selected, giving the final values of
the optimized parameters with and without the Water cycle based PSS.

Fig.6.System with WCAPSS and CPSS in speed deviation

Fig.7. System with WCAPSS and CPSS in power angle

The following are the dominant features of the WCA based controller observed in this paper with
regard to stability improvement:
Better placement of the closed loop eigen values in stable locations for all operating conditions
involved.
More damping provided to the system for all conditions, i.e. damping ratios above the
threshold level (ζ = 0.07) and above the damping ratios of the other controllers.
Rotor speed and power angle deviation overshoots are minimized, and the deviations settle in
a shorter time compared to the other controllers for all conditions considered.
The optimal solution is obtained in fewer iterations (generations) compared to the other methods.
VI. CONCLUSION
This paper provides an efficient solution to damp the low frequency electromechanical
oscillations experienced in the multi machine power system model. The salient features of the work
carried out in this paper for multi machine system stability enhancement are as follows:
The stability analysis has been carried out based on the computed eigen values and damping ratios,
and also on the minimization of the error deviations.
Oscillation damping analysis involving wide variations in operating conditions has been
performed based on the damping performance of the proposed controllers.
A detailed state space modeling of the test power system has been performed. In order to
compute the optimal controller parameters, a tri objective optimization criterion has been formulated
and the proposed algorithms have been implemented effectively.
Multi machine power system stability is improved to a greater level.
VII. REFERENCES
[1] Carlos E. Ugalde-Loo, Enrique Acha and Eduardo Licéaga-Castro, "Multi-machine power system state-space modelling for small-signal stability assessments", Elsevier, April 2013.
[2] M. K. Bhaskar, Avdhesh Sharma and N. S. Lingayat, "Analysis of Power oscillations in Single Machine Infinite Bus (SMIB) System and Design of Damping Controller", IJETAE (ISSN 2250-2459, ISO 9001:2008 Certified Journal), Volume 3, Issue 6, June 2013.
[3] M. Ravindra Babu, C. Srivalli Soujanya and S. V. Padmavathi, "Design of PSS3B for Multimachine system using GA Technique", ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 3, pp. 1265-1271, May-Jun 2012.
[4] P. Pavan Kumar, "Dynamic analysis of Single Machine Infinite Bus system using Single input and Dual input PSS", International Electrical Engineering Journal (IEEJ), Vol. 3, No. 2, pp. 632-641, ISSN 2078-2365, 2012.
[5] G. Jeelani Basha, N. Narasimhulu and P. Malleswara Reddy, "Improvement of Small Signal Stability of a Single Machine Infinite Bus Using SSSC Equipped With Hybrid Fuzzy Logic Controller", IJAREEIE, February 2014.
[6] Swaroop Kumar Nallagalva, Mukesh Kumar Kirar and Dr. Ganga Agnihotri, "Transient Stability Analysis of the IEEE 9-Bus Electric Power System", IJSET, Volume No. 1, Issue No. 3, pp. 161-166.
[7] Ardeshir Bahreininejad and Ali Sadollah, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems", ResearchGate.
[8] Navid Ghaffarzadeh, "Water cycle algorithm based robust design of power system stabilizer", Journal of Electrical Engineering, Vol. 66, No. 2, pp. 91-96, 2015.
[9] Tridib K. Das and Ganesh K. Venayagamoorthy, "Bio-inspired Algorithms for the Design of Multiple Optimal Power System Stabilizers: SPPSO and BFA", IEEE.
[10] Mostafa Abdollahi, Saeid Ghasrdashti, Hassan Saeidinezhad and Farzad Hosseinzadeh, "Multi Machine PSS Design by using Meta Heuristic Optimization Techniques", JNAS Journal-2013-2-9, pp. 410-416, ISSN 2322-5149, 2013.
[11] Ashik Ahmed, "Optimization of Power System Stabilizer for Multi-Machine Power System using Invasive Weed Optimization Algorithm", International Journal of Computer Applications (0975-8887), Volume 39, No. 7, February 2012.
[12] Saibal K. Pal et al., "Comparative Study of Firefly Algorithm and Particle Swarm Optimization for Noisy Non-Linear Optimization Problems", I.J. Intelligent Systems and Applications, 2012, 10, pp. 50-57, published online September 2012 in MECS (http://www.mecs-press.org/), DOI: 10.5815/ijisa.2012.10.06.
[13] Andrei Stativa and Mihai Gavrilas, "A metaheuristic approach for power system stability enhancement", Buletinul AGIR nr. 3/2012, June-August.


Minimization of THD in Cascaded Multilevel Inverter Using Selective Harmonic Elimination with Artificial Neural Networks
M. Ramesh#1, P. Maniraj*2 and P. Kanakaraj*3
#M.E., Department of EEE, M. Kumarasamy College of Engineering, Karur, India
*M.E., Department of EEE, M. Kumarasamy College of Engineering, Karur, India
*M.E., Department of EEE, SRM University, Chennai, India
rameshmsr44@gmail.com
Abstract - The major problem in electrical power quality is the harmonic content. There are several
measures of the quantity of harmonic content, and the most widely used is the Total
Harmonic Distortion (THD). Open loop control as well as conventional PI and neural network based
closed loop controls are designed for the cascaded multilevel inverter to reduce the harmonics, and a
comparison is made between the open loop scheme and the closed loop PI and neural network schemes.
The comparison results reveal that the THD is reduced to less than about 5% with neural network control
compared to open loop control. The mapping between the modulation index and the required switching
angles is learned and approximated with a feedforward neural network. After learning, appropriate
switching angles can be determined by the neural network, leading to a low-computational-cost neural
controller which is well suited for real-time applications. This technique can be applied to any number
of levels of the multilevel inverter. A nine level cascaded multilevel inverter power circuit is simulated
in MATLAB 7.8 Simulink with the sinusoidal PWM technique. The results are presented and analyzed.
Key words - Multilevel inverter, Total Harmonic Distortion (THD), Lower Order Harmonic (LOH),
Neural Network (NN), Pulse Width Modulation (PWM) and Modulation Index (M)
I. INTRODUCTION
Pulse Width Modulated (PWM) inverters can control their output voltage and frequency
simultaneously and can also reduce the harmonic components in the load currents. These features
have made them suitable for many industrial applications such as variable speed drives, uninterruptible
power supplies, and other power conversion systems. Popular single-phase inverters adopt the
full bridge topology using an approximately sinusoidal modulation technique as the power circuit. Their
output voltage has three values: zero, and the positive and negative of the supply DC voltage. Therefore,
the harmonic components of the output voltage are determined by the carrier frequency and the switching
functions [1].
Recently, the multilevel inverter topology has drawn tremendous interest in the power industry,
since it can easily provide the high power required for high power applications such as
static VAR compensation and active power filters, and large motors can also be controlled by
high power adjustable frequency drives. Multilevel inverters synthesize the AC voltage from several
different levels of DC voltages; each additional DC voltage level adds a step to the AC voltage
waveform. These DC voltages may or may not be equal to one another [3]. From a technological point
of view, appropriate DC voltage levels can be reached, allowing the use of multilevel power inverters
at medium voltage for adjustable speed drives (ASD) [4]. Multilevel inverters can reach high voltage
and reduce harmonics by their own structure, without transformers [5]. There are three main types of
multilevel inverters: diode-clamped, flying capacitor, and cascaded H-bridge [9]. If the DC supply
voltage is increased (by adding more batteries in series to maintain the voltage or to decrease the current)
for a larger power requirement, the inverter components must be able to withstand the maximum
DC supply voltage. A feature that sets the cascaded topology apart from other multilevel inverters is its
capability of utilizing different DC voltages on the individual H-bridge cells. The cascaded topology has
many inherent benefits, with one particular advantage being its modular structure. In particular, the
cascaded inverter has been reported for use in applications such as medium voltage industrial drives,
electric vehicles and grid connection of photovoltaic cell generation systems.
The proposed inverter reduces the harmonic components using the sinusoidal PWM technique under the condition of identical DC sources [4]. A comparison is also made between open-loop, closed-loop PI and neural control, all implementing the sinusoidal PWM technique. The control scheme involves the generation of the PWM pattern for modulation indices (m) between 0.1 and 1. The closed loop control of the cascaded multilevel inverter is first done with a conventional PI controller.
II. CASCADED MULTILEVEL INVERTER
A cascaded multilevel inverter consists of a series of H-bridge (single phase, full bridge) inverter units. The general function of this multilevel inverter is to synthesize a desired voltage from several separate DC sources (SDCSs), which may be obtained from batteries, fuel cells, or solar cells [10].
A. Nine-Level Cascaded Multilevel Inverter
Among the three multilevel inverter topologies, the cascaded type is considered for this work. In this topology, four single phase H-bridges are connected in series to form the nine level inverter. In general, the number of bridges required for an m level inverter is (m-1)/2. All the switches in the inverter are switched only at the fundamental frequency, and the voltage stress across each switch is only the DC source voltage magnitude.
In the cascaded multilevel inverter all the voltage sources need to be isolated from one another; thus, for the nine level inverter, four DC sources are needed. The switching stress is reduced because of the better switch utilization [8]. In the proposed system, identical DC source voltages are used for the four H-bridges of the multilevel inverter. The structure of the single phase cascaded nine level inverter is shown in figure 1. Each bridge module comprises four Metal Oxide Semiconductor Field Effect Transistors (MOSFETs).

Figure 1. Structure of single phase cascaded nine level inverter

Each bridge is energized by a separate DC source, and each separate DC source is connected to a single-phase full bridge inverter. Each inverter level can generate three different voltage outputs: +Vdc, 0 and -Vdc. The number of output phase voltage levels in a cascaded multilevel inverter is then (2*S)+1, where S is the number of DC sources [2]. An example phase voltage waveform for a nine level cascaded inverter is shown in figure 2. The maximum output phase voltage is given by V0 = V1+V2+V3+V4.
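As a quick illustration of these relations, the following base-MATLAB snippet (our addition, not from the original paper) checks that S = 4 separate DC sources give (2*S)+1 = 9 levels and that a nine level inverter needs (9-1)/2 = 4 H-bridges:

S = 4;                        % number of separate DC sources (H-bridges)
levels = 2*S + 1;             % output phase voltage levels: 9
m = 9;                        % desired number of levels
bridges = (m - 1)/2;          % H-bridges required for an m level inverter: 4
fprintf('%d sources -> %d levels; %d levels -> %d bridges\n', S, levels, m, bridges);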
Each bridge unit generates a quasi-square waveform by phase shifting its positive and negative phase-leg switching timings. It should be noted that each switching device always conducts for 180 degrees (half a cycle), regardless of the pulse width of the quasi-square wave. This switching method makes all of the switching device current stresses equal. With enough levels and an appropriate switching algorithm, the multilevel inverter produces an output voltage that is nearly sinusoidal.
B. Method of Working
The output voltage of the nine level cascaded multi-level inverter is shown in figure 2. The steps
to synthesize the nine level voltages are as follows [1].
1. For an output voltage level V0 = V1, turn on the switches M11, M12, M22, M32 and M42.
2. For an output voltage level V0 = V1+V2, turn on all the switches mentioned in step 1 and M21.
3. For an output voltage level V0 = V1+V2+V3, turn on all the switches mentioned in step 2 and M31.
4. For an output voltage level V0 = V1+V2+V3+V4, turn on all the switches mentioned in step 3 and M41. (A sketch of the resulting staircase synthesis is given after figure 2.)

Figure 2. Output voltage of single phase cascaded nine level inverter
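The staircase synthesis described in steps 1-4 can be sketched in a few lines of MATLAB. The per-bridge DC voltage and the switching angles below are illustrative assumptions only; in practice the angles come from the harmonic elimination solution of the next section:

Vdc = 12;                          % per-bridge DC source voltage (assumed)
th  = [0.14 0.42 0.73 1.12];       % example switching angles in radians (assumed)
wt  = linspace(0, 2*pi, 2000);     % one fundamental cycle
v   = zeros(size(wt));
for k = 1:4                        % each H-bridge contributes one quasi-square step
    v = v + Vdc*((wt > th(k) & wt < pi - th(k)) ...
              -  (wt > pi + th(k) & wt < 2*pi - th(k)));
end
plot(wt, v); xlabel('\omega t (rad)'); ylabel('Output voltage (V)');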

III. HARMONIC REDUCTION TECHNIQUE
Power electronic equipment such as inverters contain switching devices, and their operation injects current and voltage harmonics into the system on which they operate. These harmonics affect the operation of other equipment connected to the same system. The even order harmonics are eliminated by using filters; the odd order harmonics can be eliminated by various techniques. The analysis of the harmonics and harmonic reduction by PWM techniques are described in this section.
A. Importance of PWM Control
The performance of each PWM control method is assessed on the following parameters: a) total harmonic distortion (THD) of the voltage and current at the output of the inverter, b) switching losses within the inverter, c) peak-to-peak ripple in the load current, and d) maximum inverter output voltage for a given DC rail voltage. Thus the choice of a particular PWM technique depends upon the permissible harmonic content in the inverter output voltage. Of these PWM control methods, sinusoidal PWM is applied in the proposed inverter since it has various advantages over other techniques: sinusoidal PWM inverters provide an easy way to control the amplitude, frequency and harmonic content of the output voltage [6].
SPWM aims at generating a sinusoidal inverter output voltage without low-order harmonics. Sinusoidal pulse width modulation is one of the earliest techniques used to suppress the harmonics present in the quasi-square wave. In this modulation technique there is an important parameter, the modulation index M = Ar/Ac, where Ar is the reference signal amplitude and Ac is the carrier signal amplitude.
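A minimal MATLAB sketch of this comparison is given below; the reference and carrier frequencies are assumptions for illustration, not values taken from the paper. The gating signal for one switch pair is high whenever the sine reference exceeds the triangular carrier:

Ar = 0.85; Ac = 1;                    % amplitudes, so M = Ar/Ac = 0.85
fr = 50;   fc = 2000;                 % reference and carrier frequencies in Hz (assumed)
t   = 0:1e-6:0.02;                    % one 50 Hz cycle
ref = Ar*sin(2*pi*fr*t);              % sinusoidal reference
car = Ac*(2*abs(2*(fc*t - floor(fc*t + 0.5))) - 1);  % triangular carrier in [-Ac, Ac]
gate = ref > car;                     % PWM gating pulses for one switch pair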
B. Harmonic Elimination Switching Angles (HESA)
The HESA waveform is assumed to be quarter-wave symmetric. The Fourier series of the quarter-wave symmetric S H-bridge cell multilevel inverter output waveform is written as

V(wt) = sum over odd n of (4*Vdc/(n*pi)) * [cos(n*theta1) + cos(n*theta2) + ... + cos(n*thetaS)] * sin(n*wt)

where the optimized switching angles theta1, ..., thetaS must satisfy the condition

0 <= theta1 < theta2 < ... < thetaS <= pi/2
The amplitude of each odd harmonic component, including the fundamental, is given by

Hn = (4*Vdc/(n*pi)) * [cos(n*theta1) + cos(n*theta2) + ... + cos(n*thetaS)], n = 1, 3, 5, ...

The switching angles of the waveform are adjusted to obtain the lowest output voltage THD. If the peak value of the output voltage is to be controlled to V1 while the selected lower order harmonics are eliminated, the modulation index is given by

M = (pi*V1)/(4*S*Vdc)

The resulting harmonic equations for the nine level inverter (S = 4) take the form

cos(theta1) + cos(theta2) + cos(theta3) + cos(theta4) = 4M
cos(n*theta1) + cos(n*theta2) + cos(n*theta3) + cos(n*theta4) = 0 for each eliminated harmonic order n

The Newton-Raphson method is used to solve the harmonic elimination equations for the switching angles eliminating the 5th and 7th order harmonics of the nine level cascaded multilevel inverter.
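A hedged MATLAB sketch of this Newton-Raphson solution follows. To obtain a square system for the four angles, the fundamental is controlled and three harmonics are eliminated; including the 3rd alongside the 5th and 7th is our assumption (the text names only the 5th and 7th), the initial guess is arbitrary, and the ordering constraint 0 < theta1 < ... < theta4 < pi/2 is not enforced here:

M  = 0.85;                          % modulation index
n  = [1 3 5 7]';                    % fundamental plus eliminated orders (assumed)
th = [10 25 45 70]'*pi/180;         % initial guess for the four angles (rad)
for it = 1:50
    F = sum(cos(n*th'), 2) - [4*M; 0; 0; 0];   % residuals of the SHE equations
    J = -(n*ones(1,4)).*sin(n*th');            % Jacobian: dF(i)/dth(j)
    dth = J\F;                                 % Newton step
    th  = th - dth;
    if max(abs(dth)) < 1e-10, break; end       % converged
end
disp(th'*180/pi)                    % switching angles in degrees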
IV. ARTIFICIAL NEURAL NETWORKS
An artificial neural network (ANN), usually called "neural network" (NN), is a mathematical
model or computational model that tries to simulate the structure and/or functional aspects of biological
neural networks. It consists of an interconnected group of artificial neurons and processes information
using a connectionist approach to computation. In most cases an ANN is an adaptive system that
changes its structure based on external or internal information that flows through the network during
the learning phase. Neural networks are non-linear statistical data modeling tools. They can be used to
model complex relationships between inputs and outputs or to find patterns in data. A neural network
is an interconnected group of nodes, akin to the vast network of neurons in the human brain.
A. Structure of Feed Forward Network

Fig. 3 Feed Forward Neural Networks

The feed forward neural network, shown in Fig. 3, was the first and arguably the simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes.
B. Training of Neural Network Using MATLAB Coding
The neural network is trained to determine the switching angles for Selective Harmonic Eliminated PWM (SHEPWM) in the cascaded multilevel inverter. These switching angles are defined by a set of nonlinear equations to be solved. In the case of two possible solutions for an angle theta_i, the criterion for selecting one of them can be the Total Harmonic Distortion (THD): the best angle values are the ones leading to the lowest THD. The flow chart of the neural network implementation is shown in fig. 4.

Fig. 4 Flow chart for proposed neural network technique

ANNs have gained increasing popularity and have demonstrated superior results compared to alternative methods in many studies. Indeed, ANNs are able to map the underlying relationship between input and output data without prior understanding of the process under investigation. This mapping is achieved by adjusting internal parameters, called weights, from the data; this process is called learning or training. Their appeal comes also from their generalization capability, i.e., their ability to deliver estimated responses to inputs that were not seen during training. Hence, the applicability of ANNs to complex relationships and processes makes them highly attractive for many types of modern problems [9, 14]. We use a neural network to learn the switching angles previously provided by the resultant theory method. The number of inputs and outputs depends on the considered process; in our application, the feed forward neural network has to map the underlying relationship between the modulation rate (input) and the switching angles (outputs), as shown in figure 5.

Fig. 5 Plot for Modulation index Vs corresponding Switching Angles


Figure 6 shows the relationship between the modulation rate and the calculated THD for the corresponding switching angles. The neural network is trained for 1000 epochs on the given set of inputs and targets (the desired switching angles calculated by solving the harmonic elimination equations).
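A minimal MATLAB sketch of this training step is shown below. It assumes the Neural Network Toolbox; the hidden layer size is our choice, and the target matrix ang is assumed to be filled with the angles obtained from the Newton-Raphson solver for each modulation index:

mi  = 0.1:0.05:1;                    % training inputs: modulation indices
ang = zeros(4, numel(mi));           % targets: switching angles in radians
% ... fill ang(:,k) with the SHE solution for each mi(k) ...
net = newff(mi, ang, 10);            % feed forward net, 10 hidden neurons (assumed)
net.trainParam.epochs = 1000;        % 1000 training epochs, as in the text
net = train(net, mi, ang);           % backpropagation training
angHat = sim(net, 0.85);             % estimated angles for m = 0.85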
Table I. Switching angles for various modulation indices and their respective THD values

The switching angles for different modulation indices and their corresponding THD values are shown in Table I. From the table it can be inferred that as the modulation index increases, the THD value reduces.

Fig. 6 Plot for Modulation index Vs THD in percentage

V. SIMULATION RESULTS
The Simulink models of the proposed nine level cascaded multilevel inverter for the SPWM technique with open loop, closed loop PI and neural control are described by the following simulation diagrams.
A. Sinusoidal Pulse Width Modulation Open Loop

Figure 7. Simulink model for Nine-Level Cascaded Multilevel Inverter system


The simulation of the cascaded nine level inverter with sinusoidal pulse width modulation is described in figure 7. The gating signals generated using the sinusoidal PWM technique are given to the power circuit switches to obtain the output voltage. Figure 8 shows the simulated output voltage and current, and figure 9 shows the frequency spectrum of the output voltage.

Fig. 8 Simulated output voltage and current waveform for sine PWM

Fig. 9 Frequency Spectrum of the output voltage in Sine PWM

B. Closed Loop - PI
In closed loop, the control of the inverter is performed using a conventional PI controller. The output voltage of the cascaded inverter is compared with a reference sine wave, and the error is given to the PI controller to generate the gating signals. The simulated circuit with the PI controller is shown in Fig. 10, and the frequency spectrum of the output voltage is shown in Fig. 11. The THD is measured for various values of m and is found to be 4.21% for m = 0.85.

Fig. 10 Simulated circuit of the inverter (PI controller)



Fig. 11 Frequency spectrum of the output Voltage.

C. Proposed Scheme for Nine Level Cascaded Multilevel Inverter with Neural Network
To confirm the validity of the results obtained using the harmonic elimination technique with the neural network, a test is carried out for M = 0.85. The switching angles for the test data are obtained as the outputs of the neural network. From these switching angles, the required carrier signal levels are obtained. The triangular carrier signals are compared with the sinusoidal reference, which yields the triggering pulses for the various switches. These pulses are used to trigger the nine level cascaded multilevel inverter. The output voltage and current waveforms and the harmonic spectrum are shown in figures 12 and 13.

Fig. 12 Simulated output voltage and current waveform for sine PWM with Neural Network

Fig. 13 Frequency Spectrum of the output voltage in Sine PWM with Neural Network
D. Comparison Results
The final results are obtained by comparing open and closed loop operation of the cascaded multilevel inverter with the sinusoidal PWM technique. The modulation index is varied from 0.775 to 1. Figure 14 shows the graph of THD (%) versus modulation index (m). The Total Harmonic Distortion (THD) is compared for open loop and closed loop with PI and neural network; the results show that the neural network controller gives a better response than the other controllers.

Fig. 14 Comparison of modulation index Vs THD for open loop, closed loop PI, neural network

VI. CONCLUSION
A complete analysis of the nine level cascaded multilevel inverter has been presented for open loop and closed loop with PI controller and neural network, and a comparison has been brought out. The neural network approach is based on learning and approximating the relationship between the modulation rate and the switching angles with a feed forward network. The resulting neural implementation of the harmonic elimination strategy has very low computational cost and high performance, along with the technical advantages of a neural implementation.
REFERENCES
[1] J. Meenakshi and V. T. Sreedevi, "Simulation of a transistor clamped H-bridge multilevel inverter and its comparison with a conventional H-bridge multilevel inverter," IEEE International Conference on Circuit, Power and Computing Technologies (ICCPCT), 20-21 March 2014.
[2] D. V. A. Kumar and C. S. Babu, "New multilevel inverter topology with reduced number of switches using advanced modulation strategies," IEEE International Conference on Power, Energy and Control (ICPEC), 2013, pp. 693-699.
[3] Haiwen Liu, Leon M. Tolbert, Surin Khomfoi, Burak Ozpineci and Zhong Du, "Hybrid cascaded multilevel inverter with PWM control method," conference proceedings, pp. 162-166, June 2008.
[4] P. C. Loh, D. G. Holmes and T. A. Lipo, "Implementation and control of distributed PWM cascaded multilevel inverters with minimum harmonic distortion and common-mode voltages," IEEE Transactions on Power Electronics, vol. 20, no. 1, pp. 90-99, Jan. 2005.
[5] J. N. Chiasson, L. M. Tolbert, K. J. McKenzie and Zhong Du, "A unified approach to solving the harmonic elimination equations in multilevel converters," IEEE Transactions on Power Electronics, pp. 478-490, March 2004.
[6] B. P. McGrath and D. G. Holmes, "Multicarrier PWM strategies for multilevel inverters," IEEE Trans. Ind. Electron., vol. 49, no. 4, pp. 858-867, Aug. 2002.
[7] L. M. Tolbert and T. G. Habetler, "Novel multilevel inverter carrier-based PWM methods," IEEE Trans. Ind. Applications, vol. 35, pp. 1098-1107, Sept. 1999.
[8] C. Wang, Q. Wang, K. Ren and W. Lou, "Privacy-preserving public auditing for data storage security in cloud computing," INFOCOM 2010 Proceedings, IEEE, 2010, pp. 1-9.
[9] G. Carrara, S. Gardella, M. Marchesoni, R. Salutari and G. Sciutto, "A new multilevel PWM method: A theoretical analysis," IEEE Trans. Power Electron., vol. 7, pp. 497-505, July 1992.
[10] N. S. Choi, J. G. Cho and G. H. Cho, "A general circuit topology of multilevel inverter," Proc. IEEE PESC '91, pp. 96-103.
[11] C. K. Duffey and R. P. Stratford, "Update of harmonic standard IEEE-519: IEEE recommended practices and requirements for harmonic control in electric power systems," IEEE Transactions on Industry Applications, vol. 25, no. 6, pp. 1025-1034, Nov./Dec. 1989.
[12] S. Wolfram, Mathematica: A System for Doing Mathematics by Computer, 2nd ed., Reading, MA: Addison-Wesley, 1992.
[13] H. S. Patel and R. G. Hoft, "Generalized harmonic elimination and voltage control in thyristor converters: Part I - harmonic elimination," IEEE Trans. on Ind. Appl., vol. 9, pp. 310-317, May/June 1973.
[14] H. S. Patel and R. G. Hoft, "Generalized harmonic elimination and voltage control in thyristor converters: Part II - voltage control technique," IEEE Trans. on Ind. Appl., vol. 10, pp. 666-673, Sept./Oct. 1974.
[15] N. Mohan, T. M. Undeland and W. P. Robbins, Power Electronics: Converters, Applications, and Design, 3rd ed., J. Wiley and Sons, 2003.
[16] S. Khomfoi and L. M. Tolbert, "Fault diagnostic system for a multilevel inverter using a neural network," IEEE Transactions on Power Electronics, vol. 22, no. 3, pp. 1062-1069, May 2007.


A Survey of Design of a Novel Arbitrary Error Correcting Technique for Low Power Applications

D.Ragavi, PG Scholar, and S.Mohan Raj, Associate Professor
Department of ECE, M.Kumarasamy College of Engineering, Karur-639113, Tamilnadu
ragaviamirtha@gmail.com, mohanmahi17@gmail.com
Abstract Error correction is one of the important techniques for detecting and correcting errors in communication channels and in all types of memories. NAND flash memories are competitive in the market due to their low power, high density, cost effectiveness and design scalability. Various DSP algorithms are used to overcome delays by increasing the sampling rate. BCH codes are extensively used for error detection and correction. In the proposed scheme, the syndrome calculator and encoder are implemented jointly. A Linear Feedback Shift Register (LFSR) is used to implement the encoder for polynomial division, and the decoder design is based on the inversion-less Berlekamp-Massey algorithm (BMA), the Chien search algorithm and a syndrome calculator. The main advantage of the LFSR is that it operates at high speed, but its main drawback is that the inputs are presented bit-serially. To overcome these drawbacks, DSP transformations such as selecting a better unfolding value reduce the sample period, decrease the clock cycles, and increase the speed and throughput.
Key words BMA, Built-in-Self-Test, DPBM, NAND FLASH.

I. INTRODUCTION
VLSI stands for "Very Large Scale Integration", the field concerned with packing more logic devices into smaller areas. VLSI devices are used in computers, cars, state-of-the-art digital cameras, cell phones and much else, and all of this involves a great deal of expertise on many fronts within the same field. VLSI has been around for a long time; there is nothing fundamentally new about it, but as a side effect of advances in the world of computers there has been a dramatic proliferation of tools that can be used to design VLSI circuits. Alongside, obeying Moore's law, the capability of an IC has increased exponentially over the years in terms of computation power, utilization of available area and yield. The combined effect of these two advances is that people can put diverse functionality into ICs, opening up new frontiers.
Polynomial basis multipliers
Polynomial basis multipliers operate in the polynomial basis, so no basis converters are required. These multipliers are easily implemented because they are hardware efficient, and the time to produce the result is the same as for Berlekamp or Massey-Omura multipliers. Bit-serial polynomial basis multipliers operate as serial-in parallel-out multipliers. In several applications an additional register is required for the result, adding an extra m clock cycles to the computation time; this is the main reason why polynomial basis multipliers are often disregarded for use in codec design (a bit-level MATLAB sketch follows fig 1).

Fig 1 Polynomial basis multipliers
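For concreteness, a bit-serial polynomial-basis multiplication in GF(2^m) can be sketched in MATLAB as a shift-and-reduce loop, one clock cycle per bit of the second operand. The field size m = 4 and the irreducible polynomial x^4 + x + 1 below are our assumptions for illustration:

function c = gfmul_pb(a, b)
% Bit-serial polynomial basis multiplication in GF(2^4). (Save as gfmul_pb.m)
m = 4; poly = bin2dec('10011');      % x^4 + x + 1 (assumed irreducible polynomial)
c = 0;
for i = 1:m                          % one clock cycle per bit of b
    if bitand(b, 1), c = bitxor(c, a); end
    b = bitshift(b, -1);             % consume the next bit of b
    a = bitshift(a, 1);              % multiply a by x
    if bitand(a, bitshift(1, m))     % degree-m term present: reduce
        a = bitxor(a, poly);
    end
end
end

For example, gfmul_pb(2, 9) returns 1, reflecting x*(x^3 + 1) = x^4 + x = 1 modulo x^4 + x + 1.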


DPBM:
The DPBM is particularly useful if the time taken to generate a product is critical; it offers a half-way solution between a bit-serial and a bit-parallel multiplier. DPBMs are hardware efficient, and in some situations the DPBM offers a reduction in hardware since the intermediate value z does not have to be stored. The structure of the DPBM depends on the irreducible polynomial of GF(2^m).

Fig 2. DPBM

II. SURVEY
A. High Throughput LFSR Design for BCH Encoder using Sample Period Reduction Technique for MLC NAND based Flash Memories
For the errors that occur in MLC NAND based flash memories, a variety of coding techniques can be applied for error detection and correction. The available codes are divided into two types: 1) convolutional codes and 2) block codes. Cyclic codes are one classification of block codes, and BCH codes are a subclass of cyclic codes. A BCH code first forms a generator polynomial using the finite field (GF) concept and generates parity (check) bits that are appended to the message bits to form a codeword (a minimal encoder sketch is given below). Unfolding is a transformation technique which describes J consecutive iterations of the original DSP algorithm and increases the iteration bound J-fold. In order to trim the sampling period, it is important to calculate the iteration bound before unfolding the system, so as to select the unfolding factor.
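As a point of reference, the LFSR encoder mentioned above is simply a polynomial divider. A minimal MATLAB sketch is shown below; the generator polynomial g, given msb first, is supplied by the caller (for instance, g = [1 0 0 1 1], i.e., x^4 + x + 1, for the single-error-correcting BCH(15,11) code):

function parity = lfsr_encode(msg, g)
% Systematic cyclic encoding: parity = remainder of x^r * m(x) / g(x).
% (Save as lfsr_encode.m; msg and g are row vectors of bits, msb first.)
r   = numel(g) - 1;               % number of parity bits
reg = zeros(1, r);                % LFSR state, reg(1) is the msb
for k = 1:numel(msg)              % shift the message in, msb first
    fb  = xor(msg(k), reg(1));    % feedback bit
    reg = [reg(2:end) 0];         % shift left by one position
    if fb
        reg = xor(reg, g(2:end)); % add (XOR) the generator taps
    end
end
parity = reg;                     % append after msg to form the codeword
end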
B. MPCN-Based Parallel Architecture in BCH Decoders for NAND Flash Memory Devices
This brief provided a novel MPCN-based parallel architecture for long BCH decoders in NAND flash memory devices. Unlike previous approaches performing CFFM calculations, the proposed design exploits MPCNs to improve hardware efficiency, since one MPCN requires at most m - 1 XOR gates, whereas the requirement of one CFFM is usually proportional to m. The proposed MPCN-based architecture can combine the syndrome calculator and the Chien search, leading to a significant hardware reduction. Compared with a conventional design, the parallel-32 BCH (4603, 4096; 39) decoder with the proposed combined Chien search and syndrome calculator achieves a gate count saving of 46.7% according to synthesis results in 90-nm CMOS technology.
C. A Fully Parallel BCH Codec with Double Error Correcting Capability for NOR Flash Applications
Double error correcting (DEC) BCH codes are necessary; however, their iterative processing is not suitable for latency-constrained memories. Thus, a fully parallel architecture is proposed at the cost of increased area. A new method combines the encoder and syndrome calculator using matrix operations. In addition, to reduce the degree of the error location polynomial, a new error location polynomial is defined so that the hardware cost of the Chien search is substantially reduced.
D. An Area Efficient (31, 16) BCH Decoder for Three Errors
BCH codes are among the most powerful error-correcting codes used to correct errors that occur during data transmission. This paper presents a low complexity and area efficient error-correcting BCH decoder used to detect and correct three errors occurring during transmission. An advanced Peterson error locator algorithm computation, which reduces the calculation complexity, is proposed for the iBM block in the (31, 16) BCH decoder. The syndrome calculator and Chien search are modified to reduce hardware complexity. The resulting BCH decoder models have considerably lower hardware complexity and latency than those using conventional algorithms. The proposed (31, 16) BCH decoder over GF(2^5) leads to reduced complexity compared to the conventional BCH decoder.
E. Encoder and Decoder for (15,11,3) and (63,39,4) Binary BCH Codes with Multiple Error Correction
There are numerous types of error correction codes, chosen based on the type of error expected, the communication medium, the possibility of re-transmission, etc. Some of the error correction codes widely used these days are BCH, Turbo, Reed-Solomon and LDPC; these codes differ from each other in their complexity and implementation. It is highly possible that data or messages get corrupted during transmission and reception through a noisy channel, so an error correction code is needed to achieve error free communication.
F. An Enhanced (15, 5) BCH Decoder Using VHDL
This scheme gives a full explanation of the need for error correcting codes, along with a comparison of several error correcting codes and a high clock speed (15, 5) BCH encoder and decoder design. The design method is presented, and the behavior of the design is described using Verilog. The simulation of the code is done in Xilinx ModelSim, and the synthesis of the encoder and decoder design is done in the Xilinx ISE 14.3 WebPack to generate gate level results. The input and output signals are traced through the design steps, and the simulation and synthesis results are discussed.
G. Implementation of Parallel BCH Encoder Employing Tree-Type Systolic Array Architecture
In this system, the NAND type is basically used for high density data storage, whereas the NOR type is used for code storage and direct execution. The decoding failures reported occur when the number of errors exceeds the designed capability of the ECC circuits. In a flash memory, the read and write operations of data are conducted in bytes at each clock cycle; accordingly, byte-wise parallel encoding and decoding is preferred for high-speed flash memories. The theory of error detecting and correcting codes deals with the reliable storage of data. Information media are not perfectly reliable in practice, since noise frequently causes data to be distorted. In particular, BCH codes for multiple error correction are broadly used in MLC flash memory. In general, a BCH encoder can be implemented by either hardware or software methods; since a software implementation of a BCH code cannot achieve the required speed, a hardware design is preferred for high-speed applications.
H. High-speed Parallel Architecture and Pipelining for LFSR
A mathematical proof shows that a transformation exists in state space which helps reduce the complexity of the parallel LFSR feedback loop. This paper presents a novel method for high speed parallel implementation of a linear feedback shift register based on IIR filtering. The proposed method can reduce the critical path and the hardware cost at the same time, and the design is appropriate for any type of LFSR architecture. In the combined pipelining and parallel processing technique of IIR filtering, the critical path in the feedback part of the design can be reduced. As future work, this design with combined parallelism and pipelining can be applied to long BCH codes.
I. A Novel Method of Implementation of an FPGA using (n, k) Binary BCH Code
The synthesis and timing simulation demonstrate the (15, 5, 3) code. When the speed requirement dominates, this BCH encoder and decoder are advantageous over the other two methods, since the code can correct 3 errors at the receiver side when the original data is corrupted by noise. When considering area, however, the (15, 11, 1) code, which can correct only 1 bit error, is better; its redundancy is lower and its data rate is higher. BCH codes are known to be excellent error correcting codes among codes of short lengths. They are relatively simple to encode and decode, and due to these qualities there is much interest in the exact structure of these codes. The device utilization and speed can be improved by adopting parallel approaches.
J. Improved Error Correction Capability in Flash Memory using Input/Output Pins
The technique is found to be useful and efficient for error correction with product codes. The design shows that, for 8 and 16 kB page sized memories, regular product schemes achieve one decade lower BER than plain RS codes or BCH codes with similar code length over the considered range of raw BER. A comparison of the area, latency and additional storage shows that product schemes have lower hardware cost and latency than RS codes. The bit error rate is further reduced by introducing I/O pins to reduce the programming cycles.
III. COMPARISON TABLE

IV. CONCLUSION
Since NAND flash memories require low delay encoders, a high throughput encoder is designed by unfolding the LFSR of the BCH encoder, observing the design criteria for selecting the unfolding factor. Moreover, the area, clock cycle and power are analyzed by simulating the design. The obtained results reveal that unfolding increases the throughput and decreases the clock cycle, which automatically increases the speed, but it increases the area and power. Various pipelining techniques can be introduced to reduce the critical path of the BCH encoder, and retiming can also be applied to further increase the speed and to reduce the power consumption and area.
REFERENCES
[1] S. K. Manikandan, M. Nisha Angeline, E. K. Sharmitha and C. Palanisamy, "High throughput LFSR design for BCH encoder using sample period reduction technique for MLC NAND based flash memories," International Journal of Computer Applications (0975-8887), vol. 66, no. 10, March 2013.
[2] Yi-Min Lin, Chi-Heng Yang, Chih-Hsiang Hsu, Hsie-Chia Chang and Chen-Yi Lee, "MPCN-based parallel architecture in BCH decoders for NAND flash memory devices," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 58, no. 10, October 2011.
[3] M. Prashanthi and P. Samundiswary, "An area efficient (31, 16) BCH decoder for three errors," International Journal of Engineering Trends and Technology (IJETT), vol. 10, no. 13, April 2014.
[4] Chia-Ching Chu, Yi-Min Lin, Chi-Heng Yang and Hsie-Chia Chang, "A fully parallel BCH codec with double error correcting capability for NOR flash applications," IEEE ICASSP, 2012.
[5] R. Elumalai, A. Ramachandran, J. V. Alamelu and Vibha B. Raj, "Encoder and decoder for (15,11,3) and (63,39,4) binary BCH code with multiple error correction," International Journal of Advanced Research (an ISO 3297:2007 certified organization), vol. 3, issue 3, March 2014.
[6] Rajkumar Goverchav and V. Sreevani, "An enhanced (15, 5) BCH decoder using VHDL," International Journal of Advanced Research (an ISO 3297:2007 certified organization), vol. 3, issue 11, November 2014.
[7] Je-Hoon Lee and Sharad Shakya, "Implementation of parallel BCH encoder employing tree-type systolic array architecture," International Journal of Sensor and Its Applications for Control Systems, vol. 1, no. 1, pp. 1-12, 2013.
[8] Vinod Mukati, "High-speed parallel architecture and pipelining for LFSR," International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278-0882, IEERET-2014 Conference Proceedings, 3-4 November 2014.
[9] Mahasiddayya R. Hiremath and Manju Devi, "A novel method implementation of a FPGA using (n, k) binary BCH code," International Journal of Research in Engineering Technology and Management, ISSN 2347-7539, IJRETM-2014-SP-008, June 2014.
[10] J. Shafiq Mansoor and A. M. Kiran, "Improved error correction capability in flash memory using input/output pins," International Journal of Advanced Information Science and Technology (IJAIST), ISSN 2319-2682, vol. 12, no. 12, April 2013.


Determination of Physical and Chemical Characteristics of Soil Using Digital Image Processing

M.Arun Pandian1, C.S.ManikandaBabu2
Department of ECE (PG VLSI Design), Sri Ramakrishna Engineering College, Coimbatore
Email: arunwatrap@gmail.com
Abstract Soil is the most valuable natural resource for every field in which soil is used. Digital image analysis is used to minimize manual involvement. Different soil samples are taken from various places, and both physical properties (water content, coefficient of curvature, liquid limit, plastic limit, shrinkage limit, coefficient of uniformity, field density) and chemical properties (pH and pH index) are determined. This system mainly helps to reduce human error and time consumption. Samples are captured with a digital camera. Physical characterization is based on fractal dimension calculation using the box counting method, and soil pH recognition is based on the Red-Green-Blue values of the image or the Intensity-Hue-Saturation model of the samples. The approach also helps to estimate the nutrition level of the soil and has great potential in agriculture management.
Key words Digital Image, Fractal Dimension, RGB Model, LABVIEW 2014

1. INTRODUCTION
Soil is used as a source material in many places. The laboratory approach is the traditional method of analysis, but it involves considerable time consumption and human error. Image processing is a method of analysis using digital images; a digital image is full of information in the form of digital values. This analysis is used to reduce human error and time consumption, and it enables immediate action on the soil resources. Samples are captured under suitable weather conditions with a digital camera. The analysis splits into two parts: i) physical characterization of the soil and ii) chemical characterization of the soil.
The physical characteristics of soil have previously been obtained with the box counting method [6]; here the box counting analysis is implemented in LabVIEW 2014. The pH and pH index of soil have previously been obtained from the RGB model of soil samples [3]; here the RGB analysis of the acidic and basic character of soil is implemented in LabVIEW 2014.
Images are captured with a Sony digital camera at three different positions in each place under specific light conditions. The images have a resolution of 150x150 pixels.
2. METHODOLOGY
A. Physical characterization
The physical characterization is analyzed using the fractal dimension. The fractal dimension can be calculated by various methodologies, such as the area perimeter method, the line divider method, the skyscraper method and the box counting method. The box counting method is the most widely used: it estimates the fractal dimension from a binary image by covering the image with boxes and counting the boxes that contain 1s. Depending upon the box size s and the resulting count N(s), the fractal dimension is

FD = log(N(s))/log(1/s) (1)

where FD is the fractal dimension of the sample and N(s) is the number of boxes of size s containing 1s.
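The text implements this in LabVIEW 2014; for illustration, an equivalent MATLAB sketch of the box counting estimate is given below. It counts, for each assumed box size s, the boxes that contain at least one 1, and estimates FD as the slope of log N(s) against log(1/s):

function fd = boxcount_fd(BW)
% Box counting fractal dimension of a binary image BW. (Save as boxcount_fd.m)
sizes = [2 4 8 16 32];                    % box sides in pixels (assumed)
N = zeros(size(sizes));
for k = 1:numel(sizes)
    s = sizes(k); cnt = 0;
    for i = 1:s:size(BW,1)-s+1            % slide non-overlapping boxes
        for j = 1:s:size(BW,2)-s+1
            if any(any(BW(i:i+s-1, j:j+s-1)))
                cnt = cnt + 1;            % box contains at least one 1
            end
        end
    end
    N(k) = cnt;
end
p  = polyfit(log(1./sizes), log(N), 1);   % linear fit of log N vs log(1/s)
fd = p(1);                                % slope estimates the fractal dimension
end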

Fig .1: Flow chart for Physical characterization


The average fractal dimension is calculated over varying threshold values of the image. The fractal dimension is then correlated with the physical properties of the soil: correlating with the curves obtained in the laboratory gives a specific polynomial equation for each characteristic, and these equations are used to determine all of the physical characteristics listed above [6].
B. Chemical characterization
The chemical characteristics are analyzed by the following RGB-model method. Using LabVIEW, the 24-bit input color image is converted into an 8-bit RGB image, and the pixel values of the Red, Green and Blue planes are extracted. The pH index of the digital image is calculated using the following equation [3]:

Soil pH index = Red/Green/Blue (2)

where Red, Green and Blue represent the pixel values of the corresponding planes. This pH index formula is used to calculate the pH index at each pixel; depending upon the pH index value, the pH of the soil is calculated at each pixel.
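A minimal MATLAB sketch of equation (2) on the centre block of a sample image is shown below (the text implements this in LabVIEW). The file name and the window half-width are our assumptions:

img = imread('soil_sample.jpg');          % assumed sample file name
sz  = size(img); rc = round(sz(1)/2); cc = round(sz(2)/2);
w   = 10;                                 % half-width of the centre window (assumed)
blk = double(img(rc-w:rc+w, cc-w:cc+w, :));
R = blk(:,:,1); G = blk(:,:,2); B = blk(:,:,3);
phIndex = mean(R(:)) / mean(G(:)) / mean(B(:));   % eq. (2): Red/Green/Blue
% phIndex is then mapped to a pH value via the calibration of [3]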

Fig .2: Flow chart for Chemical characterization

The pixel values are extracted at the centre of the image because of the purity of the intensity values there. Averaging the pH values gives the pH value of the sample.
C. Input samples
The samples are captured at a resolution of 150x150 pixels; all are 24-bit depth square images. The image in fig 3 is taken at Watrap, Virudhunagar district, a place used for paddy cultivation.

Fig 3: Input sample

These images are captured with the Sony digital camera under the specified weather conditions.
D. Thresholding
Thresholding is the process of converting the 24-bit color image into a binary image: the pixel values are converted into 0s and 1s. This representation is used by the box counting method.

Fig 4: Thresholding- conversion of Binary image

The background is represented in red and the dark parts of the object in black, as shown in fig 4; the pixel values of red and black are 0 and 1, respectively. The threshold value for the RGB components varies from 80 to 120.
E. Fractal dimension
The fractal dimension (FD) is defined as a mathematical descriptor of image features which characterizes the physical properties of soil images. The fractal, introduced in 1975 by Mandelbrot (Buczko 2005), provides a framework for the analysis of natural phenomena in various scientific domains. A fractal is an irregular geometric object with an infinite nesting of structure at different scales. Fractals can be used to model almost any natural object, such as soil, islands, rivers, mountains, trees and clouds.
F. Plane extraction
The input color image is separated into the three RGB planes with their pixel values. These pixel values are used to compute the pH index of the soil, and from the pH index the pH values of the samples are determined.
Fig 5: Plane Extraction

The single RGB image is split into its Red, Green and Blue components, each with its own pixel values, as shown in fig 5.
3. RESULTS AND DISCUSSION
Initially, the input samples are converted into binary images using thresholding. The pixel values of the binary image are processed with the box counting method and formula (1) to find the average fractal dimension of the sample, from which the liquid limit, plastic limit, shrinkage limit, coefficient of uniformity and field density are determined. The threshold value varies from 80 to 120; depending upon the threshold value, N(s) is calculated by the box counting method, and the average fractal dimension is then obtained.

Table 2 shows all the physical parameters of a soil sample with fractal dimension 1.511. Using the fractal dimension, all the parameters were calculated; these parameters are used in the field of civil engineering and wherever soil is used in engineering. The correlation between the fractal dimension and the physical parameters is represented graphically [6]. Using plane extraction, the pixel value of each plane is calculated, and using formula (2) the pH index of the sample is calculated. Depending upon the pH index and the color of the sample, the pH of the sample is calculated, as shown in table 3. This pH determines the chemical characteristics of the soil.
4. CONCLUSION AND FUTURE WORK
A. Conclusion
The physical and chemical properties of soil were determined successfully. The pH value obtained for each sample was compared with the laboratory report; the error between the conventional laboratory approach and the image analysis approach is of the order of 1%. The soil physical properties are used in civil engineering and agriculture management, and the soil pH value is used to identify the acidic or basic nature of the soil. This system reduces manual assessment and time, and it also reduces human error and testing delay.
B. Future work
Further work will analyze more samples from various places and improve the reliability of the system at various resolutions. Soil nutrient distribution, such as Mg, Ca, K and Na, will also be estimated using image processing, and the whole system will be implemented on an FPGA processor.
REFERENCES
[1] Farshad Vesali, Amy Kaleita, Hossein Mobli and Mahmoud Omid (2015), "Development of an android app to estimate chlorophyll content of corn leaves based on contact imaging," Computers and Electronics in Agriculture, vol. 116, pp. 211-220.
[2] Anastasia Sofou and Petros Maragos (2004), "Image analysis of soil micromorphology: feature extraction, segmentation, and quality inference," EURASIP Journal on Applied Signal Processing, vol. 6, pp. 902-912.
[3] Binod Kumar, Mukesh Kumar, Rakesh Kumar and Vinay Kumar (2014), "Determination of soil pH by using digital image processing technique," Journal of Applied and Natural Science, vol. 6, no. 1, pp. 14-18.
[4] Olena Buczko and Paweł Mikołajczak (2005), "Shape analysis of MR brain images based on the fractal dimension," Annales UMCS Informatica AI, vol. 3, pp. 153-158.
[5] E. E. Schulte and K. A. Kelling (1993), "Soil calcium to magnesium ratios," University of Wisconsin-Extension, SR-11-93.
[6] Karisiddappa, Ramegowda and Shridhara (2010), "Soil characterization based on digital image analysis," Indian Geotechnical Conference, GEOtrendz, December 16-18, 2010, IGS Mumbai Chapter & IIT Bombay.
[7] Kshitija S. Naphade, "Soil characterization using digital image processing," a thesis presented to the Graduate and Research Committee of Lehigh University.
[8] Qihao Weng and Xuefei Hu (2008), "Medium spatial resolution satellite imagery for estimating and mapping urban impervious surfaces using LSMA and ANN," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 8.
[9] Chin-Yuan Lien, Chien-Chuan Huang, Pei-Yin Chen and Yi-Fan Lin (2013), "An efficient denoising architecture for removal of impulse noise in images," IEEE Transactions on Computers, April 2013.
[10] Sajan P. Philip, S. P. Prakesh and S. Valarmathy (2013), "An efficient decision tree based architecture for random impulse noise removal in images," International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, April 2013.


Blind Multiuser Detection Using OFCDM

Sujitha J#1 and Baskaran K*2
#Assistant Professor, Department of ECE, Sri Ramakrishna Engineering College, Coimbatore, India
*Associate Professor, Department of EEE, Government College of Technology, Coimbatore, India
Abstract The main objective of future communication systems is to provide extremely high-speed data transmission. Broadband Orthogonal Frequency Code Division Multiplexing (OFCDM) with two dimensional spreading is becoming a very promising multiple access technique for high-rate data transmission in fourth generation communication systems due to its advantages over other access schemes. Numerous previous studies have mainly investigated OFCDM performance based on subcarrier allocation and variable spreading factor; however, several system parameters may affect the performance of the system. In OFCDM, one of the vital processes is Multi User Detection (MUD). This article develops an OFCDM model with a blind technique for multiuser detection over a Rayleigh fading channel, a realistic channel in wireless communication systems, under different modulation techniques and different numbers of users. The proposed system is realized as a computer simulation using MATLAB programming in order to show the impact of some parameters on OFCDM performance, including different modulation schemes, spreading factor and number of users. Empirical results demonstrate that the proposed blind multiuser detection method offers substantial gains over other blind multiuser detection methods and performs better in terms of bit error rate. The results also suggest that the OFCDM system with BPSK modulation outperforms the other modulation schemes.
Index Terms Blind Multiuser Detection (BMUD), OFCDM, Rayleigh Fading, Two Dimensional Spreading, Walsh Hadamard Code.


I. INTRODUCTION

In the present scenario, wireless communication technology is an emerging field which has grown rapidly to satisfy the high demand for information technology and multimedia services. The enormous uptake of mobile phone technology and wireless local area networks, and the exponential growth of the web, have resulted in an increased demand for new techniques to obtain high capacity wireless networks [1]. The two major candidates for 4G wireless communication systems are WiMAX 802.16e and Long Term Evolution (LTE), with transmission rates up to 100 Mbps with full mobility and 1 Gbps with limited mobility. Various multi user access schemes are offered for broadband downlink transmission in 4G systems, namely Code Division Multiple Access (CDMA) and Orthogonal Frequency Division Multiplexing (OFDM) [2].
The CDMA access scheme is not so appropriate for implementation in wireless broadband systems and for supporting 4G applications because it generates too much Multipath Interference (MPI). The OFDM system is a multicarrier approach which has drawn a lot of attention in high-speed wireless networks: OFDM is well suited to broadband channels with high data rates, and the system can combat MPI. Although OFDM is an attractive tool for broadband channels, it does not have coherent frequency diversity, which leaves subcarriers vulnerable to interference from nearby cells [3]. To resolve the drawbacks of OFDM and CDMA, a combination of the two has been introduced as a promising multiple access scheme for high data rates in future 4G wireless technologies, named Orthogonal Frequency Code Division Multiplexing (OFCDM). OFCDM is one of the attractive multi user access schemes that has received much attention among researchers and has been regarded as a promising candidate for downlink transmission in future 4G wireless communication systems. OFCDM not only provides all the benefits of OFDM but also exploits the spreading feature of CDMA, without the adverse effect of increasing frequency selectivity in the channel [4]. Furthermore, OFCDM uses two dimensional spreading codes, spreading data in both the time and frequency domains. In an OFCDM system with both time and frequency domain spreading, the total spreading factor (Stot) is the product of the time domain spreading factor (ST) and the frequency domain spreading factor (SF), i.e., Stot = ST x SF. Blind multiuser detection for OFCDM has been a hot topic of interest for a long time [14], for two core reasons. First, the receiver does not have any knowledge of the channel of propagation or of the signals except for the spreading codes. Second, the available bandwidth is better utilized: blind detection avoids the need for any training or pilot signals.
This paper proposes blind multiuser detection for the OFCDM system. Several recent works have addressed the use of blind methods for MUD in other access schemes. Two step blind multiuser reception for Direct Sequence Code Division Multiple Access (DS-CDMA) systems is introduced by Gopalakrishnan [5]. The technique uses a hybrid approach to the blind reception of linear multiple-input multiple-output (MIMO) systems where the channels exhibit small scale flat fading; the receiver has knowledge of only the spreading code of interest. An improved independent component analysis (ICA) based CDMA receiver for multiple access communication channels [6] reduces the bias caused by the channel noise in ordinary ICA algorithms and further decreases the noise by dimension reduction; the FastICA algorithm is asymptotically efficient, and its accuracy, given by the residual error variance, attains the Cramer-Rao lower bound. Channel estimation in OFDM using pilot based block type channel estimation with LS and MMSE algorithms is introduced in [7]. The algorithm starts with comparisons of OFDM using BPSK and QPSK on different channels, followed by modeling of the LS and MMSE estimators; the authors conclude that the LS algorithm gives lower complexity but the MMSE algorithm provides comparatively better results. Yiqing Zhou et al. [5] have developed a model for the downlink performance of the OFCDM system with hybrid multi-code interference (MCI) cancellation and minimum mean square error (MMSE) detection, in which the MMSE weights are derived and updated stage by stage of MCI cancellation; the experimental results show that the hybrid detection scheme performs much better than pure MMSE when good channel estimation is guaranteed. Nasaruddin et al. [8] have introduced a model to investigate the impact of many parameters on the performance of OFCDM over a Rayleigh fading channel; the parameters are the number of carriers, symbol size, symbol rate, bit rate, guard interval size and spreading factor. The results show that a higher number of carriers, larger symbol size, higher symbol rate, higher bit rate and larger spreading factor give better system performance in terms of Bit Error Rate (BER), whereas a larger guard interval gives worse performance. Honig et al. [9] have established a canonical representation for blind multiuser detectors and used stochastic gradient algorithms such as LMS to implement the blind adaptive mean output energy (MOE) detector. Subspace blind adaptive multiuser detection for CDMA is introduced by Roy et al. [10]: the MOE detector has a smaller eigenvalue spread than the training-based adaptive LMS detector, so the blind LMS algorithm always provides faster convergence than the training driven LMS-MMSE receiver, but at the cost of increased tap-weight fluctuation or misadjustment. Zhang et al. [11] proposed a simple and effective state space model for the multiuser detection problem in a stationary or slowly fading channel and employed the Kalman filter as the adaptive algorithm; this detector demonstrates lower steady-state excess output energy in adaptation when compared with the LMS and RLS techniques.
The contribution of this work is threefold. First, the paper shows that, based on the mean square error, the detector can be obtained blindly. The second contribution is the implementation of the OFCDM system with the blind concept using three different modulation schemes, BPSK, QPSK and 16-QAM, over a Rayleigh fading channel and an Additive White Gaussian Noise (AWGN) channel. A broadband channel is typically affected by severe multipath propagation due to scattering from objects around the mobile station; the scattering generates fluctuation of the received signal envelope that is Rayleigh distributed. It is seen that, employing this blind algorithm for multiuser detection in OFCDM, BPSK produces the minimum error compared with the other modulation schemes. The third contribution is to find the impact of various parameters such as the Spreading Factor (SF), different numbers of users and the two different channels. Experimental results show that the proposed system performs best in terms of Bit Error Rate (BER) with BPSK modulation over the Rayleigh fading channel.
The rest of this paper is organized as follows: Section II describes the OFCDM system model. The experimental results and performance comparison are given in Section III. Finally, Section IV concludes the paper, followed by the relevant references.
II. SYSTEM DESIGN MODEL
A simple simulation model of blind multiuser detection for the OFCDM system is developed in this paper to investigate the performance of the OFCDM system. In the proposed model, the input data is generated using a random data generator. This randomly generated data is passed through the OFCDM transmitter, where 2D spreading is carried out. The OFCDM receiver performs the reverse process of the OFCDM transmitter. Finally, the BER is calculated by comparing the transmitted data with the received data.
A. OFCDM Transmitter
The OFCDM transmitter block diagram is described in Fig. 1. Input information bits (N) are generated randomly using a random data generator [8], [12]. The input data is first converted from serial to parallel (S/P) form into MB streams. The output of the serial to parallel converter is a matrix of data bits whose rows correspond to the subcarriers to be used for every symbol. The parallel data streams are then processed by a modulation unit according to the chosen modulation method. The proposed blind multiuser detection system uses three different modulation techniques, namely BPSK, QPSK and 16-QAM, to find the most suitable modulator for OFCDM. Every symbol of the modulated data streams is spread with ST chips in the time domain and SF chips in the frequency domain; the total spreading factor is therefore expressed as Stot = ST x SF.
Fig.1 Transmitter Structure for OFCDM
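A minimal base-MATLAB sketch of the two dimensional spreading of one data symbol is given below; the spreading factors and the choice of Walsh-Hadamard rows are assumptions for illustration:

ST = 4; SF = 4;                      % assumed spreading factors, Stot = 16
Ht = hadamard(ST); Hf = hadamard(SF);
cT = Ht(2, :);                       % time domain Walsh code (assumed row)
cF = Hf(3, :);                       % frequency domain Walsh code (assumed row)
d  = 1 - 2*randi([0 1]);             % one random BPSK data symbol
chips = d * (cF.' * cT);             % SF-by-ST chip block: rows map to
                                     % subcarriers, columns to OFCDM symbols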
At the same time, each pilot symbol of the streams is spread with ST chips in the time domain only. The code multiplexer unit combines all code channels. After multiplexing, the N chips are mapped onto N subcarriers and transmitted in parallel; an N-point IFFT carries out this operation, producing an effective OFCDM symbol of N samples. The use of the IFFT guarantees orthogonality among the subcarriers and gives computational efficiency.
In a realistic broadband channel, signal transmission takes place in the atmosphere and near the ground, and a signal can travel from transmitter to receiver over multiple fading paths. Therefore, the proposed system considers an Additive White Gaussian Noise (AWGN) channel and a Rayleigh fading channel for simulation. Furthermore, noise present in the medium affects the signal and creates distortion of the information content. The channel simulation allows examination of the noise effect and multipath.
As described above, there are MB = N/ST modulated symbols spread in the frequency domain at the same time. Thus, the transmitted OFCDM data symbol on the mth subcarrier of the ith symbol is expressed as

(1)

where P is the signal power of the data code and the composite spread data term is represented by

(2)
where Dq denotes the data symbol of the qth code channel, Sm,i represents the ith chip on the mth subcarrier, and fm is the center frequency of the mth subcarrier. In order to achieve effective channel estimation, a code multiplexed pilot channel is used on each subcarrier. Pilot symbols are spread in the time domain only. The baseband pilot data of the mth subcarrier on the ith OFCDM symbol can be given by

(3)

(4)

where the leading factor is the power ratio of the pilot to one data channel and Dp denotes the known pilot symbol. The total baseband equivalent transmitted signal over a duration of ST OFCDM symbols is expressed as

(5)
Although the wireless communication channel is highly frequency selective, the transmitted signal on each subcarrier experiences flat fading. The fading rate is assumed to be slow, so that the channel is approximately fixed over S_T OFCDM symbols. H_m denotes the complex channel fading of the m-th subcarrier over the S_T symbols. The amplitude of H_m is assumed to be Rayleigh distributed with E{|H_m|^2} = 1 and the phase uniformly distributed in [0, 2pi]. Additionally, the correlation coefficient between H_m and H_m' is given by (6), where delta_f denotes the frequency separation between the m-th and m'-th subcarriers, f_c is the coherence bandwidth of the channel, and (.)* represents the conjugate operation.
B. OFCDM Receiver

The receiver of the proposed OFCDM system is shown in Fig. 2. The received OFCDM signal is processed by the receiver until the original data output is recovered. The signal arriving at the receiver is usually corrupted by noise and channel distortion. The received OFCDM signal is first converted from the time domain to the frequency domain.
Fig.2 Receiver Structure for OFCDM
An N-point FFT realizes this operation. The receiver carries out the reverse process of the transmitter to decode the received OFCDM symbol. After the FFT block, the N chips carried by the N subcarriers are obtained. The output of the FFT is accumulated to carry out channel estimation. Since a time-domain spreading code is assigned to the pilot, the channel estimator is realized by summation in the time domain, and the estimated channel fading is used for channel equalization. The signal is then passed to the frequency-domain despreader for the data channels. After despreading in the frequency domain, the signal is despread in the time domain. Blind MUD is used to recover the symbols for demodulation [13]. Demodulation is the method by which the original information is recovered from the modulated signal at the receiver end. The signals are demodulated and Parallel-to-Serial (P/S) converted. Finally, the data bits of the desired code channel are obtained, and performance is evaluated by comparing the transmitted information with the received information for all code channels.
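Continuing the illustrative NumPy sketch from the transmitter section, the receiver steps just described might look as follows. This assumes a code-multiplexed pilot spread with the all-ones time-domain code was added on every subcarrier at the transmitter, and uses simple zero-forcing equalization as a stand-in for the paper's channel equalizer:

```python
import numpy as np

def ofcdm_receive(rx, c_time, c_freq, d_pilot=1.0):
    """rx: (S_T, N) received time-domain samples of one OFCDM block.
    Returns despread decision variables for one data code channel.
    Assumes the pilot uses the all-ones time-domain code and the data
    channel uses a +/-1 time code that sums to zero over S_T chips."""
    S_T, N = rx.shape
    S_F = len(c_freq)
    M_B = N // S_F

    # Back to the frequency domain: N chips on N subcarriers per symbol
    R = np.fft.fft(rx, N, axis=1)            # (S_T, N)

    # Pilot-based channel estimate: summing over the S_T symbols cancels
    # the zero-sum data codes, leaving pilot * channel (+ noise)
    H_hat = R.sum(axis=0) / (S_T * d_pilot)  # (N,)

    # Equalize, then despread in frequency and in time
    Req = R / H_hat                          # zero-forcing equalization
    grid = Req.reshape(S_T, M_B, S_F)
    after_freq = grid @ c_freq / S_F         # (S_T, M_B)
    decision = (c_time @ after_freq) / S_T   # (M_B,) soft decisions
    return decision
```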
The signal received at the m-th subcarrier of the i-th OFCDM symbol is given by (7), in which the additive term is the complex white Gaussian noise on the m-th subcarrier; the signal and the noise are assumed to be independent. The received signal for the m-th subcarrier then takes the form of (8), which contains three terms: the first and second terms stand for the data and pilot signals respectively, and the third term is noise with the variance given in (9).
Channel estimation is necessary to demodulate the data signal. Since the pilot signals are spread in the time domain only, the output of the Fast Fourier Transform is fed to a time-domain summation over the pilot channel to perform channel estimation. The resultant output of the summation of the pilot signal is given by (10). Equation (10) shows that there is no interference from the data channels, owing to the orthogonality of the time-domain spreading codes; N_p is the noise variance term. The channel estimate over the S_T OFCDM symbols (i = 0, 1, ..., S_T - 1) follows, together with the channel estimation error and its variance.

To take advantage of frequency-domain orthogonality, frequency-domain despreading is carried out first. The proposed system uses the same time-domain spreading code and different frequency-domain spreading codes for the data channels. The K_C interfering codes in F can be indexed as K = K_F S_T, K_F = 1, 2, 3, ..., K_C. The term I_T,0 is the interference contribution from the K_C code channels in T. The OFCDM signal is then despread in the time domain as in (15).
Blind Minimum Mean Square Error (MMSE) detection is employed to recover the data symbol on each code channel. In the blind MMSE expression, x is the received data symbol to be detected, P_k is the received data of the k-th user and SF_k is the spreading chip sequence of the k-th user.
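The detector expression itself is not legible in this copy; as a hedged illustration, a standard blind MMSE (minimum-output-energy) detector can be built from the received-signal sample covariance and the desired user's spreading code alone, which is what makes the detector blind:

```python
import numpy as np

def blind_mmse_detect(X, s0):
    """X:  (L, n) matrix of n received chip vectors of length L.
    s0: length-L spreading code of the desired user (unit norm).
    Returns soft symbol estimates for the desired user."""
    L, n = X.shape
    # Sample covariance of the received signal (no training data needed)
    R = X @ X.conj().T / n + 1e-6 * np.eye(L)   # diagonal loading
    # MMSE/MOE weight vector: w = R^-1 s0 / (s0^H R^-1 s0)
    Rinv_s = np.linalg.solve(R, s0)
    w = Rinv_s / (s0.conj() @ Rinv_s)
    return w.conj() @ X                          # (n,) soft outputs
```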
C. Two Dimensional Spreading
As mentioned before, the total spreading factor of OFCDM with two-dimensional spreading is S_tot = S_T x S_F. The modulated symbol transmitted on the p-th (p = 0, ..., P-1) code channel is spread by a one-dimensional code of length S_tot, denoted C(p) = {C_0(p), C_1(p), ..., C_{S_tot-1}(p)} [13], [15]. The S_tot chips of the code consist of S_T chips per subcarrier over S_F subcarriers. A square matrix of size h with elements +1 and -1, whose distinct row vectors are mutually orthogonal, is referred to as a Hadamard matrix of order h. The Hadamard matrix H_2n is a 2n x 2n matrix such that the first row contains all +1s and each of the other rows contains n entries equal to +1 and n entries equal to -1; the rows of the Hadamard matrix are therefore mutually orthogonal. The matrix H_2n is formed recursively from H_n down to H_2, via H_2n = [H_n, H_n; H_n, -H_n]. This code completely fulfills orthogonality between code channels. For the simulation, Hadamard codes are chosen as the spreading codes: a frequency-domain Hadamard spreading code of length S_F and a corresponding time-domain Hadamard spreading code of length S_T, related by S_tot = S_T x S_F, so that the p-th code channel employs a two-dimensional code formed from the pair.
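The recursive construction just described (the Sylvester construction) and the composition of a two-dimensional code from one time-domain row and one frequency-domain row can be sketched as follows; the channel indices p_T and p_F are illustrative:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H_2n = [[H_n, H_n], [H_n, -H_n]],
    starting from H_1 = [1]. n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

S_T, S_F = 4, 4
H_T = hadamard(S_T)          # time-domain codes (rows)
H_F = hadamard(S_F)          # frequency-domain codes (rows)

# Rows of a Hadamard matrix are mutually orthogonal:
assert np.array_equal(H_T @ H_T.T, S_T * np.eye(S_T, dtype=int))

# Two-dimensional code of one channel: outer product of a frequency-domain
# row and a time-domain row, giving S_tot = S_T * S_F chips in total
p_T, p_F = 1, 2                      # illustrative row choices
C2D = np.outer(H_F[p_F], H_T[p_T])   # (S_F, S_T) chip matrix
print(C2D)
```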
III. EXPERIMENTAL RESULTS AND DISCUSSIONS
Simulations are conducted to evaluate the performance of the proposed blind multiuser detection for the OFCDM-based communication system using MATLAB 7.10, over an AWGN channel and a Rayleigh fading channel, for 4, 8, 16 and 32 users with the BPSK, QPSK and 16 QAM modulation schemes [7], [8]. The impact of several parameters is also investigated. The experimental results show the BER performance for different modulations, numbers of users and spreading factors. The parameters employed in this simulation are shown in Table 1. The empirical results confirm that the proposed method performs best with BPSK, its BER performance being superior to that of the other modulation schemes. The proposed blind multiuser system therefore suggests that the BPSK modulation scheme is the best choice for the OFCDM system.
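The BER curves discussed below are obtained from Monte-Carlo simulations of this kind. The following skeleton is not the authors' script; it only shows the general structure of such a simulation for the simplest case, BPSK over a flat Rayleigh fading channel with ideal channel knowledge:

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_bpsk_rayleigh(ebno_db, n_bits=200_000):
    """Simulated BER of BPSK over a flat Rayleigh fading channel
    with ideal channel knowledge (a baseline sanity check)."""
    ebno = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, n_bits)
    x = 1.0 - 2.0 * bits
    # Unit-power complex Rayleigh fading and complex Gaussian noise
    h = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2)
    w = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2 * ebno)
    y = h * x + w
    # Coherent detection: project onto the known channel, slice the sign
    bits_hat = (np.real(np.conj(h) * y) < 0).astype(int)
    return np.mean(bits != bits_hat)

for ebno_db in (10, 14, 20):
    print(ebno_db, "dB ->", ber_bpsk_rayleigh(ebno_db))
```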
Table I Simulation Parameters

A. Impact of Number of Users


Fig. 3 and Fig. 4 show the comparison of the BER of OFCDM over the Rayleigh channel for the three modulation schemes as influenced by the number of users, at spreading factors 4 and 8 respectively. To evaluate the impact of the number of users, four different user counts are simulated: 4, 8, 16 and 32. For example, when Eb/N0 is equal to 14 dB and SF = 4, the BER for 4, 8, 16 and 32 users is 2.5x10^-6, 2.5x10^-6, 7.5x10^-6 and 7.1x10^-6 with BPSK modulation; 1.8x10^-4, 1.1x10^-4, 5.69x10^-5 and 7.22x10^-5 with QPSK modulation; and 2.2x10^-4, 2.1x10^-4, 1.5x10^-4 and 1.9x10^-4 with 16 QAM modulation, respectively. From both graphs, it can be seen that the performance of the system with BPSK modulation is better, producing a smaller BER than the other modulations. It also shows that the OFCDM system with BPSK modulation achieves better BER performance for larger numbers of users than the other modulations.
Fig. 3 BER vs Eb/N0 by varying the number of users (4, 8, 16, 32) at SF = 4: (a) BPSK, (b) QPSK, (c) 16 QAM [plots omitted]
Fig. 4 BER vs Eb/N0 by varying the number of users (4, 8, 16, 32) at SF = 8: (a) BPSK [remaining plots omitted]
B. Impact of Constellation
The BER against Eb/N0 has been simulated for the different modulations. Fig. 5 and Fig. 6 show the simulation results for the performance of OFCDM as affected by the modulation over the Rayleigh channel at spreading factors 4 and 8 and different numbers of users. Based on Fig. 5 and Fig. 6, the best performance is obtained with BPSK modulation, whose BER versus Eb/N0 is lower than that of QPSK and 16 QAM. For instance, when Eb/N0 is equal to 16 dB and SF = 4, the BER with BPSK, QPSK and 16 QAM for 32 users is 6.25x10^-7, 3.84x10^-5 and 1.1x10^-4 respectively. When Eb/N0 is equal to 20 dB and SF = 8, the BER with BPSK, QPSK and 16 QAM for 32 users is 6.25x10^-7, 1.63x10^-5 and 1.28x10^-4 respectively. It is worth noting that the performance of OFCDM is also affected by the type of modulation; this is a crucial factor for the successful transmission of an OFCDM-based communication system.

Fig. 5 BER vs Eb/N0 by varying the modulation (BPSK, QPSK, 16QAM) at SF = 4: (a) 4 users, (b) 8 users, (c) 16 users, (d) 32 users [plots omitted]

C. Impact of Spreading Factor
The spreading factor is one of the most important parameters in an OFCDM system. The BER against Eb/N0 has been simulated for two different spreading factors; Fig. 7 shows the simulation results for the performance of OFCDM as affected by the spreading factor over the Rayleigh fading channel, with the SF set to 4 and 8. From Fig. 7, it is observed that the higher value of the spreading factor gives better performance because it provides better interference rejection. As a result of the comparison, when Eb/N0 is equal to 28 dB with 4 users, the BER for SF of 4 and 8 is 1.56x10^-6 and 3.13x10^-7 respectively. Therefore, the BER improves as the SF increases, since the spreading codes can cancel correlated noise.

Fig. 7 BER vs Eb/N0 by varying the spreading factor (SF = 4, SF = 8): (a) 4 users, (b) 32 users [plots omitted]
D. Impact of Channel Model


The comparison of the BER at different SNRs with the BPSK constellation in the OFCDM simulation, using the two channel models and the two spreading factors, is depicted in Fig. 8 and Fig. 9. The results reveal that at small SNR values the calculated BER is quite large because of the relatively high noise power; the BER performance therefore improves as the SNR increases, as shown.
Fig. 8 Comparison of BER for SF = 4 with BPSK over the AWGN and Rayleigh channels, for 4, 8, 16 and 32 users [plots omitted]


Fig. 9 Comparison of BER for SF = 8 with BPSK over the AWGN and Rayleigh channels, for 4, 8, 16 and 32 users [plots omitted]
E. Comparison of BER Performance


Table II compares the BER performance of the OFCDM system over the Rayleigh channel. From the table, it is observed that the proposed blind multiuser system with BPSK modulation achieves a better BER than the other modulations.
IV. Conclusion

This paper proposed a novel blind algorithm for multiuser detection of OFCDM in a frequency-selective fading channel. The proposed system is simulated in MATLAB 7.10 and the performance of the proposed blind multiuser detector is evaluated for various parameters: the number of users, different modulation schemes, different channel models and the spreading factor. From the comparison results, it is clear that the proposed blind multiuser system gives better OFCDM performance over the Rayleigh fading channel for larger numbers of users and a larger spreading factor. The system also performs better with the BPSK constellation than with the QPSK and 16 QAM modulation schemes for more users and a large spreading factor. In future, the algorithm can be tested with evolutionary algorithms for MIMO systems.
REFERENCES
[1] S. Verdu, Multiuser Detection, Cambridge, UK: Cambridge University Press, 1998.
[2] Yiqing Zhou, Tung-Sang Ng, Jiangzhou Wang, Kenichi Higuchi and Mamoru Sawahashi, "OFCDM: A Promising Broadband Wireless Access Technique," IEEE Communications Magazine, Vol. 46, no. 3, pp. 38-49, March 2008.
[3] S. M. Zafi S. Shah, A. W. Umrani and Aftab A. Memon, "Performance Comparison of OFDM, MC-CDMA and OFCDM for 4G Wireless Broadband Access and Beyond," PIERS Proceedings, Marrakesh, Morocco, pp. 1396-1399, 2011.
[4] Yiqing Zhou, Jiangzhou Wang and Mamoru Sawahashi, "Downlink Transmission of Broadband OFCDM Systems - Part I: Hybrid Detection," IEEE Transactions on Communications, Vol. 53, no. 4, pp. 718-729, April 2005.
[5] E. Gopalakrishna Sarma and Sakuntala S. Pillai, "A Robust Technique for Blind Multiuser CDMA Detection in Fading Channels," International Journal of Hybrid Information Technology, Vol. 4, no. 2, pp. 13-22, 2011.
[6] Zheng Mao, Zheng Yu-Li and Yuan Ji-Bing, "Improved Independent Component Analysis for Blind Multiuser Detector," 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), Vol. 5, pp. 207-210, 2010.
[7] Sajjad Ahmed Ghauri, Sheraz Alam, M. Farhan Sohail, Asad Ali and Faizan Saleem, "Implementation of OFDM and Channel Estimation using LS and MMSE Estimators," International Journal of Computer and Electronics Research, Vol. 2, no. 1, pp. 41-46, 2013.
[8] Nasaruddin, Melinda and Ellsa Fitria Sari, "A Model to Investigate Performance of Orthogonal Frequency Code Division Multiplexing," TELKOMNIKA, Vol. 10, no. 3, pp. 579-585, 2012.
[9] M. L. Honig, U. Madhow, and S. Verdu, "Blind adaptive multiuser detection," IEEE Trans. Inform. Theory, Vol. 41, pp. 944-960, 1995.
[10] S. Roy, "Subspace blind adaptive multiuser detection for CDMA," IEEE Transactions on Communications, Vol. 48, no. 1, pp. 169-175, 2000.
[11] P. Shi, H. Li and M. Ren, "Multiuser detector based on blind adaptive Kalman filtering," Computer Engineering and Applications, Vol. 48, pp. 131-134, 2012. (In Chinese)
[12] Y. Zhou, Tung-Sang Ng, J. Wang, K. Higuchi and M. Sawahashi, "Downlink Transmission of Broadband OFCDM Systems - Part V: Code Assignment," IEEE Transactions on Wireless Communications, Vol. 7, Issue 11, pp. 4546-4557, 2008.
[13] Bin Hu, Lie-Liang Yang and Lajos Hanzo, "Time- and Frequency-Domain-Spread Generalized Multicarrier DS-CDMA Using Subspace-Based Blind and Group-Blind Space-Time Multiuser Detection," IEEE Transactions on Vehicular Technology, Vol. 57, no. 5, pp. 3235-3241, 2008.
[14] Khalifa Hunki, Ehab M. Shaheen, Mohamed Soliman and K. A. El-Barbary, "Performance Comparison of Blind Adaptive Multiuser Detection Algorithms," International Journal of Research in Engineering and Technology, Vol. 2, Issue 11, pp. 454-461, 2013.
[15] Zia Muhammad and Zhi Ding, "Blind Multiuser Detection for Synchronous High Rate Space-Time Block Coded Transmission," IEEE Transactions on Wireless Communications, Vol. 10, no. 7, pp. 2171-2185, 2011.


Human Activity Recognition by Trilateration HMSA
Yogesh Kumar M, Student, M.E (Computer Science Engineering), MKCE, Karur, Tamilnadu, India, yogesh301991@gmail.com
Sindhanai Selvan K, Asst. Prof., M.E (Computer Science Engineering), MKCE, Karur, Tamilnadu, India, sindhanai@gmail.com
Abstract Smart devices such as smartphone applications are able to provide the functions of a pedometer through the accelerometer. To attain high accuracy, the devices have to be worn at specific on-body locations, such as on an armband or in footwear. In practice, people carry smart devices such as smartphones in many different positions, making these devices impractical to use because of the reduced accuracy. Using the embedded smartphone accelerometer in a low-power mode, an algorithm named Energy-efficient Real-time Smartphone Pedometer accurately and energy-efficiently infers the concurrent person step count within 2 seconds. The technique involves extracting 5 features from the smartphone 3D accelerometer without the need for noise filtering or an exact smartphone on-body placement and orientation; the classification accuracy of the Energy-efficient Real-time Smartphone Pedometer is around 94% when validated using data collected from 17 volunteers.
Key words Pedometer, Accelerometer, Smartphone, Activity classification

I. INTRODUCTION
Smartphones provide sophisticated real-time sensor data for processing. Researchers have studied a large number of sensors, such as the accelerometer, gyroscope, rotation vector and orientation sensors, in person step-count projects. Of these, the accelerometer is the most valuable non-transceiver sensor for providing data for activity monitoring, as it gives the most information about movement forces. Therefore the core focus of this system is on using solely the smartphone accelerometer for person step count. The motivation for MBS, in contrast to LBS, includes:
1. Adapting dynamically the types of mobility information services based upon the travel mode, e.g., a pedestrian map triggered after detecting walking shows safer places to cross roads, whereas a motorist map focuses more on main road routes.
2. Mobility-profile-driven social and societal behaviour analysis changes via gamification and incentives, e.g., to promote greater low-carbon transportation modes and low-energy transport usage.
3. Real-time human mobility profiling, such as determining the degree of physical exercise, the usage patterns for types of public and private transport, low-carbon transport usage and the time spent at a location. (This latter aspect can indirectly indicate human activities and even personal preferences at that location, e.g., spending more time near one shop location rather than another can indicate shopping and a greater user preference or interest for one shop as compared to another.)
4. Human-activity-driven system control and optimization, e.g., switching off power-hungry location sensors such as the GPS receiver and Wi-Fi when out of range, i.e., when travelling in an underground train.
The accelerometer has three main advantages over transceiver-based location sensors such as GPS. First, it has a low energy consumption of 60 mW. Second, there is no delay when starting the accelerometer, whereas receiving position updates with GPS depends on the start mode: in a hot start mode the Time-to-Subsequent-Fix is about 10 seconds, and in a cold start mode the Time-to-First-Fix can take up to 15 minutes. Third, sensor readings are continuously available with the accelerometer, as compared to GPS and Wi-Fi, which can be cut off from the signals transmitted by GPS satellites or be out of range of Wi-Fi signals, respectively. Person movement classification using smartphones requires a movement state recognition technique that can function regardless of the position of the smartphone, because placing accelerometers on exact parts of the body makes them impractical for use in the real world. Acceleration data also differs for similar behaviours, thus making it harder to finely discriminate between certain types of activity. Limits have been found in the range of movement activities identified by use of only one sensor and, due to the complexity of person movement and the noise of sensor signals, activity classification algorithms tend to be probabilistic. Some researchers have instead designed multi-modal sensor boards that concurrently capture data from many sensors. A major challenge in the design of ubiquitous, context-aware smartphone applications is the development of algorithms that can detect the person's activity using noisy and ambiguous sensor data. Here, a technique called the Energy-efficient Real-time Smartphone Pedometer (ERSP), an Android-based smartphone application, is used to accurately count person steps. The novelty of this research as compared to existing systems is that ERSP extracts five features and employs an energy-efficient lightweight arithmetical model to process the activity accelerometer data in real time, with no need for noise filtering, and works regardless of the smartphone on-body placement and orientation.
2. RELATED WORK
Takamasa Higuchi, Hirozumi Yamaguchi and Teruo Higashino proposed a novel social navigation framework, called PCN, that leads users to their friends in a crowd of neighbours. PCN provides the relative positions of surrounding people based on sensor readings and Bluetooth RSS, both of which can be easily obtained via off-the-shelf mobile phones. Through a field experiment at a real trade fair, they demonstrated that PCN improves positioning accuracy by 31% compared to a conventional approach, owing to its context-supported error correction mechanism. Furthermore, they showed that the geometrical clusters in the estimated positions are highly consistent with actual activity groups, which helps users to easily identify actual nearby people.
Emiliano Miluzzo, Nicholas D. Lane, Kristof Fodor, Ronald Peterson, Mirco Musolesi, Shane B. Eisenman, Xiao Zheng, Hong Lu and Andrew T. Campbell presented the implementation, evaluation and user experiences of the CenceMe application, which represents one of the first applications to automatically retrieve and publish sensing presence to social networks using Nokia N95 mobile phones. They described a complete system implementation of CenceMe with its performance evaluation, and discussed a number of significant design decisions needed to resolve various limitations that arise when trying to deploy an always-on sensing application on a commercial mobile phone. They also presented the results from a long-lived experiment in which CenceMe was used by 22 users for a three-week period, discussed the user study and lessons learned from the deployment of the application, and highlighted how the application might be improved moving forward.
Jialiu Lin, Yi Wang, Murali Annavaram, Quinn A. Jacobson, Jason Hong, Bhaskar Krishnamachari and Norman Sadeh presented the design, implementation and evaluation of an Energy Efficient Mobile Sensing System (EEMSS). The core of EEMSS is a sensor management scheme for mobile devices that operates sensors hierarchically, selectively turning on the minimum set of sensors needed to monitor the user state and triggering a new set of sensors if necessary to achieve state-transition detection. Energy consumption can be reduced by shutting down unnecessary sensors at any particular time. EEMSS was implemented on Nokia N95 devices, using the sensor management scheme to manage the built-in sensors of the N95, including GPS, Wi-Fi detector, accelerometer and microphone, in order to achieve person daily-activity recognition. They also proposed and implemented novel classification algorithms for accelerometer and microphone readings that work in real time and give good performance. Finally, they evaluated EEMSS with 10 users from two universities and were able to provide a high level of accuracy for state recognition and an acceptable state-transition detection latency, as well as a gain of more than 75% in device lifetime compared to an existing system.
Donnie H. Kim, Jeffrey Hightower, Ramesh Govindan and Deborah Estrin proposed Place Sense, which provides a significant improvement in the ability to discover and recognize places. Precision and recall with Place Sense are 89% and 92%, versus 82% precision and 65% recall for the previous state-of-the-art BeaconPrint approach. Because it uses response rate to select representative beacons and suppresses the influence of infrequent beacons, Place Sense's accuracy gains are particularly noticeable in challenging radio environments where beacons are inconsistent and coarse. Place Sense also detects place entry and exit times with over twice the accuracy of the previous approach, thanks to the sensible use of buffering and timing; it has the ability to overlap the exit fingerprint of one place with the entrance fingerprint of the following place. Lastly, Place Sense is accurate at discovering places visited for short durations or places where the device remains mobile. Accuracy in short-duration and transient places is an important contribution, since these types of places are valuable to emerging applications like life-logging and social location sharing.
Ionut Constandache, Romit Roy Choudhury and Injong Rhee observed that the growing popularity of location-based services calls for better quality of localization, including greater ubiquity, accuracy and energy-efficiency. Present localization schemes, although efficient in their target environments, may not scale to meet these evolving needs. Their system proposes CompAcc, a simple and practical method of localization using phone compasses and accelerometers. CompAcc's core idea has been known for centuries, yet its adoption to human-scale localization is not obvious.
3. ALGORITHMS
3.1 MOVEMENT CLASSIFICATION ALGORITHMS
Acceleration data also varies for similar activities, thus making it more difficult to finely differentiate certain types of activity. A major challenge in the design of ubiquitous, context-aware smartphone applications is the development of algorithms that can detect the person's movement state using noisy and ambiguous sensor data. Limits have been found in the range of movement activities recognized by the use of mainly a single sensor and, due to the difficulty of person movement and the noise of sensor signals, movement classification algorithms tend to be probabilistic.
3.2. ACCELEROMETER BASED ALGORITHM
This algorithm uses an energy-efficient, lightweight mathematical model to process the movement accelerometer data in real time without the need for noise filtering, and it works regardless of the smartphone on-body placement and orientation. The method adapts the standard Support Vector Machine (SVM) and exploits fixed-point arithmetic for computational cost reduction. In terms of person movement analysis, the accelerometer-based algorithm can be used separately or as part of a hybrid architecture, e.g., in a joint accelerometer and location determination approach.
3.3. ALGORITHMS FOR DIFFERENT PERSON MOVEMENT PATTERNS
The algorithm has to be able to adapt to the variations that occur as a user performs an activity, e.g., what is classified as walking for a certain group might be classified as jogging for another group. The first step involves personalizing EHMS by reconfiguring the algorithm based on the smartphone accelerometer data gathered for the specific activity. A comparison with the traditional SVM shows a significant improvement in terms of computational costs while maintaining similar accuracy, which can contribute to developing more sustainable systems for Ambient Intelligence. To personalize the application for a specific action, the user performs the activity for a one-off period of 14 seconds. Fourteen seconds was chosen because a minimum of 56 accelerometer samples is required to cover the T range from 0 to 6.

Accelerometer Noise Filtering

The Kalman filter was chosen owing to the algorithm's ability to efficiently compute accurate estimates of the true value from noisy measurements. The accelerometer readings give reasonably precise data for movement detection, and for this reason the Kalman filter algorithm is well suited to filtering the Gaussian process and aiding real-time person movement state estimation. There is also no need to retain historical measurements and estimates, as only the current estimate and its confidence level are required; it therefore takes less memory. From the unprocessed data, the same features are extracted for classification. To personalize the application for a specific action, the user performs the activity for a one-off period of 14 seconds.
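A minimal sketch of the scalar discrete Kalman filter described here, under an assumed random-walk state model and illustrative noise variances (the paper's exact tuning is not given), is:

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.5):
    """Scalar Kalman filter for noisy accelerometer samples z.
    q: assumed process-noise variance, r: measurement-noise variance.
    Only the current estimate and its confidence are kept in memory."""
    x, p = z[0], 1.0           # initial state estimate and covariance
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        p = p + q              # predict (random-walk state model)
        k = p / (p + r)        # Kalman gain
        x = x + k * (zi - x)   # update with the new measurement
        p = (1.0 - k) * p
        out[i] = x
    return out

# Example: a noisy 4 Hz accelerometer magnitude trace
rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.25)
z = 9.81 + np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.7, t.size)
print(kalman_smooth(z)[:5])
```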
4. RESULTS
The experiments conducted involved the study of accelerometer data gathered from various
activities. The key objective of the scientific experiment is to investigate features such as peaks and
troughs that can be extracted from the accelerometer readings to classify similar human mobility states
such as travel by light rail train vs. underground train. The data collection process was conducted by
15 participants for 12 different activities. In order to validate EHMS we required a wide range of
realistic user data to stress test the algorithm. The activities were selected because they were amongst
the most popular types of mobility and offered a wide range of normal urban commuting activities.
Table 2 shows the activities recorded by each user. EHMS uses aggregated classes for user activity
classification, e.g., although users 12 and 13 only performed two activities, their samples are classified
using the aggregated class (which has all 12 types of activities). Users 1 to 13 were permitted to carry
the smartphone regardless of the on-body placement. Users 14 and 15 had to place the smartphone
in predetermined body positions. This allowed us to study the differences in accelerometer readings
based upon different smartphone on-body placements and orientations. For each activity we used 1,250
training data points. This is equivalent to 312.5 seconds per activity. We chose 1,250 samples because
the data gathering process required each participant to perform an activity for a minimum of 360
seconds. It should be noted that we found 14 seconds (56 samples) was sufficiently long to personalize
the EHMS Android application for a specific user activity. In this paper real world data was gathered
using Android based smartphones. No accelerometer data noise filtering or data simulation was used.
In several cases even similar activities cannot be grouped together, e.g., it can be argued that different
kinds of low and high speed over ground trains will generate different human mobility profiles. We
selected a small subset of human mobility states for demonstration purposes since EHMS can be
dynamically applied to a wide range of human mobility states.

4.1 Accelerometer Noise Filtering

For optimum classification accuracy, a comparatively low sampling frequency of 4 Hz is used by EHMS and the window size for feature extraction is 2 seconds. If the frequency isn't 4 Hz then EHMS still uses eight accelerometer samples per cycle for classification, but will misclassify activities since the
uses eight accelerometer samples per cycle for classification, but will misclassify activities since the
window size is no longer 2 seconds. The Kalman filter is a parametric model that can be applied to
both stationary and in-motion human mobility data analysis [24]. We investigated whether or not a
discrete Kalman filter algorithm could filter the accelerometer noise thus ameliorating the activity
state detection accuracy estimation. We chose to use the Kalman filter due to the algorithm's ability to
efficiently compute accurate estimates of the true value given noisy measurements. The accelerometer
readings provide reasonably accurate data for mobility detection, and for this reason the Kalman filter
algorithm is well suited for filtering the Gaussian process and to aid in real-time human mobility state
prediction. Also there is no need to retain historical measurements and estimates as only the current
and confidence estimate levels are required.
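To make the sampling arithmetic concrete: at 4 Hz with a 2-second window, each classification decision uses eight samples. The sketch below shows the window slicing together with a few illustrative per-window features; the five exact EHMS features are not listed in this excerpt, so the ones here are assumptions:

```python
import numpy as np

FS = 4             # sampling frequency (Hz)
WINDOW_S = 2       # window length (seconds)
W = FS * WINDOW_S  # 8 samples per classification window

def window_features(mag):
    """mag: 1-D array of accelerometer magnitudes.
    Yields one illustrative feature vector per non-overlapping window."""
    n_windows = len(mag) // W
    for k in range(n_windows):
        w = mag[k * W:(k + 1) * W]
        yield np.array([w.mean(),                    # average force
                        w.std(),                     # variability
                        w.max() - w.min(),           # peak-to-trough range
                        np.abs(np.diff(w)).mean()])  # sample-to-sample change

rng = np.random.default_rng(3)
mag = 9.81 + rng.normal(0, 1.0, 4 * 60 * FS)  # 4 minutes of synthetic data
feats = np.vstack(list(window_features(mag)))
print(feats.shape)  # (120, 4): one row per 2-second window
```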
5. ANALYSIS
EHMS was evaluated against existing classifiers: J48, decision table (DT), bagging and naive Bayes. Fig. 1 shows the precision and recall comparison of EHMS versus these known existing classifiers, with and without personalization.

Fig.1 EHMS vs. known existing classifiers


The classifiers were trained using a data set composed of pre-classified accelerometer data for the following activities: light rail train, car, jogging, lying down, stationary and walking. To obtain a model for each classifier, the classifiers were trained using the same set of 1,250 accelerometer samples for every activity with 10-fold cross-validation. From the unprocessed data, the same features were extracted for classification, which takes less memory. Compared with the other existing classifiers, EHMS achieves better accuracy, as shown in the chart.
6. CONCLUSION
The system provides a concurrent person movement state classification algorithm without the need to reference historical data, and classifies the person's movement state regardless of the smartphone position and on-body placement. The proposed model is comparatively insensitive to noisy data. It was found that, even though the noise was reduced when Kalman filtering was applied, the computational features were suppressed in the output, making its use superfluous for classifying between different person movement states. The system provides lightweight accelerometer data feature extraction: EHMS extracts five novel features, including one derived feature, from the accelerometer data. Further, there is no need for a remote server link for computational purposes, as all processing is performed inside the smartphone. Energy efficiency is improved due to the small computational algorithms and the smartphone embedded accelerometer sensing mode at four samples per second.
7. REFERENCES
[1] V. Devadas and H. Aydin, "On the interplay of voltage/frequency scaling and device power management for frame-based real-time embedded applications," IEEE Trans. Comput., vol. 61, no. 1, pp. 31-44, Jan. 2012.
[2] I. Constandache, S. Gaonkar, M. Sayler, R. R. Choudhury, and L. Cox, "EnLoc: Energy-efficient localization for mobile phones," in Proc. 28th IEEE Int. Conf. Comput. Commun., 2009, pp. 2716-2720.
[3] F. A. Levinzon, "Fundamental noise limit of piezoelectric accelerometer," IEEE Sensors J., vol. 4, no. 1, pp. 108-111, Feb. 2004.
[4] T. Choudhury, G. Borriello, S. Consolvo, B. Harrison, J. Hightower, A. LaMarca, L. LeGrand, A. Rahimi, A. Rea, B. Hemingway, P. Klasnja, K. Koscher, J. A. Landay, J. Lester, D. Wyatt, and D. Haehnel, "The mobile sensing platform: An embedded activity recognition system," IEEE Pervasive Comput., vol. 7, no. 2, pp. 32-41, Apr.-Jun. 2008.
[5] H. Zeng and M. D. Natale, "An efficient formulation of the real-time feasibility region for design optimization," IEEE Trans. Comput., vol. 62, no. 4, pp. 644-661, Apr. 2013.
[6] G. Hache, E. D. Lemaire, and N. Baddour, "Movement change-of-state detection using a smartphone-based approach," in Proc. IEEE Int. Workshop Med. Meas. Appl., 2010, pp. 43-46.
[7] A. M. Khan, Y.-K. Lee, S. Y. Lee, and T.-S. Kim, "Human activity recognition via an accelerometer-enabled smartphone using kernel discriminant analysis," in Proc. 5th Int. Conf. Future Inf. Technol., 2010, pp. 1-6.
[8] M. B. Kjaergaard, J. Langdal, T. Godsk, and T. Toftkjaer, "EnTracked: Energy-efficient robust position tracking for mobile devices," in Proc. 7th Int. Conf. Mobile Syst., Appl., Serv., 2009, pp. 221-234.
[9] T. O. Oshin, S. Poslad, and A. Ma, "A method to evaluate the energy-efficiency of wide-area location determination techniques used by smartphones," in Proc. 15th IEEE Int. Conf. Comput. Sci. Eng., 2012, pp. 326-333.


Reactive Power Compensation and Voltage Fluctuation Mitigation Using Fuzzy Logic in Micro Grid
Gokul Kumar M, PG Student; M. Sangeetha, Assistant Professor, EEE
The Kavery Engineering College
Abstract During the past two decades, the increase in electrical energy demand has placed higher requirements on the power industry; more power plants, substations and transmission lines need to be constructed. However, the most commonly used switching devices in the present power grid are mechanically controlled circuit breakers. Their long switching periods and discrete operation make it difficult for them to handle frequently changing loads smoothly and to damp out transient oscillations quickly, and this increases the complexity of the system. Therefore, investment is necessary in studies into the security and stability of the power grid, as well as improved control schemes for the transmission system. Different approaches, such as reactive power compensation and phase shifting, have been applied to increase the stability and security of power systems.
Key words STATCOM, Microgrid, Inverter, Active filters, Fuzzy logic controller, Line impedance, Non-linear loads, Mitigation
I. INTRODUCTION
A microgrid has an AC bus and a DC bus, interconnected with a tie-line DC-AC converter. The AC bus is connected to wind power plants, a pico-hydro plant, local AC loads and the electricity grid with an islanding scheme. Power quality on the AC bus has to be maintained in both modes of operation of the microgrid (islanded and non-islanded); sudden islanding from the utility grid creates significant voltage disturbances on the AC bus. The AC bus has grid-tie inverters, AC-DC-AC converters and conventional synchronous generators as the sources supplying dynamic real power loads as well as reactive power loads. Supplying reactive power reduces the maximum amount of real power that the sources can deliver, resulting in poor utilization of their capacity. This creates the need for a dynamic reactive power source on the AC bus. STATCOM and SVC are both Flexible AC Transmission System (FACTS) devices that can be used to address the described problem. A STATCOM has a better response time and better transient stability than an SVC, which makes the STATCOM an ideal choice for a microgrid. This paper describes the modelling and optimization of a STATCOM on the AC bus of a microgrid. The paper begins by explaining the STATCOM as a potential solution to voltage fluctuations and reactive power demand on the AC bus, and extends to the control strategies required for the operation of the STATCOM.
II. VOLTAGE FLUCTUATION PROBLEM ON AC BUS
In the non-islanded mode of operation, in the absence of a STATCOM, local excess reactive power demand is supplied by the utility grid; sudden transients in the reactive power demand are taken care of by the utility grid and the AC bus voltage is maintained. However, in the islanded mode of operation, in the absence of a STATCOM, the reactive power demand must be supplied entirely by the converters of the power sources, such as wind power plants and solar plants, and by the conventional synchronous generators of the pico-hydro plants. With limited capability to supply the reactive power demand, the islanded AC bus of the microgrid shows drastic fluctuations in voltage. This creates the need for an AC-bus voltage regulating control system to be embedded in the STATCOM.
III. DESIGN OF STATCOM
A. Power Circuit
The power circuit contains the main DC-AC conversion topology. It consists of three parallel legs, each leg consisting of two IGBTs (FGA25N120NTD) which are switched using the switching pulses obtained from the driver circuit. A driver circuit is interfaced with the power circuit to ensure the required driving characteristics of the IGBTs. The IGBTs are switched at a frequency of 2 kHz. This leads to the problem of high voltage spikes across the switch due to circuit inductance, and it also leads to ringing. To eliminate this, an RC snubber circuit is used in the STATCOM circuit. When the switch opens, the snubber eliminates the voltage transients and ringing, as it provides an alternate path for the current flowing through the circuit's intrinsic leakage inductance. It also dissipates the energy in the resistor, and thus the junction temperature is reduced.
B. Control System
The STATCOM includes a 2-level voltage source inverter with a capacitor bank in the DC link. The voltage source inverter is driven by 3-phase SPWM waves, which include dead-band programming for the high-side and low-side IGBT circuits. The frequency, power angle and voltage magnitude of the STATCOM can all be controlled by controlling the SPWM waves. The STATCOM is synchronized to the utility grid using synchronizing control systems, which include:
1) Frequency control: A feedback of the line-to-line voltage of the grid is fed to the frequency measurement unit, and the measured frequency is then given to the SPWM generator. The response time of the frequency control system is crucial to avoid power instability.
2) Phase-lock control system: Feedback of the grid voltage is fed to the SPWM generator, and the SPWM is held in a constant phase relation (power angle) with respect to the grid voltage. The reference given to the phase control decides the real power transaction with the grid.
3) Charging and maintaining the capacitor voltage: With no active source on the DC side, charging of the DC link capacitor is done by consuming real power from the grid (Fig. 3: STATCOM with its control systems). The power angle is deliberately kept lagging so as to charge the capacitor. Under steady-state conditions, the power angle is constant and lagging just sufficiently for the STATCOM to supply the real power losses in the power circuit and filter circuit. The job of charging and maintaining the DC link capacitor voltage is done by the DC link voltage regulating control systems.
4) Supply and consumption of reactive power: The STATCOM delivers or absorbs reactive power based on the standard shunt compensation relation Q = V_grid (V_STATCOM - V_grid) / X, where X is the coupling reactance. For positive VAR (supply of reactive power), the STATCOM voltage has to be higher than the grid voltage; increasing the modulation index of the SPWM waves serves this purpose. The reactive power flow out of the STATCOM can thus be controlled directly by controlling the modulation index of the SPWM waves. The actual control systems are configured to hold the AC bus voltage constant at the specified reference, which is itself achieved indirectly by controlling the modulation index.

Fig. 4: Reactive power control
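A small numeric illustration of this relation (the reactance value is an assumption): raising the STATCOM terminal voltage above the grid voltage makes Q positive (reactive power supplied), and lowering it makes Q negative (reactive power absorbed).

```python
def statcom_q(v_grid, v_statcom, x):
    """Per-phase reactive power exchanged by a shunt compensator
    (small power angle assumed): Q = V_grid * (V_statcom - V_grid) / X."""
    return v_grid * (v_statcom - v_grid) / x

X = 2.0  # assumed coupling reactance in ohms
for v_st in (220.0, 230.0, 240.0):
    q = statcom_q(230.0, v_st, X)
    mode = "supplies" if q > 0 else "absorbs" if q < 0 else "floats"
    print(f"V_statcom={v_st:.0f} V -> Q={q/1000:+.2f} kvar ({mode})")
```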

IV. SIMULATION
The STATCOM is simulated using a 2-level voltage source inverter in MATLAB Simulink (Fig. 6). The SPWM generator block generates the gating pulses required for the 6 IGBTs and has inputs for the modulation index and the phase angle (delta). The modulation index control is used for control of the AC bus voltage, and the delta control for control of the DC bus capacitor voltage. PID controllers are used for control of the AC bus voltage and the capacitor voltage in the DC link by setting suitable references. Controlling the AC bus voltage automatically controls the demand for reactive power. An LCL filter filters the output waveform. Terminals DC2 and DC4 are connected to the DC link capacitor, and terminals AC1, AC2 and AC3 are the output terminals after the filtering action. The model is simulated in discrete mode; the ode45 equation solver is used and the sampling time is 5e-6 s.
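A sketch of the sinusoidal PWM generation described in the control-system section: three sinusoidal phase references compared against a 2 kHz triangular carrier to produce the high-side gating signals, with complementary low-side pulses (the dead-band programming mentioned earlier is omitted for brevity):

```python
import numpy as np

F_GRID, F_CARRIER = 50.0, 2000.0   # fundamental and switching frequency
M_INDEX = 0.8                      # modulation index (sets AC voltage)
FS = 200_000                       # simulation sample rate (assumed)

t = np.arange(0, 0.04, 1.0 / FS)   # two fundamental cycles

# Triangular carrier in [-1, 1] at 2 kHz
carrier = 2.0 * np.abs(2.0 * ((t * F_CARRIER) % 1.0) - 1.0) - 1.0

# Three-phase sinusoidal references shifted by 120 degrees
gates_high = []
for k in range(3):
    ref = M_INDEX * np.sin(2 * np.pi * F_GRID * t - k * 2 * np.pi / 3)
    gates_high.append(ref > carrier)       # high-side IGBT of leg k

gates_high = np.array(gates_high)
gates_low = ~gates_high                    # complementary low-side pulses
print(gates_high.shape, gates_high.mean(axis=1))  # duty ratio ~0.5 per leg
```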

VI. RESULTS
A. Response of DC voltage regulating Control System
With a sudden change in the reactive power demand on the AC bus, the capacitor voltage on the DC link of the STATCOM tends to decrease drastically, since the losses in the power circuit are increased by the increased reactive power output of the STATCOM. To cope with the increased losses, the delta of the STATCOM is made more lagging by the regulating fuzzy logic control system. The capacitor voltage experiences a droop at the start but is observed to return to the reference value. From the control effort, it can be seen that the delta settles to a greater lagging value.

VII. CONCLUSION
A STATCOM is designed for the reactive power compensation of the microgrid and AC bus voltage regulation. The STATCOM is simulated along with the microgrid in MATLAB to observe and improve the transient response, using multiple microcontrollers interfaced with a personal computer via a USART communication interface.

Wireless Sensor Networks Using Ring Routing Approach with Single Mobile Sink
Sudharson S, PG Scholar, Department of ECE; Ms. R. Jennie Bharathi, M.E, Assistant Professor, Department of ECE
Sri Krishna College of Engineering and Technology, Kuniamuthur P.O., Coimbatore-641008, Tamil Nadu, India.
sssudharson54@gmail.com, jenniebharathi@skcet.ac.in
Abstract In wireless sensor networks (WSNs), energy efficiency is considered to be a crucial issue due
to the limited battery capacity of the sensor nodes. Considering the usually random characteristics of the
deployment and the number of nodes deployed in the environment, an intrinsic property of WSNs is that
the network should be able to operate without human intervention for an adequately long time. In existing
systems, various hierarchical approaches have been tried, each of which suffers from overhead, hotspot or flooding problems. In this paper we propose the Ring Routing approach, an energy-efficient mobile-sink routing protocol that aims to minimize this overhead while preserving the advantages of mobile sinks. The technique forms a ring of nodes from the available regular nodes: the ring is formed at a certain radius from the centre, from the nodes closest to the circle defined by that radius. The position of the sink is obtained through a ring node and is shared among all the ring nodes. The source node then forwards its data to the sink through the anchor nodes. The proposed system achieves higher performance and lifetime and lower delay when compared with the existing system.
Key words Anchor Node, Wireless Sensor Networks, Hotspot, Flooding, Mobile sinks, Energy

Efficiency.
I. INTRODUCTION
1.1 INTRODUCTION ABOUT WIRELESS SENSOR NETWORK
A Wireless sensor network is a group of specialized transducers with a communications
infrastructure intended to monitor and record conditions at diverse locations. Commonly monitored
parameters are temperature, humidity, pressure, wind direction and speed, illumination intensity,
vibration intensity, sound intensity, power-line voltage, chemical concentrations, pollutant levels and
vital body functions.
1.2 OVERVIEW
A sensor network consists of multiple detection stations called sensor nodes, each of which
is small, lightweight and portable. Every sensor node is equipped with a transducer, microcomputer,
transceiver and power source. The transducer generates electrical signals based on sensed physical
effects and phenomena. The microcomputer processes and stores the sensor output. The power for
each sensor node is derived from the electric utility or from a battery. Solutions to the energy problem based on a mobile sink have been proposed [1], [6], [7], [10].

Figure 1.1 Sensor network


The WSN is built of "nodes" from a few to several hundreds or even thousands, where each
node is connected to one or several sensors. A sensor node might vary in size from that of a shoebox
down to the size of a grain of dust, although functioning "motes" of genuine microscopic dimensions
have yet to be created. The topology of the WSNs can vary from a simple star network to an advanced
multi-hop wireless mesh network. Size and cost constraints on sensor nodes result in corresponding
constraints on resources such as energy, memory, computational speed and communications bandwidth.
1.3. WSN CHARACTERISTICS AND ARCHITECTURE
A WSN is a homogeneous or heterogeneous system consisting of hundreds or thousands of low-cost, low-power tiny sensors that monitor and gather real-time information from the deployment environment. Common functionalities of WSN nodes are broadcasting and multicasting, routing, forwarding and route maintenance. The sensor's components are a sensor unit, processing unit, storage unit, power supply unit and wireless radio transceiver, and these units communicate with each other. Characteristic features are wireless communications with weak connections, low reliability and failure-prone sensor nodes, dynamic topology and self-organization, hop-by-hop (multi-hop) communications, and broadcast-nature inter-node communications.

Figure 1.2 The typical architecture of the sensor node

The main components of a sensor node are a microcontroller, transceiver, external memory, power source and one or more sensors. The controller performs tasks, processes data and controls the
functionality of other components in the sensor node. The functionality of both transmitter and receiver
are combined into a single device known as a transceiver. The operational states are transmit, receive,
idle, and sleep. Most transceivers operating in idle mode have a power consumption almost equal to
the power consumed in receive mode. External memory used for storing application related or personal
data, and program memory used for programming the device. An important aspect in the development
of a wireless sensor node is ensuring that there is always adequate energy available to power the system.
The sensor node consumes power for sensing, communicating and data processing. More energy is
required for data communication than any other process. Sensors are hardware devices that produce a
measurable response to a change in a physical condition like temperature or pressure. Sensors measure
physical data of the parameter to be monitored. The continual analog signal produced by the sensors is
digitized by an analog-to-digital converter and sent to controllers for further processing.
1.4 NEED FOR PROPOSED SYSTEM
Ring Routing establishes a virtual ring structure that allows the fresh sink position to be easily delivered to the ring, and regular nodes to acquire the sink position from the ring with minimal overhead whenever needed. The ring structure can be easily changed: the ring nodes are able to switch roles with regular nodes through a straightforward and efficient mechanism, thus mitigating the hotspot problem. The wireless sensor network is battery operated, so energy consumption plays a major role. This work attempts to provide fast delivery through quick accessibility, and it can deal successfully with increasing data rates. A large sensor network can be managed efficiently, as multiple sink nodes reduce the energy consumption. A mobile sink can also be used for habitat monitoring, where a robot designated as the mobile sink gathers information from the sensors located in different areas of a large field [13].
2. EXISTING METHOD
In a typical wireless sensor network, the batteries of the nodes near the sink deplete quicker than those of other nodes, due to the data traffic concentrating towards the sink, leaving the sink stranded and disrupting the sensor data reporting. To mitigate this problem, mobile sinks have been proposed. Existing systems introduced different routing protocol approaches, each with its own drawbacks. The existing approaches are:
2.1 LINE-BASED DATA DISSEMINATION (LBDD)
LBDD stands for Line-Based Data Dissemination; it defines a vertical strip of nodes centered on the area of deployment. Data is first sent to the line, and the first node encountered on the line stores it. The sink node then collects the data from the line nodes; LBDD follows a straightforward mechanism. The drawback of this method is that it suffers from the flooding problem, which causes a significant increase in overall energy consumption (flooding is the mechanism by which routing information updates are distributed quickly to every node in a large network).

Fig 2.1 line approach

2.2 QUADTREE (QDD) APPROACH


QDD (Quadtree-based Data Dissemination) partitions the network into successive quadrants, which are further divided into smaller quadrants. The overhead of the quadtree approach is minimal compared with the other approaches.
The drawbacks of this approach are:
It suffers from the hotspot problem: sensor nodes near the base station are critical for the lifetime of the sensor network, because these nodes must transmit more data than nodes farther from the base station. The structure of this approach is also not flexible.

Fig 2.2 quadtree approach


2.3 RAIL ROAD APPROACH
Railroad constructs a structure called the rail, in which a strip of nodes forms a closed loop. The nodes on the rail are called rail nodes. Sensor nodes send their data to the nearest rail node. When the sink is ready to collect data from the sensor nodes, that is, when it comes near the rail, all the sensor nodes become active, which results in energy wastage.
The drawback of this approach is that it suffers from protocol overhead, resulting in delay and excess use of resources.

Fig 2.3 rail road approach

3. PROPOSED METHOD
The proposed system introduces Ring Routing, a hierarchical routing protocol for wireless sensor networks with a mobile sink. The protocol imposes three roles on sensor nodes: (i) ring nodes, (ii) regular nodes, and (iii) anchor nodes. These roles are not fixed; sensor nodes can change their function while operating in the wireless sensor network. The location of the sink is discovered through a ring node and is shared among all ring nodes. The source node then forwards its data to the sink.
3.1 RING ROUTING WITH SINGLE MOBILE SINK
Ring Routing establishes a virtual ring structure. The ring is formed at a certain radius from the center node; it is formed from nodes with higher energy, and it can be easily changed. The ring consists of a closed strip of nodes one node wide, called the ring nodes. The shape of the ring need not be perfect as long as it forms a closed loop. After the deployment of the WSN, the ring is initially constructed by the following mechanism: an initial radius for the ring is determined. The nodes near the circle defined by this radius and the center of the network, within a certain threshold, are determined to be ring-node candidates. Starting from a certain node (e.g., the node closest to the leftmost point on the ring) and proceeding by geographic forwarding in a certain direction (clockwise or counter-clockwise), the ring nodes are selected in a greedy manner until the starting node is reached and the closed loop is complete. If the starting node is not at a reachable distance, the procedure is repeated with a different selection of neighbors at each hop. If the ring cannot be formed after a certain number of trials, the radius is set to a different value and the procedure above is repeated. The advantages of this approach are given below.

Fig 3.1 ring routing approach


Protocol overhead is very low.
Ring structure is flexible.
Sink mobility pattern is random.
3.2 PROCEDURE FOR CREATING SINK POSITION ADVERTISEMENT
When the sink moves, it selects anchor nodes (ANs) from within its neighbors. The AN serves as a delegate managing the communications between the sink and the sensor nodes. As a first step, the sink chooses the closest node (e.g., the node with the greatest SNR value) as its anchor and broadcasts an AN Selection (ANS) packet. Before the sink leaves the communication range of the AN, it selects a new anchor node and, via another ANS packet, informs the old AN of the position and MAC address of the new AN. Since the old AN now knows about the new AN, it can relay any data destined for the sink to the new AN. The current AN relays data packets directly to the sink. This mechanism is referred to as the follow-up mechanism. The AN selection and follow-up mechanisms are based on progressive footprint chaining [7]. Progressive AN selection poses a challenge in terms of finding out when and how a new AN should be selected, which depends closely on continuous link quality estimation. Although under ideal radio channel conditions the distance to neighboring nodes, calculated via their geographic coordinates, may be indicative of the status of the link, this is rarely the case, since radio link quality is affected by many factors besides distance. One of the more resilient methods of link quality estimation is beaconing. In this approach, the sink periodically broadcasts beacon messages, and a link quality estimation metric (e.g., RSSI) is calculated from the reply messages originating from the neighboring nodes. Depending on the value of this metric, the sink decides whether to change the current AN and which node to select as the new AN. The period of the beacon messages should be tuned according to the mobility pattern and speed of the sink, which are assumed to be known since the sink itself usually makes the mobility decisions.
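A sketch of this beacon-driven decision (our illustration of the described mechanism; the RSSI threshold, class layout, and method names are assumptions):

```python
RSSI_SWITCH_THRESHOLD = -85.0   # dBm, assumed link-quality floor

class MobileSink:
    def __init__(self, first_an):
        self.current_an = first_an

    def on_beacon_replies(self, replies):
        """replies: {node_id: rssi_dBm} gathered after one beacon round."""
        rssi_current = replies.get(self.current_an, float("-inf"))
        if rssi_current >= RSSI_SWITCH_THRESHOLD:
            return  # current AN link is still good enough
        # Pick the neighbor with the best link as the new AN ...
        new_an = max(replies, key=replies.get)
        # ... and tell the old AN where to relay pending data (ANS packet).
        self.send_ans(old_an=self.current_an, new_an=new_an)
        self.current_an = new_an

    def send_ans(self, old_an, new_an):
        print(f"ANS: old AN {old_an} -> relay to new AN {new_an}")

sink = MobileSink(first_an=7)
sink.on_beacon_replies({7: -90.0, 12: -62.0, 3: -71.0})  # switches to node 12
```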

Fig 3.2 advertisement of sink position

4. ARCHITECTURE DIAGRAM
Starting from a certain node (e.g. the node closest to the leftmost point on the ring) by geographic
forwarding in a certain direction (clockwise/counter clockwise), the ring nodes are selected in a greedy
manner until the starting node is reached and the closed loop is complete. If the starting node cannot be
reached, the procedure is repeated with selection of different neighbors at each hop. If after a certain
number of trials the ring cannot be formed, the radius is set to a different value and the procedure
above is repeated.
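The ring construction just described can be sketched in a few lines. This is our simplified rendering under assumed node coordinates: the real protocol forwards greedily between radio neighbors and retries with different neighbors or a new radius, which we compress into an angular ordering of the candidates.

```python
import math

def ring_candidates(nodes, center, radius, tolerance):
    """Nodes whose distance to `center` is within `tolerance` of `radius`."""
    return [n for n, (x, y) in nodes.items()
            if abs(math.hypot(x - center[0], y - center[1]) - radius) <= tolerance]

def construct_ring(nodes, center, radius, tolerance):
    """Order ring candidates counter-clockwise into a closed loop (sketch)."""
    cand = ring_candidates(nodes, center, radius, tolerance)
    if len(cand) < 3:
        return None  # too few nodes: the caller retries with another radius
    ang = lambda n: math.atan2(nodes[n][1] - center[1], nodes[n][0] - center[0])
    start = min(cand, key=lambda n: nodes[n][0])        # leftmost candidate
    offset = ang(start)
    return sorted(cand, key=lambda n: (ang(n) - offset) % (2 * math.pi))

# Toy deployment: an 11x11 grid, ring of radius 50 around the center.
nodes = {i * 100 + j: (i * 10.0, j * 10.0) for i in range(11) for j in range(11)}
ring = construct_ring(nodes, center=(50.0, 50.0), radius=50.0, tolerance=5.0)
print(f"{len(ring)} ring nodes, starting at node {ring[0]}")
```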

Fig 4.1 ring architecture


5. SOFTWARE SPECIFICATION
5.1 ABOUT NS-2
NS-2 is an open-source simulation tool running on Unix-like operating systems. It is a discrete event simulator targeted at networking research and provides substantial support for simulation of routing, multicast protocols, and IP protocols such as UDP, TCP, RTP, and SRM over wired, wireless, and satellite networks. It has many advantages that make it a useful tool, such as support for multiple protocols and the capability of graphically detailing network traffic. Additionally, NS-2 supports several routing and queuing algorithms: LAN routing and broadcasts are part of the routing algorithms, and the queuing algorithms include fair queuing, deficit round robin, and FIFO.
NS-2 started as a variant of the REAL network simulator. REAL is a network simulator originally intended for studying the dynamic behavior of flow and congestion control schemes in packet-switched data networks. In 1995, ns development was supported by the Defense Advanced Research Projects Agency (DARPA) through the VINT project at LBL, Xerox PARC, UCB, and USC/ISI.
5.2 PERFORMANCE EVALUATION
The performance comparison is done for various hierarchical approaches. Lifetime comparison is done for networks of n = 30 nodes for Ring Routing, LBDD, and Railroad. Ring Routing performs slightly better in most cases, while Railroad has the worst performance in all cases. This behavior is due to the ANPI request/response mechanism employed by Ring Routing and Railroad. LBDD sends data packets directly to the line, which relays them to the sink, thus eliminating the delay cost of waiting for the response to an ANPI request.
The delay comparisons for n = 30 nodes are done for the Ring Routing, Railroad, and LBDD approaches.
The total delay for data deliveries is broken down into two components. The ANPI request/response delay component is the time until a response to the ANPI request is received by a source node. The second component is the actual data dissemination delay of the path from the source to the sink.
The two components of Ring Routing's data delivery delays are compared with LBDD's average reporting delays. The delay cost of the request/response mechanism is apparent, but the actual data dissemination delays of Ring Routing are lower than LBDD's reporting delays.

Fig 5.1 Lifetime comparisons for various methods (n = 30 nodes)



Fig 5.2 Delay comparisons for various methods (n = 30 nodes)

6. CONCLUSION
A novel mobile sink routing protocol, Ring Routing, is proposed by considering both the benefits and the drawbacks of the existing protocols in the literature. Ring Routing is a hierarchical routing protocol based on a virtual ring structure that is designed to be easily accessible and easily reconfigurable. The design requirement of the protocol is to mitigate the anticipated hotspot problem observed in hierarchical routing approaches and to minimize the data reporting delays considering the various mobility parameters of the mobile sink. The performance of Ring Routing is evaluated extensively by simulations conducted in the network simulation environment. A wide range of scenarios with varying network sizes and sink speed values is defined and used. Comparative performance evaluation results of Ring Routing against two efficient mobile sink protocols, LBDD and Railroad, which are also implemented in NS-2, are provided. The results show that Ring Routing is indeed an energy-efficient protocol that extends the network lifetime. The reporting delays are confined within reasonable limits, which shows that Ring Routing is suitable for time-sensitive applications.
In the future, Ring Routing can be modified to support multiple mobile sinks, and a clustering approach can be used for large wireless sensor networks to mitigate traffic and congestion problems.
REFERENCES
[1] M. Buettner, G. V. Yee, E. Anderson, and R. Han, "X-MAC: A short preamble MAC protocol for duty-cycled wireless sensor networks," in Proc. 4th Int. Conf. Embedded Networked Sensor Syst. (SenSys '06), New York, NY, USA: ACM, 2006, pp. 307-320.
[2] I. Chatzigiannakis, A. Kinalis, and S. Nikoletseas, "Efficient data propagation strategies in wireless sensor networks using a single mobile sink," Comput. Commun., vol. 31, no. 5, pp. 896-914, 2008.
[3] C. Tunca, S. Isik, M. Y. Donmez, and C. Ersoy, "Ring Routing: An energy-efficient routing protocol for wireless sensor networks with a mobile sink," IEEE Trans. Mobile Comput., vol. 14, no. 9, Sep. 2015.
[4] M. Di Francesco, S. K. Das, and G. Anastasi, "Data collection in wireless sensor networks with mobile elements: A survey," ACM Trans. Sens. Netw., vol. 8, no. 1, pp. 1-31, 2011.
[5] D. K. Goldenberg, J. Lin, A. S. Morse, B. E. Rosen, and Y. R. Yang, "Towards mobility as a network control primitive," in Proc. 5th ACM Int. Symp. Mobile Ad Hoc Netw. Comput. (MobiHoc '04), New York, NY, USA: ACM, 2004, pp. 163-174.
[6] A. Gopakumar and L. Jacob, "Localization in wireless sensor networks using particle swarm optimization," in Proc. IET Int. Conf. Wireless, Mobile Multimedia Netw., 2008, pp. 227-230.
[7] R. Jaichandran, A. Irudhayaraj, and J. Raja, "Effective strategies and optimal solutions for hot spot problem in wireless sensor networks (WSN)," in Proc. 10th Int. Conf. Inf. Sci. Signal Process. Appl., 2010, pp. 389-392.
[8] I. Kang and R. Poovendran, "Maximizing static network lifetime of wireless broadcast ad hoc networks," in Proc. IEEE Int. Conf. Commun., vol. 3, 2003, pp. 2256-2261.
[9] W. Liang, J. Luo, and X. Xu, "Prolonging network lifetime via a controlled mobile sink in wireless sensor networks," in Proc. IEEE Global Telecommun. Conf., 2010, pp. 1-6.
[10] K. Lin, M. Chen, S. Zeadally, and J. J. Rodrigues, "Balancing energy consumption with mobile agents in wireless sensor networks," Future Generation Comput. Syst., vol. 28, no. 2, pp. 446-456, 2012.
[11] C.-J. Lin, P.-L. Chou, and C.-F. Chou, "HCDD: Hierarchical cluster-based data dissemination in wireless sensor networks with mobile sink," in Proc. Int. Conf. Wireless Commun. Mobile Comput., 2006, pp. 1189-1194.
[12] X. Li, J. Yang, A. Nayak, and I. Stojmenovic, "Localized geographic routing to a mobile sink with guaranteed delivery in sensor networks," IEEE J. Sel. Areas Commun., vol. 30, no. 9, pp. 1719-1729, Sep. 2012.
[13] J. Luo and J.-P. Hubaux, "Joint mobility and routing for lifetime elongation in wireless sensor networks," in Proc. 24th Annu. Joint Conf. IEEE Comput. Commun. Soc. (INFOCOM), vol. 3, 2005, pp. 1735-1746.
[14] D. Moss and P. Levis, "BoX-MACs: Exploiting physical and link layer boundaries in low-power networking," Stanford Univ., Stanford, CA, USA, Tech. Rep. SING-08-00, 2008.
[15] D. Niculescu, "Positioning in ad hoc sensor networks," IEEE Netw., vol. 18, no. 4, pp. 24-29, Jul. 2004.
[16] S. Oh, Y. Yim, J. Lee, H. Park, and S.-H. Kim, "Non-geographical shortest path data dissemination for mobile sinks in wireless sensor networks," in Proc. IEEE Veh. Technol. Conf., Sep. 2011, pp. 1-5.
[17] S. Olariu and I. Stojmenovic, "Design guidelines for maximizing lifetime and avoiding energy holes in sensor networks with uniform distribution and uniform reporting," in Proc. IEEE INFOCOM, 2006, pp. 1-12.
[18] J. Rao and S. Biswas, "Network-assisted sink navigation for distributed data gathering: Stability and delay-energy trade-off," Comput. Commun., vol. 33, no. 2, pp. 160-175, 2010.
[19] Z. Wang, S. Basagni, E. Melachrinoudis, and C. Petrioli, "Exploiting sink mobility for maximizing sensor networks lifetime," in Proc. 38th Annu. Hawaii Int. Conf. Syst. Sci., 2005, p. 287.
[20] Y. Yun and Y. Xia, "Maximizing the lifetime of wireless sensor networks with mobile sink in delay-tolerant applications," IEEE Trans. Mobile Comput., vol. 9, no. 9, pp. 1308-1318, Jul. 2010.


Design of a Fuzzy-Based Multi-Stack Voltage Equalizer for Partially Shaded PV Modules
R. Senthilkumar, S. Murugesan
Dept. Of Electrical and Electronics Engineering, M.Kumarasamy College Of Engineering,
Karur,Tamil Nadu, India, senthilkumarr.eee@mkce.ac.in
Dept. Of Electrical and Electronics Engineering, M.Kumarasamy College Of Engineering,
Karur,Tamil Nadu, India, murugesans.eee@mkce.ac.in
Abstract This work presents an intelligent approach to the improvement and optimization of the control performance of a photovoltaic system with a multi-stacked voltage equalizer based on fuzzy logic control. A single-switch voltage equalizer using a multi-stacked SEPIC is proposed to settle partial shading issues and extract maximum energy from photovoltaic (PV) systems. The single-switch topology can considerably simplify the circuitry compared with conventional equalizers, which require multiple switches in proportion to the number of PV modules/substrings. The proposed voltage equalizer can be derived by stacking capacitor-inductor-diode (CLD) filters on a SEPIC converter. Local MPPs are eliminated and the extractable maximum power is increased by the equalizer. The fuzzy logic algorithms are simulated using the MATLAB fuzzy logic toolbox. An adaptive fuzzy logic control (AFLC) algorithm is employed to regulate the equalization period online according to the voltage difference between panel voltages, not only greatly shortening the balancing time but also effectively preventing over-equalization. A prototype with three PV panels is implemented; the equalization efficiency is higher than 98%, compared with the traditional analog control algorithm. This paper explains the basic results of the fuzzy logic algorithms and provides the better algorithm for maximum output voltage.
Index Terms: partial shading, photovoltaic system, SEPIC, voltage equalizer, fuzzy controller tool in MATLAB.
I. INTRODUCTION
In recent days, PV power generation has gained importance due to its numerous advantages: it is fuel-free, requires very little maintenance, and has environmental benefits. To improve energy efficiency, it is important to always operate a PV system at its maximum power point. Partial shading on a photovoltaic (PV) string comprising multiple modules/substrings triggers issues such as a significant reduction in power generation and the occurrence of multiple maximum power points (MPPs), including global and local MPPs, that encumber MPP tracking algorithms. Single-switch voltage equalizers using a multi-stacked SEPIC are proposed to settle the partial shading issues. The single-switch topology can considerably simplify the circuitry compared with conventional equalizers requiring multiple switches in proportion to the number of PV modules/substrings.
One of the biggest reliability issues of PV systems is the difference between their expected and actual power outputs. This problem is called PV mismatch. It can have many sources, and the one addressed in this paper is the partial shading of PV modules. Many authors have proposed ideas to mitigate the effects related to partial shading; solutions range from alternative interconnections among the PV modules within a plant to PV-module-embedded power electronics applications.

Fig. 1. I-V characteristics and maximum power points for various irradiances at fixed temperature.

In photovoltaic (PV) energy systems, PV modules are often connected in series for increased
string voltage; however, I-V characteristics mismatches often exist between series connected PV
modules, typically as a result of partial shading, manufacturing variability and thermal gradients.
Since all modules in a series string share the same current, the overall output power can be limited
by underperforming modules. A bypass diode is often connected in parallel with each PV module to
mitigate this mismatch and prevent PV hot spotting, but the efficiency loss is still significant when
only a central converter is used to perform MPPT on the PV string.
II. PROBLEMS OF THE PARTIAL SHADING IN PV MODULES
A PV module is partially shaded when the light cast upon some of its cells is obstructed by
some object, creating a shadow. In this paper, a shadow is considered to have a shape and opacity. The
opacity of the shadow is called shading factor (SF), varying from zero to one. An SF of zero means
that all the available irradiance shines on the PV module. On the contrary, an SF of one means that all
available irradiance is filtered by the shadow before reaching the PV module. The shape of the shadow
is determined by its length and width. The number of shaded cells or cell groups connected in parallel
determines the width of the shadow. Its length represents the number of shaded cells or cell groups
connected in series. The cells composing the PV modules studied in this
work are all considered to be connected in series. Thus, their shadows have no width. The
shaded PV cells will produce less current than the others, which will lead voltage mismatch.
1) The other cells impose their current over the shaded cells, making them work under negative
voltage, dissipating power and risking destruction.
2) The MPPT will track the current of the shaded cells, imposing it over the others and making
them produce less energy.
To protect the shaded cells from being destroyed and to minimize losses in power production,
PV modules are equipped with bypass diodes. They prevent the shaded cells from working under
reverse voltage by short-circuiting them, thus allowing the other cells to work at their normal current.
However, bypass diodes deform the I-V curves when activated, interfering with the MPPT. This makes tracking of the MPP impossible for simple algorithms.
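To make the shading-factor definition concrete, the sketch below scales the irradiance by (1 - SF) and assumes, as a common first-order approximation not stated in the paper, that cell photocurrent is proportional to irradiance:

```python
# Effective irradiance and (first-order) cell photocurrent under a shadow
# with shading factor SF in [0, 1]: SF = 0 -> full irradiance, SF = 1 -> none.

def effective_irradiance(g_available, sf):
    """Irradiance reaching the shaded cells, W/m^2."""
    if not 0.0 <= sf <= 1.0:
        raise ValueError("shading factor must be within [0, 1]")
    return g_available * (1.0 - sf)

def photocurrent(g, i_sc_stc, g_stc=1000.0):
    """Photocurrent, assumed proportional to irradiance (first order)."""
    return i_sc_stc * g / g_stc

g = effective_irradiance(1000.0, sf=0.6)          # 60% opaque shadow
print(f"{g:.0f} W/m^2 -> {photocurrent(g, i_sc_stc=8.0):.1f} A")
# -> 400 W/m^2 -> 3.2 A: the shaded cells limit the series-string current.
```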
III. SINGLE-SWITCH VOLTAGE EQUALIZER USING SEPIC

Fig. 2. Traditional SEPIC converter

Fig. 3. SEPIC based Voltage Equalizer


The proposed single-switch voltage equalizers for three PV substrings are shown in Fig. 3. Each equalizer is based on one of the buck-boost converters with stacked CLD filters. In all proposed equalizer topologies, an asymmetric square-wave voltage is produced across the switch Q (as will be shown in Figs. 6 and 7), and capacitors C1-C3 act as coupling capacitors, allowing only the ac component to flow through. Although the stacked CLD filters have different dc voltage levels, the same asymmetric square-wave voltage is applied to all inductors L1-L3 due to the ac coupling, producing a uniform output voltage for each PV substring.
IV. CIRCUIT ANALYSIS
Based on Kirchhoff's current law in Fig. 3, the average current of Li, ILi, equates to that of Di, IDi, because the average current of Ci must be zero under a steady-state condition. The equalization current supplied to PVi, Ieq-i, is
Ieq-i = ILi = IDi. (1)
Therefore, both IL1-IL3 and ID1-ID3 depend on the partial-shading conditions.

(a) Transformed circuit

(b) Simplified circuit

Fig. 4. (a) Transformed and (b) simplified circuits of the SEPIC-based voltage equalizer
As mentioned in Section III, owing to the ac coupling of C1-C3, all inductors L1-L3 are driven by the same asymmetric square-wave voltage, although the stacked CLD filters are at different dc voltage levels. Since the stacked CLD filters are ac-coupled, the series-connected substrings can be equivalently separated and grounded as shown in Fig. 4(a), in which a dc voltage source VString, equal to the sum of VPV1-VPV3, powers the equalizer. In this transformed voltage equalizer, both the CLD filters and PV1-PV3 are connected in parallel. Accordingly, the transformed circuit shown in Fig. 4(a) can be simplified to the equivalent circuit shown in Fig. 4(b), which is identical to the traditional SEPIC shown in Fig. 2. This allows the overall operation of the proposed voltage equalizer to be analyzed and expressed.
In the simplified equivalent circuit, the current of Ltot, iLtot, equates to the sum of iL1-iL3:
iLtot = iL1 + iL2 + iL3, (2)
and the total of Ieq1-Ieq3, Ieq-tot, is
Ieq-tot = Ieq1 + Ieq2 + Ieq3. (3)
Since all inductors in the CLD filters are driven by the same asymmetric square-wave voltage vL, the inductance of Ltot in the simplified circuit is obtained from
Ltot = vL / (diL1/dt + diL2/dt + diL3/dt), (4)
which yields
Ltot = Li/3, (5)
where Li is the inductance of L1-L3. Capacitors C1-C3 and Cout1-Cout3, respectively, are virtually connected in parallel, and therefore the capacitances Ctot and Cout in the simplified circuit are
Ctot = 3Ci, Cout = 3Cout-i, (6)
where Ci and Cout-i, respectively, are the capacitances of C1-C3 and Cout1-Cout3.
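As a numeric check of (5) and (6), the short sketch below plugs in the component ratings from the experimental table later in the paper (68 µH equalizer inductors, 40 µF coupling capacitors, 150 µF output capacitors); the function name is ours.

```python
# Equivalent parameters of the simplified SEPIC-based equalizer, eqs. (5)-(6).

def equivalent_params(L_i, C_i, C_out_i, n=3):
    """n identical CLD filters driven by the same square-wave voltage."""
    L_tot = L_i / n          # inductors appear in parallel: eq. (5)
    C_tot = n * C_i          # coupling capacitors in parallel: eq. (6)
    C_out = n * C_out_i      # smoothing capacitors in parallel: eq. (6)
    return L_tot, C_tot, C_out

L_tot, C_tot, C_out = equivalent_params(L_i=68e-6, C_i=40e-6, C_out_i=150e-6)
print(f"Ltot = {L_tot*1e6:.1f} uH, Ctot = {C_tot*1e6:.0f} uF, "
      f"Cout = {C_out*1e6:.0f} uF")
# -> Ltot = 22.7 uH, Ctot = 120 uF, Cout = 450 uF
```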
V. MODE OF OPERATION
The proposed voltage equalizers operate either in continuous conduction mode (CCM) or discontinuous conduction mode (DCM), similar to traditional buck-boost converters. Open-loop operation in DCM is no longer advantageous, and CCM operation is considered desirable from the perspective of the current rating of components. The operational analysis of the continuous and discontinuous conduction modes is explained using the original circuit shown in Fig. 4(a), while mathematical analyses are performed and equations developed for the simplified circuit.

(a) Ton period

(b) Toff-a period



(c) Toff-b period (for DCM only)

Fig. 5. Current flow paths in periods (a) Ton, (b) Toff-a, and (c) Toff-b (DCM only).
The current flow paths and key operation waveforms in DCM under the partially shaded condition are shown in Figs. 5 and 6, respectively.
A. DCM Operation
During the on period, Ton, all inductor currents, iLin and iLi, linearly increase and flow through the switch Q. iL1-iL3 flow through C1-C3 and Cout1-Cout2. The lower the position of Cout1-Cout3, the higher the current that tends to flow through it; the current of Cout1, iCout1, shows the largest amplitude. For example, iL2 flows only through Cout1, whereas iL3 flows through both Cout1 and Cout2. Thus, currents flowing through the upper smoothing capacitors are superimposed on the lower ones.
As Q is turned off, the operation moves to the Toff-a period. Diodes D1-D3 start conducting, and the inductor currents decline linearly. Energies stored in L1-L3 during the previous Ton period are discharged to the respective smoothing capacitors in this mode. iLin is distributed to the C1-D1 through C3-D3 branches and flows toward Cout1-Cout3. Depending on the shading conditions, some diodes corresponding to slightly shaded or unshaded substrings cease to conduct sooner than the others. For example, iD3 reaches zero sooner than iD1 and iD2, as can be seen in Fig. 6. After iD3 declines to zero, iL3 flows toward C1 and C2 through C3. The Toff-a period lasts until all diode currents decrease to zero, while all inductor currents keep decreasing linearly. Similar to the Ton period, higher currents tend to flow through the smoothing capacitors in the lower positions due to the current superposition.
The Toff-b period begins when all diodes cease conducting. Since the voltages applied to all inductors in this period are zero, all currents, including iCi, remain constant.
The duty cycle of the Toff-a period, Da, is given by
Da = D·VString / (VPVi + VD), (7)
where D is the duty cycle of Q and VD is the forward voltage drop of the diodes.

Fig. 6. Key operation waveforms in DCM



Fig. 7. Key operation waveforms in CCM

The average currents of Ltot and Lin, ILtot and ILin, are expressed as
ILtot = (VString·D·Da·Ts / 2) · ((Lin + Ltot) / (Lin·Ltot)), (8)
ILin = (VString·D^2·Ts / 2) · ((Lin + Ltot) / (Lin·Ltot)), (9)
where Ts is the switching period and Lin and Ltot are the inductances of Lin and Ltot, respectively. From (8) and (9),
ILtot / ILin = Da / D. (10)
D and Da increase with the demand for Ieq-tot (= ILtot).
If Da > 1 - D, the equalizer operates in CCM. The boundary between DCM and CCM is established based on the critical duty cycle, Dcritical:
Dcritical = (VPV1 + VD) / (VString + VPV1 + VD). (11)
B. CCM Operation
The operational waveforms in CCM are shown in Fig. 7. The current flow paths in CCM are similar to those in DCM during Ton and Toff-a, shown in Figs. 5(a) and (b), respectively. Similar to the traditional SEPIC, the voltage conversion ratio and current relationship in CCM are given by
VPVi = (D / (1 - D))·VString - VD, (12)
ILtot / ILin = (1 - D) / D. (13)
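To make the DCM/CCM boundary concrete, the sketch below evaluates (7) and (11) for illustrative panel values; the voltages and duty cycle here are assumptions, not measurements from the paper.

```python
# Operating-mode check for the SEPIC-based equalizer, eqs. (7) and (11).
# VString: series string voltage, V_pv: substring voltage, V_d: diode drop.

def toff_a_duty(D, V_string, V_pv, V_d):
    """Duty cycle of the Toff-a period in DCM, eq. (7)."""
    return D * V_string / (V_pv + V_d)

def critical_duty(V_string, V_pv, V_d):
    """DCM/CCM boundary duty cycle, eq. (11)."""
    return (V_pv + V_d) / (V_string + V_pv + V_d)

V_string, V_pv, V_d = 52.0, 17.4, 0.6   # volts (illustrative panel values)
D = 0.20
Da = toff_a_duty(D, V_string, V_pv, V_d)
Dc = critical_duty(V_string, V_pv, V_d)
mode = "CCM" if Da > 1 - D else "DCM"
print(f"Da = {Da:.2f}, Dcritical = {Dc:.2f} -> {mode}")
```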
VI. RESULT AND DISCUSSION
In order to validate the performance of the above equalizer, a simulation model was designed in MATLAB-Simulink targeting a 100-W PV panel consisting of three sub-panels, each with a typical operating voltage range of 10-24 V. The proposed equalizer is validated under three different scenarios with the parameters listed in the table below.

COMPONENTS              RATING
Equalizer capacitors    C1-C3: 40 µF
Output capacitors       150 µF
Equalizer inductors     68 µH
Lout                    86 µH
Cin                     1000 µF
Solar panels (3)        12-20 V, 17.4 V at MPP
A. PV PANEL CHARACTERISTICS
The proposed equalizer is validated under the conditions listed below:
Case 1: Normal mode, with all panels balanced at equal voltage.
Case 2: Panel 1 shaded, with panels 2 and 3 in normal condition.
Case 1:
Case 1 is the normal mode of operation, in which all the solar panels exhibit equal voltage and current characteristics. In this mode, the panel voltages are tracked to their maximum power point (MPP) conditions. This mode represents the ideal operating condition of the parallel-connected sources: in the initial operating condition, the panels share a common voltage until variations occur.


Fig 9. PV panel characteristics used for experimental equalization

Fig 10: Input and output voltage waveforms in mode 1

The input is around 19.4 V, common to all the panels, and the output boost voltage of 44 V represents boost-mode operation of the proposed converter in the ideal mode.
B. INPUT AND OUTPUT POWER RESPONSE:

Fig 11: Input and output power characteristics of the converter

Fig 12: Capacitor characteristics of the proposed equalizer

Fig. 12 shows the compensation current patterns of the proposed equalizer; the equalization current is at zero. Under ideal conditions, all configurations are balanced and exhibit equal input currents, producing zero compensation current.

Fig 13: Diode currents of the proposed equalization technique under uniform irradiation, showing an equalization current of around 8 A
Case 2:
In this mode, the equalization pattern of the proposed equalizer is studied by changing the irradiation of the solar panel to emulate partial shading. Panel 1 is partially shaded, and voltage compensation is performed by adjusting the duty ratio according to the voltage variations of the panel, as sketched below.
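The paper does not reproduce its fuzzy rule base, so the following is only a minimal sketch of the idea: a few Mamdani-style rules map the panel voltage error and its change to a duty-ratio increment, with weighted-average defuzzification. All membership breakpoints and output increments are illustrative assumptions.

```python
# Minimal sketch of a fuzzy duty-ratio adjustment (illustrative only, not the
# paper's actual rule base). Inputs: voltage error e = Vref - Vpanel and its
# change de; output: increment dD applied to the equalizer duty cycle.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_dD(e, de):
    # Assumed linguistic terms over a +/-2 V error range.
    neg = lambda x: tri(x, -2.0, -1.0, 0.0)
    zero = lambda x: tri(x, -1.0, 0.0, 1.0)
    pos = lambda x: tri(x, 0.0, 1.0, 2.0)
    # Rule table: (firing strength, consequent duty increment)
    rules = [
        (min(neg(e), neg(de)), -0.02),  # large negative error -> decrease D
        (min(zero(e), zero(de)), 0.0),  # balanced -> hold
        (min(pos(e), pos(de)), +0.02),  # large positive error -> increase D
        (min(pos(e), zero(de)), +0.01),
        (min(neg(e), zero(de)), -0.01),
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0    # weighted-average defuzzification

duty = 0.20
duty += fuzzy_dD(e=0.8, de=0.1)         # shaded panel below reference voltage
print(f"new duty ratio = {duty:.3f}")
```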

Fig 14: Input and output voltages

VII. CONCLUSION
Single-switch voltage equalizers for partially shaded PV modules have been proposed in this project. The single-switch topology simplifies the circuitry compared with conventional DPP converters and voltage equalizers, which require numerous switches proportional to the number of series-connected PV substrings/modules.
In this project, the proposed voltage equalizers are derived by stacking CLD filters on traditional converter topologies. The proposed converter structure is simpler and easier to implement, and the proposed fuzzy-based equalization strategy has better voltage regulation characteristics under partial shading conditions. The fuzzy controller provides the control action needed to keep the voltage equalizers from supplying excessive equalization currents from the unshaded panels to the shaded substrings, which would in turn needlessly increase power conversion loss.
The proposed fuzzy-based equalization technique provides better compensation than conventional equalizers; the algorithm is lightweight and can be implemented on simple hardware. The extractable maximum power with equalization was considerably increased compared with that without equalization, demonstrating the effectiveness of the proposed voltage equalizer.

Design of Single Phase Seven Level PV Inverter Using FPGA
S.Murugesan
Assistant Professor in Electrical and
Electronics Engineering
M.Kumarasamy College of Engineering, Karur
murugesans.eee@mkce.ac.in

R.Senthilkumar
Assistant Professor of Electrical and Electronics Engineering
M.Kumarasamy College of Engineering, Karur
Senthilkumarr.eee@mkce.ac.in

Abstract This paper proposes an implementation of a single-phase seven-level grid-connected inverter for photovoltaic systems using an FPGA-based pulse-width-modulated control process. Three reference signals, identical to each other with an offset equal to the amplitude of the triangular carrier, were used to generate the PWM signals. The inverter is able to produce seven output voltage levels (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc/3, -2Vdc/3, -Vdc) from the dc supply voltage. A digital PI current control algorithm was implemented in a Xilinx XC3S250E FPGA to keep the current injected into the grid sinusoidal. The proposed system was designed and verified through simulation and implemented in a prototype.
Key words: Grid Connected, Modulation index, Multi level inverter, Photo Voltaic (PV) system,
Pulse Width modulation (PWM), Total harmonic distortion (THD).

I. INTRODUCTION
The ever-increasing energy consumption, the soaring costs and exhaustible nature of fossil fuels, and a worsening environment have created a booming interest in renewable energy generation systems, one of which is photovoltaic. Such a system generates electricity by converting the sun's energy directly into electricity. Energy generated by a photovoltaic system can be delivered to power system networks through grid-connected inverters.
A single-phase grid-connected inverter is usually used for residential or low-power applications with power ranges of less than 10 kW [1]. Types of single-phase grid-connected inverters have been investigated [2]. A common topology of this inverter is the full-bridge three-level type. The three-level inverter can satisfy specifications through very high switching frequencies, but this unfortunately increases switching losses, acoustic noise, and the level of interference to other equipment. Improving the output waveform reduces its harmonic content and, hence, the size of the filter used and the level of electromagnetic interference (EMI) generated by the inverter's switching operation [3].
Multilevel inverters offer nearly sinusoidal output-voltage waveforms, output current with a better harmonic profile, less stress on electronic components owing to decreased voltages, switching losses lower than those of conventional two-level inverters, a smaller filter size, and lower EMI, all of which make them
cheaper, lighter, and more compact [3], [4].
Various topologies for multilevel inverters have been proposed over the years. Common ones are diode-clamped [5]-[7], flying capacitor or multicell, cascaded H-bridge, and modified H-bridge multilevel.
This paper recounts the development of a novel modified H-bridge single-phase multilevel inverter that has two diode-embedded bidirectional switches and a novel pulse-width-modulated (PWM) technique. The designed topology was applied to a grid-connected photovoltaic system with considerations for a maximum power point tracker (MPPT) and a current-control algorithm.

Fig.1. Proposed single phase seven level grid connected PV inverter


A multilevel power converter structure has been introduced as an alternative for high-power and medium-power situations. A multilevel converter not only assures high power ratings but also enables renewable energy sources such as photovoltaic, fuel cells, and wind to be easily interfaced to a multilevel converter system for high-power applications.

The term multilevel began with the three-level converter; subsequently, several multilevel converter topologies have been developed over the years. However, the elementary concept of a multilevel converter achieving higher power is to use a series of power semiconductor switches with several lower-voltage DC sources to perform the power conversion by synthesizing a staircase voltage waveform. Batteries, capacitors, and renewable energy voltage sources can be used as the multiple DC sources in order to achieve high voltage at the output; however, the rated voltage of the power semiconductor switches depends only upon the rating of the DC voltage sources to which they are connected.

A multilevel converter offers more advantages than a conventional three-level inverter that uses high-switching-frequency pulse width modulation (PWM).
II. PROPOSED INVERTER

The circuit diagram for PV connected single phase seven level grid connected inverter shown
in fig. 3.2. Photovoltaic arrays were connected to the inverter via a dc-dc boost converter. The power
generated by the inverter is to be delivered to the power network, so the utility to grid, rather than a load,
was used. The dc-dc boost converter was required because the PV arrays had a voltage that is lower than the
grid voltage. High dc bus voltages make more importance to ensure that power flows from the PV arrays to
the grid. A filtering inductance Lf was used to filter the current injected into the grid. Proper switching of
the inverter can produce seven

The proposed inverter's operation can be divided into seven switching states, shown in Fig. 2(a) to (g). Fig. 2(a), (d), and (g) show a conventional inverter's operational states in sequence, while Fig. 2(b), (c), (e), and (f) show additional states in the proposed inverter, synthesizing the one- and two-third levels of the dc-bus voltage. The required seven levels of output voltage are generated as follows:
1) Maximum positive output (Vdc): S1 is ON, connecting the load positive terminal to Vdc, and S4 is ON, connecting the load negative terminal to ground. All other controlled switches are OFF; the voltage applied to the load terminals is Vdc. Fig. 2(a) shows the current paths that are active at this stage.
2) Two-third positive output (2Vdc/3): The bidirectional switch S5 is ON, connecting the load positive terminal, and S4 is ON, connecting the load negative terminal to ground. All other controlled switches are OFF; the voltage applied to the load terminals is 2Vdc/3. Fig. 2(b) shows the current paths that are active at this stage.
3) One-third positive output (Vdc/3): The bidirectional switch S6 is ON, connecting the load positive terminal, and S4 is ON, connecting the load negative terminal to ground. All other controlled switches are OFF; the voltage applied to the load terminals is Vdc/3. Fig. 2(c) shows the current paths that are active at this stage.
4) Zero output: This level can be produced by two switching combinations: S3 and S4 ON, or S1 and S2 ON, with all other controlled switches OFF. Terminal ab is short-circuited, and the voltage applied to the load terminals is zero. Fig. 2(d) shows the current paths that are active at this stage.
5) One-third negative output (-Vdc/3): The bidirectional switch S5 is ON, connecting the load positive terminal, and S2 is ON, connecting the load negative terminal to Vdc. All other switches are OFF; the voltage applied to the load terminals is -Vdc/3. Fig. 2(e) shows the current paths that are active at this stage.
6) Two-third negative output (-2Vdc/3): The bidirectional switch S6 is ON, connecting the load positive terminal, and S2 is ON, connecting the load negative terminal to Vdc. All other controlled switches are OFF; the voltage applied to the load terminals is -2Vdc/3. Fig. 2(f) shows the current paths that are active at this stage.
7) Maximum negative output (-Vdc): S2 is ON, connecting the load negative terminal to Vdc, and S3 is ON, connecting the load positive terminal to ground. All other controlled switches are OFF; the voltage applied to the load terminals is -Vdc. Fig. 2(g) shows the current paths that are active at this stage.
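The seven states above can be collected into a small lookup structure. The sketch below simply encodes the ON-switch sets listed in states 1)-7) and the resulting load voltage in thirds of Vdc; the code itself is our illustration, not part of the paper's FPGA design.

```python
# Output level -> switches that are ON (all others OFF), per states 1)-7).
SWITCH_STATES = {
    +3: {"S1", "S4"},   # +Vdc
    +2: {"S5", "S4"},   # +2Vdc/3
    +1: {"S6", "S4"},   # +Vdc/3
     0: {"S3", "S4"},   # zero (alternatively {"S1", "S2"})
    -1: {"S5", "S2"},   # -Vdc/3
    -2: {"S6", "S2"},   # -2Vdc/3
    -3: {"S2", "S3"},   # -Vdc
}

def output_voltage(level, Vdc):
    """Load voltage for a given level, in thirds of Vdc."""
    return level * Vdc / 3.0

for level, on in sorted(SWITCH_STATES.items(), reverse=True):
    print(f"{output_voltage(level, 300):+7.1f} V  <- ON: {sorted(on)}")
```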

Fig. 2. Switching states (a)-(g) of the proposed seven-level inverter.

TABLE I: OUTPUT VOLTAGE ACCORDING TO THE SWITCHES' ON-OFF CONDITION

III. PWM TECHNIQUES
A novel PWM modulation technique was introduced to generate the PWM switching signals. Three reference signals (Vref1, Vref2, and Vref3) were compared with a carrier signal (Vcarrier). The reference signals had the same frequency and amplitude and were in phase, with an offset between them equal to the amplitude of the carrier signal. Each reference signal was compared with the carrier signal in turn. When Vref1 exceeded the peak amplitude of Vcarrier, Vref2 was compared with Vcarrier until it, too, exceeded the peak amplitude. From then on, Vref3 took over and was compared with Vcarrier until it reached zero. Once Vref3 reached zero, Vref2 was compared until it reached zero, and after that Vref1 was again compared with Vcarrier. Fig. 3 shows the resulting switching pattern; a sketch of this band-selection logic follows below. The switches S1, S3, S5, and S6 switch at the rate of the carrier signal frequency, whereas S2 and S4 operate at a frequency equal to the fundamental frequency.
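The reference-selection logic can be illustrated in software, although the paper realizes it in FPGA hardware. The sketch below is our approximation: the instantaneous modulating value selects which of the three stacked bands is active, and the in-band remainder is compared against a triangular carrier. All function names and numeric parameters are assumptions.

```python
import math

# One triangular carrier of amplitude Ac compared against a sinusoidal
# modulating signal split into three stacked bands (Vref1..Vref3).

def triangle(t, f_carrier, Ac):
    """Triangular carrier between 0 and Ac."""
    phase = (t * f_carrier) % 1.0
    return Ac * (2 * phase if phase < 0.5 else 2 * (1 - phase))

def seven_level_compare(t, f_mod=50.0, f_carrier=2000.0, Ac=1.0, Ma=0.8):
    """Return the output level (-3..+3, in thirds of Vdc) at time t."""
    m = 3 * Ma * Ac * math.sin(2 * math.pi * f_mod * t)  # modulating signal
    band = min(int(abs(m) / Ac), 2)      # which of the 3 bands is active
    ref = abs(m) - band * Ac             # remainder compared in-band
    level = band + (1 if ref > triangle(t, f_carrier, Ac) else 0)
    return level if m >= 0 else -level

# One full 50 Hz cycle sampled at 20 kHz: levels span -3..+3 for Ma near 1.
samples = [seven_level_compare(t / 20000.0) for t in range(400)]
print(max(samples), min(samples))
```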
For one cycle of the fundamental frequency, the proposed inverter operates through six modes. Fig. 4 shows the per-unit output-voltage signal for one cycle. The six modes are described as follows:

Fig.3. Switching pattern for the single-phase seven-level inverter.



Fig.4. Seven-level output voltage (Vab ) and switching angles.

The phase angle of the device depends on the modulation index Ma. Theoretically, for three stacked reference signals the modulation index is
Ma = Am / (3Ac),
where Ac is the peak-to-peak value of the carrier signal and Am is the peak-to-peak value of the voltage reference signal Vref.
When the modulation index is more than 0.33 and less than 0.66, the phase angle displacement is given by
IV. CONTROL SYSTEM

An FPGA is a programmable logic device; the one used here is from Xilinx, Inc., a leading VLSI vendor. It comprises millions of logic gates, groups of which are combined to form Configurable Logic Blocks (CLBs). A CLB simplifies higher-level circuit design. The netlist, that is, the gate interconnections defined using software, is stored in SRAM or ROM. This allows flexible modification of the designed circuit without altering the hardware. Concurrent operation, reduced hardware, easy and fast circuit modification, low cost for complex circuitry, and rapid prototyping make it the most favorable choice for prototyping an ASIC.

The carrier wave is compared with the multiplied modulating signal, which is derived from the look-up table. The corresponding data of the look-up table are stored in the internal ROM unit. The external multiplicands and the already-stored data determine the modulation index of the PWM. The data stored in the look-up table (ROM) consist of 60 samples of the Red phase and another 60 samples of the Blue phase; the Yellow phase is largely derived by adding the Red and Blue phases. A selector unit and multiplexer route the required signal to the appropriate channel to form the proper PWM output pattern at the output terminals.

Shifting of the signal waveform is essential in order to vary the power factor of the system. This is carried out by delaying or advancing the reset signal, which is connected to the entire module. A positive triggering edge during the positive and negative cycles is used as a reference by the reset signal. Advancing or delaying the reset signal by external command forces the current in the main circuit to lead or lag the supply voltage.
V. SIMULATION AND EXPERIMENTAL RESULTS
The proposed configuration was simulated in MATLAB Simulink before being physically implemented in a prototype. The PWM switching patterns were generated by comparing three reference signals (Vref1, Vref2, and Vref3) against a triangular carrier signal (see Fig. 6). The comparison produced PWM switching signals for switches S1-S6, and the simulated output voltage of the seven-level inverter is shown in Fig. 5.

Fig.5. Simulation Output voltage of seven level inverter


One leg of the inverter operated at a high switching rate that was equivalent to the frequency of the carrier
signal, while the other leg operated at the rate of the fundamental frequency (i.e., 50 Hz). Switches S5 and
S6 also operated at the rate of the carrier signal.
B. EXPERIMENTAL RESULTS

The hardware output voltage of the single-phase seven-level inverter is shown in Fig. 6, and a photograph of the hardware is shown in Fig. 7.

Fig.6. Output voltage for single phase seven level inverter

Fig.7. Photograph of the proposed system hardware.

C. THD RESULTS FOR THE MULTILEVEL INVERTER

Fig.8. THD results for 3-level inverter



Fig.9. THD results for Proposed system

VI.CONCLUSION
Using a Xilinx FPGA to generate the PWM provides the flexibility to modify the designed circuit without altering the hardware. Concurrent operation, reduced hardware, easy and fast circuit modification, low cost for complex circuitry, and rapid prototyping make it the most favorable choice for PWM generation. From the analysis of the simulation and experimental results, it is confirmed that the harmonic distortion of the output current waveform fed to the grid by the inverter is within the stipulated limits laid down by the utility companies; the THD is lower than that of five- and three-level inverters. All of the above advantages make the inverter configuration highly suitable for grid-connected photovoltaic applications (5 kW).
REFERENCES
1. M. Calais and V. G. Agelidis, "Multilevel converters for single-phase grid connected photovoltaic systems: An overview," in Proc. IEEE Int. Symp. Ind. Electron., 1998, vol. 1, pp. 224-229.
2. S. B. Kjaer, J. K. Pedersen, and F. Blaabjerg, "A review of single-phase grid connected inverters for photovoltaic modules," IEEE Trans. Ind. Appl., vol. 41, no. 5, pp. 1292-1306, Sep./Oct. 2005.
3. P. K. Hinga, T. Ohnishi, and T. Suzuki, "A new PWM inverter for photovoltaic power generation system," in Conf. Rec. IEEE Power Electron. Spec. Conf., 1994, pp. 391-395.
4. Y. Cheng, C. Qian, M. L. Crow, S. Pekarek, and S. Atcitty, "A comparison of diode-clamped and cascaded multilevel converters for a STATCOM with energy storage," IEEE Trans. Ind. Electron., vol. 53, no. 5, pp. 1512-1521, Oct. 2006.
5. M. Saeedifard, R. Iravani, and J. Pou, "A space vector modulation strategy for a back-to-back five-level HVDC converter system," IEEE Trans. Ind. Electron., vol. 56, no. 2, pp. 452-466, Feb. 2009.
6. S. Alepuz, S. Busquets-Monge, J. Bordonau, J. A. M. Velasco, C. A. Silva, J. Pontt, and J. Rodríguez, "Control strategies based on symmetrical components for grid-connected converters under voltage dips," IEEE Trans. Ind. Electron., vol. 56, no. 6, pp. 2162-2173, Jun. 2009.
7. J. Rodríguez, J. S. Lai, and F. Z. Peng, "Multilevel inverters: A survey of topologies, controls, and applications," IEEE Trans. Ind. Electron., vol. 49, no. 4, pp. 724-738, Aug. 2002.


Home Automation Using IoT


Govardhanan D, Sasikumar K S, Vishalini R
and Poojaa Sudhakaran
Department of Electronics and Communication Engineering
PSG College of Technology
Abstract Internet of Things (IoT) is a concept that envisions all objects around us as part of the internet. As part of IoT, serious concerns are raised over the connectivity of devices. Once devices are connected together, they enable smarter processes that support essential home energy management and automation. Thus, the networking of things has gained attention due to its integration into everyday life. In recent years, energy conservation and energy auditing have proved indispensable. Our framework provides a means for energy management at home through IoT. The proposed smart home features monitoring of the status of appliances and accessing them using the internet. This work also provides a means of automation through a smart lamp system and scheduling of appliances on a daily or weekly basis, which helps smooth the load curve. It is developed in such a way that the user can access the appliances from a remote place using the internet through a smart phone and cloud services. This paper shows a different perspective of the internet, giving an insight into the field of automation.
Key words: Internet of Things, Smart Home, Home energy management.

I. INTRODUCTION
Today, over two billion people around the world use the Internet for browsing the Web, sending and receiving emails, accessing multimedia content and services, playing games, social networking applications, and many other tasks. From the saying "A world where things can automatically communicate to computers and each other, providing services to the benefit of humankind," it is predictable that, within the next decade, the Internet will exist as a seamless tool for classic networks and networked objects. The Internet of Things (IoT) is a network of networks where a massive number of objects/things/sensors/devices are connected through communication and information infrastructure to provide value-added services [10]. The Internet of Things allows people and things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service [10], [11]. The innovation of IoT will be enabled by the embedding of electronics into everyday physical objects, making them smart and letting them integrate and operate within the physical infrastructure.
Over the last thirty years, energy demand has shown a huge increase in the residential as well as industrial sectors; electricity demand in the EU-27 increased by 70% between 1980 and 2008 [1]. Therefore, creating intelligent home energy management systems that are able to save energy while meeting user preferences has become an interesting research topic. Due to their relatively low cost, wireless nature, flexibility, and easy deployment, wireless sensor networks represent a promising technology for providing such systems. The ability to control usage is called Demand Side Management (DSM). Thus, the system furnishes the need for a heterogeneous information fusion technology of IoT in the smart home [8]. DSM plays a major role in reducing the electricity usage cost by altering the system load shape [12].
In the study of dynamic DSM, different techniques and algorithms have been proposed, where the basic idea has been to reduce the energy bill according to the time-of-use (TOU) tariff incentives offered by the utility [13]. In the study of appliance scheduling, the smart home aims to offer the appropriate services to the user based on the resident's lifestyle [9].
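As a toy illustration of TOU-based scheduling (the rates, time slots, and appliance parameters below are invented for the example, not taken from the cited studies):

```python
# Toy TOU tariff: a higher rate during assumed morning/evening peak hours.
TOU_RATE = {h: (2.0 if 6 <= h < 10 or 18 <= h < 22 else 1.0)
            for h in range(24)}  # currency units per kWh (assumed)

def run_cost(start_hour, duration_h, power_kw):
    """Energy cost of a run, accumulated hour by hour at the TOU rate."""
    return sum(TOU_RATE[(start_hour + h) % 24] * power_kw
               for h in range(duration_h))

def cheapest_start(duration_h, power_kw):
    """Shift the appliance to the start hour with the lowest bill."""
    return min(range(24), key=lambda h: run_cost(h, duration_h, power_kw))

print(cheapest_start(duration_h=3, power_kw=1.5))  # picks an off-peak hour
```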
IoT builds on three pillars related to the abilities of smart objects: (i) to be identifiable, (ii) to communicate, and (iii) to interact, either among themselves, building networks of interconnected objects, or with end-users. The three characteristics of IoT are: (i) anything communicates: smart things can wirelessly communicate among themselves and form networks of interconnected objects; (ii) anything identifiable: smart things are identified with a digital name, and relationships among things can be specified in the digital domain whenever physical interconnection cannot be established; and (iii) anything interacts: smart things can interact with the local environment through sensing and actuation capabilities whenever present [11].

This paper is organised into six sections. Section II describes the objective of the proposed smart home system, explained in terms of three sectors, namely automation, monitoring, and control. Section III describes the working of the smart home system and its efficient performance in saving energy compared with existing appliances. Section IV discusses the cloud storage used in this work. Conclusions and acknowledgements are in Sections V and VI, respectively.
II. THE PROPOSED SMART HOME SYSTEM
In our day-to-day life, situations arise where it is difficult to control the home appliances: when no one is available at home, when the user is far away from home, or when the user leaves home forgetting to switch off some appliances, which leads to unnecessary wastage of power and may even lead to accidents. Sometimes, one may also want to monitor the status of the household appliances while staying away from home. In all the above cases, the presence of the user is mandatory to monitor and control the appliances, which is not possible all the time.
This shortcoming can be eliminated by connecting the home appliances to the user via some medium. Connectivity can be established with the help of GSM, the internet, Bluetooth, or Zigbee. It is reliable to connect devices via the internet so that the remote user can monitor and control home appliances from anywhere and at any time around the world. This increases the comfort of the user by connecting the user with the appliances at home, to monitor the status of the home appliances through a mobile app, to control the appliances from any corner of the world, to understand the power consumption of each appliance, and to estimate the tariff well in advance.
A. AUTOMATION SYSTEM
The appliances are classified according to the nature of their operation and control. Appliances like geysers and toasters have to be switched ON/OFF at particular time intervals; for efficient utilization, each device has to be switched ON and OFF appropriately. An RTC-based system can perform this control precisely, which extends appliance life and saves power (see the sketch below). When a match occurs between the loaded time and the real time, the controller turns ON the appliance; similarly, when the duration is over, the controller turns OFF the appliance. Thus the appliances are controlled as per the time schedule defined by the user.
Some appliances need to work only during human presence. In this proposed work, human
movements are detected using PIR sensor and the necessary automation is done. The desired light
intensity in the room can be established using Smart Lamp. The lamp must switch ON and OFF
only during human presence which is implemented using motion sensor based lighting system. This
lighting systems performance is increased by switching them only when there is no sufficient light
and the light will not switch ON during daytime. This is done by including LDR in the system. The
desired ambient light intensity is set by varying the brightness of the lamp using PWM techniques
which helps in energy saving. By proper positioning of LDR, the light intensity of the room can be
maintained as shown in Fig 1.
Fig. 1 Block diagram describing Smart Lamp control using LDR
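As an illustration of the automation logic, the following minimal sketch combines the PIR gate, the LDR reading and PWM dimming. All function names, stub values and thresholds here are hypothetical stand-ins, not the implementation used in this work.

# Minimal sketch of the Smart Lamp loop (illustrative only). The three
# hardware stubs stand in for GPIO/ADC/PWM access on a real controller.
def pir_motion_detected() -> bool:
    return True          # stub: replace with a PIR GPIO read

def ldr_lux() -> float:
    return 120.0         # stub: replace with the LDR ADC reading

def set_lamp_pwm(duty: float) -> None:
    print(f"PWM duty = {duty:.2f}")   # stub: replace with a PWM output

DESIRED_LUX = 300.0      # target ambient level (assumed user setting)
DAYLIGHT_LUX = 450.0     # above this, daylight suffices and the lamp stays off

def control_step() -> None:
    """One iteration: PIR gates the lamp; LDR feedback sets PWM brightness."""
    if not pir_motion_detected():
        set_lamp_pwm(0.0)            # no occupant: lamp off
        return
    lux = ldr_lux()
    if lux >= DAYLIGHT_LUX:
        set_lamp_pwm(0.0)            # enough natural light: save energy
    else:
        # proportional dimming: the darker the room, the higher the duty cycle
        set_lamp_pwm(min(1.0, max(0.0, (DESIRED_LUX - lux) / DESIRED_LUX)))

control_step()                       # with the stub values, prints PWM duty = 0.60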
B. MONITORING SYSTEM
After leaving home, even a few metres away, the user may have doubts regarding the status of the
appliances at home. In such cases, returning home and checking their status is not
difficult. When the distance extends to a few miles, however, returning becomes tedious. In case of
emergency, the user has to return, which disrupts the user's routine. Alternatively, the appliance may be
left as it is, which may result in severe damage to the device, as in the case of a motor or geyser. These
drawbacks are overcome by remote monitoring of the home appliances.
Fig. 2 Flow chart on status monitoring
The PIR sensors installed at specific points in the home sense the location of the user, following
the location-awareness system of [9], which makes use of a floor-mapping algorithm for a single user.
The intended use of the PIR sensor is to detect human presence. The Light Dependent Resistor (LDR)
placed at suitable locations determines the luminance intensity at its location and sends the value to the
control system for further interpretation. The LDR thus enhances the feature cited in [3] by using
sensors to detect environmental factors and adjust them to the values desired by the user. The smart
plug corresponds to the work presented in [14]; these smart plugs support scheduling of devices as
well as controlling the power consumption of lamps such as LED lamps.
The device status has to be monitored periodically, and when the user sends a request, the status
of the appliance is presented. The status monitoring of the appliances is illustrated by the flow chart
shown in Fig. 2. When the device is in standby mode, as soon as the PIR sensor detects human
presence the controller calculates the room luminance value and compares it with the prefixed value.
Based on the result, the lamp is either switched ON with the given brightness or switched OFF, and
the cycle continues.
C. CONTROL SYSTEM
Remote control of the appliances can also be performed: when the user sends a command, the
appliances are switched ON or OFF accordingly. Sometimes a discrepancy arises in which no device
is connected, or the connected device is faulty; in such a case an open circuit prevails. With the help of
a current-sensing mechanism, fault detection can be performed. This is implemented by sensing the
current flowing to the appliance with a current sensor, as shown in Fig. 3. This helps in further saving
energy as well as protecting the appliances from further damage. A hedged sketch of this check is
given below.
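The check might look like the following; the threshold value and the read_current() stub are illustrative assumptions, not values from this work.

# Open-circuit / fault check from a current-sensor reading (illustrative).
MIN_EXPECTED_AMPS = 0.05     # assumed minimum draw of a healthy, ON appliance

def read_current() -> float:
    return 0.0               # stub: replace with the current-sensor ADC reading

def check_appliance(commanded_on: bool) -> str:
    amps = read_current()
    if commanded_on and amps < MIN_EXPECTED_AMPS:
        return "FAULT: open circuit or no device connected"   # notify the user
    if not commanded_on and amps > MIN_EXPECTED_AMPS:
        return "FAULT: current flowing while switched OFF"
    return "OK"

print(check_appliance(commanded_on=True))   # -> FAULT with the stub reading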
Fig. 3 Block diagram depicting the working of current sensor

Fig. 4 Simulation showing the relation between power consumed and LUX
As the ambient light intensity increases, the brightness of the lights is adjusted downward and hence
the power consumption is reduced. The simulation results giving the relation between the light intensity
and the power consumption of the LED lights are depicted in Fig. 4. A user-friendly environment is
created at home with the help of an LCD and keypad that let the user enter the starting time and the
duration for which the device has to remain switched ON. The same can be provided through the
mobile application at the user end. Hence an LDR-based adaptive lighting system is used to save
power by varying the PWM of the LED lamps, rather than using fluorescent lamps with fixed power
consumption.
Table 1: Power and LUX comparison between LED using LDR and Fluorescent Lamp
Fig.5 Results showing the comparison between LED and Fluorescent lamps

The results tabulated in Table 1 were obtained by placing the LEDs at a distance of about 2 feet
from the fluorescent lamp, with the two facing each other. About 60 LEDs were used in the setup.
The power consumed by the LEDs at an illuminance of 2600 lux is approximately 7 watts, whereas the
power consumed by a fluorescent lamp is 40 watts, a saving of 33 watts. The results illustrating the
difference in power consumption between the LED lamp and the fluorescent lamp, observed with the
help of the test setup, are shown in Fig. 5.

Fig. 6 Block diagram depicting the proposed smart home system

When the light is used for a period of 8 hours/day, a total of 264 Wh/day is saved. On
average, 7.92 kWh will be saved per month, which costs about Rs. 45/month; over a year this sums
to about Rs. 540. Hence, through the implementation of the LDR along with LEDs, energy
consumption can be handled efficiently. The arithmetic is checked in the short sketch below.
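The savings figures above can be verified in a few lines; the tariff is not stated in the paper, so it is inferred here from the quoted Rs. 45/month figure.

# Worked check of the quoted savings figures.
led_w, fluorescent_w = 7, 40
saved_w = fluorescent_w - led_w              # 33 W saved while the lamp is on
daily_wh = saved_w * 8                       # 8 h/day -> 264 Wh/day
monthly_kwh = daily_wh * 30 / 1000           # 7.92 kWh/month
tariff = 45 / monthly_kwh                    # implied tariff, about Rs. 5.7/kWh
print(daily_wh, monthly_kwh, round(tariff * monthly_kwh * 12))  # 264 7.92 540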

III. IOT IN SMART HOME


As already stated, using a motion sensor, a brightness sensor and LEDs, a smart lamp
setup has been established, along with smart plug installations for the appliances in the home. These
installations are connected to the main controller unit, whose accessibility is further enhanced by
incorporating IoT features into the home control system. An internet connection is established
for the home control system through the Wi-Fi module [15]. The IP address of the internet
connection is stored in the cloud storage application, and the user's IP address is stored as well.
When the system intends to upload data to the cloud, it sends the data through the Wi-Fi module
using HTTP request commands. In a similar way, the user sends commands to the cloud, which are
then re-routed to the home system through the home Wi-Fi router and the Wi-Fi module. Depending
on the message received, the controller responds to the query or accesses the intended appliances.
The response is also sent back to the user through the Wi-Fi module in the
opposite direction, as shown in Fig. 6. For instance, when the user sends a command to
turn ON a particular device and the device has already been turned ON, the controller sends a
message to the user that the device is already ON. Thus a redundancy check is performed through
feedback, and detected faults are also notified by the home system instantaneously. The user can also
check the power consumption of the devices: when the user sends a command enquiring about the
power consumption and cost, the results are displayed to the user. The working of the smart home
system is depicted in Fig. 6.
Security becomes a big issue in internet-based control, because whoever knows the IP address
of the user's internet connection can obtain access to the smart home system. This may lead to misuse
of the appliances in the absence of the user, which can be avoided by password protection. The user can
set a password, thereby preventing unauthorized usage. If anyone enters a wrong password, the user
gets notified about the illegal entry.
IV. THING SPEAK APPLICATION
Thing Speak is an open-source Internet of Things application and API used to store and retrieve data
from devices using the HTTP protocol over the internet. Thing Speak thus enables sensor
data logging, location tracking and appliance access for smart homes. Thing Speak cloud
storage provides a secure means to store and retrieve data through read and write API keys, and is
also password protected. It acts as cloud storage for the data logged by the user for accessing the
appliances at home, and also stores the sensor readings and the statuses of the appliances through the
internet. The sensor readings are logged into the cloud intermittently, while when the user sends a
command to access an appliance or to monitor its status, the command is read from the cloud storage
by the home control system and the appropriate response is generated and uploaded to the cloud for
the user's reference. The operation status of the air conditioner logged into the cloud storage channel
is presented in Fig. 7, along with the luminance intensity of the living room, read from the LDR and
calibrated to give the output in lux, as depicted in Fig. 8. A sketch against Thing Speak's HTTP
endpoints follows the figures.
Fig. 7 Status of the air conditioner logged into the cloud channel

Fig. 8 Luminance intensity of living room logged into the cloud
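The logging and command flow described above can be sketched against ThingSpeak's public HTTP endpoints (/update and /channels/&lt;id&gt;/feeds/last.json). The channel ID and API keys below are placeholders, and the field layout (field1 = lux, field2 = AC status) is this sketch's assumption.

import requests

WRITE_KEY = "XXXXXXXXXXXXXXXX"   # placeholder write API key
READ_KEY = "YYYYYYYYYYYYYYYY"    # placeholder read API key
CHANNEL_ID = "123456"            # placeholder channel ID

def log_readings(lux: float, ac_status: int) -> str:
    """Upload one set of readings via the ThingSpeak /update endpoint."""
    r = requests.get("https://api.thingspeak.com/update",
                     params={"api_key": WRITE_KEY,
                             "field1": lux, "field2": ac_status},
                     timeout=10)
    return r.text            # the new entry id, or "0" if rejected

def last_entry() -> dict:
    """Read back the most recent feed entry for the channel."""
    url = f"https://api.thingspeak.com/channels/{CHANNEL_ID}/feeds/last.json"
    return requests.get(url, params={"api_key": READ_KEY}, timeout=10).json()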
V. CONCLUSIONS
In this proposed work, we have established a smart system for controlling home appliances.
Smart devices have been connected through the internet, thereby increasing the reliability of the
product. We also examined the contribution of each solution towards improving the efficiency and
effectiveness of consumers' lifestyles as well as of society in general. Efficiency in managing power
has been improved by turning OFF the appliances when they are not needed, and accidents are avoided
in case of any malfunctioning device. A prototype of the proposed smart home control system has
also been implemented. Practical experiments were conducted to demonstrate that the developed
prototype works well and that the proposed smart home control system provides outstanding
performance of the appliances and considerable energy saving. Automation of the appliances with
the help of RTCs, LDRs and PIR sensors is also done. The proposed work can be improved further in
the future by developing an application that includes a speech recognition system, removing the need
for physical contact between the user and the smartphone. Future work will also investigate the
transfer of multimedia messages.
VI. ACKNOWLEDGEMENT
We take this opportunity to acknowledge our Management, Principal and HOD for providing us the
environment and support to complete this work. We would like to extend our gratitude for the motivation
and valuable suggestions provided by our guide Dr. D. Sivaraj, Assistant Professor, Department of
Electronics and Communication Engineering, PSG College of Technology, and we are forever obliged
to our parents for their encouragement.
VII. REFERENCES
[1] Eurelectric, "Power Statistics," 2010 edition, Tech. Rep., 2010.
[2] Lin Liu, Yang Liu, Lizhe Wang, Albert Zomaya, and Shiyan Hu, "Economical and Balanced Energy Usage in the Smart Home Infrastructure: A Tutorial and New Results," IEEE, 2015.
[3] Dae-Man Han and Jae-Hyun Lim, "Smart Home Energy Management System using IEEE 802.15.4 and ZigBee," 2010.
[4] Mingfu Li and Hung-Ju Lin, "Design and Implementation of Smart Home Control Systems Based on Wireless Sensor Networks and Power Line Communications," IEEE Transactions on Industrial Electronics, vol. 62, no. 7, July 2015.
[5] Charith Perera, Chi Harold Liu and Srimal Jayawardena, "The Emerging Internet of Things Marketplace From an Industrial Perspective: A Survey," IEEE, 2015.
[6] Rahul Godha, Sneh Prateek, Nikhita Kataria, "Home Automation: Access Control for IoT Devices," International Journal of Scientific and Research Publications, vol. 4, issue 10, October 2014.
[7] "Implementation of Internet of Things for Home Automation," International Journal of Emerging Engineering Research and Technology, vol. 3, issue 2, February 2015.
[8] Baoan Li and Jianjun Yu, "Research and application on the smart home based on component technologies and Internet of Things."
[9] Suk Lee, Kyoung Nam Ha, Kyung Chang Lee, "A Pyroelectric Infrared Sensor-based Indoor Location-Aware System for the Smart Home."
[10] Luigi Atzori, Antonio Iera, "The Internet of Things: A Survey," June 2010.
[11] Daniel Miorandi, Sabrina Sicari, "Internet of Things: Vision, Applications and Research Challenges," 2012.
[12] F. A. Qayyum, M. Naeem, A. S. Khwaja, A. Anpalagan, L. Guan, and B. Venkatesh, "Appliance Scheduling Optimization in Smart Home Networks," October 2015.
[13] A. Agnetis, G. de Pascale, P. Detti, and A. Vicino, "Load scheduling for household energy consumption optimization," IEEE Trans. Smart Grid, vol. 4, no. 4, pp. 2364-2373, Dec. 2013.
[14] Mario Collotta and Giovanni Pau, "A Novel Energy Management Approach for Smart Homes Using Bluetooth Low Energy," IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, December 2015.
[15] ESP8266 specifications: https://nurdspace.nl/images/e/e0/ESP8266_Specifications_English.pdf

A Traffic Sign Motion Restoration Model Based on Border Deformation Detection

T. Ramya, G. Singaravel, V. Gomathi
Assistant Professor, Department of CSE, K.S.R College of Engineering, Tiruchengode, email: t.ramya@ksrce.ac.in
Professor and Head, Department of IT, K.S.R College of Engineering, Tiruchengode, email: singaeavelg@gmail.com
Assistant Professor, Department of IT, Sri Ramanathan Engineering College, Thirupur, email: ergomathi@gmail.com
Abstract: Technology development in the field of motor vehicle Driver Assistance Systems (DAS)
is progressing, and the safety problems associated with automatic driving have become a hot
issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce
traffic rules. However, traffic sign image degradation in computer vision is unavoidable during the
vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred
images in DAS, a new image restoration algorithm based on border deformation detection in the spatial
domain is proposed in this paper. The border of a traffic sign is extracted using color information, and
then the width of the border is measured in all directions. From the measured widths and the
corresponding directions, both the motion direction and the scale of the blur can be determined, and this
information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio
is presented to evaluate the image restoration quality. Compared to traditional restoration approaches
based on the blind deconvolution method and the Lucy-Richardson method, our method can greatly
restore motion-blurred images and improve the correct recognition rate.

I. INTRODUCTION
With the development of urbanization and the popularization of the automobile, problems
associated with road traffic congestion, frequent traffic accidents, and the low efficiency of road
transport have become increasingly serious [1]. In order to alleviate these problems, the Driver
Assistance System (DAS) was designed to help or even substitute for human drivers to enhance the
safety of driving [2,3]. This system films the road information in its natural scene using a camera
mounted inside the vehicle, and this information is subsequently processed in real time by
a relevant circuit system. The system then provides information, such as warnings and tips, to the
driver. This can greatly reduce driving risks and enhance road traffic and the driver's personal safety.
Strictly complying with the traffic rules can improve vehicle safety performance, and it can also
effectively reduce traffic accidents. A variety of important traffic signs placed on the road by the traffic
department communicate and support road traffic rules for the driver [4]. Traffic signs are designed
to help drivers with piloting tasks while providing information, such as the maximum or minimum
speed allowed, the shape of the road, and any forbidden maneuvers. Therefore, recognition of traffic
signs is one of the important tasks of the DAS in Intelligent Transportation.
The fast detection and accurate identification of traffic signs hold great significance for automatic
vehicles. The ability to project a sharp image is one of the preconditions to correctly recognizing a
traffic sign. However, the relative motion between the camera and the natural scene during the exposure
time usually causes motion-blurred images, which will severely affect the image's visual quality. It is
a challenge to quickly and accurately identify traffic signs in motion-blurred images. There are two
main approaches to this problem. The first is to improve the performance of the camera, avoiding
motion blur from the hardware side; however, technological bottlenecks limit the camera's
performance. The second is to enhance and restore the motion-blurred images by means of a
restoration algorithm, and there is still much that can be done in this field to enhance image quality.
Nowadays, the recognition of traffic signs has also made great progress. The Hough transformation
and a multi-frame validation method were used by Gonzalez and Garrido [6]. A system based on
deformable models was studied, and it was immune to lighting changes, occlusions and other
forms of image variance and noise [2]. Support vector machines (SVMs) were utilized to detect and
recognize traffic signs by Bascon et al. [7]. In addition, Khan et al. [8] proposed a method based on
image segmentation and joint transform correlation, which also integrated shape analysis. Barnes et al.
[9] also presented the radial symmetry detector to detect speed signs in real time.
All of the existing algorithms used to restore motion-blurred images operate in the
frequency domain. Moreover, traffic sign recognition algorithms tend to focus on traffic sign detection
and recognition; they do little to deal with traffic signs in blurred images. In order to solve this
problem, a new algorithm based on traffic sign border extraction is proposed as a method that can be
used to restore motion-blurred images in the spatial domain. The border of the traffic sign is extracted
using the image's color information, and then the width of the border is measured in all directions.
From the measured widths and the corresponding directions, the motion direction and scale of the
blur can be determined and then used to restore the motion-blurred image.
This method has a lower computational cost and better performance. Meanwhile, the restored image
ensures accurate and reliable detection of the traffic sign.
The remainder of this paper is organized as follows: Section II presents the generation of
the motion-blurred image and the restoration principle. In Section III, the parameter extraction model,
which is based on border deformation detection, is given. Border parameter extraction algorithms are
discussed in detail in Section IV. Finally, a conclusion is presented in Section V.
Methods
All of the experimental images in this paper are from the German Traffic Sign Recognition
Benchmark (GTSRB) [10]. Physical traffic sign instances are unique within the dataset. There are
more than 40 classes and more than 50,000 images in total. In addition, the GTSRB dataset is free to
use.
2.1 Restoration principle of motion-blurred images
A motion-blurred image is generated by the relative motion between the target and the camera
during the image's exposure time. The study of motion blur produced by uniform motion is of general
significance, because variable-speed and linear motion blur can be approximately treated as uniform
motion at the shooting moment. Following motion-blur degradation and additive-noise superposition,
the output result is a blurred image [11]. The degradation process is modeled by

g(x,y) = f(x,y) * h(x,y) + n(x,y)   (1)

where g(x,y) is the blurred image, f(x,y) is the undegraded image, n(x,y) is system noise, h(x,y)
is the point spread function (PSF), and * denotes convolution in the spatial domain. Since convolution
in the spatial domain equals multiplication in the frequency domain, the frequency-domain
representation of Eq (1) is G(u,v) = F(u,v)H(u,v) + N(u,v).
Motion-blur restoration involves reversing the image degradation process, adopting the
inverse process to obtain clear images. Motion blur is one case featured in the model of
Lin et al. [11]. The model assumes that the target or camera moves at a certain speed and direction,
covering a distance s during the exposure time T. Ignoring the effect of noise, the blurred image can
be written as

g(x,y) = (1/T) ∫[0,T] f(x − x(t), y − y(t)) dt

where x(t) and y(t) are the time-varying components of motion in the x-direction and y-direction.
The Fourier transform of g(x,y) is

G(u,v) = ∫∫ g(x,y) e^(−j2π(ux+vy)) dx dy = F(u,v) · (1/T) ∫[0,T] e^(−j2π(u·x(t)+v·y(t))) dt   (2)

The spectrogram of the blurred image is the modulus square of Eq (2), which leads to an undegraded
image in which the phase shift is absorbed, since its value multiplied by its complex conjugate
is equal to unity. By defining

H(u,v) = (1/T) ∫[0,T] e^(−j2π(u·x(t)+v·y(t))) dt,

Eq (2) can be expressed in the form G(u,v) = F(u,v)H(u,v), so it is possible to restore the motion-blurred
image if H(u,v) is known. The inverse Fourier transform of H is h(x,y) = 1/(vT) = 1/s. This shows
that the unknown variable is s, which comprises the direction and the scale. So, theoretically, if the
two parameters are known, it is possible to obtain the spectrogram and the restored image of the
blurred image. Therefore, a new method based on the border in the spatial domain is proposed to
extract the parameters quickly and efficiently.
From the perspective of image restoration, minimum mean square error filtering (Wiener
filtering) is adopted, which can suppress the effect of white noise. The filter is given by

F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / (|H(u,v)|² + K) ] · G(u,v)   (3)

where K is a modifying factor representing the power ratio of the noise to the signal. Because
this value cannot be obtained accurately, a fixed value is used in practice. The inverse Fourier
transform of F̂(u,v) is the restored image.
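As a concrete illustration of Eqs (1)-(3), the sketch below builds a linear-motion PSF from a given direction and scale and applies the Wiener filter with a fixed K. It is a minimal numpy rendering of the principle under these assumptions, not the authors' implementation.

import numpy as np

def motion_psf(shape, length, angle_deg):
    """Linear-motion PSF: a normalized line of the given length and direction."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    t = np.deg2rad(angle_deg)
    for r in np.linspace(0.0, length - 1.0, int(length) * 4):
        y, x = int(round(cy + r * np.sin(t))), int(round(cx + r * np.cos(t)))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_restore(blurred, psf, K=0.01):
    """Eq (3): F = conj(H)/(|H|^2 + K) * G, with a fixed noise-to-signal K."""
    G = np.fft.fft2(blurred)
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# usage: restored = wiener_restore(gray, motion_psf(gray.shape, 15, 30))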
2.2 Parameter extraction based on border in the spatial domain
From the above description of the restoration algorithm for motion-blurred images, it
is clear that extracting the movement direction and scale is a key step in the process, and that
detecting the two parameters quickly and accurately is the key problem. Some algorithms already
exist that can extract these parameters, and most of them work in the frequency domain by measuring
the zero pattern of the blurred image. However, these methods lack robustness and are easily affected
by noise. Therefore, it is necessary to find a new method to calculate the two parameters.
2.2.1 Border deformation description of motion-blurred traffic signs
Traffic signs usually have a colored circular border with a black number, or some other pattern,
within it, as shown in Fig 2(a) and 2(b). By analyzing traffic signs in motion-blurred images, we
found that no matter what the direction of motion is, the width of the border deforms regularly.
Fig 2(c) and 2(d) show the motion-blurred versions of Fig 2(a) and 2(b). Comparing the blurred
image with the sharp image, we can conclude that the border becomes wider on the front and back
sides along the direction of motion, and that the blurred image has lower color saturation. On the two
sides perpendicular to the motion, the boundary change is small compared with the sharp image.

Fig 2 Traffic sign images.

In order to explain the changing laws of the border clearly, the blurring process is simulated.
Fig 3(a) shows the border of a traffic sign. We then produced motion blur in the directions of 0° and
30°, extended over 15 pixels, obtaining the border's motion-blurred images [see Fig 3(b) and 3(c)].

It can be clearly observed that motion causes the border to change in a regular way. Fig 3(a) is the
original border before it became motion blurred. Fig 3(b) was blurred in the 0° direction: the border
around the 0° direction has become visibly blurred and its boundary has become wider. Moreover,
its color saturation dropped much more markedly than in the border around the 90° direction. Fig 3(c)
presents similar features, though the direction is 30°.
2.2.2 Parameter extraction model
In the case of horizontal motion [Fig 3(b)], each line of the image is a sequence that can be
expressed as f(x,y), and each such line can be treated as a one-dimensional sequence f(x). Assume
that the value of the border pixels is 1 and that all other pixels are 0, and let a and b be the boundaries
of the traffic sign border.
The sequence moves n pixels to the right, with n < b−a. Through the inverse Fourier transform,
it is known that h(x) = 1/n for 0 ≤ x ≤ n−1. Ignoring the effect of noise, the sequence after motion is

g(x) = Σ_k f(k) h(x − k)   (4)

Simplifying Eq (4) gives

g(x) = (x − a + 1)/n,   for a ≤ x < a+n
g(x) = 1,               for a+n ≤ x ≤ b
g(x) = (b − x + n)/n,   for b < x < b+n
g(x) = 0,               otherwise          (5)

The corresponding illustration of Eq (5) is shown in Fig 4.

From Eq (5) and Fig 4, we can see that for a ≤ x ≤ a+n, g(x) grows from 0 to 1; for
a+n ≤ x ≤ b, the value of g(x) is 1; and for b < x < b+n, g(x) decreases from 1 to 0. In the image, the
saturation of the pixels in the middle of the border is the highest, and it decreases gradually from the
middle to the edge of the border. Setting the threshold at 1, the range a+n ≤ x ≤ b is taken as the width
of the sequence after blurring. That is to say, if the width of the pixels whose saturation equals 1 in
the original sequence is d, and the scale of motion is n, then the width following motion is d' = d + n.
When considering the entire border, the width of the border along the motion direction
apparently changes, while the width of the border perpendicular to the motion direction changes little.
The width between the two directions changes gradually, as shown in Fig 5.
Fig 5 Binary image of motion-blurred border segmented by a certain threshold.
Given that the circle is isotropic, no matter which direction the image is blurred in, the width of
the border changes in the same way, which makes it possible to determine both the blur direction
and the scale. Specifically, after measuring the width in all directions, two maximum values (dmax)
and two minimum values (dmin) can be extracted. Connecting the two groups of points respectively
gives two perpendicular lines, and the blur direction is the direction of the minimum-value line.
We can also obtain the scale from the width of the border. Assuming that the width of the border
before the motion is d, it is easy to see that the maximum value is dmax = d and that the minimum
value is dmin = d−n, so the scale is n = dmax − dmin. However, the result is easily affected by the
threshold choice, so it should be corrected. An appropriate coefficient (K) is introduced to the
result, giving the corrected result n = K(dmax − dmin).
Thus far, the parameters of direction and scale are extracted from the motion-blurred images.
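A small sketch of this parameter extraction, assuming the border widths have already been measured at 1° steps over [0°, 180°); the 5-sample mean filter and the default K = 1 are illustrative choices, not values from the paper.

import numpy as np

def blur_params_from_widths(widths, K=1.0):
    """Direction and scale from a 180-entry width profile (one per degree).
    The blur direction is the minimum-width direction; n = K(dmax - dmin)."""
    w = np.convolve(widths, np.ones(5) / 5.0, mode="same")  # mean filter
    direction_deg = int(np.argmin(w))
    scale = K * (w.max() - w.min())
    return direction_deg, scale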
2.3 Border deformation detection for image restoration
According to the border deformation characteristics of motion-blurred traffic sign images, the
algorithm proceeds as drawn in Fig 6. The first step in the process is to convert
the image from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity); then the traffic sign
border is extracted by defining an appropriate threshold in HSI. After locating the center of the
border, we measure the width of the border in all directions, from which the two parameters of
direction and scale are calculated. Finally, the motion-blurred image of the traffic sign is restored
using the results obtained with this method.
2.3.1 Border extraction
There are several methods that can be used for border extraction [12,13]. Since the border is
blurred, traditional methods cannot meet the requirements of extracting the border with
accuracy and integrity.
The traffic sign's border is red, so it is possible to confirm whether a pixel belongs to the
border by checking its color. The RGB image cannot confirm the color directly, so the method is
based on the HSI model. The HSI color space is well suited to describing color in a way that is practical
for human interpretation, and in the HSI model the variation of light does not greatly affect the value of
hue, so it is easy to confirm the color of a pixel. The border of the traffic sign is red, and
according to the statistical results, the H values of most border pixels fall in the ranges of 0°–36° and
324°–360°. In addition, in this method the intensity component is not used in the calculation, which
greatly reduces the computational load. There may be some noise in the resulting extraction, so
we use a filter to remove it. The results of this method are shown in Fig 7, and a sketch of the
hue-based extraction follows the figure.
Fig 7 Result of border extraction using the HSI model.
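A minimal numpy sketch of the extraction, using the standard HSI hue formula; the saturation floor added here to reject gray pixels is an assumption, not part of the stated method.

import numpy as np

def extract_red_border(rgb):
    """Red-border mask from hue; rgb is a float array in [0,1], shape (H,W,3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)          # standard HSI hue
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-9)
    # hue near 0/360 degrees is red; the saturation floor is an assumed extra filter
    return ((h <= 36.0) | (h >= 324.0)) & (s > 0.2)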
2.3.2 Measurement of border width in all directions
In order to measure the width of the border in all directions, we count the border pixels from
the center of the circle to the edge of the image along each direction, so confirming the center of the
border is the first step. Theoretically, the blurred border is centrosymmetric, so the center of the border
is its center of gravity. Assuming the size of the image is M×N, the center of the border O(Ox, Oy) is
given by

Ox = Σ n·f(n) / Σ f(n) (sums over n = 1..N) and Oy = Σ m·f(m) / Σ f(m) (sums over m = 1..M)

where f(n) and f(m) are the sums of the white pixels in the n-th column and the m-th row, respectively.
To measure the width of the border, we sweep the border in steps of 1°. Through theoretical
analysis and experimental verification, we found that the parameters are the same when the motion-blur
directions are θ and (θ+180°), so the direction of the motion blur can be normalized to the range
0° to 180°. The result of the measurement is shown in Fig 8(a). In order to eliminate noise, the
result is processed by a mean filter; Fig 8(b) shows the result after filtering. A sketch of the
measurement follows the figure.
Fig 8 Measurement result of motion-blurred traffic sign.
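The center-of-gravity and ray-counting steps might look as below, assuming the border has already been extracted into a binary mask; counting mask pixels along each ray stands in for the width measurement.

import numpy as np

def border_center(mask):
    """Center of gravity of the white (border) pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def widths_by_direction(mask, step_deg=1):
    """Count border pixels along rays from the center, one ray per degree
    over [0, 180): by symmetry the widths repeat every 180 degrees."""
    cx, cy = border_center(mask)
    h, w = mask.shape
    r = np.arange(int(np.hypot(h, w)))
    widths = []
    for deg in range(0, 180, step_deg):
        t = np.deg2rad(deg)
        xs = np.clip((cx + r * np.cos(t)).astype(int), 0, w - 1)
        ys = np.clip((cy + r * np.sin(t)).astype(int), 0, h - 1)
        widths.append(int(mask[ys, xs].sum()))   # border pixels crossed
    return np.array(widths)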
Following this, we can determine the two extreme points (the maximum point and the
minimum point); the direction of the minimum point is the motion direction. To make the
results more precise, we use the direction of the maximum point to correct any errors.
Conclusions
This paper proposed a new method to measure two important parameters, direction and scale,
of motion-blurred traffic signs in the spatial domain. This method is robust, and it can reduce the
impact of changing illumination on parameter extraction. Using the measured parameters to restore the
motion-blurred traffic sign images, we obtained good results that meet the system's requirements
for image recognition. The results illustrate that the method can deal with recognition problems
associated with motion-blurred traffic sign images. Compared with methods based on the frequency
domain, the impact of noise on parameter extraction is much smaller. In conclusion, application of the
algorithm offers an advantage in traffic sign recognition; it can improve the performance
of the DAS and help to improve automatic driving and road safety.
As for future work, we will continue to investigate this subject by providing a more detailed
background of the problem, and we will work to improve the robustness of border extraction with
features better suited to reducing the effects of the environment.
References
1. Ji RR, Duan LY, Chen J, Yao HX, Yuan JS, Rui Y, et al. Location discriminative vocabulary coding for mobile landmark search. International Journal of Computer Vision, 2012; 96: 290-314.
2. De la Escalera A, Armingol JM, Pastor JM, Rodriguez FJ. Visual Sign Information Extraction and Identification by Deformable Models for Intelligent Vehicles. IEEE Transactions on Intelligent Transportation Systems, 2004; 5: 57-68.
3. Gao Y, Tang JH, Hong RC, Dai QH, Chua TS, Jain R. W2Go: a travel guidance system by automatic landmark ranking. Proceedings of the International Conference on Multimedia, 2010; 123-132.
4. Franke U, Gavrila D, Gorzig S, Lindner F, Paetzold F, Wohler C. Autonomous driving goes downtown. IEEE Intelligent Systems, 1998; 13: 40-48.
5. Gonzalez A, Garrido MA, Llorca DF, Gavilan M, Fernandez JP, Alcantarilla PF, et al. Automatic Traffic Signs and Panels Inspection System Using Computer Vision. IEEE Transactions on Intelligent Transportation Systems, 2011; 12: 485-499.
6. Maldonado-Bascon S, Lafuente-Arroyo S, Gil-Jimenez P, Gomez-Moreno H, Lopez-Ferreras F. Road-Sign Detection and Recognition Based on Support Vector Machines. IEEE Transactions on Intelligent Transportation Systems, 2007; 8: 264-278.
7. Khan JF, Bhuiyan SMA, Adhami RR. Image Segmentation and Shape Analysis for Road-Sign Detection. IEEE Transactions on Intelligent Transportation Systems, 2011; 12: 83-96.
8. Barnes N, Zelinsky A, Fletcher LS. Real-Time Speed Sign Detection Using the Radial Symmetry Detector. IEEE Transactions on Intelligent Transportation Systems, 2008; 9: 322-332.
9. Stallkamp J, Schlipsing M, Salmen J, Igel C. The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In Proceedings of the IEEE International Joint Conference on Neural Networks, 2011; 1453-1460.
10. Lin HT, Tai YW, Brown MS. Motion Regularization for Matting Motion Blurred Objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011; 33: 2329-2336. doi: 10.1109/TPAMI.2011.93
11. Jiang XY, Cheng DC, Wachenfeld S, Rothaus K. Motion Deblurring. Available: http://cvpr.uni-muenster.de/teaching/ws04/seminarWS04/downloads/MotionDeblurring-Ausarbeitung.pdf
12. Fang C, Fuh CS, Chen SW, Yen PS. A road sign recognition system based on dynamic visual model. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003; 1: 750-755.
13. Fleyeh H, Davami E. Eigen-based traffic sign recognition. IET Intelligent Transport Systems, 2011; 5: 190-196.


Minimize Response Delay in Mobile Ad Hoc Networks using Dynamic Replication

Karthik K (1), Suthahar P (2)
(1) Student, M.E (CSE), M.KCE, Karur, e-mail: vangalkarthi@gmail.com
(2) Asst Professor, M.E (CSE), M.KCE, Karur, e-mail: psutha@engineer.com
Abstract: A mobile ad hoc network (MANET) is an area of great research interest in networking.
Communication range and node mobility affect the efficiency of file querying. MANETs support a
P2P file-sharing mechanism; its main advantages are that files can be shared without base stations and
that server overload is avoided. File replication has the major advantages of enhancing file availability
and reducing file querying delay. Current replication protocols have drawbacks concerning node
storage and the allocation of resources to replicas. In this paper, we introduce a new distributed file
replication technique that handles file dynamics, such as file addition and deletion, in a dynamic
manner. This protocol achieves minimum average response delay at a lower cost than other replication
protocols.
Keywords: Mobile Ad hoc Network (MANET), file replication, peer-to-peer, query delay.

I. INTRODUCTION
In mobile ad hoc networks (MANETs), the movement of nodes can partition the network,
so that nodes in one partition cannot access data held by nodes in other partitions. File
replication is a good solution to improve file availability in distributed systems. By replicating a
file at mobile nodes that are not the owner of the source file, file availability can be improved,
because there are multiple replicas in the network and the probability of finding one copy
of the file is higher. File replication can also minimize query delay, since mobile nodes can
obtain the file from a nearby replica. However, most mobile nodes have only a limited amount
of memory space, range and power, so it is difficult for one node to collect and hold all the
files; these constraints and the independence of nodes in MANETs cause file unavailability for the
requesters. When a mobile node replicates only part of the files, there is a trade-off between
query delay and file availability.
MANETs vary significantly from wired networks in network topology, configuration and
network resources. Features of MANETs are dynamic topology due to host movement, network
partitioning due to unreliable communication, and minimal resources such as limited power
and limited memory capacity [1, 2]. File sharing is one of the important functions to be supported
in MANETs; without this facility, the performance and usefulness of a MANET are greatly reduced [3].
A good example where file sharing is important is a conference in which several users share their
presentations while discussing a particular issue; it is also applicable in defence applications,
rescue operations, disaster management, etc. The method used for file sharing depends strongly on the
features of the MANET [3]. Successive network partitions due to host movement or limited battery
power reduce file availability in the network. To overcome file unavailability, replication
techniques address all these problems so that files are available at all times in the network.
File replication
File replication is a technique that improves file availability by creating copies of a
file, allowing better file sharing. It is a key approach for achieving high availability. File
replication has been widely used to maximize file availability in distributed systems, and we
apply this technique to MANETs. It is suitable for minimizing the response time of access requests,
distributing the processing load of these requests across several servers, and eliminating overload on
the transmission paths to a unique server. Replicas also absorb variations in access load over time.

Fig. 1 File Replication in MANETs

BENEFITS OF FILE REPLICATION

In distributed systems, files are accessed from multiple locations, so it is beneficial to
replicate them throughout the network.
A. Increased file availability
Multiple replications of files improve file availability and reliability in the case of
network failures.
B. Faster query response
Queries initiated from nodes where replicas are stored can be satisfied directly,
without the network transmission delays of fetching from remote nodes.
C. Load sharing
The computational load of responding to queries can be distributed over a number of nodes
in the network.
RESEARCH ISSUES RELATED TO FILE REPLICATION
A. Power consumption
Mobile nodes in a MANET run on battery power. If a node with little remaining power is loaded
with replicas of many frequently accessed files, it is soon drained and can no longer provide services.
Thus a replication algorithm should place replicas on nodes that have sufficient power, by periodically
checking the remaining battery power of each node.
B. Node mobility
In a MANET, hosts are mobile, which leads to a dynamic topology. Thus the replication technique
has to support movement prediction: if a host is likely to move away from the network, its
replicas should be moved to other nodes that are expected to remain in the network for a particular
period of time.
C. Resource availability
The nodes participating in a MANET are portable hand-held devices, so storage capacity is limited.
Before sending a replica to a node, the technique has to determine whether the node has sufficient
storage capacity to hold the replicated files.
D. Real-time applications
MANET applications like rescue and military operations are time-critical and may have both
firm and soft real-time transactions. Therefore, the replication technique should be able to deliver
correct information before processing deadlines expire, taking both firm and soft real-time
transaction types into consideration in order to minimize the number of transactions missing their
deadlines.
E. Network partitioning
Due to the frequent disconnection of mobile nodes, network partitioning occurs more often in
MANET databases than in traditional databases. Network partitioning is a serious problem in a
MANET when the server that contains a required file is isolated in a separate partition, reducing
file accessibility to a large extent. Therefore, the replication technique should be able to determine
when network partitioning is likely, so that file items can be replicated in advance.
B. Peer-to-Peer replication model
The peer-to-peer model removes the restrictions of the client-server model: replica files can
be transmitted between hosts without requiring all hosts to be involved in the communication in the
network. The peer-to-peer model is useful for mobile systems that have poor network connectivity, and
the single point of failure is naturally eliminated.

FIG 2: Peer-to-Peer File Replication.

AN OVERVIEW OF EXISTING TECHNIQUES

Kang Chen [4] proposed a distributed file replication protocol named priority competition
and split replication (PCS) that realizes the optimal replication rule in a fully distributed
manner. The work analyzed the effect of replica distribution on the average querying delay under
constrained available resources for two movement models, and then derived an optimal replication
rule that allocates resources to file replicas with minimum average querying delay.
T. Hara, 2001 [6] proposed effective replica allocation methods in mobile ad hoc networks
for improving file accessibility. The author proposed three replica allocation techniques that improve
file accessibility by replicating files on mobile nodes: the Static Access Frequency (SAF) technique,
the Dynamic Access Frequency and Neighbourhood (DAFN) technique and the Dynamic Connectivity-based
Grouping (DCG) technique. These techniques make the following assumptions: (i) each file item
and each mobile node is assigned a separate identifier; (ii) every mobile node has finite storage space
to store replica files; (iii) there is no update processing; and (iv) the access frequency of each
file item, which is the number of times a particular mobile node accesses that file item in a unit time
interval, is known and does not change. The decision of which file items are to be replicated on
which mobile node is based on the file items' access frequencies, and these decisions are taken during a
particular period of time, called the relocation period. In the SAF technique, a mobile host allocates
replicas of the files with the highest access frequencies. In the DAFN technique, replicas are
preliminarily allocated based on the SAF technique, and then replica duplication among neighbouring
mobile nodes is eliminated. In the DCG technique, groups of mobile nodes are created based on
connectivity, and replica files are shared within each group. The simulation results show that in most
cases the DCG technique gives the highest accessibility, while the SAF technique generates the lowest
traffic.
Yang Zhang et al., 2012 [5] describe how, in MANETs, nodes move freely and link and node
failures are common, leading to frequent network partitions. When a network partition occurs,
mobile nodes in one partition are not able to access files replicated by nodes in other partitions,
which significantly reduces the performance of file access. To solve this problem, file replication
techniques are used.
V. Ramany and P. Bertok, 2008 [7] studied solutions for replicating location-dependent data in
MANETs to handle unreliable network connections. Replication aims to improve accessibility, shorten
response time and provide fault tolerance. When a file is tied to one location in the subnetwork
and valid only within a region around that location, the advantages of replication apply only
within this region.
PROBLEM DEFINITION
Although many file replication protocols are available, their main problem is that they lack a
rule for allocating limited resources to different files for replica creation so as to achieve the minimum
global average querying delay, i.e., global search efficiency optimization under limited resources.
They simply consider storage as the resource for replicas, but neglect that a node's frequency of
meeting other nodes also controls the availability of its files: files on a node with a higher meeting
ability have higher availability. So the problem is how to allocate the limited resources in the network
to different files for replication, and how to create and delete replicas dynamically.
PROPOSED SOLUTION
In this section we propose a new distributed file replication protocol to minimize the average
querying delay. The Priority Based Dynamic Replication (PBDR) technique adds and deletes
replica files based on priority: if a file succeeds in the priority competition, its replica is added;
otherwise the replica is deleted.

Algorithm 1: Pseudo-code for File_Adding in PBDR

i.FILE_ADD_REPLICA(k)            // node i tries to create replica files on node k
k.FILE_ADD_REPLICA(i)            // node k tries to create replica files on node i
Begin
  If |Fi| < MAX then             // check the available storage of the node
    Qi <- 0                      // initialize count
    priority_check()
    For (each file f in current node)
      If (node.test_file(f) == true) then
        Node(i).FILE_ADD_REPLICA()
      Else
        count <- count + 1       // select another neighbour
End
Procedure priority_check()
  While (resource < j.size)
    File(f) < Si                 // check whether the file size is less or greater
    If (priority_level > Pj) then
      Return file_addition()     // priority test successful
    Else
      Si < FILE(f)               // priority test fails
      Return false
End procedure
Algorithm 2: Pseudo-code for File_Deleting in PBDR

i.FILE_DEL_REPLICA(k)            // node i tries to delete replica files on node k
k.FILE_DEL_REPLICA(i)            // node k tries to delete replica files on node i
Begin
  While (file f in current node)
    priority_check()
    If (node.test_file(f) == true) then
      Node(i).FILE_DEL_REPLICA()
    Else
      count <- count + 1         // select another neighbour
End
Procedure priority_check()
  While (resource < j.size)
    If (priority_level < Pj) then
      Return file_deletion()     // priority test successful
    Else
      Si < FILE(f)               // priority test fails
      Return false
End procedure
DESIGN OF THE PBDR FILE REPLICATION TECHNIQUE
In PBDR, each node dynamically updates its meeting ability (Vi) and the average meeting
ability of all hosts in the system (V). Replication is distributed among all the neighbour nodes.
Each node also periodically calculates the priority Pj of each of its files. The popularity qj is calculated
using R, where qj and R are the number of received requests for the file and the total number of queries
generated in a unit time period, respectively. Note that R is a pre-defined system parameter. A
replicating node should keep the average meeting ability of the replica nodes for file j around V.
Node i first checks the meeting abilities of its neighbours and then chooses a neighbour node k that
does not contain file j.
The protocol first chooses a neighbour node k and then checks whether it is the current node.
A priority test is then conducted for the requested file j: if the test succeeds, ADD_replica is
executed; otherwise DEL_replica is executed at each peer in the network.
Node k creates replicas for files in a top-down manner periodically. Algorithm 1 presents
the pseudo-code for the process of PBDR file addition between two encountered nodes. In detail,
suppose node i needs to replicate file j, which is at the top of the list.
The neighbour node repeats the above process until its available storage is no less than the size of
file j. Next, the node fetches the next file from the top of the list and repeats the process. If file j fails
to be replicated after K attempts, the node stops launching competitions until the next period. If the
selected neighbour's available storage is larger than the size of file j, it creates a replica for file j
directly. Otherwise, a priority test takes place between the replica of file j and the replicas already on
the neighbour node, based on their P values. The priority value of the new replica is set to half of that
of the original
file. If file j is among the selected files, it fails the priority test and will not be replicated on
the neighbour node. Otherwise, all selected files are removed and file j is replicated. If file j fails, node i
will make another attempt for file j until the maximum number of attempts (K) is reached. The setting of
K attempts ensures that each file can compete with a sufficient subset of replicas in the system.
If node i fails to create a replica for file j after K attempts, then replicas on node i whose P values are
smaller than that of file j are unlikely to win a priority test. Thus, at this moment, node i stops
replicating files until the next round.
The file replication stops when the communication session of the two involved nodes ends.
Then, each node continues the replication process for its files after excluding the disconnected node
from its neighbourhood list. Since file popularity, K, and available system resources change over
time, each node continuously executes PBDR to dynamically handle these time-varying
factors. Each node also periodically recalculates the popularity of its files (qj) to reflect the changes
in file popularity (due to changes in node querying patterns and rates) in different time periods. The
periodic file popularity updates automatically handle file dynamism. A simplified sketch of the
priority competition follows.
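The sketch below renders the priority competition in simplified Python, one neighbour node at a time. The Replica/Node structures and the rule that a new replica enters with half the original priority follow the description above; everything else is an illustrative assumption.

from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    size: int
    priority: float          # P value used in the competition

@dataclass
class Node:
    storage: int
    replicas: list = field(default_factory=list)

    def free(self) -> int:
        return self.storage - sum(r.size for r in self.replicas)

def try_replicate(node: Node, f: Replica) -> bool:
    """Priority competition on one neighbour: evict strictly lower-priority
    replicas until f fits; otherwise f loses and is not replicated."""
    while node.free() < f.size:
        losers = [r for r in node.replicas if r.priority < f.priority]
        if not losers:
            return False                      # f fails the priority test
        node.replicas.remove(min(losers, key=lambda r: r.priority))
    # the new replica enters with half the original file's priority
    node.replicas.append(Replica(f.name, f.size, f.priority / 2.0))
    return True

node = Node(storage=100, replicas=[Replica("a", 60, 0.2), Replica("b", 30, 0.9)])
print(try_replicate(node, Replica("j", 50, 0.5)))   # True: "a" is evicted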

FIG 3: File Replicas Addition and Deletion Process in PBDR.

PERFORMANCE
The performance of each technique was evaluated in simulation tests on NS-2. We examine the
hit rates and average delays of the four protocols. We used the following metrics in the experiments:
Hit rate: the number of requests successfully satisfied, by either original files or replica files.
Average delay: the average completion time over all requests that finish execution, calculated from
the throughput and the handling of the requests. A toy computation of these metrics is sketched below.
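# Toy metric computation: each record is (satisfied, delay_in_seconds);
# the log format is illustrative only.
log = [(True, 0.8), (True, 1.4), (False, None), (True, 0.6)]

delays = [d for ok, d in log if ok]
hit_rate = len(delays) / len(log)        # requests served by a file or replica
average_delay = sum(delays) / len(delays)
print(f"hit rate = {hit_rate:.2f}, average delay = {average_delay:.2f} s")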
Hit Rate
Fig. 4(a) shows the hit rates of the four methods in the simulation. The hit rates follow
PBDR > SAF > DAFN > DCG, so PBDR achieves a higher hit rate than the other methods. Since
PBDR realizes the replication rule in a fully distributed way, its performance differs slightly from the
others. PBDR considers the intermittent connection properties of disconnected MANETs in its
replication. DCG considers only temporarily connected groups for file replication, which are not
stable in MANETs; therefore it has the lowest hit rate. Randomly assigning resources to files cannot
create more replicas for popular files, which would also lead to a low hit rate. This result demonstrates
the effectiveness of the proposed PBDR in improving the overall file availability and the correctness
of our analysis for MANETs.

FIG 4(a) Hit Rate

Average Delay
Fig. 4(b) shows the average delays of the four methods in the simulation. The average delays
follow PBDR < SAF < DAFN < DCG, which is the reverse of the ordering of the four methods on hit
rate shown in Fig. 4(a). This is because the average delay is inversely related to the overall file
availability. PBDR has high file availability; SAF distributes every file to different nodes, while DCG
only shares data among simultaneously connected neighbour nodes, and DAFN has low file availability
since all files receive an equal amount of memory resources for replicas. PBDR has the minimum
average delay in the simulation results.

FIG 4(b) Average Delay

Replication Cost
Fig. 4(c) shows the replication costs of the four methods. PBDR has the lowest replication
cost, the ordering being PBDR < DAFN < SAF < DCG. In PBDR, nodes only need to contact the file
server for the replica list, leading to the lowest cost. DCG generates the highest replication cost, since
the network partitions and its members need to transfer a huge number of files to remove duplicate
replicas. In PBDR, a node tries at most K times to create a replica for each of its files, producing a much
lower replication cost than SAF and DCG. This result demonstrates the high energy-efficiency of
PBDR. Combining all the above results, we conclude that PBDR has the highest overall file availability
and efficiency compared to existing methods, and that PBDR is effective for file
replication in MANETs.

FIG 4(c) Replication Cost

Replica Distribution
Fig. 4(d) shows the proportion of resources allocated to replicas in each protocol in the simulation.
PBDR presents a distribution very close to that of DAFN, while the other two follow SAF and DCG.
SAF also shows some similarity to PBDR in replica distribution; however, the difference between
PBDR and SAF is that PBDR assigns priority to popular files and runs the priority test for files in the
network, whereas DAFN gives even priority to all files. Since popular files are queried more
frequently, SAF still leads to lower performance in file replication.

FIG 4(d) Replica Distributions

Therefore, the resources are allocated more strictly under PBDR, leading to greater efficiency,
while the other replication protocols have higher replication costs. Among the other three methods,
those that favor popular files show closer similarity to PBDR, and PBDR has the best overall
performance in MANETs. The storage capacity limitation of file replication is overcome by handling
file dynamics, and distributing the files among all the nodes of the distributed network, across
different partitions, gives better performance. This supports the correctness of our theoretical analysis
and the results for MANETs.
CONCLUSION
In this paper, we analyze the problem of how to allocate limited resources for replication and
manage those resources in MANETs. While previous protocols consider only storage and
resources, we also consider file additions and deletions in a dynamic manner for peer-to-peer
communication in distributed systems. The Priority Based Dynamic Replication (PBDR) technique
efficiently adds and deletes file replicas and manages them over particular time intervals. The NS-2
simulator was used to analyse the effectiveness of the PBDR technique: the hit rate is higher than in
the previous protocols, the average query delay is reduced, and the replication cost is lower than in
the previous protocols. In summary, the PBDR protocol minimizes the average response delay in
MANETs.

Efficient Distributed Deduplication System With Higher Reliability Mechanisms In Cloud
P.Nivetha, M.Tech Information Technology,
Vivekananda College of Engineering for Women,
Tiruchengode-637205
Abstract - Data deduplication is a methodology for reducing storage needs by eliminating redundant data.
Only one unique instance of the data is retained on storage media, such as disk or tape, and redundant
data is replaced with a pointer to that copy. It has been widely used in many cloud storage techniques
to reduce the amount of storage space and save bandwidth. To maintain the confidentiality of sensitive
data while supporting deduplication, the convergent encryption technique has been proposed to encrypt
data before outsourcing. To better protect data security, this work makes an attempt to address the
problem of authorized data deduplication. Several deduplication techniques are implemented in a hybrid
cloud architecture. The system uses a hashing technique to maintain the uniqueness of textual data and
transformation techniques to do the same for images.
Keywords - Deduplication, Reliability.

I. INTRODUCTION
A hybrid cloud is a composition of two or more agents, such as clouds (private, community
or public), users and auditors, that remain distinct but are bound together, offering the benefits
of multiple deployment models. A hybrid cloud can also be defined as the ability to connect collocation,
managed and dedicated services with attached cloud resources. Hybrid cloud services cross isolation
and provider boundaries, so they cannot simply be placed in one category of private, public or
community cloud service. A hybrid cloud allows extending either the capability or the capacity of a
cloud service by aggregation, assimilation or customization with a different cloud service.
A variety of use cases for hybrid cloud composition exist. For example, an organization may
store sensitive client data in a warehouse on a private cloud application, but interconnect that
application to business intelligence applications provided on a public cloud as a software service. This
example of hybrid cloud storage extends the capability of the enterprise to deliver specific business
services through the addition of externally available public cloud services.
Yet another example of hybrid cloud computing involves IT organizations that use public
cloud computing resources to meet capacity requirements that cannot be met by the private cloud.
This capability enables hybrid clouds to employ cloud bursting for scaling across a number of clouds.
Cloud bursting is an application deployment model in which an application runs in a private cloud or
data center and "bursts" to a public cloud when the demand for computing capacity increases. The
primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra
computing resources only when they are needed.
To make data management in cloud computing efficient, deduplication has become a well-known
technique and has attracted much attention recently. Data deduplication is a specialized data
compression technique used for eliminating duplicate copies of repeated data in storage. The aim is
to improve storage utilization, and it can also be applied to network data transfer to reduce the number
of bytes that must be sent. Instead of maintaining multiple data copies with the same content,
deduplication removes redundant data by keeping only one physical copy and referring the other
redundant copies to it. Deduplication can occur at either the file level or the block level. At the file
level, it eliminates duplicate copies of the same file; at the block level, it eliminates duplicate blocks
of data that occur in non-identical files.
Even though data deduplication brings a number of benefits, security and privacy concerns arise
because users' sensitive data are vulnerable to both inside and outside attacks. Traditional encryption,
while providing data confidentiality, is incompatible with data deduplication: it requires different users
to encrypt their data with their own keys, so identical data copies of different users lead to different
ciphertexts, making deduplication impossible. Convergent encryption has been used to enforce data
privacy while making the deduplication technique feasible. It
encrypts/decrypts a data copy with a convergent key, which is obtained by computing the cryptographic
hash value of the content of the data copy. After key generation and data encryption, users retain the
keys and send the ciphertexts to the cloud. This work performs duplicate elimination on both image
and text data, so that the user cannot store such data repeatedly, which would increase the overhead
of the server.
2. THE DISTRIBUTED DEDUPLICATION SYSTEMS: THE EXISTING SYSTEM
2.1 Secret Sharing Scheme
There are two algorithms in a secret sharing scheme: Share and Recover. The secret
is divided and distributed using Share; with enough shares, the secret can be extracted and recovered
with the Recover algorithm. In our implementation, we will use the Ramp secret sharing scheme
(RSSS) to secretly split a secret into shards. Specifically, the (n, k, r)-RSSS (where n > k > r >= 0)
generates n shares from a secret so that (i) the secret can be recovered from any k or more shares, and
(ii) no information about the secret can be deduced from any r or fewer shares. The two algorithms,
Share and Recover, are defined in the (n, k, r)-RSSS as follows.
Share divides a secret S into (k - r) pieces of equal size, generates r random pieces of the same
size, and encodes the k pieces using a non-systematic k-of-n erasure code into n shares of the same size.
Recover takes any k out of the n shares as input and outputs the original secret S.
It is known that when r = 0, the (n, k, 0)-RSSS becomes the (n, k) Rabin's Information Dispersal
Algorithm (IDA) [9]. When r = k - 1, the (n, k, k-1)-RSSS becomes the (n, k) Shamir's
Secret Sharing Scheme (SSSS) [10].
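
As a concrete illustration of the r = k - 1 case, the following is a minimal Python sketch of the (n, k) Shamir scheme over a small prime field. It is for exposition only, not production cryptography; the choice of field prime is arbitrary.

    import random

    P = 2**61 - 1  # an arbitrary Mersenne prime used as the field modulus

    def share(secret, n, k):
        # Split `secret` (< P) into n shares; any k of them recover it.
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]

        def f(x):  # evaluate the random degree-(k-1) polynomial at x
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the secret from k shares.
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
        return total

    shares = share(123456789, n=5, k=3)
    assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice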
2.2 Tag Generation Algorithm
In our system below, two kinds of tag generation algorithms are defined, namely TagGen and
TagGen'. TagGen is the tag generation algorithm that maps the original data copy F to a tag T(F).
This tag is generated by the user and used to perform the duplicate check with the server. The second
tag generation algorithm, TagGen', takes as input a file F and an index j and outputs a tag. This tag,
also generated by the user, is used for the proof of ownership of F.
2.3 The File-level Distributed Deduplication System
To support efficient duplicate checks, tags for each file are computed and sent to the S-CSPs.
To prevent a collusion attack launched by the S-CSPs, the tags stored at different storage servers are
computationally independent and different. We now elaborate on the details of the construction as
follows.
System setup. In our construction, the number of storage servers (S-CSPs) is assumed to be
n, with identities denoted by id1, id2, ..., idn, respectively. Define the security parameter 1^lambda,
and initialize a secret sharing scheme SS = (Share, Recover) and a tag generation algorithm TagGen.
The file storage system of each storage server is initialized to be empty.
File Upload. To upload a file F, the user interacts with the S-CSPs to perform the deduplication.
More precisely, the user first computes and sends the file tag phiF = TagGen(F) to the S-CSPs for the
file duplicate check.
If a duplicate is found, the user computes and sends phiF,idj = TagGen'(F, idj) to the j-th server
with identity idj via a secure channel for 1 <= j <= n (which could be implemented with a cryptographic
hash function Hj(F) related to the index j). The reason for introducing the index j is to prevent a server
from getting the shares held by the other S-CSPs for the same file or block, which will be explained in
detail in the security analysis. If phiF,idj matches the metadata stored with phiF, the user is given a
pointer to the shard stored at server idj.
Otherwise, if no duplicate is found, the user proceeds as follows. He runs the secret sharing
algorithm SS over F to get {cj} = Share(F), where cj is the j-th shard of F. He also computes
phiF,idj = TagGen'(F, idj), which serves as the tag for the j-th S-CSP. Finally, the user uploads the set
of values {phiF, cj, phiF,idj} to the S-CSP with identity idj via a secure channel. The S-CSP stores
these values and returns a pointer back to the user for local storage.
File Download. To download a file F, the user first downloads the secret shares {cj} of the
file from k out of the n storage servers. Specifically, the user sends the pointer of F to k out of the
n S-CSPs. After gathering enough shares, the user reconstructs the file F using the algorithm
Recover({cj}).
This approach provides fault tolerance: the file remains accessible to the user even if a
limited subset of the storage servers fails.
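
A minimal sketch of this upload flow is given below, with TagGen realised as SHA-256 and the per-server tag computed as a hash bound to the server identity, as suggested above. The in-memory dictionaries standing in for the S-CSPs are assumptions, and for brevity the stored value is the file itself where the real scheme stores the shard cj = Share(F).

    import hashlib

    def tag(data: bytes) -> str:                    # TagGen(F): duplicate-check tag
        return hashlib.sha256(data).hexdigest()

    def tag_j(data: bytes, server_id: str) -> str:  # TagGen'(F, id_j), i.e. H_j(F)
        return hashlib.sha256(server_id.encode() + data).hexdigest()

    def upload(file_bytes: bytes, servers: dict) -> list:
        # servers: id_j -> {file tag: (per-server tag, stored share)}
        t = tag(file_bytes)
        if all(t in store for store in servers.values()):
            # duplicate found: prove ownership per server, receive pointers only
            for sid, store in servers.items():
                assert store[t][0] == tag_j(file_bytes, sid)
        else:
            for sid, store in servers.items():
                # the real scheme stores the shard c_j = Share(F), not the file
                store[t] = (tag_j(file_bytes, sid), file_bytes)
        return [(sid, t) for sid in servers]        # pointers for local storage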
2.4 The Block-level Distributed Deduplication System
In this section, we show how to achieve fine-grained block-level distributed deduplication.
In a block-level deduplication system, the user first performs file-level deduplication before uploading
his file. If no duplicate is found, the user divides the file into blocks and performs block-level
deduplication. The system setup is the same as in the file-level deduplication system, except that the
block size parameter is defined additionally. Next, we give the details of the File Upload and File
Download algorithms.
File Upload. To upload a file F, the user first performs the file-level deduplication by sending
phiF to the storage servers. If a duplicate is found, the user performs the file-level deduplication, as
in Section 2.3. Otherwise, if no duplicate is found, the user performs the block-level deduplication
as follows.
He first divides F into a set of fragments {Bi} (where i = 1, 2, ...). For each fragment
Bi, the user performs a block-level duplicate check by computing phiBi = TagGen(Bi); the data
processing and duplicate check of block-level deduplication are the same as those of file-level
deduplication with the file F replaced by the block Bi.
Upon receiving the block tags {phiBi}, the server with identity idj computes a block signal
sigmaBi for each i.
i) If sigmaBi = 1, the user further computes and sends phiBi,j = TagGen'(Bi, j) to the S-CSP
with identity idj. If it also matches the corresponding stored tag, the S-CSP returns a block pointer of
Bi to the user. The user then keeps the block pointer of Bi and does not need to upload Bi.
ii) If sigmaBi = 0, the user runs the secret sharing algorithm SS over Bi and gets
{cij} = Share(Bi), where cij is the j-th secret share of Bi. The user also computes phiBi,j for
1 <= j <= n and uploads the set of values {phiF, phiF,idj, cij, phiBi,j} to the server idj via a secure
channel. The S-CSP returns the corresponding pointers back to the user.
File Download. To download a file F = {Bi}, the user first downloads the secret shares {cij}
of all the blocks Bi in F from k out of the n S-CSPs. Specifically, the user sends all the pointers for
Bi to k out of the n servers. After gathering all the shares, the user reconstructs all the fragments Bi
using the algorithm Recover({cij}) and gets the file F = {Bi}.
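
The per-fragment tagging step can be sketched in the same style; the fixed block size below is an assumption, since the construction leaves it as a setup parameter.

    import hashlib

    BLOCK = 4096  # assumed fixed block size; the system setup defines this parameter

    def block_tags(data: bytes) -> list:
        # tag every fragment B_i; each tag is then duplicate-checked like a file tag
        return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
                for i in range(0, len(data), BLOCK)]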


3. THE PROPOSED SYSTEM


In this work we have implemented the SHA (Secure Hash Algorithm) to detect text-level
duplication by generating a single hash value for the whole file, which reduces the overhead of
maintaining many hash values for a single file. Also, a KDC (Key Distribution Centre) is used to
generate a random number from the registration details of the user, which is used to maintain the
authentication of the user. The Scalar Wavelet Quantization (SWQ) algorithm is used to avoid the
duplication of images. This is done by converting the image into coefficients, so that if the coefficient
values are the same, we can judge that the images are identical.
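
A minimal sketch of the text-level check, assuming an in-memory index in place of the real server store:

    import hashlib

    index = {}  # digest -> name of the stored file (toy stand-in for the server)

    def try_store(name: str, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()  # one hash for the whole file
        if digest in index:
            return "duplicate of " + index[digest] + " - discarded"
        index[digest] = name
        return "stored"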
3.1 IMPLEMENTATIONS:
The following entities are involved in this deduplication system:
- Private cloud
- Public cloud
- Data user
- Deduplication system
3.2 Private cloud:
Compared with the traditional deduplication architecture in cloud computing, this is a new
entity introduced to facilitate the users' secure usage of the cloud service. In particular, since the
computing resources at the data user/owner side are limited and the public cloud is not fully trusted
in practice, the private cloud is able to provide the data user/owner with an execution environment
and infrastructure working as an interface between the user and the public cloud. The private keys
for the privileges are managed by the private cloud, which answers the file token requests from the
users. The interface offered by the private cloud allows the user to submit files and queries to be
securely stored and computed, respectively.
3.3 Public cloud:
This is an entity that provides a data storage service in the public cloud. The public cloud provides
the data outsourcing service and stores data on behalf of the users. To reduce the storage cost, the
public cloud eliminates the storage of redundant data via deduplication and keeps only unique data.
In this paper, we assume that the public cloud is always online and has abundant storage capacity
and computation power.
3.4 Data user:
A user is an entity that wants to outsource data storage to the public cloud and access the data
later. In a storage system supporting deduplication, the user only uploads unique data and does not
upload any duplicate data (which may be owned by the same user or by different users), saving upload
bandwidth. In the authorized deduplication system, each user is issued a set of privileges during the
setup of the system. Each file is protected with the convergent encryption key and privilege keys to
realize the authorized deduplication with differential privileges. For additional security we use a
cryptographic technique, encryption using the Jenkins hash function.
3.5 Deduplication System:
In this deduplication module, both image and text data are processed to avoid duplication
of content. For image data, the SWQ algorithm is used to determine the coefficients of the images.
Once the coefficients are determined, they are compared, and the similarity ratio determines whether
to keep the second image or discard it.
Text-based deduplication is performed by generating the hash value of each document with the
SHA algorithm. If the hash values of two documents are the same, the second is discarded; otherwise
it is considered for storage.
According to the data granularity, deduplication strategies can be categorized into two main
categories: file-level deduplication and block-level deduplication, which is nowadays the most common
strategy. In block-based deduplication, the block size can either be fixed or variable. Another
categorization criterion is the location at which deduplication is performed: if data are deduplicated
at the client, it is called source-based deduplication, otherwise target-based. In source-based
deduplication, the client first hashes each data segment he wishes to upload and sends these results to
the storage provider to check whether such data are already stored; thus only non-duplicate data
segments are actually uploaded by the user. While deduplication at the client side can achieve
bandwidth savings, it unfortunately can make the system vulnerable to side-channel attacks, whereby
attackers can immediately discover whether certain data are stored or not. On the other hand, by
deduplicating data at the storage provider, the system is protected against side-channel attacks, but
this solution does not decrease the communication overhead.
3.6 Secure Hash Algorithm (SHA)
The Secure Hash Algorithm is used for authentication purposes; SHA is one of several families
of cryptographic hash functions, most often used to verify that a file has been unaltered. SHA-3 uses
the sponge construction [10][11], in which message blocks are XORed into a subset of the state, which
is then transformed as a whole. In the version used in SHA-3, the state consists of a 5 x 5 array of
64-bit words, 1600 bits in total. The authors claim 12.5 cycles per byte on an Intel Core 2 CPU;
however, in hardware implementations it is notably faster than all the other finalists.
Keccak's authors have proposed additional, not-yet-standardized uses for the function,
including an authenticated encryption system and a "tree" hash for faster hashing on certain
architectures. Keccak is also defined for smaller power-of-2 word sizes w, down to 1 bit (25 bits of
total state). Small state sizes can be used to test cryptanalytic attacks, and intermediate state sizes
(from w = 8, 200 bits, to w = 32, 800 bits) can be used in practical, lightweight applications.
3.7 Scalar Wavelet Quantization Algorithm (SWQ)
For quantization and entropy coding, the same choices are available to wavelet compression as
to other transform-based compression techniques. For quantization, one can choose between scalar
and vector quantizers; similarly, for entropy coding, a variety of lossless bit-packing techniques can
be used, such as run-length encoding. In the transform stage, however, wavelet compression differs
significantly from other transform-based compression techniques. Specifically, in standard
transform-based compression such as the DCT, the transform is a matrix-vector multiplication, where
the vector represents the signal to be transformed and compressed, while the matrix is fixed for any
given signal size.
In wavelet compression, on the other hand, the transform is a pair of filters, one low-pass and
the other high-pass, and the output of each filter is downsampled by two. The low-pass filter passes
only the low-frequency content, while the high-pass filter extracts only the high-frequency content.
Scalar quantization quantizes each sample individually, whereas vector quantization quantizes groups
of samples.
Each of the two output signals can be transformed further in the same way, and this process can
be repeated recursively several times, resulting in a tree-like structure called the decomposition tree.
Unlike DCT compression, where the transform matrix is fixed, in the wavelet transform the designer
can choose which filter pair to use and which decomposition tree structure to follow.
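
The paper does not spell out the exact SWQ steps, but the filter-pair idea can be sketched with one level of a 2-D Haar transform followed by a coefficient comparison; the tolerance and the use of only the low/low sub-band are assumptions made here for illustration.

    import numpy as np

    def haar_ll(img: np.ndarray) -> np.ndarray:
        # one Haar level: low-pass pairs of columns, then pairs of rows,
        # each averaging step downsampling by two (assumes even dimensions)
        a = (img[:, 0::2] + img[:, 1::2]) / 2.0
        return (a[0::2, :] + a[1::2, :]) / 2.0

    def images_identical(img1, img2, tol=1e-3):
        c1, c2 = haar_ll(img1.astype(float)), haar_ll(img2.astype(float))
        if c1.shape != c2.shape:
            return False
        return np.mean(np.abs(c1 - c2)) < tol  # matching coefficients -> duplicate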
3.8 Key Distribution Center


We propose a privacy-preserving decentralized Key Distribution Center (KDC) scheme to
protect the users' privacy. In our scheme, all of a user's secret keys are tied to his identifier to resist
collusion attacks, while the multiple authorities cannot learn anything about the user's identifier.
Notably, each authority can join or leave the system freely without the need to reinitialize the system,
and there is no central authority. Furthermore, any access structure can be expressed in our scheme
using the access-tree technique. Finally, our scheme relies on a standard complexity assumption rather
than non-standard complexity assumptions.

4. CONCLUSION
In this work, the notion of authorized data deduplication was proposed to protect data
security by including differential privileges of users in the duplicate check. We also presented several
new deduplication constructions supporting authorized duplicate checks in a hybrid cloud architecture,
in which the duplicate-check tokens of files are generated by the private cloud server with private
keys. We showed that our authorized duplicate check scheme incurs minimal overhead compared to
convergent encryption and network transfer.
5. ACKNOWLEDGMENTS
Our thanks to the almighty God, and to our experts and friends who have contributed towards
the development of this paper with their innovative ideas.
6. REFERENCES
[1] Amazon, "Case Studies," https://aws.amazon.com/solutions/casestudies/#backup.
[2] J. Gantz and D. Reinsel, "The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east," http://www.emc.com/collateral/analyst-reports/idcthe-digital-universein-2020.pdf, Dec. 2012.
[3] M. O. Rabin, "Fingerprinting by random polynomials," Center for Research in Computing Technology, Harvard University, Tech. Rep. TR-CSE-03-01, 1981.
[4] J. R. Douceur, A. Adya, W. J. Bolosky, D. Simon, and M. Theimer, "Reclaiming space from duplicate files in a serverless distributed file system," in ICDCS, 2002, pp. 617-624.
[5] M. Bellare, S. Keelveedhi, and T. Ristenpart, "DupLESS: Server-aided encryption for deduplicated storage," in USENIX Security Symposium, 2013.
[6] M. Bellare, S. Keelveedhi, and T. Ristenpart, "Message-locked encryption and secure deduplication," in EUROCRYPT, 2013, pp. 296-312.
[7] G. R. Blakley and C. Meadows, "Security of ramp schemes," in Advances in Cryptology: Proceedings of CRYPTO '84, ser. Lecture Notes in Computer Science, vol. 196, G. R. Blakley and D. Chaum, Eds. Springer-Verlag Berlin/Heidelberg, 1985, pp. 242-268.
[8] A. D. Santis and B. Masucci, "Multiple ramp schemes," IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1720-1728, Jul. 1999.
[9] M. O. Rabin, "Efficient dispersal of information for security, load balancing, and fault tolerance," Journal of the ACM, vol. 36, no. 2, pp. 335-348, Apr. 1989.
[10] A. Shamir, "How to share a secret," Commun. ACM, vol. 22, no. 11, pp. 612-613, 1979.
[11] J. Li, X. Chen, M. Li, J. Li, P. Lee, and W. Lou, "Secure deduplication with efficient and reliable convergent key management," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 6, pp. 1615-1625, 2014.
[12] S. Halevi, D. Harnik, B. Pinkas, and A. Shulman-Peleg, "Proofs of ownership in remote storage systems," in ACM Conference on Computer and Communications Security, Y. Chen, G. Danezis, and V. Shmatikov, Eds. ACM, 2011, pp. 491-500.
[13] J. S. Plank, S. Simmerman, and C. D. Schuman, "Jerasure: A library in C/C++ facilitating erasure coding for storage applications - Version 1.2," University of Tennessee, Tech. Rep. CS-08-627, August 2008.
[14] J. S. Plank and L. Xu, "Optimizing Cauchy Reed-Solomon codes for fault-tolerant network storage applications," in NCA-06: 5th IEEE International Symposium on Network Computing Applications, Cambridge, MA, July 2006.
[15] C. Liu, Y. Gu, L. Sun, B. Yan, and D. Wang, "R-ADMAD: High reliability provision for large-scale de-duplication archival storage systems," in Proceedings of the 23rd International Conference on Supercomputing, pp. 370-379.
[16] M. Li, C. Qin, P. P. C. Lee, and J. Li, "Convergent dispersal: Toward storage-efficient security in a cloud-of-clouds," in The 6th USENIX Workshop on Hot Topics in Storage and File Systems, 2014.
[17] P. Anderson and L. Zhang, "Fast and secure laptop backups with encrypted de-duplication," in Proc. of USENIX LISA, 2010.
[18] Z. Wilcox-O'Hearn and B. Warner, "Tahoe: The least-authority filesystem," in Proc. of ACM StorageSS, 2008.
[19] A. Rahumed, H. C. H. Chen, Y. Tang, P. P. C. Lee, and J. C. S. Lui, "A secure cloud backup system with assured deletion and version control," in 3rd International Workshop on Security in Cloud Computing, 2011.
[20] M. W. Storer, K. Greenan, D. D. E. Long, and E. L. Miller, "Secure data deduplication," in Proc. of StorageSS, 2008.
[21] J. Stanek, A. Sorniotti, E. Androulaki, and L. Kencl, "A secure data deduplication scheme for cloud storage," Technical Report, 2013.
[22] D. Harnik, B. Pinkas, and A. Shulman-Peleg, "Side channels in cloud services: Deduplication in cloud storage," IEEE Security & Privacy, vol. 8, no. 6, pp. 40-47, 2010.
[23] R. D. Pietro and A. Sorniotti, "Boosting efficiency and security in proof of ownership for deduplication," in ACM Symposium on Information, Computer and Communications Security, H. Y. Youm and Y. Won, Eds. ACM, 2012, pp. 81-82.
[24] J. Xu, E.-C. Chang, and J. Zhou, "Weak leakage-resilient client-side deduplication of encrypted data in cloud storage," in ASIACCS, 2013, pp. 195-206.
[25] W. K. Ng, Y. Wen, and H. Zhu, "Private data deduplication protocols in cloud storage," in Proceedings of the 27th Annual ACM Symposium on Applied Computing, S. Ossowski and P. Lecca, Eds. ACM, 2012, pp. 441-446.
[26] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable data possession at untrusted stores," in Proceedings of the 14th ACM Conference on Computer and Communications Security, ser. CCS '07. New York, NY, USA: ACM, 2007, pp. 598-609. [Online]. Available: http://doi.acm.org/10.1145/1315245.1315318
[27] A. Juels and B. S. Kaliski, Jr., "PORs: Proofs of retrievability for large files," in Proceedings of the 14th ACM Conference on Computer and Communications Security, ser. CCS '07. New York, NY, USA: ACM, 2007, pp. 584-597. [Online]. Available: http://doi.acm.org/10.1145/1315245.1315317
[28] H. Shacham and B. Waters, "Compact proofs of retrievability," in ASIACRYPT, 2008, pp. 90-107.


Passive Safety System For Modern Electronic Gadgets Using Airbag Technology
C.V. Gayathri Monika, R.S. Keshika, G. Ramasubramaniam, S. Sakthivel
Department of Electronics and Communication
PSG College of Technology
Coimbatore, Tamilnadu, India

ABSTRACT - Most of us have felt the pain of dropping a mobile phone or tablet, only to find
the screen shattered beyond recognition or use. The pain is heightened further when we receive the
huge repair bill to fix or replace the screen of our smartphone. But there are those happy moments
when we retrieve our dropped mobile from the floor to find that the screen has remained intact. The
main objective of our project is to prevent the front screen of a phone from breaking when it slips
from our hand or falls down accidentally. This can be achieved effectively by designing a case that
contains an airbag which prevents the gadget's front screen from touching the ground. The free fall
is sensed by an inbuilt accelerometer which measures the acceleration due to gravity in all three
directions and predicts the fall in advance. Whenever free fall is detected, the case protects the front
panel of the mobile phone by popping small air balloons at all four corners of the gadget.
KEYWORDS: Free fall, accelerometer, airbag.
1. INTRODUCTION
In automobiles, a central airbag control unit monitors a number of related sensors within the vehicle,
including accelerometers, wheel speed sensors, etc. In modern cars, the crash is detected with the help
of an accelerometer by measuring the change of speed [1]. If the deceleration is great enough, the
accelerometer triggers the airbag circuit; normal braking does not generate enough force to do this. A
similar concept is used to detect the crash in modern electronic gadgets such as mobile phones, tablets
and iPods, the only difference being that the fall must be detected prior to the crash. The deployment
mechanism of an airbag in an automobile involves the ignition of a charge that releases a harmless
gas [2] such as nitrogen or argon, packed behind the steering wheel, whereas in the mobile airbag,
compressed air is pushed through small tubes, blowing up small pop-up bags at all four corners of the
mobile case. Table 1 summarizes the main differences between the car and mobile airbags.
Airbag in automobiles | Airbag in mobile phones
1. Deployment takes place due to chemical reactions in the automobile system. | 1. Deployment is done with the help of a mechanical system.
2. The airbag is deployed after the accident. | 2. The airbag is deployed just before the impact.
3. Cost is high. | 3. Cost is low.
4. A crash is to be detected. | 4. Free fall should be detected.
Table 1. Differences between the car and mobile airbags

New glass such as Corning Gorilla Glass has been introduced to avoid the breakage of the front panel.
Tests show that it can withstand around 100,000 pounds of pressure per square inch, and it can
withstand, without shattering or cracking, a 535 g ball dropped on it from 1.8 m above. The screen
technology has made its way into Samsung smartphones from the Galaxy S3 onwards [3]. How the
phone or tablet falls to the ground is the key to the shattering question. If it falls face down it might
escape without too much damage, because the stress of impact is spread across the entire surface; it
would still almost certainly suffer damage that cannot be seen with the naked eye. But if it is dropped
onto one of its corners, the uneven surface means the point of contact between the glass and the ground
is small and focused, directing the entire force of the impact onto one small point [4], and this is where
months or years of little bangs and bumps become relevant. With every drop the invisible cracks grow,
until a major fall causes the screen to shatter. The following table gives the cost-based justification for
the project, considering three different mobile brands: the Apple iPhone 6s [5], the Samsung Galaxy
S6 Edge [6] and the Google Nexus 5 [7].

2. BLOCK DIAGRAM

The block diagram shows how the signal from the accelerometer drives the protecting device.
The components of the device include a motor, a compressor and four airbags at the corners. The
signal from the accelerometer drives the motor, which in turn forces the movement of a piston that
pumps air into the four small airbag structures through small tubes. Once the accelerometer senses
free fall, it sends its acceleration-due-to-gravity values to the processor, which issues the interrupt to
the motor based on a looping mechanism: the value is checked in a loop more than three times before
a phase shift is given to the motors. Once the value crosses the threshold, the motor is driven at a step
angle of 3.6 degrees until the piston is pushed upwards. The piston contains compressed air, and
pushing it with force causes the compressed air to blow into all four tubes, deploying the air balloons
at all four corners of the gadget. The reason for keeping the airbags at the corners is that the mass is
concentrated towards the corners; by Newton's second law, force is proportional to mass, so the force
of hitting the ground is greatest at the corners.
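
The looping check described above might be sketched as follows. This is an illustrative Python sketch; read_accel and fire_motor are hypothetical callbacks standing in for the sensor and the stepper driver, and the sample count follows the "more than three times" rule.

    FREEFALL_THRESHOLD = 3.6   # threshold value derived in Section 3 (assumed units)
    CONFIRM_SAMPLES = 4        # the value is checked more than three times

    def monitor(read_accel, fire_motor):
        hits = 0
        while True:
            if abs(read_accel()) < FREEFALL_THRESHOLD:  # near-zero g => free fall
                hits += 1
                if hits >= CONFIRM_SAMPLES:
                    fire_motor()   # step the motor (3.6 degree steps) to deploy
                    return
            else:
                hits = 0           # a single normal reading resets the counter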
3. AXES READINGS USING THE INBUILT ACCELEROMETER

Any object that is acted upon only by the force of gravity is said to be in a state of free fall
[8]. When the accelerometer is inside the mobile phone, there is no need for an additional hardware
accelerometer. The only constraint of the inbuilt accelerometer is that it cannot be used when the
mobile is switched off; viewed more broadly, while the phone is active the inbuilt accelerometer is
preferable, as it minimizes space and human calibration time. Nowadays most smartphones carry an
acceleration sensor. It is based on three mutually perpendicular silicon circuits, each oscillating in one
direction like a ball hanging on a spring whose movement is restricted to one direction [9]. The
measurements of the acceleration sensor can be registered with a smartphone application.

Fig 2. Accelerometer readings from the Android app

Here we briefly present the free Android application "Accelerometer Monitor ver. 1.5.0" used in our
experiments. This application takes 348 kB of SD card memory and can be downloaded from the
Google Play website [10]. The app shows the acceleration components ax, ay and az on the x, y and
z axes at each time step. The resolution of the sensor in the measurement of acceleration is
0.01197 m/s^2 and the average sampling time is 0.02 s. The application also allows saving an output
file from which the data can be retrieved for further analysis; the output with the acceleration data is
collected in an ASCII file, as shown in Figure 4. Probably the simplest experiment we can perform
with the mobile acceleration sensor is the study of a body falling in the gravitational field of the Earth.
This experiment was treated in reference [11]: the authors suspended a smartphone from a string, and
after cutting the string, the smartphone fell freely for a period of time until reaching a soft surface
which stopped its motion. With the measured fall time, the initial time and the constant acceleration
due to gravity during free fall, the height at which to deploy the airbag can be evaluated as follows,
assuming a distance of 2 meters.

d = (1/2) g t^2, where d is the fall distance;

T = sqrt(2d/g) = 0.64 s, where T is the threshold fall time for d = 2 m;

V = sqrt(2gd) = 6.26 m/s, where V is the impact velocity;

V = g*t, where t here denotes the sensor threshold used by the app; t = 3.6.
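
These numbers can be checked directly from the equations of motion (g = 9.8 m/s^2 and d = 2 m, as assumed above):

    import math

    g, d = 9.8, 2.0
    T = math.sqrt(2 * d / g)   # fall time for a 2 m drop  -> about 0.64 s
    V = math.sqrt(2 * g * d)   # impact velocity           -> about 6.26 m/s
    print(f"T = {T:.3f} s, V = {V:.3f} m/s")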

The following table gives the velocity and time ranges for seven different heights and enables us to
predict the time and place at which to deploy the airbag.

4. AIRBAG APP TO DETECT THE MINIMUM ACCELERATION

The time to deploy the airbag is calculated with the help of the airbag Android application we
developed. The purpose of the application is to estimate the minimum axis value along the z-direction
during the fall of the mobile phone. This minimum value is compared with the value obtained from
Table 3, which justifies that it is sufficient to open the airbag after a fall distance of 2 meters
irrespective of the height, as it is unnecessary to deploy airbags for gadgets falling from heights of less
than 2 meters, where the probability of breakage is very low. The app is built on the Java platform via
the Eclipse software. Eclipse is an integrated development environment (IDE) [12]; it contains a base
workspace and an extensible plug-in system for customizing the environment. Eclipse is written mostly
in Java and its primary use is for developing Java applications, but it may also be used to develop
applications in other programming languages through the use of plug-ins.

Fig 3. Accessing the inbuilt accelerometer

STEPS OF THE PROCESS


- Initially, when the code is loaded onto the Android device, the app presents three buttons: start/reset, stop and print.
- When the device is held in the hand, after pressing the reset button the reading equals 9.8 m/s^2, the acceleration due to gravity.
- When the phone is in free fall, the value decreases in proportion to the distance, and the variation occurs in the z-direction.
- When the print button is pressed, all the accelerometer values are printed in LogCat.
- Using the equations of motion we derived that below a threshold value of 3.6 the mobile is in free fall, so once this value of 3.6 is crossed, an indication such as a beep sound has to be given.
- The above threshold value is found using the laws of motion; the corresponding equations were calculated above.

STEP 1

Initially, once the code is loaded onto the Android device, the screen appears as follows. It has
three buttons: START, STOP and PRINT. The start button starts recording the acceleration-due-to-gravity
values, the stop button stops the recording, and the print button displays all the values recorded
between start and stop in the Eclipse working window.

Fig 4. Initial stage of the app

STEP 2

Once there is movement of the device, the acceleration due to gravity is sensed and displayed as
follows. Until the stop button is pressed, the value keeps changing in the z-direction. The values stay
close to 9.8 m/s^2, the standard value of the acceleration due to gravity.

Fig 5. App displaying the acceleration values

STEP 3

During the motion of the device towards the ground, the different values are sensed and the threshold
is calculated using the laws of motion. The value displayed on the screen is the minimum acceleration
experienced by the device during its free fall. This value allows us to justify the threshold for our
device: any value below 3.5 g in the S-factor denotes free fall.

Fig 6. Minimum acceleration faced during free fall

STEP 4

This step deals with the print button in our app; it is used to display the various acceleration-due-to-gravity
values experienced by the gadget between the start and stop actions.

Fig 7. Displaying all the accelerations faced during free fall


5. CONCLUSION

Free-fall detection has a number of applications, such as baby monitoring, preventing elderly people
from falling and protecting electronic gadgets from breakage. Fall detection is the major challenge
here, since the entire process of deploying an airbag must happen within microseconds. We have
implemented free-fall detection considering only the z-direction readings, but for more accuracy the
value should be obtained by taking the S-factor into account. There are also cases where a gadget falls
on a soft cushion and the airbag still gets deployed, which proves to be unnecessary; these situations
are unavoidable, so we are trying to use replaceable airbags in the case, which proves to be cost
effective. Future developments of the project may include detecting the time and sending an interrupt
via a processor to push the compressed air into the tubes.
6. ACKNOWLEDGEMENTS

The authors would like to thank PSG College of Technology (Tamilnadu, India), the Head of the
Department of Electronics and Communication, and our mentor Dr. D. Sivaraj for their kind support
in teaching and guiding.
7. REFERENCES
[1] Md. Syedul Amin, Mamun Bin Ibne Reaz, Mohammad Arif Sobhan Bhuiyan and Salwa Sheikh Nasir, "Kalman filtered GPS accelerometer based accident detection and location system."
[2] http://www.explainthatstuff.com/airbags.html
[3] http://www.corninggorillaglass.com/en/products-with-gorilla/samsung/samsung-galaxy-siii
[4] http://thetechjournal.com/tag/gorilla-glass
[5] https://www.apple.com/support/iphone/repair/screen-damage/
[6] http://m.gadgets.ndtv.com/motorola-google-nexus-6-2060
[7] http://m.gsmarena.com/samsung_galaxy_s6_edge-7079.php
[8] http://www.physicsclassroom.com/class/1Dkin/u1l5a
[9] J. A. Monsoriu, M. H. Gimenez, E. Ballester, L. M. Sanchez Ruiz, J. C. Castro-Palacio and L. Velazquez-Ahad, "Smartphone acceleration sensors in undergraduate physics experiments."
[10] Google Play, http://play.google.com/store/apps
[11] Skeffington and K. Scully, "Simultaneous tracking of multiple points using a Wiimote," The Physics Teacher, Vol. 50, pp. 482-483, 2012.
[12] https://en.wikipedia.org/wiki/Eclipse_(software)


Control Analysis of STATCOM Under Power System Faults
1Dr. K. Sundararaju, 2T. Rajesh
1Head of the Department, Dept. of Electrical and Electronics Engineering,
2Dept. of Electrical and Electronics Engineering,
M.Kumarasamy College of Engineering, Karur, Tamil Nadu, India.

ABSTRACT - Voltage source converter (VSC) based static synchronous compensators are used in
transmission and distribution lines for voltage regulation and reactive power compensation.
Nowadays, angle controlled STATCOMs have been deployed in utilities to improve the output
voltage waveform quality with lower losses compared to PWM STATCOMs. Even though the
angle controlled STATCOM has many advantages, its operation suffers when unbalanced and
fault conditions occur in the transmission and distribution lines. This paper presents a Dual Angle
Control strategy for the STATCOM to overcome the drawbacks of the conventional angle
controlled and PWM controlled STATCOMs. The approach does not completely change the design
of the conventional angle controlled STATCOM; instead, it only adds AC oscillations (delta_ac) to
the output of the conventional angle controller (delta_dc) to make it dual angle controlled. Hence
the STATCOM is called a dual angle controlled (DAC) STATCOM.
Index terms - Dual angle control (DAC), hysteresis controller, STATCOM.
I. INTRODUCTION

Many devices are used in power systems for voltage regulation, reactive power
compensation and power factor correction [1]. The voltage source converter (VSC) based STATCOM is
one of the most widely used devices in large transmission and distribution systems for voltage regulation
and reactive power compensation. Nowadays, angle controlled STATCOMs have been deployed in
utilities to improve the output voltage waveform quality with lower losses compared to PWM STATCOMs.
The first commercially implemented installation was the 100 MVAr STATCOM at the TVA Sullivan
substation, followed by the New York Power Authority installation at the Marcy substation in New York
state in [13] and [16]. The 150 MVA STATCOMs at the Laredo and Brownsville substations in Texas, the
160 MVA STATCOM at the Inez substation in Eastern Kentucky, the 43 MVA PG&E Santa Cruz
STATCOM and the 40 MVA KEPCO (Korea Electric Power Corporation) STATCOM at the Kangjin
substation in South Korea are a few examples of commercially implemented and operating angle
controlled STATCOMs worldwide.

Even though the angle controlled STATCOM has many advantages compared to other STATCOMs,
its operation suffers from overcurrent and possible saturation of the interfacing transformers caused
by the negative sequence arising during unbalanced and fault conditions in the transmission and
distribution lines [4]. This paper presents a Dual Angle Control strategy for the STATCOM to overcome
the drawbacks of the conventional angle controlled and PWM controlled STATCOMs [2]. The approach
does not completely change the design of the conventional angle controlled STATCOM; instead, it only
adds AC oscillations (delta_ac) to the output of the conventional angle controller (delta_dc) to make it
dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM. The
angle controlled STATCOM does not have the same degrees of freedom as the PWM STATCOM, but it
is widely used because its output voltage waveform quality is higher than that of the PWM STATCOM.

This paper presents a new control structure for the high power angle controlled STATCOM. Here, the
only control input to the angle controlled STATCOM is the phase difference delta between the VSC and
AC bus instantaneous voltage vectors. In the proposed control structure, delta is split into two parts,
delta_dc and delta_ac. The DC part delta_dc, which is the final output of the conventional angle
controller, is in charge of controlling the positive sequence VSC output voltage. The oscillating part
delta_ac controls the dc-link voltage oscillations. The proposed STATCOM has the capability to operate
under fault conditions and is able to ride through the faults and unbalances that occur in the transmission
and distribution lines.

In this paper, we have implemented a new control structure in the STATCOM which has the ability to
handle disturbances such as sag and swell and other types of faults which appear in power systems. The
analysis of the proposed control structure is carried out through MATLAB simulations, and the results
are satisfactory.
II. CONVENTIONAL STATCOMS UNDER NORMAL AND SYSTEM FAULT CONDITIONS

Voltage source vectors which are basic one for the building of the FACTS devices can be
divided into two types based on their control methods [17]. The first one is the PWM or vector controlled
STATCOM and another is the angle controlled STATCOM. In the PWM based STATCOM, by controlling
the amplitude of firing pulses given to the voltage source convertors the final output voltage of the convertor
can be increased or decreased. These type of inverters will be uneconomical, because the switching loss
associated with the VSC are very high.

Fig.1. Control structure of vector controlled STATCOM

Then the second type is the angle controlled STATCOM .Here by changing the output voltage angle of the
STATCOM for a particular time on compared to that of line voltage angle, the inverter can be able provide
both inductive and capacitive reactive power.

Fig.2. Control structure of angle controlled STATCOM

By controlling the towards the positive and negative direction and varying the dc link voltage ,we can
able increase or decrease the final output voltage of voltage source convertors(VSC) in [2]. Here the ratio
between the dc and ac voltage in STATCOM should be kept constant. If the final output voltage of the
STATCOM is greater than the line voltage it will absorb reactive power from the line. But, if the output
voltage of the STATCOM is lesser than the line voltage ,then it will inject reactive power into the line.
Throughout this paper the performance of the proposed control structure will be shown by MATLAB
simulations.
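
The injection/absorption rule follows from the reactive power exchanged across the tie reactance. A small sketch (per-unit quantities, with the converter and line voltages assumed in phase) makes the sign convention explicit; the numeric values are illustrative only.

    def statcom_q(E: float, V: float, X: float) -> float:
        # reactive power delivered by the converter (voltage E) to the line
        # (voltage V) through the tie reactance X, the two voltages in phase
        return V * (E - V) / X

    for E in (1.05, 1.00, 0.95):
        q = statcom_q(E, V=1.0, X=0.1)
        mode = "injects" if q > 0 else ("absorbs" if q < 0 else "floats")
        print(f"E = {E:.2f} pu -> Q = {q:+.2f} pu ({mode} reactive power)")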
III. ANGLE CONTROLLED STATCOM UNDER UNBALANCED CONDITIONS

VSC is the basic building block of the all conventional and angle controlled STATCOMs.
Therefore, study about this method to improve the performance of VSC under unbalanced and fault
conditions is important and practical. There are many methods are proposed in the literature about improving
the performance of the voltage source convertors. But all cannot be applicable to the angle controlled
STATCOM with only one control input angle().

Only a few methods have been proposed in the literature for the angle controlled STATCOM under
AC fault conditions. The paper [8] calculates the dc-link capacitance that minimizes the negative
sequence current flow on the STATCOM tie line. It shows that, by choosing a particular value for the
dc-link capacitor, the tie line with its inductor becomes an open circuit for the negative sequence
current under positive sequence angle control. In another approach, a hysteresis controller is used in
addition to the conventional angle controller: the VSCs detect and apply hysteresis switching to
control their phase currents. Each VSC has its own overcurrent limit, which must not be exceeded in
normal and fault conditions.

Fig. 3. Equivalent circuit of a VSC connected to an AC system

This scheme protects the switches and limits the STATCOM current under fault conditions. However,
dc-link voltage oscillations occur in this method and can cause the STATCOM to trip, and the injection
of poor quality voltage and current waveforms into a faulted power system produces undesirable stress
on the power system components [7].
IV. ANALYSIS OF STATCOM UNDER UNBALANCED OPERATING CONDITIONS

In this method a set of unbalanced three phase phasor is split into two symmetrical positive
and negative sequences and zero sequence component. The line currents in the three phases of system is
represented by the equations 1,2,3 and 4 mentioned below,

In the angle controlled STATCOM the only control input angle should be identically applied to all three
phases of the invertor. Here the zero sequence components can be neglected because there is no path for the
neutral current flow in the three phase line. The switching function for an angle control STATCOM should
always be symmetric. The switching function for the three phases a,b and c are represented in the equation
5,6,7 mentioned below,

where delta is the angle by which the inverter voltage leads/lags the line voltage vector, and K is the
factor of the inverter relating the dc-side voltage to the phase-to-neutral voltage at the ac-side
terminals. The inverter terminal fundamental voltages are given by equations (8)-(10) below.

Basically, the unbalanced system can be analysed by postulating a set of negative sequence
voltage sources connected in series with the STATCOM tie line. The main idea of the Dual Angle
Control strategy is to generate a fundamental negative sequence voltage vector at the VSC output
terminals to attenuate the effect of the negative sequence bus voltage. The generated negative sequence
voltage minimizes the negative sequence current produced in the STATCOM under fault conditions. A
third harmonic voltage is produced at the VSC output terminals because of the interaction between the
second harmonic oscillations of the dc-link voltage and the switching function. The third harmonic
voltage is positive sequence, and its phases a, b and c are 120 degrees apart. Basically, the negative
sequence current produced under unbalanced ac system conditions generates second harmonic
oscillations on the dc-link voltage, which are reflected as a third harmonic voltage at the VSC output
terminals together with a fundamental negative sequence voltage. As with the fundamental negative
sequence voltage, the dc-link voltage oscillations determine the amplitude of the second harmonic
voltage [3]. Hence, by controlling the second harmonic oscillations on the dc-link voltage, the negative
sequence current can be reduced. A decreased negative sequence current reduces the dc-link voltage
oscillations, and reducing the dc-link second harmonic reduces the third harmonic voltage and current
on the STATCOM tie line [12]. Here, the control analysis of the STATCOM under fault conditions is
done in MATLAB.
V. PROPOSED CONTROL STRUCTURE DEVELOPMENT

As discussed in the previous section ,the STATCOM voltage and current during unbalanced
conditions are calculated by connecting a set of negative sequence voltage in series with STATCOM tie
line are shown in Fig.

Fig.4.Equivalent circuit of STATCOM with series negative sequence voltage source

Assume second harmonic oscillation at dc link voltage as



Then the reflected negative sequence voltage at phases a,b and c STATCOM terminal are
calculated by the equation 11,12 and 13 mentioned below,


The derivative of STATCOM tie line negative sequence currents with respect to time are calculated
by the equation 14,15 and 16 mentioned below,


The transformation from abc to the negative synchronous frame is defined as follows.

In the proposed structure, the angle delta is divided into two parts, delta_dc and delta_ac. The angle
delta_dc is the output of the positive sequence controller, and delta_ac is the output of the negative
sequence controller. The angle delta_ac consists of second harmonic oscillations which generate a
negative sequence voltage vector at the VSC output terminals to attenuate the effect of the negative
sequence bus voltage under fault conditions. The delta_ac component must be properly filtered,
otherwise it leads to higher order harmonics on the ac side.

Fig. 5. Three-bus AC system model
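
The composition of the control signal can be sketched as below. This is an illustrative Python sketch only: the amplitude and phase of the second-harmonic term are whatever the negative sequence controller outputs, and the 50 Hz grid frequency is an assumption.

    import math

    def delta(t, delta_dc, a2, phi2, f=50.0):
        # delta(t) = delta_dc + a2*sin(2*w*t + phi2): the dc part from the positive
        # sequence controller plus a second-harmonic (2*w) oscillation
        w = 2.0 * math.pi * f
        return delta_dc + a2 * math.sin(2.0 * w * t + phi2)

    # example: a 0.5 degree, 100 Hz ripple superimposed on a 2 degree dc angle
    samples = [delta(n / 1000.0, math.radians(2.0), math.radians(0.5), 0.0)
               for n in range(20)]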

VI. EXPERIMENTAL RESULTS

Fig.6.Voltage waveform of the grid (without STATCOM)


Here the voltage is suddenly decreasing in the particular time interval due to sudden change in the
load value. when the load connected to the system does not remain constant, then the current and voltage
of the line will not remain constant. During fault occurance, the current and voltage of the grid will not
remain constant, so the STATCOM can be used to maintain the voltage .Because voltage is the important
protection parameter, it has capability to damage insulations of the transmission and protection device.

Fig. 7. Current waveform of the grid (without STATCOM)

Here, the reduction in the amplitude of the voltage is accompanied by a change in the line current
because of the sudden change in the load value, owing to the inverse relationship between voltage
and current in normal power systems. The sudden increase of load is achieved by connecting a load
to the grid by means of a switch. By giving a time sequence to the switch, we can connect and
disconnect the load automatically for a particular time interval.

Fig.8.Voltage waveform of the grid (with STATCOM)

Fig.9.Current waveform of the grid (with STATCOM)

Here, the voltage is maintained at a constant value due to the reactive power compensation provided by the STATCOM. The reactive power compensation is performed by the STATCOM by supplying current at a leading angle to the line voltage. The STATCOM is connected to the grid by means of a switch.
VII. CONCLUSION
This paper proposed a new control structure to improve the performance of the conventional angle-controlled STATCOM under unbalanced and fault conditions occurring on the transmission line. This method does not completely redesign the structure of the STATCOM; instead it adds only ac oscillations to the output of the conventional angle controller. The ac oscillations generate a negative sequence voltage at the VSC output terminals to attenuate the effect of the negative sequence bus voltage generated at the line terminals during fault conditions.
REFERENCES
1. C. Schauder and H. Mehta, "Vector analysis and control of advanced static VAR compensators," Proc. Inst. Elect. Eng. C, vol. 140, pp. 299-306, Jul. 1993.
2. H. Song and K. Nam, "Dual current control scheme for PWM converter under unbalanced input voltage conditions," IEEE Trans. Ind. Electron., vol. 46, no. 5, pp. 953-959, Oct. 1999.
3. A. Yazdani and R. Iravani, "A unified dynamic model and control for the voltage-sourced converter under unbalanced grid conditions," IEEE Trans. Power Del., vol. 21, no. 6, pp. 1620-1629, Jul. 2006.
4. Z. Xi and S. Bhattacharya, "STATCOM operation strategy with saturable transformer under three-phase power system fault," in Proc. IEEE Ind. Electron. Soc. Conf., 2007, pp. 1720-1725.
5. M. Guan and Z. Xu, "Modeling and control of a modular multilevel converter-based HVDC system under unbalanced grid conditions," IEEE Trans. Power Electron., vol. 27, no. 12, pp. 4858-4867, Dec. 2012.
6. Z. Yao, P. Kesimpar, V. Donescu, N. Uchevin, and V. Rajagopalan, "Nonlinear control for STATCOM based on differential algebra," in Proc. 29th Annu. IEEE Power Electronics Conf., Fukuoka, 1998, vol. 1, pp. 329-334.
7. F. Liu, S. Mei, Q. Lu, Y. Ni, F. F. Wu, and A. Yokoyama, "The nonlinear internal control of STATCOM: theory and application," International Journal of Electrical Power & Energy Systems, vol. 25, pp. 421-430, 2003.
8. M. E. Adzic, S. U. Grabic, and V. A. Katic, "Analysis and control design of STATCOM in distribution network voltage control mode," in Proc. Sixth International Symposium Nikola Tesla, Belgrade, 2006, vol. 1, pp. 1-4.
9. N. G. Hingorani and L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems. Piscataway, NJ: IEEE Press, 1999.
10. P. Rao et al., "STATCOM control for power system voltage control applications," IEEE Trans. Power Del., vol. 15, no. 4, pp. 1311-1317, Oct. 2000.
11. D. Soto and R. Pena, "Nonlinear control strategies for cascaded multilevel STATCOMs," IEEE Trans. Power Del., vol. 19, no. 4, pp. 1919-1927, Oct. 2004.
12. T. Aziz, M. J. Hossain, T. K. Saha, and N. Mithulananthan, "VAR planning with tuning of STATCOM in a DG integrated industrial system," IEEE Trans. Power Del., vol. 28, no. 2, Apr. 2013.
13. S. Bhattacharya, B. Fardenesh, and B. Sherpling, "Convertible static compensator: voltage source converter based FACTS application in the New York 345 kV transmission system," presented at the 5th Int. Power Electron. Conf., Niigata, Japan, Apr. 2005.
14. P. N. Enjeti and S. A. Choudhury, "A new control strategy to improve the performance of a PWM AC to DC converter under unbalanced operating conditions," IEEE Trans. Power Electron., vol. 8, no. 4, pp. 493-500, Oct. 1993.
15. P. W. Lehn and R. Iravani, "Experimental evaluation of STATCOM closed loop dynamics," IEEE Trans. Power Del., vol. 13, no. 4, pp. 1378-1384, Oct. 1998.
16. J. Sun, L. Hopkins, B. Sherpling, B. Fardanesh, M. Graham, M. Parisi, S. MacDonald, S. Bhattachary, S. Berkowitz, and A. Edris, "Operating characteristics of the convertible static compensator on the 345 kV network," in Proc. IEEE PES Power Syst. Conf. Expo., 2004, vol. 2, pp. 73273.
17. K. Sundararaju, A. Nirmal Kumar, S. Jeeva, and A. Nandhakumar, "Performance analysis and location identification of STATCOM on IEEE-14 bus using power flow analysis," Journal of Theoretical & Applied Information Technology, vol. 60, no. 2, pp. 365-371, Feb. 2014.
18. K. Sundararaju and A. Nirmal Kumar, "Cascaded control of multilevel converter based STATCOM for power system compensation of load variation," International Journal of Computer Applications (0975-8887), vol. 40, no. 5, Feb. 2012.
19. K. Sundararaju and A. Nirmal Kumar, "Performance analysis of STATCOM in real time power system," in Proc. International Conference on Advances in Electrical Engineering (ICAEE), 2014.
Performance Analysis of Four Switch Three Phase SEPIC-Based Inverter
Prabu B

PG Scholar, Dept. of EEE, K.S.Rangasamy College of Engineering

Murugan M
Associate Professor, Dept. of EEE, K.S.Rangasamy College of Engineering
Abstract: The proposed novel four-switch three-phase (FSTP) inverter is designed to reduce the cost, complexity, weight, and switching losses of the DC-AC conversion system. In the conventional FSTP inverter, the output line voltage cannot exceed half of the input voltage, i.e., it operates at half the DC input voltage. This paper proposes a novel design for the FSTP inverter based on the Single-Ended Primary-Inductance Converter (SEPIC). In the proposed topology, output filters are not required to obtain a pure sinusoidal output voltage. Compared to the conventional FSTP inverter, the proposed FSTP SEPIC inverter raises the voltage utilization factor of the input DC supply: the suggested topology delivers a higher output line voltage, which can be extended up to the full value of the DC input voltage. Integral sliding-mode (ISM) control is used to enhance the dynamics of the proposed topology and to ensure robustness of the system during different operating conditions. A simulation model and its results validate the proposed concept and show the effectiveness of the proposed inverter.
I. INTRODUCTION

The conventional six-switch three-phase (SSTP) voltage source inverter shown in Fig. 1 has found well-known industrial applications in different forms such as lifts, cranes, conveyors, motor drives, renewable energy conversion systems, and active power filters. However, in some low-power applications, reduced-switch-count inverter topologies are considered to alleviate the volume, losses, and cost.

Some research efforts have been directed toward inverter topologies that can achieve the aforesaid goal. The obtained results show that it is possible to implement a three-phase inverter with only four switches [1]. In the four-switch three-phase (FSTP) inverter, two of the output load phases are supplied from the two inverter legs, while the third load phase is connected to the middle point of the DC-link split-capacitor bank. Recently, the FSTP inverter has attracted attention with respect to its performance, control, and applications [2]-[17].

Compared to the conventional SSTP inverter, the FSTP inverter has various benefits: cost reduction and increased reliability due to the reduced number of switches, conduction and switching losses reduced by one-third since one complete leg is omitted, and a reduced number of interface circuits to supply PWM signals to the switches. The FSTP inverter can also be operated under fault-tolerant control to overcome open/short-circuit faults of the SSTP inverter [2], [8], [10].

Fig.1 Conventional FSTP voltage source inverter.


On the other hand, there are some drawbacks of the conventional FSTP inverter which should be taken into consideration. Similar to the traditional SSTP inverter, the FSTP inverter achieves only buck DC-AC conversion, so an additional boost stage is needed between the DC input source and the FSTP inverter; however, this adds major complexity and hardware to the power conversion system and wastes the merits of the reduced switch count. Also, the FSTP inverter topology is not symmetrical: while two load phases are directly connected to the two inverter legs, the third load phase is connected to the center tap of the split DC-link capacitors. This forces the current of the third phase to flow through the DC-link capacitors, hence a fluctuation predictably appears in the two capacitor voltages, which correspondingly distorts the output voltage [18]. Additionally, if the DC-link split capacitors do not have equal values, there is a possibility of over-modulation of the pulse-width modulation process in order to compensate for this difficulty [16].
This paper proposes a novel design of the FSTP inverter topology based on the single-ended primary-inductance DC-DC converter (SEPIC). The SEPIC converter is a fourth-order nonlinear system that is widely used in step-down or step-up DC-DC switching circuits, photovoltaic maximum power point tracking [19], [20], [21], and power factor correction circuits [23] due to its attractive features, namely the non-inverting buck-boost output voltage capability and the low input current ripple content. Based on the above-mentioned advantages, the SEPIC converter has recently been investigated in various topologies in many diversified studies [19], [24].
Although the proposed FSTP SEPIC inverter does not have a voltage boost capability, it can produce an output voltage higher than that of the conventional FSTP voltage source inverter, for two reasons: i) the voltage utilization factor of the input DC supply is increased; ii) the output voltage of the proposed SEPIC inverter is a pure sine wave, therefore the filtering requirements at the output side are reduced. Also, there is no need to insert a dead-band between the same-leg switches, which significantly reduces the output waveform distortion and gain non-linearity.
II. BLOCK DIAGRAM OF THE PROPOSED MODEL

The block diagram representation of the proposed model is shown in Fig. 2. The three-phase load is powered by two SEPIC converters. The PWM signals for the inverter are generated from the hall signals corresponding to the rotor position of the motor. The hall signals are converted into the respective back EMFs, and based upon the back EMF values the corresponding switches are commutated.

The stator winding of the motor is energized by the sinusoidal current waveform obtained from the cascaded H-bridge multilevel inverter. As the number of output voltage levels of the inverter increases, a purer sinusoidal current waveform is obtained.

Fig 2 Block Diagram of the Proposed Model

In order to improve the power quality at the AC mains, a bridgeless buck-boost converter is used as a front-end rectifier. The single-phase AC supply is given to the bridgeless buck-boost converter through an LC filter. The switching pulse for the rectifier is generated with the help of the error in the DC link voltage. The reference DC link voltage is created from the reference speed value, and the speed of the motor is controlled by varying the DC link voltage.
III. PRINCIPLE OF OPERATION OF THE PROPOSED FSTP SEPIC INVERTER

Two SEPIC converters are present in the proposed FSTP SEPIC inverter, and together they achieve DC-AC conversion: two phases of the three-phase load are connected to the outputs of the two DC-DC SEPIC converters, which are sinusoidally modulated [24], while the third phase is directly connected to the input DC source. Both SEPIC DC-DC converters produce a DC-biased sine wave output, so that each converter produces a unipolar voltage. The sinusoidal modulations of the two converters are phase shifted so as to generate balanced three-phase load voltages, and the DC bias is exactly equal to the input DC voltage. Since the DC input supply and the load are connected differentially across the two converters, a DC bias appears at each end of the load with respect to ground, but the differential DC voltage across the load is zero and a bipolar voltage is generated across the load. This requires the DC-DC SEPIC converters to be current bi-directional. The bi-directional SEPIC DC-DC converter is shown in Fig. 4, while the configuration of the proposed FSTP SEPIC DC-AC inverter is shown in Fig. 5.

Fig. 3 A basic approach to achieve DC-AC conversion with four switches using two SEPIC DC-DC
converters (a) reference output voltage of the first converter, (b) reference output voltage of the second
converter.

Fig. 4 Bi-directional SEPIC converter



Fig. 5 Proposed FSTP SEPIC inverter

As shown in Fig. 4, the bi-directional SEPIC converter includes the DC input voltage Vdc, input inductor L1, two complementary power switches S1 and S1', transfer capacitor C1, output inductor L2, and output capacitor C2 feeding a load resistance R0. The core of SEPIC operation is charging the inductors L1 and L2 during the ON state of the switching period, taking energy respectively from the input source and from the transfer capacitor C1, and discharging them simultaneously into the load through the switch S1' during the OFF state of the switching period. Depending on the duty cycle, the output voltage of the SEPIC DC-DC converter may be less than or greater than the input voltage. The output and input voltage relation is given by the following equation.
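The equation did not survive extraction; for an ideal SEPIC in continuous conduction mode the standard relation, matching the variables defined in the next sentence, is

$$\frac{V_0}{V_{in}} = \frac{D}{1-D}.$$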

where D is the duty cycle, while V0 and Vin are the output and input voltages of the converter, respectively. The sinusoidal modulation of each SEPIC converter is defined by its reference voltage with respect to ground, which is given by
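The reference expressions themselves are missing from the extracted text. One consistent choice (an assumption for illustration: phase A tied directly to the DC source, with a 60° shift between the two converter references so that the resulting line voltages come out balanced and 120° apart) is

$$v_B^{*}(t) = V_{dc} + V_{mL\text{-}L}\sin(\omega t), \qquad v_C^{*}(t) = V_{dc} + V_{mL\text{-}L}\sin(\omega t + 60^{\circ}).$$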

where ω is the desired radian frequency, while VmL-L is the peak of the desired line-to-line output voltage. Thus, based on Kirchhoff's voltage law in Fig. 5, the output line voltages across the load are given by:

The FSTP SEPIC inverter can give an output line voltage up to a value that equals the input source voltage Vdc, as indicated by equation (2). To avoid operating at zero duty cycle, it is recommended to define VmL-L slightly lower than the value of the input DC voltage (i.e., the minimum duty cycle is selected to be slightly higher than zero).
Accurate selection of the passive elements of the SEPIC converter is necessary for successful DC-AC conversion and requires knowledge of the instantaneous capacitor voltages and inductor currents. The voltage across the output capacitors is given by equation (2). Based on the basic concept of the DC-DC SEPIC converter, the average voltage across the coupling capacitor is equal to the input DC voltage, while the average current through the output inductor is equal to the output load current, as indicated in equations (4) and (5).

where Im is the peak value of the load current, and φ is the phase angle of the load impedance ZL.
The input inductor current for both SEPIC converters can be obtained by applying the energy balance rule to each SEPIC converter. Assuming ideal converters, the input inductor currents for both converters are given by,

Equation (7) shows that the average values of both input inductor currents are equal only for a purely resistive load (unity power factor); in this case, the same amount of power is transferred to the load side by both SEPIC converters. Otherwise, the average currents will be unequal (according to equation (7)), i.e., the SEPIC converters will transfer different amounts of power to the load side.
In the proposed inverter topology, the DC input current iDC(t) is equal to the summation of the load current drawn by phase A, iA(t), and the input inductor currents of both SEPIC converters, iL1B(t) and iL1C(t), as follows.

where iA(t) is the load current of phase A as described in equation (6), which is drawn directly from the DC input source. Substituting equation (6) into (9), the DC supply current can be given in the following form:

Equation (9) shows that the DC supply current drawn by the proposed inverter topology is constant. For a line-to-line voltage peak of 86.66% of the DC input voltage, the normalized load current drawn by phase A, iA(t)/Im, the normalized input inductor current of each SEPIC converter, iL1B(t)/Im and iL1C(t)/Im, and the normalized DC input current, iDC(t)/Im, are evaluated for different load power factors. The input currents of both SEPIC converters are symmetrical at unity power factor, with the same average value. At lagging/leading power factors, the input currents of both SEPIC converters have different waveforms with unequal average values.
To evaluate the load deviation between the two active legs of the proposed SEPIC inverter, the RMS values of the input inductor currents of both SEPIC converters are considered at different load power factors (for a unity RMS value of the output load current, Im/√2). It can be noted that the maximum current drawn by converter B (iL1B) occurs at a power factor of 0.866, while the maximum current drawn by converter C (iL1C) occurs at unity power factor. The maximum deviation between the input currents of the two active legs is 13.87%, and occurs at a power factor of 0.7071. Considering the reference loading condition as unity power factor when selecting the IGBT current rating, at a lagging power factor of 0.866 the deviation between the input current of converter B and the reference unity-power-factor current is 3.81%. Although this deviation could be considered a small ratio that does not imply de-rating of the transistors, it is preferable to consider it in the selection of the IGBT current rating.

Fig. 6. SEPIC equivalent circuit for (a) switch ON and (b) switch OFF.

IV. CONTROL STRATEGY


A robust control strategy is required to drive the proposed FSTP SEPIC inverter. This is due to the fact that the input DC voltage is equal to the voltage of one of the three output phases with respect to the common point. Thus, any deviation of the output voltage of the two SEPIC DC-DC converters from the desired DC-biased sine-wave reference leads to a significant unbalance in the three-phase output line voltages.
A. Sliding Mode Control
Sliding-mode control (SMC) is a non-linear control theory which extends the properties of hysteresis control to multivariable environments. It is able to constrain the system state to follow trajectories which lie on a suitable surface in the state space (the sliding surface) [25]-[28]. The main advantages of SMC are its fast dynamic response and the guarantee of stability and robustness for large variations of system parameters and against perturbations. Additionally, given its flexibility in terms of synthesis, SMC is relatively easy to apply when compared to other types of non-linear control. However, its application to power converters should be considered for each converter separately. As a control method, SMC has been applied to basic DC-DC converters as well as complex ones. Although most authors discuss the generalization of their developed methods to other high-order converters, this does not apply to all converters, because a difference in circuit topology totally changes the system's performance even if it is of the same order.
B. Sliding Surface
While the output voltage VC2 of each SEPIC converter is the final control target, it would be impossible for the closed-loop controlled system to reach a stable motion on the sliding surface if VC2 were selected as the only direct control target; thus other variables should be chosen as well. At the same time, it is proposed to keep the number of state variables in the sliding surface as low as possible. To avoid a large number of tuning gains, a surface containing the output voltage in addition to the input current
could be chosen as given by (10).
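The surface expression is missing in the extracted text; consistent with the description that follows, equation (10) presumably has the form

$$S = a_1 e_1 + a_2 e_2.$$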
where a1 and a2 are gain coefficients, while e1 and e2 are the feedback errors of the state variables iL1 and VC2, respectively, given by (11).
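Consistent with the state variables named in the text, the feedback errors of (11) are presumably

$$e_1 = i_{L1,ref} - i_{L1}, \qquad e_2 = V_{C2,ref} - V_{C2}.$$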

The reason for choosing iL1 instead of iL2 is to allow the sliding surface to directly control the input of each converter in addition to its output, which is more stable than the other choices.
At an extremely high switching frequency, the sliding-mode controller will ensure that both the input inductor current and the output capacitor voltage follow their instantaneous references iL1ref and VC2ref exactly. However, in the case of fixed-frequency or finite-frequency sliding-mode controllers, the control is unsatisfactory, and steady-state errors occur in both the inductor current and the output capacitor voltage. A good method for overcoming these errors is to introduce an additional integral term of the state variables into the sliding surface. Therefore, an integral term of these errors is introduced into the sliding-mode controller as an additional controlled state variable to reduce the steady-state errors. This is commonly known as integral sliding-mode control (ISMC), and the sliding surface is selected as specified by equation (12):
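The extracted text omits the surface; a form consistent with the ISMC description of (12) (a sketch; the original may integrate the errors separately rather than their sum) is

$$S = a_1 e_1 + a_2 e_2 + a_3 e_3, \qquad e_3 = \int (e_1 + e_2)\,dt.$$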
where a1, a2, and a3 represent the desired control parameters, denoted sliding coefficients, while e1, e2, and e3 are expressed as:

The dynamic model is obtained by substituting the SEPIC state-space model under CCM into the time derivative of the sliding surface, where the time derivatives of the three state errors are given by:

Simplifying the above calculation, it becomes

where D is the equivalent control signal, denoting the duty cycle of the converter, which can be formulated using the invariance conditions by setting the time derivative of the surface to zero as follows:

Solving for the equivalent control signal yields:



Fig. 7 Integral sliding-mode controller for SEPIC converter.

C. Double-Integral Sliding-Mode Control

To improve the effectiveness of the integral sliding-mode control, an additional double-integral term of the state-variable errors can be introduced into the sliding surface. This is the so-called double-integral sliding-mode (DISM) controller. Thus, the DISM controller has the following sliding surface:
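The surface of (19) is missing from the extracted text; consistent with the description, it presumably augments the ISM surface with a double-integral term (sketch):

$$S = a_1 e_1 + a_2 e_2 + a_3 \int (e_1 + e_2)\,dt + a_4 \iint (e_1 + e_2)\,dt\,dt.$$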

Substituting the SEPIC state-space models under CCM into the time derivative of (19) gives the dynamical
model of the system as:

The equivalent control signal, deduced by setting the time derivative of (20) to zero, is:

The sliding coefficients constitute the tuning parameters in the recommended DISM controller.


Equation (21) shows that, to solve the problem of significant steady-state errors in the ISMC algorithm, the DISMC introduces an integral term of the capacitor voltage error component into the equivalent control.

Fig. 8 Double-integral sliding-mode controller for SEPIC converter
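As an illustration of how such a surface drives the switches, the following minimal discrete-time Python sketch updates the duty command of one SEPIC leg from the DISM surface sign; all coefficient names and the simple sign law are assumptions for illustration, not the authors' implementation.

```python
class DISMController:
    """Discrete-time double-integral sliding-mode controller sketch for one
    SEPIC leg: derives the switch command from the sign of the surface."""

    def __init__(self, a1, a2, a3, a4, dt):
        self.a = (a1, a2, a3, a4)  # sliding coefficients (placeholders)
        self.dt = dt               # sampling period (s)
        self.i1 = 0.0              # integral of (e1 + e2)
        self.i2 = 0.0              # double integral of (e1 + e2)

    def duty(self, iL1_ref, iL1, vC2_ref, vC2):
        e1 = iL1_ref - iL1         # input-inductor current error
        e2 = vC2_ref - vC2         # output-capacitor voltage error
        self.i1 += (e1 + e2) * self.dt
        self.i2 += self.i1 * self.dt
        a1, a2, a3, a4 = self.a
        S = a1 * e1 + a2 * e2 + a3 * self.i1 + a4 * self.i2
        # Sign/hysteresis control law: turn S1 on when the surface is positive.
        return 1.0 if S > 0 else 0.0
```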

V. SIMULATION RESULTS


The performance of the proposed FSTP SEPIC inverter using the sliding-mode control strategy has been investigated under different conditions. The parameters of the system are summarized in TABLE I. The simulation results are shown in Figs. 9 and 10.
Fig. 9 shows the inverter performance during normal operating conditions: Fig. 9a shows the output capacitor voltages of both converters, while Fig. 9b shows the three-phase output line voltages of the inverter. In Fig. 9c, the input inductor currents of both SEPIC converters are illustrated, and the input current of the DC supply is shown in Fig. 9d. Fig. 10 shows the step response of the inverter: Fig. 10a exhibits the performance of the inverter under a step change in the load reference voltage from 50 to 100% with doubled frequency, while Fig. 10b shows the performance of the inverter when the load current is changed from 50 to 100%.

(a)

(b)

(c)

(d)

Fig. 9 Performance of the FSTP SEPIC inverter under normal operating conditions. (a) Output capacitor
voltage of both SEPIC converters. (b) Three phase output line voltages. (c) Input inductor current of both
SEPIC converters. (d) DC supply current.

(a)

(b)

Fig. 10 Step response of the FSTP SEPIC inverter. (a) Load voltage and load current for a step change of
the reference load voltage from 50 to 100% with doubled frequency. (b) Load voltage and load current for
a load step change from 50 to 100%.
VI. CONCLUSIONS
A DC-AC four-switch three-phase SEPIC-based inverter has been proposed in this paper. The proposed inverter improves the utilization of the DC bus by a factor of two when compared to the conventional four-switch three-phase voltage source inverter. Moreover, without the need for an output filter, it can produce a pure sinusoidal three-phase output voltage. Unlike the conventional four-switch three-phase inverter, the proposed inverter does not suffer from voltage fluctuation across the DC-link split capacitors, and the third-phase load current is drawn directly from the DC source without circulating through any passive component. A sliding-mode controller was designed and applied to the reduced second-order model of the SEPIC DC-DC converter. Simulation results verified the performance of the proposed inverter.
REFERENCES
[1] H. W. V. D. Broeck and J. D. V. Wyk, "A comparative investigation of a three-phase induction machine drive with a component minimized voltage-fed inverter under different control options," IEEE Trans. Ind. Appl., vol. IA-20, no. 2, pp. 309-320, Mar. 1984.
[2] M. B. de R. Correa, C. B. Jacobina, E. R. C. da Silva, and A. M. N. Lima, "A general PWM strategy for four-switch three-phase inverters," IEEE Trans. Power Electron., vol. 21, no. 6, pp. 1618-1627, Nov. 2006.
[3] M. N. Uddin, T. S. Radwan, and M. A. Rahman, "Fuzzy-logic-controller-based cost-effective four-switch three-phase inverter-fed IPM synchronous motor drive system," IEEE Trans. Ind. Appl., vol. 42, no. 1, pp. 21-30, Jan./Feb. 2006.
[4] C.-T. Lin, C.-W. Hung, and C.-W. Liu, "Position sensorless control for four-switch three-phase brushless DC motor drives," IEEE Trans. Power Electron., vol. 23, no. 1, pp. 438-444, Jan. 2008.
[5] J. Kim, J. Hong, and K. Nam, "A current distortion compensation scheme for four-switch inverters," IEEE Trans. Power Electron., vol. 24, no. 4, pp. 1032-1040, Apr. 2009.
[6] C. Xia, Z. Li, and T. Shi, "A control strategy for four-switch three-phase brushless DC motor using single current sensor," IEEE Trans. Ind. Electron., vol. 56, no. 6, pp. 2058-2066, Jun. 2009.
[7] S. B. Ozturk, W. C. Alexander, and H. A. Toliyat, "Direct torque control of four-switch brushless DC motor with non-sinusoidal back EMF," IEEE Trans. Power Electron., vol. 25, no. 2, pp. 263-271, Feb. 2010.
[8] K. D. Hoang, Z. Q. Zhu, and M. P. Foster, "Influence and compensation of inverter voltage drop in direct torque-controlled four-switch three-phase PM brushless AC drives," IEEE Trans. Power Electron., vol. 26, no. 8, pp. 2343-2357, Aug. 2011.
[9] T.-S. Lee and J.-H. Liu, "Modeling and control of a three-phase four-switch PWM voltage-source rectifier in d-q synchronous frame," IEEE Trans. Power Electron., vol. 26, no. 9, pp. 2476-2489, Sept. 2011.
[10] R. Wang, J. Zhao, and Y. Liu, "A comprehensive investigation of four-switch three-phase voltage source inverter based on double Fourier integral analysis," IEEE Trans. Power Electron., vol. 26, no. 10, pp. 2774-2787, Oct. 2011.
[11] W. Wang, A. Luo, X. Xu, L. Fang, T. M. Chau, and Z. Li, "Space vector pulse-width modulation algorithm and DC-side voltage control strategy of three-phase four-switch active power filters," IET Power Electron., vol. 6, no. 1, pp. 125-135, Jan. 2013.
[12] M. Narimani and G. Moschopoulos, "A method to reduce zero-sequence circulating current in three-phase multi-module VSIs with reduced switch count," in Proc. IEEE Applied Power Electronics Conference and Exposition (APEC), 17-21 Mar. 2013, pp. 496-501.
[13] X. Tan, Q. Li, H. Wang, L. Cao, and S. Han, "Variable parameter pulse width modulation-based current tracking technology applied to four-switch three-phase shunt active power filter," IET Power Electron., vol. 6, no. 3, pp. 543-553, Mar. 2013.
[14] C. Xia, Y. Xiao, W. Chen, and T. Shi, "Three effective vectors-based current control scheme for four-switch three-phase trapezoidal brushless DC motor," IET Electr. Power Appl., vol. 7, no. 7, pp. 566-574, Aug. 2013.
[15] M. Masmoudi, B. El Badsi, and A. Masmoudi, "DTC of B4-inverter-fed BLDC motor drives with reduced torque ripple during sector-to-sector commutations," IEEE Trans. Power Electron., vol. 29, no. 9, pp. 4855-4865, Sept. 2014.
[16] S. Dasgupta, S. N. Mohan, S. K. Sahoo, and S. K. Panda, "Application of four-switch-based three-phase grid-connected inverter to connect renewable energy source to a generalized unbalanced microgrid system," IEEE Trans. Ind. Electron., vol. 60, no. 3, pp. 1204-1215, Mar. 2013.
[17] B. El Badsi, B. Bouzidi, and A. Masmoudi, "DTC scheme for a four-switch inverter-fed induction motor emulating the six-switch inverter operation," IEEE Trans. Power Electron., vol. 28, no. 7, pp. 3528-3538, Jul. 2013.
[18] R. Wang, J. Zhao, and Y. Liu, "DC-link capacitor voltage fluctuation analysis of four-switch three-phase inverter," in Conf. Rec. IECON, 2011, pp. 1276-1281.
[19] M. Veerachary, "Power tracking for nonlinear PV sources with coupled inductor SEPIC converter," IEEE Trans. Aerosp. Electron. Syst., vol. 41, no. 3, pp. 1019-1029, Jul. 2005.
[20] S. J. Chiang, S. Hsin-Jang, and C. Ming-Chieh, "Modeling and control of PV charger system with SEPIC converter," IEEE Trans. Ind. Electron., vol. 56, no. 11, pp. 4344-4353, Nov. 2009.
[21] A. El Khateb, N. Abd Rahim, J. Selvaraj, and M. N. Uddin, "Fuzzy-logic-controller-based SEPIC converter for maximum power point tracking," IEEE Trans. Ind. Appl., vol. 50, no. 4, pp. 2349-2358, Jul./Aug. 2014.
[22] E. H. Ismail, "Bridgeless SEPIC rectifier with unity power factor and reduced conduction losses," IEEE Trans. Ind. Electron., vol. 56, no. 4, pp. 1147-1157, Apr. 2009.
[23] M. Mahdavi and H. Farzanehfard, "Bridgeless SEPIC PFC rectifier with reduced components and conduction losses," IEEE Trans. Ind. Electron., vol. 58, no. 9, pp. 4153-4160, Sep. 2011.
[24] R. O. Caceres and I. Barbi, "A boost DC-AC converter: analysis, design, and experimentation," IEEE Trans. Power Electron., vol. 14, no. 1, pp. 134-141, Jan. 1999.
[25] L. Malesani, R. G. Spiazzi, and P. Tenti, "Performance optimization of Cuk converters by sliding-mode control," IEEE Trans. Power Electron., vol. 10, no. 3, pp. 302-309, May 1995.
[26] P. Mattavelli, L. Rossetto, G. Spiazzi, and P. Tenti, "General-purpose sliding-mode controller for DC/DC converter applications," in Proc. IEEE Power Electronics Specialists Conf. (PESC), Seattle, Jun. 1993, pp. 609-615.
[27] M. S. Diab, A. Elserougi, and A. Abdel-Khalik, "Non-linear sliding-mode control of three-phase buck-boost inverter," in Proc. IEEE 23rd International Symposium on Industrial Electronics (ISIE), 1-4 Jun. 2014, pp. 600-605.
[28] P. Mattavelli, L. Rossetto, G. Spiazzi, and P. Tenti, "Sliding mode control of SEPIC converters," in Proc. European Space Power Conf. (ESPC), Graz, Aug. 1993, pp. 173-178.


Maximization of co-ordination gain in uplink system using the dynamic cell clustering algorithm
Ms. M. Keerthana,
Final Year, Electronics and Communication Engineering,
M.Kumarasamy College of Engineering, Karur. Keerthimahi95@gmail.com

Ms. K. Kavitha,
Final Year, Electronics and Communication Engineering
Abstract: The dynamic clustering algorithm changes the composition of clusters periodically. We consider two well-known dynamic clustering algorithms, the full-search clustering algorithm (FSCA) and the greedy-search clustering algorithm (GSCA), and introduce the coordination gain as a new parameter to be maximized in the coordinated communication system. Simulation results show that the MAX-CG clustering algorithm improves the average user rate and the edge user rate, while the IW clustering algorithm improves the edge user rate and reduces the complexity to only half of the existing algorithm.

I. INTRODUCTION

In the uplink communication system, the base station (BS) receives low-intensity signals from cell-edge users and signals from users at the edge of adjacent cells simultaneously. In the downlink communication system, the user receives signals from the BS of its own cell and signals from the BSs of the adjacent cells with similar power. The received signals from other cells act as interference and cause performance degradation. In this case, both the capacity and the data rate are reduced by the inter-cell interference (ICI) [1]. In the past, the fractional frequency reuse (FFR) scheme, which is a simple ICI reduction technique, has been used to achieve the required performance in interference-limited environments. Since the FFR scheme increases the performance at the cell edge but degrades the overall cell throughput, a coordinated system was proposed to overcome the weakness of the FFR scheme. Also, techniques for ICI mitigation and performance enhancement by sharing the full channel state information (CSI) and transmit data were studied in [2]. However, these techniques are difficult to implement in a practical communication system because of the large amount of information to be shared between BSs. Instead of the impractical scenario that requires full CSI and transmit data sharing across the whole network, a clustering algorithm has been applied to practical communication systems by configuring the cluster to share full CSI among a limited number of cells. Clustering algorithms are classified into two types: static clustering algorithms and dynamic clustering algorithms. A dynamic clustering algorithm to avoid ICI was developed, whose objective is that the overall network has the minimum performance degradation while also improving the performance of the cell-edge user. A clustering algorithm for sum rate maximization using greedy search was proposed to improve the sum rate without guaranteeing the cell-edge user's data rate. However, when the size of the whole network is large, the complexity of the algorithm increases rapidly, and if the complexity of the algorithm is large, the processing speed cannot adapt to the changes of the channels [3]. The purpose of coordinated communication is to minimize the inter-cell interference to the cell-edge users and to improve their performance. When the clusters are not properly configured, the performance of the cell-edge users will be further degraded. Even though the existing algorithm improves the overall data rate, it does not consider the goal of coordinated communication: the improvement of the performance of cell-edge users.
EXISTING SYSTEM
In the uplink communication system, the base station (BS) receives low-intensity signals from cell-edge users and signals from users at the edge of adjacent cells simultaneously. In the downlink communication system, the user receives signals from the BS of its own cell and signals from the BSs of adjacent cells with similar power. In the past, the fractional frequency reuse (FFR) scheme, which is a simple ICI reduction technique, has been used to achieve the required performance in interference-limited environments. In the static clustering algorithm, the construction of clusters is fixed, so that clusters are composed of a limited number of adjacent cells. The advantage of the static clustering algorithm is that BSs share the CSI and received data with other BSs belonging to the same cluster without a central controller [4]. Hence the complexity is low, and the clustering algorithm causes no CSI sharing between clusters.
PROPOSED SYSTEM
We propose dynamic clustering algorithms to improve the cell-edge user data rate and to reduce the complexity at the same time. The proposed clustering algorithms have lower complexity than the existing algorithms; moreover, they improve the cell-edge users' performance without increasing the complexity. First, the maximization of coordination gain (MAX-CG) clustering algorithm is proposed. The MAX-CG clustering algorithm maximizes the coordination gain between the coordinated communication system and the single-cell communication system. Its performance is close to that of the optimal clustering algorithm, the full-search clustering algorithm. This effect mainly comes from the use of the coordination gain, which highlights the benefit of coordinated communication [5]. The coordination gain is increased only if the benefit of the BS coordination is large. Second, we develop the interference weight (IW) clustering algorithm, which reduces complexity and improves both the average user rate and the 5% edge user rate.
MAX-CG ALGORITHM
We define a new parameter called the coordination gain of data rate, which is the rate difference between coordinated communication and single-cell communication. The most important objective of BS coordination is to improve the cell-edge users' performance. As explained in Section III, the static clustering algorithm and the GSCA do not guarantee the cell-edge users' performance: the former does not reflect changes in the channel environment, while the latter considers only sum-rate maximization. Since the GSCA tries to maximize the sum rate, each cluster is composed of cells with many cell-center users; thus the cluster that is formed last has low coordination gain. To combat these problems, we use the coordination gain to make clusters that maximize the performance gain [6]; a sketch of this selection rule is given below. The coordination gain is the rate difference between C(C) and C(NC), where C(C) is the sum rate of users in cluster G, and C(NC) is the sum rate of users in cluster G on the basis that those users do not coordinate.
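A minimal Python sketch of this selection rule follows; the rate functions and the fixed cluster size are placeholder assumptions, and only the gain criterion C(C) − C(NC) is taken from the text.

```python
from itertools import combinations

def coordination_gain(cluster, rate_coordinated, rate_single):
    """Coordination gain of a cluster: C(C) - C(NC), i.e. the sum rate with
    BS coordination minus the sum rate when each cell acts alone."""
    return rate_coordinated(cluster) - sum(rate_single(c) for c in cluster)

def max_cg_clustering(cells, cluster_size, rate_coordinated, rate_single):
    """Repeatedly pick the unclustered cell set with the largest coordination
    gain, so that the last-formed cluster is no longer starved of gain."""
    remaining, clusters = set(cells), []
    while len(remaining) >= cluster_size:
        best = max(combinations(remaining, cluster_size),
                   key=lambda g: coordination_gain(g, rate_coordinated,
                                                   rate_single))
        clusters.append(best)
        remaining -= set(best)
    if remaining:                      # leftover cells form the final cluster
        clusters.append(tuple(remaining))
    return clusters
```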
IW CLUSTERING ALGORITHM
In this section, we propose an algorithm to supplement the MAX-CG clustering algorithm. In comparison with the GSCA, the MAX-CG clustering algorithm improves both the sum rate and the weak users' rate, and it approaches the performance of the FSCA. Nevertheless, the complexity of the MAX-CG clustering algorithm is higher than that of the GSCA because it calculates the sum rate of all combinations. Therefore, we propose the interference weight (IW) clustering algorithm to reduce the complexity of clustering without performance loss. We consider the pairwise relationship between two cells for clustering; in this approach, we do not need to form all combinations of BSs, so the complexity of the clustering algorithm is reduced (a sketch is given at the end of this section). Even though we narrow the scope down to the two-cell case, the coordination gain is difficult to evaluate without further simplification. Fortunately, in the high-SINR regime we can simplify the optimization problem [7]. Note that the high-SINR regime for user b in cell i assumes SINR much greater than 1, which implies that the transmit power of the user is larger than 0 (turning off the transmit power never leads to the high-SINR regime). Since we assume that each user uses the same transmit power P, the high-SINR assumption is reasonable in this system for practical coordinated systems.
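The following Python sketch illustrates the pairwise idea; the specific weight definition (mutual cross-interference power) and the greedy pairing are illustrative assumptions in the spirit of the high-SINR simplification, not the paper's exact formulation.

```python
def interference_weight(i, j, rx_power):
    """Pairwise weight between cells i and j: the interference each causes
    the other. rx_power[(i, j)] is the power that cell j's users leak into
    BS i (an assumed input to this sketch)."""
    return rx_power[(i, j)] + rx_power[(j, i)]

def iw_clustering(cells, rx_power):
    """Greedily pair the two cells with the largest mutual interference
    weight, so the strongest interferers end up coordinated together."""
    remaining, clusters = set(cells), []
    while len(remaining) > 1:
        i, j = max(((a, b) for a in remaining for b in remaining if a < b),
                   key=lambda p: interference_weight(*p, rx_power))
        clusters.append((i, j))
        remaining -= {i, j}
    if remaining:                      # an odd cell stays uncoordinated
        clusters.append((remaining.pop(),))
    return clusters
```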
SEECH PROTOCOL
Energy efficiency is important for wireless sensor networks in smart spaces and extreme environments. Cluster-based communication protocols play a key role in energy saving in hierarchical wireless sensor networks. In most dynamic clustering algorithms, a cluster head simultaneously serves as a relay sensor node to transmit its cluster's data packets to the data sink. As a result, each node may take the cluster-head role as well as the relay role many times during the network lifetime. In our view, this is inefficient from an energy-efficiency perspective, because in most cases a node, due to its position in the network, is comparatively better suited to work as a cluster head and/or a relay. A new distributed algorithm named scalable energy-efficient clustering hierarchy (SEECH) has been proposed to exploit this observation [8].
GRAPH

FLOW CHART

RESULT
We proposed novel dynamic cell clustering algorithms for maximizing the coordination gain in the uplink coordinated system. The MAX-CG clustering algorithm maximizes the coordination gain and improves the average user rate. Simulation and analytical results show that the complexity of the MAX-CG clustering algorithm is much less than that of the FSCA. The IW clustering algorithm reduces the complexity of the MAX-CG clustering algorithm and uses the IW to overcome the disadvantage of the GSCA. Thus, the IW clustering algorithm improves the performance and simplifies the clustering. The IW-weak clustering algorithm improves the 5% edge user rate without loss of the average user rate. Therefore, the proposed clustering algorithms are simple and efficient, making them suitable for practical coordinated systems.
REFERENCES
[1] S. H. Ali and V. C. M. Leung, "Dynamic frequency allocation in fractional frequency reused OFDMA networks," IEEE Trans. Wireless Commun., vol. 8, no. 8, Aug. 2009.
[2] H. Zhang and H. Dai, "Cochannel interference mitigation and cooperative processing in downlink multicell multiuser MIMO networks," EURASIP J. Wireless Commun. Netw., Jul. 2004.
[3] A. Papadogiannis, D. Gesbert, and E. Hardouin, "A dynamic clustering approach in wireless networks with multi-cell cooperative processing," in Proc. IEEE ICC, Apr. 2010.
[4] S. Kaviani and W. A. Krzymien, "Sum rate maximization of MIMO broadcast channels with coordination of base stations," in Proc. IEEE WCNC, 2008.
[5] J. Zhang, R. Chen, J. G. Andrews, A. Ghosh, and R. W. Heath, "Networked MIMO with clustered linear precoding," IEEE Trans. Wireless Commun., vol. 8, no. 4, Apr. 2009.
[6] A. Papadogiannis, D. Gesbert, and E. Hardouin, "A dynamic clustering approach in wireless networks with multi-cell cooperative processing," in Proc. IEEE ICC, 2008.
[7] B. O. Lee, H. W. Je, I. Sohn, O. Shin, and K. B. Lee, "Interference-aware decentralized precoding for multicell MIMO TDD systems," in Proc. IEEE GLOBECOM, 2008.
[8] "SEECH: Secure and energy efficient centralized routing protocol for hierarchical WSN," International Journal of Engineering Research and Development, vol. 2, Aug. 2012, e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com.


Design and Simulation of Low Leakage High Speed Domino Circuit using Current Mirror
Mr. C. Arun Prasath1, Dr. C. Gowri Shankar2 and R. Sudha3
1Assistant Professor, ECE, KSR College of Engineering, Tiruchengode, India
2Associate Professor, ECE, KSR College of Engineering, Tiruchengode, India
3PG Scholar, ECE, KSR College of Engineering, Tiruchengode, India

Abstract: Domino logic is widely used in many applications with high performance and low area overhead. As technology is scaled down, the supply voltage is reduced for low power, and the threshold voltage is also reduced to achieve high performance. However, lowering the threshold voltage leads to an exponential increase of sub-threshold leakage current, so reducing power dissipation has become an important objective in the design of digital circuits. The proposed technique uses an analog current mirror circuit and is based on a comparison of the mirrored current of the pull-up network with its worst-case leakage current. This technique reduces the parasitic capacitance on the dynamic node, yielding a smaller keeper for wide fan-in gates and allowing high-speed and robust circuits to be implemented. Thus, the delay and power consumption of the wide fan-in domino circuit are reduced. The leakage current is also reduced by employing a footer transistor in the diode configuration. The domino gate uses an analog current mirror to replicate the leakage current of the dynamic gate pull-down stack, tracking process, voltage, and temperature, and exploits the stacking effect. This improves the performance of the current mirror circuit, which reduces the leakage current.
Keywords: OR gate, domino logic, current mirror, leakage current
I. INTRODUCTION

Dynamic logic such as domino logic is widely used in many applications to achieve high performance that cannot be achieved with static logic styles. However, the main drawback of dynamic logic families is that they are more sensitive to noise than static logic families. On the other hand, as the technology scales down, the supply voltage is reduced for low power, and the threshold voltage (Vth) is also scaled down to achieve high performance. Since reducing the threshold voltage exponentially increases the sub-threshold leakage current, reduction of leakage current and improvement of noise immunity are major concerns in robust, high-performance designs in recent technology generations. In wide fan-in dynamic gates, especially wide fan-in OR gates, robustness and performance degrade significantly with increasing leakage current.

Wide fan-in domino is used in a variety of applications such as memories and comparators. The domino logic circuit is a kind of dynamic logic circuit used for high-speed and high-performance applications, and it plays a vital role wherever the fan-in of a circuit is high. Domino circuits are widely used in high-performance microprocessors, register files, ALUs, DSP circuits, and priority encoders in content-addressable memories, as well as in high fan-in multiplexer or comparator circuits. Thus, domino logic circuit techniques are extensively applied in high-performance microprocessors due to their superior speed and area characteristics.
A. Static Logic Circuits
Static logic circuits can maintain their output logic levels for an indefinite period as long as the inputs are unchanged, as shown in Figure 1. Although static CMOS logic is widely used for its high noise margins, good performance, and low power consumption with no static power dissipation, these circuits are limited in running at extremely high clock speeds and suffer from glitches [4]. The number of transistors required to implement an N fan-in gate is almost equal to 2N; therefore it consumes a large silicon area. An alternative logic style is dynamic CMOS logic.


Figure 1. CMOS Static Logic

B. Dynamic Logic Circuits


Dynamic logic utilizes simple sequential circuits with memory functions. Its operation depends on the temporary storage of charge in parasitic node capacitances. Dynamic circuits have achieved widespread use because they require less silicon area and have superior performance over conventional static logic circuits. As shown in Figure 2, dynamic logic uses a sequence of precharge and evaluation phases governed by the clock to realize complex logic functions [4]-[6].
Precharge Phase
When the clock signal (φ) = 0, the output node Out is precharged to VDD by the PMOS transistor Mp, and the evaluation NMOS transistor Me remains off, so that the pull-down path is disabled.
Evaluation Phase
When the clock signal (φ) = 1, the precharge transistor Mp is OFF, and the evaluation transistor Me is turned ON. The output is conditionally pulled down based on the input values and the pull-down network (PDN) topology. The main disadvantage of dynamic logic circuits is that they cannot be cascaded directly; domino logic came into existence to overcome this problem.

Figure 2. CMOS Dynamic Logic

C. Domino Logic
The name domino comes from the behaviour of a chain of such logic gates. It is a non-inverting structure, as shown in Figure 3, and runs 1.5-2 times faster than static logic circuits. It is simply a logic style which permits high-speed operation and enables the implementation of complex functions that cannot otherwise be achieved by static and dynamic circuits [4]-[6]. Domino logic offers a simple technique to eliminate the need for a complex clocking scheme by utilizing a single-phase clock, and it has no static power consumption since the clock input of the first stage removes it. These logic circuits are glitch-free, have a fast switching threshold, and can be cascaded. Domino circuits employ a dual-phase dynamic logic style with each clock cycle divided into a precharge and an evaluation phase.

Figure 3. CMOS Domino Logic

II. LITERATURE REVIEW


The most popular dynamic logic is the conventional standard domino circuit, shown in Figure 4. In this design, a PMOS keeper transistor is employed to prevent any undesired discharging at the dynamic node [12] due to the leakage currents and charge sharing of the Pull-Down Network (PDN) during the evaluation phase, hence improving the robustness. The keeper ratio K is defined as
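The formula did not survive extraction; the commonly used definition, consistent with the variables described in the next sentence (a sketch of the usual convention), is

$$K = \frac{\mu_p\,(W/L)_{keeper}}{\mu_n\,(W/L)_{PDN}}.$$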

where W and L denote the transistor sizes, and μn and μp are the electron and hole mobilities, respectively. However, the traditional keeper approach is less effective in new generations of CMOS technology. Although keeper upsizing improves noise immunity, it increases the current contention between the keeper transistor and the evaluation network; thus it increases the power consumption and evaluation delay of standard domino circuits. These problems are more critical in wide fan-in dynamic gates due to the large number of leaky NMOS transistors connected to the dynamic node. Hence, there is a trade-off between robustness and performance, and the number of pull-down legs is limited.

Figure 4. Conventional Standard Domino Circuit

Several circuit techniques have been proposed [3] to address these issues. These techniques can be divided into two categories:

1. Changing the controlling circuit of the gate voltage of the keeper transistor

2. Changing the circuit topology of the footer transistor

In the first category, circuit techniques change the controlling circuit of the gate voltage of the keeper, such as Conditional-Keeper Domino (CKD) [7], High Speed Domino (HSD) [8], Leakage Current Replica (LCR) keeper domino [9], and Controlled Keeper by Current-Comparison Domino (CKCCD) [10]. In the second category, designs, including the proposed one, change the circuit topology of the footer transistor or re-engineer the evaluation network, such as Diode Footed Domino (DFD) [13] and
Diode-Partitioned Domino (DPD) [11].
A. Conditional Keeper Domino Logic
The conditional keeper employs two keepers, a small keeper and a large keeper. In this technique, the keeper device (PK) of the conventional domino is divided into two smaller ones, PK1 and PK2. The keeper sizes are chosen such that PK = PK1 + PK2. Such sizing ensures the same level of leakage tolerance as the conventional gate while improving the speed.

Figure 5. Conditional Keeper Domino

B. High Speed Domino Logic


The circuit of the HS domino logic is shown in Figure 6. In HS domino, the keeper transistor is driven by a combination of the output node and a delayed clock.

Figure 6. High Speed Domino logic

The circuit works as follows. At the start of the evaluation phase, when the clock is high, MP3 turns on and the keeper transistor MP2 turns off. In this way, the contention between the evaluation network and the keeper transistor is reduced by turning off the keeper transistor at the beginning of the evaluation mode. After a delay equal to the delay of two inverters, transistor MP3 turns off. At this moment, if the dynamic node has been discharged to ground, i.e., if any input has gone high, the NMOS transistor MN1 remains off. Thus the voltage at the gate of the keeper goes to VDD-Vth and not VDD, causing a higher leakage current through the keeper transistor. On the other hand, if the dynamic node remains high during the evaluation phase (all inputs at 0, standby mode), MN1 turns on and pulls down the gate of the keeper transistor. Thus the keeper transistor turns on to keep the dynamic node high, fighting the effects of leakage.
C. Controlled Keeper by Current Comparison Domino logic (CKCCD)
A new circuit design, Controlled Keeper by Current Comparison Domino (CKCCD), has been proposed to make domino circuits more robust and lower their leakage without significant performance degradation or increased power consumption. A reference current is compared with the pull-down network current; if there is no conducting path from the dynamic node to ground, the only current in the PDN is the leakage current.

Figure 7. Controlled keeper by Current Comparison Domino

This idea is conceptually illustrated in Figure 7. In effect, there is a race between the pull-down network current and the reference current.
D. Current Comparison Domino (CCD)
In the CCD circuit, the current of the PUN is mirrored by transistor M2 and compared with the reference current, which replicates the leakage current of the PUN. The topology of the keeper transistors and the reference circuit, which is shared by all gates, successfully tracks process, voltage, and temperature variations. The CCD circuit employs pMOS transistors to implement the logic function, as shown in Figure 8. This circuit is similar to a replica leakage circuit, in which a series diode-connected transistor M6, similar to M1, is added. Using the N-well process, the source and body terminals of the pMOS transistors can be connected together so that the body effect is eliminated. By this means, the threshold voltage of the transistors varies only due to process variation and not the body effect. Moreover, utilizing pMOS transistors instead of nMOS ones in the N-well process makes it possible to prevent an increase of the threshold voltage due to the body effect in the presence of the voltage drop caused by the diode configuration of transistor M1, thereby decreasing the delay.

Figure 8. Current Comparison Domino

III. PROPOSED DESIGN


Since the capacitance of the dynamic node is large in wide fan-in gates, the speed decreases dramatically. In addition, the noise immunity of the gate is reduced due to the many parallel leaky paths in wide gates. Although the keeper transistor can improve noise robustness, power consumption and delay are increased due to the large contention [14]. The proposed design uses a Leakage Current Replica (LCR) keeper for dynamic domino gates, which employs an analog current mirror to replicate the leakage current of the dynamic gate pull-down stack and thus tracks process, voltage, and temperature. The proposed keeper has an overhead of one field-effect transistor per gate plus a portion of a shared current mirror [7].

Figure 9. Proposed LCR keeper


The proposed leakage-resistant domino uses a current mirror together with PMOS and NMOS transistors
to counteract the leakage produced by the circuit. The current mirror is a cost-efficient element that rejects
leakage and can be shared among all gates [9]. A current mirror is an element with at least three terminals.
The common terminal is connected to a power supply, and the input current source is connected to the
input terminal. Ideally, the output current equals the input current multiplied by a desired current gain;
if the gain is unity, the input current is reflected to the output, leading to the name current mirror. A current
mirror reads the current entering its input node and mirrors this current (with a suitable gain factor) to one
or more output nodes. Under ideal conditions, the current mirror gain is independent of input frequency,
and the output current is independent of the voltage between the output and common terminals.
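To make the mirroring relation concrete, the short Python sketch below models an ideal current mirror whose output current is the input current scaled by the ratio of the output and input device W/L ratios. This is an assumed behavioural illustration, not the Eldo netlist used in this work.

    # Ideal current-mirror model (illustrative sketch, assumed behaviour):
    # I_out = I_in * (W/L)_out / (W/L)_in.
    def mirror_output_current(i_in, wl_in=1.0, wl_out=1.0):
        """Return the mirrored output current of an ideal current mirror."""
        gain = wl_out / wl_in
        return i_in * gain

    # With unity gain the input current is simply reflected to the output.
    leakage_replica = mirror_output_current(i_in=2e-9)  # assumed 2 nA leakage replica
    print(leakage_replica)                              # -> 2e-09 (amperes)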

Figure 10. Current mirror

IV. SIMULATION RESULT


The simulation is done for the proposed scheme with 8-, 16-, 32- and 64-bit OR gate designs over the
corresponding fan-in range. The circuit power is evaluated using 350 nm technology with a 5 V supply,
at an operating frequency of 10 MHz. The OR gate is designed using the LCR keeper because this
technique uses fewer transistors than the other methods. The LCR keeper uses a conventional analog
current mirror that tracks any process corner as well as voltage and temperature. Figure 11 shows the
schematic design for the proposed LCR keeper using an OR gate and Figure 12 shows the waveform for
the proposed LCR method.

Figure 11. Schematic for proposed LCR method

Figure 12. Waveforms for proposed LCR method


In the 8-bit gate, only 8 transistors are used in the pull-down network; hence it produces less leakage
current. Stacking NMOS keepers may reduce the leakage power, but the power consumed by those
transistors dominates the leakage power they save. The same procedure is continued for the 16-bit gate.
The current mirror circuit is simulated using the Eldo simulation file, and the layout is obtained using the
same tool.

Figure 13. Schematic of analog current mirror

Figure 14. Layout of current mirror


Normally in domino logic the pull-up network uses only one PMOS transistor instead of n transistors
(n = 4, 8, 16, etc.), and the logic is implemented with the pull-down network. By reducing the number of
transistors, the capacitance decreases. Fewer transistors mean less switching (charging and discharging)
of the capacitance, so the dynamic power consumption decreases, since the main source of dynamic power
dissipation is this charging and discharging; a rough numeric illustration follows.
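The familiar relation P_dyn = alpha * C * V^2 * f makes the capacitance argument quantitative. In the Python sketch below, only the 5 V supply and 10 MHz frequency come from the simulation set-up above; the capacitance and switching-activity values are assumed placeholders for illustration.

    # Dynamic power P = alpha * C * V^2 * f (alpha: switching activity factor).
    # The two capacitance values and alpha are assumed, not measured.
    def dynamic_power(alpha, c_farads, vdd, freq_hz):
        return alpha * c_farads * vdd ** 2 * freq_hz

    # 5 V supply and 10 MHz clock as in the simulation set-up above.
    p_more_caps = dynamic_power(alpha=0.5, c_farads=100e-15, vdd=5.0, freq_hz=10e6)
    p_less_caps = dynamic_power(alpha=0.5, c_farads=40e-15,  vdd=5.0, freq_hz=10e6)
    print(p_more_caps, p_less_caps)  # reducing C reduces dynamic power proportionally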
As already noted, leakage current arises from unwanted current flow between the source and drain of a
transistor. This leakage current can be reduced by the stacking effect: when two or more series transistors
are in the OFF condition, the leakage current is reduced.
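A simple way to see the stacking effect numerically is a first-order subthreshold leakage model. The model form is the standard textbook one, and all constants and the stack-node voltage below are generic assumptions, not values extracted from this design.

    import math

    # First-order subthreshold model (assumed generic constants):
    # I_sub = I0 * exp((Vgs - Vth) / (n * VT)). Stacking two OFF transistors
    # raises the intermediate node voltage Vx, making Vgs of the top device
    # negative, so its leakage drops (body effect would reduce it further).
    I0, VTH, N_SLOPE, VT = 1e-7, 0.5, 1.5, 0.026

    def subthreshold_current(vgs, vth=VTH):
        return I0 * math.exp((vgs - vth) / (N_SLOPE * VT))

    single_off = subthreshold_current(vgs=0.0)   # one OFF transistor
    vx = 0.05                                    # assumed stack-node voltage
    stacked_off = subthreshold_current(vgs=-vx)  # top device of a 2-stack
    print(single_off / stacked_off)              # stack leaks several times less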
V. CONCLUSION

With the continuous scaling of CMOS technology, effective management of leakage power is a great challenge.
Thus, new designs are necessary to obtain the desired noise robustness in very wide fan-in circuits.
Moreover, increasing the fan-in not only degrades the worst-case delay, it also increases the contention
between the keeper transistor and the evaluation network. The ITRS report states that leakage power
dissipation may exceed dynamic power dissipation at the 65 nm and forthcoming technology nodes. The
proposed technique uses an analog current mirror circuit and is based on comparing the mirrored current
of the pull-up network with its worst-case leakage current. It replicates the leakage current of a dynamic
gate pull-down stack and tracks process, voltage and temperature while exploiting the stacking effect. This
improves the performance of the current mirror circuit, which reduces the leakage current, and the switching
activity of the capacitance is limited by reducing the number of transistors.
REFERENCES
[1] C. Arun Prasath and B. Manjula, "Design and Simulation of Low Power Wide Fan-In Gates," International Journal of Science and Innovative Engineering & Technology, Vol. 1, ISBN 978-81-904760-6-5, May 2015.
[2] C. Arun Prasath and S. Sindhu, "Domino Based Low Leakage Circuit for Wide Fan-in OR Gates," CiiT International Journal of Programmable Device Circuits and Systems, Vol. 6, No. 3, ISSN: 0974-9624, 2014.
[3] A. Peiravi and M. Asyaei, "Current-Comparison-Based Domino: New Low-Leakage High-Speed Domino Circuit for Wide Fan-In Gates," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 21, No. 5, pp. 934-943, May 2013.
[4] J. M. Rabaey, A. Chandrakasan, and B. Nikolic, Digital Integrated Circuits: A Design Perspective, 2nd ed., Upper Saddle River, NJ: Prentice-Hall, 2003.
[5] Sung-Mo Kang and Yusuf Leblebici, CMOS Digital Integrated Circuits: Analysis and Design, 3rd Edition, Tata McGraw-Hill Publishing Company Ltd, New Delhi, 2007.
[6] Neil H. E. Weste and K. Eshraghian, Principles of CMOS VLSI Design, 2nd Edition, Pearson Education (Asia) Pvt. Ltd., 2000.
[7] A. Alvandpour, R. Krishnamurthy, K. Soumyanath, and S. Y. Borkar, "A sub-130-nm conditional-keeper technique," IEEE J. Solid-State Circuits, Vol. 37, No. 5, pp. 633-638, May 2002.
[8] M. H. Anis, M. W. Allam, and M. I. Elmasry, "Energy-efficient noise-tolerant dynamic styles for scaled-down CMOS and MTCMOS technologies," IEEE Trans. Very Large Scale Integration (VLSI) Syst., Vol. 10, No. 2, pp. 71-78, Apr. 2002.
[9] Y. Lih, N. Tzartzanis, and W. W. Walker, "A leakage current replica keeper for dynamic circuits," IEEE J. Solid-State Circuits, Vol. 42, No. 1, pp. 48-55, Jan. 2007.
[10] A. Peiravi and M. Asyaei, "Robust low leakage controlled keeper by current-comparison domino for wide fan-in gates," Integration, the VLSI Journal, Vol. 45, No. 1, pp. 22-32, 2012.
[11] H. Suzuki, C. H. Kim, and K. Roy, "Fast tag comparator using diode partitioned domino for 64-bit microprocessors," IEEE Trans. Circuits Syst., Vol. 54, No. 2, pp. 322-328, Feb. 2007.
[12] Sherif M. Sharroush, Yasser S. Abdalla, Ahmed A. Dessouki and El-Sayed A. El-Badawy, "Compensating for the Keeper Current of CMOS Domino Logic Using a Well Designed NMOS Transistor," 26th National Radio Science Conference (NRSC2009), 2009.
[13] H. Mahmoodi and K. Roy, "Diode-footed domino: A leakage-tolerant high fan-in dynamic circuit design style," IEEE Trans. Circuits Syst. I, Reg. Papers, Vol. 51, No. 3, pp. 495-503, Mar. 2004.
[14] Paulo F. Butzen, Andre I. Reis, Chris H. Kim and Renato P. Ribas, "Modeling and Estimating Leakage Current in Series-Parallel CMOS Networks," GLSVLSI'07, 2007.
[15] Nikhil Saxena and Sonal Soni, "Leakage current reduction in CMOS circuits using stacking effect," International Journal of Application or Innovation in Engineering & Management (IJAIEM), Vol. 2, Issue 11, pp. 213-216, Nov. 2013.
[16] Ankita Nagar, Sampath Kumar V. and Payal Kaushik, "Power Minimization of Logical Circuit Through Transistor Stacking," International Journal of Application or Innovation in Engineering & Management (IJAIEM), Vol. 1, Issue 3, pp. 256-260, April-May 2013.
[17] James T. Kao and Anantha P. Chandrakasan, "Dual-Threshold Voltage Techniques for Low-Power Digital Circuits," IEEE Journal of Solid-State Circuits, Vol. 35, No. 7, pp. 1009-1018, July 2000.
[18] Jun Cheol Park and Vincent J. Mooney, "Sleepy Stack Leakage Reduction," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 14, No. 11, pp. 1250-1263, Nov. 2006.
[19] Salendra Govindarajulu and Kuttubadi Noorruddin, "Novel keeper technique for Domino logic circuits in DSM Technology," International Journal of Latest Research in Science and Technology, Vol. 1, Issue 2, pp. 127-131, July-August 2012.
[20] Jun Cheol Park and Vincent J. Mooney III, "Sleepy Stack: a New Approach to Low Power VLSI Logic and Memory," School of Electrical and Computer Engineering, Georgia Institute of Technology, 2005.
[21] Lecture slides, "EE466: VLSI Design - Power Dissipation."
[22] Shilpa Kamde, Sanjay Badjate and Pratik Hajare, "Comparative Analysis of Improved Domino Logic Based Techniques for VLSI Circuits," International Journal of Engineering Research and General Science, Vol. 2, Issue 3, pp. 43-50, 2014.
[23] Om Prakash, R. K. Prasad, B. S. Rai and Akhil Kaushik, "Analysis and Design of Logic Gates Using Static and Domino Logic Technique," International Journal of Scientific Research Engineering & Technology (IJSRET), Vol. 1, Issue 5, pp. 179-183, 2012.


Upgrade Reliability for Novel Identity-Based Batch Verification Scheme in VANET
S. Naveena Devi1, M. Mailsamy2 and N. Malathi3
1PG Scholar, 2Assistant Professor, 3Assistant Professor
1, 2, 3 Dept. of Information Technology,
Vivekananda College of Engineering for Women,
Tiruchengode - 637205

Abstract: A novel identity-based batch verification (NIBV) scheme in vehicular ad hoc networks
(VANET) can markedly improve traffic safety and efficiency. The basic idea is to allow vehicles
to send traffic messages to roadside units (RSUs) or other vehicles. Vehicles have to be protected
from attacks on their privacy and from misuse of their private data; for this reason, security and
privacy protection are important prerequisites for VANET. The identity-based batch verification
(IBV) scheme was recently proposed to make VANET more secure and efficient for practical use,
but the current IBV scheme carries some security risks. We set up an improved scheme that can
satisfy the security and privacy desired by vehicles. The proposed NIBV scheme provides provable
security in the random oracle model. In addition, the batch verification of the proposed scheme
needs only a constant number of expensive operations, giving an effective approach for vehicular
sensor networks to achieve authentication, integrity and validity. However, when the number of
signatures received by a roadside unit (RSU) becomes large, a scalability problem appears
immediately: the RSU may be unable to sequentially verify each received signature within the
300 ms period required by the current dedicated short range communications broadcast protocol.
We therefore introduce a new identity-based batch verification scheme for communication between
vehicles and RSUs, in which an RSU can verify numerous received signatures at the same time, so
that the total verification time can be drastically reduced.
Index Terms: Authenticity, novel batch verification, Privacy, Vehicular ad-hoc network.
I. INTRODUCTION
VANETs are a subgroup of mobile ad hoc networks. The main difference is that the mobile routers
constituting the network are vehicles such as cars or trucks, and their movement is controlled by factors
like road layout, surrounding traffic and traffic regulations. It is a feasible assumption that the members of
VANETs can connect to fixed networks like the Internet occasionally, at least at usual service intervals. A
main goal of VANETs is to enhance road safety. A VANET has three important entities: the trusted
authority (TA), the roadside unit (RSU) and the on-board unit (OBU). The TA schedules the routes of
vehicles and communicates via the RSUs; an RSU relays communication between the TA and the OBUs;
and OBUs communicate with RSUs situated at the roadside or at street intersections. Vehicles can also use
OBUs to communicate with each other. VANET communication can be classified into two types:
vehicle-to-infrastructure (V2I) communication and inter-vehicle (V2V) communication. The basic use of a
VANET is that OBUs transmit information about their nearby state at regular intervals. Information such
as the current time, position, direction, speed and traffic events is passed to other nearby vehicles and
RSUs. For example, the traffic events could be an accident location, a brake light warning, a change
lane/merge traffic warning, an emergency vehicle warning, etc. Other vehicles may modify their travelling
routes, and RSUs may inform the traffic control centre to alter traffic lights to avoid possible traffic jams.
VANET offers a variety of services and benefits to users and thus deserves deployment efforts. Given the
benefits expected from vehicular communications and the enormous number of vehicles, it is clear that
vehicular communications are likely to become the most relevant realization of mobile ad hoc networks.
The appropriate integration of on-board units and positioning devices, such as GPS receivers, along with
communication capabilities opens marvellous business opportunities, but also raises challenging research
problems.
The protection of exchanged messages plays a key role in VANET applications. A message from an OBU
has to be identity-authenticated and integrity-checked before it can be trusted. Otherwise, an adversary can
change the information or even masquerade as another vehicle to transmit false information, and false
information can create dangerous situations. For example, incorrect traffic-flow information may cause the
traffic control centre to make a wrong decision: the traffic light on the heavily loaded side may always stay
red while the other side stays green. In addition, an adversary may impersonate an ambulance to force the
traffic lights in her/his favour and violate the driving rights of other users.

A driver may not wish others to learn her/his travelling routes by tracing the information sent by the OBU;
otherwise, it is hard to attract users to join the network, so anonymous communication is needed. On the
contrary, traceability is also necessary: a vehicle's real identity should be recoverable by a trusted authority
for legal-responsibility issues when crimes or accidents happen. For example, a driver who sent out false
information causing an accident should not be able to escape by hiding behind an anonymous identity. In
other words, vehicles in a VANET need conditional privacy.
Our main contributions in this paper are as follows. Given the security issue of preventing false
information and the conflicting goals of privacy and traceability, the proposed new identity-based batch
verification scheme can be used in both V2I and V2V communications. The new IBV scheme can
withstand threats such as identity privacy violation, forgery and anti-traceability attacks. Compared to the
preceding schemes, the new IBV scheme is efficient in the computational cost of verification delay, since
the process of batch verification needs only a small constant number of pairing and point multiplication
computations; a rough cost model is sketched below. The new identity-based batch verification scheme
further improves security using efficient algorithms, namely a symmetric encryption algorithm together
with the new identity-based batch verification algorithm.
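The efficiency claim can be illustrated with a back-of-the-envelope cost model: if sequential verification spends a few pairings per signature while batch verification needs only a constant number of pairings plus one cheap point multiplication per message, batch delay grows far more slowly. The per-operation costs in the Python sketch below are assumed illustrative values, not measurements of the scheme.

    # Assumed per-operation costs (milliseconds), for illustration only.
    T_PAIRING = 4.5    # one bilinear pairing (assumed cost)
    T_POINT_MUL = 0.6  # one elliptic-curve point multiplication (assumed cost)

    def sequential_delay(n_sigs, pairings_per_sig=3):
        # Verify each signature independently.
        return n_sigs * pairings_per_sig * T_PAIRING

    def batch_delay(n_sigs, const_pairings=3):
        # Constant number of pairings plus one point multiplication per message.
        return const_pairings * T_PAIRING + n_sigs * T_POINT_MUL

    for n in (10, 100, 1000):
        print(n, sequential_delay(n), batch_delay(n))
    # The batch delay grows only in the cheap point multiplications.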
II. RELATED WORKS
In 2015, Shiang-Feng Tzeng and Shi-Jinn Horng [1] pointed out that the existing IBV scheme carries
some security risks and introduced an improved scheme that can satisfy the security and privacy needed by
vehicles. Their IBV scheme provides provable security in the random oracle model. Lee and Lai [2]
described two weaknesses of Zhang et al.'s IBV scheme. First, Zhang et al.'s IBV scheme is susceptible to
the replay attack: an adversary may replicate a false condition, such as a traffic jam, by collecting and
storing vehicle messages and signatures from a matching condition. In 2013, Shi-Jinn Horng and
Shiang-Feng Tzeng [3] showed that SPECS provided a software-based solution to satisfy the privacy
requirement and gave lower message overhead and a higher success rate than earlier results in the message
verification phase. They also found that SPECS is vulnerable to impersonation attack: it has a flaw such
that a malicious vehicle can force arbitrary vehicles to broadcast fake messages to other vehicles. In 2008,
Zhang et al. [4] proposed an identity-based batch verification scheme for V2I and V2V infrastructure in
VANET. They adopted a one-time identity-based signature, which eliminates the verification and
transmission costs of public-key certificates and reduces the overall verification delay for a large number
of message signatures. In 2007, Raya and Hubaux [5] proposed a scheme to conceal the real identities of
users using anonymous certificates. The conventional public key infrastructure is adopted as the security
basis to achieve both message verification and integrity. The main problem is that each vehicle needs a
large storage capacity to save a number of key pairs and the matching certificates, and incurs a high cost
of message verification.
III. PRELIMINARIES
A. SYSTEM MODEL
The system model consists of four entities: the trusted authority (TA), application servers, roadside units
(RSUs) and on-board units (OBUs) installed on vehicles. A two-layer vehicular network model has been
addressed in recent research. The top layer consists of the TA and application servers; the TA and
application servers communicate with RSUs through a secure channel, the transport layer security
protocol, over wired connections. The lower layer comprises vehicles and RSUs, and the communication
among them is based on the dedicated short range communications protocol. According to the VANET
security standard, every vehicle has its own public/private key pair distributed by the TA. Before messages
are transmitted, vehicles have to sign the messages with their private keys to guarantee the integrity of the
messages. On delivery of a safety-related or non-traffic-related message, each RSU or vehicle is
responsible for verifying the signatures of the messages.

FIG 1: The System Model

1) The TA is fully trusted by everybody and is equipped with sufficient computation and storage
capability. Redundant TAs are installed to avoid a bottleneck or a single point of failure.
2) Only the TA can determine a vehicle's real identity; other vehicles and RSUs cannot.
3) The TA and RSUs communicate via a secure fixed network.
4) RSUs are not trusted. As they are located along the roadside, they can easily be compromised, and they
are curious about vehicles' privacy.
5) Tamper-proof devices on vehicles are assumed to be trustworthy, and their information is never
revealed. Following the WAVE standard, every OBU is equipped with a hardware security module (HSM),
a tamper-resistant module used to store the security material. The HSM in each OBU is responsible for
performing all cryptographic operations such as signing messages and updating keys. It is hard for
legitimate OBUs to extract their private keys from their tamper-proof devices. The HSM has its own clock
for generating accurate timestamps and is able to run on its own battery. The TA, RSUs and OBUs have
approximately synchronized clocks.
B. ADVERSARY MODEL
All participating RSUs and OBUs are untrusted and the communication channel is not protected. Without
the novel IBV scheme, an adversary is able to perform the following attacks.
1) An adversary may modify or replay existing messages, or may even impersonate any legitimate vehicle
to inject incorrect information into the system, influencing the behaviour of other users or damaging the
transportation served by the VANET.
2) An adversary may trace the real identity of any vehicle and can disclose a vehicle's real identity by
analyzing the many messages it sends.
IV. PROPOSED SYSTEM
(1) The OBU of the vehicle broadcasts or distributes traffic information to the RSU or nearby vehicles.
(2) The RSU verifies the traffic information and sends it to the TA.
(3) The TA schedules the routes of the vehicles, choosing the traffic-free and shortest route.
(4) A dynamic routing algorithm is applied to find the shortest congestion-free routes.
(5) The energy level of the vehicular network should be kept high while providing high security.
(6) The novel identity-based batch verification algorithm detects hacked packets and also identifies which
vehicle created them, so that the master keys of that particular malicious vehicle can be revoked.
(7) The novel identity-based batch verification scheme provides high security and high performance for
vehicular networks.
(8) Compared to the existing system, higher security is provided, and the RSU extends the network range.
(9) In the novel identity-based batch verification scheme, changed information is easily identified, and it
is difficult to access the information without the signature key.
(10) The TA easily finds duplicate information, and the novel IBV scheme provides high performance.
(11) An advanced symmetric key algorithm is used with the novel identity-based batch verification.
(12) The novel identity-based batch verification algorithm is used to improve the security of a VANET
and also to improve its speed and performance.
V. PERFORMANCE EVALUATION
The computation delay is the most important issue affecting the value of traffic-related messages. We
describe the time cost of the cryptographic operations required for signing and verification in the novel
IBV scheme and in other batch verification schemes.
Fig. 2 compares the computation delay for verifying and signing a message. Previous IBV schemes have a
higher delay for verifying a message: a delay of 9.6 for verification and 0.6 for signing a message.

FIG 2: Comparison of computation delay for verifying and signing a message

The proposed novel IBV scheme has a verification delay of 5.0 and a signing delay of 0.5. Fig. 3 indicates
the relation between the transmission overhead and the number of messages received by an RSU in 10
seconds. As the number of messages increases, the transmission overhead increases linearly. The
transmission overhead of the novel IBV scheme is the least among the four schemes.

FIG 3: Transmission overhead with number of messages received

Here, 45,000 corresponds to the number of messages transmitted by 150 vehicles in 10 seconds; in the
previous IBV schemes, the messages transmitted by 150 vehicles take 30 seconds.
VI. CONCLUSION

We proposed an efficient novel identity-based batch verification (NIBV) scheme for
vehicle-to-infrastructure and inter-vehicle communications in vehicular ad hoc networks (VANET).
Batch-based verification of multiple message signatures is more efficient than one-by-one single
verification when the receiver has to confirm a large number of messages. In particular, the batch
verification process of the proposed NIBV scheme needs only a constant number of pairing and point
multiplication computations, independent of the number of message signatures. The proposed NIBV
scheme is secure against existential forgery in the random oracle model under the computational
Diffie-Hellman problem. In the performance analysis, we evaluated the proposed NIBV scheme against
other batch verification schemes in terms of computation delay and transmission overhead. Moreover, we
verified the efficiency and practicality of the proposed scheme by simulation analysis. Simulation results
show that both the average message delay and the message loss rate of the proposed scheme are lower
than those of the existing schemes.
VII. FUTURE WORK
In future work, we will continue our efforts to enhance the features of the IBV scheme for VANET, such
as recognizing invalid signatures. When attackers send invalid messages, the batch verification may lose
its efficacy; this problem commonly accompanies other batch-based verification schemes. Therefore,
thwarting the invalid signature problem is challenging and a topic for our future research.
VIII. REFERENCES
[1] Shiang-Feng Tzeng and Shi-Jinn Horng, "Enhancing security and privacy for identity-based batch verification scheme in VANET," IEEE Transactions on Vehicular Technology, 2015.
[2] C. C. Lee and Y. M. Lai, "Toward a secure batch verification with group testing for VANET," Wireless Networks, vol. 19, no. 6, pp. 1441-1449, 2013.
[3] Shi-Jinn Horng and Shiang-Feng Tzeng, "b-SPECS+: Batch Verification for Secure Pseudonymous Authentication in VANET," IEEE Transactions on Information Forensics and Security, vol. 8, no. 11, November 2013.
[4] C. Zhang, R. Lu, X. Lin, P. H. Ho, and X. Shen, "An efficient identity-based batch verification scheme for vehicular sensor networks," in Proceedings of the 27th IEEE International Conference on Computer Communications (INFOCOM'08), pp. 816-824, 2008.
[5] M. Raya and J. P. Hubaux, "Securing vehicular ad hoc networks," Journal of Computer Security, Special Issue on Security of Ad Hoc and Sensor Networks, vol. 15, no. 1, pp. 39-68, 2007.


Diagnosis of Cardiac Diseases Using ECG for Wearable Devices
B. Karunamoorthy#1, Dr. D. Somasundereswari #2 and A. Sangeetha*3
#1 Assistant Professor-SRG, EEE Department, Kumaraguru College of Technology, Coimbatore, Tamilnadu
#2Professor & Dean, ECE Department, SNS College of Technology, Coimbatore, Tamilnadu
*3PG Scholar, EEE Department, Kumaraguru College of Technology, Coimbatore, Tamilnadu

Abstract: Cardiovascular diseases are clinically diagnosed by electrocardiogram (ECG) analysis. By
determining the pattern, structure and intervals of the ECG signal, cardiac diseases are found. The
main aim of this study is to analyse the ECG signal and extract the QRS complex from noise using
low-computation filters. The approach is intended for wearable devices, so the size and cost are very
low, and it may also be used in battery-operated devices. The setup is experimented with a prototype
ECG analyser built around a novel filter and implemented in hardware using an ARM
microcontroller. The filter is designed in MATLAB.
Index Terms: Arrhythmia disease, ARM microcontroller, ECG analysis, High-pass non-linear
filter, MATLAB.

INTRODUCTION

Diseases of the heart or blood vessels are generally known as cardiovascular disease (CVD), a major
health problem. Studies say that in 2011 there were almost 160,000 deaths as a result of CVD; around
74,000 of these deaths were caused by coronary heart disease. Most deaths from heart disease are caused
by heart attacks. In the UK, there are about 103,000 heart attacks and 152,000 strokes each year, resulting
in more than 41,000 deaths [7]. This disease can be discovered using ECG analysis. An ECG, or
electrocardiogram, is a test that measures the electrical activity of the heart. The electrical impulses
generated by the heartbeat are recorded and usually shown on a piece of paper. The electrocardiogram
records any problems with the heart's rhythm and with the conduction of the heartbeat through the heart,
which may be affected by underlying heart disease. It consists of five waves, namely P, Q, R, S and T.
The normal ECG wave is explained in Fig. 1. Each wave occurs due to electrical variations in the heart.
The QRS complex represents the depolarization of the ventricles of the heart, which have greater muscle
mass and therefore produce more electrical activity.

The applications of ECG include determining the electrical axis of the heart, heart rate monitoring, and
the monitoring of arrhythmias, carditis and pacemakers [11]. The automatic detection of QRS is critical
for reliable Heart Rate Variability (HRV) analysis, which is recognized as an effective tool for diagnosing
cardiac arrhythmias, understanding the autonomic regulation of the cardiovascular system during sleep
and hypertension, detecting breathing disorders like Obstructive Sleep Apnea Syndrome, and monitoring
other structural or functional cardiac disorders. The detection of QRS complexes has been extensively
investigated over the last two decades, and many attempts have been made to find a satisfying universal
solution for QRS complex detection. The difficulties arise mainly from the huge diversity of QRS complex
waveforms, abnormalities, low signal-to-noise ratio (SNR) and the artefacts accompanying ECG signals,
as described in [8]. The main aim of this work is to increase the accuracy of QRS detection in arrhythmia
ECG signals that suffer from non-stationary random effects, low SNR, negative QRS and low-amplitude
QRS.

Fig.1 Basic ECG Wave


Many methods have been proposed to detect QRS complexes, but they have high hardware complexity
and cost; when the cost is low, the accuracy tends to be low as well [9]. Thus, the proposed technique is
implemented with the MIT-BIH database, as described in [15], in hardware with less complexity and lower
cost. This is done using an ARM microcontroller, so the response to the ECG signal is fast. A novel filter
is designed to eliminate the noise in the QRS complex, and the accuracy of the signal is increased. This
concept is most useful in wearable devices [1], portable devices and battery-operated devices.
COMPARISON OF OTHER METHODS
The acquisition of ECG is a two-step process: (1) feature extraction and (2) detection. In the first step,
QRS complexes are enhanced and the noise in them is removed. This is done using different types of
band-pass filtering. Low-frequency noise removal is done by a high-pass FIR filtering technique [2],
which uses low cutoff frequencies with numerous taps. To compute each sample, these filters therefore
require n fixed-point adders and multipliers, which increases the operation complexity to 2n. If IIR filters
are used, floating-point coefficients are needed. In the second step, the position of the QRS complexes is
determined by differentiation and squaring of the signal; in some methods, integration is also used to
detect the signal.

Pan and Tompkins is the classical algorithm [3] for real-time QRS detection based on analysis of the
slope, amplitude and width of QRS complexes. This algorithm includes a series of filters and stages that
perform low-pass, high-pass, derivative, squaring, integration and adaptive-thresholding operations. In
recent times, much research work has been reported on automatic detection of the QRS complex from the
ECG signal, including frequency-based methods, time-component analysis, dynamic thresholds [5] and
heartbeat interval techniques. These methods are executed using the Fourier transform or Hilbert transform
[6], which are limited to stationary signals. To overcome this, the short-time Fourier transform (STFT)
was introduced; it gives both frequency- and time-domain analysis, but some signals cannot be detected
using STFT [13]. Wavelet transform methods [4], [14] have been developed and, among other methods,
produce reasonable results, but the hardware complexity and cost are high, and for accurate measurements
they are used along with an Artificial Neural Network (ANN) or a multi-layer perceptron based neural
network (MLPNN).
PROPOSED SYSTEM

A novel algorithm is proposed to apply to real-time ECG signals. It is flexible, upgradeable, operates with
low processing power and is also inexpensive. Like similar proposals, it has an enhancement phase
followed by a detection phase. The computational costs of the first phase are high in other proposals; the
proposed technique reduces these costs, making it suitable for wearable applications. The block diagram
of the proposed system is shown in Fig. 2.

Fig.2 Block diagram

A. ECG SIGNAL

The real-time ECG signal from the patient is taken for the diagnosis of any cardiac disease. This signal
consists of P, QRS and T waves, of which the proposed algorithm concentrates on QRS detection, since
the QRS complex is difficult to determine and is responsible for the diagnosis of arrhythmia.
B. PRE-PROCESSING

ECG signals contain various kinds of unwanted noise due to power line interference, electromagnetic
noise and baseline wander in the heartbeat signal, similar to [12]. The accuracy of the signal would be
degraded if the signal were used as acquired; for this reason, pre-processing is needed. The process
consists of passing the raw ECG signal through a low-pass filter, which performs both linear and nonlinear
filtering of the ECG signal and produces a set of periodic vectors that describe the events. After filtering,
the signal is amplified using an amplifier, which helps in the feature extraction of the peaks.
C. LOW FREQUENCY NOISE REDUCTION

A novel high-pass non-linear filter is designed with low computational cost to remove the baseline
wander. The operation of the filter is to subtract a low-pass-filtered signal from the original signal to
obtain the higher-frequency signal; this subtraction is carried out between maximum and minimum values
of the signal. Let s(t) be the discrete-time function of the digitized ECG signal; the filtered output y(t) is
then given by Equations (1), (2) and (3).

Normally, to find the extrema of s(t) over a window of n samples, n comparisons are needed for each
sample, which increases the amount of computation. To avoid this, pseudo-extrema functions are used;
they are termed high* and low* and defined in Eqs. (2) and (3) respectively. The operation of high*(t) is
as follows: a variable holds the current maximum; if the value of the input signal s(t) is equal to or lower
than the current maximum, the current maximum is decreased by a factor α, and if s(t) is higher, the
current maximum is increased by a factor β. low*(t) is defined similarly. By this process heartbeats appear
as pulses of approximately 1 mV, so the normal heartbeat range lies around 60-100 units in the digitized
signal; rates above 100 beats/min are termed tachycardia and rates below 60 beats/min are termed
bradycardia.

To make the filter smoother, the value of α is increased, which also increases the cutoff frequency.
Transient dynamics in non-linear filters are not captured by frequency response analysis, so suitable values
for α and β are found by trial and error. A sketch of this tracking rule follows.
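A minimal Python sketch of the pseudo-extrema tracking is given below. The exact forms of Eqs. (1)-(3) are not reproduced here, so the drift/jump update rule, the α and β values and the baseline-midpoint subtraction are all assumptions of this sketch rather than the paper's filter.

    def pseudo_extrema_filter(s, alpha=0.01, beta=0.1):
        """Assumed reconstruction of the high*/low* pseudo-extrema filter:
        each tracker drifts slowly toward the signal (rate alpha) and jumps
        quickly (rate beta) when a new extreme appears. The baseline is
        estimated as the midpoint of the two trackers and subtracted from
        the input to remove low-frequency wander."""
        high = low = s[0]
        y = []
        for x in s:
            high += beta * (x - high) if x > high else alpha * (x - high)
            low  += beta * (x - low)  if x < low  else alpha * (x - low)
            y.append(x - 0.5 * (high + low))   # y(t) = s(t) - baseline estimate
        return y

    # Toy usage: a slow ramp (baseline wander) with narrow spikes (beats);
    # the ramp is largely removed while the spikes are preserved.
    sig = [0.002 * t + (1.0 if t % 80 == 0 else 0.0) for t in range(400)]
    filtered = pseudo_extrema_filter(sig)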
D. HIGH FREQUENCY NOISE REMOVAL

The high* and low* functions are also used to sense the amplitude of the high-frequency signal. To sense
the amplitude of the signal, a function called m(t) is used, defined in Equation (4).

This m(t) represents the noise factor and should be subtracted from the signal. The filtered signal with
noise reduction is then given by Equation (5). The duration of the QRS pulse is normally in the range of
0.06-0.12 s, so some effort is made to improve its detection.

E. PEAK DETECTION

Not all samples of r(t) are useful for detecting the QRS complexes. The R peaks in this complex appear
as positive and negative values, from which the potential R peaks can be filtered. A certain time period is
set to filter these peaks; the advantage of this method is that it supports situations where the maximum
value remains constant for several samples. The maximum and minimum values over the time period are
defined by Equations (6) & (7), and this value is considered as the peak.

F. BEAT UNIFICATION

As already discussed, QRS complexes appear as both positive and negative peaks. The negative peaks can
be discarded, or they can be unified by changing the sign of the negative peaks so that they become
positive peaks. This is useful for fixing the threshold value in the detection step. Normally, in other
methods, this is done by squaring the signal, but the aim of the proposed technique is to avoid the
multiplication operation, as it has a higher cost than addition or subtraction. The beat unification time
function defined to reduce this cost is given by Equation (8).

G. BEAT DETECTION

The peaks of w(t) higher than the threshold are considered heartbeats. Their amplitudes are measured in
the range of mV. In some cases this amplitude may vary due to the enhancement process; therefore, most
QRS detection algorithms use an adaptive threshold technique to detect the pulses and to modify the
threshold accordingly, and an adaptive threshold is used in the proposed algorithm. Peaks below the
threshold are considered noise. The width of the QRS complex is regarded as constant, reflecting the
general fact that the heart rate does not vary very significantly from beat to beat. The following criteria
are designed to detect the peaks (a condensed sketch follows the list).
Criteria 1

A peak higher than the threshold is considered a detectable heartbeat; lower peaks are eliminated as noise.
The threshold is adaptively updated as the mean of the last 5 detected beats.
Criteria 2

QRS pulses last from 0.06 to 0.12 s. If a peak is detected and another, higher peak is detected inside this
time period, the higher peak is taken as the valid heartbeat and the previous peak is discarded.
Criteria 3

The maximum heart rate is around 220 beats per minute, so it is impossible for another beat to occur
within 0.27 s of the detection of a QRS complex. Peaks detected after the maximum QRS width and
before the end of this period are considered noise.
Criteria 4

If noise was detected since the last pulse, a beat is considered noise if it is smaller than the maximum
noise peak plus the threshold, or smaller than the last detected beat minus the threshold.
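The condensed Python sketch below folds criteria 1-3 into one detection loop; it is an assumed simplification (criterion 4's noise-peak bookkeeping is omitted, uniform sampling at fs is assumed, and the initial threshold is a free parameter), not the embedded implementation.

    def detect_beats(w, fs, init_threshold):
        """Simplified adaptive-threshold beat detector (criteria 1-3 only).
        w: unified peak signal, fs: sampling rate in Hz."""
        refractory = int(0.27 * fs)   # no second beat within 0.27 s (criterion 3)
        qrs_width = int(0.12 * fs)    # maximum QRS width (criterion 2)
        threshold = init_threshold
        beats, last5 = [], []
        i = 0
        while i < len(w):
            if w[i] > threshold:
                # Criterion 2: keep the highest peak inside the QRS window.
                window_end = min(i + qrs_width, len(w))
                peak_i = max(range(i, window_end), key=lambda k: w[k])
                beats.append(peak_i)
                # Criterion 1: threshold = mean of the last 5 detected beats.
                last5 = (last5 + [w[peak_i]])[-5:]
                threshold = sum(last5) / len(last5)
                i = peak_i + refractory  # criterion 3: skip the refractory period
            else:
                i += 1
        return beats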
1) FLOW CHART OF THE PROPOSED ALGORITHM

Fig. 3 Flow diagram for proposed algorithm



Fig. 3 represents the step-by-step process of the proposed method. The ECG signal is obtained from the
patient using a single-lead sensor. After acquiring the ECG signal, the pre-processing steps such as
amplification and noise reduction are done by the embedded controller. A certain time period is kept as a
count and is allowed to increment. The next step is to calculate the threshold value by taking the average
of five peaks of the ECG signal. Subsequent peaks are compared against this threshold: peaks below the
current threshold value are considered noise, while any peak greater than the threshold updates the
threshold value.

Finally, at periodic intervals, the final peak above the threshold value is taken as the desired peak, which
is used to find cardiac diseases. Using these peak values, the heart rate of the patient is determined, and
the heart rate gives information about the state of the patient's heart (see the sketch below). After updating
the peak value, the process continues by taking the next five peaks to calculate the threshold value, so that
the final peak can be determined using the algorithm.
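Once R-peak positions are available, the heart rate follows directly from the mean R-R interval, and the 60/100 beats-per-minute bounds quoted earlier separate bradycardia, normal rhythm and tachycardia. The peak indices and sampling rate in the example are assumed values for illustration.

    def heart_rate_bpm(peak_indices, fs):
        """Mean heart rate from detected R-peak sample indices."""
        rr = [(b - a) / fs for a, b in zip(peak_indices, peak_indices[1:])]
        return 60.0 / (sum(rr) / len(rr))

    def classify_rate(bpm):
        if bpm > 100:
            return "tachycardia"
        if bpm < 60:
            return "bradycardia"
        return "normal"

    # Assumed example: peaks 0.85 s apart at fs = 360 Hz -> about 71 bpm, "normal".
    print(classify_rate(heart_rate_bpm([0, 306, 612, 918], fs=360)))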
2) ARM PROCESSOR

The ARM processor offers five-stage pipelining. The instruction latency is 2.5 ns and the instruction
throughput is 400 MIPS. For wearable sensors, the power consumption must be low; the few existing
algorithms are highly computational, so they require high power. By reducing the duty cycle, the energy
consumption is reduced, and due to the low interrupt latency, the response is fast. The processor is
therefore used for real-time processing of the ECG signal.

The ARM9 board, which consists of a Samsung S3C2440 processor, a 32-bit data bus and a 5 V regulated
supply, is used for the real-time implementation of the proposed beat-classification methodology.
3) EXPERIMENTAL RESULTS

MATLAB is a multi-paradigm numerical computing environment and fourth-generation programming
language, and Simulink, developed by MathWorks, is a graphical programming environment for
modelling, simulating and analyzing multi-domain dynamic systems. Its primary interface is a graphical
block-diagramming tool and a customizable set of block libraries. It offers tight integration with the rest
of the MATLAB environment and can either drive MATLAB or be scripted from it. Simulink is widely
used in automatic control and digital signal processing for multi-domain simulation and Model-Based
Design.

The heartbeat detection is evaluated using the parameters sensitivity, positive detection and detection
error. Sensitivity is the fraction of accurately detected events among the total number of events; positive
detection is the rate of correctly classified events among all detected events. To evaluate these parameters,
true positives, false positives and false negatives are used. A true positive occurs when a beat is correctly
detected at a certain instant. A false positive occurs when a beat has been detected at a certain instant but
no beat has been annotated in the database for that instant. A false negative occurs when the database
reports a beat for a certain instant but the algorithm fails to detect it.
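These definitions translate directly into the usual TP/FP/FN formulas. The helper below is an illustrative sketch with assumed example counts, not results from this work.

    def detection_metrics(tp, fp, fn):
        """Standard beat-detection metrics from counts of true positives,
        false positives and false negatives."""
        sensitivity = tp / (tp + fn)              # detected fraction of real beats
        positive_detection = tp / (tp + fp)       # correct fraction of detections
        detection_error = (fp + fn) / (tp + fn)   # errors per annotated beat
        return sensitivity, positive_detection, detection_error

    print(detection_metrics(tp=980, fp=10, fn=20))  # assumed example counts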

Fig.4 Simulation Block Diagram

The simulation of the proposed filter, shown in Fig. 4, is an extension of [10]. It is drawn in MATLAB
Simulink and run through the same software to display the results. The circuit represents the ECG wave
acquisition, amplification, noise reduction using the high-pass filter, and the mathematical calculation of
parameters to measure the heart rate.

Fig.5 Original ECG Signal

Fig. 5 displays the normal ECG signal, which is generated using a Simulink block to determine the peak
of the signal. Normally, the ECG signal contains some low-frequency noise, so this signal is given to the
filter. Another ECG signal with more noise is generated to analyse the performance of the filter; it is used
to compare the accuracy of the normal state of the heart rate against the abnormal state. This signal is
shown in Fig. 6.

Fig.6 ECG signal with Noise

Fig.7 Peak Detection

The peak is calculated based on the algorithm. Fig. 7 represents the peak calculation. The result shows the
peaks compared with the adaptive threshold: only the peaks above the threshold value are shown, and the
other peaks are eliminated as noise.

Fig. 8 The final peak from the proposed filter

The final simulation result of the proposed algorithm with the detected peaks is shown in Fig. 8. The
performance of the results obtained is better than that of the existing methods.
4) COMPARISON OF METHODS:

The proposed method was compared with other existing methods, as shown in Table 1. The accuracy of
the proposed method is higher than that of the existing methods; thus, the low-frequency noise is reduced
in this work.
5) CONCLUSION

A non-linear high-pass filter, called a maximum-minimum filter, is proposed to remove the low-frequency
noise and to detect the peak of the QRS complex, yielding a simple QRS detector.

Table 1. Comparison with Various Methods

METHODS               ACCURACY
PCA+ANN+PSO           95.5%
RR-INTERVAL+ANN       90%
DWT+ANN               97.93%
PROPOSED TECHNIQUE    98.61%

The low-frequency noise is reduced by using the high-pass filter. This facilitates the real-time processing
of ECG signals with low computational complexity, which is useful for portable devices, wearable devices
and ultra-low-power chips. The proposed filter yields 98.61% efficiency. To determine the heart rate, the
parameters were calculated; the result gives a heart rate of 71 beats per minute, which is a normal heart
rate.
6) REFERENCES
[1] Liang-Hung Wang, "Implementation of a Wireless ECG Acquisition SoC for IEEE 802.15.4 (ZigBee) Applications," IEEE Journal of Biomedical and Health Informatics, Vol. 19, No. 1, January 2015.
[2] S. Rani, A. Kaur and J. S. Ubhi, "Comparative study of FIR and IIR filters for the removal of baseline noises from ECG signal," Int. J. Comput. Sci. Inf. Technol., Vol. 2, No. 3, pp. 1105-1108, 2011.
[3] Xiaoyang Zhang, "A 300-mV 220-nW Event-Driven ADC With Real-Time QRS Detection for Wearable ECG Sensors," IEEE Trans. Biomedical Circuits and Systems, Vol. 8, No. 6, December 2014.
[4] S. Kadambe, R. Murray and G. F. Boudreaux-Bartels, "Wavelet transform-based QRS complex detector," IEEE Trans. Biomed. Eng., Vol. 46, pp. 838-848, 1999.
[5] Mohamed Elgendi et al., "Improved QRS Detection Algorithm using Dynamic Thresholds," International Journal of Hybrid Information Technology, Vol. 2, No. 1, January 2009.
[6] Simranjit Singh Kohli et al., "Hilbert Transform Based Adaptive ECG R-Peak Detection Technique," International Journal of Electrical and Computer Engineering (IJECE), Vol. 2, No. 5, pp. 639-643, October 2012.
[7] http://www.nhs.uk/conditions/cardiovascular-disease/Pages/Introductio.aspx
[8] Dorthe B. Saadi et al., "Automatic Real-Time Embedded QRS Complex Detection for a Novel Patch-Type Electrocardiogram Recorder," IEEE Journal of Translational Engineering in Health and Medicine, Vol. 3, 2015.
[9] H. K. Chatterjee et al., "Real time P and T wave detection from ECG using FPGA," Procedia Technology, Vol. 4, pp. 840-844, 2012.
[10] B. Karunamoorthy et al., "Performance Improvement of QRS Wave in ECG Using ARM Processor," International Journal of Applied Engineering Research, Vol. 10, No. 88, 2015.
[11] http://www.bem.fi/book/19/19.html
[12] M. Kaur et al., "Comparison of different approaches for removal of baseline wander from ECG signal," in Proceedings of the International Conference & Workshop on Emerging Trends in Technology, pp. 1290-1294, 2011.
[13] Basheeruddin Shah Shaik, T. Chandrashekar et al., "A Method for QRS Delineation Based on STFT Using Adaptive Threshold," Procedia Computer Science, Vol. 54, pp. 646-653, 2015.
[14] Paul S. Addison, "Wavelet transform and the ECG: a review," Institute of Physics Publishing, pp. 155-195, 2005.
[15] G. Moody and R. Mark, "The impact of the MIT-BIH Arrhythmia Database," IEEE Eng. Med. Biol. Mag., Vol. 20, pp. 45-50, 2001.


Optimal Location of Capacitors and Capacitor Sizing in a Radial Distribution System Using Krill Herd Algorithm
SA.ChithraDevi# and Dr. L. Lakshminarasimman*
#Research Scholar, *Associate Professor, Department of Electrical Engineering,
Annamalai University, Annamalai Nagar, India

Abstract - This paper presents a new meta-heuristic technique, the krill herd algorithm, for solving
the capacitor placement problem in radial distribution systems (RDS). The algorithm predicts the
optimal sizes of the capacitors and the proper locations where they should be placed for loss
minimization and hence voltage improvement. The krill herd algorithm is built on the biological
herding behaviour of krill. The method is implemented on 10-bus and 85-bus RDS test systems and
the results are compared with other algorithms from the literature. The outcomes reveal the potency
of the algorithm. The simulation is carried out in the MATLAB environment.
Keywords: Capacitor placement, Krill herd algorithm, Power loss minimization, Radial distribution
system (RDS).
I. INTRODUCTION

Studies show that at the distribution side about 13% of the total power is dissipated as ohmic losses
caused by reactive currents flowing in the network. Shunt capacitors are used to reduce these reactive
currents, which results in loss minimization, power factor improvement, system security and better
voltage regulation. The main steps of the capacitor problem are (i) optimal location of capacitor units and
(ii) sizing of capacitor units. Hence, determining the optimal position and size of capacitors plays a
significant part in the planning and operation of an electrical system.

In [1], the authors gave a brief survey about the shunt capacitor problem in radial distribution
system from the year 1956 to 2013.

The authors of [2] presented an overview of optimum shunt capacitor placement in distribution systems
based on the available techniques: (i) analytical methods, (ii) numerical programming methods, (iii)
heuristic methods, (iv) artificial intelligence methods and (v) multidimensional formulations. The authors
also compared the results with Particle Swarm Optimization (PSO) on the basis of power loss reduction,
voltage profile improvement, loadability maximization and line limit constraints.

The authors of [3] gave a brief introduction to and discussion of the various works done on the Shunt
Capacitor Problem (SCP) up to 2014. They also used two methods, namely sensitivity analysis for
searching for suitable capacitor locations and the gravitational search algorithm for selecting the capacitor
sizes.

From 2015, the Artificial Bee Colony (ABC) algorithm [4], the HCODEQ method [5], the Bacterial
Foraging Optimization Algorithm (BFOA) [6], the monkey search optimization algorithm [7], the Bat and
Cuckoo search algorithms [8], Particle Swarm Optimization (PSO) [9], [10] and the flower pollination
optimization algorithm [11] have been applied to solve optimal capacitor placement and sizing in the RDS.

The main drawback of all the above methods is poor convergence speed and convergence to near-optimal
solutions. This is overcome by using one of the new bio-inspired algorithms, namely the krill herd (KH)
algorithm, to solve the capacitor optimization problem. Minimization of RDS active power loss is taken
as the objective function, subject to various constraints, namely voltage limits, reactive power limits and
capacitor locations, and an optimum solution is obtained using the KH algorithm.

In this proposed approach, the following assumptions are made:
(i) The harmonics effect is neglected.
(ii) The system is within the acceptable balance tolerance.
(iii) Bus 1 is always considered the slack/swing bus.

II. OBJECTIVE FUNCTION & CONSTRAINTS



Minimization of the total system power loss is the main objective function, subject to the various equality
and inequality constraints of a distribution network, achieved by determining the optimal placement and
sizing of capacitors using the KHA.
Minimization of loss
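A standard formulation of this objective and its constraints, written out here for concreteness, is as follows; the notation is the conventional one and is an assumption of this summary rather than a reproduction of the paper's own equation set:

    \min \; f = P_{\mathrm{loss}} = \sum_{k=1}^{n_b} |I_k|^2 R_k

    \text{subject to} \quad V^{\min} \le |V_i| \le V^{\max} \;\; \forall i, \qquad Q_c^{\min} \le Q_{c,j} \le Q_c^{\max} \;\; \forall j,

where n_b is the number of branches, I_k and R_k are the current and resistance of branch k, |V_i| is the voltage magnitude at bus i, and Q_{c,j} is the size of the capacitor placed at candidate bus j.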

III. PROBLEM FORMULATION & ALGORITHM


A. Load flow equations
The radial distribution system has a high R/X ratio; because of this, the classical load flow techniques, the
Newton-Raphson and Gauss-Seidel methods, are not suited to the RDS load flow problem. Following [12],
a basic formulation of Kirchhoff's laws is used to find the power flow in the system. The RDS load flow
solution proceeds by sweeping over the branches, as sketched below.

B. Krill herd algorithm
The motion of each krill individual is governed by three components: (i) movement induced by other krill
individuals (local), (ii) foraging motion (global) and (iii) random physical diffusion. The fitness (the
imaginary distance) is the value of the objective function. Each krill position in the n-dimensional decision
space is updated by these three motions, summarized below.
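For reference, the motion model of the original krill herd formulation [13] can be summarized as follows; the notation follows that paper and is restated here as a summary rather than the present paper's own equations:

    \frac{dX_i}{dt} = N_i + F_i + D_i,

    N_i^{\mathrm{new}} = N^{\max}\alpha_i + \omega_n N_i^{\mathrm{old}}, \qquad F_i = V_f \beta_i + \omega_f F_i^{\mathrm{old}}, \qquad D_i = D^{\max}\delta,

    X_i(t + \Delta t) = X_i(t) + \Delta t \, \frac{dX_i}{dt},

where N_i is the motion induced by other krill individuals, F_i the foraging motion, D_i the random physical diffusion, \alpha_i and \beta_i the induced and foraging direction terms, \omega_n and \omega_f inertia weights, N^{\max}, V_f and D^{\max} the maximum induced speed, foraging speed and diffusion speed, and \delta a random direction vector.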


C. Application of the krill herd algorithm to the OCP problem

In [14], the authors used the krill herd algorithm for the solution of the economic load dispatch problem.
For the optimal capacitor placement (OCP) problem the algorithm proceeds as follows (a code sketch is
given after the list):
1. Read the system data, the dimension of the problem, the KHA parameters and the number of iterations.
2. Randomly generate the initial population of capacitor sizes and locations, constrained between the
maximum and minimum limits.
3. Run the load flow in order to find the system losses.
4. Update the motion of each krill individual by induced motion, foraging motion and random diffusion
using equations (1), (2) and (3) respectively.
5. Modify the position of each krill individual using equation (5).
6. Apply crossover and mutation to modify the position of each krill individual using equations (6) and
(7) respectively.
7. Check the limits of the individual capacitor size variables; if a limit is violated, set the variable to its
minimum or maximum value.
8. Check the fitness function; an infeasible solution is replaced by the solution with the best previously
visited position.
9. Go to step 3 and repeat the procedure from steps 3 to 8 until the maximum number of iterations is
reached.
Here the stopping criterion is the number of iterations.
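The compact Python sketch below illustrates the flow of steps 1-9. The fitness call stands in for the load-flow loss evaluation of step 3, and the simplified motion update collapses equations (1)-(5) into one attraction-plus-noise step, so this is an illustration of the loop structure under assumed parameters rather than a faithful KH implementation.

    import random

    def kh_capacitor_placement(fitness, n_krill, dims, lo, hi, iters, step=0.1):
        """Illustrative skeleton of steps 1-9: fitness(position) should run
        the load flow and return the system loss for a candidate vector of
        capacitor sizes/locations (step 3). The motion update is simplified."""
        pop = [[random.uniform(lo, hi) for _ in range(dims)] for _ in range(n_krill)]
        best = min(pop, key=fitness)
        for _ in range(iters):
            for krill in pop:
                for d in range(dims):
                    # Simplified induced + foraging motion: drift toward the
                    # best krill found so far, plus random physical diffusion.
                    krill[d] += step * (best[d] - krill[d]) \
                                + step * random.uniform(-1, 1) * (hi - lo)
                    # Step 7: clamp capacitor-size variables to their limits.
                    krill[d] = min(max(krill[d], lo), hi)
            candidate = min(pop, key=fitness)
            # Step 8: keep the best previously visited position.
            if fitness(candidate) < fitness(best):
                best = list(candidate)
        return best

    # Toy usage: minimise a stand-in loss function over 3 capacitor sizes.
    print(kh_capacitor_placement(lambda p: sum(v * v for v in p),
                                 n_krill=10, dims=3, lo=0.0, hi=5.0, iters=50))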


D. Flowchart for the krill herd algorithm application to capacitor problem


IV. TEST CASE AND RESULT COMPARISON


The krill herd algorithm is implemented on the 10-bus and 85-bus IEEE radial test systems. The software
program is developed in the MATLAB 2009a environment and executed on an Intel Core i3-2120 CPU
at 3.30 GHz. The results are explained below in detail.
Case 1: 10 bus system
The first test system, taken from [15], is a 23 kV RDS with 10 buses and 9 branches and a total load of
(12.368 + j4.186) MVA. The uncompensated system losses are 783.7895 kW. After the application of the
proposed algorithm, the system is compensated with 4908.8396 kVAr and the system losses decrease to
682.7651 kW. Table (1) reveals that the results are efficient when compared to the PSO algorithm [10].
Fig. (1) shows the power loss reduction with respect to iterations and Fig. (2) illustrates the voltage
improvement after the capacitor compensation.
Table (1)

Fig. (1) Power loss reduction for 10 bus system

Fig. (2) Bus voltage improvement with and without capacitor


Case 2: 85 bus system

An 11 kV rural RDS having 85 buses with a total load of (2549.56 + j2601.066) kVA, taken from [16], is
used as the large-scale test system. The total losses without capacitor compensation are 311.4145 kW. The
network losses reduce to 142.7877 kW after the application of capacitors with a total capacity of
2322.6632 kVAr at various nodes. The results are given in Table (2) and compared with the GSA [3] and
the bio-inspired algorithms of [8]. Fig. (3) shows the convergence property and Fig. (4) highlights the
voltage improvement after the capacitor compensation.
Table (2)


Fig. (3) Power loss reduction for 85 bus system

Fig. (4) Bus voltage improvement with and without capacitor


V. CONCLUSION

This paper has presented the krill herd algorithm for the capacitor placement and sizing problem in radial
distribution systems. The results reveal that the algorithm gives the optimal solution when compared to
other methods from the literature. Reconfiguration, and including the system cost as an objective alongside
loss, are the future scope of this work.
REFERENCES
[1] Shilpa Kalambe and Ganga Agnihotri, "Loss minimization techniques used in distribution network: bibliographical survey," Renewable and Sustainable Energy Reviews, Vol. 29, pp. 184-200, 2014.
[2] M. M. Aman, G. B. Jasmon, A. H. A. Bakar, H. Mokhlis and M. Karimi, "Optimum shunt capacitor placement in distribution system - A review and comparative study," Renewable and Sustainable Energy Reviews, Vol. 30, pp. 429-439, 2014.
[3] Y. Mohamed Shuaib, M. Surya Kalavathi and C. Christober Asir Rajan, "Optimal capacitor placement in radial distribution system using Gravitational Search Algorithm," Electrical Power and Energy Systems, Vol. 64, pp. 384-397, 2015.
[4] Marjan Tavakoli, Mahdi Mozaffari Legha, Sajad Salehi Sarbijan and Mehran Montazer, "Capacitor placement in radial distribution system for improve voltage profile and loss reduction using artificial bee colony," WALIA Journal, Vol. 31(S3), pp. 44-49, 2015.
[5] Ji-Pyng Chiou and Chung-Fu Chang, "Development of a novel algorithm for optimal capacitor placement in distribution systems," Electrical Power and Energy Systems, Vol. 73, pp. 684-690, 2015.
[6] K. R. Devabalaji, K. Ravi and D. P. Kothari, "Optimal location and sizing of capacitor placement in radial distribution system using Bacterial Foraging Optimization Algorithm," Electrical Power and Energy Systems, Vol. 71, pp. 383-390, 2015.
[7] Felipe G. Duque, Leonardo W. de Oliveira, Edimar J. de Oliveira, Andre L. M. Marcato and Ivo C. Silva Jr., "Allocation of capacitor banks in distribution systems through a modified monkey search optimization technique," Electrical Power and Energy Systems, Vol. 73, pp. 420-432, 2015.
[8] Satish Kumar Injeti, Vinod Kumar Thunuguntla and Meera Shareef, "Optimal allocation of capacitor banks in radial distribution systems for minimization of real power loss and maximization of network savings using bio-inspired optimization algorithms," Electrical Power and Energy Systems, Vol. 69, pp. 441-455, 2015.
[9] Neeraj Kanwar, Nikhil Gupta, Anil Swarnkar, K. R. Niazi and R. C. Bansal, "New Sensitivity based Approach for Optimal Allocation of Shunt Capacitors in Distribution Networks using PSO," Energy Procedia, Vol. 75, pp. 1153-1158, 2015.
[10] Chu-Sheng Lee, Helon Vicente Hultmann Ayala and Leandro dos Santos Coelho, "Capacitor placement of distribution systems using particle swarm optimization approaches," Electrical Power and Energy Systems, Vol. 64, pp. 839-851, 2015.
[11] A. Y. Abdelaziz, E. S. Ali and S. M. Abd Elazim, "Optimal sizing and locations of capacitors in radial distribution systems via flower pollination optimization algorithm and power loss index," Engineering Science and Technology, an International Journal, 2015.
[12] D. Shirmohammadi, H. W. Hong, A. Semlyen and G. X. Luo, "A compensation-based power flow method for weakly meshed distribution and transmission networks," IEEE Transactions on Power Systems, Vol. 3, pp. 753-762, May 1988.
[13] Amir Hossein Gandomi and Amir Hossein Alavi, "Krill herd: A new bio-inspired optimization algorithm," Communications in Nonlinear Science and Numerical Simulation, Vol. 17, pp. 4831-4845, 2012.
[14] Barun Mandal, Provas Kumar Roy and Sanjoy Mandal, "Economic load dispatch using krill herd algorithm," Electrical Power and Energy Systems, Vol. 57, pp. 1-10, 2014.
[15] S. K. Goswami, T. Ghose and S. K. Basu, "An approximate method for capacitor placement in distribution system using heuristics and greedy search technique," Electric Power Systems Research, Vol. 51, pp. 143-151, 1999.
[16] D. Das, D. P. Kothari and A. Kalam, "Simple and efficient method for load flow solution of radial distribution networks," Electrical Power and Energy Systems, Vol. 17, No. 5, pp. 335-346, 1995.


