PROCEEDINGS OF
INTERNATIONAL CONFERENCE ON
ADVANCES IN ELECTRICAL, ELECTRONICS AND COMPUTATIONAL
INTELLIGENCE (ICAEECI16)
Organized by
In Association With
Lion Dr. K.S. Rangasamy, MJF
Founder Chairman
KSR Institutions
In today's fast-changing world there is a demand for new technologies and innovations in every
sphere of industry. The ideas that feed the ever-growing demand for new designs and applications are derived
from the intensive efforts put in by scientists and researchers all over the world who work enthusiastically
for the upliftment of society.
I am delighted to note that the accepted papers are being published by leading International Journals.
I wholeheartedly appreciate the sincere efforts of the entire team of ICAEECI'16 and wish the conference every success.
Thiru.R.Srinivasan
Chairman & Managing Trustee,
K.S.R. College of Engineering
The K.S.R. College of Engineering was started in 1998 with the vision to produce the most
competent scientists, engineers, entrepreneurs, managers and researchers through quality education.
Being located in a rural setup, it caters to the needs of rural students, and over time it has attracted students
from various countries.
"One must forever strive for excellence, or even perfection, in any task however small it may be,
and never be satisfied with second best" - this commitment to quality and standards is the baseline behind
the host institution's success in every venture it has undertaken, and the institution is now taking up the task
of conducting the first ICAEECI'16 conference.
The various invited speakers, guests, scholars and delegates from different countries who will be
playing an active role in this conference will surely find the best hospitality and a very comfortable stay.
I wholeheartedly congratulate the entire team of ICAEECI'16 for their efforts and wish the conference every success.
PRINCIPAL'S MESSAGE
Dr.K.Kaliannan
Principal,
K.S.R.College of Engineering
It is pleasing to note that our K.S.R. College of Engineering is hosting this prestigious International Conference.
It is right to acknowledge and place on record the magnanimous support provided by the
management of K.S.R. College of Engineering, especially the Chairman of K.S.R. Institutions and the
Managing Trustee of our college.
The hallmark of this event is best brought out by the selection of 8 leading International Journals
and by the good response of around 320 papers received from coveted areas of refereed research
activities around the globe. All the papers were reviewed by over a hundred reviewers.
Making this mega conference a resounding success lies in the planned hard work of many. It is my
duty to appreciate those who are behind this conference for taking a step ahead to make a mark in research
and technological advancements.
Dr.K.Kaliannan
DEAN'S MESSAGE
Dr.A.Krishnan,
K.S.R.College of Engineering
The Departments of Electrical and Electronics Engineering, Electronics and Communication
Engineering and Computer Science Engineering are organizing an International Conference on Advances
in Electrical, Electronics and Computational Intelligence (ICAEECI'16) at our campus on 22.02.2016.
I hope that many engineers and researchers will participate in this technical event. It will help the
participants to improve their technical know-how, leadership qualities and organizing skills, and to become
meaningful entrepreneurs.
This conference enables the participants to apply their knowledge in the relevant fields and helps
them to bring out novel techniques in emerging areas. The program aids them in shaping their future and
caters to all their needs.
R.Jayaprakash
CHIEF-IN-EDITOR
IJETCSE
On behalf of the conference committee, I am honored to invite you all to the International
Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI'16).
R.Jayaprakash
Conveners
Dr.P.S.Periasamy, Prof.
Dr. S. Ramesh, Prof.
Co-Ordinators
Dr. T.R. Sumithira
Mrs. K.Yamuna
Dr. S. Karthikeyan
Dr. N.S. Nithya
Organising Committee
Dr. P. SUGANYA
Dr.C.GOWRI SHANKAR
Mrs. B.YUVARANI
Mrs. E.VANI
Dr.A.MAHESWARI
Mr. R.CHANDRASEKAR
Dr.C.KARTHIKEYAN
Dr. R.SANKARGANESH
Mr. K.R.NANDHAGOPAL
Dr.V.RAVI
Mr.M.VIJAYAKUMAR
Mr.K.P.SURESH
Dr. M.VIJAYAKUMAR
Dr.G.VIJAYAKUMAR
Mr.S.GOWTHAM
Dr. R. GOPALAKRISHNAN
Mr. S.CHINNAIYA
Dr.P.SATHISHKUMAR
Mr. P.SUNDARAVADIVEL
Mrs. S.THIRUVENI
Mr.S.GOPINATH
Dr.M.RAMASAMY
Mrs S.MAHALAKSHMI
Mr.R.KUMARESAN
Mrs. N.NISSANTHI
Mr.C.PAZHANIMUTHU
Mr.K.PRAKASAM
Ms.M.MUTHULAKSHMI
Mrs. M.SORNALATHA
Mr. J.THIYAGARAJAN
Mr. S. AROCKIASAMY
Mrs.R.JEYANTHI
Mr.E.KANNAN
Mrs.K.GOWRI
Mrs. R. POORNIMA
Mrs. S. POONGODI
Mrs. S. JEYABHARATHI
Ms. K. KIRUBA
Mrs.V.M.JANAKI
Ms.S.PREMALATHA
Mr. L. RAJA
Ms.A.LAVANYA
Mr.K.KARUPPANASAMY
Mrs.T.KAVITHA
Mr.R.VEERAMANI
Dr. J. GNANAMBIGAI
Mr. M. SUBRAMANI
Mr.S.MANOHARAN
Mrs. P. THILAGAVATHI
Mr. P. BALAKRISHNAN
Mr.A.SURENDAR
Mr. G. SENTHILKUMAR
Mr.P.SIVAKUMAR
MR.R.MAHENDRAN
Mr.C.KARTHIK
Mr. R. ESWARAMOORTHI
Mr. S. SENTHILKUMAR
Mr.K.P.UVARAJAN
Mr. J. RAMESHKUMAR
Mr.G.S.MURUGAPANDIAN
Mr.M.RAJASEKAR
Mr.S.VELMURUGAN
Mr.S.KRISHNAKUMAR
Mr. M. JOTHIMANI
Mr.S.VADIVEL
Dr. R.VELUMANI
Ms. V.VENNILA
Mr.K. KUMARESAN
Dr. A.VISWANANTHAN
Ms. G.S.RIZWANABANU
Mr.C.THIRUMALAISELVAN
Dr. E.BABYANITHA
Ms. S.SUGANYA
Mr.M.SUKUMAR
Mr. G. SIVASELVAN
Ms. S.REVATHY
Mr.T.SARANSUJAI
Dr. P.SIVAKUMAR
Ms.M.UMAMAHESWARI
Mr. M.PRAKASH
Ms. K.THAMARAISELVI
Ms.S.SAVITHA
Mr. G.NAGARAJAN
Ms. K.NITHYA
Ms.S.SENBHAGA
Mr. T.SASI
Mr. A.R.SURENDRAN
Ms.D.SANDHIYA
Mr. J.SANTOSH
Mrs. V.SHARMILA
Mr. M.SUDHARSAN
Mr. G.T.RAJAGANAPATHI
Dr.M.SOMU
Mr. A.MUMMOORTHY
Mr. K.DINESHKUMAR
Dr.P.BALAMURUGAN
Mr. P.PRAKASH
Mr. S.ANGURAJ
Dr.S.NITHYAKALYANI
Mr. C.ANAND
Mr. V. SENTHILKUMAR
Dr. M.TAMILARASI
Mr. G.KARTHIK
Mr. S.SIVAPRAKASH
Mr. S. SELVANAYAGAM,
Organizing Head, IJETCSE
Mr. S. GOVINDASAMY,
Co-Ordinator,IJETCSE
PROGRAMME SCHEDULE (BOARD: EEE)
Date: 22-02-2016

08.45-09.30 am : Registration (Main Block)
09.30-10.30 am : Inaugural - Dr. Kannan Jegathala Krishnan, Professor, Victoria University, Australia;
                 Dr. V.N. Mani, Scientist-E, C-MET, Govt. of India, Hyderabad
10.30-10.50 am : Tea Break
11.00-01.00 pm : Technical Session-I - Hall No.1 (Main Block): Dr.M.R, Dr.G.V;
                 Hall No.2 (Main Block): Dr.C.G.S, Dr.M.V
01.00-02.00 pm : Lunch (KSRCE Boys Hostel)
02.00-04.00 pm : Technical Session-II - Hall No.1 (Main Block): Dr.M.R, Dr.G.V;
                 Hall No.2 (Main Block): Dr.C.G.S, Dr.M.V
04.00-04.15 pm : Tea Break
04.15-04.45 pm : Valedictory
PROGRAMME SCHEDULE (BOARD: CSE)
Date: 22-02-2016

08.45-09.30 am : Registration (Main Block)
09.30-10.30 am : Inaugural - Dr. Kannan Jegathala Krishnan, Professor, Victoria University, Australia;
                 Dr. V.N. Mani, Scientist-E, C-MET, Govt. of India, Hyderabad
10.30-10.50 am : Tea Break
11.00-01.00 pm : Technical Session-I - Hall No.6 (Mechanical Block):
                 ICAEECI 001, 006, 013, 098, 192, 201, 202, 205, 011, 046
01.00-02.00 pm : Lunch
02.00-04.00 pm : Technical Session-II - Hall No.5 (Mechanical Block): ICAEECI 047, 048, 049, 052, 053;
                 Hall No.6 (Mechanical Block): ICAEECI 055, 064, 069, 135
04.00-04.15 pm : Tea Break
04.15-04.45 pm : Valedictory
PROGRAMME SCHEDULE (BOARD: ECE)
Date: 22-02-2016

08.45-09.30 am : Registration (Main Block)
09.30-10.30 am : Inaugural - Dr. Kannan Jegathala Krishnan, Professor, Victoria University, Australia;
                 Dr. V.N. Mani, Scientist-E, C-MET, Govt. of India, Hyderabad
10.30-10.50 am : Tea Break
11.00-01.00 pm : Technical Session-I
01.00-02.00 pm : Lunch
02.00-04.00 pm : Technical Session-II
04.00-04.15 pm : Tea Break
04.15-04.45 pm : Valedictory
CONTENTS
[The table of contents lists the accepted papers with serial numbers, authors and page numbers
(pp. 16-511); most paper titles were lost in extraction. Among the recoverable entries:
"Measurement of Temperature using RTD and Software Signal Conditioning using LabVIEW"
by C. Nandhini and M. Jagadeeswari (p. 281).]
Abstract - The Hybrid Optimization Model for Electric Renewables (HOMER) is widely used in many
countries for designing and analyzing hybrid renewable energy systems, in either grid-connected or
off-grid environments, with inputs such as solar photovoltaics, wind turbines, batteries, H2 generators
and conventional generators. As distributed generation and hybrid renewable energy systems continue
to grow, HOMER also serves to mitigate the financial risk of such projects.
This paper focuses on the simulation and optimization of an implemented 4.5kW Wind/Solar
micro-generation system to obtain the most cost-effective component sizes and the project
economics for an 8.4MWh/d load with an 827kW peak. The methodology and simulation model of the 4.5kW
micro-generation system are presented. Using the collected climatic data of 5 Victorian
suburbs, the load profile from the Department of Facilities, details of system components from the
Power Systems Research Laboratory at Victoria University, Melbourne, and electricity tariffs as inputs,
the economic, technological and environmental performance is examined. The benefit of
using HOMER as a micro-power optimization model and the determination of realistic financing for
renewable energy or energy efficiency projects are also presented.
Index Terms - HOMER, climatic data, load profile, Cost of Energy (COE), Net Present
Cost (NPC), Wind/Solar micro-generation system
I. INTRODUCTION
The study in this paper aims to investigate the economic, technical and environmental performance
of the implemented 4.5kW Wind/Solar micro-generation under Australian (Victorian) climatic conditions.
Using global solar irradiation and wind speed as solar and wind energy data, load data (Building D, Level
5 at Victoria University, Footscray Park Campus), the price of PV array, Vertical Axis Wind Turbine
(VAWT), converters, grid electricity tariff and sale-back tariff as inputs of economic analysis, 4.5kW Wind/
Solar micro-generation system was simulated and optimized by Hybrid Optimization Model for Electric
Renewable (HOMER) [1]. Section 2 presents the methodology, simulation model, system simulation tool,
components modeling and system optimization problem. Section 3 provides the study locations and their
climatic data, load profile, details of system components and electricity tariff. Section 4 highlights the
economic, technological and environmental results. Finally, Section 5 provides the summary of this paper.
II. METHODOLOGY
The implemented 4.5kW Wind/Solar micro-generation system in Power Systems Research
Laboratory at Victoria University was undertaken for research in this paper by using computer-based
energy simulation tool. Simulation software and system optimization objective [1] are introduced in this
section with economic data, collected weather data and load data as inputs.
A. Simulation model
The 4.5kW Wind/Solar micro-generation system, as shown in Fig. 1, has a direct current (DC) 1.5kW
PV array and an alternating current (AC) 3kW VAWT as the energy generators. The system also has a
rectifier converting electricity from AC to DC and an inverter converting electricity from DC to AC,
since the load and the grid are AC.
The HOMER software was used to model the implemented 4.5kW Wind/Solar micro-generation
system. It is a micro-power optimization model, developed by the National Renewable Energy Laboratory
(NREL), US, that simplifies the task of designing distributed generation (DG) systems, both on- and off-grid.
HOMER simulates the operation of the system and performs energy balance calculations for each system
configuration. It also estimates the cost of installing and operating the system over the lifetime of the project
and supplies the optimized system configuration and component sizing. For simulation and optimization
of conventional and renewable energy systems, HOMER is widely used in many countries [2]. The 4.5kW
Wind/Solar micro-generation system is simulated by the HOMER coding as shown in Fig. 2.
C. Components modeling
Solar energy is converted into DC electricity by the PV array in direct proportion to the solar
radiation incident upon it [6.1]. The PV array is placed at a tilt angle of 30° in order to achieve a higher
insolation level. HOMER calculates the PV array output [2] as shown
in (1) and the radiation incident on the PV array [2] as shown in (2).
P_PV = Y_PV f_PV (I_T / I_S)    (1)
where Y_PV is the rated capacity of the PV array (kW), f_PV the PV derating factor, I_T the solar
radiation incident on the PV array (kW/m2) and I_S the standard radiation level (1 kW/m2). The incident
radiation I_T in (2) is derived from the resource data, the tilt angle and the diffuse and beam components
of the global radiation.
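The proportional PV output relation in (1) can be sketched as follows. This is a simplified form that ignores temperature effects; the function and parameter names are ours, not HOMER's.

```python
def pv_output_kw(y_pv_kw, derating, incident_kw_m2, stc_kw_m2=1.0):
    """HOMER-style PV output: rated capacity scaled by the derating factor
    and the ratio of incident to standard-test-condition radiation."""
    return y_pv_kw * derating * (incident_kw_m2 / stc_kw_m2)

# Example: the paper's 1.5 kW array, an assumed 80% derating factor,
# and 0.6 kW/m2 incident radiation gives roughly 0.72 kW output.
print(pv_output_kw(1.5, 0.80, 0.6))
```

The derating factor lumps together losses such as soiling, wiring and inverter inefficiency; 80% here is purely illustrative.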
For wind modeling, the baseline data is a set of 8,760 values representing the average wind
speed for each hour of the year, expressed in meters per second. From twelve average wind speed
values, one for each month of the year, HOMER builds a set of 8,760 values, one wind speed value for
each hour of the year. The synthesized data sequence has the specified Weibull distribution, autocorrelation,
and seasonal and daily patterns [1-5].
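A minimal sketch of this synthesis, using the standard library's Weibull sampler, might look as follows. It only matches each month's mean wind speed; the autocorrelation and diurnal patterns HOMER imposes are omitted, and the shape parameter k = 2 is an assumption.

```python
import math
import random

def synthesize_hourly_speeds(monthly_means, shape_k=2.0, seed=42):
    """Draw one Weibull-distributed wind speed (m/s) per hour of the year.
    The scale parameter is chosen so each month's expected value matches
    its given mean: E[Weibull] = scale * Gamma(1 + 1/k)."""
    rng = random.Random(seed)
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    gamma_factor = math.gamma(1.0 + 1.0 / shape_k)
    speeds = []
    for mean, days in zip(monthly_means, days_in_month):
        scale = mean / gamma_factor          # Weibull scale (m/s)
        speeds.extend(rng.weibullvariate(scale, shape_k)
                      for _ in range(days * 24))
    return speeds

speeds = synthesize_hourly_speeds([5.2] * 12)
print(len(speeds))  # 8760 hourly values
```

A non-leap year of 365 days yields exactly the 8,760 values the text describes.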
U_hub = U_anem ln(z_hub / z_0) / ln(z_anem / z_0)    (3)
where U_hub is the wind speed at the hub height, U_anem the measured wind speed at the anemometer
height, z_hub and z_anem the hub and anemometer heights, and z_0 the surface roughness length.
Table 2 Global clearness index and daily radiation for Victoria, Australia [2], [6]
Fig. 3 Global solar radiation (kW/m2) per annum of Victoria, Australia [2], [6]
Table 3 Global wind speed data for selected Victorian suburbs, Australia [7]
Fig. 4 Wind resources: hourly wind speed data per annum for Melbourne [7]
Fig.10 Victoria University, Footscray Park Campus access and mobility map [9]
D. Electricity tariff
The electricity involved in the 4.5kW Wind/Solar micro-generation system includes electricity
purchasing tariff and electricity sale-back tariff (feed-in-tariff) [10-11]. When a continuous supply of
electricity day or night is provided by the grid to all domestic and commercial appliances, the users need to
pay the electricity bill calculated by different types of tariffs. Taking an example of the AGL Energy tariff,
single rate meter tariff is 29.238 c/kWh inclusive of GST, two rate meter tariff which allows a permanently
wired storage hot water unit to be heated overnight then for 8 hours each night is 19.701 c/kWh inclusive
of GST and a time of use meter that measures electricity during peak and off-peak times is 36.850 c/kWh
and 20.471c/kWh [10].
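The tariff arithmetic above can be illustrated with a small helper; the rates are the AGL figures quoted in the text, and the function names and the daily-consumption split are our own illustrative assumptions.

```python
# Rates quoted in the text (AGL Energy example), in $/kWh incl. GST.
SINGLE_RATE  = 0.29238   # single-rate meter
PEAK_RATE    = 0.36850   # time-of-use, peak
OFFPEAK_RATE = 0.20471   # time-of-use, off-peak

def annual_bill_single(kwh_per_day):
    """Annual cost under the flat single-rate meter tariff."""
    return kwh_per_day * 365 * SINGLE_RATE

def annual_bill_tou(peak_kwh_per_day, offpeak_kwh_per_day):
    """Annual cost under the time-of-use meter tariff."""
    return 365 * (peak_kwh_per_day * PEAK_RATE
                  + offpeak_kwh_per_day * OFFPEAK_RATE)

# Example: 10 kWh/day, or the same load split 4 kWh peak / 6 kWh off-peak.
print(round(annual_bill_single(10), 2))
print(round(annual_bill_tou(4, 6), 2))
```

Comparing the two totals shows how shifting consumption off-peak changes the bill, which is the practical point of the two-rate and time-of-use tariffs.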
The system's life-cycle cost is represented by the total NPC, which HOMER calculates [2] as shown
in (4):
C_NPC = C_ann,tot / CRF(i, R_proj)    (4)
CRF(i, N) = i (1 + i)^N / ((1 + i)^N - 1)    (5)
where C_ann,tot is the total annualized cost ($/yr), i the annual real interest rate, R_proj the project
lifetime (yr) and CRF the capital recovery factor.
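The NPC calculation, annualized cost divided by the capital recovery factor, can be sketched directly; the interest rate and project lifetime in the example are our own illustrative values.

```python
def crf(i, n):
    """Capital recovery factor for real interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def total_npc(annualized_cost, i, project_years):
    """HOMER-style total net present cost: total annualized cost ($/yr)
    divided by the capital recovery factor."""
    return annualized_cost / crf(i, project_years)

# Example: $1,000/yr annualized cost, 6% real interest, 25-year project.
print(round(total_npc(1000.0, 0.06, 25), 2))
```

A longer project lifetime lowers the CRF, so the same annualized cost corresponds to a larger NPC.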
The energy originated from renewable power sources is referred to as renewable fraction
and HOMER calculates the renewable fraction [2] as shown in (6).
f_ren = E_ren / E_served    (6)
where E_ren is the electrical production from renewable sources and E_served the total electrical load
served (kWh/yr).
A shortfall between the required operating capacity and the amount of operating capacity the
system can provide is defined as the capacity shortage. HOMER calculates the capacity shortage over the
year [6.2]. The ratio between the total capacity shortage and the total electric load is known as the capacity
shortage fraction, and HOMER calculates the capacity shortage fraction [2] as shown in (7).
f_cs = E_cs / E_served    (7)
where E_cs is the total capacity shortage and E_served the total electric load (kWh/yr).
The performance profiles of the optimized systems of all 5 Victorian suburbs for grid connected system
with 9.6kW converter size, grid connected system with 4.5kW converter size and off-grid system are
shown in Tables (6-19). From Tables (6-19) the system configuration, component sizing, initial capital
cost, operating cost, total NPC, COE, renewable fraction and capacity shortage can be found.
A. Economic performance
The grid connected system with 9.6kW converter size which is implemented in Power Systems
Research Laboratory at Victoria University has the best economic performance in Melbourne as shown in
Table 6 (since the system shows the minimum NPC) and the least economic performance being in Nhill as
shown in Table 12 (since the system shows the maximum NPC).
Similar results are obtained for the grid connected system with 4.5kW converter size: Melbourne
has the best economic performance as shown in Table 7 and Nhill has the least as shown in Table 13.
The initial capital cost, the operating cost and the total NPC of the 4.5kW converter size for Melbourne
and Nhill, as shown in Tables (7 and 13), are less than those of the 9.6kW converter size, as shown in
Tables (6 and 12), because of the difference in price of the converters (the 4.5kW converter is $5000
cheaper than the 9.6kW converter).
The COE for the grid connected system with both 9.6kW and 4.5kW converter sizes for all 5 suburbs
(Melbourne, Mildura, Nhill, Sale and Broadmeadows) is 0.279 $/kWh. For the off-grid system, Melbourne
has the lowest COE of 0.236 $/kWh, while Nhill and Sale have the highest COE of 0.239 $/kWh, as shown
in Tables (8, 14 and 17).
B. Technological performance
The system configuration and components for the grid connected system with both 9.6kW and
4.5kW converter sizes are almost the same for all five suburbs.
For the off-grid system with 1.5kW PV and 400 units of 3kW VAWT, the capacity shortage varies across
the 5 suburbs. Mildura has the least capacity shortage of 22% as shown in Table 11; Melbourne and
Broadmeadows have 24% as shown in Tables (8 and 20); Nhill and Sale have the most, 25%, as shown
in Tables (14 and 17).
For all five suburbs, if the capacity shortage is decreased by 5% to 10%, the COE increases rapidly. To
keep the COE of the off-grid system equal to that of the grid connected system, the capacity shortage
(22% - 25%) has to be met from batteries, diesel generators or the grid.
Fig. 13 Capacity shortage for Victorian Suburbs (COE at 0.236 to 0.239 $/kWh)
REFERENCES
[1]
[2]
NREL, HOMER Getting Started Guide for HOMER Version 2.1, tech. rep., National Renewable
Energy Laboratory, operated for the U.S. Department of Energy Office of Energy Efficiency and
Renewable Energy, 2005.
[3]
[4] Z. Simic, V. Mikulicic, "Small wind off-grid system optimization regarding wind turbine power
curve," AFRICON 2007, pp.1-6, 26-28 Sept. 2007.
[5] M. Moniruzzaman, S. Hasan, "Cost analysis of PV/Wind/diesel/grid connected hybrid systems,"
International Conference on Informatics, Electronics & Vision (ICIEV), vol., no., pp.727-730, 18-19
May 2012.
[6] NASA's Surface Meteorology and Solar Energy. [Online]. Viewed 2013 June 02. Available: http://
eosweb.larc.nasa.gov/sse/
[7] Weatherbase. [Online]Viewed 2013 June 21. Available: http://www.weatherbase.com
[8] G. M. Masters, Renewable and Efficient Electric Power Systems, New York: Wiley, 2004.
[9] Footscray Park Campus Access and Mobility Map. [Online]. Viewed 2013 June 22.
Available: http://www.vu.edu.au/sites/default/files/facilities/pdfs/footscray-park-access-and-mobilitymap.pdf
[10] Department of Environment and Primary Industries. [Online]. Viewed 2013 June 21. Available:
http://www.dpi.vic.gov.au/home.
Abstract - Heterogeneous grid environments are well suited to scientific and engineering
applications that require large computational demands. The problem of optimal mapping,
that is, selecting the appropriate resource and scheduling the tasks in order onto the resources of a
distributed heterogeneous grid environment, has been shown in general to be NP-complete.
NP-complete problems require the development of heuristic techniques to identify the best possible
solution. In this paper, a new heuristic scheduling algorithm called the Credit Score Tasks Scheduling
Algorithm (CSTSA) is proposed. It aims to maximize resource utilization and minimize the
makespan. The strategy of CSTSA is to identify the appropriate resource and the order in which the set
of tasks is mapped to the selected resource; the order is determined by the credit score of each task.
Experimental results show that the proposed CSTSA outperforms the Min-min heuristic
scheduling algorithm in terms of resource utilization and makespan.
Index Terms - Computational Grid, Grid scheduling, Heuristic, Makespan.
I. INTRODUCTION
Grid is an infrastructure that enables the integrated and collaborative use of technologies such as
computers, networks, databases and scientific instruments owned and managed by multiple
organizations [2,4]. It is globally distributed and consists of heterogeneous, loosely coupled data and
resources. The grid is a dynamic environment, so resources may change frequently. Middleware is an
important layer in grid computing that divides a program into pieces distributed among several computers.
A computational grid is defined as a distributed infrastructure that, to an end user, divides a job
among individual machines, runs the calculations in parallel and returns the results to the originating
machine. Scheduling has a direct impact on the performance of grid applications. One important
challenge in task scheduling is to allocate optimal resources to a job so as to minimize its computation
time. Several heuristic task scheduling algorithms have been developed. Tasks arrive dynamically and
the scheduler must allocate resources effectively, which is a tedious process [9,10,11].
The Opportunistic Load Balancing (OLB) algorithm assigns each task, in arbitrary order, to the next
available machine, without considering the ETC of that task on the machine, i.e. regardless of its expected
execution time there [5,6].
The Minimum Execution Time (MET) algorithm assigns each task, in arbitrary order, to the machine
with the minimum expected execution time for that task, without considering the resource availability or
the current load on that machine, in the hope of faster execution [1,7].
The Minimum Completion Time (MCT) algorithm assigns each task, in arbitrary order, to the
processor with the earliest expected completion time. The ETC of job j on processor p is added to p's
current schedule length to obtain the completion time of job j on processor p [1,7].
The Min-min algorithm calculates the expected completion time of each task on all processors, then
assigns the task with the overall minimum expected completion time to its corresponding resource [5,6,8].
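The Min-min procedure described above can be sketched in Python; the function name and the small ETC matrix in the example are ours.

```python
def min_min(etc):
    """Min-min scheduling over an ETC matrix (n tasks x m resources).
    Repeatedly picks the unscheduled task whose best completion time is
    smallest and assigns it to that resource."""
    n, m = len(etc), len(etc[0])
    ready = [0.0] * m                 # ready time of each resource
    unscheduled = set(range(n))
    order = []
    while unscheduled:
        best = None                   # (completion_time, task, resource)
        for t in unscheduled:
            for r in range(m):
                ct = ready[r] + etc[t][r]
                if best is None or ct < best[0]:
                    best = (ct, t, r)
        ct, t, r = best
        ready[r] = ct                 # resource busy until this time
        unscheduled.remove(t)
        order.append((t, r))
    return order, max(ready)          # schedule and makespan

order, makespan = min_min([[4, 2], [3, 5], [1, 6]])
print(order, makespan)  # [(2, 0), (0, 1), (1, 0)] 4.0
```

In the example, task 2 (best time 1 on resource 0) goes first, then task 0 to resource 1, then task 1 to resource 0, giving a makespan of 4.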
The Max-min algorithm is similar to the Min-min algorithm; it first calculates the minimum completion
time of each task, but then assigns the task with the maximum of these minimum completion times to its
corresponding resource [5,6,8].
The mapping of the n meta-tasks to the set of m heterogeneous resources is made based on the
following assumptions [1,7]:
A set of independent, non-communicating tasks called meta-tasks is being mapped.
Heuristics originate a static mapping.
Each resource executes a single independent task at a time.
The number of tasks to be scheduled and the number of heterogeneous resources in the grid
computing environment are static and known a priori.
The ETC (Expected Time to Compute) matrix represents the expected execution time of each task on
each resource. The ETC matrix is of size n*m, where n represents the number of meta-tasks and m
represents the number of heterogeneous resources.
ET_ij represents the expected execution time of task Ti on resource Rj.
The task set is represented as T = {T1, T2, T3, ..., Tn}.
The resource set is represented as R = {R1, R2, R3, ..., Rm}.
The accurate estimate of the expected execution time for each task on each resource is
contained within the ETC matrix.
TCT_ij - expected completion time of task Ti on resource Rj.
RT_j - ready time of resource Rj.
Makespan = max(TCT_ij)
The ETC matrix is computed by the formula
ETC_ij = Tasklength_i / power_j
where Tasklength_i represents the length of the task Ti in MI and power_j represents the
computing power of the resource Rj in MIPS.
The ready time of the resource Rj is the time at which Rj completes the execution of the
previously assigned tasks:
RT_j = sum of ETC_ij over all tasks Ti assigned to Rj
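The ETC and ready-time definitions above translate directly into code; the helper names and the example task lengths and resource speeds are ours.

```python
def build_etc(task_lengths_mi, resource_mips):
    """ETC[i][j] = length of task i (MI) / speed of resource j (MIPS)."""
    return [[length / mips for mips in resource_mips]
            for length in task_lengths_mi]

def ready_times(schedule, etc, m):
    """Ready time of each of the m resources: the finish time of the
    tasks assigned to it, executed back to back in schedule order."""
    ready = [0.0] * m
    for task, resource in schedule:
        ready[resource] += etc[task][resource]
    return ready

# Example: 2 tasks of 100 and 250 MI, 2 resources of 50 and 25 MIPS.
etc = build_etc([100, 250], [50, 25])
print(etc)                             # [[2.0, 4.0], [5.0, 10.0]] seconds
print(ready_times([(0, 0), (1, 0)], etc, 2))
```

With both tasks placed on resource 0, its ready time is 2.0 + 5.0 = 7.0 s while resource 1 stays idle.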
The proposed Credit Score Tasks Scheduling Algorithm considers two criteria for scheduling the
meta-tasks onto the resources. The two criteria considered for efficient scheduling are:
Task Execution Time Credit
Unique Value Credit for the meta-task
The proposed algorithm schedules the task with the highest credit score value to the resource that
provides the minimum completion time for that task.
C. Task Execution Time Credit
The steps involved in calculating the task execution time credit for a meta-task are
listed below:
1) From the ETC matrix, the maximum execution time of a task is identified:
MAXET = max(ET_ij), 1 ≤ i ≤ n, 1 ≤ j ≤ m
2) Credits are assigned to each task using the following formula:
If the highest unique value given to a task is a two digit number, then dv=100. If the highest unique
value given to a task is a three digit number, then dv=1000 and so on.
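The credit computation can be sketched as below. The paper's exact credit formula is not reproduced in this excerpt, so the rule used here, crediting each task by how far its best execution time falls below MAXET, is purely an illustrative placeholder, and the function name is ours.

```python
def execution_time_credit(etc):
    """Illustrative task-execution-time credit over an ETC matrix.
    ASSUMED rule (the source's formula is not available here):
    credit_i = MAXET - min_j ETC[i][j], so tasks with a fast best-case
    execution time receive a higher credit."""
    maxet = max(max(row) for row in etc)   # step 1: MAXET over all ET_ij
    return [maxet - min(row) for row in etc]

etc = [[4.0, 2.0], [3.0, 5.0], [1.0, 6.0]]
print(execution_time_credit(etc))  # MAXET = 6.0 -> [4.0, 3.0, 5.0]
```

Whatever the exact formula, the shape of the computation is the same: derive MAXET from the ETC matrix, then assign each task a scalar credit used later for ordering.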
MAXET=19.9
CV1=9.9
CV2=6.6
CV3=16.5
CV4=23.1
The Credit Score (CS_i) for each task t_i is computed using Algorithm 1 and the result is shown in Table 2.
Table 2 Credit Score for each Task
A Unique Value (UV) for each task is assigned at random in the range 1 to 10. The Unique Value Credit
(UVC) for each task is computed using Algorithm 2 and is shown in Table 3.
Table 3 Unique Value Credit for each Task
The tasks to be scheduled are ordered in the Credit Score Set CSS in descending order of TCS_i:
CSS = {T6, T3, T8, T2, T5, T9, T7, T10, T4, T1}
Now the tasks are scheduled to the resource with the minimum completion time. The makespan is
43.96 sec.
The order in which the tasks are scheduled, and the makespan obtained, for the Min-min algorithm
and the proposed Credit Score Tasks Scheduling Algorithm are shown in Table 5.
Table 5 A comparison between the Min-min Algorithm and the Credit Score Tasks Scheduling
Algorithm in makespan and task schedule order
B. Evaluation Parameters
Makespan
Tables 2-5 show the comparison of the makespan values obtained by Min-min and
CSTSA in all four instances: High Task High Resource, High Task Low
Resource, Low Task High Resource and Low Task Low Resource. The four instances are evaluated
for consistent, inconsistent, and semi-consistent (partially consistent) heterogeneous computing
systems. Figures 2-5 show the graphical representation of all four instances for the three
different consistencies.
REFERENCES
[1] T.Braun, H.Siegel, N.Beck, L.Boloni, M.Maheshwaran, A.Reuther, J.Robertson, M.Theys, B.Yao,
D.Hensgen, and R.Freund, "A Comparison Study of Static Mapping Heuristics for a Class of
Meta-tasks on Heterogeneous Computing Systems," in 8th IEEE Heterogeneous Computing
Workshop (HCW'99), pp. 15-29, 1999.
[2]
I.Foster and C. Kesselman, The Grid: Blueprint for a Future Computing Infrastructure, Morgan
Kaufmann Publishers, USA, 1998.
[3]
E.U.Munir, J.Li, and S.Shi, QoS Sufferage Heuristic for Independent Task Scheduling in Grid,
Information Technology Journal 6(8), pp. 1166-1170, 2007.
[4]
T.D. Braun, H.J. Siegel, N.Beck, "A Taxonomy for Describing Matching and Scheduling Heuristics
for Mixed-machine Heterogeneous Computing Systems," IEEE Workshop on Advances in Parallel
and Distributed Systems, West Lafayette, pp. 330-335, 1998.
[5]
R.Armstrong, D.Hensgen, and T.Kidd, The Relative Performance of Various Mapping Algorithms is
Independent of Sizable Variances in Run-time Predictions, In 7th IEEE Heterogeneous Computing
Workshop(HCW98), pp. 79-87, 1998.
[6]
R.F.Freund and H.J.Siegel, "Heterogeneous Processing," IEEE Computer, 26(6), pp. 13-17, 1993.
[7]
T.D.Braun, H.J.Siegel, and N.Beck, A Comparison of Eleven Static Heuristics for Mapping a Class
of Independent Tasks onto Heterogeneous Distributed Computing Systems, Journal of Parallel and
Distributed Computing 61, pp.810-837, 2001.
[9]
[10] G.K.Kamalam and V. Murali Bhaskaran, "An Improved Min-Mean Heuristic Scheduling
Algorithm for Mapping Independent Tasks on Heterogeneous Computing Environment,"
International Journal of Computational Cognition, Vol. 8, No. 4, pp. 85-91, 2010.
[11] G.K.Kamalam and V. Murali Bhaskaran, "New Enhanced Heuristic Min-Mean Scheduling
Algorithm for Scheduling Meta-Tasks on Heterogeneous Grid Environment," European Journal of
Scientific Research, Vol. 70, No. 3, pp. 423-430, 2012.
[12] H.Baghban and A.M. Rahmani, "A Heuristic on Job Scheduling in Grid Computing Environment,"
in Proceedings of the Seventh IEEE International Conference on Grid and Cooperative Computing,
pp. 141-146, 2008.
I. INTRODUCTION
In recent years, wind energy has become one of the most important and promising sources of renewable
energy. But the incorporation of a large amount of wind energy into a power network results in fluctuating
real power injection and varying reactive power absorption, which leads to voltage fluctuations and affects
the stability and power quality of the system. Flexible AC Transmission System (FACTS) devices can
address the variations created in the power system by such renewable resources and help to
improve its stability, power transfer capability and control of power flow. FACTS controllers provide the
necessary dynamic reactive power support and voltage regulation at the Point of Common Coupling (PCC).
Here the Unified Power Flow Controller (UPFC) is chosen for power quality improvement because the UPFC
allows simultaneous control of all three parameters of the power system, that is, the line impedance, voltage
magnitude and power angle. It is primarily used for independent control of real and reactive power in
transmission lines for flexible, fast, reliable and economic operation.
In WECS, the most commonly used generators are wound-rotor induction generators. Induction
generators draw reactive power from the main power grid and hence can cause voltage drops at the
PCC. Moreover, the input power to these induction machines is variable in nature, so the output
voltages fluctuate unacceptably.
Much research has been done on FACTS devices, with controllers such as the Static
Var Compensator and the STATic synchronous COMpensator discussed for improving the voltage
ride-through of induction generators [1]. The article [2] gives an approach based on Differential Evolution
for the optimal placement and parameter setting of a UPFC for improving power system security.
Control design to improve the dynamic performance of a wind-turbine induction generator unit is studied
in [3], and how FACTS devices can be used to improve power transfer capability using a fuzzy controller
is explained in [4]. Many authors have discussed power quality improvement in WECS, voltage
regulation, reactive power support and transient stability improvement [5-8].
In this paper, a UPFC control scheme is used with a grid-connected wind energy generation
system for power quality improvement, and it is simulated using MATLAB/SIMULINK. When a three-phase
to ground fault occurs, the voltage at the WECS terminals drops and the generated active power
falls. After fault clearance, the reactive power consumption increases, resulting in reduced voltages at the
PCC. Test case 1 considered is an IEEE 5-bus system and case 2 is a real-time grid system. The results
show that a UPFC connected at the terminals of the WECS improves the voltage at the PCC and the real
and reactive power, which in turn improves the power quality.
II. POWER QUALITY ISSUES
Perfect power quality means the voltage is continuous and sinusoidal with constant amplitude
and frequency. It is described in terms of voltage, frequency, and interruptions. Grid-connected
wind turbines do affect the power quality, which depends on the interaction between the grid
All three parameters of line power flow (line impedance, voltage, and phase angle) can
be simultaneously controlled by the Unified Power Flow Controller. The UPFC combines the features
of two devices, the STAtic synchronous COMpensator (STATCOM) and the Static Synchronous Series
Compensator (SSSC) [12]. These are two Voltage Source Converters connected, respectively, in shunt with
the transmission line through a shunt transformer and in series with the transmission line through a series
transformer, linked to each other by a common DC link that includes a storage capacitor. Filters are
connected across the capacitor to prevent the flow of harmonic currents generated by switching. At the
outputs of the converters, the transformers provide isolation, modify voltage/current levels, and prevent the
DC capacitor from being shorted by the operation of the various switches. Insulated Gate Bipolar Transistors
(IGBTs) with anti-parallel diodes are the power electronic devices used in the shunt and series converters.
The shunt inverter regulates the voltage at the point of connection by injecting an appropriate reactive
power flow into the line and balances the real power exchanged between the series inverter and the
transmission line. The series inverter controls the real and reactive line power flow by inserting a voltage
of controllable magnitude and phase in series with the transmission line. Thereby, the UPFC can fulfill the
functions of reactive shunt compensation, active and reactive series compensation, and phase shifting.
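As an illustration of how the series-injected voltage shifts line power flow, a minimal per-unit sketch of the lossless-line power equation follows; the numerical values and the simple angle-boost interpretation are illustrative assumptions, not the paper's test system:

```python
import math

def line_active_power(v_send, v_recv, delta_rad, x_line):
    """Active power over a lossless line (per-unit): P = Vs*Vr*sin(delta)/X."""
    return v_send * v_recv * math.sin(delta_rad) / x_line

# Baseline: 1.0 pu at both ends, 30 degree angle, X = 0.5 pu
p0 = line_active_power(1.0, 1.0, math.radians(30), 0.5)

# A series-injected voltage that effectively raises the angle to 40 degrees
p1 = line_active_power(1.0, 1.0, math.radians(40), 0.5)

print(round(p0, 3), round(p1, 3))  # 1.0 1.286 -> more power transferred
```

The same expression shows why controlling any of the three parameters (voltage magnitudes, angle, or effective reactance) changes the transferred power.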
In order to control the bus voltage, the sending-end voltage (Vs) is measured instantaneously and
subtracted from its reference value (Vs_ref), which gives the error. This error signal is given as input to a
PI block [13]. The output of the PI controller gives the magnitude of the injected shunt voltage. Similarly,
the DC link capacitor voltage (Vdc) is measured and subtracted from its reference value (Vdc_ref) to get an
error, which is given as input to a PI block to obtain the angle. The Pulse Width Modulation (PWM)
technique is used to generate the pulses for the IGBTs: the reference signal is compared with a carrier
(triangle) signal, and the outputs of the comparators are given to the converter switches as firing signals.
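The PI-plus-PWM loop described above can be sketched as follows; the gains, time step, and measured voltages are hypothetical values for illustration, and a real implementation would run inside the simulation loop:

```python
class PI:
    """Discrete PI controller: u = Kp*e + Ki * (accumulated e * dt)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.acc = 0.0

    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

def pwm_fire(reference, carrier):
    """Sinusoidal PWM comparator: switch fires while reference > triangle carrier."""
    return reference > carrier

# Voltage-regulation sketch: drive measured Vs toward Vs_ref (per-unit)
pi = PI(kp=0.5, ki=10.0, dt=1e-4)       # hypothetical gains
vs_ref, vs = 1.0, 0.93                   # hypothetical measurement during a sag
inj_mag = pi.step(vs_ref - vs)           # magnitude of injected shunt voltage
print(pwm_fire(0.8, 0.3))                # True: firing signal asserted
```

A second PI instance fed with (Vdc_ref - Vdc) would produce the injection angle in the same way.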
TEST CASE 1: IEEE 5-BUS SYSTEM
In case 1, the UPFC performance has been tested on an IEEE 5-bus system for power quality
improvement. In this test system, shown in Fig. 3, buses 1 and 2 are generator buses. Bus 1 is an IG-based
wind farm, and buses 3, 4, and 5 are load buses (PQ buses). The base case has been taken as 11 kV and
18 MW. A three-phase-to-ground fault is applied to the system.
[Figures: simulated waveforms at the PCC, compared with and without UPFC.]
The Total Harmonic Distortion from FFT analysis of the PCC voltage for the grid-connected wind
farm is 6.72% without the UPFC controller, as shown in Figure 15, and is reduced to 2.01% with the UPFC
controller, as shown in Figure 16. From the figures it can be observed that the UPFC controller helps to
mitigate the harmonic distortion in the transmission line.
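The THD figure reported from the FFT analysis can be computed as the ratio of the harmonic magnitudes to the fundamental. A minimal sketch using a direct DFT on a synthetic waveform (the 5% fifth harmonic is an illustrative assumption, not the simulated PCC voltage):

```python
import cmath, math

def thd(samples):
    """THD over one full cycle of N samples:
    sqrt(sum |V_h|^2 for harmonics h >= 2) / |V_1|, via a direct DFT."""
    n = len(samples)
    spectrum = [sum(samples[k] * cmath.exp(-2j * math.pi * h * k / n)
                    for k in range(n)) for h in range(n // 2)]
    mags = [abs(c) for c in spectrum]
    fundamental = mags[1]
    harmonics = math.sqrt(sum(m * m for m in mags[2:]))
    return harmonics / fundamental

# 64 samples of a fundamental plus a 5% fifth harmonic
n = 64
wave = [math.sin(2 * math.pi * k / n) + 0.05 * math.sin(2 * math.pi * 5 * k / n)
        for k in range(n)]
print(round(thd(wave), 3))  # ~0.05, i.e. 5% THD
```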
IX. CONCLUSION
The performance of the proposed method has been simulated for an IEEE 5-bus system and a real-time
wind farm connected to the grid. The UPFC is connected at the PCC to compensate the voltage sag
created by the fault.
It is observed that the real power flow increases and the reactive power absorption decreases after
fault clearance when the UPFC is incorporated at the PCC. The Total Harmonic Distortion is also reduced
using the proposed UPFC controller. Therefore, it is concluded that the proposed UPFC control results in
power quality improvement of a grid with a wind farm.
REFERENCES
[1] Saad-Saoud Z. and Jenkins N., The application of advanced static VAR compensators to wind farms, IEEE Colloquium on Power Electronics for Renewable Energy, London, June 1997.
[2] Shaheen H. I., Rashed G. I. and Cheng S. J., Optimal location and parameter setting of UPFC for enhancing power system security based on Differential Evolution algorithm, Int. J. Electrical Power and Energy Systems, vol. 33, pp. 94-105, 2011.
[3] Ezzeldin S. A. and Xu Wilson, Control design and dynamic performance analysis of a wind turbine induction generator unit, IEEE Transactions on Energy Conversion, vol. 15, 2000, p. 916.
[4] Shameem Ahmad, Fadi M. Albatsh, Saad Mekhilef and Hazlie Mokhlis, Fuzzy based controller for dynamic Unified Power Flow Controller to enhance power transfer capability, Energy Conversion
Saad-Saoud Z., Lisboa M. L., Ekanayake J. B., Jenkins N. and Strbac G., Application of STATCOMs to wind farms, IEE Proceedings on Generation, Transmission and Distribution, vol. 145, 1998, p. 5116.
[8]
[9] K. C. Divya and P. S. Nagendra Rao, Effect of Grid Voltage and Frequency Variations on the Output of Wind Generators, Electric Power Components and Systems, Taylor & Francis, vol. 6, 2008, pp. 602-614.
[10] Power Quality issues, standards and guidelines, IEEE, vol. 32, May 1996.
[11] J. J. Gutierrez, J. Ruiz, L. Leturiondo, and A. Lazkano, Flicker measurement system for wind turbine certification, IEEE Trans. Instrum. Meas., vol. 58, no. 2, pp. 375-382.
[12] T. T. Nguyen and V. L. Nguyen, Dynamic model of Unified Power Flow Controllers in load flow analysis,
[13] R. Jayashri and R. P. Kumudini Devi, Effect of tuned unified power flow controller to mitigate the rotor speed instability of fixed-speed wind turbines, Renewable Energy, vol. 34, 2009, pp. 591-596.
I. INTRODUCTION
Photo mosaicking is a concept in which a picture, usually a photograph, is divided into equally
sized rectangular sections, and each rectangular section is replaced with another photograph that matches
the target photo. Image mosaicking and similar variations such as image compositing and stitching have
found a huge field of application, ranging from aerial or satellite imagery to medical imaging, street view
maps, city 3D modelling, texture synthesis, and stereo reconstruction, to name a few. In general, whenever
merging two or more images of the same scene is required for evaluation or integration purposes, a mosaic
is built. Two problems are involved in the computation of an image mosaic: the geometric and the
photometric correspondence. An image mosaicking application requires both photometric and geometric
registration between the images that compose the mosaic. First, the image to be color corrected is
segmented into several regions using mean shift. Then, connected regions are extracted using the median
filtering technique, and the local joint image histograms of each region are modelled as collections of
truncated Gaussians using a maximum likelihood estimation procedure.
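As a much-simplified stand-in for the per-region truncated-Gaussian modelling described above, a per-channel linear mapping that matches the mean and standard deviation of an overlapping region can illustrate the idea of a color palette mapping function (the pixel values below are toy data, not from any dataset):

```python
def match_channel(source, target):
    """Map source-channel values so their mean/std match the target's:
    a linear stand-in for the truncated-Gaussian maximum-likelihood fit."""
    def stats(vals):
        m = sum(vals) / len(vals)
        v = sum((x - m) ** 2 for x in vals) / len(vals)
        return m, v ** 0.5

    ms, ss = stats(source)
    mt, st = stats(target)
    scale = st / ss if ss else 1.0
    # Shift and scale, clamped to the valid 8-bit range
    return [min(255, max(0, (x - ms) * scale + mt)) for x in source]

# Red-channel values from the overlapping region of two photos of one scene
img_a = [90, 100, 110, 120]
img_b = [140, 150, 160, 170]
print(match_channel(img_a, img_b))  # [140.0, 150.0, 160.0, 170.0]
```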
The geometric correspondence is usually referred to as image registration: the procedure of
overlaying two or more images of the same scene taken at different times, possibly from different viewpoints
and by different sensors. It should be noted that in most cases the alignment produced by a registration
method is not accurate to the pixel level. Hence, a direct pixel-to-pixel mapping of color is not a feasible
solution. On the other hand, the photometric correspondence between images deals with the photometric
alignment of the image capturing devices. The same object under the same lighting conditions should be
represented by the same color in two different images. However, even in a set of images taken with the
same camera, the colors representing an object may differ from picture to picture. This poses a problem for
the fusion of information from several images, so the problem of how to balance the color of one picture
so that it matches the color of another must be tackled. This procedure of calibration and photometric
alignment, referred to as color correction between images, is addressed in this paper, where each method is
compared to a baseline approach. This paper proposes a new color correction algorithm that presents several
technical novelties compared to the state of the art. Images are color segmented using the median filtering
technique; the filtering operation removes the disparities in the image. An inverse color gradient algorithm
is used to determine the layer that needs mosaicking. Convolution and image mosaicking are performed
on all regions of the image, after which the image can be examined as a high-resolution image. A
methodology is given to extend the color palette mapping functions to the non-overlapping regions of the
images. To the best of our knowledge, this paper also presents one of the most complete evaluations of
color correction algorithms for image mosaicking published in the literature: an extensive comparison that
includes nine other approaches, two datasets, and two distinct evaluation metrics.
3.1 Median Filtering
In signal processing it is often desirable to carry out some kind of noise reduction on an image or
signal, and the median filter is a widely used nonlinear digital filtering technique for removing noise. Such
noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge
detection on an image). Median filtering is very widely used in digital image processing because, under
certain conditions, it preserves edges while removing noise. The median is a nonlinear local filter whose
output value is the middle element of a sorted array of pixel values from the filter window. Since the median
value is robust to outliers, the filter is used for reducing impulse noise. Median filtering can be described
with an example in which values are placed for pixels: where there is no entry preceding the first value,
the first value is repeated (and likewise the last value for the missing window entries at the end of the
signal), but there are other boundary schemes with different properties that may be preferred in particular
circumstances. These include handling the boundaries with or without cropping the signal or image
boundary afterwards, or fetching entries from other places in the signal; in images, entries from the far
vertical or horizontal boundary might be selected.
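The edge-repeating boundary scheme described above can be sketched in 1D as:

```python
def median_filter_1d(signal, window=3):
    """Sliding-window median; boundaries handled by repeating the edge
    values, as described in the text."""
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sorted(padded[i:i + window])[half] for i in range(len(signal))]

# The impulse-noise spike (80) is removed while the step edge survives
print(median_filter_1d([2, 2, 80, 2, 2, 9, 9, 9]))
# [2, 2, 2, 2, 2, 9, 9, 9]
```

The same window-sort-pick-middle idea extends directly to 2D pixel neighborhoods.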
3.2 Convolution Based Processing
Convolution is an important operation in signal and image processing. It operates on two signals
(in 1D) or two images (in 2D): we can think of one as the input signal (or image) and the other as the filter
applied to the input image, producing an output image (so convolution takes two images as input and
produces a third as output). Convolution is an incredibly important concept in many areas of engineering
and mathematics.
It is a general-purpose filter effect for images: a matrix of integers applied to the image. It works by
determining the value of each central pixel by adding the weighted values of its neighbouring pixels.
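A minimal sketch of this kernel-based processing follows (valid mode, with the kernel applied without flipping, as is common in image-processing libraries; the gradient kernel and tiny image are illustrative):

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution: each output pixel is the kernel-weighted
    sum of the pixels the kernel window covers (no kernel flip)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

img = [[9, 9, 9, 0],
       [9, 9, 9, 0],
       [9, 9, 9, 0]]
# 1x3 horizontal-gradient kernel: responds only at the intensity step
print(convolve2d(img, [[-1, 0, 1]]))
# [[0, -9], [0, -9], [0, -9]]
```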
3.3 Inverse Color Gradient Algorithm
To determine the layer that needs mosaicking, and which part of the image can be used for
mosaicking, the inverse color gradient algorithm is used. A dynamical system produces a sequence of
values z0, z1, z2, ..., zn. Fractal images are created by producing one of these sequences for each pixel in
the image; the coloring algorithm is what interprets this sequence to produce a final color.
Typically, the coloring algorithm produces a single value for every pixel. Since color is a three-dimensional
space, that one-dimensional value must be expanded to produce a color image. The common
method is to create a palette, a sequence of 3D color values connected end to end, with the coloring
algorithm's value used as a position along this multi-segmented line (the gradient). If the last palette color
is connected to the first, a closed segmented loop is formed, and any real value from the coloring algorithm
can be mapped to a defined color in the gradient. This is similar to the pseudo-color renderings often used
for infrared imaging. Gradients are normally linearly interpolated in RGB space (Red, Green, Blue), but
they can also be interpolated in HSL space (Hue, Saturation, Lightness) or interpolated with spline curves
instead of straight line segments.
The selection of the gradient is one of the most critical artistic choices in creating a high-quality
fractal image. Color selection can emphasize one part of a fractal image while de-emphasizing others. In
extreme cases, two images with the same fractal parameters but different color schemes will appear totally
different.
Some coloring algorithms produce discrete values, while others produce continuous values. Discrete
values produce visible stepping when used as a gradient position; until recently this was not terribly
important, as the restriction of 8-bit color displays introduced an element of color stepping on gradients
anyway, and discrete coloring values were mapped to corresponding discrete colors in the gradient. With
the introduction of inexpensive 24-bit displays, algorithms that produce continuous values are becoming
more important, as this permits interpolating along the color gradient to any color precision desired.
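The closed, linearly interpolated gradient described above can be sketched as follows (the two-color palette and sample values are illustrative):

```python
def gradient_color(t, palette):
    """Map a coloring-algorithm value t in [0, 1) onto a closed, linearly
    interpolated RGB palette (the last color wraps back to the first)."""
    n = len(palette)
    pos = (t % 1.0) * n          # position along the multi-segmented line
    i = int(pos)
    frac = pos - i               # fraction into the current segment
    c0, c1 = palette[i], palette[(i + 1) % n]
    return tuple(round(a + (b - a) * frac) for a, b in zip(c0, c1))

# Two-color closed loop: black <-> white
pal = [(0, 0, 0), (255, 255, 255)]
print(gradient_color(0.25, pal))   # (128, 128, 128): halfway to white
```

Interpolating in HSL instead of RGB, or with splines instead of straight segments, only changes the interpolation step.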
3.4 Image Mosaicking
Photo mosaicking is a concept in which a picture is divided into equally sized rectangular sections;
each rectangular section is replaced with another photograph that matches the target photo.
To the best of our knowledge, this paper also includes one of the most complete evaluations of color
correction algorithms for image mosaicking published in the literature: an extensive comparison that
includes other approaches, two datasets with a number of image pairs, and two distinct evaluation metrics.
Image mosaicking and similar variations such as image compositing have found a vast field of applications,
ranging from satellite or aerial imagery to medical imaging, street view maps, city super-resolution, texture
synthesis, and stereo reconstruction, to name a few. In general, whenever merging two or more images of
the same scene is required for comparison or integration purposes, a mosaic is built. Two problems are
involved in the computation of an image mosaic: the geometric and the photometric correspondence.
4. CONCLUSION
This work proposes a novel color correction algorithm. Images are color segmented and extracted
using the median filtering technique. Each segmented region is used to build a local color palette mapping
function, followed by convolution-based processing. The inverse color gradient algorithm is used to
determine the layer that needs mosaicking. Finally, by using an extension of the color palette mapping
functions to the whole picture, it is possible to make mosaics where no color transitions are noticeable.
For a proper assessment of the performance of the proposed algorithm, ten other color correction
algorithms were evaluated (#2 through #11), along with three alternatives to the proposed approach (#12b
through #12d). Each of the algorithms was applied to two datasets, with a combined total of 63 image
pairs. The proposed approach outperforms all other algorithms on most of the image pairs in the datasets,
considering the PSNR and S-CIELAB evaluation metrics. Not only has it obtained some of the best average
M. Ben-Ezra, A. Zomet, and S. Nayar, Video super-resolution using controlled subpixel detector shifts, IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 977-987, Jun. 2005.
[3] D. Comaniciu and P. Meer, Mean shift: A robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603-619, May 2002.
[4] J. S. Duncan and N. Ayache, Medical image analysis: Progress over two decades and the challenges ahead, IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 1, pp. 85-106, Jan. 2000.
[5] H. S. Faridul, J. Stauder, J. Kervec, and A. Tremeau, Approximate cross channel color mapping from sparse color correspondences, in Proc. IEEE Int. Conf. Comput. Vis. Workshops (ICCVW), Dec. 2013, pp. 860-867.
[6]
[7]
[8] C.-H. Hsu, Z.-W. Chen, and C.-C. Chiang, Region-based color correction of images, in Proc. Int. Conf. Inf. Technol. Appl., Jul. 2005, pp. 710-715.
[9] J. Jia and C.-K. Tang, Tensor voting for image correction by global and local intensity alignment, IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 1, pp. 36-50, Jan. 2005.
[10] G. Lee and C. Scott, EM algorithms for multivariate Gaussian mixture models with truncated and censored data, Comput. Statist. Data Anal., vol. 56, no. 9, pp. 2816-2829, Sep. 2012.
[11] V. Lempitsky and D. Ivanov, Seamless mosaicing of image-based texture maps, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2007, pp. 1-6.
[12] A. Levin, A. Zomet, S. Peleg, and Y. Weiss, Seamless image stitching in the gradient domain, in Proc. Eur. Conf. Comput. Vis., May 2003, pp. 377-389.
[13] P. Meer and B. Georgescu, Edge detection with embedded confidence, IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 12, pp. 1351-1365, Dec. 2001.
[14] B. Sajadi, M. Lazarov, and A. Majumder, ADICT: Accurate direct and inverse color transformation, in Proc. Eur. Conf. Comput. Vis., Sep. 2010, pp. 72-86.
[15] P. Soille, Morphological image compositing, IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 673-683, May 2006.
I. INTRODUCTION
A hybrid wireless network combines a mobile ad-hoc network with an infrastructure wireless network
and is considered an enhanced network structure for the next generation network. Depending on the
environment, a node can choose base-station transmission mode or mobile ad-hoc transmission mode. The
mobile ad-hoc network is an infrastructure-less network: the devices can move in any direction, and the
links between them can change frequently. In this network, data is transmitted from source to destination
in a multi-hop manner through intermediate nodes. In an infrastructure wireless network (e.g., a cellular
network), each device communicates with other devices through base stations. Each cell in a cellular
network has a base station, and these base stations are connected via wire, fiber, or wirelessly through
switching centers.
If a region has no communication infrastructure, or the existing infrastructure is difficult or
inconvenient to use, a hybrid wireless network may still be able to communicate through the construction
of an ad-hoc network. In such a network, each mobile node operates both as a host and as a router,
forwarding packets to other mobile nodes that may not be within direct wireless transmission range. Each
node participates in ad-hoc routing and infrastructure routing; for this, the distributed three-hop routing
protocol is used, which allows discovering a three-hop path to any other node through the network. The
first two hops use ad-hoc networking, sometimes called infrastructure-less networking, since the mobile
nodes dynamically create routes among themselves to form their own network. The third hop is created in
infrastructure networking. Most Wi-Fi networks function in infrastructure mode: devices communicate
through a single access point, which is generally the wireless router. For example, consider two laptops
placed next to each other, each connected to the same wireless network; even though they are next to each
other, they are not communicating directly in an infrastructure network. Some possible uses of hybrid
wireless networks include students using laptop computers to participate in an interactive lecture, business
associates sharing information during a meeting, soldiers communicating situation-awareness information,
and emergency disaster-relief personnel coordinating efforts after a hurricane or earthquake.
A spread code is generally used for secured data transmission in wireless communication and as a way
to measure the quality of wireless connections. In wired networks, the existence of a wired path between
the sender and the receiver determines the correct reception of a message, but in wireless networks path
loss is a major problem. The wireless communication network has to take account of many environmental
parameters to report the background noise and the interfering strength of other simultaneous transmissions;
the SINR attempts to give a representation of this aspect. So the TAS protocol is implemented to maintain
the details about the sender, the receiver, and the communication medium in the network. This is implemented
[3]
L. M. Feeney, B. Cetin, D. Hollos, M. Kubisch, S. Mengesha, and H. Karl, Multi-rate relaying for
performance improvement in IEEE 802.11 wlans, In Proc. of WWIC, 2007.
[4]
X. J. Li, B. C. Seet, and P. H. J. Chong, Multi-hop cellular networks: Technology and economics,
[6]
P. Thulasiraman and X. Shen, Interference aware resource allocation for hybrid hierarchical
wireless networks, Computer Networks, 2010.
[7]
L. B. Koralov and Y. G. Sinai, Theory of probability and random processes, Berlin/New York:
Springer, 2007.
[8]
D. M. Shila, Y. Cheng, and T. Anjali, Throughput and delay analysis of hybrid wireless networks
with multi-hop uplinks, In Proc. of INFOCOM, 2011.
[9]
T. Liu, M. Rong, H. Shi, D. Yu, Y. Xue, and E. Schulz, Reuse partitioning in fixed two-hop cellular
relaying network, In Proc. of WCNC, 2006.
[10] C. Wang, X. Li, C. Jiang, S. Tang, and Y. Liu, Multicast throughput for hybrid wireless networks
under Gaussian channels model, TMC, 2011.
I. INTRODUCTION
Cloud computing is recognized as an alternative to traditional information technology due to its
intrinsic resource sharing and low maintenance characteristics. In cloud computing, cloud service providers
(CSPs), such as Amazon, are able to deliver various services to cloud users with the help of powerful data
centers. By shifting local data management systems into cloud servers, users may enjoy high-quality
services and save significant investment in their limited infrastructures. One of the most essential services
offered by cloud providers is data storage. Consider a data application in which a company allows its staff
in the same group or department to store and share files in the cloud. By utilizing the cloud, the staff can
be completely released from troublesome local data storage and maintenance. However, this also poses a
significant risk to the confidentiality of the stored files. Specifically, the cloud servers managed by cloud
providers are not fully trusted by users, while the data files stored in the cloud may be confidential and
sensitive, such as business plans. The primary solution for preserving data privacy is to encrypt data files
and then upload the encrypted data into the cloud. Unfortunately, designing an efficient and secure data
sharing scheme for groups in the cloud is not an easy task, due to the following challenging issues.
First of all, identity privacy is one of the most significant obstacles to the wide deployment of cloud
computing. Without the guarantee of identity privacy, users may be unwilling to join cloud computing
systems, because their real identities could easily be disclosed to cloud providers and attackers. On the
other hand, unconditional identity privacy may incur the abuse of privacy; for example, a misbehaving
staff member could deceive others in the company by sharing false files without being traceable. Therefore,
traceability, which enables the TPA to reveal the real identity of a user, is also highly desirable.
Second, it is highly recommended that any member of the group should be able to fully enjoy the data
storing and sharing services provided by the cloud, which is defined as the multiple-owner manner.
Compared with the single-owner manner, where only the group manager can store and modify data in the
cloud, the multiple-owner manner is more flexible in practical applications. More concretely, each user in
the group is able not only to read data but also to modify his or her part of the data in the entire data file
shared by the company.
Last but not least, groups are normally dynamic in practice, e.g., new staff join and current employees
are revoked in the company. These changes of membership make secure data sharing extremely difficult.
On one hand, anonymous systems raise the challenge of whether newly granted users can learn the content
of data files stored before their participation, because it is not possible for new granted users to contact the
anonymous data owners and obtain the corresponding decryption keys. On the other hand, the
H. Chen and P. Lee, Enabling data integrity protection in regenerating-coding-based cloud storage: Theory and implementation, IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 2, pp. 407-416, Feb. 2014.
[3] K. Yang and X. Jia, An efficient and secure dynamic auditing protocol for data storage in cloud computing, IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 9, pp. 1717-1726, 2013.
[4] Y. Zhu, H. Hu, G.-J. Ahn, and M. Yu, Cooperative provable data possession for integrity verification in multicloud storage, IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 12, pp. 2231-2244, 2012.
[5] A. G. Dimakis, K. Ramchandran, Y. Wu, and C. Suh, A survey on network codes for distributed storage, Proceedings of the IEEE, vol. 99, no. 3, pp. 476-489, 2011.
[6] H. Shacham and B. Waters, Compact proofs of retrievability, in Advances in Cryptology - ASIACRYPT 2008, Springer, 2008, pp. 90-107.
[7] Y. Hu, H. C. Chen, P. P. Lee, and Y. Tang, NCCloud: Applying network coding for the storage repair in a cloud-of-clouds, in USENIX FAST, 2012.
[8] C. Wang, Q. Wang, K. Ren, and W. Lou, Privacy-preserving public auditing for data storage security in cloud computing, in INFOCOM, 2010 Proceedings IEEE, 2010, pp. 1-9.
[9] G. Ateniese, R. Di Pietro, L. V. Mancini, and G. Tsudik, Scalable and efficient provable data possession, in Proceedings of the 4th International Conference on Security and Privacy in Communication Networks, ACM, 2008, p. 9.
[10] S. Goldwasser, S. Micali, and R. Rivest, A digital signature scheme secure against adaptive chosen message attacks, SIAM Journal on Computing, vol. 17, no. 2, pp. 281-308, 1988.
I. INTRODUCTION
The concept of cognitive networks was initiated to enhance the effectiveness of spectrum utilization.
The basic idea of cognitive networks is to allow other users to utilize the spectrum allocated to licensed
users (primary users) when it is not in use by them. These opportunistic users of the spectrum are called
secondary users. Cognitive radio [1] technology enables secondary users to dynamically sense the
spectrum for spectrum holes and use them for their communication. A group of such self-sufficient
cognitive users communicating with each other in a multi-hop manner forms a multi-hop cognitive radio
network (MHCRN). Since the vacant spectrum is shared among a group of independent users, there should
be a way to control and manage access to the spectrum. This can be achieved using central control or a
cooperative distributed approach. In a centralized design, a single entity called the spectrum manager
controls the use of the spectrum by secondary users [2]. The spectrum manager gathers information about
free channels either by sensing its complete domain or by integrating the information collected by potential
secondary users in their respective local areas. These users transmit information to the spectrum manager
through a dedicated control channel. This approach is not feasible for dynamic multi-hop networks.
Moreover, a direct attack such as a Denial of Service (DoS) attack [3] on the spectrum manager would
debilitate the network. Thus, a distributed approach is chosen over centralized control. In a distributed
approach, there is no central administrator; as a result, all users must jointly sense and share the free
channels. The information sensed by a user should be shared with other users in the network to enable
certain necessary tasks, like route discovery in a MHCRN. Such control information is broadcast to a
node's neighbours in a traditional network. Since in a cognitive setting each node has a set of accessible
channels, a node receives a message only if the message is sent on the channel the node is listening to. So,
to make sure that a message is successfully sent to all neighbors of a node, it has to be broadcast on every
channel. This is called complete broadcasting of information. In a cognitive setting, the number of channels
is potentially large; as a result, broadcasting on every channel causes a large delay in transmitting the
control information. Another solution would be to choose one channel from among the free channels for
control signal exchange. However, the probability that a single channel is common to all the cognitive
users is small [4]. As a result, several of the nodes may not be reachable using a single channel. So, it is
necessary to transmit the control information on more than one channel to make sure that every neighbour
receives a copy [5]. With the rise in the number of nodes in the system, it is likely that the nodes are
scattered over a large set of channels. As a result, the cost and delay of communications over all
IV. SELECTIVE BROADCASTING
In a MHCRN, each node has a set of available channels when it enters a network. In order to
become part of the network and start communicating with other nodes, it first has to learn its neighbors
and their channel information. Likewise, it has to let other nodes know of its presence and its available
channel information, so it broadcasts such information over all channels to make sure that all neighbors
receive the message. Similarly, when a node wants to start a communication, it should exchange certain
control information, useful, for example, in route discovery. However, a cognitive network environment is
dynamic due to the primary users' traffic: the number of available channels at each node keeps changing
with time and location. To keep all nodes up to date, changed information has to be transmitted over all
channels as quickly as possible. So, for successful and efficient coordination, fast dissemination of control traffic
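The idea behind selective broadcasting, transmitting on a reduced set of channels that still reaches every neighbor rather than on all channels, can be sketched as a greedy covering selection. This is only an illustration of the concept, not the protocol's exact channel-selection rule, and the neighbor data is hypothetical:

```python
def essential_channels(neighbor_channels):
    """Greedily pick a small set of channels covering every neighbor.
    neighbor_channels maps each neighbor to the channels it can hear."""
    uncovered = set(neighbor_channels)   # neighbors not yet reachable
    chosen = []
    while uncovered:
        candidates = set().union(*(neighbor_channels[n] for n in uncovered))
        # Pick the channel heard by the most still-uncovered neighbors
        best = max(candidates,
                   key=lambda c: sum(c in neighbor_channels[n] for n in uncovered))
        chosen.append(best)
        uncovered = {n for n in uncovered if best not in neighbor_channels[n]}
    return chosen

# A node's neighbors and the channels each can currently hear
nbrs = {"A": {1, 2}, "B": {2, 3}, "C": {3}, "D": {2}}
print(essential_channels(nbrs))  # [2, 3]: two broadcasts instead of three
```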
VI. PERFORMANCE EVALUATION
In this section the performance of selective broadcasting is compared with complete broadcasting
by studying the delay in broadcasting control information and the redundancy of the received packets. The
evaluation setup used in all experiments is as follows. For each experiment, a network area of
1000 m × 1000 m is considered. The number of nodes is varied from 1 to 100, and all nodes are deployed
randomly in the network. Each node is assigned a random set of channels, ranging from 0 to 10 channels.
The transmission range is set to 250 m. Each data point in the graphs is an average of 100 runs. Before
looking at the performance of the proposed idea, two observations are made that help in understanding the
simulation results. Fig. 3 shows the plot of channel spread as a function of the number of nodes, where
channel spread is defined as the union of all the channels covered by the neighbors of a node.
Figure 4: Plot of channel spread with respect to the number of nodes for a set of 10 channels.
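The channel-spread quantity used in these observations can be sketched as follows; the toy deployment below mirrors the stated setup (1000 m × 1000 m area, 0-10 random channels per node, 250 m range) but is a stand-in, not the 100-run experiment:

```python
import random

def channel_spread(node, positions, channels, tx_range=250.0):
    """Union of all channels available at the given node's neighbors,
    where neighbors are the nodes within the transmission range."""
    x0, y0 = positions[node]
    spread = set()
    for other, (x, y) in positions.items():
        if other != node and ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 <= tx_range:
            spread |= channels[other]
    return spread

random.seed(1)
pos = {i: (random.uniform(0, 1000), random.uniform(0, 1000)) for i in range(20)}
chans = {i: set(random.sample(range(10), random.randint(0, 10))) for i in range(20)}
print(sorted(channel_spread(0, pos, chans)))
```

As more nodes are deployed, the spread tends toward the full channel set, which is why broadcasting cost grows with node density.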
I. INTRODUCTION
Wireless access in vehicular environments (WAVE) is designed to support applications for
intelligent transportation systems (ITSs), including safety and disaster services, automatic toll collection,
traffic management, and commercial transactions among vehicles. It specifies the architecture and
management functions to allow secure vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I)
wireless communication. In order to facilitate ITS applications, local area networks are established
consisting of two major architectural components, i.e., the onboard units (OBUs) in vehicles and the
roadside units (RSUs) installed in the road infrastructure, which are denoted as stations in this paper.
Specifically, the IEEE 1609.4 standard is intended to enhance the IEEE 802.11p medium access control
(MAC) protocol for multi-channel operations. The WAVE system is intended to work on the 75 MHz band
in the licensed ITS 5.9 GHz band. The operating band is divided into seven channels, including one control
channel (CCH) and six service channels (SCHs), each with 10 MHz bandwidth. The deployment of
increasingly high-speed wireless applications requires exponential growth in spectrum demand. However,
it has been reported that current utilization of allocated spectrum can be as low as 15%. Thus, there is an
increasing interest in well-planned methods for spectrum administration and sharing, encouraged by both
industry and the FCC. This motivates exploiting the spectrum opportunities in space, time, and frequency
while protecting users of the primary network holder from excessive interference due to opportunistic
spectrum access. In fact, it is required that an interference limit corresponding to an interference
temperature level be maintained at the receiving points of the primary network. The key challenge in
cognitive radio networks is how to construct spectrum access/sharing schemes such that users of the
primary network (called primary users in the sequel) are protected from excessive interference due to
secondary spectrum access while the QoS performance of secondary users is guaranteed. In this paper, we
present a spectrum sharing framework for cognitive CDMA wireless networks with explicit interference
protection for primary users and QoS constraints for secondary users. Secondary users have minimum
transmission rates with required QoS performance and maximum power constraints. When the network
load is high, an admission control algorithm is proposed to guarantee the QoS constraints for secondary
users and the interference constraints for primary users. When all the secondary users can be supported,
we present a joint rate and power allocation solution with QoS and interference constraints. Prioritized optimal
EXISTING SYSTEM
Existing research works have been proposed based on the IEEE 802.11p/1609 standards.
A self-organizing time division multiple access (STDMA) scheme was proposed to ensure successful
transmission of time-critical traffic between the vehicles. In addition, a carrier sense multiple access
(CSMA) based protocol for multi-channel networks has been proposed. A separate control channel is
utilized to eliminate interference between the control and data messages. All RTS and CTS packets are
transmitted on the control channel, and the optimal channel for each user is selected based on the signal to
interference plus noise ratio (SINR) to exchange data messages. However, without considering different
priorities among stations, the delivery of safety-related messages cannot be protected and guaranteed. In
POCA-D, the distributed CR network, SPs are not allowed to negotiate with each other. In POCA-C, the
centralized CR network, SPs can negotiate with each other by sending control messages at the beginning
of the SCH interval. Drawbacks of the existing system: 1) throughput is low; 2) delay is high; 3) quality
of service is low.
Considering either distributed or centralized networks, the proposed POCA schemes can be
distinguished into distributed POCA (POCA-D) and centralized POCA (POCA-C) protocols. A distributed
network system is considered in the POCA-D scheme with knowledge of the PPs' distribution probability.
An optimal channel-hopping sequence can be obtained based on dynamic programming (DP) in order to
achieve maximum aggregate throughput for SPs under the quality-of-service (QoS) constraint of PPs. On
the other hand, the POCA-C scheme is proposed for centralized networks, where an optimal channel
allocation for SPs is derived by means of linear programming based on the number of PPs on each channel
in every SCH interval. With the adoption of the proposed POCA schemes, an optimal balance can be
achieved between the probability of channel availability and channel utilization, while the transmission
opportunities for safety-related messages are also preserved. Note that the proposed POCA-D and POCA-C
schemes can be utilized to investigate the effects of different network scenarios. Performance validation
and comparison of both protocols will be evaluated via simulations.
Multi-Channel Operation
Coordinated universal time (UTC) is adopted by all stations as the synchronization scheme for
sync intervals. The stations toggle to the CCH in every CCH interval to either listen or transmit
advertising messages, and potentially switch to one of the SCHs during the SCH interval for data exchange.
During the CCH interval, safety-related messages can be broadcast on the CCH by the providers, and
these messages are expected to be received by all stations. On the other hand, if a provider intends to deliver
non-safety information to some of the users, the provider will broadcast the WAVE services advertisement
(WSA) frame on the CCH. The WSA frame mainly contains two fields, including the SCH that the provider
plans to switch to and the intended MAC address of the user for data transmission. In order to facilitate a
two-way handshaking process, the WSA response (WSAR) frame is defined in this paper and will be issued
by the corresponding user to acknowledge the reception of the WSA frame if the user agrees to receive data
from the provider. After the provider has received the WSAR frame, both the provider and the user will switch
to the corresponding SCH recorded in the WSA frame in the following SCH interval. During the SCH
interval, the channel access method for the provider is based on the carrier sense multiple access with
collision avoidance (CSMA/CA) scheme. A data transmission is completed by means of the RTS/CTS/DATA/ACK
four-way handshaking mechanism. Furthermore, there can be multiple providers that intend to compete
for both the announcement on the CCH during the CCH interval and the utilization of the six SCHs during
the SCH interval. The random backoff scheme is adopted to mitigate potential collisions between the WSA
frames on the CCH as well as collisions between the RTS frames on the SCHs. Note that even with
successful data delivery,
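The CCH-interval handshake described above can be sketched as a small state transition; the frame fields and class names here are illustrative assumptions, not the IEEE 1609.4 wire format:

```python
from dataclasses import dataclass

@dataclass
class WSA:          # WAVE services advertisement, sent by provider on the CCH
    sch: int        # SCH the provider plans to switch to
    user_mac: str   # intended user's MAC address

@dataclass
class WSAR:         # WSA response, acknowledging the WSA
    sch: int

class Station:
    def __init__(self, mac, accepts=True):
        self.mac = mac
        self.accepts = accepts   # whether the user agrees to receive data
        self.channel = "CCH"     # all stations start on the control channel

def handshake(provider, user, sch):
    """Two-way WSA/WSAR handshake on the CCH: the provider advertises
    an SCH; if the addressed user agrees, it replies with a WSAR and
    both switch to that SCH for the following SCH interval."""
    wsa = WSA(sch=sch, user_mac=user.mac)   # broadcast on the CCH
    if not user.accepts:
        return False                         # no WSAR issued, stay on CCH
    wsar = WSAR(sch=wsa.sch)                 # user acknowledges reception
    provider.channel = user.channel = f"SCH{wsar.sch}"
    return True

p, u = Station("aa:01"), Station("bb:02")
handshake(p, u, sch=3)
print(p.channel, u.channel)   # both stations now on the same SCH
```

Contention between multiple providers' WSA frames (resolved by random backoff, as noted above) is omitted here for brevity.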
Figure: Architecture
IV. ALGORITHM USED
Admission Control Algorithm
An admission control algorithm is performed together with power control such that the QoS
requirements of all admitted secondary users are satisfied while keeping the interference to primary users
below the tolerable limit. It is used during high network load conditions. If all secondary users can be
supported at their minimum rates, we allow them to increase their transmission rates and share the
spectrum in a fair manner. The secondary links requesting access to the spectrum licensed to the primary
network have QoS requirements.
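A highly simplified sketch of the admission-control idea: admit secondary users only while an aggregate interference budget at the primary receiver is respected. The fixed per-user interference footprint is an illustrative simplification of the power-controlled CDMA setting, not the paper's actual algorithm:

```python
def admission_control(users, interference_budget):
    """users: list of (user_id, interference) pairs, where interference
    is the footprint each secondary user imposes at its minimum rate.
    Greedily admits users with the smallest footprint first, stopping
    before the primary users' tolerable limit is exceeded."""
    admitted, load = [], 0.0
    for uid, interf in sorted(users, key=lambda u: u[1]):
        if load + interf <= interference_budget:
            admitted.append(uid)
            load += interf
    return admitted

users = [("s1", 0.3), ("s2", 0.5), ("s3", 0.4), ("s4", 0.2)]
print(admission_control(users, interference_budget=1.0))  # ['s4', 's1', 's3']
```

In the paper's setting, admitted users that leave slack in the budget would then jointly increase their rates and powers, which the next section formulates as a convex optimization problem.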
V. EXPERIMENTAL RESULT
When the network load is small, all secondary users can be admitted into the network and
they can increase their transmission rates above the minimum values. In essence, we wish to solve the
optimization problem whose decision variables are the transmission rates and powers P. We transform this
problem into a convex optimization problem for which a globally optimal solution can be obtained. We
would like to note that joint rate and power allocation for cellular CDMA networks has been an active
research topic over the last several years; we refer the readers to the literature and references therein for
existing work on the problem. However, this work is among the first to adapt the problem to the ad hoc
network setting. Here, the objective is to minimize the maximum service time on different transmission
links. In this paper, we proceed one
I. INTRODUCTION
Mobile ad hoc networks (MANETs) are a promising area based on the ability of self-configuring
mobile devices to connect into a wireless network without using any infrastructure. Being mobile, the
nodes use wireless connections to connect to various networks. Wherever group cooperation is required,
MANETs play a major role in wireless communication and provide effective communication. Secure
neighbor discovery is a fundamental functionality in MANETs deployed in hostile environments. It refers
to the process by which nodes exchange messages to discover and authenticate each other [2]. As the basis
of other network functionalities such as medium access control and routing, secure neighbor discovery
needs to be performed often due to node mobility. Direct sequence spread spectrum (DSSS) is a common
form of spread spectrum technique [15]. In classic spread spectrum techniques, senders and receivers need
to pre-distribute a secret key, with which they can generate spreading codes for communication. If a
jammer knows the secret key, the adversary can easily jam the communication using the spreading codes
used by the sender. There have been a few recent attempts to remove the circular dependency of
jamming-resistant communication on pre-shared keys, such as JR-SND [1]. Many existing protocols in
MANETs work properly only against single-node attacks; they cannot provide protection against multiple
malicious nodes working in collusion with one another. Since packet transmission in MANETs depends
heavily on mutual trust and cooperation among the nodes in the network, determining the trust of an
individual node before actually forwarding a packet to it becomes essential for successful packet
transmission.
In this paper we propose a watchdog timer combined with NCPR, which can help in detecting the
malicious behavior of some nodes in the network. NCPR is used to decrease routing overhead based on
neighbor coverage knowledge and rebroadcast probability. The excess of route requests is decreased using
methods such as the neighbor coverage-based probabilistic rebroadcast (NCPR) method, which leads to
improved end-to-end delay and packet delivery ratio. A node that has sufficient power to send packets is
recognized using the good neighbor node detection method. This NCPR provides an optimal solution for
finding good nodes. The performance metrics for the classification of nodes are the transmission range and
power of the node, signal strength, packet forwarding capacity, and relative location of the node.
Our main contributions are summarized as follows.
1. We identify selfish nodes in MANETs as a related problem that cannot be addressed by existing
When node ni receives an RREQ packet from its previous node s, it can use the neighbor list in the
RREQ packet to calculate how many of its neighbors have not been covered by the RREQ packet from s. If
node ni has more neighbors uncovered by the RREQ packet from s, then if node ni rebroadcasts the RREQ
packet, the packet can reach more additional neighbor nodes. In the algorithm, N(s) and N(ni) are the
neighbor sets of nodes s and ni, respectively, where s is the node which sends an RREQ packet to node ni.
When a neighbor receives an RREQ packet, it can calculate the rebroadcast delay Td(ni) according to the
neighbor list in the RREQ packet and its own neighbor list, where Tp(ni) is the delay ratio of node ni,
MaxDelay is a small constant delay, and |.| is the number of elements in a set. When node s sends an RREQ
packet, all its neighbors ni, i = 1, 2, ..., |N(s)|, receive and process the RREQ packet. If node ni receives
a duplicate RREQ packet from its neighbor nj, it knows how many of its neighbors have been covered
by the RREQ packet from nj, and it adjusts its uncovered neighbors (UCN) set according to the neighbor
list in the RREQ packet from nj. After
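The delay and coverage computations described above can be sketched as follows. The text references Tp(ni) and MaxDelay without reproducing the formula, so the standard NCPR form Tp(ni) = 1 - |N(s) ∩ N(ni)| / |N(s)|, Td(ni) = MaxDelay × Tp(ni) is assumed here:

```python
def rebroadcast_delay(N_s, N_ni, max_delay=0.01):
    """Rebroadcast delay Td(ni) when ni hears an RREQ from s.
    Assumed NCPR form (referenced but not printed in the text):
    Tp(ni) = 1 - |N(s) & N(ni)| / |N(s)|, Td(ni) = MaxDelay * Tp(ni).
    Nodes sharing fewer neighbors with s (i.e. covering more
    additional nodes) get a shorter delay and rebroadcast first."""
    tp = 1 - len(N_s & N_ni) / len(N_s)
    return max_delay * tp

def uncovered_neighbors(N_ni, N_s, sender):
    """UCN set: ni's neighbors not already covered by the RREQ
    from the sender, excluding the sender itself."""
    return N_ni - N_s - {sender}

N_s = {"n1", "n2", "n3", "n4"}    # neighbors of s
N_n1 = {"s", "n2", "n5", "n6"}    # neighbors of n1; only n2 is shared
print(sorted(uncovered_neighbors(N_n1, N_s, "s")))   # ['n5', 'n6']
print(rebroadcast_delay(N_s, N_n1))                  # 0.01 * 0.75
```

On receiving a duplicate RREQ from nj, a node would shrink its UCN set the same way, cancelling its rebroadcast once the set becomes empty.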
Figure 1: Probability of an attack when varying the number of nodes and the
Figure 2 shows the results obtained with different parameters. We can see that mobility clearly
affects the number of attacks detected: it decreases when mobility is increased. With a mobility of 1 m/s,
nearly 100% of the attacks are detected.
V. CONCLUSION
In this paper, we propose a watchdog mechanism to detect selfish nodes based on NCPR. It refers
[3]
Priyanka Sharma, Anil Suryawanshi, Enhanced Security Scheme against Jamming attack in Mobile
Ad hoc Network, IEEE International Conference on Advances in Engineering & Technology
Research (ICAETR - 2014), August 2014.
[4]
[5] Vme-Rani Syed, Dr. Arif Iqbal Vmar, Fahad Khurshid, Avoidance of BlackHole Affected
Routes in AODV Based MANET, International Conference on Open Source Systems and
Technologies (iCOSST), 2014.
[6]
Q. Wang, P. Xu, K. Ren, and X.-Y. Li, Towards optimal adaptive UFH-based anti-jamming wireless
communication, IEEE J. Select. Areas Commun., vol. 30, no. 1, pp. 16-30, 2012.
[7]
R. Stoleru, H. Wu, H. Chenji, Secure Neighbor Discovery in Mobile Ad Hoc Networks, IEEE
International Conference on Mobile Ad-Hoc and Sensor Systems, 2011.
[8]
Marcin Poturalski, Panos Papadimitratos, Jean-Pierre Hubaux, Formal Analysis of Secure Neighbor
Discovery in Wireless Networks, IEEE TRANSACTIONS ON DEPENDABLE AND SECURE
COMPUTING, 2013.
[9]
Qiben Yan, Huacheng Zeng, Tingting Jiang, Ming Li, Wenjing Lou, Y. Thomas Hou, MIMO-based
Jamming Resilient Communication in Wireless Networks, IEEE Conference on Computer
Communications, 2014.
[10] Liang Xiao, Huaiyu Dai, Peng Ning, Jamming-Resistant Collaborative Broadcast Using
Uncoordinated Frequency Hopping, IEEE TRANSACTIONS ON INFORMATION FORENSICS
AND SECURITY, Vol. 7, No. 1, February 2012.
[11] Chengzhi Li, Huaiyu Dai, Liang Xiao, Peng Ning, Communication Efficiency of Anti-Jamming
Broadcast in Large-Scale Multi-Channel Wireless Networks, IEEE TRANSACTIONS ON
SIGNAL PROCESSING, Vol. 60, No. 10, October 2012.
[12]
Reshma Lill Mathew, P. Petchimuthu, Detecting Selfish Nodes in MANETs Using Collaborative
Watchdogs, International Journal of Advanced Research in Computer Science and Software
Engineering, Volume 3, Issue 3, March 2013.
[13] Lahari.P, Pradeep.S, A Neighbor Coverage-Based Probabilistic Rebroadcast for Reducing Routing
Overhead in Mobile Ad Hoc Networks, International Journal of Computer Science and Information
Technologies, Vol. 5 (2), 2014.
[14]
Shengli Zhou, Georgios B. Giannakis, Ananthram Swami, Digital Multi-Carrier Spread Spectrum
Versus Direct Sequence Spread Spectrum for Resistance to Jamming and Multipath, IEEE
TRANSACTIONS ON COMMUNICATIONS, Vol. 50, No. 4, April 2002.
Abstract: Site data storage is a function of the cloud that relieves customers of the burden of data
storage in a cloud computing organization. Outsourcing data to third-party administrative control raises
security problems. Data leakage may occur due to attacks by other users in the cloud. Misuse of data by the
cloud service provider is yet another problem, because of which a high level of security is needed. In this
paper we provide high security through the concept of Data Security for Cloud Environment with
Semi-Trusted Third Party (DaSCE) for secure group data sharing and forwarding. It provides key
management, access control, and file assured deletion. DaSCE uses the Shamir threshold scheme to
generate and handle keys. We use multiple key managers, one per share of the key; using multiple key
managers avoids a single point of failure for the cryptographic keys. We implement and evaluate a working
model of DaSCE, measuring performance based on the time consumed by various operations, and then
analyze the working of DaSCE using High Level Petri nets. The outcome shows that DaSCE can be
effectively used to secure outsourced data through key management, access control, and file assured deletion.
Index Terms: Cloud computing, High Level Petri nets, file assured deletion, key management, Shamir
scheme
I. INTRODUCTION
Cloud computing is packaged within a new infrastructure paradigm that offers improved scalability,
elasticity, startup time, reduced costs, and just-in-time availability of resources. Cloud computing has
emerged as a way of managing hardware and software assets located at a third-party facility provider.
On-demand access to computing resources relieves customers from building and maintaining complex
infrastructures. Cloud computing offers every computing component as a utility, such as software,
platform, and infrastructure. The cost savings in infrastructure and maintenance, together with flexibility,
make cloud computing attractive for organizations and individual customers. Despite these benefits, cloud
computing faces certain challenges and issues that hinder its widespread adoption; for instance, security,
performance, and quality are often mentioned. Security and privacy concerns when using cloud computing
services are similar to those of traditional non-cloud services, but the apprehension is amplified by external
control over executive resources and the potential for mismanagement of those resources. Transitioning to
public cloud computing involves a transfer of responsibility and control over the information system to the
cloud provider, which causes the user to focus on data security, transmission, and processing when moving
data to the cloud, operated under a certain level of trust and security. Multiple users, separated through
virtual machines, share resources as well as storage space. Multi-tenancy and virtualization generate risks
that undermine the confidence of users in adopting the cloud model.
To address the security of outsourcing data to public clouds, we work toward the development of a
data security technique. We aim for a technique capable of addressing the critical issues: a data security
scheme that uses key manager servers for the cryptographic keys. Shamir's (k, n) threshold scheme is used
for the management of keys, using k shares out of n to reconstruct the key. Access to keys and data is
ensured through a policy file. The client generates random symmetric keys for the encryption and integrity
functions. The symmetric keys are protected by the public key, after which all symmetric keys are deleted
from the client. The encrypted data and keys are uploaded to the cloud. For downloading the data, the
client presents a policy file to the cloud and downloads the encrypted data and keys, which are then
decrypted. FADE is a lightweight, scalable method that guarantees the deletion of files from the cloud
when requested by the user. During our examination, FADE fell short on issues of key security and
authentication of the participating parties. Based on the issues identified with FADE, we develop the
scheme further and name it Data Security for Cloud Environment with Semi-Trusted Third Party (DaSCE).
(b) File download:
The client sends a request to download the file and the encrypted keys from the cloud. The client
checks the integrity of the file through the HMAC. Then the client generates a secret number and calculates
the values sent to the KM for decryption. The KM sends back attributes based on Pi. The client extracts the
key from the received message, which in turn is used to decrypt F.
VI. MODULE DESCRIPTION
(1) Key manager setup:
For the efficient storage of the different keys used to encrypt files stored in the cloud, key managers
are needed. All the key managers are authenticated by the cloud service provider. Each manager has its
own identity and generates its part of the key used to encrypt the file.
(2) Keys for file encryption:
The user selects the number of key managers that are needed to produce the key. Generally, the data
owner generates the symmetric key value s, which is split into n shares. Each key manager stores its key
pair, consisting of its own identity and public key.
(3) Shamir's strategy:
This helps to reconstruct the symmetric key s by collecting enough shares of the key from the
specified key managers. Here k represents the number of key managers required and n represents the
number of shares of the symmetric key distributed over all key managers (KMs). The client breaks the
symmetric key s into n shares (s1, s2, ..., sn), encrypts the i-th share with the public key of the i-th KM,
and uploads all shares of s to the cloud. To recover the key, the client downloads all shares, selects k KMs
at random, sends the i-th share of s to its KM, receives back the decrypted i-th share, and reconstructs s
from the k shares according to Shamir's strategy.
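The split/reconstruct steps above can be sketched with a minimal Shamir (k, n) implementation over a prime field. The prime, key size, and use of the stdlib `random` module are illustrative; a real deployment would use a cryptographically secure source of randomness:

```python
import random

P = 2**127 - 1  # Mersenne prime; all arithmetic is over GF(P)

def split(secret, k, n):
    """Split `secret` into n shares such that any k reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term
    `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P); needs any k
    distinct shares produced by split()."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = split(key, k=3, n=5)          # one share per key manager
assert reconstruct(shares[:3]) == key  # any 3 of the 5 shares suffice
```

Fewer than k shares reveal nothing about s, which is why a single compromised or failed key manager does not endanger the key.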
Abstract: The buck-boost converter is used to accomplish a high-efficiency control strategy for
improving the transients in the output voltage. The control technique can regulate an output voltage
for an input voltage that is higher than, lower than, or the same as the output voltage. There are
several available solutions to these problems. The technique introduced here is unique from the point
of view of the ripple content in the output voltage and the reliability of the control strategy. The best
approach involves a tradeoff among cost, efficiency, and output noise or ripple. The main objective
of this work is to design a positive buck-boost regulator that automatically transitions from one mode
to the other. The method introduced in this paper is a combination of buck, boost, and buck-boost
modes. Basic analytical studies have been made and are presented. In the buck-boost method, instead
of an immediate transition from buck to boost mode, intermediate combination modes consisting of
several buck modes followed by several boost modes are utilized to spread out the voltage transients.
This improves the efficiency and the ripple content in the output voltage. Theoretical considerations
are presented, and simulation results are shown to support the proposed theory.
Keywords: DC-DC converter, charge pump, buck converter
I. INTRODUCTION
A very widespread power-handling problem, especially for portable applications powered by
batteries, such as cellular phones, personal digital assistants (PDAs), wireless and digital subscriber line
(DSL) modems, and digital cameras, is to provide a regulated non-inverting output voltage from a variable
input battery voltage. The battery voltage, as it charges or discharges, can be higher than, equal to, or
less than the output voltage. For such small-scale applications, it is essential to regulate the output voltage
of the converter with high precision and performance. For that reason, a tradeoff among cost, efficiency,
and output transients should be considered. A common power-handling issue for space-constrained
applications powered by batteries is regulating the output voltage in the midrange of a varying input
battery voltage. Some common examples are a 3.3 V output with a 3-4.2 V Li-ion cell input, a 5 V output
with a 3.6-6 V four-cell alkaline input, or a 12 V output with an 8-15 V lead-acid battery input.
Here we describe a new method for minimizing the transients in the output of a DC-DC converter
required for low-power portable electronic applications.
Digital Combination of Buck and Boost Converters to Control a Positive Buck Boost Converter
The control technique can regulate an output voltage for an input voltage that is higher than, lower
than, or the same as the output voltage. There are several available solutions to these problems, but all have
their disadvantages. The method introduced here is unique from the point of view of the ripple content in
the output voltage and the reliability of the control approach. The best approach involves a tradeoff among
cost, efficiency, and output noise or ripple. It uses a positive buck-boost regulator that automatically
transitions from one mode to the other. The method is a combination of buck, boost, and buck-boost
modes. Basic analytical studies have been made and are presented. In the proposed method, instead of an
immediate transition from buck to boost mode, intermediate mixture modes consisting of several buck
modes followed by several boost modes are utilized to distribute the voltage transients. This improves the
efficiency and the ripple content in the output voltage. Theoretical considerations are available.
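The mode-selection idea above can be sketched with the ideal continuous-conduction duty-cycle relations: buck Vout = D·Vin, boost Vout = Vin/(1-D), buck-boost Vout = Vin·D/(1-D). The ±5% transition band around Vin ≈ Vout is an illustrative choice, not a value from the paper:

```python
def select_mode(vin, vout, band=0.05):
    """Pick the operating mode and ideal CCM duty cycle for a
    positive buck-boost regulator. Near vin ~= vout a buck-boost
    (or intermediate buck/boost combination) mode is used to
    avoid an abrupt buck-to-boost transition."""
    if vin > vout * (1 + band):          # input well above output
        return "buck", vout / vin        # Vout = D * Vin
    if vin < vout * (1 - band):          # input well below output
        return "boost", 1 - vin / vout   # Vout = Vin / (1 - D)
    return "buck-boost", vout / (vin + vout)  # Vout = Vin * D / (1 - D)

# Sweep a Li-ion-style input against a 5 V output target
for vin in (6.0, 5.0, 3.4):
    mode, duty = select_mode(vin, 5.0)
    print(f"Vin={vin:.1f} V -> {mode}, D={duty:.2f}")
```

The paper's refinement replaces the single buck-boost region with interleaved buck and boost cycles, which the simple threshold above only approximates.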
A voltage-mode DC-DC buck converter with fast output voltage-tracking speed and wide output
voltage range
This paper presents a high-switching-frequency, wide-output-range DC-DC buck converter with a
novel compensated error amplifier. The converter has been fabricated in a standard 0.35 µm CMOS
process. The DC-DC converter has good stability when operated over the wide output range. The converter
achieves a high output voltage-tracking speed of 8.8 µs/V for up-tracking and 6 µs/V for down-tracking.
Besides, the recovery times are less than 8 µs for both load step-up and step-down. Therefore, the converter
is suitable for the wide output range, especially on occasions requiring fast voltage-tracking speed.
Reduction of Equivalent Series Inductor Effect in Delay-Ripple Reshaped Constant On-Time
Control for Buck Converter with Multilayer Ceramic Capacitors
The DZC-NME technique is proposed in this paper to overcome the small ESR value and large
ESL effect in the COT buck converter. Even though the MLCC is used as the output capacitor without
conventional ESR compensation, the DZC technique can still increase the system stability, since the
compensator contributes a phase lead similar to a PD controller. Besides, the differential structure benefits
the noise margin, decreasing jitter and EMI effects. On the other hand, the NME technique eliminates the
effect of ESL to enhance the noise immunity. Furthermore, using the reliable on-time timer with an
improved linear function, the near-constant switching frequency, which is adjusted to accommodate a
variable input voltage, can further confirm the system stability. Because the MLCC has an extremely small
RESR value for general applications, the output ripple can be greatly reduced and thus the switching power
loss can be decreased, compared to the large RESR used to compensate conventional ripple-based control.
Experimental results verify the correct and effective functioning of the DZC and NME techniques in the
strict case of a small RESR of 1 mΩ and a large VESL of 40 mV. Without sacrificing the inherent
advantages of COT control, the DZC-NME technique for MLCC applications can ensure a low ripple of
10 mV and a high efficiency of 91%.
The design of single-inductor multiple-output DC-DC converters is important for future low-power portable systems. The key issues and some possible solutions have been described in this chapter. The examples provided demonstrate the feasibility and the limits of the various approaches. Given the increasing diffusion of complex portable systems, the area is expected to see great development in the near future.
III. CONCLUSION
A highly stable flying-capacitor buck-boost converter that applies a novel pseudo-current dynamic acceleration method is described in detail. The circuit design of the proposed converter is simple and can be implemented in a commercial CMOS fabrication process. The input voltage is 5 V, the output voltage range is 2.3 V, and the switching frequency is 1 MHz; the boost ratio of the positive output voltage is 2D and the power conversion efficiency reaches 80%. The proposed converter switches between buck and boost modes merely by changing the duty cycle, with a fast transient response time.
REFERENCES
[1].
P.-C. Huang, W.-Q. Wu, H.-H. Ho, and K.-H. Chen, Hybrid buck-boost feedforward and reduced average inductor current techniques in fast line transient and high-efficiency buck-boost converter, IEEE Trans. Power Electron., vol. 25, no. 3, pp. 719-730, Mar. 2010.
[2].
Y.-H. Lee et al., Power-tracking embedded buck-boost converter with fast dynamic voltage scaling for the SoC system, IEEE Trans. Power Electron., vol. 27, no. 3, pp. 1271-1282, Mar. 2012.
[3].
W.-C. Chen et al., Reduction of equivalent series inductor effect in delay-ripple reshaped constant on-time control for buck converter with multi-layer ceramic capacitors, in Proc. IEEE ECCE, Sep. 2012, pp. 755-758.
[4]. A. Chakraborty, A. Khaligh, A. Emadi, and A. Pfaelzer, Digital combination of buck and boost converters to control a positive buck-boost converter, in Proc. IEEE Power Electron. Spec. Conf., Jun. 2006, vol. 1, pp. 1-6.
[5]. Yong-Xiao Liu, Jin-Bin Zhao, and Ke-Qing Qu, Fast transient buck converter using a hysteresis PWM controller, Journal of Power Electronics, vol. 13, no. 6, pp. 991-999, November 2013.
[6]. Yu-Huei Lee, Chao-Chang Chiu, and Ke-Horng Chen, A near-optimum dynamic voltage scaling (DVS) in 65-nm energy-efficient power management with frequency-based control (FBC) for SoC system, IEEE Journal of Solid-State Circuits, vol. 47, no. 11, November 2012.
[7].
Yang Miao, Zhang Baixue, Cao Yun, Sun Fengfeng, and Sun Weifeng, A voltage-mode DC-DC buck converter with fast output voltage-tracking speed and wide output voltage range, Journal of Semiconductors, vol. 35, no. 5, May 2014.
[8]. Pengfei Li, Deepak Bhatia, Lin Xue, and Rizwan Bashirullah, A 90-240 MHz hysteretic controlled DC-DC buck converter with digital phase locked loop synchronization, IEEE Journal of Solid-State Circuits, vol. 46, no. 9, September 2011.
S.Selvam
Dr.S.Thabasu Kannan
R.Ganesh
Abstract—Image compression refers to representing an image with as few bits as possible while preserving the level of quality and intelligibility required for a particular application. The present work aims at developing an efficient algorithm for the compression and storage of two-tone images.
In this paper, an efficient coding technique termed line-skipping coding for two-tone images is proposed. The technique exploits the 2-D correlation present in the image. The new algorithm is devised to reduce memory storage by fifty to seventy-five percent.
Keywords—2-D correlation, 1LSC, DCT, RLC.
I. INTRODUCTION
The need for compression and electronic storage of two-tone images such as line diagrams, weather maps, and printed documents has been increasing rapidly, with endless applications ranging from the preservation of old manuscripts to the paperless office and the electronic library. For text such as English and Arabic, good-quality optical character readers are available and provide good compression, but they accept only limited fonts and character styles. For line diagrams, and for text whose OCRs are not readily available, the material has to be treated as an image. Electronic storage of such images requires a very large amount of memory. To reduce the memory requirements, and hence the cost of storage, efficient coding techniques are used. A large number of coding techniques have been proposed and studied by different researchers. These techniques are broadly classified into two categories: lossless and lossy.
Lossless techniques do not introduce any distortion; from the coded bit stream, the digitized original image can be reconstructed exactly. Lossy techniques introduce some distortion into the reconstructed image while achieving high compression and retaining image usability.
A scan line of a two-tone image consists of runs of black pixels separated by runs of white pixels. Spatially close pixels are significantly correlated, and source coding techniques exploit this correlation either along a single scan line or across many scan lines. The simplest and most commonly used technique is run-length coding, which exploits the correlation along a single scan line to code the runs of black or white pixels. More complex techniques exploit the correlation across many scan lines to give better compression, although this is achieved at the cost of increased system complexity.
In this paper, a very simple and efficient coding technique termed skip-line coding for two-tone images is proposed. The technique exploits the 2-D correlation present in the image and is based on the assumption that if there is a very high degree of correlation between successive scan lines, there is no need to code each of them: only one need be coded and the others may be skipped. While decoding, skipped lines are taken to be identical to the previous line. This reduces the storage requirement significantly. The performance of this technique is compared with run-length coding. This paper also considers Discrete Cosine Transform based compression so as to provide a comparative study of its performance.
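The skip-line idea described above can be sketched in a few lines of Python; the skip-flag encoding and the row format are illustrative choices for exposition, not the paper's exact bitstream.

```python
# Minimal sketch of skip-line coding for a two-tone image, assuming the
# image is a list of rows of 0/1 pixels (representation is illustrative).

def skip_line_encode(image):
    """Encode rows; a row identical to the previous one is stored as 'S' (skip)."""
    encoded = []
    previous = None
    for row in image:
        if row == previous:
            encoded.append('S')          # skip flag: decoder repeats previous row
        else:
            encoded.append(list(row))    # store the row itself
        previous = row
    return encoded

def skip_line_decode(encoded):
    """Rebuild the image by replacing each skip flag with a copy of the previous row."""
    image = []
    for item in encoded:
        if item == 'S':
            image.append(list(image[-1]))
        else:
            image.append(list(item))
    return image

# Highly correlated successive scan lines compress well:
img = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1], [1, 0, 0, 1]]
code = skip_line_encode(img)
print(code)   # [[0, 0, 1, 1], 'S', 'S', [1, 0, 0, 1]]
```

When many consecutive scan lines are identical, each skipped line costs only a one-symbol flag instead of a full row, which is where the claimed storage reduction comes from.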
2. Existing System
Image data compression is one of the major areas of research in image processing, and several algorithms have been designed for the compression of images. Here two methods, considered to represent the existing system, are explained:
i)
Block Truncation coding
ii)
Run Length Coding
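As background, the run-length coding named above can be sketched as follows; the convention of starting each line with a (possibly empty) white run is a common one and is assumed here for illustration.

```python
# A minimal run-length coder for a single scan line of a two-tone image,
# assuming each line starts with a (possibly zero-length) white run.

def rle_encode(line):
    """Return run lengths of alternating pixel values, starting with white (0)."""
    runs = []
    current, count = 0, 0
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)
            current, count = pixel, 1
    runs.append(count)
    return runs

def rle_decode(runs):
    """Expand alternating white/black run lengths back into pixels."""
    line, value = [], 0
    for count in runs:
        line.extend([value] * count)
        value = 1 - value
    return line

line = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
runs = rle_encode(line)
print(runs)   # [3, 2, 1, 4]
```

In practice the run lengths themselves would then be entropy coded (e.g. with modified Huffman codes), but the alternating-run structure above is the core of the technique.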
Each block of 8 × 8 pixels is then transformed using the DCT into an array of 8 × 8 coefficients.
The first coefficient (0,0) of every block is called the DC coefficient, while the remaining coefficients are called AC coefficients; the low-frequency DCT AC coefficients are as shown in Figure 3.
In other words, the higher-frequency coefficients contain relatively less crucial detail. All the coefficients are quantized using a uniform midstep quantizer and rounded to the nearest integer, as expressed in the equation
C(u,v) = floor( (F(u,v) + Q(u,v)/2) / Q(u,v) )
Where, Q(u,v) = quantization step size for coefficient (u,v);
C(u,v) = rounded value of the quantized coefficient;
Each application can have its own quantization tables, usually designed to provide the best possible reconstructed image quality. To obtain good subjective image quality, the DC and low-frequency AC coefficients are quantized using small step sizes. There is therefore a trade-off between the quantization step size (i.e., image quality) and the compression achieved: the smaller the step size, the better the image quality and the smaller the compression ratio. The correlation between the DCT coefficients of adjacent blocks is exploited using DPCM to achieve further compression.
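A minimal sketch of the quantization step just described, assuming the rounded-division form of the equation above; the step-size table here is illustrative, not a standard JPEG table.

```python
# Sketch of uniform quantization of DCT coefficients:
# C(u,v) = floor((F(u,v) + Q(u,v)/2) / Q(u,v)).
# Note: floor division gives positively biased rounding for negative
# coefficients; a full codec would round symmetrically.

def quantize(F, Q):
    """Round each DCT coefficient to the nearest multiple of its step size."""
    return [[(f + q // 2) // q for f, q in zip(frow, qrow)]
            for frow, qrow in zip(F, Q)]

def dequantize(C, Q):
    """Approximate reconstruction: scale the indices back by the step sizes."""
    return [[c * q for c, q in zip(crow, qrow)] for crow, qrow in zip(C, Q)]

F = [[140, 30], [-6, 2]]    # a 2x2 corner of a DCT block (illustrative values)
Q = [[16, 11], [12, 14]]    # smaller steps would be used for low frequencies
C = quantize(F, Q)
print(C)                    # [[9, 3], [0, 0]] -- indices stored/transmitted
print(dequantize(C, Q))     # [[144, 33], [0, 0]] -- reconstructed coefficients
```

The example shows the lossy step concretely: small high-frequency coefficients quantize to zero and are discarded, while the reconstruction only approximates the original values.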
The DCT coefficients of each block are reordered into a 1-D sequence using the zigzag scan shown in Figure 4.
The scheme generates long runs of zero-valued coefficients (corresponding to the high-frequency AC coefficients) in most images. The zigzag-ordered coefficients are run-length and Huffman coded, i.e., a code is stored or transmitted for each DC coefficient and each non-zero AC coefficient, indicating its magnitude and position in the zigzag order.
Finally, the image blocks are raster scanned to generate the image bit stream.
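The zigzag reordering described above can be sketched as follows; the indices are generated by walking the anti-diagonals of the block, which reproduces the standard JPEG-style order.

```python
# Zigzag reordering: groups low-frequency coefficients first by walking
# the anti-diagonals (constant row + col) of an n x n block, alternating
# direction on each diagonal.

def zigzag_indices(n):
    """Return (row, col) pairs in zigzag order for an n x n block."""
    order = []
    for s in range(2 * n - 1):                       # s = row + col
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])  # alternate direction
    return order

def zigzag_scan(block):
    return [block[i][j] for i, j in zigzag_indices(len(block))]

# A 3x3 block labeled in zigzag order scans to 1..9:
block = [[1, 2, 6],
         [3, 5, 7],
         [4, 8, 9]]
print(zigzag_scan(block))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

For an 8 × 8 block the same function yields the familiar 64-entry zigzag sequence, pushing the mostly zero high-frequency coefficients to the tail where run-length coding is most effective.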
The image is reconstructed by performing the decompression operations in the reverse order. Each block of 8 × 8 pixels is transformed back to the spatial domain using the inverse discrete cosine transform (IDCT) of equation (1).
f(i,j) = (1/4) Σ(u=0..7) Σ(v=0..7) C(u) C(v) F(u,v) cos[(2i+1)uπ/16] cos[(2j+1)vπ/16] ---(1)
Where, f(i,j) = (i,j)th pixel in the reconstructed image block.
The baseline sequential algorithm is used to reconstruct the image in its original size at a specific image
quality (SNR resolution).
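A direct (unoptimized) implementation of equation (1) can be sketched as follows, assuming the usual JPEG convention C(u) = 1/√2 for u = 0 and C(u) = 1 otherwise.

```python
import math

# Direct 8x8 IDCT of equation (1):
# f(i,j) = (1/4) sum_u sum_v C(u) C(v) F(u,v)
#          * cos((2i+1)u*pi/16) * cos((2j+1)v*pi/16)

def c(u):
    return 1 / math.sqrt(2) if u == 0 else 1.0

def idct_8x8(F):
    f = [[0.0] * 8 for _ in range(8)]
    for i in range(8):
        for j in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    s += (c(u) * c(v) * F[u][v]
                          * math.cos((2 * i + 1) * u * math.pi / 16)
                          * math.cos((2 * j + 1) * v * math.pi / 16))
            f[i][j] = s / 4.0
    return f

# A block whose only nonzero coefficient is the DC term reconstructs to a
# constant block of value F(0,0)/8:
F = [[80.0] + [0.0] * 7] + [[0.0] * 8 for _ in range(7)]
f = idct_8x8(F)
print(round(f[0][0], 6))   # 10.0
```

Production codecs use fast separable IDCT algorithms, but this quadruple loop follows equation (1) term by term and is useful for checking the reconstruction.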
4. Performance Evaluation
The proposed line-skipping coding method was tested and its results compared with the run-length coding method. The quality of the image after decompression was quantified using the signal-to-noise ratio.
The proposed method gives higher compression than run-length coding and Discrete Cosine Transform coding, offering a maximum compression of 98% to 99% at the cost of some relative degradation in the output.
Table 1 gives the performance of all the compression algorithms in terms of compression ratio and signal-to-noise ratio.
[2] J. Polec and J. Pavlovicova, A new version of region-based BTC, in EUROCON 2001, International Conference on Trends in Communications, vol. 1, pp. 88-0, 4-7 July 2001.
[3]
C. K. Yang and W. H. Tsai, Improving block truncation coding by line and edge information and adaptive bit plane selection for gray-scale image compression, Pattern Recognition Letters, vol. 16, no. 1, pp. 67-75, 1995.
[4]
T. M. Amarunnishad, V. K. Govindan, and Abraham T. Mathew, Improved BTC image compression using a fuzzy complement edge operator, Signal Processing, vol. 88, issue 12, pp. 2989-2997, Elsevier, 2008.
[5]
[6] S. Selvam and S. Thabasu Kannan, Image Retrieval Optimization with Genetic Algorithm, International Journal of Applied Engineering Research (IJAER), Special Issue, vol. 10, no. 55, 2015. IJAER is indexed by SCOPUS, EBSCOhost, Google Scholar, JournalSeek, J-Gate, etc., and listed in Anna University Chennai Annexure II-2014 (Sl. No. 8565).
[7]
[8]
S. Selvam and S. Thabasu Kannan, An Empirical Review on Enhancing the Robustness of Multiresolution Watermarking, International Journal of Applied Engineering Research (IJAER), Special Issue, vol. 10, no. 82, 2015, ISSN 0973-4562. IJAER is indexed by SCOPUS, EBSCOhost, Google Scholar, JournalSeek, J-Gate, etc., and listed in Anna University Chennai Annexure II-2014 (Sl. No. 8565).
[9]
U. Y. Desai, M. M. Mizuki, I. Masaki, and B. K. P. Horn, Edge and mean based image compression, 1996.
[10]
R. Redondo and G. Cristobal, Lossless chain coder for gray edge images, in Proc. 2003 International Conference on Image Processing (ICIP 2003), vol. 2, pp. II-201-4, 14-17 Sept. 2003.
[11]
D. E. Tamir, K. Phillip, and Abdul-Karim, Efficient chain-code encoding for segmentation-based image compression, in Proc. Data Compression Conference (DCC '96), p. 455, Mar./Apr. 1996.
[12] H. Sung and W. Y. Kuo, A skip-line with threshold algorithm for binary image compression, in Proc. 2010 3rd International Congress on Image and Signal Processing (CISP), vol. 2, pp. 515-523, IEEE, 2010.
[13] E. J. Delp, M. Saenz, and Salma, Block Truncation Coding (BTC), 2010.
[15]
Rafel C. Gonzalez and Richard E. Woods, Digital Image Processing, Second Edition, Pearson
Education Asia, 2005.
I. INTRODUCTION
Data observation and model simulation are developing rapidly in the geosciences, and the data from these systems have high dimensionality and huge volumes. Bulk observations of existing attributes/variables are continuously produced by large-scale observation systems. These data are compressed for storage, and newly arrived data must be constantly compressed and appended to the present data so that they are integrated with the existing data as a whole. This update procedure should complete in a short time and be continually applicable to the next piece of fresh data. The compression and storage must preserve the reliability of the spatial-temporal reference (STR) of the data, balance data accuracy against compression performance, and support efficient indexing and query analysis. The explosion of both data volume and dimensionality makes storage, management, query, and processing daunting for existing solutions. Conventional methods use data indexes to speed up query and storage, but as the dimensionality grows, the data segmentation and the associated data structures become complex and inefficient. Big-data or data-intensive computing solutions use parallel data I/O and computation to speed up data access and updating; on the other hand, large computers and complex computation architectures are required to provide the needed I/O bandwidth and computation power. The situation becomes worse when continuous data compression, appending, and updating are necessary. Within the current data storage and analysis frameworks, neither the conventional methods nor the big-data or data-intensive computing solutions are suited to dynamic data appending and updating, and finding alternative data structures that fit the underlying storage architecture may be difficult. Current solutions for continuous data processing need different data structures for the management, query, and analysis stages, requiring numerous difficult processing steps before the final stage is reached. The frequent data transfer between different data structures slows down the processing throughput.
The tensor is a vital tool for multidimensional data processing and analysis.
Fig.1.System architecture
4. IMPLEMENTATION OF MODULES
4.1 Geospatial Data (Spatial Index: R-Tree)
A familiar way to look for objects based on their spatial position is a location-based search, for instance finding all restaurants within 5 km of the current location, or all colleges within the zip code 651101. A spatial object can be represented by an object id, a minimum bounding rectangle (MBR), and other attributes, so the space can be represented by a collection of spatial objects. A query can then be represented as another rectangle: it asks for the spatial objects whose MBRs overlap the query rectangle.
The R-Tree is a spatial indexing method that, given a query rectangle, quickly locates the matching spatial objects. The concept is related to the B-Tree: spatial objects that are close to each other are grouped to form a tree whose intermediate nodes contain nearby objects. Since the MBR of a parent node contains all the MBRs of its children, objects are close by if their parent's MBR is minimized.
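The MBR overlap test that drives an R-Tree range query can be sketched as follows; a real R-Tree prunes whole subtrees whose node MBR fails this same test, but a naive linear scan is enough to show the idea (names and coordinates below are illustrative).

```python
# Sketch of the MBR intersection test behind an R-Tree range query.
# A rectangle is (xmin, ymin, xmax, ymax); in an R-Tree, a subtree is
# skipped entirely when its node MBR fails this same test.

def mbr_intersects(a, b):
    """True if two axis-aligned rectangles overlap (including touching)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def range_query(objects, query_rect):
    """Naive fallback: ids of objects whose MBR intersects the query rectangle."""
    return [oid for oid, mbr in objects if mbr_intersects(mbr, query_rect)]

restaurants = [
    ("r1", (0, 0, 2, 2)),
    ("r2", (6, 6, 8, 8)),
    ("r3", (1, 4, 3, 6)),
]
print(range_query(restaurants, (2, 2, 5, 5)))   # ['r1', 'r3']
```

The hierarchical grouping is what turns this linear scan into a logarithmic-time search: if the parent MBR of a group misses the query rectangle, none of its children can match.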
[1] A. Aji, F. Wang, H. Vo, R. Lee, Q. Liu, X. Zhang, and J. Saltz, Hadoop GIS: A high performance spatial data warehousing system over MapReduce, Proc. VLDB Endowment, vol. 6, no. 11, pp. 1009-1020, 2013.
[2]
[3]
[4]
[5]
G. Cugola and A. Margara, Processing flows of information: From data stream to complex event processing, ACM Comput. Surv., vol. 44, no. 3, pp. 15:1-15:62, 2012.
[6]
M. L. Yiu, H. Lu, N. Mamoulis, and M. Vaitis, Ranking spatial data by quality preferences, IEEE Trans. Knowl. Data Eng., vol. 23, no. 3, pp. 433-446, Mar. 2011.
H. Plattner and A. Zeier, In-Memory Data Management: An Inflection Point for Enterprise Applications. New York, NY, USA: Springer, 2011.
[9]
[10]
I. Arad and Z. Landau, Quantum computation and the evaluation of tensor networks, SIAM J. Comput., vol. 39, no. 7, pp. 3089-3121, 2010.
I. INTRODUCTION
A Mobile Ad-hoc Network (MANET) is a collection of autonomous mobile nodes that can communicate with each other through radio waves. It has many free or autonomous nodes, often composed of mobile devices or other mobile pieces, that can organize themselves in various ways and operate without strict top-down network administration. A MANET is a network of mobile routers coupled by wireless links, the union of which forms an arbitrary topology. The routers are free to move randomly and organize themselves arbitrarily, so the network's wireless topology may change rapidly and unpredictably. In a MANET, the performance of the network depends on node characteristics such as effectiveness, energy efficiency, and transmission speed; performance is high if the nodes in the network satisfy these characteristics.
MANET characteristics: A MANET is autonomous in the sense that each node in the network acts as both host and router. During data transmission, if the destination node is out of range, multi-hop routing is used. Operation in a MANET is distributed, nodes can join or leave the network at any time, and the topology is dynamic.
Routing protocols: A routing protocol is a set of rules that regulates the transmission of packets from source to destination. These characteristics are maintained by different routing protocols. In MANETs, different types of protocols are used to find the shortest path, the status of a node, and the energy condition of a node.
II NEIGHBOR DISCOVERY PROTOCOL
Although central servers can be employed, the potential of proximity-based applications is better exploited by the capability of discovering nearby mobile devices in the wireless communication vicinity, for several reasons. Users can enjoy local neighbor discovery at any time, even when the centralized service is unavailable for unexpected reasons; a single neighbor discovery protocol can benefit various applications by providing more flexibility than the centralized approach; and communications between a central server and different mobile nodes may introduce problems such as unnecessary transmission overhead, congestion, and unpredictable response delay, whereas searching for nearby mobile devices locally is entirely free of charge. A distributed neighbor discovery protocol for mobile wireless networks is therefore greatly needed in practice. There are usually three challenges in designing such a neighbor discovery protocol.
Neighbor discovery is nontrivial for several reasons. It needs to deal with collisions: ideally, a neighbor discovery algorithm should minimize the probability of collisions and, therefore, the time to discover neighbors. In many realistic settings, nodes have no knowledge of the number of neighbors, which makes coping with collisions even harder. And when nodes do not have access to a global clock, they have to operate asynchronously and still be able to discover their neighbors efficiently.
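As a toy illustration of the collision problem, a birthday-protocol-style slotted discovery round can be simulated; this is a simplified model for exposition, not the protocol proposed here.

```python
import random

# Illustrative slotted random-access (birthday-protocol-style) discovery:
# in each slot a node transmits with probability p, and a transmission is
# heard only if exactly one node transmits (otherwise it collides).

def discover(n_nodes, p, n_slots, seed=1):
    random.seed(seed)
    discovered = set()
    for _ in range(n_slots):
        talkers = [i for i in range(n_nodes) if random.random() < p]
        if len(talkers) == 1:            # exactly one talker: no collision
            discovered.add(talkers[0])
    return discovered

# With p ~ 1/n the chance of a collision-free slot is near its maximum
# (about 1/e), so most nodes are discovered within a few hundred slots:
found = discover(n_nodes=10, p=0.1, n_slots=400)
print(len(found))
```

The simulation makes the stated difficulty concrete: choosing p well requires knowing the number of neighbors, which is exactly the information the nodes lack.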
Compared with the existing system, the energy consumption is reduced from 60% to 30%; similarly, the performance level is increased up to 90%.
VIII CONCLUSION
In MANETs, energy consumption and performance are the main challenges. Here, efficient routing protocols were used to improve the performance. Energy consumption is decreased, so the lifetime of the network is improved, and the performance level is increased up to 90%. The performance gain increases in both the symmetric and the asymmetric case.
IX FUTURE WORK
We have identified the detour path problem and the traffic concentration problem of ZTR. These are fundamental problems of tree routing protocols in general, and they degrade overall network performance. To overcome these problems, we suggest STR, which uses the neighbor table originally defined in the ZigBee standard. In STR, each node can locate the best next-hop node based on the remaining tree hops to the destination. The analyses show that the one-hop neighbor information in STR reduces the traffic load concentrated on the tree links as well as provides an efficient routing path.
REFERENCES
[1] S. Vasudevan, M. Adler, D. Goeckel, and D. Towsley, Efficient algorithms for neighbor discovery in wireless networks, IEEE/ACM Trans. Netw., vol. 21, no. 1, pp. 69-83, Feb. 2013.
X. Zhang and K. G. Shin, E-MiLi: Energy-minimizing idle listening in wireless networks, IEEE Trans. Mobile Comput., vol. 11, no. 9, pp. 1441-1454, Sep. 2012.
[3]
W. Zeng et al., Neighbor discovery in wireless networks with multi-packet reception, in Proc. MobiHoc, 2011, Art. no. 3.
[6]
R. Khalili, D. Goeckel, D. F. Towsley, and A. Swami, Neighbor discovery with reception status feedback to transmitters, in Proc. IEEE COM, 2010, pp. 1-9.
[7]
M. J. McGlynn and S. A. Borbash, Birthday protocols for low energy deployment and flexible neighbor discovery in ad hoc wireless networks, in Proc. MobiHoc, 2001, pp. 137-145.
[8]
[9]
[10]
S. Bitan and T. Etzion, Constructions for optimal constant weight cyclically permutable codes and difference families, IEEE Trans. Inf. Theory, vol. 41, no. 1, pp. 77-87, Jan. 1995.
I. INTRODUCTION
Electrical energy is the most efficient form of energy, and modern society is heavily dependent on the electric supply; life without electricity is hard to imagine. The quality of the electric power is very important for the efficient functioning of power system components and end-user equipment. The term power quality has become most important in the power sector, and both the electric power supply company and the end users are concerned about it. The electric power system is affected by various problems such as transients, noise, and voltage sag/swell, which lead to the production of harmonics and affect the quality of power delivered to the end user [1]. Harmonics may exist in voltage or current waveforms as integral multiples of the fundamental frequency and do not contribute to active power delivery. The quality of power is affected whenever there is any deviation in voltage, current, or frequency.
The main effect of these problems is the production of harmonics. The presence of harmonics deteriorates the quality of power and may damage end-user equipment: harmonics cause heating of underground cables and insulation failure, increase losses, and reduce the lifetime of equipment. The most effective solution for improving power quality is the use of filters to reduce harmonics. Different filter topologies exist in the literature: active, passive, and hybrid.
The passive filter is used to compensate the current harmonics, while the voltage harmonics are compensated using the active filter. The active filter can regulate the voltage at the load but cannot reduce the current harmonics in the system [2-3]. The hybrid filter is the combination of an active filter and a passive filter. Among the various combinations, the series APF with a shunt-connected passive filter (SHAPF) is widely used. To overcome the problems of both passive and active power filters, Series Hybrid Active Power Filters (SHAPF) have been extensively used; they provide a cost-effective solution for nonlinear load compensation. The performance of the SHAPF depends on a proper reference generation algorithm.
A variety of configurations and control strategies have been proposed to reduce inverter capacity [4-6], and many approaches have been published. The instantaneous reactive power theory has had a great impact on harmonic isolation and on reference voltage generation. The instantaneous active and reactive powers each have an average component and an oscillating component.
This paper is organized as follows. First, a system configuration is presented in section II. The
generalized definition of instantaneous active, reactive and apparent power quantity is presented in section
III-A. The control strategy for the Series Active Filter is presented in section III-B. The simulation results
The turns ratio of the transformer should be high in order to reduce the amplitude of the inverter output and the voltage induced across the primary winding. The selection of the transformer turns ratio also affects the performance of the ripple filter connected at the output of the PWM inverter. The series active filter in this arrangement is controlled as an active impedance, acting as a harmonic voltage source that offers zero impedance at the fundamental frequency and high impedance at all the desired harmonic frequencies.
III CONTROL SCHEME
A. Instantaneous Reactive Power Theory
The generalized theory of instantaneous reactive power in three-phase circuits, also known as the instantaneous power theory or p-q theory, was given by Akagi, Kanazawa, and Nabae in 1983. The control strategy presented in this section is capable of compensating the source current harmonics and balancing the load voltages. It deals with instantaneous power and is classified into two groups: the first is developed by transforming the a-b-c phases to three orthogonal axes, known as p-q theory and based on the a-b-c to α-β-0 transformation, and the second works directly on the a-b-c phases. The main merit of this theory is that it is valid for both steady-state and transient operation, and it allows the active filter to be controlled in real time. A further advantage of this technique is that the calculation is simple, requiring only algebraic operations.
The p-q theory consists of an algebraic transformation (the Clarke transformation) of the three-phase voltages and currents from the a-b-c coordinates to the α-β-0 coordinates, followed by the calculation of the p-q theory instantaneous power components [10-11]. The three-phase generic instantaneous line currents can be transformed onto the α-β-0 axes, and on applying the α-β-0 transformation, the zero-sequence component can be separated and eliminated.
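The Clarke transformation itself can be sketched as follows, using the power-invariant form; this is a generic textbook formulation for illustration, not the paper's exact implementation.

```python
import math

# Power-invariant Clarke (a-b-c to alpha-beta-0) transform used by the
# p-q theory; input is one sample of the three phase quantities.

K = math.sqrt(2.0 / 3.0)

def clarke(a, b, c):
    alpha = K * (a - 0.5 * b - 0.5 * c)
    beta  = K * (math.sqrt(3.0) / 2.0) * (b - c)
    zero  = K * (1.0 / math.sqrt(2.0)) * (a + b + c)
    return alpha, beta, zero

# A balanced set (a + b + c = 0) has no zero-sequence component:
a, b, c = 1.0, -0.5, -0.5
alpha, beta, zero = clarke(a, b, c)
print(round(alpha, 6), round(beta, 6), round(zero, 6))   # 1.224745 0.0 0.0
```

This is the separation the text describes: for balanced three-phase quantities the zero-sequence term vanishes, leaving only the α and β components for the power calculation.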
B. Control Strategy
The control strategy plays a very important role in the performance of the system.
The instantaneous real power and the instantaneous imaginary power each contain both an average and an oscillating component. The average components of the real and reactive power are denoted p̄ and q̄, and the oscillating components are denoted p̃ and q̃; the real and imaginary power can be reconstructed from these average and oscillating components.
The zero-sequence voltage is eliminated, and only the α and β coordinates are considered. The α and β voltages corresponding to the oscillating components of the real power and the reactive power are calculated in (6).
The reference voltage, calculated in (7), is used to compensate the harmonic voltage; it is obtained by the inverse Clarke transformation.
The reference voltage is compared with the source voltage, and the output is given to the comparator.
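The instantaneous powers used above can be computed directly from the α-β components; the sketch below uses one common sign convention for q (conventions differ between references), with illustrative sample values.

```python
# Instantaneous real power p and imaginary power q of the p-q theory,
# computed from alpha-beta voltage and current samples. The sign of q
# follows one common convention; some references use the opposite sign.

def pq(v_alpha, v_beta, i_alpha, i_beta):
    p = v_alpha * i_alpha + v_beta * i_beta   # instantaneous real power
    q = v_beta * i_alpha - v_alpha * i_beta   # instantaneous imaginary power
    return p, q

p, q = pq(1.2, 0.0, 0.6, 0.8)
print(round(p, 6), round(q, 6))   # 0.72 -0.96
```

Filtering p and q then yields the average parts p̄, q̄ and the oscillating parts p̃, q̃ from which the compensation reference is built.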
The load voltage and load current obtained when the system is in open loop without any filter are shown in Figure 2 and Figure 3. They contain substantial harmonics, which must be eliminated with the help of filters.
The FFT analysis is carried out for the system without any filter; the THD value is 34.64%, which is high, as shown in Figure 4.
The FFT analysis is then carried out for the system with the passive filter; the THD value, shown in Figure 7, is 3.14%, which is lower than for the system without a filter.
The load current and load voltage obtained with both the active and passive filters are shown in Figure 8 and Figure 9; the harmonic content with the series-connected active filter and shunt passive filter is comparatively lower than in the other cases.
Finally, the FFT analysis is carried out for the system with the SHAPF; the THD value, shown in Figure 10, is 0.24%, which is lower than with the passive filter alone.
The table gives the THD values for the system (a) without a filter, (b) with the passive filter, and (c) with the active and passive filters, obtained with an RL load. From the table, the THD value for the system with both active and passive filters is very much lower than for the system with no filter or with the passive filter only.
The voltage and current harmonics produced in the system are eliminated by the active and passive filters; the active filter is connected in series and the passive filter in parallel to obtain the necessary output.
V.CONCLUSION
The demand for electric power is increasing at an exponential rate, and at the same time the quality of the delivered power has become the most prominent issue in the power sector. Thus, reducing harmonics and improving the power factor of the system are of utmost importance. In this project, a solution for improving electric power quality using an active power filter is discussed. Most of the loads connected to the system are non-linear and are the major source of harmonics in the system. A hybrid power filter with a series-connected APF and a shunt-connected passive filter is used. The simulation is also carried out with an unbalanced load, and it is found that the APF improves the system behavior by reducing the harmonics. It is therefore concluded that the hybrid filter consisting of a series APF and a shunt passive filter is a feasible, economic solution for improving power quality in the electric power system.
REFERENCES
[1] W. E. Reid, "Power quality issues-standards and guidelines", IEEE Trans. Ind. Appl., vol. 32, no. 3, pp. 625-632, 1996.
[2]
F. Z. Peng and D. J. Adams, Harmonic sources and filtering approaches, in Proc. Industry Applications Conf., vol. 1, Oct. 1999.
[3]
[4]
Salmeron, P.; Litran, S.P., A Control Strategy for Hybrid Power Filter to Compensate Four-Wires
J. Tian, Q. Chen and B. Xie, Series Hybrid Active Power Filter based on Controllable Harmonic
Impedance, IET Journal of Power Electronics, 2012, Vol. 5, Issue 1, pp. 142-148.
[6]
H. Akagi, Y. Kanazawa, and A. Nabae, Instantaneous Power Theory and Application to Power Conditioning, IEEE Press, Wiley-Interscience, a John Wiley & Sons, Inc., publication.
[7]
[8]
M. F. Shousha, S. A. Zaid, and O. A. Mahgoub, Better performance for shunt active power filters, in Proc. Int. Conf. on Clean Electrical Power (ICCEP), IEEE, June 2011.
[9]
[10]
Bhattacharya, S., Cheng, P.-T., Divan, D.M.: Hybrid solutions for improving passive filter
performance in high power applications, IEEE Trans. Ind. Appl., 1997, 33, (3), pp. 732747
[11]
Mulla, M.A., Chudamani, R., Chowdhury, A novel control scheme for series hybrid active power
filter for mitigating source voltage unbalance and current harmonics. Presented at the Seventh Int.
Conf. on Industrial and Information Systems (ICIIS-2012) held at Indian Institute of Technology
Madras, Chennai, India, 0609 August 2012
I. INTRODUCTION
Transmission lines are a vital part of the electrical system, as they provide the path for transferring power between generation and load. Electrical power systems suffer unexpected transmission-line failures from various causes, such as natural events, physical accidents, equipment failure and misoperation, so the protection of transmission lines is an important element of the power system. Any fault that is not detected and isolated quickly can cascade into a system-wide disturbance, forcing the interconnected system to operate close to its limits. Initially, a decision-tree-based method was used for classifying faults in a single transmission line [1], with the voltage and current values obtained from both ends of the line. Support Vector Machines have been used for classifying and locating faults, but this approach suffers from problems such as increased steady-state current and voltage inversions, which act non-linearly during fault conditions, and it is not accurate [2][3]. Directional relays based on negative- or zero-sequence components, or on compensated post-fault voltages, are most commonly used [4]. These relays have the drawbacks of being unable to respond to all types of faults and of slow operating times. The major drawback of the conventional methods is that they fail to adapt to the dynamic conditions of the power system.
This paper demonstrates how Artificial Neural Networks (ANNs) can be used as an alternative to the conventional approach for identifying, classifying and locating the various types of faults, such as single line-to-ground, double line-to-ground and three-phase faults. ANNs provide a viable alternative because they can handle most situations under dynamic conditions. The training patterns to be learned by the ANN were generated using voltage and current samples for different faults at various locations along the transmission line.
The direction of a fault on a transmission line is determined by the phase angles of the instantaneous voltage and current phasors, but this does not determine the fault location. A directional relaying algorithm based on the phase angles between the positive-sequence components of fault voltages and currents was developed for various fault types in [5], but it does not identify the faulty phase or the distance to the fault point. Several papers use superimposed components, high-frequency signals or the wavelet transform for the detection and classification of various types of faults [6-10].
II. POWER SYSTEM NETWORK
The system considered is composed of a 220 kV transmission line of 80-120 km length, connected to a source at one end and a load at the other. Various types of faults, namely AG, BG, CG, ABG, BCG, ACG and ABC, are considered and their locations are found. The method analyzes, classifies and locates the faults using an Artificial Neural Network (ANN).
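A common way to pose this classification task for an ANN is to encode each fault label as a binary target over the three phases and ground; a minimal sketch, assuming this conventional encoding (the paper does not state its exact output coding):

```python
def fault_target(fault_type):
    """Map a fault label such as 'ABG' to a 4-bit target [A, B, C, G].

    Each bit is 1 when the corresponding phase (or ground) is involved.
    This [A, B, C, G] coding is a common convention, assumed here.
    """
    return [int(ch in fault_type) for ch in "ABCG"]

print(fault_target("BCG"))  # [0, 1, 1, 1]
print(fault_target("ABC"))  # three-phase fault, no ground: [1, 1, 1, 0]
```

With this coding, one ANN output neuron per bit suffices to represent all the fault types listed above.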
The data sets were sampled at equally spaced points throughout the original data. Both networks were trained with the Levenberg-Marquardt algorithm using the MATLAB Neural Network Toolbox [12]. This learning strategy converges to the desired output.
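The Levenberg-Marquardt strategy blends gradient descent with Gauss-Newton through an adaptive damping factor; a toy one-parameter least-squares fit illustrates the update rule (a sketch of the idea, not the MATLAB toolbox implementation):

```python
def lm_fit_slope(xs, ys, a=0.0, lam=1e-2, iters=50):
    """Fit y ~ a*x by Levenberg-Marquardt on a single parameter.

    Update: a += J^T r / (J^T J + lam), with residuals r_i = y_i - a*x_i
    and Jacobian entries J_i = x_i. The damping lam is decreased when a
    step reduces the error and increased otherwise.
    """
    def sse(a):
        return sum((y - a * x) ** 2 for x, y in zip(xs, ys))

    for _ in range(iters):
        jtj = sum(x * x for x in xs)
        jtr = sum(x * (y - a * x) for x, y in zip(xs, ys))
        step = jtr / (jtj + lam)
        if sse(a + step) < sse(a):
            a, lam = a + step, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 2.0                     # reject step, damp harder
    return a

print(round(lm_fit_slope([1, 2, 3], [2.0, 4.0, 6.0]), 4))  # converges near 2.0
```

Small lam makes the step Gauss-Newton-like (fast near the optimum); large lam makes it a short gradient step (robust far from it), which is why the method converges reliably for network training.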
IV. NETWORK SIMULATION
A fault is defined as a short circuit, an open circuit or an external disturbance occurring in the power system. Faults must be classified in order to obtain accurate results.
The circuit for classification is shown in Figure 3. During simulation, the fault type is selected and its location is specified; the neural network should then classify and locate the fault based on its training.
As shown in Figure 3, the fault type is selected and the location entered manually so that the network can classify and locate it. The fault selected in Figure 3 is BCG at a location of 112 km. The output, shown in Figure 4, classifies the fault correctly as BCG based on the training and locates it at 112.4624 km, giving an error of 0.4624 km.
The performance plot is shown in Figure 5; the best validation performance is 0.072101 at epoch 5. In the plot, the blue line represents training, the red line testing and the green line validation.
Table 3 provides the location values for the various fault types and their distances; the neural network should locate the exact position based on its training.
Figure 6 shows the voltage waveform for a BG fault: the magnitude of the phase B voltage is reduced. The next graph provides the RMS voltage value.
Figure 7 shows the current waveform: the magnitude of the phase B current is increased due to the fault.
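The RMS trace shown alongside the waveforms is computed from the instantaneous samples over a window; a minimal sketch:

```python
import math

def rms(samples):
    """Root-mean-square value of a list of instantaneous samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A sampled sine of peak 1.0 has RMS 1/sqrt(2), about 0.7071
wave = [math.sin(2 * math.pi * k / 100) for k in range(100)]
print(round(rms(wave), 4))
```

Sliding this computation along the waveform produces the RMS plot, which makes the voltage dip (or current rise) on the faulted phase easy to read off.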
CONCLUSION
Using an ANN, the fault classification, location and direction have been estimated. Faults of all types have been classified accurately. The fault location carries an error that could be reduced with more training iterations on the fault location values. The computational complexity is high because of the large training data, parameter selection and long training time. The proposed method is analyzed for a simple system; in future the model can be applied to complex power system networks. To reduce the computational complexity and improve the efficiency of the system, fuzzy logic schemes can be implemented.
REFERENCES
[1] Jamehbozorg, A., Shahrtash, S.M., A decision-tree-based method for fault classification in single-circuit transmission lines, IEEE Trans. Power Deliv., 2010, 25, (4), pp. 2190-2196.
[2] Parikh, U.B., Das, B., Maheshwari, R., Fault classification technique for series compensated transmission line using support vector machine, Int. J. Electr. Power Energy Syst., 2010, 32, (6), pp. 629-636.
[3] Ekici, S., Support vector machines for classification and locating of faults in transmission lines, Appl. Soft Comput., 2012, 12, pp. 1650-1658.
[4] Duan, J.D., Zhang, B.H., Luo, S.B., Zhou, Y., Transient-based ultra-high-speed directional protection using wavelet transforms for EHV transmission lines, IEEE/PES Transmission and Distribution Conf. and Exhibition: Asia and Pacific, 2005, pp. 1-6.
[5] O. A. S. Youssef, New algorithm to phase selection based on wavelet transforms, IEEE Trans. Power Del., Vol. 17, pp. 908-914, Oct. 2002.
[6] Xinzhou Dong, Wei Kong, and Tao Cui, Fault classification and faulted-phase selection based on the initial current traveling wave, IEEE Trans. Power Deliv., 2009, Vol. 24, No. 2, pp. 552-559.
[7] Dong, X., Dong, X., Zhang, Y., Guo, X., Ge, Y., Directional protective relaying based on polarity comparison of traveling wave by using wavelet transform, Autom. Electr. Power Syst., 2000, 7, pp. 11-15.
[8]
[9] D. Das, N. K. Singh, and A. K. Sinha, A comparison of Fourier transform and wavelet transform methods for detection and classification of faults on transmission lines, presented at the IEEE Power India Conf., India, 2006.
[10] S. M. Brahma, New fault-location method for a single multiterminal transmission line using synchronized phasor measurements, IEEE Trans. Power Deliv., Vol. 21, No. 3, pp. 1148-1153, July 2006.
[11] S. M. Brahma, Fault location scheme for a multi-terminal transmission line using synchronized voltage measurements, IEEE Trans. Power Deliv., Vol. 20, No. 2, pp. 1325-1331, April 2005.
[12] H. Demuth and M. Beale, Neural Network Toolbox: For Use with MATLAB, 2000.
3 Professor & Head, Department of Electrical and Electronics Engineering, Dr. Mahalingam College of Engineering and Technology, Pollachi - 642001, India. Email: hod_eee@drmcet.ac.in
Abstract - The ElectroCardioGram (ECG) wave reveals the electrical activity of the cardiac system. Small changes in the amplitude and duration of the ECG signal cannot be perceived precisely by the human eye; hence there is a need for a computer-aided diagnosis system. In the proposed method, a dual tree complex wavelet transform (DTCWT) based feature extraction approach is used for the classification of cardiac arrhythmias. The feature set consists of the complex wavelet coefficients extracted from the fourth and fifth scales of the DTCWT decomposition of a QRS complex signal, in association with four other features, AC power, kurtosis, skewness and energy, extracted from the QRS complex signal. A Support Vector Machine (SVM) is used to classify the ECG beats. The empirical results reveal that the DWT- and DTCWT-based feature extraction techniques classify ECG beats of the MIT-BIH Arrhythmia database.
Index terms - Discrete Wavelet Transform (DWT), Dual Tree Complex Wavelet Transform (DTCWT), Electro Cardio Gram (ECG), Support Vector Machine (SVM).
I. INTRODUCTION
The analysis of the ECG has been used extensively for diagnosing many cardiac diseases. Arrhythmias commonly occur due to abnormal heart beats, and these cardiac diseases can be diagnosed noninvasively using the ECG signal. Computer-aided heart arrhythmia identification and classification can play a significant role in the management of cardiovascular diseases, and an important step toward arrhythmia detection is the classification of heartbeats: the rhythm of the ECG signal can then be determined from the classification of consecutive heartbeats. Hence, there is a need for a computer-aided diagnosis system that can achieve higher recognition accuracy. Numerous techniques have been applied to analyse and classify ECG beats.
In [1], the authors classified PVC beats from normal and other abnormal beats using wavelet-transformed ECG waves with timing data as features and an ANN as the classifier; an overall accuracy of 95.16% was achieved with this technique. In [2], PCA is used as a tool for the classification of five types of ECG beats (N, LBBB, RBBB, PVC and APC); a comparative study is performed on three feature-extraction methodologies (principal components of segmented ECG beats, principal components of the error signals of a linear prediction model, and principal components of DWT coefficients). In [3], an accuracy of 94.64% is achieved using the approximation wavelet coefficients of the ECG signal in conjunction with three timing features and an RBF neural network as the classifier; here, classification was performed on five types of cardiac beats (N, LBBB, RBBB, PVC and APC). In [4], the authors used particle swarm optimization and a radial basis function neural network (RBFNN) to classify six types of ECG beats. In [5], an experimental pilot study is performed to examine the effect of a pulsed electromagnetic field (PEMF) at
The ECG signals of the MIT-BIH database are sampled at 360 samples per second; hence the frequency content of the ECG lies between 0 and 180 Hz. In this work, the wavelet coefficients are computed across the QRS complex, whose energy is maximum in the 8-20 Hz frequency range. The number of decomposition levels is limited to 5, beyond which the sub-bands are dominated by baseline wander.
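At a 360 Hz sampling rate each decomposition level halves the analysed band, so the detail-band frequency ranges follow directly; a minimal sketch showing why the fourth and fifth levels capture the 8-20 Hz QRS energy:

```python
def detail_bands(fs, levels):
    """Frequency band (low, high) of the detail coefficients at each DWT level.

    Level d covers roughly [fs / 2^(d+1), fs / 2^d] under the ideal
    half-band model of the dyadic filter bank.
    """
    return [(fs / 2 ** (d + 1), fs / 2 ** d) for d in range(1, levels + 1)]

for level, (low, high) in enumerate(detail_bands(360, 5), start=1):
    print(f"level {level}: {low:.3f}-{high:.3f} Hz")
# level 4 covers 11.25-22.5 Hz and level 5 covers 5.625-11.25 Hz,
# together spanning the 8-20 Hz range where QRS energy is concentrated
```

Below level 5 the bands fall under about 5.6 Hz, where baseline drift rather than QRS content dominates, which motivates stopping the decomposition there.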
V. SUPPORT VECTOR MACHINE
The Support Vector Machine (SVM) was introduced by Vapnik. SVMs are a relatively recent learning method used for binary classification; the basic idea is to find a hyperplane that separates the d-dimensional data perfectly into its two classes. The SVM is a supervised classification method: a set of known objects, called the training set, is given, and each object of the training set consists of a feature vector and an associated class value. Based on the training data, the learning algorithm derives a decision function to classify unknown input data. The architecture of the SVM is shown in Fig. V.
Consider examples (x_i, y_i), i = 1, ..., l, where each example has d inputs (x_i ∈ R^d) and a class label taking one of two values (y_i ∈ {-1, 1}). All hyperplanes in R^d are parameterized by a vector w and a constant b through the equation

w · x + b = 0

where w is the vector orthogonal to the hyperplane. Given such a hyperplane (w, b) that separates the data, classification is performed by the function

f(x) = sign(w · x + b)

which correctly classifies the training data. However, the hyperplane represented by (w, b) is equally expressed by all pairs (λw, λb) for λ ∈ R^+. The canonical hyperplane is defined to be the one for which the data points are separated from the hyperplane by a distance of at least 1, that is, one satisfying

y_i (x_i · w + b) ≥ 1 for all i.

To obtain the geometric distance from the hyperplane to a data point, we normalize by the magnitude of w. This distance is given by

d((w, b), x_i) = y_i (x_i · w + b) / ||w|| ≥ 1 / ||w||
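Maximising this geometric distance under the canonical constraint is equivalent to the standard hard-margin quadratic programme (a standard reformulation, stated here for completeness):

```latex
\min_{w,\,b}\ \frac{1}{2}\lVert w \rVert^{2}
\qquad \text{subject to} \qquad
y_{i}\,(x_{i} \cdot w + b) \ge 1, \quad i = 1, \dots, l
```

Since the closest canonical points lie at distance 1/||w|| from the hyperplane, minimising ||w|| maximises the margin.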
The hyperplane that maximizes the geometric distance to the closest data points is needed. This is
VII. CONCLUSION
In this paper, a technique is proposed for classifying ECG beats using a DTCWT-based feature set. Four features, AC power, kurtosis, skewness and energy, extracted from the QRS complex of each cardiac cycle and concatenated with the features extracted from the fourth and fifth decomposition levels of the DTCWT, are used as the total feature set. The SVM is used as the classifier because of its ability to learn and generalize, its smaller training-set requirements, fast operation and ease of implementation. The major advantage of this classifier is that it finds the nonlinear surfaces separating the underlying patterns, which is generally considered an improvement over conventional methods, and complex class-distributed features can be easily mapped. The proposed method has shown a promising sensitivity of 50%, which indicates that this technique is a model for computer-aided diagnosis of cardiac arrhythmias. The performance of the proposed method is compared with DWT-based statistical features, and the proposed feature set achieves higher recognition accuracy than the DWT-based features. The proposed methodology can be used in telemedicine applications, arrhythmia monitoring systems, cardiac pacemakers, remote patient monitoring and intensive care units.
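The four statistical features named above can be computed directly from the QRS samples; a minimal sketch using the standard moment definitions (the exact estimators used in the paper may differ):

```python
import math

def beat_features(x):
    """AC power, kurtosis, skewness and energy of a QRS sample sequence.

    Assumes a non-constant sequence (the central moments need std > 0).
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n        # population variance
    std = math.sqrt(var)
    return {
        "ac_power": var,                              # power after removing DC
        "kurtosis": sum((v - mean) ** 4 for v in x) / (n * std ** 4),
        "skewness": sum((v - mean) ** 3 for v in x) / (n * std ** 3),
        "energy": sum(v * v for v in x),              # total signal energy
    }

f = beat_features([0.0, 0.2, 1.0, 0.2, 0.0])
print({k: round(v, 4) for k, v in f.items()})
```

Concatenating these four scalars with the DTCWT coefficients yields the per-beat feature vector fed to the SVM.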
REFERENCES
[1] Inan OT, Giovangrandi L, Kovacs GT. Robust neural network based classification of premature ventricular contractions using wavelet transform and timing interval features. IEEE Trans Biomed Eng 2006; 53(12): 2507-15.
[2] Korurek M, Dogan B. ECG beat classification using particle swarm optimization and radial basis function neural network. Expert Syst Appl 2010; 37: 7563-9.
[3] Yu SN, Chen YH. Electrocardiogram beat classification based on wavelet transformation and probabilistic neural network. Pattern Recogn 2007; 28: 1142-50.
[4] Thomas M, Das MK, Ari S. Classification of cardiac arrhythmias based on dual tree complex wavelet transform. In: Proceedings of the IEEE International Conference on Communication and Signal Processing (ICCSP 2014). 2014.
[5] Ubeyli ED. Statistics over features of ECG signals. Expert Syst Appl 2009; 36: 8758-67.
[6] Kadambe S, Srinivasan P. Adaptive wavelets for signal classification and compression. Int J Electron Commun (AEU) 2006; 60: 45-55.
[7] Martis RJ, Acharya UR, Ray AK. Application of principal component analysis to ECG signals for automated diagnosis of cardiac health. Expert Syst Appl 2012; 39: 11792-800.
[8] Chen G. Automatic EEG seizure detection using dual-tree complex wavelet-Fourier features. Expert Syst Appl 2014; 41: 2391-4.
[9] Hosseini HG, Reynolds KJ, Powers D. A multi-stage neural network classifier for ECG events. In: 23rd Int. Conf. IEEE EMBS. 2001. pp. 1672-5.
[10] Chazal PD, Dwyer MO, Reilly RB. Automatic classification of heartbeats using ECG morphology and heartbeat interval features. IEEE Trans Biomed Eng 2004; 51(7).
2 M.E., Department of Information Technology, Kongu Engineering College, Perundurai, Erode, Tamil Nadu 638052, India.
Abstract - Grids enable large-scale coordinated and collaborative resource sharing. Grid resources are owned and managed by multiple organizations for solving scientific and engineering problems that require large amounts of computational resources. Scheduling tasks onto distributed heterogeneous grid resources belongs to the class of NP-complete problems. To achieve high performance in a heterogeneous grid environment, an efficient mapping of tasks to appropriate resources is essential, and the order in which tasks are scheduled to resources is a critical criterion, as it determines the resulting makespan. This paper proposes a heuristic scheduling technique, the QoS Guided Prominent Value Tasks Scheduling Algorithm, that determines the order in which tasks are to be scheduled to the appropriate resources so as to optimize the completion time of the tasks. The comparison study shows that the proposed algorithm produces an efficient resource-to-task mapping and provides overall optimal performance with reduced makespan. The experimental results reveal that the order of the mapping heuristic strategy depends on parameters such as (a) the QoS value, (b) the Prominent Value and (c) the execution time of the tasks.
Index terms - Task Scheduling, Heterogeneous, QoS, NP-Complete
I. INTRODUCTION
Emerging trends in network technology have enabled the interconnection of diverse sets of geographically distributed heterogeneous resources that support the execution of computationally intensive applications. The high performance of grid applications can be achieved through an efficient scheduling strategy, the key being the efficient mapping of the meta-task to the available computational resources. The fundamental criterion for obtaining optimal task scheduling is a reduced makespan [3,9]. A meta-task is a collection of independent, non-communicating tasks; the makespan is the overall completion time of all the computational tasks. The problem of optimally mapping computational tasks to a diverse set of geographically distributed heterogeneous grid resources has been shown to be NP-complete [2,4]. The grid scheduler needs to consider the task and QoS constraints to identify a better mapping between the tasks and the grid resources. The proposed QoS Guided Prominent Value Tasks Scheduling Algorithm classifies the tasks, based on their QoS requirements, into high QoS tasks and low QoS tasks; the grid resources, based on the task constraints, are classified into high QoS provision resources and low QoS provision resources. The algorithm then performs the mapping between tasks and grid resources by computing a Prominent Value (PV) for each task. The tasks are ordered into the Prominent Value Set (PVS) from the minimum to the highest prominent value. The proposed algorithm achieves optimal scheduling with a reduced makespan compared to the Min-min heuristic scheduling algorithm.
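The Min-min heuristic used as the baseline repeatedly commits the task whose earliest completion time over all capable resources is smallest; a minimal sketch of the conventional formulation (the ETC layout and ready-time bookkeeping are standard, not code from the paper):

```python
def min_min(etc):
    """Min-min scheduling: etc[t][r] = execution time of task t on resource r.

    Entries of None mark resources that cannot run the task.
    Returns (schedule, makespan); schedule maps task index -> resource index.
    """
    tasks = set(range(len(etc)))
    ready = [0.0] * len(etc[0])          # ready time of each resource
    schedule = {}
    while tasks:
        # over all unscheduled tasks, find the minimum completion time
        ct, t, r = min((ready[r] + etc[t][r], t, r)
                       for t in tasks for r in range(len(ready))
                       if etc[t][r] is not None)
        schedule[t] = r                  # commit the winning (task, resource)
        ready[r] = ct
        tasks.remove(t)
    return schedule, max(ready)

sched, makespan = min_min([[3.0, 5.0], [4.0, 1.0], [None, 2.0]])
print(sched, makespan)
```

Any competing heuristic, including the proposed one, can be evaluated on the same ETC matrices by comparing the returned makespans.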
The pseudocode for finding Credit Point for each task is given below:
B. QoS Value
A simple example is given below to illustrate the execution of the proposed QoS Guided Prominent Value Tasks Scheduling Algorithm and to compare its efficiency with the existing Min-min heuristic scheduling algorithm.
Table 1 shows the execution times of 9 tasks on 5 resources. An entry X in the table denotes that the resource lacks the capability to execute that particular task due to its low QoS provision.
The maximum value in the given ETC matrix is METC = 17.3, giving TV = 17.3/2 = 8.7, TV = 17.3/3 = 5.8, TV = 14.5 and TV = 23.2.
The Credit Point for each task is computed and is shown in Table 2.
Table 2 Credit Point for each Task
The task t1 can be executed on only one resource, R5, and is therefore called a high QoS task; it is given the low QoS value 1. Next, the task t2 can be executed on two resources, R4 and R5, and is given the QoS value 2, and so on. The tasks t8 and t9 are called low QoS tasks, since they can be executed on all resources, and are given high QoS values. The task t9 has the maximum execution time and is given the higher priority: the credit for task t9 is 5 and for task t8 is 6. The QoS value and QoS Credit Value for each task are computed and shown in Table 3, along with the Prominent Value for each task ti.
The tasks are ordered in the Prominent Value Set (PVS) in the ascending order of PVi .
PVS = {t1,t2,t4,t3,t5,t8,t9,t6,t7}
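The ordering step can be illustrated from what the worked example specifies: a task's QoS value counts the resources able to run it, and tasks enter the PVS in ascending order of prominent value. Since the exact prominent-value formula is not reproduced here, the sketch below orders by QoS value with minimum execution time as a hypothetical tie-breaker:

```python
def prominent_order(etc):
    """Order task indices by ascending QoS value (number of capable resources).

    etc[t][r] is the execution time of task t on resource r, or None when
    resource r lacks the QoS provision to run t. The tie-break on minimum
    execution time is an illustrative assumption, not the paper's formula.
    """
    def key(t):
        capable = sum(1 for cell in etc[t] if cell is not None)
        fastest = min(cell for cell in etc[t] if cell is not None)
        return (capable, fastest)
    return sorted(range(len(etc)), key=key)

# t0 runs on one resource, t1 on two, t2 on all three
etc = [[None, None, 4.0], [None, 3.0, 5.0], [2.0, 6.0, 1.0]]
print(prominent_order(etc))  # [0, 1, 2]
```

Scheduling the most constrained (high QoS) tasks first prevents them from being crowded out of the few resources that can serve them.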
The high QoS tasks are scheduled to the resources that have low QoS provision, and the low QoS tasks are scheduled to the resources that have high QoS provision. The tasks are scheduled in the order given by the PVS.
Table 4 Comparison between the existing and proposed algorithms in makespan and task schedule order.
IV.SIMULATION AND RESULTS
The proposed approach is evaluated with user-defined numbers of resources and tasks; the execution times of all the tasks are considered for efficient scheduling. The execution times of the tasks on all the resources are generated using the ETC matrix, a benchmark model designed by Braun et al. [1,4,7]; each row of the ETC matrix gives the execution time of one task on all the resources.
Figure 1 shows the experimental results for ETC matrices of 50 tasks × 5 resources, 100 tasks × 10 resources, 150 tasks × 10 resources, 200 tasks × 10 resources and 250 tasks × 10 resources. They indicate that the proposed QoS Guided Prominent Value Tasks Scheduling Algorithm performs well and outperforms the Min-min heuristic scheduling algorithm, giving a reduced makespan in all five cases.
V.CONCLUSION
Task scheduling is an NP-complete problem in a distributed grid environment. This paper proposed a novel heuristic scheduling strategy that considers the QoS factor in scheduling tasks onto resources. The proposed QoS Guided Prominent Value Tasks Scheduling Algorithm and the Min-min heuristic scheduling algorithm are examined using the benchmark simulation model of Braun et al. [1,4,7]. The presented experimental results show that the proposed heuristic scheduling strategy yields a significant improvement in performance in terms of reduced makespan and outperforms the Min-min heuristic scheduling algorithm.
REFERENCES
[1]
[2] I. Foster and C. Kesselman, The Grid: Blueprint for a Future Computing Infrastructure, Morgan Kaufmann Publishers, USA, 1998.
[3] E. U. Munir, J. Li, and S. Shi, QoS Sufferage Heuristic for Independent Task Scheduling in Grid, Information Technology Journal 6(8), pp. 1166-1170, 2007.
[4] T. D. Braun, H. J. Siegel, N. Beck, A Taxonomy for Describing Matching and Scheduling Heuristics for Mixed-machine Heterogeneous Computing Systems, IEEE Workshop on Advances in Parallel and Distributed Systems, West Lafayette, pp. 330-335, 1998.
[5] R. Armstrong, D. Hensgen, and T. Kidd, The Relative Performance of Various Mapping Algorithms is Independent of Sizable Variances in Run-time Predictions, 7th IEEE Heterogeneous Computing Workshop (HCW '98), pp. 79-87, 1998.
[6] R. F. Freund and H. J. Siegel, Heterogeneous Processing, IEEE Computer, 26(6), pp. 13-17, 1993.
[7] T. D. Braun, H. J. Siegel, and N. Beck, A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems, Journal of Parallel and Distributed Computing 61, pp. 810-837, 2001.
[8]
[9]
[10]
[11] G. K. Kamalam and V. Murali Bhaskaran, New Enhanced Heuristic Min-Mean Scheduling Algorithm for Scheduling Meta-Tasks on Heterogeneous Grid Environment, European Journal of Scientific Research, Vol. 70, No. 3, pp. 423-430, 2012.
[12] H. Baghban and A. M. Rahmani, A Heuristic on Job Scheduling in Grid Computing Environment, Proceedings of the Seventh IEEE International Conference on Grid and Cooperative Computing, pp. 141-146, 2008.
[13] F. Dong, J. Luo, L. Gao, and L. Ge, A Grid Task Scheduling Algorithm based on QoS Priority Grouping, Proceedings of the 5th International Conference on Grid and Cooperative Computing, pp. 58-61, 2006.
[14] X. He, X. H. Sun, and G. von Laszewski, QoS Guided Min-min Heuristic for Grid Task Scheduling, Journal of Computer Science and Technology (Special Issue on Grid Computing), pp. 442-451, 2003.
[15] M. Singh and P. K. Suri, Analysis of service, challenges and performance of a grid, International Journal of Computer Science and Network Security, pp. 84-88, 2007.
[16] H. Ligang and A. Stephen, Dynamic Scheduling of Parallel Jobs with QoS Demands in Multiclusters and Grids, Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing, pp. 402-409, 2004.
I. INTRODUCTION
Recognition of sign languages is one of the major concerns of the international deaf community. However, contrary to popular belief, sign language is not universal: wherever communities of deaf people exist, sign languages develop, but as with spoken languages, these vary from region to region. There is no unique way in which such recognition can be formalized; every country has its own interpretation. Sign languages are not based on the spoken language of the country of origin; in fact, their complex spatial grammars are markedly different. Sign language recognition is a multidisciplinary research area involving pattern recognition, computer vision, natural language processing and psychology. It is a comprehensive problem because of the complexity of the visual analysis of hand gestures and the highly structured nature of sign languages. A functioning sign language recognition system can give a mute person the opportunity to communicate with non-signing people without an interpreter, and it can be used to generate speech or text, making the mute more independent. Unfortunately, no system with these capabilities has existed so far; all research to date has been limited to small-scale systems capable of recognizing only a minimal subset of a full sign language. The most complicated part of dynamic hand gesture recognition is sign language recognition, as both local and global motions of the hand carry necessary information in addition to temporal information. To recognize even the simplest hand gesture, the hand must first be detected in the image. Once the hand is detected, a complete hand gesture recognition system must be able to extract the hand shape, the hand motion and the spatial position of the hand. Moreover, the hand movement for a particular sign follows certain temporal properties.
Hand gestures are a form of communication that is multifaceted in a number of ways, and they provide an attractive alternative to the cumbersome interface devices used for human-computer interaction (HCI); integrating the use of the hands into HCI would therefore be of great benefit to users. Smart environments have recently become popular as a means of improving our quality of life, and gesture recognition capabilities implemented in embedded systems are very beneficial in such environments for providing various apparatuses with efficient HCI. Real-time processing is an essential feature for using hand signs in HCI. Since real-time recognition incurs very high computational costs, a powerful full-specification PC is needed to implement recognition systems in software; however, such systems are physically large and consume large amounts of power, which is not suitable for embedded systems. Improvements in field programmable gate arrays (FPGAs) have driven a huge increase in their use in space-, weight- and power-constrained embedded computing systems. FPGA implementations raise the possibility of portable systems that can recognize hand gestures without bulky PCs while decreasing the response time thanks to their computing power. Hand gestures are generally either hand postures or dynamic hand gestures. Hand postures are static
The entries L_xx(P, σ), L_xy(P, σ), L_yx(P, σ) and L_yy(P, σ) of the Hessian matrix are the convolutions of the Gaussian second-order derivatives with the image I at pixel P. Keypoints are found using the so-called Fast-Hessian detector, which is based on an approximation of the Hessian matrix at a given image point. The responses to Haar wavelets are used for orientation assignment before the keypoint descriptor is formed from the wavelet responses in a certain surrounding of the keypoint. The descriptor vector has a length of 64 floating point numbers but can be extended to a length of 128. As this did not significantly improve the results in our experiments but rather increased the computational cost, all results refer to the standard descriptor length of 64.
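The Fast-Hessian detector scores each pixel by an approximated Hessian determinant; a toy sketch using central finite differences in place of SURF's box-filtered Gaussian derivatives (0.9 is SURF's published weighting of the mixed term; the image is illustrative):

```python
def hessian_response(img, x, y):
    """Approximate det(H) at pixel (x, y) via second-order finite differences.

    Dxx, Dyy, Dxy stand in for SURF's box-filtered Gaussian derivatives;
    0.9 is the weight SURF uses to balance Dxy against Dxx*Dyy.
    """
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return dxx * dyy - (0.9 * dxy) ** 2

# A single bright spot gives a strong positive (blob-like) response
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(hessian_response(img, 1, 1))
```

Pixels whose response exceeds a threshold and is a local maximum across scales become keypoints, around which the 64-value descriptor is then built.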
b. Pre-Processing Result
Fig 4.1 shows the pre-processing stage, which includes grayscale conversion, histogram equalization and median filtering for a clearly visible output: the given input is converted into a grayscale image, histogram equalization is applied, and median filtering follows.
c. Database Creation Result
Fig 4.2 shows the database creation: the given image is stored in the database and later compared with the input image from the web camera attached to the system.
Fig 4.3 shows the detected output after the input image is compared with the database: the input image is stored with a user name, and when a matching input image is presented, the corresponding user name is reported as detected.
Fig 4.4 shows that when there is any change in the direction of the knuckle, the input image does not match the database and an error report is produced: cannot detect the knuckle print; show the knuckle print image.
Fig 4.5 shows the SURF and SIFT algorithms: after the input image is pre-processed and detection is verified, the proposed SURF and SIFT algorithms are applied and compared, showing that SURF is the faster and more efficient algorithm.
V.CONCLUSION
The novel SIFT algorithm is compared with the new algorithm, which requires many stages of pre-processing. We observed that all the considered approximate transforms perform very close to the ideal SURF; however, the proposed transform has lower computational complexity and is faster than all the other approximations under consideration. For security identification, knuckle authentication is a challenging process in terms of image quality improvement; the new proposed transform is the best algorithm for knuckle identification in terms of image selection and of image quality metrics such as CRR and hit rate. The optimum threshold values for the SIFT and SURF feature extractors, needed to achieve maximum CRRs of 96.36% and 99.69% respectively, are found to be 0.2 and 0.09 with a sampling step of 4. Future work includes the implementation of the hand sign technique in a VLSI hardware kit and approximate versions for various hand signs to provide a better authentication prototype.
REFERENCES
[1] Annamaria R., Balazs Tusor and Varkonyi-Koczy (2011), Human-Computer Interaction for Smart Environment Applications Using Fuzzy Hand Posture and Gesture Models, IEEE Transactions on Instrumentation and Measurement, vol. 60, pp. 5-15.
[2]
[3] David Zhang, Guangwei Gao, Jian Yang, Lei Zhang and Lin Zhang (2013), Reconstruction Based Finger-Knuckle-Print Verification With Score Level Adaptive Binary Fusion, IEEE Transactions on Image Processing, vol. 22, pp. 12-25.
[4] Douglas Chai and King N. Ngan (2002), "Face Segmentation Using Skin-Color Map in Videophone Applications", IEEE Transactions on Circuits and Systems for Video Technology, vol. 09, Issue no. 04.
[5] Feng-Cheng Huang, Ji-Wei Ker, Shi-Yu Huang and Yung-Chang Chen (2012), High-Performance SIFT Hardware Accelerator for Real-Time Image Feature Extraction, IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, pp. 3-15.
[6] Hanqing Lu, Jian Cheng, Kongqiao Wang and Yikai Fang (2007), A Real-Time Hand Gesture Recognition Method, IEEE International Conference on Multimedia and Expo, vol. 11, pp. 995-998.
[7] Hiroomi Hikawa and Keishi Kaida (2015), "Novel FPGA Implementation of Hand Sign Recognition System with SOM-Hebb Classifier", IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, Issue no. 01.
[8] Kannan, S. and Muthukumar, A. (2013), Finger knuckle print recognition with SIFT and k-means algorithm, ICTACT Journal on Image and Video Processing, vol. 03, Issue no. 03.
[9] Liu Yun and Zhang Peng (2009), "An Automatic Hand Gesture Recognition System Based on Viola-Jones Method and SVMs", IEEE Transactions on Computer Science and Engineering, vol. 02, pp. 72-76.
[10] Nasser H. Dardas and Nicolas D. Georganas (2011), Real-Time Hand Gesture Detection and Recognition Using Bag-of-Features and Support Vector Machine Techniques, IEEE Transactions on Instrumentation and Measurement, vol. 60, pp. 11-27.
[11] http://fourier.eng.hmc.edu/e161/lectures/gradient/node9.html
* Professor, Electrical and Electronics Engineering, M. Kumarasamy College of Engineering, Karur, Tamilnadu
Abstract: An electrical power system is a complex network consisting of numerous generators, transformers, transmission lines and a variety of loads. The power demand should be equal to the power generated; only then is the system stable. Transient stability is a complicated problem for power system transmission lines in maintaining synchronism. Sometimes the total power losses, cascading outages of transmission lines, etc., increase due to the power loadability of the lines and the system collapses. In order to avoid this, the power system is analyzed with various optimization methods. The New Voltage Stability Index is a method used to determine the critical line among the transmission lines. FACTS devices can play an important role in improving transient stability in a power system. The UPFC is one of the FACTS controllers that can control both the real and the reactive power of transmission lines. The UPFC needs to be located at the optimal location in the system, and this effective control strategy is achieved by a genetic algorithm. A PID controller is employed for controlling the UPFC, and the results are obtained by placing the UPFC at the optimal location on the transmission lines. The transient stability of a seven bus system is studied under an external disturbance in the MATLAB/SIMULINK environment.
Index Terms: Flexible AC Transmission System (FACTS), Unified Power Flow Controller (UPFC), New Voltage Stability Index (NVSI), Genetic Algorithm (GA)
I. INTRODUCTION
An electric power system is a network of electrical components used to supply and transfer power to the load to satisfy the required demand. The transmission lines become overloaded when the demand on the lines increases, due to which the system becomes complex, and this leads to serious stability issues. The most important stability issue is transient stability [1]. The stability problems of a power system can be effectively remedied by the use of FACTS devices. A FACTS device is employed here in order to transfer the excess power from the generator to the system.
FACTS devices are the most versatile devices [2] used for controlling reactive and real power in transmission lines for economic and flexible operation. The main objectives of FACTS devices are to:
Increase power transfer capability
Control power flow in specified routes
Realize overall system optimization control
FACTS devices include the Static Var Compensator (SVC), Static Synchronous Compensator (STATCOM), Thyristor Controlled Series Capacitor (TCSC), Thyristor Controlled Phase Shifter (TCPS), Static Synchronous Series Compensator (SSSC) and Unified Power Flow Controller (UPFC). The UPFC is used for voltage control applications: it helps to maintain a bus voltage at a desired value during load variations, and it can be made to generate or absorb reactive power by adjusting the firing angle. The major problems with FACTS controllers are identifying [2] the location for installation and the amount of voltage and phase angle to be injected. Stochastic algorithms can be used for locating the FACTS devices on the transmission lines. There are several stochastic algorithms, such as Genetic Algorithms, Differential Evolution, Tabu Search, Simulated Annealing, Ant Colony Optimization, Particle Swarm Optimization and the Bees Algorithm; each has its own advantages. Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and the Bees Algorithm (BA) are a few efficient and well-known stochastic algorithms.
II. UPFC OPERATION AND MATHEMATICAL MODEL
The Unified Power Flow Controller is a combination of parallel and series branches, each consisting of a voltage-source converter with a coupling transformer.
The DC circuit allows active power exchange between the shunt and series transformers to control the phase shift of the series voltage; this setup is shown in Figure 1. The DC link in the UPFC is used to filter the ripple content. Fig 2 shows the UPFC equivalent circuit connected between bus i and bus j; the voltage magnitudes and angles at buses i and j are V_i, V_j, δ_i and δ_j respectively. The main advantage of the UPFC is its ability to control the active and reactive power flows in the transmission line.
The real power (P_ij) and reactive power (Q_ij) flowing from bus i to bus j can be written as follows.
The real power (P_ji) and reactive power flowing from bus j to bus i can be written similarly.
The operating constraint of the UPFC (the active power exchange via the DC link) is
P_E = P_sh + P_ser = 0 ------------- (5)
where P_E is the active power exchange, P_sh is the real power of the shunt transformer and P_ser is the real power of the series transformer. The value of P_sh is found from Re(V_sh I_sh^*) and the value of P_ser from Re(-V_ser I_ser^*). The UPFC absorbs and generates reactive power by adjusting the firing angle, so in the operating constraint the sum of the real powers of the series and shunt transformers is made equal to zero.
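The DC-link constraint above can be checked numerically. A minimal sketch, using hypothetical per-unit phasors (not values from the paper) for the shunt and series converters:

```python
def real_power(v, i):
    """Active power P = Re(V * conj(I)) for complex phasors (per unit)."""
    return (v * i.conjugate()).real

# Hypothetical per-unit phasors chosen so the constraint is satisfied.
v_sh, i_sh = 1.0 + 0j, 0.2 + 0.1j       # shunt converter voltage/current
v_ser, i_ser = 0.1 + 0j, 2.0 - 0.5j     # series converter voltage/current

p_sh = real_power(v_sh, i_sh)           # P_sh = Re(V_sh I_sh*)
p_ser = -real_power(v_ser, i_ser)       # P_ser = Re(-V_ser I_ser*)
pe = p_sh + p_ser                       # ~0 for a lossless DC link (eq. 5)
```

In a load-flow study these phasors would come from the solved network state rather than being fixed by hand.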
III. LOCATION CRITERIA OF UPFC
The location of FACTS devices is based on several criteria, which include sensitivity-based approaches, artificial intelligence methods, the point-of-voltage-collapse method, frequency response and stability
Taking the suffix i as the sending-end bus and j as the receiving-end bus, the NVSI can be defined as
NVSI = 2X sqrt(P_2^2 + Q_2^2) / (2 Q_2 X - V_1^2) ------------- (13)
where NVSI is the new voltage stability index, X is the line reactance, V_1 is the voltage magnitude of bus 1, and P_2 and Q_2 are the real and reactive power at bus 2. If the value given by equation (13) remains at or below 1.00, the respective transmission line is considered stable.
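A minimal sketch of the index computation, assuming the form commonly given in the NVSI literature with the denominator taken in magnitude (the exact rendering of equation (13) is not reproduced in the text); all numbers are illustrative per-unit values:

```python
import math

def nvsi(x, p2, q2, v1):
    """New Voltage Stability Index for a line of reactance x (p.u.),
    receiving-end powers p2, q2 (p.u.) and sending-end voltage v1 (p.u.).
    Values approaching 1.0 indicate the line is near voltage instability."""
    return 2 * x * math.sqrt(p2 ** 2 + q2 ** 2) / abs(2 * q2 * x - v1 ** 2)

# Illustrative line: X = 0.1 p.u., P2 = 0.5, Q2 = 0.2, V1 = 1.0
index = nvsi(0.1, 0.5, 0.2, 1.0)
```

Ranking all lines by this index identifies the critical line for UPFC placement.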
IV. GENETIC ALGORITHM
The genetic algorithm, introduced by John Holland and based on Darwin's evolutionary theories [9], is used to solve complex optimization problems. It is a global search technique based on the mechanisms of natural selection and genetics, and it can search several possible solutions simultaneously. The GA starts with a random generation of the initial population. Figure 4 shows the flow chart of the genetic algorithm. The first step is to initialize the variables that need to
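The GA loop described above (random initial population, selection, crossover, mutation) can be sketched as follows; the fitness function here is a toy placeholder for the load-flow-based placement objective, and all parameters are illustrative:

```python
import random

random.seed(1)

def fitness(bits):
    # Toy objective (count of ones); in the actual study this would be the
    # power-loss/stability objective evaluated by a load-flow run.
    return sum(bits)

def genetic_algorithm(n_bits=16, pop_size=20, generations=40,
                      p_cross=0.9, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                     # bit-flip mutation
                nxt.append([b ^ (random.random() < p_mut) for b in c])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm()
```

The chromosome here is a plain bit string; for UPFC placement it would encode the candidate line index and device settings.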
------------- (15)
------------- (16)
In the PID controller the proportional, integral and derivative actions operate on the same error signal, the derivative term being updated from the data supplied by the integral function. Using a PID controller gives the system the following advantages:
Fast operation
Low offset
Low overshoot
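A discrete-time PID update of the kind used to control the UPFC can be sketched as below; the gains and the first-order plant are illustrative assumptions, not values from the paper:

```python
class PID:
    """Positional discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a crude first-order plant (dy/dt = u - y) toward a set-point of 1.0.
pid, y = PID(kp=2.0, ki=5.0, kd=0.01, dt=0.001), 0.0
for _ in range(5000):          # 5 s of simulated time
    u = pid.update(1.0 - y)
    y += (u - y) * 0.001
```

The integral term removes the steady-state offset and the derivative term limits overshoot, which is what the bullet list above refers to.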
VII. SYSTEM DESCRIPTION
A three-phase, seven-bus system is taken into account to show the performance of the UPFC with
In the Simulink model, each bus in the system consists of a phase-locked loop, a phase sequence analyzer and three-phase instantaneous active and reactive power measurement. The phase-locked loop (PLL) is used to synchronize to a variable-frequency signal. The terminator is an important block used to terminate output signals within the simulation time. Here an RLC series branch is used. Two generator buses are used as the sources of the system.
Fig 12: Bus system with UPFC (with external disturbance applied)
Figures 5 and 6 show the real and reactive power and the magnitude and phase angle of the source bus, whereas the remaining five buses are used as load buses, whose corresponding graphs are shown in Figures 7 and 8. The frequencies of these buses are compared with each other in Figure 9. First the bus system is operated without any fault; then, in order to evaluate the improvement in transient stability, an external disturbance is applied to the system. The system is simulated for a duration of 5 seconds. The genetic algorithm is an important technique used for solving the optimization problem. The UPFC detects the disturbance that occurs in the system and counteracts it, due to which the transient stability of the system is improved. The external disturbance exists in the system from 0.1 second to 0.6 second. The fault is cleared after 2 seconds and the stability of the system is recovered at about 3 seconds, after which it is maintained for as long as the system operates. The optimal location of the UPFC is determined by the New Voltage Stability Index, which is maintained at 1.00 to keep the system stable. When compared to the existing system, it
[3]
[4] O.P. Dwivedi, J.G. Singh and S.N. Singh, "Simulation and Analysis of Unified Power Flow Controller Using SIMULINK", National Power System Conference, NPSC, 2004.
[5] K.R. Padiyar and A.M. Kulkarni, "Control Design and Simulation of Unified Power Flow Controller", IEEE Transactions on Power Delivery, Vol. 13, No. 4, October 1998.
[6] M. Noroozian, L. Angquist, M. Ghandari and G. Anderson, "Use of UPFC for optimal power flow control", IEEE Trans. on Power Systems, vol. 12, no. 4, 1997, pp. 1629-1634.
[7]
[8] N. Tambey and M.L. Kothari, "Damping of power system oscillations with unified power flow controller (UPFC)", IEE Proceedings, Vol. 150, No. 2, March 2003.
[9] T.K. Mok, H. Liu, Y. Ni, F.F. Wu and R. Hui, "Tuning the fuzzy damping controller for UPFC through genetic algorithm with comparison to the gradient descent training", Electric Power Systems Research, Vol. 27, pp. 275-283, 2005.
[10] S. Mishra, P.K. Dash, P.K. Hota and M. Tripathy, "Genetically optimized neuro-fuzzy IPFC for damping modal oscillations of power system", IEEE Trans. Power Systems, vol. 17, pp. 1140-1147, 2002.
[11] C. Houck, J. Joines and M. Kay, "A genetic algorithm for function optimization: A MATLAB implementation", NCSU-IE, TR 95-09, 1995.
[12] S. Panda and R.N. Patel, "Optimal location of shunt FACTS controllers for transient stability improvement employing genetic algorithm", Electric Power Components and Systems, Vol. 35, No. 2, pp. 189-203, 2007.
I. INTRODUCTION
Power electronic devices are used for AC power control in the power system. AC power flows to industrial and domestic applications in the form of adjustable speed drives (ASDs), furnaces, computer appliances, etc. Harmonic injection and reactive power cause disturbance to the customer and interference in communication lines, with low system efficiency and poor power factor. Many researchers have provided solutions for harmonic and reactive power compensation [1], and specific limits on current harmonics and voltage notches have been imposed. Passive filters are used for eliminating lower-order harmonics, and capacitors are used for compensating the reactive power demand of the system. They have drawbacks such as fixed compensation and resonance problems, and the fundamental-frequency reactive power may affect the system voltage regulation. The increased harmonic pollution has led to the development of active filters. The active filter rating depends on the harmonics and reactive power to be compensated. Generally, active filters require a high current rating and a high bandwidth, which does not constitute a cost-effective solution for harmonic mitigation.
A hybrid filter, as a combination of active and passive filters, provides effective harmonic and reactive power compensation, overcoming the technical disadvantages of active and passive filters used alone [2]. A hybrid filter with a shunt combination of active and passive filters was proposed in [3]. The hybrid filter topology with a parallel combination of active and passive filters is used to reduce the bandwidth requirement of the active filter.
Many control strategies, starting from instantaneous reactive power compensation, have evolved since the inception of shunt active filters. One control strategy, based on the DC-link voltage, was discussed in [4], [5]. In this method the shunt active filter compensates the load-side harmonics and reactive power, thereby making the load appear linear. Supply-side distortions, however, are imposed on the line current. Even though this method meets the reactive power requirement of the load, when supply voltage distortion occurs it is imposed on the line current as well, so the line current remains non-sinusoidal even after compensation.
In this paper the instantaneous reactive power algorithm has been used for shunt active filter with
The generated pulse width modulation (PWM) signals are required for the operation of the control circuits. The control strategy proposed here aims to make the compensated line current sinusoidal and balanced. The objective therefore includes a sinusoidal reference current calculation and a current control technique for generating the switching pulses to the VSI so that the line current is sinusoidal and balanced. The phase and frequency of the desired line current are obtained from the supply voltage, and the magnitude of the reference line current is obtained by regulating the DC bus voltage of the VSI. The DC-link capacitance of the VSI is used as an energy storage element in the system. For a lossless active filter in the steady state, the real power drawn from the supply should be equal to the real power demanded by the load, and no additional real power passes through the power converter into the capacitor. Therefore the averaged DC-capacitor voltage can be maintained at the reference voltage level. For a balanced line current under an unbalanced source voltage, the proposed method uses one phase of the source voltage as the phase reference together with a 120° shifter. By this method the harmonics present in the source voltage are reflected in the reference line current. Therefore a modified algorithm, which preprocesses the source voltage template, is proposed to make the compensated line current sinusoidal.
The source voltages are transformed into the d-q reference frame using Park's transformation. After transformation, the nth-order positive-sequence component becomes the (n-1)th-order component and the nth-order negative-sequence component becomes the (n+1)th-order component in the d-q reference frame. The fundamental component of the source voltage becomes a DC component in the d-q reference frame, which can be extracted using a low-pass filter. This filtered DC value, after inverse transformation into three-phase components, can be used as the unit templates for the reference current calculation. This modification thus filters out the effect of source-side distortion from the line current.
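The frequency-shifting property described above can be checked numerically: a balanced fundamental maps to a pure DC value in the d-q frame. The sketch below assumes the amplitude-invariant form of Park's transformation and a 50 Hz system (illustrative choices):

```python
import math

def abc_to_dq(va, vb, vc, theta):
    """Park transformation (amplitude-invariant form assumed)."""
    d = (2 / 3) * (va * math.cos(theta)
                   + vb * math.cos(theta - 2 * math.pi / 3)
                   + vc * math.cos(theta + 2 * math.pi / 3))
    q = -(2 / 3) * (va * math.sin(theta)
                    + vb * math.sin(theta - 2 * math.pi / 3)
                    + vc * math.sin(theta + 2 * math.pi / 3))
    return d, q

w = 2 * math.pi * 50            # 50 Hz fundamental
samples = []
for k in range(2000):           # one cycle at 100 kHz sampling
    t = k / 100000.0
    va = math.cos(w * t)
    vb = math.cos(w * t - 2 * math.pi / 3)
    vc = math.cos(w * t + 2 * math.pi / 3)
    samples.append(abc_to_dq(va, vb, vc, w * t))
# A balanced fundamental yields constant d (DC) and zero q at every sample,
# so a low-pass filter in the d-q frame isolates exactly this component.
```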
In order to drive the line currents to trace the reference currents, an effective current control technique has to be used for generating the switching pulses of the VSI. Hysteresis control is implemented here for this purpose. In this control the line currents are sensed and compared with the reference currents, and the error in each phase is sent to the hysteresis controller: a switching pulse is generated to the upper switch of the VSI if the error is less than the lower hysteresis band, and a switching pulse is generated to the lower switch of the VSI if the error is more than the upper hysteresis band.
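A sketch of the two-level hysteresis comparator just described; the band, inverter voltage and one-phase load model are illustrative assumptions, and sign conventions for which switch fires vary between implementations (here the upper switch turns on when the measured current falls below the reference):

```python
def hysteresis_gate(error, band, state):
    """Two-level hysteresis current controller for one inverter leg.
    state=1 gates the upper switch, state=0 the lower switch.
    error = reference current - measured line current."""
    if error > band:        # current too low -> upper switch on
        return 1
    if error < -band:       # current too high -> lower switch on
        return 0
    return state            # inside the band: hold previous state

# Track a 1.0 A reference by integrating a crude inverter/load model.
i, state, band, dt = 0.0, 0, 0.1, 1e-4
trace = []
for _ in range(3000):
    state = hysteresis_gate(1.0 - i, band, state)
    v = 10.0 if state else -10.0   # inverter leg output voltage
    i += v * dt                    # di/dt = v/L with L = 1 H
    trace.append(i)
```

The current chatters inside the band around the reference; narrowing the band raises the switching frequency.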
III. SYSTEM CONFIGURATION
Fig 1.2 Block diagram for parallel combination of shunt active and passive filters
Fig 1.2 shows a three-phase source and a non-linear load, with the shunt passive filter and the shunt active filter connected in shunt with the line. The passive filter provides cost-effective mitigation of harmonics and reactive power from the supply; the active filter can effectively compensate the harmonics and meet the reactive power demand. The hybrid filter with a parallel combination of active and passive filters reduces the filter bandwidth requirement of the active filter. The active filter in this topology is a voltage source inverter (VSI) with a DC-link capacitance (Cdc) on the DC side and a filter inductor (Lc) to attenuate the ripple of the converter current caused by the switching of the converter. Two single-tuned low-pass passive filters tuned to the 5th and 7th harmonics, along with a high-pass passive filter tuned to the 11th harmonic, are used with the active filter to make the compensated line current sinusoidal and balanced.
Fig 1.3 shows the controller block diagram. The three-phase supply from the grid is converted into the dq0 frame; eliminating the 0 component leaves dq alone. Comparing the actual dq values with the dq reference values gives the dq error, and the PID controller output depends on this error. The dq0 values are then converted back to abc and given as input to the pulse generator; by varying the amplitude, six pulses are generated and fed to the three-phase inverter.
IV.SIMULATION RESULTS
The simulation results show phase a of the source voltage (Vs), the load current (Iload), the line current (Is) and the compensating current (Ic). A three-phase circuit breaker is implemented in the circuit design; the circuit breaker is used in series with the three-phase element to be switched. The opening and closing times can be controlled either from an external Simulink signal or from an internal control timer. If set in external control mode, the control signal connected to the input must be either 0 or 1: 0 to open the breaker and 1 to close it. If the three-phase breaker block is set in internal control mode, the switching times are specified in the dialog box of the block. The three individual breakers are controlled with the same signal. When external switching-time mode is selected, a Simulink logical signal is used to control the breaker operation. The switch is initially open and the switching time is 0.1 s; before 0.1 s the simulation results show operation without the hybrid filter. The active filter parameters are Lc = 3.35 mH, Rc = 0.4 ohm, DC-link capacitance Cdc = 2200 µF and DC-link reference voltage Vdc,ref = 680 V for the parallel hybrid filter.
Fig 1.3 shows the simulation results for the parallel combination of passive and active filters. The source voltage (Vs) has a THD of 25.23% and the load current (Iload) has a THD of 30.65%; the line current (Is) after compensation has a THD of 11.41%. The peak value of the supply current is found to be less than the peak value of the load current, which shows that the supply current carries only the active component of the load current and the active component of the compensating current. The DC-link voltage of the VSI is maintained at 680 V. The passive filter components partly compensate the 5th, 7th and 11th harmonics. Since this passive filter is connected to a supply having a 3rd-harmonic component, it draws a 3rd-harmonic component, adding an extra burden to the active filter. The compensating current from the active filter has a fundamental of 0.7887 A with a THD of 478.87%, which shows that the active filter operates at a reduced rating.
To drive the line currents to trace the reference currents, an effective current control technique has to be used for generating the switching pulses for the VSI; hysteresis control is implemented for this purpose. In this control the line currents are sensed and compared with the reference currents. This modification therefore filters out the effect of source-side distortions from the line current.
The hybrid filter topologies are simulated using MATLAB/SIMULINK and the results are compared under a non-ideal supply voltage with a 0.1 p.u. 3rd-harmonic negative-sequence component in the source voltage. The RMS value of the fundamental component is 230 V. The various design values of the passive filter are shown in Table 2.
V.CONCLUSION
The results show the usefulness of the hybrid filter topology for harmonic and reactive power compensation. The hybrid filter topology with a parallel combination of active and passive filters reduces the load-distortion bandwidth to be compensated by the active filter, thereby lowering the bandwidth required of the active filter. The passive filter performance may be affected by changes in the system parameters. This topology is beneficial mainly when the source voltage is sinusoidal, as the performance of passive filters improves when they are connected to a pure sinusoidal supply, which further reduces the burden on the active filter. The active filter performance is often degraded by distorted and unbalanced mains voltages. In this paper a new algorithm has been proposed to improve the active filter performance under non-ideal mains voltages. The control strategy used here is simple, effectively compensates the load-generated harmonics and nullifies the effect of source voltage harmonics in the line.
REFERENCES
[1] Bhim Singh, Kamal Al-Haddad and Ambrish Chandra, "A Review of Active Filters for Power Quality Improvement", IEEE, Vol. 46, No. 5, Oct. 1999.
[2] Bhim Singh and Vishal Verma, "An Indirect Current Control of Hybrid Power Filter for Varying Loads", IEEE, Vol. 21, No. 1, Jan. 2006.
[3] Adil M. Al-Zamil and David A. Torrey, "A Passive Series, Active Shunt Filter for High Power Applications", IEEE, Vol. 16, No. 1, Jan. 2001.
[4] Shyh-Jier Huang and Jinn-Chang Wu, "A Control Algorithm for Three-Phase Three-Wired Active Power Filters Under Nonideal Mains Voltages", IEEE, Vol. 14, No. 4, July 1999.
[5] Shailendra Kumar Jain, Pramod Agarwal and H. O. Gupta, "A Control Algorithm for Compensation of Customer-Generated Harmonics and Reactive Power", IEEE, Vol. 19, No. 1, Jan. 2004.
[6] Montero, M.I.M., Cadaval, E.R. and Gonzalez, F.B., "Comparison of Control Strategies for Shunt Active Power Filters in Three-Phase Four-Wire Systems", IEEE, Vol. 18, No. 3, Jan. 2005.
[7]
[8] H. Akagi and S. Atoh, "Control strategy of active power filter using multiple voltage-source PWM converters", IEEE Trans. Ind. Applicat., vol. IA-22, pp. 460-465, May/June 1986.
[9] T. Furuhasshi, S. Okuma and Y. Uchikawa, "A study on the theory of instantaneous reactive power", IEEE Trans. Ind. Electron., vol. 37, pp. 86-90, Feb. 1990.
[10] M. Areds, J. Hafner and K. Heumann, "Three-phase four-wire shunt active filter control strategies", IEEE Trans. Power Electron., vol. 12, pp. 311-318, Mar. 1997.
NOMENCLATURE
Here Vab is the line voltage at the PCC. The maximum modulation index is selected as 1 for the linear range. The value of the DC-link voltage (Vdc) estimated from (1) is 375 V; hence it is selected as 375 V.
B. Selection of VSC Rating
The DFIG draws lagging Volt-Ampere Reactive (VAR) power for its excitation to build the rated air-gap voltage. It is calculated from the machine parameters that a lagging VAR of 2 kVAR is needed when it runs as a motor.
In the DFIG case, the operating speed range is 0.7 p.u. to 1.3 p.u., so the maximum slip (smax) is 0.3. To obtain unity power factor at the stator side, reactive power of 600 VAR (smax*Qs = 0.3*2 kVAR) is needed from the rotor side (Qrmax). The maximum rotor active power is smax*P. The power rating of the DFIG is 5 kW, so the maximum rotor active power (Prmax) is 1.5 kW (0.3*5 kW = 1.5 kW). The rating of the VSC used as the RSC, Srated, is given as,
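The rating arithmetic above can be checked directly, assuming the usual apparent-power relation S = sqrt(P² + Q²) (the rating equation itself is not reproduced in the text):

```python
import math

# Numbers from the text: smax = 0.3, P = 5 kW, Qs = 2 kVAR.
p_r_max = 0.3 * 5.0      # maximum rotor active power, kW
q_r_max = 0.3 * 2.0      # maximum rotor reactive power, kVAR

# Assumed apparent-power relation S = sqrt(P^2 + Q^2):
s_rated = math.sqrt(p_r_max ** 2 + q_r_max ** 2)   # kVA
```

This gives a rotor-side converter rating of roughly 1.6 kVA, about a third of the 5 kW machine rating, which is the usual argument for the DFIG's low converter cost.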
Of all variable speed wind turbines, Doubly Fed Induction Generators (DFIGs) are preferred because of their low cost. The other advantages of the DFIG are higher energy output, lower converter rating and better utilization of the generator. DFIGs also provide good damping performance for a weak grid. Independent control of active and reactive power is achieved by the decoupled vector control algorithm presented in Ref [2]. The dynamic performance of the proposed DFIG is also demonstrated for varying wind speeds and changes in unbalanced nonlinear loads at the PCC. The vector control of this system considers the peak ripple current as 25% of the rated GSC current, and the interfacing inductor between the PCC and the GSC is selected as 4 mH.
V. CONTROL STRATEGY
Control algorithms for both the GSC and the RSC are presented in this section. The complete control schematic is given in Fig. 3. The control algorithm for emulating wind turbine characteristics using a DC machine and a Type A chopper is given below.
ωs and ωr are respectively the synchronous angular speed of the generator and the angular speed of the rotor. Ls and Lr are respectively the stator and rotor inductances and M is the magnetizing inductance. Ωt is the turbine speed.
Here the speed error er is obtained by subtracting the sensed speed (ωr) from the reference speed (ωr*). kpd and kid are the proportional and integral constants of the speed controller; er(k) and er(k-1) are the speed errors at the kth and (k-1)th instants, and idr*(k) and idr*(k-1) are the direct-axis rotor reference currents at the kth and (k-1)th instants. The reference rotor speed (ωr*) is estimated by optimal tip-speed-ratio control for a particular wind speed.
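The incremental (velocity-form) PI update implied by the er(k), er(k-1), idr*(k), idr*(k-1) notation can be sketched as follows; the gains and the error sequence are illustrative, not taken from the paper:

```python
def pi_incremental(u_prev, e_k, e_km1, kp, ki):
    """Velocity-form PI: u(k) = u(k-1) + kp*(e(k) - e(k-1)) + ki*e(k).
    Here u plays the role of the d-axis rotor reference current idr*
    and e the speed error er."""
    return u_prev + kp * (e_k - e_km1) + ki * e_k

# Illustrative run: a speed error that decays geometrically to zero.
u, e_prev = 0.0, 0.0
errors = [1.0 * 0.5 ** n for n in range(20)]
for e in errors:
    u = pi_incremental(u, e, e_prev, kp=0.8, ki=0.2)
    e_prev = e
```

The proportional term telescopes away as the error vanishes, leaving the accumulated integral action, so the output settles at a constant reference current.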
The tuning of the PI controllers used in both the RSC and the GSC is achieved using the Ziegler-Nichols method: the integral gain is initially set to zero and the proportional gain is increased until the response starts oscillating, giving the ultimate gain Ku and the oscillation period Tu. The proportional gain is then taken as 0.45 Ku and the integral gain as 1.2 Kp/Tu.
The slip angle is calculated as shown below. The grid voltage angle is calculated from the PLL for aligning the rotor currents with the voltage axis, and the rotor position is obtained with an encoder.
Here kpdc and kidc are the proportional and integral gains of the DC-link voltage controller. Vdce(k) and Vdce(k-1) are the DC-link voltage errors at the kth and (k-1)th instants, and igsc*(k) and igsc*(k-1) are the active-power components of the GSC current at the kth and (k-1)th instants.
The grid phase voltage can be expressed as follows:
The active and reactive powers exchanged between the grid and the GSC are given by the following equations (15):
If the d-axis is aligned with the stator voltage, one can write vdg = us and vqg = 0. Hence the active and reactive power expressions simplify as follows:
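The simplification can be written out numerically. The 1.5 factor below assumes the amplitude-invariant d-q convention (other conventions scale differently), and the numbers are illustrative:

```python
def grid_pq(vdg, vqg, idg, iqg):
    """Active/reactive power in the d-q frame
    (amplitude-invariant convention assumed)."""
    p = 1.5 * (vdg * idg + vqg * iqg)
    q = 1.5 * (vqg * idg - vdg * iqg)
    return p, q

# With the d-axis aligned to the stator voltage: vdg = us, vqg = 0.
us, idg, iqg = 325.0, 10.0, -2.0       # illustrative values
p, q = grid_pq(us, 0.0, idg, iqg)
# p reduces to 1.5*us*idg and q to -1.5*us*iqg, so idg controls active
# power and iqg controls reactive power independently.
```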
Fig. 4. Wind Speed
Fig. 4 shows the variation of wind speed with time. From this graph we conclude that the wind speed does not remain constant but varies with time.
2. MPPT power generation
Fig. 5 shows that maximum constant power can be extracted from the wind power plant using the MPPT algorithm. MPPT can be implemented by varying the pitch angle and by yawing.
5. STATOR SIDE
Fig. 7 shows the amplitude and waveform of the induced stator-side voltage and currents.
6. ROTOR SIDE
Fig. 9 shows that the currents induced in the rotor are not pure and contain some harmonics. Converters are therefore used to eliminate the harmonics and maintain the voltage; the rotor side can be connected to the grid only by keeping the rotor-side voltage constant.
Conclusion
The GSC control algorithm of the proposed DFIG has been modified for supplying the harmonics and reactive power of the local loads. In the proposed DFIG, the reactive power for the induction machine is supplied from the RSC and the load reactive power is supplied from the GSC. Decoupled control of both active and reactive power is achieved by the RSC control. The proposed DFIG has also been verified at the wind-turbine stalling condition for compensating harmonics and reactive power of local loads. The proposed DFIG-based WECS with an integrated active filter has been simulated in the MATLAB/Simulink environment and the simulation results have been verified. Steady-state performance of the proposed DFIG has been demonstrated for a given wind speed, and the dynamic performance of the proposed GSC control algorithm has also been verified for variations in wind speed and for local nonlinear load
[2] S. Muller, M. Deicke and R. W. De Doncker, "Doubly fed induction generator systems for wind turbines", IEEE Ind. Appl. Magazine, vol. 8, no. 3, pp. 26-33, May/Jun 2002.
[3] A. Gaillard, P. Poure and S. Saadate, "Active filtering capability of WECS with DFIG for grid power quality improvement", Proc. IEEE Int. Symp. Ind. Electron., Jun. 30, 2008, pp. 2365-2370.
[4]
[5] A. Gaillard, P. Poure, S. Saadate and M. Machmoum, "Variable speed DFIG wind energy system for power generation and harmonic mitigation", Renewable Energy, 34 (6) (2009), pp. 1545-1553.
[6] E. Tremblay, A. Chandra and P. J. Lagace, "Grid-side converter control of DFIG wind turbines to enhance power quality of distribution network", 2006 IEEE PES General Meeting, pp. 6.
[7] E. Tremblay, S. Atayde and A. Chandra, "Direct power control of a DFIG-based WECS with active filter capabilities", 2009 IEEE Electrical Power & Energy Conference (EPEC), 22-23 Oct. 2009, pp. 1-6.
[8] R. Datta, "Rotor side control of grid-connected wound rotor induction machine and its application to wind power generation", Ph.D. dissertation, Dept. Electr. Eng., Indian Inst. Sci., Bangalore, India, 2000.
[9] B. Rabelo and W. Hofmann, "Control of an optimized power flow in wind power plants with doubly-fed induction generators", in Proc. 34th Annu. Power Electronics Specialists
Dr.S.Maheswari
I. INTRODUCTION
Watermarking is a technique through which information is carried without degrading the quality of the original signal. A key is used to increase security, preventing unauthorized users from manipulating or extracting the data. Watermarking technology is now attracting attention for protecting the copyrights of images. Two types of watermarks exist: visible watermarks, and invisible (transparent) watermarks, which cannot be perceived by the human sensory system. Based on the embedding domain, watermarking systems can be classified as spatial-domain or transform-domain systems [6].
Audio watermarking is a technology for hiding information in an audio file without the information being perceptible to the listener and without affecting the quality of the audio signal [9]. A spatial-domain watermarking system directly alters the main data elements in an image to hide the watermark data, whereas a transform-domain watermarking system alters the transforms of the data elements to hide the watermark data; the latter has proved more robust than spatial-domain watermarking [8]. Some complexities are present in the execution process; to overcome those problems, an audio signal decomposition method called the Discrete Wavelet Transform (DWT) is used in our method [1]. One watermark was derived from low-frequency DWT coefficients, and the other was constructed from DWT coefficients of a log-polar mapping of the host image [2]. The watermark is embedded into the audio with the help of a secret key, and the watermarked signal passes through a channel subject to several attacks such as noise addition, re-sampling, etc. [4]. The same secret key is applied to the attacked watermarked signal to recover the original watermark image.
Step 3: One level Discrete Wavelet Transform (DWT) is applied into samples in each sections.
The signal divided into approximation and detailed coefficients. Then the approximation coefficients of one
level DWT is obtained.
Step 4: Energy will be calculated in approximation coefficients.
S(i) = sum(abs(Yi(k))*abs(Yi(k)))
(1)
The energy S(i) is sum of absolute values of approximation coefficients multiplied by other
absolute values of approximation coefficients.
Step 5: Compare the energies of adjacent sections. If the energy S(i) is greater than the next energy value S(i+1), the condition is TRUE and the relational value is 1; if S(i) is less than S(i+1), the condition is FALSE and the relational value is 0. The relational values obtained between adjacent sections form the relational value array.
Step 6: Perform an exclusive-OR operation between the binary-pixel watermark image and the relational value array to obtain a key. With this key the watermark can be extracted; send this key to the extracting side.
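The embedding steps above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the Haar step below stands in for a one-level DWT library call, and the toy audio signal and 8-bit watermark are assumptions.

```python
def haar_approx(samples):
    """One-level Haar DWT approximation coefficients (stand-in for a DWT call)."""
    return [(samples[i] + samples[i + 1]) / 2 ** 0.5
            for i in range(0, len(samples) - 1, 2)]

def relational_array(audio, n_sections):
    """Energy S(i) per section; relational bit is 1 if S(i) > S(i+1), else 0."""
    size = len(audio) // n_sections
    sections = [audio[i * size:(i + 1) * size] for i in range(n_sections)]
    energy = [sum(abs(c) * abs(c) for c in haar_approx(s)) for s in sections]
    return [1 if energy[i] > energy[i + 1] else 0
            for i in range(len(energy) - 1)]

def make_key(watermark_bits, rel):
    """XOR the binary watermark with the relational array to form the key."""
    return [w ^ r for w, r in zip(watermark_bits, rel)]
```

On the extracting side, the relational array is recomputed from the received audio and XORed with the key; because XOR is its own inverse, an unattacked signal returns the watermark bits exactly.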
Figure 8. Key
III. WATERMARK EXTRACTION
The watermark extraction is the reverse process of watermark embedding. The watermarked
audio signal is processed and finally the watermarked image is extracted.
Step 1: The audio signal is divided into a number of sections.
Step 2: Each section, called an audio data block, has N samples.
Step 3: A one-level Discrete Wavelet Transform (DWT) is applied to every segmented frame, and the approximation coefficients of the one-level DWT are obtained.
Step 4: The energy of the approximation coefficients is calculated.
Step 5: Compare the energies of adjacent sections. If the energy S(i) is greater than the next energy value S(i+1), the condition is TRUE and the relational value is 1; if S(i) is less than
The bit error rate (BER) is the number of bit errors per unit time, while the bit error ratio is the number of bit errors divided by the total number of bits transferred during a studied time interval. The BER is a unitless performance measure, often expressed as a percentage. The bit error ratio can be considered an approximate estimate of the bit error probability; this estimate is accurate over a long time interval and a high number of bit errors.
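The BER and NC measures used in the table below can be sketched as follows; the bit sequences here are illustrative, not the paper's test data.

```python
def bit_error_rate(original, extracted):
    """Bit error ratio: fraction of mismatching bits between two bit lists."""
    errors = sum(o != e for o, e in zip(original, extracted))
    return errors / len(original)

def normalized_correlation(original, extracted):
    """NC between two binary sequences, with bits mapped to {-1, +1}."""
    a = [2 * b - 1 for b in original]
    b = [2 * c - 1 for c in extracted]
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den
```

A perfectly recovered watermark gives BER = 0 and NC = 1; each flipped bit raises the BER by 1/N.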
Table II. BER and NC values of different audio signals against various attacks.
[2] S. Wu, J. Huang, D. Huang and Y. Q. Shi, "Efficiently Self-Synchronized Audio Watermarking for Assured Audio Data Transmission," IEEE Trans. Broadcast., Vol. 51, No. 1, pp. 69-76, 2005.
[3] V. K. Bhat, I. Sengupta and A. Das, "An Adaptive Audio Watermarking Based on the Singular Value Decomposition in the Wavelet Domain," Digital Signal Processing, Vol. 20, No. 6, pp. 1547-1558, 2010.
[4] J. Huang, Y. Wang and Y. Q. Shi, "A Blind Audio Watermarking Algorithm with Self-Synchronization," in Proc. IEEE Int. Symp. Circuits and Systems, Vol. 3, pp. 627-630, 2002.
[5] Q. Wen, T.-F. Sun and S.-X. Wang, "Concept and Application of Zero-Watermark," Tien Tzu Hsueh Pao/Acta Electronica Sinica, Vol. 31, No. 2, pp. 214-216, 2003.
[6] P. K. Dhar and T. Shimamura, "Entropy-Based Audio Watermarking Using Singular Value Decomposition and Log-Polar Transformation," IEEE 56th Int. Midwest Symp. Circuits and Systems (MWSCAS), 2013.
[7] Yang Yu, Lei Min, Cheng Mingzhi, Liu Bohuai, Lin Guoyuan and Xiao Da, "An Audio Zero-Watermark Scheme Based on Energy Comparing," China Communications (Information Security), 2014.
[8] X. Wang and H. Peng, "Audio Watermarking Approach Based on Energy Relation in Wavelet Domain," Journal of Xihua University (Natural Science), Vol. 28, No. 3, 2009.
[9] D. Kirovski and S. Malvar, "Robust Spread-Spectrum Audio Watermarking," IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP'01), pp. 1345-1348, 2001.
[10] X.-Y. Wang and H. Zhao, "A Novel Synchronization Invariant Audio Watermarking Scheme Based on DWT and DCT," IEEE Trans. Signal Processing, Vol. 54, No. 12, pp. 4835-4840, 2006.
Dr. C. R. Balamurugan
I. INTRODUCTION
The need for vehicles increases every day, and people want safe and comfortable travel with a low-cost investment. The proposed system includes seven safety measures addressing the most common causes of road accidents during day-time and night-time driving. The main motivation of the proposed system is to reduce driving accidents in automobiles.
In this proposed system the included measures are:
To reduce night-time driving accidents caused by glare from oncoming headlights using AFHAS (Automatic Front Headlight Adjustment System), since most accidents arise during night-time driving.
To reduce the short circuit faults at the vehicle wiring connections.
Gas leakage detection and prevention.
Monitoring temperature in an automotive engine location.
Automatically adjust horn sound for the respective surroundings.
To display accurate fuel level.
Monitoring the wheel pressure.
II. LITERATURE SURVEY
Postolache [1] surveyed field bus systems and noted that CAN and TIA-485 are two of the most widely used standards: while CAN (ISO 11898) includes complete data-link-layer specifications on top of its physical layer, TIA-485 addresses only the physical layer of the 7-layer OSI model. Jianqun Wang et al. [2] discussed an autonomous dynamic synchronization method for CAN communication based on a queue of frame IDs, used to realize quasi-synchronous communication. The method establishes a quasi-synchronous principle in which every CPU sends information in an agreed sequence, so that the probability of collisions is reduced and the situation where CPUs send data in arbitrary order is avoided; this increases the flow of effective data and the real-time performance of the network. At the same time, assigning each CPU a fixed sending queue reduces frame losses due to arbitration conflicts, which greatly improves synchronous acquisition. Hyeryun Lee et al. [3] demonstrated that automobiles on the streets are highly vulnerable to cyber attacks that inject harmful packets into the popular in-vehicle network, CAN, via a Bluetooth wireless channel. In contrast to previous works that showed the vulnerability of automobiles, their in-depth analysis showed that it is possible to attack automobiles by only injecting packets
A. Master Module
The master module have temperature monitoring unit, automatic front headlight adjustment system,
LPG gas leakage sensor and exhaust gas control, which are connected to an PIC 16f877A microcontroller.
It has 5 I/O (Input /Output) Ports for connecting to the sensor modules. The sensor modules are
connected to the PIC 16f877A through I/O ports in fig 2.
The sensors used are:
1. Temperature Sensor LM35
2. Light Sensor LDR and BH1750FVI
3. Gas leakage sensor-MQ6
4. Accelerometer ADXL335
Each sensor is connected to an I/O port of the PIC microcontroller. The temperature sensor LM35 monitors the engine temperature: when the engine heat exceeds a certain level, the LM35 detects the rise in temperature and signals the abnormal engine-temperature condition to the PIC microcontroller, which sends the information to LCD1 and enables LED D1 to alert the user.
The Automatic Front Headlight Adjustment System mainly reduces the glare effect during night-time driving. The LDR detects the light intensity of the oncoming vehicle, and the headlight intensity of the vehicle is lowered so that the driver can drive safely. The accelerometer is used to adjust the headlight according to the steering-wheel position.
The gas leakage sensor MQ6 detects LPG and other gas leakages and sends the information to the PIC microcontroller, which drives the motor connected to the vehicle window to vent the harmful gas outside; it also activates a warning signal through LED D2, and the LCD displays "Gas leakage detected".
B. Slave Module
It is used to transmit as well as receive signals. The two modules are connected to the CAN bus through a CAN controller and CAN transceiver on both the master and slave sides. The monitored information of the slave module is transmitted over the CAN bus to the master module. The monitored information of the master module, together with that of the slave module, is passed to the UART (Universal Asynchronous Receiver Transmitter). The UART protocol is a point-to-point communication and transmits one data item at a time; here it links one master and one slave module.
The output of both modules is transmitted through the UART protocol on the master module to the virtual terminal. The implementation of the CAN protocol with the two modules is shown in Fig. 2.
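When both modules have pending frames, the CAN bus resolves contention by bitwise arbitration: 0 is dominant, so the frame with the lowest identifier wins without destroying either frame. A minimal sketch of that rule (the identifiers and node roles here are hypothetical, not from the described system):

```python
def arbitrate(pending_ids):
    """Return the 11-bit identifier that wins bitwise CAN arbitration.

    At each bit position, MSB first, any node transmitting a dominant
    0 silences the nodes transmitting a recessive 1.
    """
    contenders = list(pending_ids)
    for bit in range(10, -1, -1):            # 11-bit standard identifiers
        if any((i >> bit) & 1 == 0 for i in contenders):
            contenders = [i for i in contenders if (i >> bit) & 1 == 0]
    return contenders[0]

# e.g. a master alarm frame (ID 0x100) beats slave telemetry (ID 0x200)
assert arbitrate([0x200, 0x100]) == 0x100
```

This is why safety-critical messages (temperature alarm, gas leakage) are normally assigned lower identifiers than routine telemetry.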
Monitored parameters: temperature, gas level, fuel level, current sensor, RFID sensor.
V. CONCLUSION
The proposed system has been designed with two modules, master and slave, which take the required actions stated above: mitigating night-time driving accidents due to headlight glare by providing clear vision for the driver, short-circuit fault-line detection, gas leakage detection and prevention, monitoring the engine-area temperature with analog and digital sensors, horn volume adjustment through RFID to reduce noise pollution and disturbance to the surroundings, and wheel-pressure monitoring. Communication between the master and slave modules through the Controller Area Network serial communication protocol has been implemented; it carries out the required actions and displays the values on the dashboard for the driver
[2] Jianqun Wang, Jingxuan Chen and Ning Cao, "A Method to Improve the Stability and Real-Time Ability of CAN," Int. Conf. Mechatronics and Automation, 2015, Vol. 2, No. 1, pp. 1531-1536.
[3] Hyeryun Lee, Kyunghee Choi, Kihyun Chung, Jaein Kim and Kangbin Yim, "Fuzzing CAN Packets into Automobiles," Int. Conf. Advanced Information Networking and Applications, 2015, Vol. 3, No. 5, pp. 817-821.
[4] Donghyuk Jang, Sungmin Han, Suwon Kang and Ji-Woong Choi, "Communication Channel Modeling of Controller Area Network (CAN)," Int. Conf. Ubiquitous and Future Networks, 2015, Vol. 1, No. 2, pp. 86-88.
[5] Jaromir Skuta and Jiri Kulhanek, "Control of Car LED Lights by CAN/LIN Bus," 2015, Vol. 2, No. 1, pp. 486-489.
[6] Shane Tuohy, Martin Glavin, Ciaran Hughes, Edward Jones, Mohan Trivedi and Liam Kilmartin, "Intra-Vehicle Networks: A Review," IEEE Trans. Intelligent Transportation Systems, Vol. 16, Issue 2, pp. 534-545, 2014.
[7] Beying Deng and Xufeng Zhang, "Car Networking Application in Vehicle Safety," Workshop on Advanced Research and Technology in Industry Applications, 2014, pp. 834-837.
[8] Sathya Narayanan, Monica and Suresh, "Design and Implementation of ARM Microcontroller Based Vehicle Monitoring and Control System Using Controller Area Network (CAN) Protocol," Int. Journal on Innovative Research in Science, Engineering and Technology, 2014, Vol. 3, Issue 3, pp. 712-718.
[9] Yeshwant Deodhe, Swapnil Jain and Ravindra Gimonkar, "Implementation of Sensor Network Using Efficient CAN Interface," Int. Journal of Computing and Technology, 2014, Vol. 1, Issue 1, pp. 19-23.
[10] Alberto Broggi, Michele Buzzoni, Stefano Debattisti, Paolo Grisleri, Maria Chiara Laghi, Paolo Medici and Pietro Versari, "Extensive Tests of Autonomous Driving Technologies," IEEE Trans. Intelligent Transportation Systems, Vol. 14, No. 3, pp. 1403-1415, 2013.
[11] Vikash Kumar Singh and Kumari Archana, "Implementation of CAN Protocol in Automobiles Using Advanced Embedded System," Int. Journal of Engineering Trends and Technology (IJETT), 2013, Vol. 4, pp. 4422-4427.
[12] Jaimon Chacko Varghese and Binesh Ellupurayil Balachandran, "Low Cost Intelligent Real Time Fuel Mileage Indicator for Motorbikes," Int. Journal of Innovative Technology and Exploring Engineering, 2013, Vol. 2, Issue 5, pp. 97-107.
[13] Ashwini S. Shinde, Vidhyadhar and B. Dharmadhikari, "Controller Area Network for Vehicle Automation," Int. Journal of Emerging Technology and Advanced Engineering, 2012, Vol. 2, Issue 2, pp. 12-17.
[14] A. Che Soh, M. K. Hassan and A. J. Ishak, "Vehicle Gas Leakage Detector," The Pacific Journal of Science and Technology, 2010, Vol. 11, No. 2, pp. 66-76.
ABSTRACT - Voltage-source-converter based static synchronous compensators (STATCOMs) are used in transmission and distribution lines for voltage regulation and reactive power compensation. Nowadays angle-controlled STATCOMs have been deployed in utilities to improve the output voltage waveform quality, with lower losses than PWM STATCOMs. Although the angle-controlled STATCOM has many advantages, its operation suffers when unbalanced and fault conditions occur in the transmission and distribution lines. This paper presents a dual angle control strategy for the STATCOM to overcome the drawbacks of both conventional angle-controlled and PWM-controlled STATCOMs. The approach does not completely change the design of the conventional angle-controlled STATCOM; instead it adds only an AC oscillation to the DC output of the conventional angle controller, making it dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM.
Index terms - Dual angle control (DAC), hysteresis controller, STATCOM.
I. INTRODUCTION
Many devices are used in power systems for voltage regulation, reactive power compensation and power-factor regulation [1]. The voltage source converter (VSC) based STATCOM is one of the most widely used devices in large transmission and distribution systems for voltage regulation and reactive power compensation. Nowadays angle-controlled STATCOMs have been deployed in utilities to improve the output voltage waveform quality, with lower losses than PWM STATCOMs. The first commercially implemented installation was the 100-MVAr STATCOM at the TVA Sullivan substation, followed by the New York Power Authority installation at the Marcy substation in New York state [13], [16]. The 150-MVA STATCOM at the Laredo and Brownsville substations in Texas, the 160-MVA STATCOM at the Inez substation in eastern Kentucky, the 43-MVA PG&E Santa Cruz STATCOM and the 40-MVA KEPCO (Korea Electric Power Corporation) STATCOM at the Kangjin substation in South Korea are a few examples of commercially implemented and operating angle-controlled STATCOMs worldwide.
Although the angle-controlled STATCOM has many advantages over other STATCOMs, its operation suffers from overcurrent and possible saturation of the interfacing transformers caused by negative-sequence components during unbalanced and fault conditions in the transmission and distribution lines [4]. This paper presents a dual angle control strategy for the STATCOM to overcome the drawbacks of both conventional angle-controlled and PWM-controlled STATCOMs [2]. The approach does not completely change the design of the conventional angle-controlled STATCOM; instead it adds only an AC oscillation (αac) to the DC output (αdc) of the conventional angle controller, making it dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM. The angle-controlled STATCOM has fewer degrees of freedom than the PWM STATCOM, but it is widely used because its output voltage waveform quality is higher.
This paper presents a new control structure for high-power angle-controlled STATCOMs. Here the only control input to the angle-controlled STATCOM is the phase difference α between the VSC and AC-bus instantaneous voltage vectors. In the proposed control structure, α is split into two parts, αdc and αac. The DC part αdc, which is the final output of the conventional angle controller, is in charge of controlling the positive-sequence VSC output voltage. The oscillating part αac controls the DC-link voltage oscillations. The proposed STATCOM can operate under fault conditions and is able to mitigate the faults and unbalances that occur in transmission and distribution lines.
In this paper we have implemented a new control structure in the STATCOM which has the ability to mitigate disturbances such as sag and swell and other power-quality events that appear in power systems. The analysis of the proposed control structure is carried out through MATLAB simulations, and the experimental results are satisfactory.
The second type is the angle-controlled STATCOM. By changing the output voltage angle of the STATCOM for a particular time with respect to the line voltage angle, the inverter can provide both inductive and capacitive reactive power.
By controlling the angle α in the positive and negative directions and varying the DC-link voltage, the final output voltage of the voltage source converter (VSC) can be increased or decreased [2]. Here the ratio between the DC and AC voltage in the STATCOM should be kept constant. If the final output voltage of the STATCOM is greater than the line voltage, the STATCOM injects reactive power into the line; if it is lower than the line voltage, the STATCOM absorbs reactive power from the line. Throughout this paper the performance of the proposed control structure is shown by MATLAB simulations.
This scheme protects the switches and limits the STATCOM current under fault conditions. However, DC-link voltage oscillations occur in this method and can cause the STATCOM to trip. The injection of poor-quality voltage and current waveforms into a faulted power system produces undesirable stress on the power system components [7].
IV. ANALYSIS OF STATCOM UNDER UNBALANCED OPERATING CONDITIONS
In this method a set of unbalanced three-phase phasors is split into two symmetrical components, positive-sequence and negative-sequence, plus a zero-sequence component. The line currents in the three phases of the system are represented by equations 1, 2, 3 and 4 below,
where α is the angle by which the inverter voltage leads/lags the line voltage vector and K is the factor of the inverter which relates the DC-side voltage to the phase-to-neutral voltage at the AC-side terminals. The inverter terminal fundamental voltage is given in equations 8, 9 and 10 below,
Basically, the unbalanced system can be analysed by postulating a set of negative-sequence voltage sources connected in series with the STATCOM tie line. The main idea of the dual angle control strategy is to generate a fundamental negative-sequence voltage vector at the VSC output terminals to attenuate the effect of the negative-sequence bus voltage. The generated negative-sequence voltage minimizes the negative-sequence current produced in the STATCOM under fault conditions. A third-harmonic voltage is produced at the VSC output terminals because of the interaction between the second-harmonic oscillation of the DC-link voltage and the switching function. The third-harmonic voltage is positive sequence, with phases a, b and c 120° apart. Basically, the negative-sequence current produced under unbalanced AC system conditions generates second-harmonic oscillations on the DC-link voltage, which are reflected as a third-harmonic voltage at the VSC output terminals together with a fundamental negative-sequence voltage. As with the fundamental negative-sequence voltage, the DC-link voltage oscillations determine the amplitude of the second-harmonic voltage [3]. By controlling the second-harmonic oscillations on the DC-link voltage, the negative-sequence current can be reduced; a decreased negative-sequence current reduces the DC-link voltage oscillations, and reducing the DC-link second harmonic reduces the third-harmonic voltage and current at the STATCOM tie line [12]. The control analysis of the STATCOM under fault conditions is done in MATLAB.
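The symmetrical-component split used in this analysis can be sketched with the standard Fortescue transform; the rotation operator a = e^(j120°) and the sample phasors below are illustrative assumptions, not values from the paper.

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)   # 120-degree rotation operator a

def sequences(va, vb, vc):
    """Zero, positive and negative sequence components (phase-a reference)."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A * A * vc) / 3
    v2 = (va + A * A * vb + A * vc) / 3
    return v0, v1, v2
```

For a balanced positive-sequence set the negative and zero components vanish; under a fault, the nonzero v2 is exactly the quantity the αac controller is designed to counteract.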
V. PROPOSED CONTROL STRUCTURE DEVELOPMENT
As discussed in the previous section, the STATCOM voltage and current during unbalanced conditions are calculated by connecting a set of negative-sequence voltage sources in series with the STATCOM tie line, as shown in Fig.
The reflected negative-sequence voltages at the STATCOM terminals of phases a, b and c are calculated by equations 11, 12 and 13 below.
The derivatives of the STATCOM tie-line negative-sequence currents with respect to time are calculated by equations 14, 15 and 16 below.
In the proposed structure, the angle α is divided into two parts, αdc and αac. The angle αdc is the output of the positive-sequence controller and αac is the output of the negative-sequence controller. The angle αac contains the second-harmonic oscillations that generate the negative-sequence voltage vector at the VSC output terminals to attenuate the effect of the negative-sequence bus voltage under fault conditions. The αac component should be properly filtered out, otherwise it leads to higher-order harmonics on the AC side.
Here the voltage decreases suddenly in a particular time interval due to a sudden change in the load value. When the load connected to the system does not remain constant, the line current and voltage do not remain constant either. During a fault occurrence the grid current and voltage do not remain constant, so the STATCOM can be used to maintain the voltage. Voltage is an important protection parameter, since over-voltage can damage the insulation of the transmission lines and protection devices.
Here the reduction in voltage amplitude is observed because of the sudden change in the load value. This follows from the inverse relationship between voltage and current in normal power systems. The sudden increase in load is achieved by connecting a load to the grid by means of a switch; by giving the switch a time sequence for connecting it to the grid, the load can be connected and disconnected automatically for a particular time sequence.
I. INTRODUCTION
Cracking is one of the most common and important types of asphalt pavement distress. In many countries considerable resources are spent to reduce errors and increase the performance of automatic evaluation of road quality. Generally, cracking distress can be divided into three main types: longitudinal, transversal and alligator cracking.
Traditionally, data about pavement cracking has been gathered by human inspectors collecting data manually through visual surveys. Manual surveys are time-consuming and costly and involve a fair amount of subjectivity; there are also serious risks to the surveying personnel due to high-speed public traffic. The main aim of the proposed system is to reduce shadowing, eliminate white lane markings, and increase the image resolution and recognition rate by using a shape-based image retrieval algorithm.
II. LITERATURE SURVEY
Haiyan Guan et al. presented a survey of the literature on road feature extraction, giving a detailed description of a Mobile Laser Scanning (MLS) system (RIEGL VMX-450) for transportation-related applications. Their proposed RoadSEA method detects road curbs from a set of profiles sliced along the vehicle trajectory data. In further work, Haiyan Guan et al. addressed the assessment of pavement cracks, one of the essential tasks of road maintenance. Their proposed ITVCrack comprises a preprocessing step, which separates road points from non-road points using vehicle trajectory data, and the generation of a geo-referenced feature (GRF) image from the road points.
Henrique Oliveira et al. proposed a fully integrated system for the automatic detection and characterization of cracks in flexible road pavement surfaces. The first task addressed, crack detection, is based on a learning-from-samples paradigm, where a subset of the available image database is automatically selected and used for unsupervised training of the system; the second task deals with crack-type characterization, for which another classification system is constructed to characterize the connected components of the detected cracks. Haiyan Guan, Jonathan Li et al. presented the development and implementation of an automated object extraction strategy for rapid and accurate road marking inventory; the proposed road marking extraction method is based on 2-D geo-referenced feature (GRF) images, which are interpolated from 3-D road surface points through a modified inverse distance weighted (IDW) interpolation.
Wei Chen et al. noted that data visualization is an efficient means of representing the distributions and structures of datasets and revealing hidden patterns in the data; their paper introduces the basic concepts and pipeline of traffic data visualization, gives an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical and categorical properties of traffic data. M. Salman et al. planned to enhance an existing algorithm by analysing connected components and introducing further post-processing techniques; a novel approach to automatically distinguish cracks in digital pavement images is proposed in their paper. Wei Na et al. provided an approach for automatic classification of pavement surface images: first, image enhancement is performed with mathematical morphological operators; secondly, pavement image segmentation separates the cracks from the background.
Tien Sy Nguyen et al. presented a new measure which simultaneously takes brightness and connectivity into account in the segmentation step for crack detection on road pavement images; their method considers all characteristics of cracks without restricting crack orientations and forms. Y. H. Tseng et al. focused on developing strategies for executing pavement inspection tasks using robots. They developed three strategies: random walk, random walk with map recording, and a third that adds vision capability to the robot; the three proposed strategies have a higher probability of revisiting distresses, which makes the results more reliable. Finally, Haiyan Guan et al. presented an automated approach to the detection and extraction of road markings from mobile laser scanning (MLS) point clouds by taking advantage of multiple data features. The test dataset collected
D. EDGE DETECTOR
Edge detection is the name for a set of mathematical methods which aim at identifying points
in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The
points at which image brightness changes sharply are typically organized into a set of curved line segments
where * denotes the 2-dimensional convolution operation. Since the Sobel kernels can be decomposed as the product of an averaging kernel and a differentiation kernel, they compute the gradient with smoothing. For example,

Gx = [-1 0 +1; -2 0 +2; -1 0 +1] * A

can be written as

Gx = [1; 2; 1] * ([-1 0 +1] * A).

The x-coordinate is defined as increasing to the right and the y-coordinate as increasing downward. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude using

G = sqrt(Gx^2 + Gy^2)

and the gradient direction Θ = atan2(Gy, Gx); for example, Θ is 0 for a vertical edge which is darker on the right side.
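The Sobel computation above can be sketched in pure Python; the 3x3 kernels are the standard ones, while the toy image in the usage example is an assumption.

```python
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude sqrt(Gx^2 + Gy^2) for interior pixels of a 2-D image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a flat region the magnitude is zero, while a vertical intensity step produces a strong response along the edge column, which is the property crack detection relies on.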
IV. SIMULATION RESULT
V. CONCLUSION
Based on the detected road surface data, automated road marking and pavement assessment using a Shape Based Image Retrieval (SBIR) algorithm is implemented. Undetected distress such as potholes and pop-outs can cause serious harm to valuable human life. Developing a project like this will reduce maintenance cost and create a better road network that people can use for a neat and safe journey.
REFERENCES
[1] Haiyan Guan, Jonathan Li, Yongtao Yu, Michael Chapman and Cheng Wang, (2015) "Automated Road Information Extraction from Mobile Laser Scanning Data," IEEE Trans. Intelligent Transportation Systems, Feb., Vol. 16, Issue 1, pp. 194-205.
[2] Haiyan Guan, Jonathan Li, Yongtao Yu, Michael Chapman, Cheng Wang and Ruifang Zhai, (2015) "Iterative Tensor Voting for Pavement Crack Extraction Using Mobile Laser Scanning Data," IEEE Trans. Geoscience and Remote Sensing, March, Vol. 53, No. 3, pp. 1527-1537.
[3] Haiyan Guan, Jonathan Li, Yongtao Yu and Zheng Ji, (2015) "Using Mobile LiDAR Data for Rapidly Updating Road Markings," IEEE Trans. Intelligent Transportation Systems, March, Vol. 18, No. 1, pp. 125-137.
[4] Wei Chen, Fangzhou Guo and Fei-Yue Wang, (2015) "A Survey of Traffic Data Visualization," IEEE Trans. Intelligent Transportation Systems, March, Vol. 14, No. 1, pp. 1-15.
[5] Henrique Oliveira and Paulo Lobato Correia, (2013) "Automatic Road Crack Detection and Characterization."
[6] M. Salman, S. Mathavan, K. Kamal and M. Rahman, (2013) "Pavement Crack Detection Using the Gabor Filter," IEEE Annual Conf. Intelligent Transportation Systems, Oct. 6-9, pp. 2039-2044.
[7] Haiyan Guan, Jonathan Li, Yongtao Yu and Cheng Wang, (2013) "Rapid Update of Road Surface Databases Using Mobile LiDAR: Road Markings," Fifth Int. Conf. Geo-Information Technologies for Natural Disaster Management, pp. 124-129.
[8] Wei Na and Wang Tao, (2012) "Proximal Support Vector Machine Based Pavement Image Classification," IEEE Fifth Int. Conf. Advanced Computational Intelligence (ICACI), Oct. 18-20, pp. 686-688.
[9] Y. H. Tseng, S. C. Kang, Y. S. Su, C. H. Lee and J. R. Chang, (2010) "Strategies for Autonomous Robots to Inspect Pavement Distress," IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Oct. 18-22, pp. 1196-1201.
[10] Zhao Liu, Daxue Liu, Tongtong Chen and Chongyang Wei, (2013) "Curb Detection Using 2D Range Data in a Campus Environment," Seventh Int. Conf. Image and Graphics, pp. 291-296.
I. INTRODUCTION
A Wireless Sensor Network (WSN) is a multi-hop wireless network consisting of a large number of small, low-cost, low-power sensor nodes that perform intended monitoring functions in a target area. A sensor node senses data and forwards it over a wireless medium to a remote data-collection device. Localization, determining where a given node is physically located, is a critical requirement and is necessary for data correlation. Location information can be gathered by manual setting or from a GPS device; however, manual setting requires a huge cost in human time, while GPS requires expensive devices and does not work in indoor or underwater environments. Neither approach is applicable to large-scale WSNs.
Localization methods can be classified according to the type of information used. The first is range-based localization, where locations are calculated from node-to-node distance estimates or inter-node angles. The second is range-free localization, in which locations are determined by radio connectivity constraints. Range-based methods are more complex and more expensive because they require infrared, X-ray or ultrasound techniques to measure distance.
In this paper we propose a circle-based path-planning mechanism to localize each node: by increasing the diameter of concentric circles, the trajectory covers the four corners of the sensing field, and the mechanism also detects a hacker node within the group of wireless sensor nodes.
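A trajectory of concentric circles of growing diameter can be sketched as a list of beacon waypoints; the circle spacing and beacon density below are illustrative assumptions, not the paper's parameters.

```python
import math

def circle_waypoints(center, radii, points_per_circle=12):
    """Beacon waypoints on concentric circles around the field center.

    The mobile anchor visits each waypoint and broadcasts a beacon;
    growing radii sweep the path outward toward the field corners.
    """
    cx, cy = center
    path = []
    for r in radii:
        for k in range(points_per_circle):
            theta = 2 * math.pi * k / points_per_circle
            path.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return path
```

For a square field, choosing the largest radius close to half the field diagonal lets the outermost circle pass near all four corners.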
II. LITERATURE SURVEY
A predefined trajectory algorithm [1] was proposed to achieve accurate, low-cost sensor localization and to minimize the localization error of sensor nodes using a single mobile anchor node; an obstacle-resistant trajectory was also developed to detect obstacles in the sensing field. A route-planning mechanism for MANETs [2] was developed using a single mobile anchor node to determine the locations of sensor nodes and to detect black-hole attacks such as denial-of-service (DoS) attacks, which drop incoming packets between source and destination, by improving the security of each node. That algorithm focuses mainly on detecting attacks against and improving the security of the Ad-hoc On-Demand Distance Vector (AODV) routing protocol, and the paper reports better results for end-to-end delay, packet delivery ratio and throughput.
The system architecture consists of mobile sinks differentiated by different colors. A mobile sink is used to collect data from the sensing field. Localization methods are classified into two types: 1) range-based localization and 2) range-free localization. Range-based algorithms use point-to-point distance calculation and angle estimation to compute the position of a sensor node, relying on parameters such as Received Signal Strength Indicator (RSSI), Time of Arrival (TOA), Time Difference of Arrival (TDOA) and Angle of Arrival (AOA). Range-free localization is further classified into local techniques and hop-based techniques: in local techniques each mobile anchor node is equipped with GPS to determine the location of an unknown node, while in hop-based techniques Distance Vector (DV) routing is used to find the position of landmark announcements. A localization method is characterized by the following factors.
Accuracy: In many WSN applications, accuracy is essential to determine locations; for example,
in military applications the sensor network may be deployed for intrusion detection.
Power: Power is important for computation. Each sensor node has limited power, which is
supplied by a battery.
Cost: Cost is a critical requirement in localization; many localization algorithms aim for
low deployment cost.
Static Nodes: Static nodes are deployed in the environment with the computing and
sensing capability to sense and forward data from source to destination.
Mobile Nodes: The mobile anchor node is also deployed in the wireless environment and it is equipped
In this localization method, the sensor node is fixed at the centre of a circle. A single mobile anchor moves
through the sensing field broadcasting beacon messages, and the sensor node selects appropriate positions
of the anchor node, called beacon points, to construct chords of its communication circle. Three beacon
points are required to construct the communication circle, and the accuracy of localization depends
on the chord length. In this way, the location of an unknown node can be determined using the mobile anchor node.
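The geometry above can be sketched in a few lines: three beacon points lying on the node's communication circle determine its centre, i.e. the sensor's position. This is an illustrative calculation only; function and variable names are our own, not the paper's.

```python
# Illustrative sketch: a sensor's position estimated as the circumcenter of
# three beacon points heard from the mobile anchor on its communication circle.

def circumcenter(p1, p2, p3):
    """Return the centre of the circle passing through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("beacon points are collinear; a third chord is needed")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy)

# Three beacon points on a circle of radius 5 centred at (2, 3):
print(circumcenter((7, 3), (2, 8), (-3, 3)))  # -> (2.0, 3.0)
```

Longer chords (beacon points farther apart) make the linear system better conditioned, which matches the paper's remark that accuracy depends on chord length.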
IV. PATH PLANNING ALGORITHM
Path planning can be classified as static or dynamic. Static path planning designs the movement
trajectory before execution, while dynamic path planning adapts in real time to the distribution of
unknown nodes in a given region of interest (ROI); it includes the Breadth-First (BRF) and Backtracking
Greedy (BTG) algorithms. Static path planning includes the SCAN, HILBERT and S-CURVE methods. SCAN, a
straight-line method, cannot guarantee that the length of each chord exceeds a certain threshold; HILBERT,
a curve method, cannot guarantee that a sensor node obtains three or more beacons to form a communication
circle; and with S-CURVE it is difficult to ensure that each sensor node can form valid chords. The
proposed CIRCLE method guarantees coverage of all four corners of the sensing field by increasing the
diameter of the communication circles.
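The CIRCLE trajectory idea can be sketched as waypoint generation on concentric circles of growing radius; growing the radius beyond half the field width pushes beacon positions toward the corners. Parameter names and the waypoint spacing are our assumptions, not the paper's specification.

```python
import math

# Hypothetical sketch of a concentric-circle anchor trajectory over a square
# sensing field. The anchor broadcasts a beacon at each waypoint.

def circle_trajectory(field_size, step, points_per_circle=36):
    """Yield (x, y) waypoints on concentric circles centred on the field."""
    cx = cy = field_size / 2.0
    # Growing the radius up to the half-diagonal reaches the field's corners.
    max_r = math.hypot(cx, cy)
    r = step
    while r <= max_r:
        for k in range(points_per_circle):
            a = 2 * math.pi * k / points_per_circle
            yield (cx + r * math.cos(a), cy + r * math.sin(a))
        r += step

waypoints = list(circle_trajectory(field_size=100, step=25))
print(len(waypoints))  # 36 waypoints on each of the 2 circles that fit -> 72
```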
The data flow diagram shows steps to determine the localization using path planning mechanism.
V. SIMULATION RESULT
The simulation was implemented on the NS2 platform with IEEE 802.11 as the MAC layer in our
experiments. The simulation parameters and their values are shown in the table.
Figures: throughput graph, PDR graph and packet drop graph.
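The reported metrics follow their conventional definitions; a minimal sketch of how they are computed from trace counts (the numbers below are invented for illustration, not the paper's results):

```python
# Conventional metric definitions used when post-processing an NS2 trace.

def packet_delivery_ratio(received, sent):
    """Fraction of sent packets that were delivered."""
    return received / sent if sent else 0.0

def throughput_bps(received_bytes, duration_s):
    """Delivered bits per second over the simulation interval."""
    return received_bytes * 8 / duration_s

sent, received = 1000, 950                      # invented example counts
print(packet_delivery_ratio(received, sent))    # -> 0.95
print(sent - received, "packets dropped")
print(throughput_bps(received * 512, 100.0))    # 512 B/packet over 100 s
```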
VI. CONCLUSION
In this paper, we propose a circle-based path planning algorithm to determine the location of every
[2]
[3] P. Sangeetha and B. Srinivasan, "Mobile Anchor Based Localization Using PSO and Path Planning
Algorithm in Wireless Sensor Networks", International Journal of Innovative Research and Advanced
Studies, 2015, Vol. 2, Issue 2, pp. 5-10.
[4]
[5] Javad Rezazadeh, Marjan Moradi, Abdul Samad Ismail, Eryk Dutkiewicz, "Superior Path Planning
Mechanism for Mobile-Beacon-Assisted Localization in Wireless Sensor Networks", IEEE Sensors
Journal, 2014, pp. 1-13.
[6] Ms. Prerana Shrivastava, Dr. S. B. Pokle, Dr. S. S. Dorle, "An Energy Efficient Localization Strategy
Using Particle Swarm Optimization in Wireless Sensor Networks", International Journal of Advanced
Engineering and Global Technology, 2014, Vol. 02, Issue 19, pp. 17-22.
[7] Guangjie Han, Chenyu Zhang, Jaime Lloret, Joel J. P. C. Rodrigues, "A Mobile Beacon Assisted
Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks", The Scientific
World Journal, 2014, pp. 1-12.
[8] Chia-Ho Ou, Wei-Lun He, "Path Planning Algorithm for Mobile Anchor-Based Localization in
Wireless Sensor Networks", IEEE Sensors Journal, 2013, Vol. 13, No. 2, pp. 466-475.
[9] Harsha Chenji, Radu Stoleru, "Toward Accurate Mobile Sensor Network Localization", IEEE
Transactions on Mobile Computing, 2013, Vol. 12, No. 6, pp. 1094-1106.
[10] Mansoor-ul-haque, Farrukh Aslam Khan, Mohsin Iftikhar, "Optimized Energy-efficient
Iterative Distributed Localization", IEEE International Conference on Systems, Man, and
Cybernetics, 2013, pp. 1407-1512.
[11] P. K. Singh, Bharat Tripathi, Narendra Pal Singh, "Node Localization in Wireless Sensor Networks",
International Journal of Computer Science and Information Technologies, 2011, Vol. 2(6),
pp. 2568-2572.
[12] Zhang Shaoping, Li Guohui, Wei Wei, Yang Bing, "A Novel Iterative Multilateral Localization
Algorithm for Wireless Sensor Networks", Journal of Networks, 2010, Vol. 5, No. 1, pp. 112-119.
[13] W.-H. Liao, Y.-C. Lee, S. P. Kedia, "Mobile Anchor Positioning for Wireless Sensor Networks", The
Institution of Engineering and Technology, 2010, Vol. 5, Issue 7, pp. 914-921.
[14] Frankie K. W. Chan, H. C. So, W.-K. Ma, "A Novel Subspace Approach for Cooperative Localization
in Wireless Sensor Networks Using Range Measurements", IEEE Transactions on Signal Processing,
2009, Vol. 57, No. 1, pp. 260-269.
[15] Kuang Xing-hong, Shao Hui-he, "Distributed Localization Using Mobile Beacons in Wireless Sensor
Networks", The Journal of China Universities of Posts and Telecommunications, 2007, Vol. 14, pp. 7-12.
I. INTRODUCTION
In recent years, many researchers have been investigating underwater acoustic sensor networks
(UWASNs), which are useful for oceanic environment monitoring, oceanographic data collection, offshore
exploration, assisted navigation, disaster prevention and tactical surveillance. The UWASN channel
has many specific characteristics, such as long propagation delay, low available bandwidth, multi-path
transmission and Doppler spread, which make it different from terrestrial wireless networks.
The common communications architecture for an underwater wireless sensor network is shown in
Figure 1. Sensor nodes in the network may also communicate with a surface station and with autonomous
underwater vehicles deployed in the network. The locations of the sensor nodes are needed to interpret
the sensed data. Radio-frequency communication does not work well underwater, and the well-known use of
GPS is restricted to surface nodes. Hence, the packet exchange between underwater nodes and surface
nodes needed for localization must be carried out using acoustic communications. UWASN acoustic channels
have unique characteristics: long propagation delay, limited bandwidth and multipath interference.
Localization schemes in UWASNs should fulfill desirable qualities such as high accuracy, fast
transmission, and wide coverage of communication between
In a network with the architecture shown in Fig. 2, information generated at sensor nodes is
transmitted hop-by-hop to the sink in a many-to-one pattern. As packets move closer to the sink,
packet collisions increase. Because of the long propagation delay and low available bandwidth
in UWASNs, existing contention-based MAC protocols with handshake mechanisms are not appropriate due to
their high reservation cost, and existing schedule-based MAC protocols are not appropriate due to the long
slot time. The T-Lohi protocol and the ordered carrier sense multiple access (CSMA) protocol work well
in single-hop underwater acoustic networks by discarding the handshake mechanism, but they cannot
achieve high performance in multi-hop networks. This paper proposes a modified low-complexity
algorithm that achieves high performance with multiple anchors in multi-hop networks.
A. Localization Basics
Localization is one of the most important technologies and plays a critical role in many applications,
especially underwater wireless sensor networks. Localization algorithms are classified into three
categories based on the mobility of the sensor nodes: stationary, mobile and hybrid localization
algorithms. Three kinds of sensor nodes are used in an underwater acoustic sensor network:
anchor nodes, unknown nodes and reference nodes. Unknown nodes are responsible for sensing
environmental data, anchor nodes are responsible for localizing unknown nodes, and reference nodes
comprise localized unknown nodes and the initial anchor nodes. Currently, many localization algorithms
have been proposed for underwater acoustic sensor networks. Researchers also classify localization
algorithms into two categories, distributed and centralized, based on where the location of an unknown
node is determined. In a distributed localization algorithm, each underwater sensor node collects
localization information and runs a location estimation algorithm individually. In a centralized
localization algorithm, the location of each unknown node is estimated by a base station
or a sink node.
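As a concrete (and much simplified) illustration of the centralized case, a sink can trilaterate an unknown node from three anchor positions and measured ranges, e.g. obtained from acoustic time-of-arrival. This 2-D sketch is our own illustration, not the paper's algorithm.

```python
# Hypothetical sketch: a sink trilaterates one unknown node from three
# anchors. Subtracting the range equations pairwise yields a linear system.

def trilaterate(anchors, dists):
    (xa, ya), (xb, yb), (xc, yc) = anchors
    da, db, dc = dists
    a1, b1 = 2 * (xb - xa), 2 * (yb - ya)
    c1 = da**2 - db**2 + xb**2 - xa**2 + yb**2 - ya**2
    a2, b2 = 2 * (xc - xa), 2 * (yc - ya)
    c2 = da**2 - dc**2 + xc**2 - xa**2 + yc**2 - ya**2
    det = a1 * b2 - a2 * b1  # non-zero for non-collinear anchors
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0, 0), (10, 0), (0, 10)]
# True node position (3, 4) gives ranges 5, sqrt(65) and sqrt(45):
print(trilaterate(anchors, [5.0, 65**0.5, 45**0.5]))  # approximately (3.0, 4.0)
```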
Collision-tolerant packet scheduling: During a localization period, anchor nodes work independently
and transmit at random times, avoiding any coordination among them. Packets transmitted from different
anchors may collide with each other, but successful receptions still occur, as shown in Fig. 4.
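A rough Monte Carlo illustration of why uncoordinated random transmission can still work (this is our own simplified model, not the paper's analysis): each anchor picks a random transmit time in a period, and a packet succeeds if no other packet overlaps it in time.

```python
import random

# Simplified collision model (an assumption for illustration): anchors draw
# uniform transmit times; anchor 0's packet succeeds if no other packet
# starts within one packet duration of it.

def success_rate(n_anchors, period, pkt_len, trials=20000, seed=1):
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        starts = [random.uniform(0, period) for _ in range(n_anchors)]
        t0 = starts[0]  # observe anchor 0's packet
        if all(abs(t - t0) >= pkt_len for t in starts[1:]):
            ok += 1
    return ok / trials

# 5 anchors, 10 s period, 0.1 s packets: collisions are rare, so most
# localization packets get through without any coordination.
print(round(success_rate(5, 10.0, 0.1), 2))
```

Analytically this model gives roughly (1 − 2·pkt_len/period)^(n−1) ≈ 0.92 for the values above, consistent with the simulation.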
VI. CONCLUSIONS
In this work, the problem of scheduling the localization packets of the anchors in an underwater
sensor network is formulated. The existing method uses a single anchor node to locate a single node
at a time, which introduces communication delay and needs more time to find the locations of all
nodes. To overcome these issues, an efficient MAC protocol is designed to improve network efficiency;
multiple anchor nodes are used to localize multiple nodes, reducing the communication delay by means
of a low-complexity algorithm and a location-aided routing protocol. The proposed low-complexity
algorithm minimizes the duration of the localization task. Moreover, we observed that the system
adjusts the multiple anchor nodes dynamically and achieves high performance compared with different
MAC protocols such as TDMA MAC, BMAC and MAC-MAN (mobile anchor node or sink
I. INTRODUCTION
A thermocouple is a device made of two dissimilar conductors or semiconductors that contact each
other at one or more points. Voltage is produced in the Thermocouple when the temperature of one of
the contact points differs from the temperature of another, which is known as the thermoelectric effect.
It is a major type of temperature sensor used for measurement and control purpose, and also converts
a temperature gradient into electricity. Based on Seebeck's principle, thermocouples can measure only
temperature differences and they need a known reference temperature to yield the absolute readings. The
Seebeck effect describes the voltage or Electromotive Force (EMF) induced by the temperature gradient
along the wire. The change in material EMF with respect to a change in temperature is called the Seebeck
coefficient or thermoelectric sensitivity. This coefficient is usually a non-linear function of temperature.
For small changes in temperature over the length of a conductor, the voltage is approximately linear,
as represented by equation (1), where ΔV is the change in voltage, S is the Seebeck
coefficient, and ΔT is the change in temperature:
ΔV = S · ΔT (1)
Thermocouples require some form of temperature reference to compensate for the cold junctions.
The most common method is to measure the temperature at the reference junction with a direct-reading
temperature sensor then apply this cold-junction temperature measurement to the voltage reading to determine
the temperature measured by the thermocouple. This process is called Cold-Junction Compensation (CJC).
Because the purpose of CJC is to compensate for the known temperature of the cold junction, another
less-common method is forcing the junction from the thermocouple metal to copper metal to a known
temperature, such as 0 °C, by submersing the junction in an ice bath, and then connecting the copper wire
from each junction to a voltage measurement device.
Data acquisition [2] is the process of measuring an electrical or physical phenomenon such
as voltage, current, temperature, pressure, or sound with a computer. PC-based DAQ systems exploit the
processing power, productivity, and display capabilities of industry-standard computers to provide a more
powerful, flexible, and cost-effective measurement solution. When dealing with factors like high voltages,
noisy environments, extremely high or low signals, or simultaneous signal measurement, signal conditioning
is essential for an effective data acquisition system: it maximizes the
accuracy of the system, allows sensors to operate properly, and guarantees safety.
Static and dynamic temperature measurements [1] were performed earlier, and it was found that a time
constant of about 0.01 would be a good choice for use with the digital filter. This experiment was conducted
in order to test the sensitivity of a J-type thermocouple, and also to test its dynamic response to a known
step input. The sensitivity of the thermocouple was found by plotting its voltage vs. the temperature of the
Fig. 1 illustrates the process flow for measuring the thermocouple temperature. The hot-junction
temperature, where the two dissimilar metals contact each other, is first acquired from the sensor using
the DAQ device. The reference cold-junction temperature is then measured. Since this reference temperature
is not 0 °C, cold-junction compensation is performed to avoid signal conditioning
errors. Formulas are used to convert the CJC voltage to a temperature value, with coefficient values
provided by the NIST standard sheet. The obtained temperature is checked for linearity, and high
amplification is applied to get better results.
To determine the temperature at the thermocouple junction, we can start with equation (2), where
VMEAS is the voltage measured by the data acquisition device, and VTC(TTC − Tref) is
the Seebeck voltage created by the difference between TTC (the temperature at the thermocouple junction)
and Tref (the temperature at the reference junction):
VMEAS = VTC(TTC − Tref) (2)
NIST thermocouple reference tables, shown in Table 1, are generated with the reference junction
held at 0 °C.
Table 1. NIST standard table
We can rewrite equation (2) as equation (3), where VTC(TTC) is the voltage measured
by the thermocouple assuming a reference junction temperature of 0 °C, and VTC(Tref) is the voltage that
would be generated by the same thermocouple at the current reference temperature, again assuming a
reference junction of 0 °C:
VMEAS = VTC(TTC) − VTC(Tref) (3)
Rearranging gives equation (4):
VTC(TTC) = VMEAS + VTC(Tref) (4)
In equation (4), the computed voltage of the thermocouple assumes a reference junction of 0 °C.
Therefore, by measuring VMEAS and Tref, and knowing the voltage-to-temperature relationship of the
thermocouple, we can determine the temperature at the primary junction of the thermocouple.
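A minimal sketch of software compensation following equation (4). The constant Seebeck coefficient used here (about 41 µV/°C, roughly a type-K value) is a deliberate simplification of ours; real code would use the NIST polynomial tables, as this paper does.

```python
# Hedged sketch of software cold-junction compensation, Eq. (4):
# V_TC(T_TC) = V_MEAS + V_TC(T_ref), using an assumed *linear* Seebeck model.

S_UV_PER_C = 41.0  # assumed constant Seebeck coefficient, uV/degC

def temp_to_uv(t_c):
    """V_TC(T) relative to a 0 degC reference, linear approximation."""
    return S_UV_PER_C * t_c

def uv_to_temp(v_uv):
    return v_uv / S_UV_PER_C

def compensate(v_meas_uv, t_ref_c):
    v_total = v_meas_uv + temp_to_uv(t_ref_c)  # Eq. (4)
    return uv_to_temp(v_total)

# 3075 uV measured with the reference junction at 25 degC:
print(compensate(3075.0, 25.0))  # -> 100.0 (degC)
```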
There are two techniques for implementing CJC when the reference junction is measured with a
direct-reading sensor: hardware compensation and software compensation. A direct-reading sensor has an
output that depends on the temperature of the measurement point. Semiconductor sensors, thermistors, or
The temperature-to-voltage conversion during cold-junction temperature measurement proceeds as follows.
Initially the cold-junction temperature of the thermocouple is measured and converted into an
equivalent voltage. The reference-temperature-to-CJC-voltage conversion is given by a rational
polynomial of the form
Vcj = V0 + [p1(Tcj − T0) + p2(Tcj − T0)² + p3(Tcj − T0)³ + p4(Tcj − T0)⁴] / [1 + q1(Tcj − T0) + q2(Tcj − T0)² + q3(Tcj − T0)³]
where Tcj is the cold-junction temperature, Vcj is the computed cold-junction voltage, and T0,
V0, pi and qi are coefficients selected for the thermocouple type from the NIST table.
To calculate the CJC voltage and CJC temperature, CJC voltage.vi is used as a sub-VI. The thermocouple
type and CJC channel are given as controls. The DAQmx driver has predefined VIs for creating a channel
and for reading and clearing a task. The sensor connected to channel 0 of the NI 9211 [3] acquires the
input, which is read by DAQmx Read. The value read from the thermocouple is taken as the CJC temperature,
which is converted into the CJC voltage for dynamic analysis.
The reference temperature acquired from the ice bath is converted to the cold-junction compensation (CJC)
voltage. An array holds the T0, V0, p and q coefficients, given as input to the MathScript node
through Index Array; the thermocouple types are specified in the array in order. The thermocouple type and
reference temperature Tcj are also given as controls. The formula is entered in the MathScript node, and
the CJC voltage is obtained as the output in millivolts. The error-in and error-out terminals are provided
to bypass execution if any error occurs.
A sub-VI is developed that takes the thermocouple channel as a control and produces the
thermocouple voltage as output. The measured voltage is added to the CJC compensation voltage, and the
effective voltage is converted into the equivalent temperature. DAQmx is used for creating the channel,
setting the sampling clock, starting the task, reading values, stopping the task and clearing the VI. A
For loop reads 30 values continuously and finally displays a single output value. The array elements from the
The CJC-compensated voltage is passed through a filter circuit to remove noise and is amplified
with a gain of 31.25. The output voltage from the thermocouple is in the range −80 mV to 80 mV; its offset
is shifted to 0–160 mV and then amplified by 31.25 to reach the 0–5 V range:
Amplified voltage = 31.25 × CJC-compensated voltage
The ADS1240 ADC is used with 24-bit resolution in a delta-sigma configuration. The amplified
voltage is applied to the ADC input, and the 0–5 V range is converted into counts (0 to 2²⁴ − 1 = 16777215)
using the formula
ADC count = (input voltage in mV / 5000) × 16777216
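The signal chain above can be checked numerically with the stated figures (offset, gain 31.25, 5 V full scale, 24-bit converter); this is only a sanity-check sketch of the formulas, not the LabVIEW implementation:

```python
# Numeric sketch of the stated signal chain: -80..80 mV is offset to
# 0..160 mV, amplified by 31.25 to 0..5 V, then digitised at 24 bits.

GAIN = 31.25
FULLSCALE_MV = 5000.0
BITS = 24

def adc_count(tc_mv):
    shifted = tc_mv + 80.0      # offset to 0..160 mV
    amplified = shifted * GAIN  # 0..5000 mV
    return int(amplified / FULLSCALE_MV * (2 ** BITS))

print(adc_count(-80.0))  # -> 0 (bottom of range)
print(adc_count(0.0))    # -> 8388608 (mid-scale)
print(adc_count(80.0))   # -> 16777216 (real hardware clamps at 2**24 - 1)
```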
Then the mV-to-temperature conversion from the ADC count is coded. Thermocouple range.vi and mV to
temperature.vi are used as sub-VIs. The controls are the thermocouple range, CJC voltage, thermocouple
mV and error-in status code; the indicators are the CJC voltage, ADC count, linearity range, temperature,
ADC Vin, linearity region and thermocouple measuring range. The linearity range is selected from the
output mV of the thermocouple range, and a string array contains the ranges for the different thermocouple
types. The thermocouple's linearity region is determined by selecting the accurate range for the
obtained mV.
The mV range for each thermocouple type is selected, and within a Case structure the In Range and
Coerce icon checks whether the thermocouple voltage lies within the limits. The mV range varies for
every thermocouple type. Based on the upper and lower limits in the In Range icon, the select terminal
chooses the appropriate mV range for the given thermocouple type and voltage. An error-checking block
indicates if the voltage value is out of range.
III. RESULTS AND DISCUSSION
Static analysis is done for a particular true temperature value; variations in temperature can be
obtained by running the VI repeatedly. Stable-temperature applications requiring only slow changes
can use static analysis, in which linearity is the major parameter. The physical channel and CJC
channel are selected based on the connection of the thermocouple to the NI 9211 hardware. The sampling
rate is set to 10 and 30 readings are obtained for a single table input. The thermocouple type and true
temperature are given as inputs; after specifying all the inputs, Initiate is clicked. The thermocouple
mV obtained is amplified and the ADC count is calculated. The error percentage is calculated from the
difference between the true temperature and the measured temperature. The static analysis of the
thermocouple temperature measurement is shown in Fig. 2.
The waveforms are plotted for true temperature versus thermocouple voltage, amplified voltage,
ADC count and measured temperature, as shown in Fig. 3. The waveforms imply that the static analysis
produces a non-linear variation with temperature; thus non-linearity is the result of the static analysis.
The front panel of the dynamic analysis, shown in Fig. 4, illustrates the temperature calculation over a
range of temperatures. The input parameters are initialized and the starting and ending temperatures
are specified. Samples are collected and processing starts when the start temperature is reached, at
which point the time-constant counter begins. The time constant is the time taken by the thermocouple
to reach 63.2% of the temperature difference between the starting and ending temperatures. The gain
is calculated using the formula ΔT/ΔmV.
The waveforms for the dynamic analysis are shown in Fig. 5: time versus measured temperature and
measured thermocouple voltage are plotted. The gain is increased in the dynamic analysis.
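The 63.2% rule described above can be sketched on synthetic first-order data (the data and step values here are invented for illustration; the paper measures a real thermocouple):

```python
import math

# Sketch of the 63.2% time-constant rule: given sampled temperatures for a
# step from t_start to t_end, tau is the first time the reading crosses
# t_start + 0.632 * (t_end - t_start).

def time_constant(times, temps, t_start, t_end):
    target = t_start + 0.632 * (t_end - t_start)
    for t, y in zip(times, temps):
        if y >= target:
            return t
    return None

# Synthetic first-order response with a true tau of 2 s, stepping 20 -> 100 degC:
tau = 2.0
times = [i * 0.01 for i in range(1000)]
temps = [20 + 80 * (1 - math.exp(-t / tau)) for t in times]
print(time_constant(times, temps, 20, 100))  # close to 2.0
```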
IV. CONCLUSION
The measurement of temperature using the thermocouple included the signal conditioning stages of a
reference temperature sensor (for cold-junction compensation), high amplification and linearization. The
thermocouple voltage (mV) is converted into temperature (°C), which can be used in real-time applications
requiring a fast data acquisition rate. The errors in amplification, ADC count and linearization are
identified and compensated. In static analysis, non-linearity is obtained for variation in temperature;
in dynamic analysis, the gain is increased. Comparison and evaluation of the performance of software- and hardware-based
[2] http://www.ni.com/data-acquisition/usb/
[3] http://www.ni.com/pdf/manuals/373466e.pdf
[4] http://www.ni.com/pdf/manuals/374014a.pdf
[5] www.ni.com
I. INTRODUCTION
In recent years, the main and most pressing issue in low-power VLSI design has been energy/power
dissipation, due to the increasing demand for portable systems and the need to limit
power consumption in VLSI chips. In conventional CMOS circuits, the basic approaches used for reducing
power consumption are reducing the supply voltage, decreasing node capacitances, and minimizing
switching activity with efficient charge-recovery logic. Adiabatic logic works on the principle
of energy-recovery logic and provides a way to reuse the energy stored in load capacitors rather than the
conventional way of discharging the load capacitors to ground and wasting this energy. Power
consumption is the major concern in low-power VLSI design technology.
MOTIVATION
A. Need for low power design
The requirement for low-power design has caused a large paradigm shift in which energy dissipation
has become as essential a consideration as area and performance. Several factors have contributed to this
trend. The need for low-power devices has been growing quickly due to portable devices such
as laptops and mobile phones, and battery-operated devices such as calculators and wrist watches. These
products place great emphasis on minimizing power in order to maximize battery life. Another motive
for low power is associated with high-end products, since the packaging and cooling of such
high-performance, high-density, high-power chips are prohibitively expensive. A further consideration in
low-power design is environmental: as microelectronics products become pervasive in everyday life, their
energy demand will sharply increase. Reducing power consumption therefore reduces the heat generated,
and so reduces the cost of extra cooling systems in homes and offices.
B. Multiplexer
A multiplexer is a device with many inputs and a single output: it selects one input line and forwards
it to the output, allowing a single channel to be shared among multiple sources.
Fig 1: Multiplexer
C. Demultiplexer
Fig 3: Demultiplexer
A demultiplexer is a device with a single input and many outputs, used to connect a single source to
multiple destinations; Figure 2 shows its block diagram. It performs the reverse operation of the
multiplexer, and the two work together to perform the transmission and reception of data in a
communication system; both play an important role in communication systems. Figure (4) shows the
function of the demultiplexer.
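The two blocks' behaviour can be sketched functionally (a hypothetical software model of ours, not the hardware design; a 16:1 mux and 1:16 demux are just these with 16 lines and a 4-bit select):

```python
# Behavioural model of the multiplexer and demultiplexer described above.

def mux(inputs, sel):
    """Route the selected input line to the single output."""
    return inputs[sel]

def demux(value, sel, n_outputs):
    """Route the single input to the selected output line; others stay 0."""
    return [value if i == sel else 0 for i in range(n_outputs)]

data = [0, 1, 1, 0]
y = mux(data, 2)       # transmit side selects line 2
print(y)               # -> 1
print(demux(y, 2, 4))  # receive side restores it -> [0, 0, 1, 0]
```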
Conventional CMOS designs consume considerable energy during switching. The two major sources
of power dissipation in digital CMOS circuits are dynamic power and static power. Dynamic power is
related to changes of logic state, i.e. circuit switching activity, including the power dissipated in
charging and discharging capacitances. Figure (1) shows the CMOS switching process.
During device switching, power dissipation primarily occurs in conventional CMOS circuits as
shown in figure (1). In CMOS logic, half of the power is dissipated in the PMOS network, and during
switching events the stored energy is dissipated while discharging the output load capacitor. A CMOS
NAND gate, consisting of two PMOS and two NMOS devices, is shown in figure (2).
Transmission gate
The CMOS transmission gate (T-gate) is a useful circuit for both analog and digital applications. It
acts as a switch that can operate up to VDD and down to VSS, using the parallel connection of an NMOS
and a PMOS transistor. When the transmission gate is on, it provides a low-resistance connection between
its input and output terminals over the entire input voltage range. The symbol and truth table of the
transmission gate are shown in figure (7). A transmission gate is a kind of MUX structure.
ECRL consists of two cross-coupled PMOS transistors and two N-functional blocks forming the ECRL
adiabatic logic block; both out and its complement are generated. Energy dissipation is reduced to a
large extent in ECRL by performing the precharge and evaluation phases simultaneously, and ECRL
dissipates less energy than other adiabatic logics by eliminating the precharge diodes. It consists of
only two PMOS switches and provides full swing at the output. The basic structure of ECRL is similar to
Differential Cascode Voltage Switch Logic (DCVSL) with differential signaling. Figure (8) shows the ECRL
NAND gate. A major disadvantage of ECRL is the coupling effect: because the two outputs are connected
by the PMOS latch, the complementary outputs can interfere with each other.
IECRL consists of a pair of cross-coupled PMOS devices and two N-functional blocks. In IECRL, the
delay is improved by adding a pair of cross-coupled NMOS devices to the ECRL design. The basic
structure of IECRL is similar to Modified Differential Cascode Voltage Switch Logic (MDCVSL) with
differential signaling; Figure (9) shows the IECRL NAND gate. IECRL is an improved form of ECRL: its
performance is better than ECRL even though its transistor count is higher, the main advantage being
the pair of cross-coupled NMOS devices that improves the performance of the ECRL logic.
Figures (10), (11), (12) and (13) show the 16:1 multiplexer design using conventional CMOS logic,
transmission gate logic, ECRL and IECRL respectively. The circuits using TGL, ECRL and IECRL are
compared with the conventional CMOS-based 16:1 multiplexer in terms of power dissipation.
1:16 Demultiplexer
Figures (14), (15), (16) and (17) show the 1:16 demultiplexer design using conventional CMOS
logic, transmission gate logic, ECRL and IECRL respectively. The circuits using TGL, ECRL and IECRL
are compared with the conventional CMOS-based 1:16 demultiplexer in terms of power dissipation.
Comparative analysis
The simulation results are compared based on the power dissipation of the proposed circuits and
their transistor count with conventional CMOS logic design.
Table 1: Comparison of 16:1 Multiplexer design using different low power techniques
*
Assistant Professor, Department of ECE,
M.Kumaraswamy College of Engineering,
Tamil nadu, INDIA
Abstract Floating point arithmetic is widely used in many areas, especially scientific computation and signal processing. The main objective of this paper is to reduce power consumption, to increase the speed of execution, and to implement a floating point multiplier using sequential processing on reconfigurable hardware. Floating Point (FP) addition, subtraction and multiplication are widely used in a large set of scientific and signal processing computations. In addition, the proposed designs are compliant with the IEEE-754 format and handle overflow, underflow, rounding and various exception conditions. The adder/subtractor and multiplier designs can achieve high accuracy with increased throughput. This approach provides high-accuracy reconfigurable adders and multipliers for floating point arithmetic, and shows how to represent single and double precision floating point architectures in a single architecture using quantum flux circuits for DSP applications.
Keywords Floating point unit, Delay, High Throughput
I. INTRODUCTION
Floating point addition and multiplication are the most frequent floating point operations. Many
scientific problems require floating point arithmetic with a high level of accuracy in their calculations. A floating point number representation can simultaneously provide a large range of numbers and a high degree of precision. The IEEE 754 floating point standard is the most common floating point representation used
in modern microprocessors.
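As an illustration of the standard's layout, a single-precision IEEE 754 value can be decomposed into its sign, biased exponent and fraction fields. This is a minimal sketch; the helper name is ours, not part of the paper:

```python
import struct

def float_fields(x: float):
    """Decompose an IEEE 754 single-precision value into its three fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32-bit pattern
    sign = bits >> 31                 # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF        # 23 bits, with an implicit leading 1
    return sign, exponent, fraction

# 1.0 = +1.0 x 2^(127-127): sign 0, biased exponent 127, fraction 0
print(float_fields(1.0))   # (0, 127, 0)
```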
Efficient use of the chip area and resources of an embedded system poses a great challenge when developing algorithms on embedded platforms for hard real-time applications such as digital signal processing, control systems and so on. As a result, part of a modern microprocessor is often dedicated to hardware for floating point computation. Previously, silicon area constraints limited the complexity of the floating point unit (FPU). Advances in integrated circuit fabrication technology have resulted in smaller feature sizes and areas. It has therefore become possible to implement more sophisticated arithmetic algorithms to achieve higher FPU performance.
Recent advancements in Field Programmable Gate Arrays (FPGAs) have provided many useful techniques and tools for the development of dedicated and reconfigurable hardware employing complex digital circuits at the chip level. Floating point addition and multiplication are the most widely used operations in DSP/math processors, robots, air traffic controllers and digital computers; because of these growing applications, the main emphasis is on implementing the floating point multiplier effectively so that it uses less chip area at a higher clock speed.
II. FORMATS
i. Fixed point Format
A value of a fixed-point data type is essentially an integer that is scaled by a specific factor determined
by the type. For example, the value 1.23 can be represented as 1230 in a fixed-point data type with a scaling factor of 1/1000, and the value 1230000 can be represented as 1230 with a scaling factor of 1000.
Unlike floating-point data types, the scaling factor is the same for all values of the same type, and
does not change during the entire computation.
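The scaling described above can be sketched as follows; the constant 1/1000 matches the example, and the function names are illustrative:

```python
SCALE = 1 / 1000  # fixed scaling factor shared by every value of the type

def to_fixed(x: float) -> int:
    """Store a real value as a scaled integer."""
    return round(x / SCALE)

def from_fixed(raw: int) -> float:
    """Recover the real value from the stored integer."""
    return raw * SCALE

raw = to_fixed(1.23)
print(raw)              # 1230, stored as a plain integer
print(from_fixed(raw))
```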
ii. Floating point format
One of the ways to represent real numbers in binary is the floating point format.
V. CONCLUSION
The floating point calculations consume more power and occupy larger area due to high dynamic
range. Using fused floating point techniques the area and power have been reduced. For that reason fused
FMA and FAS concept used in FFT butterfly radix-2 calculation. Based on the data flow analysis, the
proposed fused floating-point adder can be split into three pipeline stages, which increases the throughput.
The proposed floating point arithmetic unit can be achieved by adopting the intermediate gating threshold
voltage method.
I. INTRODUCTION
Addition is a fundamental operation for any digital system, digital signal processing (DSP) system or control system. The fast and accurate operation of a digital system is greatly influenced by the performance of its resident adders. Adders are also a very important component in digital systems because of their extensive use in basic digital operations such as subtraction, multiplication and division. Hence, improving the performance of the digital adder would greatly speed up the execution of binary operations inside a circuit containing those blocks. The performance of a digital circuit block is gauged by analyzing its power dissipation, layout area and operating speed.
The Carry Select Adder (CSA) provides a compromise between the small-area but long-delay Ripple Carry Adder (RCA) and the large-area but short-delay Carry Look-Ahead Adder (CLA) [1]. In mobile electronics, reducing area and power consumption are key factors in increasing portability and battery life. Even in servers and desktop computers, power consumption is a major design constraint. The design of area- and power-efficient high-speed data path logic systems is one of the most substantial areas of research in VLSI system design. In digital adders, the speed of addition is limited by the time required to propagate a carry through the adder. The sum for each bit position in an elementary adder is generated sequentially only after the previous bit position has been summed and a carry propagated into the next position [3]. Among different types of adders, the CSA is intermediate regarding speed and area [2].
VLSI integer adders find applications in Arithmetic and Logic Units (ALUs), microprocessors and memory addressing units. The speed of the adder frequently decides the minimum clock period in a microprocessor. Parallel Prefix adders (PPA) are a family of adders derived from the common carry look-ahead adders, and are attractive primarily because they are fast in comparison with ripple carry adders.
These adders are well suited for wide word lengths. PPA circuits use a tree network to reduce the latency to O(log2 n), where n represents the number of bits. A three-stage process is generally involved in the construction of a PPA. The first step is the creation of the generate, kill (complementary) and propagate signals for all the input operand bits.
The dot operator is applied on two pairs of bits (g_i, p_i) and (g_j, p_j). These bits represent the generate and propagate signals used by the addition. The output of the operator is a new pair of bits, which is once again combined using a dot operator or semi-dot operator with another pair of bits. This repeated use of the dot and semi-dot operators creates a prefix tree network which ultimately ends in the generation of all carry signals. In the final step, the sum bits of the adder are generated from the propagate signals of the operand bits and the preceding stage's carry bit using an XOR gate. The semi-dot operator appears as the last computation node in each column of the prefix graph structure, where it is essential to compute only the generate term, whose value is the carry generated from that bit to the succeeding bit.
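The (g, p) pairs and the dot operator can be sketched behaviourally as follows. This is a hedged illustration: a real parallel prefix adder evaluates the scan as a log-depth tree, while this sketch combines the pairs serially, and the helper names are ours:

```python
def dot(left, right):
    """Dot operator on (generate, propagate) pairs; 'left' covers the more
    significant bits: (g1,p1) . (g2,p2) = (g1 | p1&g2, p1&p2)."""
    g1, p1 = left
    g2, p2 = right
    return (g1 | (p1 & g2), p1 & p2)

def prefix_add(a_bits, b_bits, cin=0):
    """Add little-endian bit lists via a prefix scan of the dot operator."""
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]  # bit-level g, p
    carries, acc = [cin], None
    for pair in gp:
        acc = pair if acc is None else dot(pair, acc)
        # carry into the next bit: group generate, or cin through group propagate
        carries.append(acc[0] | (acc[1] & cin))
    # final step: sum bit = propagate XOR incoming carry
    sums = [p ^ c for (_, p), c in zip(gp, carries)]
    return sums, carries[-1]

to_bits = lambda x, n: [(x >> i) & 1 for i in range(n)]
from_bits = lambda bits: sum(b << i for i, b in enumerate(bits))

s, cout = prefix_add(to_bits(13, 8), to_bits(7, 8))
print(from_bits(s), cout)  # 20 0
```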
II. REQUIREMENTS
A. Design
We propose a high-speed Carry Select Adder obtained by replacing the Ripple Carry Adder with a parallel prefix adder. Adders are the basic building blocks in digital integrated circuit designs. The Ripple Carry Adder (RCA) is usually preferred for the addition of two multi-bit numbers, as RCAs offer the fastest design time among all types of adders. However, RCAs are the slowest adders, since every full adder must wait until the carry is generated by the previous full adder. On the other hand, Carry Look-Ahead (CLA) adders are faster, but they require more area. The Carry Select Adder is a compromise between the RCA and CLA in terms of area and delay. The CSLA is designed using dual RCAs; owing to this arrangement, both area and delay are of concern, and it is clear that there is scope for reducing the delay in such an arrangement. In this research, we have implemented the CSLA with parallel prefix adders.
Parallel prefix adders are tree-based structures and are preferred for speeding up binary additions. This work estimates the performance of the proposed design in terms of logic and routing delay. The experimental results show that the CSLA with a parallel prefix adder is faster and more area efficient than the conventional modified CSLA.
B. Functionality
In addition to the final deadline, each section of the project was given separate deadlines to ensure
each design group was making sufficient progress throughout the semester. The first deadline required us
to turn in the ADD, OR, PASS A, 8:1 MUX functions, as well as an arbitrary function that we chose on
our own, and the second design review required the ADD, SUB and SHIFT functions, the ALU, in/out connectivity, and the registers to be working. Since we had already finished those parts previously, the final report does not cover those individual components, but it does require that our ALU be able to complete each function and demonstrate its correctness.
The total list of functions that our ALU must complete is listed in Table 1.
Table 1. Required ALU functions
A computer adds from right to left, just as we do by hand. Fig. 1 shows, step by step, what happens in the parallel binary adder.
D. Specification
Each of the three metrics has a specifically stated evaluation method. Active power is measured for one computation per cycle at the highest frequency achievable by the design for a specific series of inputs, which PICo will supply at the second design review. The delay is the worst-case access delay. The area is the sum of the widths of the transistors used in the design.
In addition, our design is assumed to interface with pads that connect to the outside world, with all inputs valid 0.5 FO4 delay before the rising edge of the clock and held for 1 FO4 delay after the rising edge of the clock. We therefore assumed that the clock is an ideal signal driven through a static CMOS buffer.
III. DESIGN
During our design process, we encountered several design decisions that we had to make to reduce our overall metric. We made our original designs for each sub-circuit when they were due for the design reviews; however, we did not take metric decisions into account then.
When it came time to reduce the overall delay, power, and cost of our design, we went to each individual sub-circuit and evaluated how we could reduce its specifications, in an effort to reduce the overall metric.
The most important decision in our digital signal processor was choosing the adder, as its delay would be significantly greater than that of any other function and would therefore be the determining factor in our maximum speed. In all cases, the adder was converted to an adder/subtractor by adding an inverter, a 2:1 multiplexer, and a select line. This line was 0 for add and 1 for subtract. The same line was also the carry-in for the entire adder, which gave a 2's complement version of B when subtract was selected.
1) Ripple Carry Adder: Originally we used a ripple-carry adder, which gave us a large delay of 11 ns but was simple to implement by chaining together full adders. The full adders were designed using the mirror adder pattern of Fig. 3 rather than the full static CMOS design. The large delay was a result of each full adder having to wait for the carry bit calculated by the previous full adder; as a result, a 16-bit number would take a long time to fully calculate. Because of the large delay, we searched for faster adders to increase speed, as in Fig. 2.
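The chained structure can be modeled at the logic level as below. This is a behavioural sketch, not the transistor-level mirror adder of the design itself:

```python
def full_adder(a, b, cin):
    """Logic-level full adder: sum and carry-out from two bits and a carry-in."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(a_bits, b_bits, cin=0):
    """Chain full adders; each stage must wait for the previous carry,
    which is what makes the 16-bit delay add up."""
    sums, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return sums, carry

to_bits = lambda x, n: [(x >> i) & 1 for i in range(n)]
from_bits = lambda bits: sum(b << i for i, b in enumerate(bits))

s, cout = ripple_carry_add(to_bits(40000, 16), to_bits(12345, 16))
print(from_bits(s))  # 52345
```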
2) Carry Look-Ahead Adder: Carry look-ahead adders were designed to reduce the overall computation time by using propagate and generate signals for each bit position, based on whether a carry is propagated through to the next bit.
The carry look-ahead adder (CLA) solves the carry delay problem by calculating the carry signals in advance, based on the input signals. It exploits the fact that a carry signal will be generated in two cases: (1) when both bits ai and bi are 1, or (2) when one of the two bits is 1 and the carry-in is 1.
This scheme of adding from the least significant bit while propagating the carry ahead reduces the overall delay, but it is more complex than the ripple-carry adder and also uses more transistors overall. Due to its complexity and size, as well as the possibility of a Manchester adder (see Fig. 4), we decided not to use a carry look-ahead adder.
3) Carry Select Adder: In the carry select adder, there are two full adders, each of which takes a different preset carry-in bit. The sums and carry-out bits that are produced are then selected by the carry-out from the previous stage.
One of the earliest logarithmic-time adder designs is based on the conditional-sum addition algorithm. In this scheme, blocks of bits are added in two ways, assuming an incoming carry of 0 or of 1, with the correct outputs selected later as each block's true carry-in becomes known. This is one of the speed-up techniques used to reduce the latency of carry propagation seen in the ripple-carry adder.
Basically, the adder computes the sum both with and without a carry from the previous stage, and then uses a multiplexer to select the correct sum, depending on whether or not there was a carry. The basic design of the carry select adder is shown in Fig. 5.
The carry select adder was more efficient than the ripple-carry adder, with a delay of 8 ns. It was more difficult to implement, but since we had already finished our 2:1 multiplexers, it wasn't too difficult. The major problem with the carry select adder was its size, well over double that of the ripple carry adder. This was too large a price to pay in cost for only a 3 ns improvement in delay.
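The add-both-ways-and-select idea can be sketched as follows; the block width and helper names are our own choices for illustration:

```python
def ripple(a, b, cin):
    """Plain ripple addition of equal-length little-endian bit lists."""
    sums, carry = [], cin
    for x, y in zip(a, b):
        sums.append(x ^ y ^ carry)
        carry = (x & y) | (carry & (x ^ y))
    return sums, carry

def carry_select_add(a_bits, b_bits, block=4):
    """Each block is summed twice (carry-in 0 and 1) in parallel;
    a 2:1 mux then picks the correct result once the true carry arrives."""
    sums, carry = [], 0
    for i in range(0, len(a_bits), block):
        a, b = a_bits[i:i + block], b_bits[i:i + block]
        s0, c0 = ripple(a, b, 0)          # speculative: carry-in 0
        s1, c1 = ripple(a, b, 1)          # speculative: carry-in 1
        sums += s1 if carry else s0       # mux select on the real carry
        carry = c1 if carry else c0
    return sums, carry

to_bits = lambda x, n: [(x >> i) & 1 for i in range(n)]
from_bits = lambda bits: sum(b << i for i, b in enumerate(bits))

s, cout = carry_select_add(to_bits(40000, 16), to_bits(12345, 16))
print(from_bits(s))  # 52345
```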
4) Manchester Carry Adder: The Manchester carry chain is a variation of the carry look-ahead adder that uses shared transistor logic to lower the overall transistor count. The Manchester carry adder consists of cascaded Manchester carry chains, broken down to reduce the number of series propagate transistors, resulting in a large reduction in delay as the number of transistors in series is reduced. As with the carry look-ahead adder, it was too complex to be used in this design, as shown in Fig. 6.
C. 8:1 Multiplexer
For our 8:1 multiplexer, we originally used four 2:1 multiplexers combined with a 4:1 multiplexer, which worked well. Our 4:1 multiplexers were created out of two 2:1 multiplexers, combined with another 2:1 multiplexer to select among all four inputs, as shown in Fig. 8.
However, we realized that we could implement the same function using two 4:1 multiplexers combined with a 2:1 multiplexer, which would meet the requirement as well while offering a better delay with fewer transistors.
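The composition of the 8:1 multiplexer out of smaller muxes can be sketched at the logic level, a hedged behavioural model rather than the transistor design:

```python
def mux2(a, b, s):
    """2:1 multiplexer: output a when s=0, b when s=1."""
    return b if s else a

def mux4(d, s1, s0):
    """4:1 mux built from three 2:1 muxes."""
    return mux2(mux2(d[0], d[1], s0), mux2(d[2], d[3], s0), s1)

def mux8(d, s2, s1, s0):
    """8:1 mux: two 4:1 muxes whose outputs feed a final 2:1 mux."""
    return mux2(mux4(d[0:4], s1, s0), mux4(d[4:8], s1, s0), s2)

inputs = list(range(8))
print([mux8(inputs, (i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(8)])
# [0, 1, 2, 3, 4, 5, 6, 7]
```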
D. Parallel Prefix Adder
The Parallel Prefix Adder (PPA) is very useful in today's technology because of its implementation in Very Large Scale Integration (VLSI) chips, which rely heavily on fast and reliable arithmetic computation. There are many types of PPA, such as Brent-Kung, Kogge-Stone, Ladner-Fischer, Han-Carlson and Knowles.
For the purpose of this research, only the Han-Carlson adder will be investigated against the other types of adders.
The design file of Fig. 9 has to be analyzed, synthesized and compiled before it can be simulated. Simulation results in this project come in the form of a Register Transfer Level (RTL) diagram, a functional vector waveform outcome, and a classic timing analysis. The RTL design can be obtained by using the RTL viewer based on the netlist viewer. The functional vector waveform outcome is produced by selecting random bit values and adding them up to produce the sum and carry bits. The timing analysis can be obtained by viewing the
V. ACKNOWLEDGEMENTS
Our thanks to M.Kumaraswamy College of Engineering for offering us the opportunity to do this wonderful project, and to Dr. V. Kavitha for her guidance in doing the survey.
VI. REFERENCES
[1] Bender, Ryan (April 17, 2000). A Simulator for Digital Circuits. Massachusetts Institute of Technology. Retrieved April 28, 2008 from http://mitpress.mit.edu/sicp/full_text/sicp/book/node64.html
[2] Alan, Elay (2007). Hierarchal Schematics and Simulation Within Cadence. University of California at Berkeley. Retrieved April 28, 2008 from http://bwrc.eecs.berkeley.edu/Classes/ICDesign/EE141_f07/CadenceLabs/hierarchy/hierarchy.htm
[3] Lin, Charles (2003). Half Adders, Full Adders, Ripple Carry Adders. University of Maryland. Retrieved April 28, 2008 from http://www.cs.umd.edu/class/sum2003/cmsc311/Note/Comb/adder.html
[4] Mlynek, D. Design of VLSI Systems. EPFL. Retrieved April 28, 2008 from http://lsiwww.epfl.ch/LSI2001/teaching/webcourse/ch06/ch06.html
[5] Lie, Sean (2002). Carry Select Adder Details. Retrieved April 28, 2008 from http://www.slie.ca/projects/6.371/webpage/cryseladderdetails.html
I. INTRODUCTION
Software testing is an essential phase of software engineering, used to detect errors as early as possible, to ensure that changes to existing software do not break it, and to determine the quality of the software product. The main myth is that good programmers write code without bugs.
The phases in a tester's mental life can be categorised into five phases:
Phase 0 (Debugging Oriented)
Phase 1 (Demonstration Oriented)
Phase 2 (Destruction Oriented)
Phase 3 (Evaluation Oriented)
Phase 4 (Prevention Oriented)
A test case is a step-by-step description of the actions we perform during testing. A test suite is a collection of test cases; it is the way in which we group test cases, typically based on the module-wise structure, along with their features.
The basic idea of test prioritization is to order test cases based on some criteria; we can prioritize based on criteria such as impact of failure, cost to fix, etc., using methods like the cosine methodology, greedy algorithms, prioritization metrics and efficiency measures.
The goals of prioritization are:
To increase the rate of fault detection.
To increase the coverage of code.
To increase confidence in the reliability of the system.
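For instance, the greedy prioritization mentioned above (the "additional coverage" variant) can be sketched as follows; the test names and statement sets are hypothetical:

```python
def prioritize(test_coverage):
    """Greedy 'additional' strategy: repeatedly pick the test covering the most
    not-yet-covered statements, which tends to raise the rate of fault detection."""
    remaining = dict(test_coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not (remaining[best] - covered):   # nothing new left: keep stable order
            order += sorted(remaining)
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

# hypothetical suite: test name -> set of covered statement ids
suite = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2, 3, 4, 5},
    "t4": {6},
}
print(prioritize(suite))  # ['t3', 't4', 't1', 't2']
```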
The test case minimization technique is used to find and remove redundant test cases from the test suite. Test cases become redundant when their input/output relation is no longer meaningful due to changes in the program, or when their structure is no longer in conformity with the software coverage. It is yet another method for selecting tests for regression testing.
Regression testing is used to verify that changes work correctly and meet the specified requirements. It is executed after defect fixes in the software or its environment. Whenever defects are fixed, a set of test cases needs to be rerun to verify whether the defect fixes have side effects. Rerunning all test cases in the test suite may require an unacceptable amount of time; minimizing the test cases overcomes this difficulty.
Data flow testing is based on selecting paths through the program's control flow in order to explore sequences of events related to the status of data, variables or objects. It focuses on the points at which a variable receives a value and the points at which that value is used. Each link is annotated with symbols like d, k, u, c, p, or sequences of symbols like dd, du, ddd, etc., that denote sequences of operations. The data object states and usages are: Defined (d), Killed (k) and Used (u).
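The two-letter sequences above can be scanned mechanically for the classic data-flow anomalies; this is a small sketch over an action string, with the anomaly descriptions paraphrased rather than quoted from the paper:

```python
# Two-letter data-flow anomalies for one variable's action sequence
# (d = defined, k = killed, u = used).
ANOMALIES = {
    "dd": "defined twice without use (suspicious)",
    "dk": "defined then killed without use (probable bug)",
    "ku": "killed then used (serious defect)",
    "kk": "killed twice (suspicious)",
}

def find_anomalies(sequence: str):
    """Scan consecutive action pairs and report the anomalous ones."""
    return [(i, pair, ANOMALIES[pair])
            for i in range(len(sequence) - 1)
            if (pair := sequence[i:i + 2]) in ANOMALIES]

print(find_anomalies("ddukdu"))  # flags the leading 'dd' at position 0
```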
Fig. 3.1 describes the system architecture of the proposed system. The input is given as code and the output is in the form of data-flow test case generation; the data flow testing techniques described above are used in the process.
First, the Java code is parsed in two steps: the input is split into tokens, and then the hierarchical structure of the input is found. The result is then converted into a control flow graph (CFG).
JUnit test classes that contain one or more test methods can be run individually, or a collection of JUnit test cases can be run as a unit. JUnit is designed to run tests as sequences invoked through test suites, which in turn invoke test classes.
The JUnit framework allowed us to achieve the following objectives:
Coverage explorer
Clover dashboard
Test contribution
All of these reports can also be obtained using Clover views, through which the data flow testing techniques are tested.
IV. CONCLUSION
The proposed tool has been designed and developed in Java. Its purpose is to assist the tester in testing code in an efficient manner: test cases are generated, tested, and reported on by the tool. The JUnit tool helps us see the report in a graphical form, which aids easy understanding.
I. INTRODUCTION
In the uplink communication system, the base station (BS) receives low intensity signals from
cell-edge users and signals from users at the edge of adjacent cells simultaneously. In the downlink
communication system, the user receives signals from the BS in its own cell and signals from the BSs
at the adjacent cells with similar power. The received signals from other cells act as interference and cause performance degradation. In this case, both the capacity and the data rate are reduced by the inter-cell interference (ICI) [1]. In the past, the fractional frequency reuse (FFR) scheme, a simple ICI reduction technique, has been used to achieve the required performance in interference-limited environments.
Since the FFR scheme increases the performance at the cell edge but degrades the overall cell throughput, a coordinated system was proposed to overcome this weakness. Techniques for ICI mitigation and performance enhancement by sharing the full channel state information (CSI) and transmit data have also been studied in [2]. However, these techniques are difficult to implement in a practical communication system because of the large amount of information to be shared between BSs. Instead of the impractical scenario that requires full CSI and transmit data sharing across the whole network, a clustering algorithm has been applied to practical communication systems by configuring clusters that share full CSI between a limited number of cells. Clustering algorithms are classified into two types: static and dynamic. A dynamic clustering algorithm to avoid ICI was developed whose objective is that the overall network suffers minimum performance degradation while the performance of the cell-edge user is improved. A clustering algorithm for sum-rate maximization using greedy search was proposed to improve the sum rate without guaranteeing the cell-edge user's data rate. However, when the whole network is large, the complexity of the algorithm increases rapidly, and if the complexity is large the processing cannot adapt to the changes of the channels [3]. The purpose of coordinated communication is to minimize the inter-cell interference to cell-edge users and to improve their performance. When the clusters are not properly configured, the performance of the cell-edge users will be further degraded. Even though the existing algorithm improves the overall data rate, it does not consider the goal of coordinated communication: improving the performance of cell-edge users.
EXISTING SYSTEM
In the uplink communication system, the base station (BS) receives low-intensity signals from cell-edge users and signals from users at the edge of adjacent cells simultaneously. In the downlink communication system, the user receives signals from the BS in its own cell and signals from the BSs of the adjacent cells with similar power. In the past, the fractional frequency reuse (FFR) scheme, which is a simple ICI reduction
FLOW CHART
RESULT
We have proposed novel dynamic cell clustering algorithms for maximizing the coordination gain
in the uplink coordinated system. The MAX-CG clustering algorithm maximizes the coordination gain
and improves the average user rate. Simulation and analytical results show that the complexity of the
MAX-CG clustering algorithm is much less than that of the FSCA. The IW clustering algorithm reduces
the complexity of the MAX-CG clustering algorithm and uses the IW to supplement the disadvantage of
[2] H. Zhang and H. Dai, Cochannel interference mitigation and cooperative processing in downlink multicell multiuser MIMO networks, EURASIP J. on Wireless Communications and Networking, July 2004.
[3]
[4] S. Kaviani and W. A. Krzymien, Sum rate maximization of MIMO broadcast channels with coordination of base stations, in Proc. IEEE WCNC, 2008.
[5] J. Zhang, R. Chen, J. G. Andrews, A. Ghosh, and R. W. Heath, Networked MIMO with clustered linear precoding, IEEE Trans. Wireless Communications, vol. 8, no. 4, Apr. 2009.
[6]
[7] B. O. Lee, H. W. Je, I. Sohn, O. Shin, and K. B. Lee, Interference-aware decentralized precoding for multicell MIMO TDD systems, in Proc. IEEE GLOBECOM, 2008.
[8] SEECH: Secure and Energy Efficient Centralized Routing Protocol for Hierarchical WSN, International Journal of Engineering Research and Development, e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com, Volume 2, August 2012.
Abstract With the restructuring of power systems and the shifting trend towards distributed and dispersed generation, the issue of power quality is taking on newer dimensions. The present research identifies the prominent concerns in this area and recommends measures that can enhance the quality of power. Voltage sag is a common and undesirable power quality phenomenon in distribution systems which puts sensitive loads at risk. An effective solution to mitigate this phenomenon is to use dynamic voltage restorers (DVRs) and consequently protect sensitive loads. In addition, different voltage injection schemes for DVRs are explored to inject minimum energy for a given apparent power of the DVR. The performance of the proposed DVR is examined with different control strategies, such as conventional Proportional and Integral (PI) control and Synchronous Reference Frame (SRF) theory based PI control. The proposed reduced-rating DVR with the SRF theory based PI controller offers an economic solution for voltage sag mitigation. Simulations are carried out in MATLAB/Simulink to analyze the proposed method.
Index Terms Dynamic Voltage Restorer, Voltage Sag, PI Controller, SRF theory based PI Controller.
I. INTRODUCTION
Power Quality and reliability in distribution system have been attracting an increasing interest
in modern times and have become an area of concern for modern industrial and commercial applications.
The introduction of sophisticated manufacturing systems, industrial drives and precision electronic equipment demands greater quality and reliability of power supply in distribution networks than ever before. Power quality problems encompass a wide range of phenomena: voltage sag/swell, flicker, harmonic distortion, impulse transients and interruptions are a prominent few [1]. These disturbances are responsible for problems ranging from malfunctions or errors to plant shutdown and loss of manufacturing capability. Among power quality problems, voltage sag is the most frequently occurring one; therefore sag is the most important power quality problem in the power distribution system.
Voltage sag, or voltage dip, is defined by IEEE 1159 as a decrease in the RMS voltage level to 10%-90% of nominal, at the power frequency, for durations of half a cycle to one minute. The IEC (International Electro-technical Commission) terminology for voltage sag is dip; the IEC defines a voltage dip as a sudden reduction of the voltage at a point in the electrical system, followed by voltage recovery after a short period, from a cycle to a few seconds. Voltage sags are usually associated with system faults, but they can also be generated by the energisation of heavy loads or the starting of large motors, which can draw 6 to 10 times their full-load current during starting. There are two types of voltage sag which can occur on any transmission line: balanced and unbalanced, also known as symmetrical and asymmetrical voltage sag respectively. Most faults that occur on power systems are not balanced three-phase faults but unbalanced faults. In the analysis of a power system under fault conditions, it is necessary to distinguish between the types of fault to ensure the best possible results in the analysis.
Unsymmetrical voltage sag
Single phase voltage sag
Two phase voltage sag
Symmetrical voltage sag
Three phase voltage sag
The DVR operation for the compensation of a sag in the supply voltage is shown in Fig. 3. Before the sag, the load voltage and current are represented as VL (presag) and Isa, as shown in Fig. 3. After the sag event, the terminal voltage (Vta) becomes lower in magnitude and lags the presag voltage by some angle. The DVR injects a compensating voltage (VCa) to maintain the load voltage (VL) at the rated magnitude. VCa has two components, VCad and VCaq. The voltage in phase with the current (VCad) is required to regulate the dc bus voltage and to cover the power loss in the VSI of the DVR and the injection transformer [5]. The voltage in quadrature with the current (VCaq) is required to regulate the load voltage (VL) at constant magnitude.
III. CONTROL OF DVR
The efficiency of the DVR depends on the performance of the control technique involved in the switching of the inverters. Hence different control techniques, such as the PI controller and the SRF theory based PI controller, were used here. Based on a comparison between the performances of these controllers in controlling the switching of the PWM inverter, the optimum controller that improves the performance of the DVR is suggested.
A. PI CONTROLLER
A PI controller's output signal is directly proportional to a linear combination of the measured
actuating error signal and its time integral.
A proportional-integral (PI) controller, shown in Fig. 4, drives the plant to be controlled with a weighted
sum of the error (the difference between the actual sensed output and the desired set-point) and the integral of that
value. An advantage of a proportional-plus-integral controller is that its integral term drives the steady-state error to zero.
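The behaviour described above can be sketched as a discrete-time PI loop driving a simple first-order plant (a generic illustration, not the authors' MATLAB model; the plant model, gains and sample time here are arbitrary choices):

```python
def make_pi_controller(kp, ki, dt):
    """Return a discrete PI controller: u = kp*e + ki * integral(e)."""
    state = {"integral": 0.0}
    def step(setpoint, measured):
        error = setpoint - measured           # actuating error signal
        state["integral"] += error * dt       # accumulate the integral term
        return kp * error + ki * state["integral"]
    return step

# Drive a crude first-order plant toward a 300 V set-point
pi = make_pi_controller(kp=0.5, ki=0.35, dt=0.001)
v = 0.0
for _ in range(50000):
    u = pi(300.0, v)
    v += (u - v) * 0.001                      # simple first-order plant model
```

With these gains the loop settles at the 300 V set-point, mirroring the steady-state-error elimination provided by the integral term.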
Fig.4. PI Controller
Fig. 5 shows the SRF control algorithm, which is able to detect different types of power quality
problems without error and introduces the appropriate voltage component to correct instantly any
deformity in the terminal voltage, keeping the load voltage balanced and constant at the nominal value [12],
[13]. This is a closed-loop system which needs the DC-link voltage of the DVR and the amplitude of the load voltage to
produce the direct-axis and quadrature-axis voltages. When the load voltage drops by more than 10% of its reference
value, an error signal is generated by the DVR controller to generate the PWM waveform for the six-pulse
IGBT bridge.
SRF theory is used for the control of the DVR. The voltages at the PCC are transformed to the rotating reference
frame using the abc-dq0 (Park) conversion as,
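A conventional amplitude-invariant form of this transformation (a sketch in standard notation, since the paper's own equation image is not reproduced; θ is the synchronously rotating reference angle) is:

```latex
\begin{bmatrix} v_d \\ v_q \\ v_0 \end{bmatrix}
= \frac{2}{3}
\begin{bmatrix}
\cos\theta & \cos\!\left(\theta-\frac{2\pi}{3}\right) & \cos\!\left(\theta+\frac{2\pi}{3}\right) \\
-\sin\theta & -\sin\!\left(\theta-\frac{2\pi}{3}\right) & -\sin\!\left(\theta+\frac{2\pi}{3}\right) \\
\frac{1}{2} & \frac{1}{2} & \frac{1}{2}
\end{bmatrix}
\begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}
```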
The harmonics and the oscillatory components are excluded using low pass filters. The components
of voltages in d-axis and q-axis are,
Similarly, a second PI controller is used to regulate the amplitude of the load voltage. The
amplitude of the load voltage at the point of common coupling is computed from the ac voltages (VLa, VLb,
VLc) as,
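A commonly used expression for this amplitude in SRF-based DVR control (assumed here, as the original equation is not reproduced) is:

```latex
V_L = \sqrt{\tfrac{2}{3}\left(v_{La}^{2} + v_{Lb}^{2} + v_{Lc}^{2}\right)}
```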
The amplitude of the load voltage (VL) is compared with the reference amplitude (VL*), and the
output of the PI controller is taken as the reactive component of voltage (Vqr) for regulation of the load
voltage; it is added to the dc component of Vq to generate Vq*. The reference q-axis load voltage is therefore
obtained as,
The resultant voltages (Vd*, Vq*, V0) are then transformed back to the a-b-c frame using the inverse Park
transformation as,
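In standard notation, the inverse (reverse Park) transformation can be written as (a sketch; θ is the reference-frame angle):

```latex
\begin{bmatrix} v_a^{*} \\ v_b^{*} \\ v_c^{*} \end{bmatrix}
=
\begin{bmatrix}
\cos\theta & -\sin\theta & 1 \\
\cos\!\left(\theta-\frac{2\pi}{3}\right) & -\sin\!\left(\theta-\frac{2\pi}{3}\right) & 1 \\
\cos\!\left(\theta+\frac{2\pi}{3}\right) & -\sin\!\left(\theta+\frac{2\pi}{3}\right) & 1
\end{bmatrix}
\begin{bmatrix} v_d^{*} \\ v_q^{*} \\ v_0 \end{bmatrix}
```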
The reference load voltages and the sensed load voltages are used in a PWM generator to produce gating
pulses for the switches.
IV. PROPOSED COMPENSATION STRATEGY
In a three-phase distribution system, when a voltage sag occurs, the fuel-cell-based DVR needs
to provide the voltage required to compensate it. The voltage Vinj is injected such that the load voltage Vload
is constant in magnitude and undistorted, even though the supply voltage Vs is not constant in magnitude or
is distorted.
Fig. 6 shows the phasor diagram for different voltage injection schemes of the DVR. VL(presag)
is the voltage across the critical load prior to the voltage sag. During the voltage sag, the load voltage is
reduced to VL(sag) with some phase lag angle. Now the DVR needs to inject a voltage such that the
load voltage magnitude is maintained at the pre-sag condition. Based on the phase angle of the load voltage,
the voltage injected by the DVR can be realized in four ways. Vinj1 represents the voltage injected by the
DVR in phase with VL(sag). With the injection of Vinj2, the load voltage magnitude remains the
same but leads VL(sag) by a small angle. With Vinj3, the load voltage holds the same phase as in the
pre-sag condition.
Fig. 6. Phasor diagram for different voltage injection schemes of the DVR
Vinj4 is the condition where the injected voltage is in quadrature with the current; this injection
involves no active power. On assessment of these four voltage injection schemes, the injection of
Vinj1 achieves the minimum possible rating of the converter. The sinusoidal signal is phase-modulated by
means of the angle, and the modulated three-phase voltages are given by,
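Writing the modulation angle as δ (a label introduced here for illustration) and the rated amplitude as V_m, the modulated three-phase reference voltages take the standard form:

```latex
v_a^{*} = V_m \sin(\omega t + \delta), \qquad
v_b^{*} = V_m \sin\!\left(\omega t + \delta - \tfrac{2\pi}{3}\right), \qquad
v_c^{*} = V_m \sin\!\left(\omega t + \delta + \tfrac{2\pi}{3}\right)
```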
DC voltage of DVR: 300 V
DC bus voltage PI controller: Kp1 = 0.5, Ki1 = 0.35
AC load voltage PI controller: Kp2 = 0.5, Ki2 = 0.35
PWM switching frequency: 10 kHz
Series transformer: 10 kVA, 200 V / 300 V
REFERENCES
[1] M. H. J. Bollen, Understanding Power Quality Problems: Voltage Sags and Interruptions, New York, NY, USA: IEEE Press, 2000.
[2] A. Ghosh and G. Ledwich, "Compensation of Distribution System Voltage using DVR," IEEE Trans. Power Del., vol. 17, no. 4, pp. 1030-1036, October 2002.
[3] H. Igarashi and H. Akagi, "System configurations and operating performance of a dynamic voltage restorer," IEEE Trans. Ind. Appl., vol. 123-D, no. 9, pp. 1021-1028, 2003.
[4] A. Ghosh, A. K. Jindal and A. Joshi, "Design of a Capacitor-Supported Dynamic Voltage Restorer for Unbalanced and Distorted Loads," IEEE Trans. Power Del., vol. 19, no. 1, pp. 405-413, January 2004.
[5] J. G. Nielsen, M. Newman, H. Nielsen, and F. Blaabjerg, "Control and Testing of a Dynamic Voltage Restorer (DVR) at Medium Voltage Level," IEEE Trans. Power Electron., vol. 19, no. 3, pp. 806-813, May 2004.
[6] J. A. Martinez and J. M. Arnedo, "Voltage Sag Studies in Distribution Networks - Part I: System Modeling," IEEE Trans. Power Del., vol. 21, no. 3, pp. 338-345, July 2006.
[7]
[8]
[9] M. Moradlou and H. R. Karshenas, "Design Strategy for Optimum Rating Selection of Interline DVR," IEEE Trans. Power Del., vol. 26, no. 1, pp. 242-249, January 2011.
[10] A. K. Sadigh and K. M. Smedley, "Review of Voltage Compensation Methods in Dynamic Voltage Restorer (DVR)," IEEE Power and Energy Society General Meeting, July 2012, pp. 1-8.
[11] Pradip Kumar Saha, Sujay Sarkar, Surojit Sarkar and Gautam Kumar Panda, "Dynamic Voltage Restorer for Power Quality Improvement," International Journal of Engineering and Computer Science, vol. 2, no. 1, Jan. 2013.
[12] D. P. Kothari, Pychadathil Jayaprakash, Bhim Singh, and Ambrish Chandra, "Control of Reduced-Rating Dynamic Voltage Restorer with a Battery Energy Storage System," IEEE Trans. Power Del., vol. 50, no. 2, March/April 2014.
[13] Himadri Ghosh, Pradip Kumar Saha and Goutam Kumar Panda, "Design and Simulation of a Novel Self-Supported Dynamic Voltage Restorer for Power Quality Improvement," Int. J. Elect. Power Energy Syst., vol. 3, no. 6, ISSN 2229-5518, June 2012.
Abstract To reduce power quality issues, it is important to eliminate the harmonics in power
systems. Harmonic elimination through a Shunt Active Power Filter (SAPF) provides higher efficiency
when compared with other filters. Non-model-based controllers have been designed for the control of a
SAPF to reduce the distortion created by non-linear loads. The Artificial Neural Network
(ANN) is becoming an increasingly popular technique in many control applications due to its parallel operation and
high learning capability. In this paper, a Least Mean Square (LMS) based ADALINE ANN is proposed
to regulate the DC bus voltage (Vdc), eliminate harmonics and provide load compensation in the system. The
Simulink model of the proposed system is developed using the MATLAB/SIMULINK tool. The performance
of the ANN controller is compared with an FLC and a conventional PI controller. The proposed method offers
a better dynamic response and efficient control under varying load conditions.
Index Terms Power Quality, Harmonic Elimination, Shunt Active Power Filter, Neural Network
controller.
I. INTRODUCTION
Many domestic and industrial non-linear loads are power electronic switching devices, such
as televisions, personal computers, business and office equipment (copiers, printers), and industrial
equipment such as Programmable Logic Controllers (PLCs), Adjustable Speed Drives (ASDs), rectifiers,
inverters and CNC tools. Power quality issues such as interruptions, voltage sag, swell, harmonics, noise and
switching transients occur in the power system and introduce serious power pollution on the utility
side. Among these power quality issues, harmonics are the major contributor to pollution of the power
grid. Traditionally, passive LC filters have been used to avoid these effects [1]. Resonance, fixed
compensation and large size are problems arising in passive filters. These problems are overcome by
active filters, which can address more than one harmonic at a time.
Among the active filters, the SAPF is a power electronic converter that is connected in parallel and
cancels the reactive and harmonic currents due to the non-linear load [2]. Ideally, the SAPF needs to generate
the reactive and harmonic current required to compensate the non-linear loads on the supply line. The SAPF is a
Voltage Source Inverter (VSI) with a DC-side capacitor (Cdc); it generates the filter current (if), which is injected
into the utility power grid. This cancels the harmonic components drawn by the non-linear load and keeps the
utility line current (is) sinusoidal [3]. It carries the compensation current plus a small
amount of active fundamental current supplied to compensate for system losses.
The Vdc is regulated by using a PI controller, which improves the system performance effectively.
Several techniques are available to generate the switching current for the APF [4]-[6]. Bhim Singh et
al. proposed a PI control algorithm for a single-phase SAPF [7]. In the PI control strategy, the reference current is
calculated by sensing only the line currents [8]. The PI controller requires an accurate linear mathematical model,
so it fails to perform satisfactorily under non-linearity, load disturbances and parameter variations [9].
Conventional control requires mathematical analysis of the system, so soft computing is an
alternative approach to controlling the APF. Soft computing is a technology for extracting information from the
process signal by using expert knowledge. To enhance the performance of the SAPF, genetic algorithms,
the bacterial foraging technique, particle swarm optimization, Ant Colony Optimization (ACO), fuzzy logic
controllers and ANN techniques have been employed. The SAPF is optimized by the bacterial foraging (BF) technique
for load compensation in [4] and by ant colony optimization (ACO) in [5]. The APF is controlled by ANN
technology in [10]-[11]. An adaptive neural network compensation algorithm is used to compensate
harmonics and reactive power for the PQ and DQ strategies [14]-[15]. The Takagi-Sugeno FLC and Mamdani
FLC are compared in [16]. FLCs with different membership functions are compared in [17]-[18]. The
conventional PI, FLC and ANFIS are compared in [12]-[13] based on the PQ strategy.
In this paper, an ANN is used for controlling the SAPF. The performance indices considered are the
percentage peak overshoot (%Mp), the DC-link voltage settling time (Vdc_Ts) and the Total Harmonic Distortion
(%THD). The proposed ANN controller offers an improved dynamic response compared with the FLC and the
conventional PI controller.
II. REFERENCE SOURCE CURRENT ESTIMATION METHOD
Due to the non-linear load, harmonic distortion occurs in the supply system and in other loads
connected to the same supply. Hence, the SAPF is connected across the main supply system at the Point
of Common Coupling (PCC). Fig. 1 shows the basic principle of the SAPF. It controls and cancels the current
harmonics on the utility side by supplying a compensating current, which makes the source current in phase
with the source voltage [3].
From Fig.1, the instantaneous current is given by Eq. (1):
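In standard SAPF notation (a reconstruction, since the equation image is not reproduced; i_s, i_L and i_c denote the source, load and compensating filter currents), the current balance at the PCC underlying Eq. (1) is:

```latex
i_s(t) = i_L(t) - i_c(t)
```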
There are also switching losses in the PWM converter, so the utility must supply a small
additional current to cover the capacitor leakage and converter switching losses along with the real power of the load.
The total peak current supplied by the source (I_sp) is given by Eq. (7),
where I_sl is the peak value of the loss current.
If the active filter provides the total reactive and harmonic power, the source current will be in phase with the
utility voltage and purely sinusoidal. At this time, the active filter must provide the compensating current
as in Eq. (8):
Thus, for accurate and instantaneous compensation of reactive and harmonic power, it is necessary
to estimate i_s(t), i.e., the fundamental component of the load current, as the reference current. The
peak value of the reference current can be estimated by controlling the DC-side capacitor voltage. Ideal
compensation requires the mains current to be sinusoidal and in phase with the source voltage, irrespective
of the nature of the load current. The desired source currents, after compensation, can be given as in Eq. (9):
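In standard notation, the desired source currents of Eq. (9) can be written as (a sketch; unit sine templates in phase with the respective phase voltages are assumed):

```latex
i_{sa}^{*}(t) = I_{sp}\sin(\omega t),\qquad
i_{sb}^{*}(t) = I_{sp}\sin\!\left(\omega t - \tfrac{2\pi}{3}\right),\qquad
i_{sc}^{*}(t) = I_{sp}\sin\!\left(\omega t + \tfrac{2\pi}{3}\right)
```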
where I_sp is the amplitude of the desired source current, while the phase angle can be obtained
from the source voltages.
Hence, Isp needs to be determined. The peak value of the reference current is estimated by
regulating the Cdc voltage of the PWM converter. The capacitor voltage is compared with a reference value,
and the error is processed in the ANN controller. The output of the ANN controller is taken as the
amplitude of the desired source current, and the reference currents are obtained by multiplying this peak
value with unit sine vectors in phase with the source voltages. The detailed schematic diagram of the ANN-controller-based SAPF is shown in Fig. 2. The modified Space Vector Pulse Width Modulation (SVPWM)
current control scheme [19] is used to generate the switching pulses of the SAPF. In this paper, the performance of the
proposed ANN controller is compared with the FLC and the conventional PI controller of the SAPF.
III. DESIGN OF ANN CONTROLLER
An ANN is implemented to control the Cdc voltage based on processing of the Vdc error i(n), in order
to improve the dynamics of the SAPF. An ANN consists of a large number of strongly connected elements.
An artificial neuron represents the biological neuron concept realized in a computer program. The artificial
neuron model is shown in Fig. 3.
Inputs i(n) enter the processing element from the left. The first step is to multiply each of
these inputs by its respective weighting factor w(j). These weighted inputs are then fed into the summing
function (sum of w(j)*i(n)), and the information flows to the output through a transfer function, which may be a
threshold function, sigmoid function, tangential function, Gaussian function, hyperbolic function, linear
function or pure linear function [20].
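The processing element described above can be sketched in a few lines of Python (a generic illustration; the example inputs, weights and the default sigmoid transfer function are arbitrary choices):

```python
import math

def neuron(inputs, weights, transfer=lambda s: 1.0 / (1.0 + math.exp(-s))):
    """Weighted sum of inputs passed through a transfer function (sigmoid by default)."""
    s = sum(w * x for w, x in zip(weights, inputs))  # summing function: sum(w(j)*i(n))
    return transfer(s)

# Three inputs, three weights; swap `transfer` for a pure linear function if desired
out = neuron([0.5, -0.2, 0.1], [1.0, 2.0, 3.0])
```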
Adaline Based Control Algorithm
The proposed ANN is based on the LMS algorithm; the ADALINE is trained to track the unit vector
templates so as to maintain minimum error. The initial weights are set to zero,
and the learning rate, the coefficient of convergence, lies between 0 and 1; a value of
0.001 is used in the LMS algorithm. The LMS-based ADALINE control algorithm is shown in Fig. 4. The initial
output pattern is compared with the current output, and the weights are updated using the LMS algorithm until
the error becomes small.
The amplitude of the desired source current estimated by the ANN is given in Eq. (10),
where Y is the amplitude of the desired source current, η is the learning rate, e(n) is the error between the
output and the target value, i(n) are the input values, and w(j) are the weights of the ADALINE network.
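The LMS update of Eq. (10) can be sketched as follows (an illustration on a synthetic signal, not the simulated SAPF; the template, signal and epoch count are invented for the example, while the learning rate 0.001 is the value quoted in the text):

```python
import math

def adaline_amplitude(signal, template, lr=0.001, epochs=200):
    """Single-weight ADALINE trained by LMS to extract the amplitude of `signal`
    relative to the unit-amplitude `template` (initial weight set to zero)."""
    w = 0.0
    for _ in range(epochs):
        for x, u in zip(signal, template):
            y = w * u            # ADALINE output
            e = x - y            # error between output and target
            w += lr * e * u      # LMS weight update: w <- w + lr * e * u
    return w

n = 200
template = [math.sin(2 * math.pi * k / n) for k in range(n)]
signal = [5.0 * u for u in template]   # synthetic signal of amplitude 5
w = adaline_amplitude(signal, template)
```

Because the template has unit amplitude, the converged weight directly gives the signal amplitude, mirroring how the ADALINE extracts the desired source-current peak.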
The ADALINE used to extract the amplitude of the desired source current is shown in Fig. 5. The input of the
ANN block, i(n), is the error signal obtained by comparing the capacitor voltage with its reference value. The desired peak
value of the source current is estimated using the LMS-based ADALINE ANN.
The reference currents are estimated by multiplying this desired peak value with the unit sine
vectors in phase with the source voltages. The modified SVPWM current control scheme is used to generate
the switching pulses of the SAPF by comparing the actual source current with the desired source current estimated
by the ANN.
Fig. 7. Normalized triangular membership functions of the FLC for: (a) input variable e(t), (b) input
variable Δe(t), (c) output variable Iref
2. Transient response
To execute the transient analysis, the load resistance is increased from 6.7 Ω to 10 Ω at t = 0.3 s.
The supply voltage (Vsa), supply current (Isa), load current (ILa), filter current (Ica) and Vdc for the
conventional PI, ANN and FLC controllers are shown in Fig. 12 to Fig. 14 respectively. The
performance indices are listed in Table III.
Compared with the FLC and the ANN, the rise or dip in Vdc is larger with the conventional PI
controller and takes more cycles to settle down. In the conventional method, the %THD in the source current settles after
3-4 cycles. The FLC takes 10 ms and the ANN takes 6 ms to settle at Vdc.
TABLE III
PERFORMANCE ANALYSES OF SAPF BASED ON CONVENTIONAL PI, ANN AND FLC
CONTROLLERS
The comparison of settling time, %THD and peak overshoot for the conventional PI, FLC and
ANN controllers is shown in Fig. 15 to Fig. 17. The ANN controller settles in 6 ms, faster than the FLC
and conventional PI controllers, and the %THD is lower with the ANN than with the FLC and conventional PI
controllers. However, the peak overshoot produced by the ANN is larger than that of the FLC and the conventional PI.
Overall, the dynamic performance of the ANN is better than that of the FLC and the conventional PI controller.
CONCLUSION
In this paper, non-model-based controllers are designed to achieve better utilization and
reactive current compensation. Soft computing techniques were applied to control the switching of the
SAPF. The LMS-based ADALINE network is trained online to extract the fundamental load active current
magnitude. Performance measures such as the settling time and %THD of the ANN-based SAPF controller are better
than those of the FLC and conventional PI controllers, and the ANN is found to provide a much better response under dynamic
conditions.
REFERENCES
[1] H. Akagi, "New trends in active filters for power conditioning," IEEE Trans. Ind. Appl., vol. 32, no. 6, pp. 1312-1322, 1996.
[2] S. Mishra and C. N. Bhende, "Bacterial Foraging Technique-Based Optimized APF for Load Compensation," IEEE Trans. Power Del., vol. 22, no. 1, pp. 457-465, 2007.
[3] Bhim Singh, Kamal Al-Haddad and Ambrish Chandra, "A Review of Active Filters for Power Quality Improvement," IEEE Trans. Ind. Electron., vol. 46, no. 5, pp. 960-971, 1999.
[5] M. El-Habrouk, M. K. Darwish and P. Mehta, "Active power filters: A review," Proc. IEE Electric Power Applications, pp. 403-413, 2000.
[6] Zainal Salam, Tan Perng and Awang Juosh, "Harmonics Mitigation using Active Filter: A Technical Review," Elektrika, pp. 17-26, 2006.
[7] Bhim Singh, Ambrish Chandra and Kamal Al-Haddad, "An Improved Single Phase Active Filter with Optimum DC Capacitor," Proc. IEEE IECON 22nd International Conference, pp. 677-682, 1996.
[8] C. N. Bhende, S. Mishra and S. K. Jain, "TS-Fuzzy-Controlled Active Power Filter for Load Compensation," IEEE Trans. Power Del., vol. 21, no. 3, pp. 1459-1465, 2006.
[9] S. K. Jain, P. Agrawal and H. O. Gupta, "Fuzzy logic controlled shunt active power filter for power quality improvement," IEE Proc. Electric Power Applications, vol. 149, no. 5, pp. 317-328, 2002.
[10] J. R. Vazquez and P. Salmeron, "Active Power Filter Control using Neural Network Technologies," IEE Proc. Electr. Power Appl., vol. 150, no. 2, pp. 139-145, 2003.
[11] Bhim Singh, Ambrish Chandra and Kamal Al-Haddad, "Computer-aided modeling and simulation of active power filters," Electrical Machines and Power Systems, pp. 1227-1241, 1999.
[12] Parmod Kumar and Alka Mahajan, "Soft Computing Techniques for the Control of an Active Power Filter," IEEE Trans. Power Del., vol. 24, no. 1, pp. 452-461, 2009.
[13] Brahmaiah Routhu and N. Arun, "PI, FUZZY and ANFIS Control of 3-phase Shunt Active Power Filter," International Journal of Engineering and Technology, vol. 5, no. 3, pp. 2163-2171, 2013.
[14] Bhim Singh and Jitendra Solanki, "An Implementation of an Adaptive Control Algorithm for a Three-Phase SAPF," IEEE Trans. Ind. Electron., vol. 56, no. 8, pp. 2811-2820, 2009.
[15] Bhim Singh and Jayaprakash, "Implementation of Neural-Network-Controlled Three-Leg VSC and a Transformer as a Three-Phase Four-Wire DSTATCOM," vol. 47, no. 4, pp. 1892-1901, 2011.
[16] Fatiha Mekri, Benyounes Mazari and Mohammed Machmoum, "Control and Optimisation of Shunt Active Power Filter Parameters by Fuzzy Logic," Can. J. Elect. Comput. Eng., vol. 31, no. 3, pp. 127-134, 2006.
[17] Suresh Mikkilli and A. K. Panda, "Real time implementation of PI and fuzzy logic controllers based shunt active filter control strategies for power quality improvement," Elsevier Int. J. Electrical Power and Energy Systems, vol. 43, pp. 1114-1126, 2012.
[18] K. Sundararaju and A. Nirmal Kumar, "Cascaded and Feed-forwarded Control of Multilevel Converter Based STATCOM for Power System Compensation," International Review on Modelling and Simulations (I.RE.MO.S.), vol. 5, no. 2, pp. 609-615.
[19] Anup Kumar Panda and Suresh Mikkilli, "FLC based shunt active filter (p-q and Id-Iq) control strategies for mitigation of harmonics with different fuzzy MFs using MATLAB and real-time digital simulator," Elsevier Int. J. Electrical Power and Energy Systems, vol. 47, pp. 313-336, 2013.
[20] Microsemi User Guide, "Space Vector Pulse Width Modulation Hardware Implementation," 2014.
[21] A. Dhuliya and U. S. Tiwary, "Introduction to Artificial Neural Networks," IEEE transaction in Electronic Technology, pp. 36-62, 1995.
Abstract Hiding a message in compression codes can reduce transmission costs and simultaneously
make the transmission more secure. This paper presents a high-performance data-hiding Lempel-Ziv-Welch
(HPDH-LZW) scheme, which reversibly embeds data in LZW compression codes by modifying the value of
the compression codes: the value of an LZW code either remains unchanged or is changed to
its original value plus the LZW dictionary size, according to the data to be embedded.
Compared to other information-hiding schemes based on LZW compression codes, the proposed scheme
achieves better hiding capacity by increasing the number of symbols available to hide secrets, and also
achieves faster hiding and extracting speeds due to its lower computation requirements.
Keywords LZW, Steganography, Information hiding.
I. INTRODUCTION
With the rapid development of new Internet techniques, huge amounts of data are generated on the
Internet daily. With the extensive, worldwide use of the Internet, it is now necessary to encrypt sensitive data
before transmission to protect those data. Reversible data-hiding techniques can ensure that the receiver can
receive hidden messages and recover needed data without distortion. Reversible data-hiding has received
extensive attention since recoverable media are more useful when protecting the security and privacy of
sensitive information. For example, assume that the personal information of a patient is private information
and the patients X-ray images are used as cover media. It is very important to recover the X-ray images
without any loss of detail after retrieving the patients personal information. Currently, reversible datahiding schemes are applied in three domains, i.e., the spatial domain, the transformed domain and the
compression domain. In the spatial domain, the values of the pixels of the cover image are altered directly
to hide the data. In the transformed domain, the cover image is processed by a transform algorithm to
obtain the frequency coefficients. Then, the frequency coefficients are modified to hide the data. In the
compression domain, the compression code is altered to hide the data. LZW coding is a simple, well-known,
lossless compression algorithm that compresses and decompresses data by using a dictionary that is
automatically produced, so LZW coding eliminates the need to analyze the source file or transmit any
auxiliary information to the decoder.
The related DH-LZW scheme, based on the LZW algorithm, hides data by shrinking one character
of a symbol. However, its hiding capacity is low, because only a symbol whose
length is greater than a threshold can hide secret data, and an embeddable symbol hides only one secret bit.
The HCDH-LZW scheme improves on the method of Shim, Ahn, and Jeon
by shrinking the characters according to the length of the symbol used to hide the data, thereby achieving
higher embedding capacity. The hiding capacity is higher because more symbols are available to hide
secret bits and because one symbol can hide more than one secret bit. However, only symbols with lengths
larger than the threshold can hide data, and repeated symbols increase the size of the dictionary, which, in
turn, lowers the hiding speed. In addition, the extracting algorithm is very complicated, which increases
the computation costs. Further, both schemes must transmit auxiliary information: the threshold value.
To overcome the shortcomings of these methods, the proposed data-hiding scheme is based
on LZW codes and utilizes the relationship between the output compression codes and the size of the
dictionary. The proposed scheme guarantees that the receiver can recover the source data and extract the
hidden data without loss. In comparison with other schemes, our scheme achieves a much higher
embedding capacity and lower computation costs.
The need for data hiding is such that the existence of the message is not known to anyone apart from
the sender and the intended receiver. In plain data hiding, the receiver is able to recover only the hidden data
and not the source data, which is used as the cover medium.
In the example, the source file is sddsddssddsdsddsddsd and the secret file is 1001000100. In
the following table, the first secret bit is 1, and since 256 symbols existed in the dictionary before the data-hiding
procedure, the output code is the value of the original code plus the current size of the dictionary,
i.e., 371.
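The embedding rule of the example can be sketched as follows (an illustrative reimplementation from the description, not the authors' code; the dictionary is initialized with the 256 single-byte symbols):

```python
def lzw_embed(source, secret_bits):
    """LZW-compress `source`; for each output code, embed one secret bit by
    adding the current dictionary size to the code when the bit is 1."""
    dictionary = {chr(c): c for c in range(256)}    # 256 initial single-character symbols
    codes, bits, current = [], list(secret_bits), ""
    for ch in source:
        if current + ch in dictionary:
            current += ch
            continue
        code = dictionary[current]
        if bits and bits.pop(0) == "1":             # bit 1 -> code + dictionary size
            code += len(dictionary)
        codes.append(code)
        dictionary[current + ch] = len(dictionary)  # grow dictionary as in plain LZW
        current = ch
    if current:                                     # flush the final symbol
        code = dictionary[current]
        if bits and bits.pop(0) == "1":
            code += len(dictionary)
        codes.append(code)
    return codes

codes = lzw_embed("sddsddssddsdsddsddsd", "1001000100")
```

For the source and secret files of the example, the first output code is 115 ('s') plus the initial dictionary size 256, i.e. 371, matching the table; the receiver can undo the shift because any code greater than or equal to the current dictionary size must carry a hidden 1.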
The proposed scheme increases the embedding capacity by increasing the number of embeddable
symbols. The increased hiding and extracting speeds of the proposed scheme are the result of its simple
computation. Moreover, the proposed scheme decreases the dictionary's size,
because the content of the dictionary is not modified during the data-hiding phase.
This scheme is based on the LZW compression code but modifies the value of the LZW compression
codes to embed secret data. The proposed scheme increases the number of symbols available to hide
secrets and does not change the content of the dictionary. Since the maximum number of hidden bits in the
proposed scheme is equal to the size of the dictionary, it achieves much higher embedding capacity than
HCDH-LZW. In addition, the proposed scheme achieves faster hiding and extracting speeds than HCDH-LZW. Also, the dictionary generated by our proposed scheme is much smaller than that for HCDH-LZW.
Figure 6.1, the embedding capacity graph, shows the comparison between the existing system and the
proposed system. The embedding capacity increases with the file size. Thus, the data-hiding
speed increases in the high-performance lossless data-hiding scheme.
5. Conclusion and Future Work
In the proposed scheme, the value of the LZW compression code is modified to embed the secret
data. The proposed scheme increases the number of symbols available to hide secrets and does not change
the content of the dictionary. It achieves higher embedding capacity and faster hiding and extracting speeds
than HCDH-LZW. The dictionary generated is also much smaller than that of HCDH-LZW. From the results
it can be observed that the proposed scheme works better than the existing system and that high embedding
capacity is achieved. Applying this scheme to a more efficient version of the LZW algorithm can be
taken up as future work.
6. REFERENCES
[1] Chang C.C, Lee C.F and Chuang L.Y, (2009), "Embedding secret binary message using locally adaptive data compression coding," International Journal of Computer Sciences and Engineering Systems, Vol. 3, No. 1, pp. 55-61.
[2] Chang C.C and Lin C.Y, (2007), "Reversible steganographic method using SMVQ approach based on declustering," Information Sciences, Vol. 177, No. 8, pp. 1796-1805.
[3] Chang C.C and Lu T.C, (2006), Reversible index domain information hiding scheme based on side-
[4] Chang C.C and Wu W.C, (2006), "A steganographic method for hiding secret data using side match vector quantization," IEICE Transactions on Information and Systems, Vol. 8, No. 9, pp. 2159-2167.
[5] Chen C.C and Chang C.C, (2010), "High capacity reversible data hiding for LZW codes," Proceedings of the Second International Conference on Computer Modeling and Simulation, No. 1, pp. 3-8.
[6] Chen W.J and Huang W.T, (2009), "VQ indexes compression and information hiding using hybrid lossless index coding," Digital Signal Processing, Vol. 19, No. 3, pp. 433-443.
[7] Jo M and Kim H.D, (2002), "A digital image watermarking scheme based on vector quantization," IEICE Transactions on Information and Systems, Vol. 85, No. 6, pp. 1054-1056.
[8] Lu Z.M, Wang J.X and Liu B.B, (2011), "An improved lossless data hiding scheme based on image VQ index residual coding," Journal of Systems and Software, Vol. 82, No. 6, pp. 1016-1024.
[9]
I. INTRODUCTION
QoS is defined as the ability of a web service to respond to expected invocations. A web service is
a piece of software that is available over the internet and uses a standardized XML messaging system. Web
services enable communication among various applications by using open standards such as HTML, XML,
WSDL and SOAP.
HTML (Hypertext Markup Language)
HTML is a collection of markup symbols or codes in a file for display on the WWW (World Wide
Web). It is used to create visually engaging interfaces for web applications.
XML (Extensible Markup Language)
XML defines a set of rules for encoding documents in a format that is both human-readable and
machine-readable. A typical XML declaration is:
<?xml version="1.0" encoding="UTF-8"?>
WSDL (Web Services Description Languages)
WSDL is an XML-based language used for describing the functionality offered by a web
service. It provides a machine-readable description of how the service can be called, what parameters it
expects and what data structures it returns.
SOAP (Simple Object Access Protocol)
SOAP is a protocol specification for exchanging structured information in the implementation of
web services in computer networks. It uses XML information set for its message format.
Related works
In [10], web services are described as a collection of software components and standards for next-generation
technologies, integrated with a GIS application to produce an interactive interface for the travel and tourism
domain. GIS (Geographic Information System) based technology incorporates common database operations
such as query and statistical analysis with the unique visualization and geographic analysis benefits offered
by maps. Quality of service (QoS) is a combination of several qualities or properties of a service, such as
Availability, Response Time, Throughput and Security [2].
In [11], the response time of structured BPEL (Business Process Execution Language) compositions is analyzed.
The sequence constructor corresponds to a sequential execution of the elementary web services s1 to sn. The analytical
formula for the mean response time E(Tsequence) is given in [11].
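For the sequence constructor, assuming the n elementary services execute one after another and independently, the mean response time is simply additive:

```latex
E(T_{\text{sequence}}) = \sum_{i=1}^{n} E(T_{s_i})
```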
Fig. 1. An approach to get the overall response time using an automation tool during a load test
Proposed System
The proposed Quality of Service (QoS) non-functional model is composed of four criteria as parameters
for the quality of the web service model: Service Cost, Service Response Time, Service Availability
and Service Reputation. The proposed model deals with the improvement of Service Response Time
(SRT) and Service Availability (SA). By improving these non-functional QoS attributes,
the performance of the web services can be improved.
REFERENCES
[2] Daniel A. Menascé, George Mason University, "QoS Issues in Web Services," IEEE Internet Computing, 1089-7801/02, 2002.
[3] D. Cotroneo, M. Gargiulo, S. Russo and G. Ventre, "Improving the availability of web services," 2002.
[4] Daniel A. Menascé, "Response Time Analysis of Composite Web Services," IEEE Internet Computing, vol. 8, no. 1, pp. 90-92, 2004.
[5]
[6]
[7] M. Dakshayini and H. S. Guruprasad, "An optimal model for priority based service scheduling policy for cloud computing environment," International Journal of Computer Applications (0975-8887), 2011.
[8] Marzieh Karimi, Faramarz Safi Esfahani and Nasim Noorafza, "Improving Response Time of Web Service Composition based on QoS Properties," Indian Journal of Science and Technology, vol. 8 (16), 55122, July 2015.
[9] Rahul Sharma, "An Efficient approach to calculate overall response time during Load Testing," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 5, issue 8, August 2015, ISSN: 2277-128X.
[10] R. Sethuraman, T. Sasiprabha and A. Sandhya, "An Effective QoS Based Web Service Composition Algorithm for Integration of Travel & Tourism Resources," Procedia Computer Science 48 (2015), pp. 541-547.
[11] Serge Haddad, Lynda Mokdad and Samir Youcef, "Response time analysis for composite web services," vol. 44, pp. 1041-1045.
[12] Steffen Bleul, Thomas Weise and Kurt Geihs, "The Web Service Challenge - A review on Semantic Web Service Composition," Electronic Communications of the EASST, volume X (2008), ISSN 1863-2122.
[13] Verka Jovanović and Angelina Njeguš, "The Application of GIS and its Components in Tourism," Yugoslav Journal of Operations Research, vol. 18 (2008), no. 2, pp. 261-272.
Abstract: The proposed auto irrigation system uses a soil moisture sensor to detect the moisture
level and a 4x4 keypad to select among various crops. When the moisture content of the soil falls, the
sensor sends the detected value to the microcontroller, which switches the water pump on automatically
according to the moisture level. The main aim of this paper is to reduce human intervention for farmers
and to use solar energy for irrigation. The entire system is controlled by a PIC microcontroller.
Index Terms: Auto irrigation, moisture sensor, water pump, PIC microcontroller.
I. INTRODUCTION
A proper irrigation method has to be implemented because of the lack of rain and the scarcity
of water in the soil. Agricultural fields always need and depend on the water level of the soil, but
continuous extraction of water reduces the soil's moisture level; to avoid this problem a planned
irrigation system should be followed. Improper use of water also leads to the wastage of a significant
amount of water. For this purpose, an automatic plant irrigation system is designed using a moisture
sensor and solar energy.
The proposed system derives power from sunlight through photovoltaic cells, so it does not
depend on grid electricity: the irrigation pump is powered by solar energy. The circuit comprises soil
moisture sensors inserted in the soil to sense whether the soil is wet or dry.
A PIC microcontroller controls the whole system. When the moisture level of the soil is low,
the sensor detects the soil condition and signals the relay unit connected to the switch of the motor:
the motor is switched on in the dry condition and off when the soil is wet. The sensor inserted into the
soil senses the moisture level and signals the microcontroller whether the land needs water or not. The
signal from the sensor is received through the output of the comparator and is processed according to
the program stored in the microcontroller. When the soil is dry the motor is ON, and in the wet condition
the motor is OFF; this motor state is displayed on a 16x2 LCD.
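The controller logic above can be sketched as follows. This is a minimal illustration, not the firmware itself: the per-crop moisture thresholds and the `motor_state` function are hypothetical, and on the real hardware the moisture value would come from the PIC's ADC rather than a function argument:

```python
# Illustrative per-crop moisture thresholds (percent); real values
# would be calibrated for the sensor and crop in use.
CROP_THRESHOLDS = {"paddy": 70, "wheat": 50, "sugarcane": 60}

def motor_state(crop, moisture_percent):
    """Return 'ON' when the soil is drier than the crop's threshold,
    'OFF' otherwise - the dry/wet rule described in the text."""
    return "ON" if moisture_percent < CROP_THRESHOLDS[crop] else "OFF"

# Dry soil for wheat -> pump on; wet soil -> pump off.
print(motor_state("wheat", 30))   # ON
print(motor_state("wheat", 65))   # OFF
```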
A. PV cell
A photovoltaic cell, otherwise known as a solar cell, converts light energy into electricity.
Photovoltaic cells are used in both simple and complicated applications: the simplest systems power
small calculators and wrist watches in everyday use, while more complex systems provide electricity for
pumping water, powering communications equipment, lighting homes and running appliances. PV cells
that take sunlight and convert it into electricity are assembled into a small grid. Solar electric panels,
more commonly referred to as photovoltaic (PV) panels, convert sunlight into electricity, which is used
to run appliances and electrical devices or stored in batteries for later use. Solar thermal panels, by
contrast, are used commercially to heat water.
Solar collectors are the heart of most active solar thermal energy systems. The collector
absorbs the sun's light energy and converts it into heat energy, which is used to heat water for
commercial and residential purposes and so conserves electric power. Solar building technologies are
useful for buildings that use a lot of power to run many applications. Solar thermal collectors, the
main component of active solar systems, are designed to meet the specific temperature requirements and
climate conditions of the different end uses. Flat-plate collectors, evacuated-tube collectors, concentrating
collectors and transpired air collectors are some of the collector types in solar systems. The proposed system
uses solar energy to switch on the water pump; here irrigation is maintained through the soil moisture sensor
and solar energy. Many plants require a minimum level of moisture, and if the required level of
water is not provided the plant will die, resulting in low production [2]. Irrigating each crop according
to the moisture level it needs is made possible by the soil moisture sensor. Due to the presence of the sensor, crops
The boost converter performs DC-to-DC conversion to improve the output power of the solar
panel: even if the panel receives less light, the boost converter delivers a higher voltage than its input
voltage. A boost converter is a switch-mode power supply containing a diode and a transistor together
with energy storage elements (an inductor and an output capacitor); filters are used to reduce the output
voltage ripple.
When the switch is closed, current flows clockwise through the inductor, which stores energy
by generating a magnetic field. When the switch is opened, the current falls because the impedance is
higher; the collapsing magnetic field maintains the current flow towards the load, so the polarity of the
inductor reverses (its left side becomes negative). As a result the two sources are in series, causing a
higher voltage that charges the capacitor through the diode D.

The automatic irrigation system consists of a solar panel, boost converter, inverter, motor supply,
soil moisture sensor, LCD display, 4x4 keypad, microcontroller and regulator. The soil moisture sensor is
inserted into the soil to detect the moisture level, and it indicates different moisture levels for different
crops. In this system crops like paddy, wheat and sugarcane can be irrigated; the 4x4 keypad is used to
select the crop. The next important part of the system is the solar panel, from which the power is drawn:
it converts sunlight into electricity, which is sent to the boost converter and to the battery.
A. Boost converter
This charge controller is suitable for charging flooded lead acid, gel cell, sealed lead acid
(SLA) and absorbed glass mat (AGM) batteries. The boost converter charge controller keeps the solar
panel current and voltage at the regulated power point while charging the battery, and helps maintain a
constant output from the solar panel to the battery.
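The voltage step-up described above follows the textbook ideal boost relation. A minimal sketch, assuming a lossless converter in continuous conduction (not a model of this specific charge controller):

```python
# Ideal (lossless, continuous-conduction) boost converter relation:
# Vout = Vin / (1 - D), where D is the switch duty cycle in [0, 1).
def boost_vout(vin, duty):
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return vin / (1 - duty)

# A panel output sagging to 9 V still reaches a 12 V battery bus
# when the controller holds the duty cycle at 0.25.
print(boost_vout(9.0, 0.25))  # 12.0
```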
B. Regulator
A regulator IC 7805 is used to convert the 12 V supply from the battery to the 5 V supply
required by the PIC16F877A microcontroller and the hygrometer soil moisture sensor.
C. 4X4 Keypad
V. CONCLUSION
The proposed system, once implemented, is beneficial to farmers, and its use of solar panel
energy also offers the government a partial solution to the energy crisis. The sensor indicates when the
soil needs water, and the automatic irrigation system acts on that indication. Various crops can be
irrigated with this system at the press of a button: according to the button pressed, the system detects
the moisture level required for the crop, so crops such as wheat, paddy and sugarcane are irrigated
automatically. The automatic irrigation system optimises the usage of water by reducing wastage, and it
reduces human work. The energy needed by the water pump and the controlling system is supplied by
solar panels, which form a small grid that can produce excess energy; using solar energy eases the
energy crisis problem.

The system requires minimal maintenance and attention because it is self-starting. To further
enhance the daily pumping rates, tracking arrays can be implemented. This system demonstrates the
[2] M. Lincy Luciana, B. Ramya, and A. Srimathi, "Automatic drip irrigation unit using PIC controller," International Journal of Latest Trends in Engineering and Technology, vol. 2, issue 3, May 2013.
[3] H. T. Ingale and N. N. Kasat, "Automated irrigation system," International Journal of Engineering Research and Development, vol. 4, issue 11, November 2012.
[4] K. Prathyusha, M. Chaitanya Suman, "Design of embedded systems for the automation of drip irrigation," International Journal of Application or Innovation in Engineering & Management, vol. 1, issue 2, October 2012.
[5] Cuihong Liu, Wentao Ren, Benhua Zhang, Changyi Lv, "The application of soil temperature measurement by LM35 temperature sensors," International Conference on Electronic and Mechanical Engineering and Information Technology (EMEIT), 2011.
[6] Andrew J. Skinner and Martin F. Lambert, "An automatic soil pore-water salinity sensor based on a wetting-front detector," IEEE Sensors Journal, vol. 11, no. 1, January 2011.
[7]
AUTHOR PROFILE
V. R. Balaji obtained his B.E. degree from Sudharsan Engineering College, Anna
University. He completed his M.E. at Government College of Technology, Coimbatore,
with a specialisation in Power Electronics and Drives. His area of interest is power
quality management in utility grids. Currently he is working as an Assistant Professor
in the Department of EEE at KCT, Coimbatore.

M. Sudha obtained her B.E. degree from Vivekanandha Institute of Engineering and
Technology for Women. She is pursuing her M.E. at Kumaraguru College of Technology,
Coimbatore, with a specialisation in Embedded System Technologies.
I. INTRODUCTION
A phishing attack is an attempt by an individual or a group to steal personal confidential information
through fake, look-alike websites of an existing authorised website. It is a form of online identity theft
that aims to steal sensitive personal information, such as online banking passwords and credit card
details, from users. There has been extensive press coverage of phishing attacks because such attacks
have been escalating in number and sophistication along with the growth in online customers. To provide
improved protection against the leaking of confidential information, we need to switch to an even more
reliable protection scheme that ensures safe networked transactions. Bank customers are the favourite
targets of those who carry out phishing attacks.

At present many bank customers use online transactions frequently, so each customer has a
username and password to access the bank account. These are sensitive and confidential pieces of
information: when they fall into the hands of phishing attackers, the attacker can use them to access the
bank account and cause a huge loss to the customer. Unfortunately many people fall for these scams,
and the victims' confidential information ends up in the wrong hands.
1.1. OVERVIEW
Nowadays, while online banking has been growing rapidly, phishing scams are sadly increasing
at the same pace. The two most used attack methods are:
(i) Email phishing
(ii) Website phishing
Email phishing involves sending the victim a fake mail that requests confidential information
while posing as an established organisation. This can be avoided simply by being aware of one fact:
no legitimate bank will include a form within an email it sends you. Website phishing is the process
of creating a look-alike fake website of an established organisation and stealing data from the user.
It can be avoided by verifying whether the website you are on is secure, but verifying every time is
not always possible even for an expert customer, which gave rise to the need for reliable techniques
to overcome website phishing.
Unlike email phishing, website phishing can affect a huge number of victims because
If the two shares are superimposed (overlapped), the value of the original pixel P can be
determined. If P is a black pixel, the overlay shows two black sub-pixels; if P is a white pixel, the
overlay shows one black sub-pixel and one white sub-pixel.
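The 2-out-of-2 scheme just described can be sketched per pixel as follows; the function names and the pair-of-sub-pixels encoding (1 = black, 0 = white) are illustrative choices:

```python
import random

# 2-out-of-2 visual cryptography for one pixel: each pixel expands to
# a pair of sub-pixels per share. Overlaying (OR-ing) the shares
# recovers a black pixel as two black sub-pixels and leaves a white
# pixel as one black and one white sub-pixel, as described in the text.
def make_shares(pixel):
    a = random.choice([(0, 1), (1, 0)])    # random sub-pixel pattern
    b = a if pixel == 0 else (a[1], a[0])  # same for white, flipped for black
    return a, b

def overlay(a, b):
    return tuple(x | y for x, y in zip(a, b))

s1, s2 = make_shares(1)           # black pixel
assert overlay(s1, s2) == (1, 1)  # two black sub-pixels
s1, s2 = make_shares(0)           # white pixel
assert sum(overlay(s1, s2)) == 1  # one black, one white sub-pixel
```

Note that each share on its own is a uniformly random pattern, so a single share leaks nothing about the original pixel.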
4. USER INTERFACE REQUIREMENTS
The modules to be implemented in the interface are listed as follows:
A. Registration with Secret Code
B. Image Captcha Generation
C. Shares Creation (VCS)
D. Login Phase
4.1 REGISTRATION WITH SECRET CODE TEXT
In the registration phase, user details such as username or user ID, password, email ID, address,
and a security text are requested from the user at the time of registration, to protect both the website and
the user from phishing attackers. The security text can be a combination of alphabets and numbers to
provide a more secure environment. This string is concatenated with a randomly generated unique key
string on the server.
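The concatenation step can be sketched as below. This is a hypothetical illustration: the function name, key length and alphabet are assumptions, not the paper's specification.

```python
import secrets
import string

# Sketch of the registration step: the user's security text is
# concatenated with a randomly generated unique key string on the
# server; the combined string is then rendered into the image captcha.
def make_captcha_text(security_text, key_len=6):
    alphabet = string.ascii_uppercase + string.digits
    key = "".join(secrets.choice(alphabet) for _ in range(key_len))
    return security_text + key

combined = make_captcha_text("TIGER7")  # "TIGER7" is a sample secret
assert combined.startswith("TIGER7") and len(combined) == 12
```

Using the `secrets` module (rather than `random`) keeps the server-side key unpredictable to an attacker.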
Fig. 4.2. Security image created with the text (security text + key string)
Fig. 4.3. Creating two shares for the security image using visual cryptography
[2] D. Weinshall, Hebrew University of Jerusalem, "Cognitive authentication schemes safe against spyware," IEEE Symposium on Security and Privacy, 2006.
[3] R. Dhamija and A. Perrig, "Déjà Vu: a user study using images for authentication," in Proc. 9th USENIX Security Symposium, 2000.
[4] S. Wiedenbeck, J. Waters, J. C. Birget, A. Brodskiy, and N. Memon, "PassPoints: design and longitudinal evaluation of a graphical password system," Int. J. HCI, vol. 63, pp. 102-127, Jul. 2005.
[5] G. Prema and S. Natarajan, "Steganography using genetic algorithm along with visual cryptography for wireless network application."
[6] Dirik, N. Memon, and J. C. Birget, "Modeling user choice in the PassPoints graphical password scheme," in Proc. Symp. Usable Privacy Security, 2007, pp. 20-28.
[7] J. Thorpe and P. C. van Oorschot, "Human-seeded attacks and exploiting hot spots in graphical passwords," in Proc. USENIX Security, 2007, pp. 103-118.
[8] P. C. van Oorschot and J. Thorpe, "Exploiting predictability in click-based graphical passwords," J. Comput. Security, vol. 19, no. 4, pp. 669-702, 2011.
[9] Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, National Credit Union Administration, Office of the Comptroller of the Currency, and Office of Thrift Supervision, "Authentication in an Internet banking environment," October 12, 2005, the FFIEC agencies.
[10] Alok Bansal, Yogeshwari Phatak and Raj Kishore Sharma, Quality Management Practices for Global Excellence, Prestige Institute of Management and Research, Indore, 2015, p. 253.
[11] P. C. van Oorschot and S. Stubblebine, "On countering online dictionary attacks with login histories and humans-in-the-loop," ACM Trans. Inf. Syst. Security, vol. 9, no. 3, pp. 235-258, 2006.
[12] Ch. Ratna Babu, M. Sridhar and B. Raveendra Babu, "Information hiding in gray scale images using pseudo-randomized visual cryptography algorithm for visual information security."
8. WEB REFERENCES
[1] http://books.google.co.in/books?id=I-9P1EkTkigC&pg=PA433&redir_esc=y#v=onepage&q&f=false
[2] http://googleonlinesecurity.blogspot.jp/2012/01/landing-another-blow-against-email.html
[3] http://www.computerworld.com/s/article/9219155/Suspected_Chinese_spear_phishing_attacks_
continue_to_hit_Gmail_users
[4]
http://archive.wired.com/science/discoveries/news/1998/01/9932
[5] http://www.pcworld.com/article/125739/article.html?page=1
[6] http://www.nytimes.com/2007/02/05/technology/05secure.html?ex=1328331600&en=295ec5d099
4b0755&ei=5090&partner=rssuserland&emc=rss&_r=0
[7] http://web.archive.org/web/20080406062149/http://people.deas.harvard.edu/~rachna/papers/
emperor-security-indicators-bank-sitekey-phishing-study.pdf
Abstract: Gesture recognition enables humans to communicate with a machine and interact naturally
without any mechanical equipment. A lot of research has already been done in the field of gesture
recognition using different mechanisms and algorithms; the majority of studies in this field use image
processing techniques and methodologies. This work aims at designing a cost-effective, low-power
device to control the locomotion of a robot using hand gestures, which advances the concept of the
unmanned vehicle. The recognition rate of postures has a lot of scope for improvement at the cost of
system response time. The theme of the study is to design a robot vehicle that can be controlled by
gesture.
Index Terms: gesture recognition, image processing techniques, unmanned vehicle, robot vehicle
I. INTRODUCTION
Robots are spreading into various areas, and people are trying to control them more accurately and
easily. Controlling a robotic gadget becomes quite complicated when it must be done with a remote or
many different switches, as in military applications, industrial robotics, construction vehicles on the
civil side, and medical applications for surgery. In these fields it is quite complicated to control the
robot with a remote or switches; the operator may even get puzzled by the switches and buttons
themselves. So a new concept is introduced: control the robot vehicle by the movement of the hand,
which simultaneously controls the movement of the robot vehicle. Over the past few decades people
have been finding easier ways to communicate with robots in order to enhance their contribution to our
daily life, and humans and robots are combined to overcome new challenges. From the very early stages
one of the main objectives has been to control the robot smoothly and make humans feel comfortable.
So rather than the older method of controlling a robot by means of a remote or keyboard, it is better to
control a robot with hand gestures, because hand gesture is a very natural way of communication for
humans.
Hand gesture technology is being used in many fields nowadays and is becoming very popular
in the robotic industry. Gesture recognition enables humans to communicate with the machine (HMI)
and interact naturally without any mechanical devices. The gestures of different organs of the body
are used to control the wheelchair, and different intelligent mechanisms have been developed to
Here, a wireless camera affixed to the robot vehicle shoots the movement of the vehicle, which
is then displayed on the computer at the transmitter side; communication between the vehicle and the
computer happens by means of radio frequency. At the receiving side, the robot performs all the
actions carried out by the user, which finds application in the civil and military domains.
The data transmitted at the transmitter side is received by the ZigBee module at the other end,
and the action performed by the user is sensed by the microcontroller on the receiver side. Before the
robot vehicle acts, the signal is given to the relay and the driver circuit, and the robot performs the
same action as the human. The action can be captured by the wireless camera and viewed on the TV
at the transmitter side.
A. GESTURE SENSOR:
Gesture sensors, also called bend sensors, measure the amount of deflection caused by bending
the sensor. There are various ways of sensing deflection, from strain gauges [4] to Hall-effect sensors.
The three most common types of flexion sensors are conductive ink-based, fibre-optic, and conductive
fabric/thread/polymer-based. A property of gesture sensors is that folding the sensor at one point to a
prescribed angle is not the most effective use of the sensor; indeed, bending the sensor at one point by
more than 90° may permanently damage it. Instead, the sensor should be folded around a radius of
curvature: the smaller the radius of curvature and the greater the length of sensor involved in the
deflection, the greater the resulting resistance.
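A flex sensor of this kind is commonly read through a voltage divider feeding an ADC. The sketch below shows that relationship; the resistance values and the fixed-resistor choice are illustrative assumptions, since real sensors are calibrated per device:

```python
# Voltage-divider readout of a resistive bend sensor:
# Vout = Vcc * Rfixed / (Rfixed + Rflex). As the bend (and hence the
# sensor resistance) increases, the divider output voltage drops,
# which the microcontroller's ADC then digitises.
def divider_vout(vcc, r_fixed, r_flex):
    return vcc * r_fixed / (r_fixed + r_flex)

# Hypothetical values: flat sensor ~25 kOhm vs fully bent ~100 kOhm,
# read against a 47 kOhm fixed resistor on a 5 V supply.
flat = divider_vout(5.0, 47_000, 25_000)
bent = divider_vout(5.0, 47_000, 100_000)
assert bent < flat  # more bend -> more resistance -> lower voltage
```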
B. EMBEDDED SYSTEM:
The ATmega328 is a single-chip microcontroller from Atmel belonging to the megaAVR series.
This Atmel 8-bit AVR RISC-based microcontroller combines 32 kB of ISP flash memory with
read-while-write capability, 1 kB EEPROM, 2 kB SRAM, 23 general-purpose I/O lines, 32
general-purpose working registers, three flexible timers/counters with compare modes, internal as well
as external interrupts, a serial programmable universal asynchronous receiver transmitter (USART), a
byte-oriented 2-wire serial interface, an SPI serial port, a 10-bit A/D converter, a programmable
watchdog timer with an internal oscillator, and five software-selectable power-saving modes. In this
study we prefer it mainly because it is cost-effective and its communication can be bidirectional.
C. Wireless Transmission Protocol (Zigbee):
ZigBee is a standards-based wireless technology designed to address the unique needs of
low-cost, low-power wireless sensor and control networks; it can be used almost anywhere, is easy to
implement and requires only modest power to operate. In this paper, communication happens by means
of a wireless protocol; the existing system used only wired communication, whose many limitations are
overcome by wireless communication. ZigBee uses 128-bit keys to implement its security mechanisms.
A key can be associated either with a network, being usable by both ZigBee layers and the MAC
sublayer, or with a link, and can be acquired through pre-installation, agreement or transport. Link keys
are established based on a master key, which controls link-key correspondence. Ultimately, at least the
initial master key must be acquired through a secured
medium (transport or pre-installation), as the security of the whole network depends on it. Link and
master keys are visible only to the application layer. Different services use different variations of the
link key in order to avoid leaks and security risks.
D. DRIVER UNIT (L293D):
This driver circuit is a 16-pin DIP motor driver IC (IC6) with four input pins and four output
pins. All four input pins are connected to the output pins of the decoder IC (IC5), and the four output
pins are connected to the DC motors of the robot. Enable pins are used to enable the input/output pins
on both sides of IC6. Motor driver circuits act as current amplifiers: they take a low-current control
signal and provide a higher-current signal, which is used to drive the motors. Enable pins 1 and 9
(corresponding to the two motors) must be high for the motors to start operating. When an input is high,
the associated driver circuit is enabled; as a result, the outputs become active and work in phase with
their inputs. Similarly, when the enable input is low, that driver is disabled and its outputs are off, in
the high-impedance state.
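The enable/input behaviour just described reduces to a small truth table, sketched below for one driver channel (the function name and the "H"/"L"/"Z" encoding are illustrative):

```python
# One channel of the L293D as described in the text: the output follows
# the input ("in phase") only while the corresponding enable pin is
# high; with enable low the output is off, i.e. high-impedance ("Z").
def l293d_output(enable, inp):
    return ("H" if inp else "L") if enable else "Z"

assert l293d_output(True, True) == "H"   # enabled, output follows input
assert l293d_output(True, False) == "L"
assert l293d_output(False, True) == "Z"  # disabled -> high-impedance
```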
E. WIRELESS CAMERA:
Wireless technology is now applied to just about everything, and video surveillance benefits
from it too. A wireless camera includes a transmitter to send video over the air (radio frequency) to a
receiver instead of through a wire. Most wireless cameras are technically cordless devices, meaning
that although they transmit a radio signal, they still need to be connected to a power source (battery);
still, "wireless" is the commonly used industry term. Some cameras do have internal batteries, making
them purely wireless, but battery lifetime is still a problem for professional or even semi-professional
applications. These devices work on a simple principle: the camera contains a radio-frequency (RF)
transmitter that transmits the camera's video, which is picked up by a receiver connected to a monitor
or recording device. Some receivers have internal storage, while others must be connected to an
external storage device.
III. HARDWARE SETUP
Controlling robotic widgets becomes quite hard and complicated when it must be done with a
remote or many distinct switches, mostly in military applications, industrial robotics, construction
vehicles on the civil side, and medical applications for surgery. In these fields it is quite complicated to
control the robot with a remote or switches; the operator may get puzzled by the switches and buttons
themselves. So a new concept is introduced: control the robot vehicle with the movement of the hand,
which simultaneously controls the movement of the robot [5]. Here a prototype model illustrates the
robot vehicle: based on the movement of the hand, the robot moves in the desired direction.
CONCLUSION:
The controlling issue is always the main factor. In this study we tried to make an easier and
simpler controlling system, focusing mainly on a system that is cheap and reliable. There are many
opportunities to build further important projects on this one: a video feed can be transmitted wirelessly
to the monitor; a robotic arm can be added to the system and also operated with hand gestures; and a
hand-gesture-controlled wheelchair can be made following the same mechanism with a bigger, higher-torque
motor. This wireless gesture control system can also be helpful for controlling home appliances.
REFERENCES
[1] https://en.wikipedia.org/wiki/Gesture_recognition
[2]
[3] E. Miranda and M. Wanderley, New Digital Instruments: Control and Interaction Beyond the Keyboard, A-R Editions, Wisconsin, 2006.
[4] K. K. Kim, K. C. Kwak, and S. Y. Ch, "Gesture analysis for human-robot interaction," Proc. of the 8th Int. Conf. on Advanced Communication Technology, vol. 3, pp. 1824-1827, 2006.
[5] http://www.engineersgarage.com/contribution/accelerometer-based-hand-gesture-controlled-robot
I. INTRODUCTION
In coherent fiber-optic communication systems, both quadratures and both polarizations of the
electromagnetic field are used. This naturally results in a four-dimensional (4D) signal space. To meet the
demands for spectral efficiency, multiple bits should be encapsulated in each constellation symbol, resulting
in multilevel 4D constellations. To combat the decreased sensitivity caused by multilevel modulation,
forward error correction (FEC) is used; the combination of FEC and multilevel constellations is known as
coded modulation (CM). The most popular schemes are trellis coded modulation [1], multilevel coding
and bit-interleaved coded modulation [3]-[5].
A generic bit-interleaved coded modulation (BICM) system uses an approximate demodulator,
or log-likelihood ratio (LLR) computer. Its performance was characterized by Caire et al. in terms of the
capacity of an independent parallel-channel model with binary inputs and continuous LLRs as outputs [5],
and by Martinez et al. in terms of the generalized mutual information (GMI), where the BICM decoder
is viewed as a mismatched decoder. Although the GMI and the capacity of the parallel-channel model
coincide under optimal demodulation, they differ in general for the case of an approximate demodulator.
Multidimensional constellations optimized for uncoded systems were shown to give high MI and are
thus good for ML decoders. With the GMI, the data can be transferred without losses; the efficiency is
higher and the BER is reduced.
In probability theory and information theory, the mutual information (formerly transinformation)
of two random variables is a measure of the variables' mutual dependence. MI is the expected value of
the pointwise mutual information, and its most common unit of measurement is the bit.
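The definition above can be computed directly from a joint distribution. A minimal sketch for discrete variables, using the standard formula I(X;Y) = sum over x,y of p(x,y) log2( p(x,y) / (p(x)p(y)) ):

```python
from math import log2

# Mutual information of two discrete random variables, in bits,
# computed from their joint distribution given as a matrix
# joint[i][j] = P(X = i, Y = j).
def mutual_information(joint):
    px = [sum(row) for row in joint]           # marginal of X
    py = [sum(col) for col in zip(*joint)]     # marginal of Y
    return sum(p * log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

# X = Y uniform on {0,1}: one variable fully determines the other,
# so I(X;Y) = 1 bit; two independent uniform bits give 0 bits.
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-12
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-12
```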
The generalized mutual information is an achievable rate for bit-interleaved coded modulation
and is highly dependent on the binary labeling of the constellation. The BICM GMI, sometimes called
the BICM capacity, can be evaluated numerically; this approach, however, becomes impractical when
the number of constellation points and/or the constellation dimensionality grows, or when many different
labelings are considered.

A simple approximation for the BICM GMI based on the area theorem of the demapper's extrinsic
information transfer (EXIT) function is proposed. Numerical results show that the proposed approximation
gives good estimates of the BICM GMI for labelings with close-to-linear EXIT functions, which include
labelings of common interest such as the natural binary code and the binary reflected Gray code. This
approximation is used to optimize the binary labeling of the 32-APSK constellation defined in the
DVB-S2 standard; gains of approximately 0.15 dB are obtained.
Four-dimensional modulation formats are an attractive complement to conventional polarization-multiplexed
formats in the context of bandwidth-variable transceivers, where they enable a smooth transition in
spectral efficiency while requiring only marginal additional hardware effort; results of numerical
simulations and experiments supporting this statement are presented. Bandwidth-variable transceivers
enable the software-controlled adaptation of physical-layer parameters such as transmitted bit rate,
spectral efficiency and transparent reach according to the traffic demands at hand. In particular, we
focus on recent advances in four-dimensional modulation formats and in modulation format transparent
V. CONCLUSION
We studied the achievable rates for coherent optical CM transceivers where the receiver is based
on a bit-wise (BW) decoder. Multidimensional constellations optimized for uncoded systems give high
MI with ML decoders, but these constellations are not well suited for BW decoders. It was shown that
the GMI is the correct metric for studying the performance of capacity-approaching CM transceivers,
and that, to a certain extent, the BER is improved for a given SNR.
REFERENCES
[1] G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inf. Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.
[2] H. Imai and S. Hirakawa, "A new multilevel coding method using error-correcting codes," IEEE Trans. Inf. Theory, vol. IT-23, no. 3, pp. 371-377, May 1977.
[3] E. Zehavi, "8-PSK trellis codes for a Rayleigh channel," IEEE Trans. Commun., vol. 40, no. 3, pp. 873-884, May 1992.
[4] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 927-946, May 1998.
[5]
[6] S. Benedetto, G. Olmo, and P. Poggiolini, "Trellis coded polarization shift keying modulation for digital optical communications," IEEE Trans. Commun., vol. 43, no. 2/3/4, pp. 1591-1602, Feb.-Apr. 1995.
[7] H. Bulow, G. Thielecke, and F. Buchali, "Optical trellis-coded modulation (oTCM)," in Proc. IEEE Optic. Fiber Commun. Conf., Los Angeles, CA, USA, Mar. 2004.
[8] H. Zhao, E. Agrell, and M. Karlsson, "Trellis-coded modulation in PSK and DPSK communications," in Proc. Eur. Conf. Opt. Commun., Cannes, France, Sep. 2006.
[9] M. S. Kumar, H. Yoon, and N. Park, "Performance evaluation of trellis code modulated oDQPSK using the KLSE method," IEEE Photon. Technol. Lett., vol. 19, no. 16, pp. 1245-1247, Aug. 2007.
[10] M. Magarini, R.-J. Essiambre, B. E. Basch, A. Ashikhmin, G. Kramer, and A. J. de Lind van Wijngaarden, "Concatenated coded modulation for optical communications systems," IEEE Photon. Technol. Lett., vol. 22, no. 16, pp. 1244-1246, Aug. 2010.
[11] I. B. Djordjevic and B. Vasic, "Multilevel coding in M-ary DPSK/differential QAM high-speed optical transmission with direct detection," J. Lightw. Technol., vol. 24, no. 1, pp. 420-428, Jan. 2006.
[12] C. Gong and X. Wang, "Multilevel LDPC-coded high-speed optical systems: efficient hard decoding and code optimization," IEEE J. Quantum Electron., vol. 16, no. 5, pp. 1268-1279, Sep./Oct. 2010.
[13] L. Beygi, E. Agrell, P. Johannisson, and M. Karlsson, "A novel multilevel coded modulation scheme for fiber optical channel with nonlinear phase noise," in Proc. IEEE Global Telecomm. Conf., Miami, FL, USA, Dec. 2010.
[14] B. P. Smith and F. R. Kschischang, "A pragmatic coded modulation scheme for high-spectral-efficiency fiber-optic communications," J. Lightw. Technol., vol. 30, no. 13, pp. 2047-2053, Jul. 2012.
[15] R. Farhoudi and L. A. Rusch, "Multi-level coded modulation for 16-ary constellations in presence of phase noise," J. Lightw. Technol., vol. 32, no. 6, pp. 1159-1167, Mar. 2014.
[16] L. Beygi, E. Agrell, J. M. Kahn, and M. Karlsson, "Coded modulation for fiber-optic networks:
Toward better tradeoff between signal processing complexity and optical transparent reach, IEEE
Signal Process. Mag., vol. 31, no. 2, pp. 93103, Mar. 2014.
[18] I. B. Djordjevic, S. Sankaranarayanan, S. K. Chilappagari, and B. Vasic, Low-density parity-check
279
280
I. INTRODUCTION
Measurement of temperature plays a pivotal part in the quality of the end product in
many process industries. Almost all chemical processes and reactions are temperature dependent. Several
types of temperature sensors are available in the market with varying degrees of accuracy. The RTD is one such
sensor; it finds wide application in process industries because of its high temperature coefficient of resistance
and very stable operation over a considerable period of time.
Resistance Temperature Detectors (RTDs) operate on the principle of change in the electrical
resistance of pure metals and are characterized by a linear, positive change in resistance with temperature.
Edval J. P. Santos et al. report that these transducers display high linearity and good noise immunity [3].
They are among the most precise temperature sensors available, with resolution
and measurement uncertainties of ±0.1 °C [1]. A data acquisition (DAQ) system is used to gather signals from
the measurement sources, and LabVIEW is used to create the DAQ applications.
C.Nandhini, M.E-VLSI Design, Sri Ramakrishna Engineering College, Anna University (Chennai),
Coimbatore, India.
M.Jagadeeswari, Professor& HOD, M.E-VLSI Design, Sri Ramakrishna Engineering College, Anna
University (Chennai), Coimbatore, India
LabVIEW includes a set of VIs to configure, acquire data from, and send data to DAQ devices.
Nasrin Afsarimanesh et al. proposed LabVIEW-based characterization and optimization of thermal sensors
for reliable, high-speed measurement [5]. Each DAQ device is designed for specific hardware platforms and
operating systems. Here, data acquisition is done using the NI 9219 module along with the Hi-Speed USB carrier
NI USB-9162.
Real-world signals are conditioned by stages whose functionality varies widely depending on
the sensor. For example, RTDs produce very low-voltage signals and require voltage/current excitation,
amplification and linearization. Santhosh et al. proposed a technique that makes the output independent
of the physical properties of the RTD and avoids repeated calibration every time the RTD is replaced [7].
A very low excitation current is required to prevent self-heating. In this work the RTD (Pt100) temperature
sensor is used for temperature sensing, and the software signal conditioning is done using the LabVIEW 2014 tool.
The following are the advantages of using an RTD:
- Wide temperature range (-50 to 500 °C for thin-film and -200 to 850 °C for wire-wound elements)
- Long-term stability
- Simplicity of recalibration
- Accurate readings over relatively narrow temperature spans
The Callendar-Van Dusen equation is commonly used to approximate the RTD curve as in equation (2):
Rt = R0 [1 + A·t + B·t^2 + C·(t - 100)·t^3]
where Rt is the resistance of the RTD at temperature t, R0 is the resistance of the RTD at 0 °C,
A, B and C are the Callendar-Van Dusen coefficients shown in Table I, and t is the temperature in
°C (the C term applies only for t < 0 °C).
For temperatures above 0 °C, equation (2) reduces to the quadratic equation (3). If we pass a
known current, Iex, through the RTD and measure the voltage V0 developed across it, then the
resistance Rt = V0 / Iex and t can be estimated from equation (3).
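The quadratic inversion described above can be sketched in Python. Since Table I is not reproduced in the text, the coefficients below are the standard IEC 60751 Pt100 values and are an assumption:

```python
import math

# Assumed IEC 60751 coefficients for a standard Pt100 (Table I is not shown here)
R0 = 100.0      # resistance at 0 degC, in ohms
A = 3.9083e-3   # per degC
B = -5.775e-7   # per degC^2

def rtd_resistance(t):
    """Callendar-Van Dusen resistance for t >= 0 degC (the C term vanishes)."""
    return R0 * (1.0 + A * t + B * t * t)

def rtd_temperature(v0, iex):
    """Estimate t from the voltage V0 developed across the RTD at excitation Iex."""
    rt = v0 / iex                                  # measured resistance
    # Solve B*t^2 + A*t + (1 - Rt/R0) = 0 and take the physical root.
    disc = A * A - 4.0 * B * (1.0 - rt / R0)
    return (-A + math.sqrt(disc)) / (2.0 * B)
```

For example, with a 1 mA excitation, a reading of about 0.1385 V corresponds to roughly 100 °C.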
B. RTD Calibration
When using RTDs, the temperature is computed from the measured RTD resistance. Depending on
the temperature range and accuracy required, we can use a simple linear fit, quadratic or cubic
equations, or a rational polynomial function.
For measurements between 0 °C and 100 °C, a linear approximation can be used as in
equation (4), where R0 = 100 Ω and α = 0.00385.
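The linear fit of equation (4) amounts to inverting Rt = R0(1 + α·t); a minimal sketch using the values above:

```python
R0 = 100.0       # ohms at 0 degC
ALPHA = 0.00385  # fractional resistance change per degC (European standard)

def temp_linear(rt):
    """Linear approximation of temperature, valid roughly from 0 to 100 degC."""
    return (rt / R0 - 1.0) / ALPHA
```

For instance, temp_linear(138.5) gives 100 °C, close to the Callendar-Van Dusen value over this span.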
The average error of ±0.38 °C over this interval can be reduced to ±0.19 °C by shifting the
equation slightly, as in equation (5).
A quadratic fit provides much greater accuracy in the range 0 °C to 200 °C, with an rms error
over the range of only 0.014 °C and a maximum error of only 0.036 °C. The equation for a
European-standard 100 Ω RTD is shown in equation (6).
A cubic fit over the range -100 °C to +600 °C provides an rms error of only 0.038 °C over the
entire range, and 0.026 °C in the range 0 °C to 400 °C, as shown in equation (7).
Fitting the RTD data over its full range (-200 to +850 °C) produces formula (8) for computing
temperature from RTD resistance. Using the rational polynomial function results in an average
absolute error of only 0.015 °C over the full temperature range.
C. Accuracy of Various Approximations
The average absolute errors for the above approximations to the Temperature vs. Resistance RTD
curve are summarized in Table II.
TABLE II
TEMPERATURE RANGE AND AVERAGE ERROR
OF VARIOUS APPROXIMATIONS
The 3-Wire RTD mode as shown in Fig.4 compensates for lead wire resistance in hardware if all
the lead wires have the same resistance. The NI 9219 applies a gain of 2x to the voltage across the negative
lead wire and the ADC uses this voltage as the negative reference to cancel the resistance error across the
positive lead wire.
The 2-Wire Resistance mode, as shown in Fig. 5, does not compensate for lead wire resistance.
IV. PROPOSED WORK
The proposed work includes temperature measurement, signal conditioning and analysis of static
and dynamic resistance characteristics of RTD. The hardware setup of the proposed work is shown in Fig.6.
The acquired resistance is converted to temperature using linear fit, cubic fit, quadratic fit and
rational polynomial equation as shown in Fig.8.
Thus, from the results of static and dynamic temperature measurement, it is observed that the rational
polynomial equation exhibits higher linearity in static temperature measurement than the other approximation
techniques, and that the dynamic temperature measurement yields a better gain value, resulting in improved
performance.
VII. CONCLUSION AND FUTURE WORK
The temperature was measured using the RTD, and the software signal conditioning stages, including
voltage/current excitation, amplification and linearization, were implemented using the LabVIEW 2014 tool.
The 4-wire RTD provides good interchangeability and cancels out the lead resistance more effectively
than the other RTD wire configurations. It is observed that the best-suited method for static temperature
measurement with an RTD is the rational polynomial equation, as it provides higher linearity than the
other techniques. In dynamic temperature measurement, the software signal conditioning yields higher gain
and better efficiency than the traditional method.
Future work is to implement the signal conditioning stages of the RTD on FPGA-based embedded
hardware in order to obtain an increased data acquisition rate and enhanced performance.
ACKNOWLEDGMENT
The authors would like to thank Sri Ramakrishna Engineering College for providing excellent
computing facilities and encouragement, and Innovative Invaders Technology for providing a great
opportunity for learning and professional development.
REFERENCES
[1]
Bonnie C. Baker (2008), Precision Temperature Sensing With RTD Circuits, AN687, Microchip
Technology Inc.
[2]
Dr. M. Jagadeeswari and S. Kalaivani (2015), PLC & SCADA Based Effective Boiler Automation
System For Thermal Power Plant, International Journal of Advanced Research in Computer
Engineering & Technology (IJARCET), Volume 4, Issue 4. page(s):1653-1657.
[3]
Edval J. P. Santos and Isabela B. Vasconcelos (2008), RTD-based Smart Temperature
Sensor: Process Development and Circuit Design, Proc. 26th International Conference on
Microelectronics, Serbia.
[4]
Jikwang Kim, Jongsung Kim, Younghwa Shin and Youngsoo Yoon (2001), A study on the
fabrication of an RTD (resistance temperature detector) by using Pt thin film, Korean Journal of
[5]
Nasrin Afsarimanesh and Pathan Zaheer Ahmed (2013), LabVIEW Based Characterization and
Optimization of Thermal Sensors, international journal on smart sensing and intelligent system,
page(s):726-739.
[6]
S. K. Sen (2011), An Improved Lead Compensation Technique for Four-Wire Resistance Temperature
Detectors, IEEE Transactions on Instrumentation and Measurement, vol. 48, no. 5, page(s):
903-905.
[7]
I. INTRODUCTION
Haze (mist, fog, and other atmospheric phenomena) is a main source of degradation in outdoor images,
weakening both colors and contrast; it results from light being absorbed and scattered by turbid media
such as particles and water droplets in the atmosphere during propagation. Moreover, automatic systems
that depend strongly on the definition of their input images may fail to work normally when given degraded
images. Hence image dehazing is a challenging task in computer vision applications, and haze removal is
highly desired to improve the visual quality of these images. Early research used traditional image-processing
techniques to remove haze from a single image. The dehazing effect of histogram-based methods [2]-[4]
is limited because global processing of the entire image can lose infrequently distributed intensity
values, and histogram modification is difficult to implement in real-time applications due to its large
computational and storage requirements. Later research tried to improve the dehazing effect with multiple
images. In [6]-[8], polarization-based methods are used for dehazing with multiple images, where
polarization-filtered images can remove the visual effects of haze; this approach may fail in fog
or very dense haze. In [11], [12], Narasimhan et al. propose haze removal approaches using multiple images
of the same scene under different weather conditions. Conventional image enhancement techniques are
not useful here, since the effects of weather must be modeled using atmospheric scattering
principles that are closely tied to scene depth. Tan [14] proposes a haze removal method based on a Markov
Random Field (MRF) that maximizes local image contrast.
Tan's approach tends to produce over-saturated values and halo effects in the images. Fattal [19]
proposes a method based on independent component analysis (ICA) to remove haze from color images;
this approach is time-consuming, cannot be used for grayscale images, and has difficulty dealing with
densely hazy images. He et al. [5] propose the Dark Channel Prior (DCP): in most non-sky patches,
at least one color channel has some pixels whose intensities are very low and close to zero. Using this
prior, the thickness of haze can be estimated and the original haze-free image recovered.
The DCP method is simple and very effective in some cases, and also applicable to sky images.
Some improved algorithms [17], [18], [19] have been proposed to overcome the weaknesses of the DCP
approach, whose main drawback is that it may fail to recover the true scene radiance of distant
objects, which remain bluish. For effective haze-free imaging, guided image filtering [9] was proposed
as an edge-preserving smoothing operator; it is a fast, non-approximate linear-time algorithm that is
effective and efficient in computer vision applications. Its main drawback is that guided/bilateral
filters concentrate blurring near edges and introduce halos.
This paper proposes a novel change-of-detail prior algorithm for single-image dehazing. It is a
simple but effective prior that can estimate the thickness of haze from a hazy image and recover the original
Among these pixels, the pixels with the highest intensity in the input image I are selected as the
atmospheric luminance. The dark channel is defined as
I_dark(x) = min over y in Ω(x) of ( min over c of I^c(y) )
where I^c is a color channel of I and Ω(x) is a local patch centered at x. In this paper, an improved
version of He et al.'s method [5] is used to estimate LA. Each color channel of the input image is filtered by
an N x N minimum filter with a moving window, and the maximum value of each filtered channel is taken as
the component of the atmospheric luminance LA. When dealing with grayscale images, the filter is applied
to the input and the maximum value is selected as LA. This method produces a similar result but performs
more efficiently.
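The per-channel minimum-filter estimate of LA described above can be sketched with NumPy. The edge-clamped square window and default window size here are illustrative assumptions:

```python
import numpy as np

def min_filter(channel, n):
    """Naive N x N moving-window minimum filter with edge clamping."""
    h, w = channel.shape
    r = n // 2
    out = np.empty_like(channel)
    for i in range(h):
        for j in range(w):
            out[i, j] = channel[max(0, i - r):i + r + 1,
                                max(0, j - r):j + r + 1].min()
    return out

def atmospheric_luminance(img, n=15):
    """Min-filter each color channel, then take each filtered channel's maximum as LA."""
    return np.array([min_filter(img[..., c], n).max()
                     for c in range(img.shape[-1])])
```

On a real image this suppresses bright local outliers before the per-channel maximum is taken, which is the efficiency gain over picking raw brightest pixels.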
4. ESTIMATE AIRLIGHT
This section focuses on the approach to estimating the airlight A. The airlight model quantifies how a
column of atmosphere acts as a light source by reflecting environmental illumination towards an observer.
In eq. (2), the first term L_0·T is called direct attenuation. The second term, L_A·(1-T), indicates the
thickness of the haze; hence this term is defined as
The change-of-detail (COD) prior is inspired by two common observations. First, an image is
blurred more by haze in local regions where the haze is thicker. Second, if we sharpen
and smooth a blurred image separately, producing images I_sharp and I_smooth respectively, the
intensity difference between these two images is negatively correlated with the blurring strength.
Using equation (4), the thickness of the haze can be estimated.
4.1 SHARPENING OPERATOR
Sharpness is essentially the contrast between different colors. The purpose of sharpening an image
is to enhance the details attenuated by the scattering model and make it as close to the haze-free image
as possible. Image sharpening is done in the gradient domain. In this method, image detail is
represented by a 1-D profile of gradient magnitude perpendicular to image edges, as shown in Fig. 2.
The gradient profile is defined by starting from an edge pixel x_0 and tracing a path from p(x_0)
along the gradient directions (on both sides), pixel by pixel, until the gradient magnitude no longer decreases.
Prior knowledge of the gradient profiles is learned from a large collection of natural images;
this is called the gradient profile prior. A sub-pixel technique is used to trace the curve of the gradient profile.
4.2 SMOOTHING OPERATOR
The smoothing operator uses a Gaussian filter to smooth the clear image derived from the input;
the smoothing operator is used to simulate the multiple-scattering effect, which, as analyzed in
previous work, is a very complex process. The Gaussian filter works by using a 2-D distribution
as a point spread function. The PSF is defined as
The Gaussian PSF does not produce halo effects in the images.
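The PSF equation itself did not survive extraction; the standard 2-D Gaussian point spread function, which the text presumably refers to, is:

```latex
G(x, y) = \frac{1}{2\pi\sigma^{2}}\,
          \exp\!\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right)
```

where σ controls the smoothing strength (larger σ simulates stronger multiple scattering).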
4.3 COMPARISON
After sharpening and smoothing an image, a stability criterion is needed to evaluate the difference
between them. PSNR is not suitable for dehazing because it is based on neighborhood pixels, and the
smoothing and sharpening filters already use the neighborhood information, so PSNR would be redundant.
Since sharpening and smoothing are a pair of opposite operations, there is no need to compare against
the input image.
Since the airlight A is a component of the input image I and is negatively correlated with the difference
between the two filtered images, the hazy image is subtracted from the criterion and multiplied by a
coefficient to obtain the airlight A.
5. RESULTS AND DISCUSSION
In this section the results of the change-of-detail prior algorithm are discussed. The implementation is
carried out in the MATLAB 2013a tool. The PSNR value of the output image is found to be better than that
of the input image.
5.1 SIMULATION RESULTS USING MATLAB
Table 1 shows the analysis report comparing the PSNR and MSE values of the existing and proposed
techniques. The input hazy image contains more error; using the visibility approach, the error has
been minimized.
6. CONCLUSION AND FUTURE WORK
In this paper, we proposed a simple but effective prior, called the change-of-detail algorithm, for single-image
dehazing. The algorithm is based on the multiple-scattering phenomena that make the input image blurry.
When this method is combined with the haze imaging model, single-image dehazing becomes simple
and effective. Because the algorithm is based on local content rather than color, it can be applied to a large
variety of images, and it is meaningful for color images in all applications. A common problem in haze
removal still remains to be solved: the scattering coefficient in the atmospheric scattering model cannot
be regarded as constant under all atmospheric conditions. To overcome this problem, more detailed
physical models can be taken into account.
REFERENCES
[1] Cai Z., Xie B., and Guo F. (2010), Improved single image dehazing using dark channel prior and multi-scale retinex, in Proc. Int. Conf. Intell. Syst. Design Eng. Appl.
[2] T. K. Kim, J. K. Paik, and B. S. Kang, Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering, IEEE Trans. Consum. Electron., vol. 44, no. 1, pp. 82-87, Feb. 1998.
[3] J.-Y. Kim, L.-S. Kim, and S.-H. Hwang, An advanced contrast enhancement using partially overlapped sub-block histogram equalization, IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 4, pp. 475-484, Apr. 2001.
[5] He K., Sun J., and Tang X. (2011), Single image haze removal using dark channel prior, IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341-2353.
[6]
[7] S. Shwartz, E. Namer, and Y. Y. Schechner, Blind haze separation, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 2, 2006, pp. 1984-1991.
[8]
[9] He K., Sun J., and Tang X. (2013), Guided image filtering, IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397-1409.
[10] Qingsong Zhu, Jiaming Mai, and Ling Shao (2015), A fast single image haze removal algorithm using color attenuation prior, IEEE Trans. Image Process., vol. 24, no. 11.
[11] S. G. Narasimhan and S. K. Nayar, Chromatic framework for vision in bad weather, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2000, pp. 598-605.
[12] S. K. Nayar and S. G. Narasimhan, Vision in bad weather, in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), vol. 2, Sep. 1999, pp. 820-827.
[13] Narasimhan S. G. and Nayar S. K. (2003), Interactive (de)weathering of an image using physical models, in Proc. IEEE Workshop Color Photometric Methods Comput. Vis., vol. 6.
[14] R. T. Tan, Visibility in bad weather from a single image, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2008, pp. 1-8.
[15] Narasimhan S. G. and Nayar S. K. (2003), Contrast restoration of weather degraded images, IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713-724.
[16] S.-C. Pei and T.-Y. Lee, Nighttime haze removal using color transfer pre-processing and dark channel prior, in Proc. 19th IEEE Conf. Image Process. (ICIP), Sep./Oct. 2012, pp. 957-960.
[17] Gibson K. B., Vo D. T., and Nguyen T. Q. (2012), An investigation of dehazing effects on image and video coding, IEEE Trans. Image Process., vol. 21, no. 2, pp. 662-673.
[18] Yu J., Xiao C., and Li D. (2010), Physics-based fast single image fog removal, in Proc. IEEE 10th Int. Conf. Signal Process. (ICSP), Oct. 2010, pp. 1048
[19] R. Fattal, Single image dehazing, ACM Trans. Graph., vol. 27, no. 3, p. 72, Aug. 2008.
[20] Tan R. T. (2008), Visibility in bad weather from a single image, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1-8.
[21] Tomasi C. and Manduchi R. (1998), Bilateral filtering for gray and color images, in Proc. 6th Int. Conf. Comput. Vis. (ICCV), pp. 839-846.
[22] Fattal R. (2008), Single image dehazing, ACM Trans. Graph., vol. 27, no. 3, p. 72.
PG student, Department of EEE,
Vivekanandha college of engineering for women,
Tiruchengode.
Abstract: This project presents a new system configuration of the front-end rectifier stage for a hybrid
wind/photovoltaic energy system. As power demand increases, power failures also increase, so
renewable energy sources can be used to supply constant loads. Hybridizing solar and wind power
sources provides a realistic form of power generation. In this topology, the wind and solar energy
sources are incorporated together using a combination of Cuk and SEPIC converters. This configuration
allows the two sources to supply the load separately or simultaneously, depending on the availability of
the energy sources. The fused multi-input rectifier stage also allows Maximum Power Point Tracking
(MPPT) to be used to extract maximum power from the sun when it is available; an Incremental Conductance
algorithm is used for the PV system. The average output voltage produced by the system is the sum of the
inputs of these two systems. All these advantages make the proposed hybrid system highly efficient and
reliable. Simulation results are given to highlight the merits of the proposed circuit.
Index Terms: SEPIC converter, Cuk converter, PV & wind source, MPPT
I. INTRODUCTION
Solar energy and wind energy are the two most commonly used renewable energy sources. Wind
energy has become the least expensive renewable energy technology in existence. Photovoltaic cells
convert the energy from sunlight into DC electricity. PVs offer added advantages over other renewable
energy sources in that they give off no noise and require practically no maintenance.
Hybridizing solar and wind power sources provides a realistic form of power generation: when one
source is unavailable or insufficient to meet the load demand, the other energy source can compensate
for the difference. Several hybrid wind/PV power systems with Maximum Power Point Tracking (MPPT)
control have been proposed earlier. They used separate DC/DC buck and buck-boost converters connected
in fusion in the rectifier stage to perform MPPT control for each of the renewable energy power sources.
Such systems require passive input filters to remove the high-frequency current harmonics injected into the
wind turbine generator. The harmonic content in the generator current decreases its lifespan and increases
the power loss due to heating.
In this topology, the wind and solar energy sources are incorporated together using a combination
of Cuk and SEPIC converters, so that if one of them is unavailable, the other source can compensate
for it. The fused Cuk-SEPIC converters can eliminate the HF current harmonics in the wind generator,
which eliminates the need for passive input filters in the system. These converters support step-up
and step-down operation for each renewable energy source, as well as individual and simultaneous
operation. The solar energy source is the input to the Cuk converter and the wind energy source is the
input to the SEPIC converter. The average output voltage produced by the system is the sum of the
inputs of these two systems.
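The Incremental Conductance MPPT mentioned above exploits the fact that dP/dV = 0 at the maximum power point, i.e. dI/dV = -I/V. A minimal sketch of one iteration follows; the step size, function name and exact comparisons are illustrative assumptions, not taken from the paper:

```python
def inc_cond_step(v, i, v_prev, i_prev, step=0.1):
    """One incremental-conductance iteration: return the next voltage reference."""
    dv = v - v_prev
    di = i - i_prev
    if dv == 0:
        if di == 0:
            return v                   # operating point unchanged: stay put
        return v + step if di > 0 else v - step
    if di / dv == -i / v:
        return v                       # dP/dV = 0: at the maximum power point
    if di / dv > -i / v:
        return v + step                # left of the MPP: raise the voltage
    return v - step                    # right of the MPP: lower the voltage
```

For a toy PV curve I = 10 - V (MPP at V = 5), an operating point at V = 6 is pushed back down and one at V = 4 is pushed up.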
II. DC-DC CONVERTERS
DC-DC converters can be used as switching mode regulators to convert an unregulated dc voltage
to a regulated dc output voltage. The regulation is normally achieved by PWM at a fixed frequency and the
switching device is generally BJT, MOSFET or IGBT.
A. Cuk converter
The Cuk converter is a type of DC-DC converter whose output voltage magnitude can be either
greater than or less than the input voltage magnitude, and it provides a negative (inverting) output
voltage. This converter always operates in continuous conduction mode. The Cuk converter operates
when M1 is turned on, the
B. SEPIC converter
The single-ended primary-inductor converter (SEPIC) is a type of DC-DC converter that allows the voltage
at its output to be greater than, less than, or equal to that at its input. It is similar to a buck-boost
converter and is capable of both step-up and step-down operation. The output polarity of the converter is
positive with respect to the common terminal.
The capacitor C1 blocks any DC current path between the input and the output. The anode of the
diode D1 is connected to a defined potential. When the switch M1 is turned on, the input voltage, Vin
appears across the inductor L1 and the current IL1 increases. Energy is also stored in the inductor L2 as
soon as the voltage across the capacitor C1 appears across L2. The diode D1 is reverse biased during this
period. But when M1 turns off, D1 conducts. The energy stored in L1 and L2 is delivered to the output, and
C1 is recharged by L1 for the next period.
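Under the usual ideal continuous-conduction-mode assumptions, both converters share the same voltage-gain magnitude D/(1-D) as a function of duty cycle D, with the Cuk output inverted, which is what lets either converter step up or step down around D = 0.5. A small sketch (the ideal-gain formulas are textbook results, not derived in this paper):

```python
def cuk_gain(d):
    """Ideal CCM Cuk converter voltage gain: inverting, -D/(1-D)."""
    return -d / (1.0 - d)

def sepic_gain(d):
    """Ideal CCM SEPIC converter voltage gain: non-inverting, D/(1-D)."""
    return d / (1.0 - d)
```

A duty cycle below 0.5 steps the voltage down and one above 0.5 steps it up; for example, sepic_gain(0.25) is 1/3 while sepic_gain(0.75) is 3.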
III. PROPOSED HYBRID SYSTEM
In order to eliminate the problems of stand-alone PV and wind systems and to meet the load
demand, the only solution is to combine two or more renewable energy sources. The proposed
input-side converter topology, with a maximum power point tracking method, can meet the load and
serve grid-connected as well as commercial loads. The implementation of the new converter
topology eliminates the lower-order harmonics present in the hybrid power system circuit.
A. BLOCK DIAGRAM
B. CIRCUIT DIAGRAM
The PV array is the input to the Cuk converter and the wind source is the input to the SEPIC converter. The
converters are fused together by reconfiguring the two existing diodes from each converter and sharing
the Cuk output inductor with the SEPIC converter. This configuration allows each converter to operate
normally and individually in the event that one source is unavailable. When only the wind source is available, the
Figure 10: DC Output voltage Waveform of Solar Energy System in Boost mode
V. CONCLUSION
In this paper a new multi-input Cuk-SEPIC rectifier stage for a hybrid wind/solar energy system has
been presented. It supports step-up/step-down operation for each renewable source. Both converters
are used efficiently to improve the system efficiency and the voltage profile. Additional input
filters are not necessary to filter out high-frequency harmonics, MPPT can be realized for each source,
and individual and simultaneous operation are supported. A current-sharing approach of varying complexity
has been proposed. The advantages of parallel-connected power supplies are low component stress,
increased reliability, ease of maintenance and repair, and good thermal management. The presence of the
current-sharing loop has been clearly shown to achieve good performance when paralleling these converters.
The input voltage of the Cuk converter is 12 V and its output voltage is 34 V; the SEPIC converter input
voltage is 12 V and its output voltage is 37 V. When the Cuk and SEPIC converters are combined, the input
voltage is 24 V and the output voltage is 42 V. A MATLAB Simulink model has been developed and compared
with the parallel schemes.
I. INTRODUCTION
Digital watermarking is a data-hiding technique developed for purposes such as identification,
copyright protection and classification of digital media content. In this technique, secret information
referred to as a watermark is embedded into the digital media content; in the decoder, the watermark
information is extracted from the watermarked signal in a lossless manner, even though the original
signal cannot always be obtained back. In some important applications such as military, forensic and
medical imaging, distortion of the original signal may cause fatal results. For instance, a small
distortion in a medical image might interfere with the accuracy of diagnosis. Distortion issues that can
arise in such applications are fixed by reversible watermarking techniques. Reversible image
watermarking algorithms can be divided into five groups: lossless compression based algorithms, difference
expansion based algorithms, histogram shifting based algorithms, prediction error expansion based
algorithms and integer-to-integer transform based algorithms. The performance of a watermarking algorithm
is characterized by three factors: visual quality, payload capacity and computational complexity.
A hardware implementation can be designed on a field programmable gate array (FPGA) board or as a custom
integrated circuit. The difference between FPGA and custom IC implementation is a trade-off among
cost, power consumption and performance. Hardware implementation using an FPGA has the advantages of low
investment cost, a simpler design cycle, field programmability and desktop testing, with medium processing
speed. On the other side, due to lower unit cost, full custom capability and better integration, a custom
application-specific integrated circuit design may be more useful. In past years, FPGAs
were selected primarily for lower-speed, lower-complexity, lower-volume designs, but today's FPGAs can
easily push up to the 500 MHz performance barrier.
A literature survey was carried out over papers that describe the previously
available techniques, their advantages, and their limitations. It also includes the various papers supporting
the proposed technique. There are many techniques available for reversible image
watermarking; the reversible contrast mapping method provides a way to embed and extract the watermark.
The practice of secretly hiding and communicating information has gained immense importance in the last two
decades due to advances in the generation, storage, and communication of digital content.
Watermarking is a solution for tamper detection and protection of digital content, but watermarking can
damage the information present in the cover work, so at the receiving end the exact reconstruction
of the work may not be possible. In addition, there exist certain applications that cannot tolerate even
small distortions in the work prior to downstream processing. In such applications, reversible
Fig. 2 indicates that the resultant value is left shifted by 1 bit with 1 padding; similarly, the watermark
bit is embedded into the LSB of y. Fig. 3 indicates the Step-2 operation of watermark embedding, where
the LSB of x is made 0 by two consecutive shifting operations: the value of x is first right shifted by 1 bit to
discard its LSB, then a 1-bit left shift with 0 padding is performed to generate the final result. In
a similar way, the watermark data is embedded into the LSB of y. Fig. 4 indicates Step-3, which is the simplest
among the three.
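As a rough illustration of the Step-2 shift operations described above, the following sketch clears the LSB of x by a right shift followed by a left shift with 0 padding, and places the watermark bit in the LSB of y. It is deliberately simplified: it omits the bookkeeping an actual reversible scheme needs to later restore the discarded LSB of x.

```python
def embed_step2(x, y, w):
    """Clear the LSB of x via a right-then-left shift (0 padding),
    and embed watermark bit w into the LSB of y."""
    x_marked = (x >> 1) << 1              # LSB of x forced to 0
    y_marked = ((y >> 1) << 1) | (w & 1)  # LSB of y carries the watermark bit
    return x_marked, y_marked

def extract_step2(y_marked):
    """Recover the embedded watermark bit from the LSB of y."""
    return y_marked & 1
```

For the pixel pair (13, 10) with watermark bit 1, the pair becomes (12, 11) and the bit is read back from the LSB of the second value.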
IV. WATERMARK EXTRACTION
The steps are similar to those in the watermark embedding process. Here the watermarked image is partitioned
into smaller blocks of size 8x8 or 32x32, and each block is again partitioned into pairs of pixels, using
Fig. 6 illustrates Step-1 of watermark extraction: the input x1 is left shifted by 1 bit two times, added
to the value w, and divided by 3; the output is x. The same process is repeated for y1. Fig. 7 illustrates
Step-2 of watermark extraction: the input x1 is right shifted by 1 bit and then left shifted by 1 bit with
1 padding to give the output x; the same process is repeated for y, and the LSB is also extracted. Fig. 8
illustrates Step-3 of watermark extraction: the input x1 is right shifted by 1 bit and then left shifted by
1 bit with the extracted payload bit as padding, and the output is x. Next, y1 is given as input and passes
through unchanged; the output is y.
V. HDL CODER
A hardware description language (HDL) enables a precise, formal description of an electronic circuit that
allows for automated analysis, simulation, and testing of the circuit. It also allows
an HDL program to be compiled into a lower-level specification of physical electronic components,
such as the set of masks used to create an integrated circuit. The HDL Workflow Advisor in HDL Coder
automatically converts MATLAB code from floating point to fixed point and generates synthesizable
VHDL and Verilog code. This capability lets the designer model the algorithm at a high level using abstract
MATLAB constructs and System objects while providing options for generating HDL code that is optimized for
hardware implementation.
VI. RESULTS AND DISCUSSION
Fig. 9 shows the input image, Fig. 10 the secret image, Fig. 11 the watermark image containing the
secret image, Fig. 12 the stego image, Fig. 13 the extracted watermark, and Fig. 14 the semi-custom
layout. After generating the HDL code, a netlist is created; using this netlist, the semi-custom layout
is implemented in Mentor Graphics.
C.W. Honsinger, P.W. Jones, M. Rabbani, J.C. Stoffel (2001), Lossless recovery of an original image
containing embedded data, U.S. Patent No. 6,278,791.
[3] D. Zheng, Y. Liu, J. Zhao, A. El Saddik (2007), A survey of RST invariant image watermarking
algorithms, ACM Comput. Surv. 39 (2).
[4] G. Xuan, C. Yang, Y. Zhen, Y.Q. Shi, Z. Ni (2005), Reversible data hiding using integer wavelet
transform and companding technique, in: Lecture Notes in Computer Science, Digital Watermarking,
vol. 3304, Springer, Berlin, Heidelberg, pp. 115-124.
[5]
[6] J.-M. Guo (2008), Watermarking in dithered halftone images with embeddable cells selection and
inverse halftoning, Signal Process., pp. 1496-1510.
[7] J.-M. Guo, J.-J. Tsai (2012), Reversible data hiding in low complexity and high quality compression
scheme, Digit. Signal Process., pp. 776-785.
[8] J. Feng, I. Lin, C. Tsai, Y. Chu (2006), Reversible watermarking: current status and key issues, Int.
J. 2 (3), pp. 161-170.
[9]
[10] K.S. Kim, M.J. Lee, H.Y. Lee, H.K. Lee, Reversible data hiding exploiting spatial correlation
between sub-sampled images, Pattern Recognit. 42 (2009) 3083-3096.
[11] X. Li, B. Yang, T. Zeng, Efficient reversible watermarking based on adaptive prediction-error
expansion and pixel selection, IEEE Trans. Image Process. 20 (2011) 3524-3533.
[12] C.C. Lin, N.L. Hsueh, A lossless data hiding scheme based on three-pixel block differences, Pattern
Recognit. 41 (2008) 1415-1425.
[13] C.C. Lin, S.P. Yang, N.L. Hsueh, Lossless data hiding based on difference expansion without a
location map, in: Congress on Image and Signal Processing, Vol. 2, 2008, pp. 8-12.
[14] Z. Ni, Y.Q. Shi, N. Ansari, W. Su, Reversible data hiding, IEEE Trans. Circuits Syst. Video Technol.
16 (2006) 354-362.
[15] D. Thodi, J. Rodriguez,
I. INTRODUCTION
Cloud computing has become one of the most discussed topics in the IT world today. Its model of
computing as a resource has changed the landscape of computing as we know it, and its promises of increased
flexibility, greater reliability, massive scalability, and decreased costs have attracted businesses and
individuals alike.
Cloud computing, as defined by NIST, is a model for enabling convenient, on-demand
network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with
minimal management effort or service provider interaction [1]. It is a new model of providing computing
resources that builds on existing technologies. At the heart of cloud computing is a datacenter that uses
virtualization to isolate instances of the applications or services being hosted on the cloud. The datacenter
gives cloud users the ability to hire computing resources at a rate dependent on the datacenter services
being requested by the cloud user. Refer to the NIST definition of cloud computing [1] for the core tenets
of cloud computing.
In this paper, we refer to the organization providing the datacenter and related management services
as the cloud provider. We refer to the organization using the cloud to host applications as the cloud
service provider (CSP). Lastly, we refer to the individuals and/or organizations using the cloud services
as the cloud clients or cloud users.
NIST defines three main service models for cloud computing:
Software as a Service (SaaS) - The cloud provider gives the cloud consumer the capability to use
the provider's applications running on a cloud infrastructure [1].
Platform as a Service (PaaS) - The cloud provider gives the cloud consumer the capability to
develop and deploy applications on a cloud infrastructure using tools, runtimes, and services
supported by the provider [1].
Infrastructure as a Service (IaaS) - The cloud provider gives the cloud consumer essentially
a virtual machine. The cloud consumer has the ability to provision processing, storage, networks,
etc., and to deploy and run arbitrary software supported by the operating system run by the virtual
machine [1].
307
NIST also defines four deployment models for cloud computing: public, private, hybrid, and
community clouds.
Private cloud - The cloud infrastructure is provisioned for exclusive use by a single organization
comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by
the organization, a third party, or some combination of them, and it may exist on or off premises.
Community cloud - The cloud infrastructure is provisioned for exclusive use by a specific
community of consumers from organizations that have shared concerns (e.g., mission, security
requirements, policy, and compliance considerations). It may be owned, managed, and operated
by one or more of the organizations in the community, a third party, or some combination of them,
and it may exist on or off premises.
Public cloud - The cloud infrastructure is provisioned for open use by the general public. It may be
owned, managed, and operated by a business, academic, or government organization, or some
combination of them. It exists on the premises of the cloud provider.
Hybrid cloud - The cloud infrastructure is a composition of two or more distinct cloud infrastructures
(private, community, or public) that remain unique entities, but are bound together by standardized
or proprietary technology that enables data and application portability (e.g., cloud bursting for
load balancing between clouds).
One of the most appealing factors of cloud computing is its pay-as-you-go model of computing as
a resource. This new model of computing has allowed businesses and organizations in need of computing
power to purchase as many resources as they need without having to put forth a large capital investment in
the IT infrastructure. Other advantages of cloud computing are massive scalability and increased flexibility
for a relatively constant price [2].
Despite the many advantages of cloud computing, many large enterprises are hesitant to adopt
cloud computing to replace their existing IT systems. In the Cloud Computing Services Survey done by
a. Audit and Compliance: This subsystem addresses the data collection, analysis, and archival
requirements in meeting standards of proof for an IT environment. It captures, analyzes, reports,
archives, and retrieves records of events and conditions during the operation of the system [4].
b. Access Control: This subsystem enforces security policies by gating access to processes and
services within a computing solution via identification, authentication, and authorization [4]. In
the context of cloud computing, all of these mechanisms must also be considered from the view of
a federated access control system.
c. Flow Control: This subsystem enforces security policies by gating information flow and visibility
and ensuring information integrity within a computing solution [4].
d. Identity and Credential Management: This subsystem creates and manages identity and permission
objects that describe access rights information across networks and among the subsystems,
platforms, and processes, in a computing solution [4]. It may be required to adhere to legal criteria
for creation and maintenance of credential objects.
e. Solution Integrity: This subsystem addresses the requirement for reliable and proper operation of
a computing solution [4].
[2] Armbrust, M. et al. (2009), Above the Clouds: A Berkeley View of Cloud Computing, UC
Berkeley EECS, Feb 2009.
[3] Ramgovind, S.; Eloff, M.M.; Smith, E., "The management of security in Cloud computing,"
Information Security for South Africa (ISSA), 2010, pp. 1-7, 2-4 Aug. 2010.
[4] IBM Corporation, Enterprise Security Architecture Using IBM Tivoli Security Solutions, Aug 2007.
[5] Cloud Security Alliance, Security Guidance for Critical Areas of Focus in Cloud Computing V2.1,
2009.
Cloud Computing Use Case Discussion Group, Cloud Computing Use Cases Whitepaper v4.0, July
2010.
[8] Shiping Chen; Nepal, S.; Ren Liu, "Secure Connectivity for Intra-cloud and Inter-cloud
Communication," Parallel Processing Workshops (ICPPW), 2011 40th International Conference on,
pp. 154-159, 13-16 Sept. 2011.
[9] Xiao Zhang; Hong-tao Du; Jian-quan Chen; Yi Lin; Lei-jie Zeng, "Ensure Data Security in Cloud
Storage," Network Computing and Information Security (NCIS), 2011 International Conference on,
vol. 1, pp. 284-287, 14-15 May 2011.
[10] Xiaojun Yu; Qiaoyan Wen, "A View about Cloud Data Security from Data Life Cycle," Computational
Intelligence and Software Engineering (CiSE), 2010 International Conference on, pp. 1-4,
10-12 Dec. 2010.
[11] He Yuan Huang; Bin Wang; Xiao Xi Liu; Jing Min Xu, "Identity Federation Broker for Service
Cloud," Service Sciences (ICSS), 2010 International Conference on, pp. 115-120, 13-14
May 2010.
[12] Shigang Chen, Meongchul Song, Sartaj Sahni, Two Techniques for Fast Computation of Constrained
Shortest Paths, IEEE/ACM Transactions on Networking, vol. 16, no. 1, pp. 105-115, February 2008.
[13] King-Shan Lui, Klara Nahrstedt, Shigang Chen, Hierarchical QoS Routing in Delay-Bandwidth
Sensitive Networks, in Proc. of IEEE Conference on Local Area Networks (LCN 2000), pp. 579-588, Tampa, FL, November 2000.
[14] Shigang Chen, Yi Deng, Paul Attie, Wei Sun, Optimal Deadlock Detection in Distributed Systems
Based on Locally Constructed Wait-for Graphs, in Proc. of 16th IEEE International Conference on
Distributed Computing Systems (ICDCS'96), Hong Kong, May 1996.
I. INTRODUCTION
In mobile ad hoc networks (MANETs), the movement of nodes can partition the network,
so that nodes in one partition cannot access data held by nodes in other partitions. File
replication is an effective way to improve file availability in distributed systems. By replicating
a file at mobile nodes that are not the owner of the source file, availability can
be improved, because multiple replicas exist in the network and the probability of
finding one copy of the file is higher. File replication can also reduce query
delay, since mobile nodes can obtain the file from a nearby replica. However, most
mobile nodes have only a limited amount of memory, transmission range, and power, so
it is difficult for one node to collect and hold all files. These constraints, together with the
autonomy of nodes in MANETs, cause file unavailability for requesters. When a mobile node
replicates only part of the files, there is a trade-off between query delay and file
availability.
MANETs differ significantly from wired networks in network topology, network configuration,
and network resources. Key features of MANETs are a dynamic topology due to host
movement, network partitioning due to unreliable communication, and scarce resources such as
limited power and limited memory capacity [1, 2]. File sharing is one of the important functions
to be supported in MANETs; without this facility, the performance and usefulness of a MANET are greatly
reduced [3]. A typical example where file sharing matters is a conference in which several
users share their presentations while discussing a particular issue; it is also applicable to
defence applications, rescue operations, disaster management, and so on. The method used for file sharing
depends heavily on the characteristics of the MANET [3]. Frequent network partitions due to host
movement or limited battery power reduce file availability in the network. To overcome
file unavailability, the replication technique addresses all these problems so that files are available at all
times in the network.
File replication
File replication is a technique that improves file availability by creating copies of a
file. Replication allows better file sharing and is a key approach to achieving high availability. File
replication has been widely used to maximize file availability in distributed systems, and we
apply this technique to MANETs. It serves to minimize the response time of access requests,
to distribute the load of processing these requests over several servers, and to eliminate the overload of
the transmission paths to a single server. The replicas are accessed at varying times.
315
Multiple replicas of files improve file availability and reliability in the case of
network failures.
B. Faster query response
Queries initiated from nodes where replicas are stored can be satisfied directly, without
the network transmission delays of contacting remote nodes.
C. Load sharing
The computational load of responding to queries can be distributed over a number of nodes in
the network.
RESEARCH ISSUES RELATED TO FILE REPLICATION
A. Power consumption
The mobile nodes in a MANET run on battery power. If a node with little remaining power is loaded
with many frequently accessed files, it soon gets drained and cannot provide services any more. Thus
a replication algorithm should place files on nodes that have sufficient power, by periodically checking
the remaining battery power of each node.
B. Node mobility
In a MANET, hosts are mobile, which leads to a dynamic topology. Thus the replication technique has to
support movement prediction, so that if a host is likely to move away from the network, its replicas are
relocated to other nodes that are expected to remain in the network for a given period of time.
C. Resource availability
The nodes participating in a MANET are portable hand-held devices, so storage capacity is limited.
Before sending a replica to a node, the technique has to determine whether the node has sufficient storage
capacity to hold the replicated files.
D. Real-time applications
MANET applications such as rescue and military operations are time-critical and may have both firm
and soft real-time transactions. Therefore, the replication technique should be able to deliver correct
information before processing deadlines expire, taking both firm and soft real-time transaction types
into consideration in order to minimize the number of transactions missing their deadlines.
E. Network partitioning
Because of the frequent disconnection of mobile nodes, network partitioning occurs more often in MANET
databases than in traditional databases. Network partitioning is an important problem in a MANET when the
server that contains the required file is isolated in a separate partition, reducing file accessibility to a
large extent. Therefore, the replication technique should be able to determine the time at which network
PERFORMANCE
Each technique was evaluated in simulation tests on NS-2. We compare the hit rates and average delays
of the four protocols.
We used the following metrics in the experiments:
Hit Rate
The number of requests successfully handled by either original files or replica files.
Average delay
This is the average time taken by all requests that finish execution; the delay is calculated from the
throughput and completion of the requests.
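These two metrics can be computed directly from a simulation trace. The record format below is an illustrative assumption for this sketch, not the NS-2 trace format:

```python
def hit_rate(requests):
    """Fraction of requests served by either an original file or a replica.
    Each request is a dict with a boolean 'served' flag and a 'delay' value."""
    return sum(1 for r in requests if r["served"]) / len(requests)

def average_delay(requests):
    """Mean delay over the requests that finished execution."""
    delays = [r["delay"] for r in requests if r["served"]]
    return sum(delays) / len(delays)
```

For example, a trace with two served requests out of three yields a hit rate of 2/3, with the average delay taken over the two served requests only.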
Hit Rate
Fig. 4(a) shows the hit rates of the four methods in the simulation results. The hit rates follow
PBDR > SAF > DAFN > DCG, so PBDR achieves a higher hit rate than the other methods. Since PBDR operates
in a distributed way, its performance differs slightly from the others: PBDR considers the
intermittent connection properties of disconnected MANETs when placing replicas. DCG only considers a
temporarily connected group for file replication, which is not stable in MANETs; therefore it has
a low hit rate. Random assignment of resources to files cannot create more replicas
for popular files, leading to the lowest hit rate. This result demonstrates the effectiveness of the proposed
PBDR in improving the overall file availability, and the correctness of our analysis for MANETs.
320
Average Delay
Fig. 4(b) shows the average delays of the four methods in the simulation results. The average
delays follow PBDR < SAF < DAFN < DCG, which is the reverse of the hit-rate ordering of the four
methods in Fig. 4(a). This is because the average delay is inversely related to the overall file
availability. PBDR has high file availability; SAF distributes every file to
different nodes, while DCG only shares data among temporarily identified neighbor nodes, and DAFN
has low file availability since all files receive an equal amount of memory resources for replicas. PBDR
has the minimum average delay in the simulation results.
Replication Cost
Fig. 4(c) shows the replication costs of the four methods. PBDR has the lowest replication cost,
the ordering of the four methods being PBDR < DAFN < DCG < SAF. In PBDR, nodes only need
to contact the file server for the replica list, leading to the lowest cost. DCG generates a high
replication cost, since when the network partitions its members need to transfer a large number of files to remove
duplicate replicas. In PBDR, a node tries at most K times to create a replica for each of its files, producing
a much lower replication cost than SAF and DCG. This result demonstrates the high energy efficiency
of PBDR. Combining all the above results, we conclude that PBDR has the highest overall file availability and
efficiency compared to existing methods, and that PBDR is effective for file replication in MANETs.
The resources are therefore allocated more strictly under PBDR, leading to greater efficiency, while the
other replication protocols have higher replication costs. Among the other three methods, which favor popular
files, we find that the closer a protocol's behavior is to PBDR, the better it performs; PBDR has the best
overall performance in MANETs. The storage-capacity limits of file replication can be overcome despite the file
dynamics, and distributing files among all the nodes of the distributed network yields better performance, with
files spread across the different partitions. This supports the correctness of our theoretical analysis for
MANETs.
CONCLUSION
In this paper, we analyze the problem of how to allocate limited replication resources and
manage them in MANETs. While previous protocols consider only storage resources,
we also consider dynamic file additions and deletions in peer-to-peer communication in
distributed systems. The Priority Based Dynamic Replication (PBDR) technique efficiently adds
and deletes file replicas and manages them over particular time intervals. The NS-2 simulator
was used to analyze the effectiveness of the PBDR technique: the hit rate is higher than with the previous
protocols, the average query delay is reduced, and the replication cost is lower. Finally,
the PBDR protocol minimizes the average response delay in MANETs.
REFERENCES
[1] C. Siva Ram Murthy and B.S. Manoj, Ad Hoc Wireless Networks, Pearson Education, Second
Edition, India, 2001.
[2]
[3] Lixin Wang, File Sharing on a Mobile Ad Hoc Network, Master Thesis, Department of Computer
Science, University of Saskatchewan, Canada, 2003.
[4] Kang Chen, Maximizing P2P File Access Availability in Mobile Ad Hoc Networks through
Replication for Efficient File Sharing, IEEE Transactions on Computers, Vol. 64, No. 4,
April 2015.
[5] Yang Zhang et al., Balancing the Trade-Offs between Query Delay and Data Availability in
MANETs, IEEE Transactions on Parallel and Distributed Systems, Vol. 23, No. 4, pp. 643-650,
2012.
[6] T. Hara, Effective replica allocation in ad hoc networks for improving data accessibility, IEEE
INFOCOM, 2001.
[7]
Q. Ren, M. Dunham, and V. Kumar, Semantic caching and query processing, IEEE Transactions
on Knowledge and Data Engineering, Vol. 15, No. 1, pp. 192-210, 2003.
[9] F. Sailhan and V. Issarny, Scalable service discovery for MANET, IEEE International Conference
on Pervasive Computing and Communications, pp. 235-244, 2005.
[10] L. Yin and G. Cao, Supporting cooperative caching in ad hoc networks, IEEE Transactions on
Mobile Computing, Vol. 5, No. 1, pp. 77-89, 2006.
[11] J. Cao, Y. Zhang, G. Cao, and L. Xie, Data consistency for cooperative caching in mobile
environments, IEEE Computer, Vol. 40, No. 4, pp. 60-66, 2007.
[12] B. Tang, H. Gupta, and S. Das, Benefit-based data caching in ad hoc networks, IEEE Transactions
on Mobile Computing, Vol. 7, No. 3, pp. 289-304, 2008.
[13] X. Zhuo, Q. Li, W. Gao, G. Cao, and Y. Dai, Contact Duration Aware Data Replication in Delay
Tolerant Networks, Proc. IEEE 19th Int'l Conf. Network Protocols (ICNP), 2011.
[14] X. Zhuo, Q. Li, G. Cao, Y. Dai, B.K. Szymanski, and T.L. Porta, Social-Based Cooperative
Caching in DTNs: A Contact Duration Aware Approach, Proc. IEEE Eighth Int'l Conf. Mobile
Ad Hoc and Sensor Systems (MASS), 2011.
[15] Z. Li and H. Shen, SEDUM: Exploiting Social Networks in Utility-Based Distributed Routing
for DTNs, IEEE Trans. Computers, vol. 62, no. 1, pp. 83-97, Jan. 2013.
[21] V. Gianuzzi, Data Replication Effectiveness in Mobile Ad-Hoc Networks, Proc. ACM First Int'l
Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks
(PE-WASUN), pp. 17-22, 2004.
[16] S. Chessa and P. Maestrini, Dependable and Secure Data Storage and Retrieval in Mobile Wireless
Networks, Proc. Int'l Conf. Dependable Systems and Networks (DSN), 2003.
[17] X. Chen, Data Replication Approaches for Ad Hoc Wireless Networks Satisfying Time
Constraints, Int'l J. Parallel, Emergent and Distributed Systems, vol. 22, no. 3, pp. 149-161, 2007.
[18] J. Broch, D.A. Maltz, D.B. Johnson, Y. Hu, and J.G. Jetcheva, A Performance Comparison of
Multi-Hop Wireless Ad Hoc Network Routing Protocols, Proc. ACM MOBICOM, pp. 85-97,
1998.
[19] M. Musolesi and C. Mascolo, Designing Mobility Models Based on Social Network Theory, ACM
SIGMOBILE Mobile Computing and Comm. Rev., vol. 11, pp. 59-70, 2007.
[20] P. Costa, C. Mascolo, M. Musolesi, and G.P. Picco, Socially-Aware Routing for Publish-Subscribe
in Delay-Tolerant Mobile Ad Hoc Networks, IEEE J. Selected Areas in Comm., vol. 26, no. 5, pp.
748-760, June 2008.
NOMENCLATURE
PV      Photovoltaic system
MPPT    Maximum Power Point Tracking
DVR     Dynamic Voltage Restorer
SMC     Sliding Mode Control
SDCS    Separate DC Sources
P&O     Perturb and Observe
Index Terms: Dynamic Voltage Restorer, H-bridge multilevel inverter, Photovoltaic system,
The photovoltaic cells convert the incident photons to electron-hole pairs. The photovoltaic
module is formed by associating a group of PV cells in series and parallel, and it represents the
conversion unit in this generation system. The relationship between the PV cell output current and
terminal voltage according to the single-diode model is governed by equations (1), (2), and (3).
Practical modules are composed of several connected PV cells, which requires the inclusion of
additional parameters, Rs and Rp, as given by (3), where Iph is the current generated by the incident
light, ID is the diode current, I0 is the reverse saturation current, q is the electron charge, k is the
Boltzmann constant, a is the ideality factor, T is the temperature, Rs is the series resistance, and Rp
is the parallel resistance.
A number of cells are connected to form a PV array. The equivalent circuit of the PV array
is shown in Fig. 1, where Ns is the number of cells in series and Np the number of cells in parallel.
The cells connected in series provide a greater voltage output, and similarly the cells connected in
parallel provide greater current output.
The I-V characteristic of a PV device depends not only on its internal characteristics but also
on external influences such as temperature and irradiation, as captured by equation (4).
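Since equations (1)-(3) express the standard single-diode model, a numerical sketch may help to make the relationship concrete. All parameter values below are illustrative assumptions, not those of the module studied here:

```python
import math

def pv_current(V, Iph=8.0, I0=1e-9, a=1.3, T=298.0, Rs=0.005, Rp=100.0, Ns=36):
    """Single-diode model for a module of Ns series cells:
    I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rp,
    solved by fixed-point iteration. Parameter values are illustrative only."""
    q, k = 1.602e-19, 1.381e-23
    Vt = a * k * T * Ns / q       # modified thermal voltage for Ns series cells
    I = Iph                       # start from the photogenerated current
    for _ in range(100):          # fixed-point iteration on the implicit equation
        I = Iph - I0 * math.expm1((V + I * Rs) / Vt) - (V + I * Rs) / Rp
    return I
```

Near short circuit the output current stays close to Iph, and it falls off sharply as the terminal voltage approaches the open-circuit voltage, reproducing the familiar I-V knee.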
MPPT, or Maximum Power Point Tracking, is a technique used for extracting the maximum
available power from a PV module under given conditions. The voltage at which the PV module can
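The perturb-and-observe (P&O) method named in the nomenclature is one common way to perform this tracking. The sketch below is a minimal illustration; the step size, starting voltage, and power-measurement callback are assumptions, not the controller used in this work:

```python
def perturb_and_observe(measure_power, v_start=17.0, dv=0.1, steps=200):
    """Simple P&O MPPT sketch: perturb the operating voltage and keep moving
    in the direction that increased the measured PV power. Illustrative only."""
    v = v_start
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v                      # voltage oscillating near the maximum power point
```

Run against a concave power-voltage curve, the operating voltage climbs toward the peak and then oscillates around it with an amplitude set by the perturbation step dv.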
Sliding mode control is a form of variable structure control. It is a nonlinear control that alters
the dynamics of a nonlinear system by the application of a high-frequency switching signal. The main
strength of sliding mode control is its robustness [4].
Sliding mode control is a nonlinear control methodology that uses a combination of a
continuous equivalent control (ueq) and a discontinuous switching control (usw), which together
force the state trajectory to reach a predefined sliding (switching) surface s in the phase plane
and then to remain on the surface, in the sliding mode, until the desired state is reached. The
equivalent control ensures that the operating point slides along the sliding surface until the error
approaches zero. The dynamics of the system are given by equation (5), which tends to zero when the
poles lie in the left half-plane; overshoot does not occur, and the system acts as a state feedback
control system. The principle of sliding mode control is shown in Fig. 3.
X'(t) = (A + BK)X(t)    (5)
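Equation (5) says the closed-loop state decays to zero when the poles of A + BK lie in the left half-plane. A small Euler-integration sketch makes this concrete; the system matrices and gain K below are illustrative assumptions, not the DVR plant of this paper:

```python
# Closed-loop dynamics X'(t) = (A + B K) X(t) for an illustrative 2-state system.
A = [[0.0, 1.0], [0.0, 0.0]]   # double integrator (illustrative plant)
B = [0.0, 1.0]
K = [-2.0, -3.0]               # places closed-loop poles at -1 and -2

def step(x, dt=0.001):
    """One Euler step of x' = A x + B u with state feedback u = K x."""
    u = K[0] * x[0] + K[1] * x[1]
    dx0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u
    dx1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u
    return [x[0] + dt * dx0, x[1] + dt * dx1]

x = [1.0, 0.0]                 # initial error state
for _ in range(10000):         # simulate 10 s
    x = step(x)
# with stable poles, the state has decayed close to zero
```

With poles at -1 and -2, the error decays roughly as e^(-t), so after 10 seconds the remaining state is negligible, matching the claim that the error approaches zero along the surface.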
A Dynamic Voltage Restorer (DVR) is a solid-state device that injects voltage into the system
in order to regulate the load-side voltage [13]. The basic components of the DVR, shown in Fig. 5,
comprise an injection transformer, a harmonic filter, a voltage source converter, and a DC charging
unit. The primary function is to boost the load-side voltage in the event of a disturbance, in order
to avoid power disruptions to the load. The difference between the pre-sag voltage and the sag voltage
is injected by the DVR, which supplies real power from the energy storage element as well as reactive
power. During normal operation, when there is no sag, the DVR does not supply any voltage to the load.
The momentary amplitude of the three injected phase voltages is controlled. This means that
any differential voltage caused by transient disturbances in the AC feeder will be compensated by
Fig. 5 shows the overall block diagram of the system. Figs. 7 and 8 show the PV output and the
DC-DC converter output.
The multilevel inverter output is shown in Fig. 9. Figs. 10 and 11 show the load output voltage
before and after interfacing with the DVR, respectively.
IV. CONCLUSION
In this work, the voltage on the distribution side was improved when a disturbance occurs in the
load feeder by means of a DVR, which provides excellent compensation for voltage disturbances. The
simulation was carried out with a PV-interfaced multilevel inverter and DVR using MATLAB/SIMULINK software.
The future scope of the present work is to increase the number of multilevel inverter stages to ensure
harmonic-free operation. The desired results for a nonlinear load can be obtained by means of the fuzzy
Ahmed M. Massoud and Shehab Ahmed, Evaluation of Multilevel Cascaded-Type Dynamic Voltage
Restorer Employing Discontinuous Space Vector Modulation, IEEE Transactions on Industrial
Electronics, Vol. 57, No. 7, July 2010.
[3] Benachaiba Chellali and Ferdi Brahim, Voltage Quality Improvement Using DVR, Electrical
Power Quality and Utilisation Journal, Vol. 14, No. 1, 2008.
[4] Jaume Miret, Jorge Luis Sosa and Miguel Castilla, Sliding Mode Input-Output Linearization
Controller for the DC/DC ZVS Resonant Converter, IEEE Transactions on Industrial Electronics,
Vol. 59, No. 3, pp. 1554-1564, March 2012.
[5] Dash, P.P. and Yazdani, A., A Mathematical Model and Performance for a Single-Stage Grid-Connected
Photovoltaic (PV) System, International Journal of Emerging Electric Power Systems, Vol. 9,
Issue 6, Article 5, 2008.
[6] Ernesto Ruppert Filho, Jonas Rafael Gazoli and Marcelo Gradella, Comprehensive Approach to
Modeling and Simulation of Photovoltaic Arrays, IEEE Transactions on Power Electronics,
Vol. 24, No. 5, pp. 1198-1208, May 2009.
[7] Chapman, P.L. and Esram, T., Comparison of Photovoltaic Array Maximum Power Point Tracking
Techniques, IEEE Transactions on Energy Conversion, Vol. 22, No. 2, pp. 434-449, June 2007.
[8] Arindam Ghosh, Avinash Joshi and Rajesh Gupta, Performance Comparison of VSC-Based Shunt
and Series Compensators Used for Load Voltage Control in Distribution Systems, IEEE Transactions
on Power Delivery, Vol. 26, No. 1, pp. 268-278, January 2011.
[9] Ritwik Majumder, Reactive Power Compensation in Single-Phase Operation of Microgrid, IEEE
Transactions on Industrial Electronics, Vol. 60, No. 4, pp. 1403-1416, April 2013.
[10] Bhim Singh, Sabha Raj Arya, Adaptive Theory-Based Improved Linear Sinusoidal Tracer Control
Algorithm for STATCOM, IEEE Transactions on Power Electronics, Vol. 28, No. 8, pp. 3768-3778, August 2013.
[11] Basim Alsayid and Samer Alsadi, Maximum Power Point Tracking Simulation for Photovoltaic
Systems Using Perturb and Observe Algorithm, International Journal of Innovation and Technology,
Vol. 2, Issue 6, December 2012.
[12] Gaurag Sharma and Jay Patel, Modeling and Simulation of Solar Photovoltaic Module Using MATLAB/
Simulink, International Journal of Research in Engineering and Technology, Vol. 02, Issue 03,
March 2013.
[13] Boochiam, P. and Mithulanathan, N., Understanding of Dynamic Voltage Restorers through MATLAB,
Thammasat International Journal of Science and Technology, Vol. 11, No. 3, September 2006.
[14] Illindala, M., Venkataramanan, G. and Wang, B., Operation and Control of a Dynamic Voltage Restorer
Using Transformer-Coupled H-Bridge Converters, IEEE Transactions on Power Electronics, Vol. 21,
No. 4, pp. 1053-1061, July 2006.
[15] Divya Subramanian and Rebiya Rasheed, Cascaded Multilevel Inverter Using the Pulse Width
Modulation Technique, International Journal of Engineering and Innovative Technology,
Vol. 3, Issue 1, July 2013.
Abstract This paper deals with the Automatic Generation Control (AGC) of a two-area thermal-thermal
system in the restructured power system environment. The main objective of automatic generation
control is to regulate the power output of the electric generators within an area in response to changes
in the system frequency and tie-line loading. In the present competitive electricity market, fast changes
in power consumption may cause frequency oscillations. The oscillation of the system frequency
may sustain and grow into a serious frequency stability problem if no adequate damping is available.
The concept of the DISCO Participation Matrix (DPM) is introduced and applied to the two-area thermal-thermal
system. AGC in the restructured power system environment should be designed such that each DISCO
can contract individually with GENCOs for power. The DPM is presented to simulate the contracts
between GENCOs and DISCOs. Using the DPM, the dynamic responses are obtained and shown to
satisfy the AGC requirements.
NOMENCLATURE
LFC    Load Frequency Control
ACE    Area Control Error
CPF    Contract Participation Factor
Ki     Integral gain
TG     Governor time constant
TP     Power system time constant
TT     Turbine time constant
KP     Power system gain
Ptie   Tie-line power
Schd   Scheduled
Act    Actual
Index Terms AGC, DISCO, DISCO Participation Matrix, GENCO, Restructured Power System.
The areas are connected by a single transmission line. The power flow over the transmission
line appears as a positive load to one area and an equal but negative load to the other, or vice versa,
depending on the direction of flow. The direction of flow is dictated by the relative phase angle
between the areas, which is determined by the relative speed deviations of the areas.
In Fig. 1, the tie-line power flow is defined as going from area 1 to area 2. Therefore,
the flow appears as a load to area 1 and a power source (negative load) to area 2. If
one assumes that the mechanical powers are constant, the rotating masses and tie line exhibit damped
oscillatory characteristics known as synchronizing oscillations. It is quite important to analyze the
steady-state frequency deviation, tie-flow deviation and generator outputs for an interconnected
area after a load change occurs. The net tie flow is determined by the net change
in load and generation in each area.
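The steady-state outcome of a load change can be sketched numerically. The area parameters and the load step below are illustrative assumptions (not values from this paper), with beta_i denoting each area's frequency response characteristic:

```python
# Steady-state deviations after a load step in a two-area interconnection.
# The parameter values are assumed for illustration only.

def steady_state(beta1, beta2, dPL1, dPL2=0.0):
    """Return (df, dPtie12) for load changes dPL_i (pu MW) in areas 1 and 2."""
    # Both areas settle at a common steady-state frequency deviation.
    df = -(dPL1 + dPL2) / (beta1 + beta2)
    # Tie-line flow change from area 1 to area 2: area 1's generation
    # response minus its own load change.
    dPtie12 = -beta1 * df - dPL1
    return df, dPtie12

# A 0.2 pu load increase in area 1 with beta1 = beta2 = 20 pu MW/Hz:
df, dPtie12 = steady_state(beta1=20.0, beta2=20.0, dPL1=0.2)
print(df, dPtie12)   # -0.005 Hz and -0.1 pu: area 1 imports half the step
```

With equal frequency response characteristics the two areas share the load change equally, which is the classic interconnection result the paragraph above describes.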
B. Linearized model of an interconnected two area restructured power system
The traditional power system industry has a vertically integrated utility (VIU) structure. In the
restructured or deregulated environment, vertically integrated utilities no longer exist. A single utility no
longer owns generation, transmission and distribution; instead, there are three different entities, viz.,
GENCOs (generation companies), TRANSCOs (transmission companies) and DISCOs (distribution
companies). As there are several GENCOs and DISCOs in the deregulated structure, a DISCO has the
freedom to contract with any GENCO for the transaction of power. After deregulation, any DISCO
can demand power supply from any GENCO; there is no restriction on a DISCO
purchasing electricity from any GENCO. To formalize this kind of contract, the
DISCO participation matrix (DPM) is presented [9].
A DISCO also has the freedom to contract with a GENCO in another control area; such
transactions are called bilateral transactions. All such transactions are completed under the supervision
of the independent system operator (ISO). The ISO controls various ancillary services, one of which is
AGC.
A DPM is a matrix with the number of rows equal to the number of GENCOs and the number
of columns equal to the number of DISCOs in the system [9]. Each entry in this matrix can be thought
of as the fraction of the total load contracted by a DISCO (column) from a GENCO (row). The sum of all
the entries in a column of the DPM is unity [9].
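This bookkeeping can be sketched as follows; the cpf entries are assumed values chosen only so that each column sums to unity, not the contracts used in the paper:

```python
import numpy as np

# Illustrative DPM for four GENCOs and two DISCOs (assumed cpf values).
DPM = np.array([
    [0.5, 0.25],
    [0.2, 0.25],   # rows: GENCO1..GENCO4
    [0.2, 0.25],   # columns: DISCO1, DISCO2
    [0.1, 0.25],
])
assert np.allclose(DPM.sum(axis=0), 1.0)   # each DISCO's demand fully apportioned

# Contracted generation change of each GENCO for given DISCO load demands.
dPL_disco = np.array([0.1, 0.1])           # pu MW demanded by DISCO1, DISCO2
dPg_genco = DPM @ dPL_disco                # scheduled GENCO contributions
print(dPg_genco)                           # [0.075 0.045 0.045 0.035]
```

The matrix-vector product gives each GENCO's scheduled generation change, and the total generation equals the total demand because every column sums to one.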
where cpf_jd is the contract participation factor of the jth GENCO in the load following of the dth DISCO.
ACE participation factors are apf1 =0.5, apf2=1-apf1=0.5; apf3=0.5, apf4=1-apf3=0.5. Thus,
the load is demanded only by DISCO1 and DISCO2 as defined in [2][14].
Fig. 3 shows the block diagram of the two area thermal-thermal system with the Restructured
power system.
Figs. 4-8 show the response of the two-area system before deregulation, in terms of the
frequency deviations (f1, f2), generation deviations (P1, P2) and tie-line power deviation (Ptie)
of areas 1 and 2. Figs. 9-13 show the corresponding responses of the two-area system after
deregulation.
VI. CONCLUSION
AGC is important in the power system. The frequency and tie-line power deviation responses
are obtained for a 20% step load perturbation (SLP). In this work, we compare the dynamic responses
of frequency and tie-line power before and after deregulation. The concepts of DISCO and GENCO are
very useful in the deregulated environment. The design of the integral controller also plays an important
role in obtaining the results both before and after deregulation. The simulation results are satisfactory
for the two operating cases of AGC, before and after deregulation.
The future scope of the present work includes the coordinated control of SMES and SSSC
in the deregulated environment. A PID controller can also be used instead of the integral
controller to improve the dynamic response of the two-area thermal-thermal power system. Other
artificial intelligence techniques can be applied in the future for better results.
REFERENCES
[1] Jaleeli, N., Van Slyck, L. S., Ewart, D. N., Fink, L. H., and Hoffmann, A. G., "Understanding automatic generation control", IEEE Trans. Power Syst., vol. 7, no. 3, pp. 1106-1122, 1992.
[2]
[3] Praghnesh Bhatt, R. Roy and S. P. Ghoshal, "Optimized multiarea AGC simulation in restructured power system", Int. J. Electrical Power Energy Syst., 2010.
[4]
[5] Vaibhav Donde, M. A. Pai and Ian A. Hiskens, "Simulation and optimization in an AGC system after deregulation", IEEE Trans. Power Systems, vol. 16, no. 3, 2001.
[6] A. Suresh Babu, Ch. Saibabu and S. Sivanagaraju, "Tuning of integral controller for load following of SMES and SSSC based multi area system under deregulated scenario", IOSR Journal, e-ISSN: 2278-1676, vol. 4, issue 3, pp. 08-18, 2013.
[7]
[8] R. D. Christie and A. Bose, "Load frequency control issues in power system operation after deregulation", IEEE Transactions on Power Systems, vol. 11, no. 3, pp. 1191-1200, August 1996.
[9] Vijay Rohilla, K. P. Singh Parmar and Sanju Saini, "Optimization of AGC parameters in the restructured power system environment using GA", ISSN: 2231-6604, vol. 3, issue 2, pp. 30-40, 2012.
[10] D. P. Kothari and I. J. Nagrath, Power System Engineering, 2nd edition, TMH, New Delhi, 2010.
[11] P. Kundur, Power System Stability & Control, New York: McGraw-Hill, 1994, pp. 418-448.
[12] Barjeev Tyagi and S. C. Srivastava, "Automatic generation control scheme based on dynamic participation of generators in competitive electricity markets", Fifteenth National Power Systems Conference.
Abstract Free Space Optical Communication (FSOC) is a wireless optical technology in which a laser beam
travels through the atmospheric channel. A line of sight is maintained through the atmosphere between
the transmitter and receiver. When the laser beam propagates through the free-space atmosphere,
it can be severely affected by atmospheric turbulence. Adaptive optics is used to compensate for the
atmospheric turbulence and thereby improve the quality of an optical system; in a conventional system the
wavefront sensor (WFS) plays an important role in measuring the phase aberration. In this paper we present
an overview of different sensorless techniques that can be used in FSOC to compensate for atmospheric turbulence.
Keywords FSOC, Adaptive Optics, deformable mirror, wavefront sensorless techniques.
I. INTRODUCTION
Free space optical communication is a technique that propagates light through free space,
similar to wireless radio transmission. Growing commercial deployment of FSOC has led
to an increase in research and development activities over the past few years. Currently, FSOC allows
the transmission of data at rates up to 2.5 Gbps. Unlike microwave and RF wireless communication,
it is a secure, licence-free technique. The main concern that has to be considered in FSOC is the
line of sight (LOS). The line of sight should be maintained during transmission in order to achieve
a better bit error rate (BER). The block diagram of the FSOC system is shown in Figure 1.
The major advantages of the FSOC system are high-rate data transmission at very
high speed [3], high security and licence-free operation. Compared to fibre-optic communication, the cost of
installation is lower. It is immune to electromagnetic interference, and since the beam is invisible and eye-safe
there are no health hazards.
FSOC is most widely used in telecommunication and computer networking, cellular communication
backhaul, military and security applications and disaster recovery, among other emerging applications.
The only limiting factor of the FSOC system is atmospheric turbulence, since the
outdoor environment that acts as the transmission medium for this technique depends on
unpredictable weather conditions. Fog, rain, dust and other dispersing particles in the
atmosphere can lead to beam dispersion, scattering, beam attenuation, beam spreading and
scintillation. This results in wave distortion and fluctuations in the phase and amplitude
Adaptive optics has been applied in retinal imaging, astronomical imaging [1],
microscopy [12], vision science [11] and laser communication systems [13].
III. SENSORLESS ADAPTIVE OPTICS
For the last few years, a wavefront sensor such as the Shack-Hartmann sensor has been used for the
detection of the atmospheric aberration in the incoming laser beam, while the deformable mirror (DM) is one
of the adaptive elements used to introduce an additional distortion that cancels the aberration
in the system. Since the wavefront sensor makes the hardware of a conventional AO
system complex, a wavefront sensorless approach is attractive.
The major concern of a sensorless AO system is to determine the DM shape that removes the
aberration in the laser beam. The control algorithm is designed to provide the relationship
between the second-order atmospheric aberration and the far-field intensity. The main advantage of the
wavefront sensorless system is that the far-field intensity acts as the feedback signal.
The algorithms for wavefront sensorless adaptive optics systems are broadly classified into two
categories: image-based and stochastic. The stochastic algorithms most widely in use are
genetic algorithms and ant colony optimization. The image-based algorithms include sensorless modal
correction, low spatial frequency methods, point spread function methods, Optical Coherence Tomography (OCT)
and laser process optimization.
1. Stochastic algorithms for wavefront sensor-less correction:
A. Genetic Algorithm:
A photomultiplier at the output of the monochromator detects the increase of the
harmonic signal, and the result is visualized as a photon flux.
IV. CONCLUSION
The main problems in FSOC links result from attenuation and fluctuation of the optical signal at
the receiver. In this paper, we reviewed several sensorless adaptive optics techniques for improving
free space optical communication. Sensorless adaptive optics has great potential for finding new
applications in current and future technologies.
REFERENCES
[1] J. W. Hardy, Adaptive Optics for Astronomical Telescopes, Oxford University Press, ISBN-10: 0195090195, USA, 1993.
[2] E. Fedrigo, R. Muradore and D. Zilio, "High performance adaptive optics system with fine tip/tilt control", Control Engineering Practice, vol. 17, pp. 122-135, 2009.
[3] R. J. Noll, "Zernike polynomials and atmospheric turbulence", J. Opt. Soc. Amer., vol. 66, pp. 207-211, 1976.
[4]
[5] Bing Dong, Dequing Ren and Xi Zhang, "Stochastic parallel gradient descent based adaptive optics used for high contrast imaging coronagraph".
[6] Steffen Mauch et al., "Real-time spot detection and ordering for a Shack-Hartmann wavefront sensor with a low-cost FPGA", IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 10, October 2014.
[7] S. Bonora et al., "Devices and techniques for sensorless adaptive optics", INTECH publication: http://dx.doi.org/10.5772/53550.
[8] M. J. Booth, "Wavefront sensorless adaptive optics for large aberrations", Opt. Lett., vol. 32, no. 1, pp. 5-7, 2007.
[9] M. S. Zakynthinaki and Y. G. Saridakis, "Stochastic optimization for a tip/tilt adaptive correcting system", Computer Physics Communications, vol. 150, no. 3, pp. 275-292, 2003.
[10]
[11] N. Doble and D. R. Williams, "The application of MEMS technology for adaptive optics in vision science", IEEE J. Sel. Top. Quant., vol. 10, no. 3, pp. 629-635, 2004.
[12] O. Azucena et al., "Adaptive optics wide-field microscopy using direct wavefront sensing", Opt. Lett., vol. 36, no. 6, pp. 825-827, 2011.
[13] T. A. Planchon et al., "Adaptive wavefront correction on a 100-TW/10-Hz chirped pulse amplification laser and effect of residual wavefront on beam propagation", Opt. Commun., vol. 252, no. 4-6, pp. 222-228, 2005.
Abstract The low frequency electromechanical oscillations caused by swinging generator rotors are
inevitable in interconnected power systems. These oscillations limit the power transmission capability
of a network and sometimes even cause a loss of synchronism and an eventual breakdown of the entire
system, thus making the system unstable. A power system stabilizer (PSS) is used to damp out these oscillations
and hence improve the stability of the system. In this project, a nature-inspired Water Cycle Algorithm based
stabilizer design is carried out to mitigate the power system oscillation problem. The proposed controller
design is formulated as an optimization problem based on damping ratio and eigenvalue analysis. The
effectiveness of the proposed controller is tested by performing nonlinear time-domain simulations of the
test power system model under various operating conditions and disturbances. The system performance
with the Water Cycle Algorithm is also compared with a conventional lead-lag controller design.
Index Terms Eigenvalue analysis, Low frequency oscillation, Multi machine infinite bus system,
ẋ = Ax + Bu (1)
where x = vector of state variables;
A, B = state matrix and input matrix respectively.
B. Power system stabilizer structure
In this paper a dual-input PSS is used. The two inputs to the dual-input PSS are Δω and Pe, with
two frequency bands, a lower frequency and a higher frequency band, unlike the conventional
single-input (Δω) PSS. PSS3B is found to be the best one within the periphery of the studied system
model. This dual-input PSS configuration is considered for the present work and its block diagram
representation is shown in Figure 2.
Hence Ks, T1 and T2 are the PSS parameters, which are computed using the CPSS and optimally
tuned using the Water Cycle optimized PSS.
III. PROPOSED OPTIMIZATION CRITERION
To increase the damping over a wide range of operating conditions and configurations of the power
system, a robust tuning of the controllers must be implemented. The objective functions are represented
as,
EMODE in equations (2) and (3) represents the electromechanical modes of oscillation.
The maximum value of the real part [Max(σi)] of the eigenvalues may be located in the right half of the
s-plane, making the system unstable. The weakly damped electromechanical mode has the minimum
value of the damping ratio [Min(ζi)] among all the damping ratios of the system. The objective is to
minimize the objective function J1 and maximize J2. This involves shifting the real part of the ith
electromechanical eigenvalue to a stable location in the left half of the complex s-plane, so that the damping
ratio of the weakly damped electromechanical mode of oscillation is enhanced to make the system
more stable. The single-machine Heffron-Phillips generator model is extended to perform the modeling
The power system stabilizer parameters (Ks, T1, T2) are taken as the decision variables of the
proposed optimization problem.
The typical ranges for the optimized parameters are taken as [0.1-60] for Ks, [0.2-1.5] for T1
and [0.02-0.15] for T2. The washout time constant Tw is taken as 10.0 s [20]. The damping ratio of the ith
critical mode, with eigenvalue λi = σi ± jωi, is given by ζi = -σi/√(σi² + ωi²).
The objective functions J1 and J2 in equations (2) and (3), along with the constraints in (4), (5) and (6),
form the optimization criterion proposed in this paper to enhance the system stability.
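The evaluation of J1 and J2 for one operating point can be sketched as follows; the 2x2 state matrix A is an arbitrary stand-in for the closed-loop system matrix, chosen to carry one lightly damped oscillatory mode:

```python
import numpy as np

# Stand-in closed-loop state matrix with one lightly damped mode (assumed).
A = np.array([[  0.0,  1.0],
              [-25.0, -1.0]])

lam = np.linalg.eigvals(A)                     # lam = sigma +/- j*omega
sigma, omega = lam.real, lam.imag
zeta = -sigma / np.sqrt(sigma**2 + omega**2)   # damping ratio of each mode

J1 = sigma.max()   # to be minimised: push the rightmost eigenvalue leftwards
J2 = zeta.min()    # to be maximised: lift the weakest damping ratio
print(J1, J2)      # -0.5 and 0.1 for this A
```

An optimizer tuning (Ks, T1, T2) would rebuild A for each candidate parameter set and re-evaluate J1 and J2 in exactly this way.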
IV. PROPOSED METAHEURISTIC OPTIMIZATION METHOD
The idea of the proposed Water Cycle Algorithm (WCA) is inspired by nature, based on
the observation of the water cycle and of how rivers and streams flow downhill towards the sea in the real
world. Evaporated water is carried into the atmosphere to form clouds, which then condense
in the colder atmosphere, releasing the water back to the earth in the form of rain or precipitation. This
process is called the hydrologic cycle.
The smallest river branches are the small streams where rivers begin to form; these tiny
streams are called first-order streams. Where two first-order streams join, they form a
second-order stream. Where two second-order streams join, a third-order stream is formed, and
so on until the rivers finally flow out into the sea.
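The flow step described above can be sketched on a toy problem. The sphere cost function, population sizes and flow constant C are illustrative assumptions, and the evaporation/raining step of the full algorithm is omitted for brevity:

```python
import numpy as np

# Toy WCA flow step: streams flow towards rivers, rivers towards the sea
# (the current best solution). All parameters below are assumed.
rng = np.random.default_rng(1)

def f(x):
    return np.sum(x ** 2, axis=-1)      # cost to minimise; optimum at origin

pop = rng.uniform(-5, 5, (30, 2))       # initial raindrops
best0 = f(pop).min()
C = 2.0                                 # flow constant, C in (1, 2]
for _ in range(200):
    order = np.argsort(f(pop))
    sea = pop[order[0]]                 # best raindrop
    rivers = pop[order[1:4]]            # next-best raindrops
    streams = pop[order[4:]]            # the rest
    streams = streams + rng.random((len(streams), 1)) * C * (rivers[0] - streams)
    rivers = rivers + rng.random((len(rivers), 1)) * C * (sea - rivers)
    pop = np.vstack([sea[None, :], rivers, streams])

print(best0, f(pop).min())              # the best cost never worsens
```

Because the sea is carried over unchanged each iteration, the best solution is elitist: the population contracts towards it while improved points replace it whenever the flow step finds one.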
where dmax is a small number (close to zero). Therefore, if the distance between a river and
The following are the dominant features of the WCA-based controller observed in this paper with
regard to stability improvement.
Better placement of the closed-loop eigenvalues in stable locations for all operating conditions
involved.
More damping provided to the system for all conditions, i.e., damping ratios greater than the
threshold level (ζ = 0.07) and also greater than the damping ratios of the other controllers.
[3] M. Ravindra Babu, C. Srivalli Soujanya and S. V. Padmavathi, "Design of PSS3B for multimachine system using GA technique", ISSN: 2248-9622, www.ijera.com, vol. 2, issue 3, pp. 1265-1271, May-Jun 2012.
[4] P. Pavan Kumar, "Dynamic analysis of single machine infinite bus system using single input and dual input PSS", International Electrical Engineering Journal (IEEJ), vol. 3, no. 2, pp. 632-641, 2012, ISSN 2078-2365.
[5]
[6]
[7] Ardeshir Bahreininejad and Ali Sadollah, "Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems", ResearchGate.
[8] Navid Ghaffarzadeh, "Water cycle algorithm based power system stabilizer robust design for power systems", Journal of Electrical Engineering, vol. 66, no. 2, pp. 91-96, 2015.
[9] Tridib K. Das and Ganesh K. Venayagamoorthy, "Bio-inspired algorithms for the design of multiple optimal power system stabilizers: SPPSO and BFA", IEEE.
[10] Mostafa Abdollahi, Saeid Ghasrdashti, Hassan Saeidinezhad and Farzad Hosseinzadeh, "Multi machine PSS design by using meta heuristic optimization techniques", JNAS Journal, 2013-2-9, pp. 410-416, ISSN 2322-5149, 2013.
[11] Ashik Ahmed, "Optimization of power system stabilizer for multi-machine power system using invasive weed optimization algorithm", International Journal of Computer Applications (0975-8887), vol. 39, no. 7, February 2012.
[12] Saibal K. Pal et al., "Comparative study of firefly algorithm and particle swarm optimization for noisy non-linear optimization problems", I.J. Intelligent Systems and Applications, vol. 10, pp. 50-57, published online September 2012 in MECS (http://www.mecs-press.org/), DOI: 10.5815/ijisa.2012.10.06.
[13] Andrei Stativa and Mihai Gavrilas, "A metaheuristic approach for power system stability enhancement", Buletinul AGIR, nr. 3/2012, iunie-august.
(LOH), Neural Network (NN), Pulse Width Modulation (PWM) and Modulation Index (M)
I. INTRODUCTION
Pulse width modulated (PWM) inverters can control their output voltage and frequency
simultaneously, and they can also reduce the harmonic components in the load currents. These features
have made them suitable for many industrial applications such as variable speed drives, uninterruptible
power supplies and other power conversion systems. Popular single-phase inverters adopt the
full-bridge type using an approximately sinusoidal modulation technique as the power circuit. Their output
voltage has three values: zero, and the positive and negative supply DC voltage levels. Therefore,
the harmonic components of the output voltage are determined by the carrier frequency and the switching
functions [1].
Recently the multilevel inverter topology has drawn tremendous interest in the power industry,
since it can easily provide the high power required for high-power applications such as
static VAR compensation and active power filtering, and large motors can also be controlled by
high-power adjustable frequency drives. Multilevel inverters synthesize the AC voltage from several
different levels of DC voltage. Each additional DC voltage level adds a step to the AC voltage
waveform. These DC voltages may or may not be equal to one another [3]. From a technological point
of view, appropriate DC voltage levels can be reached, allowing the use of multilevel power inverters at
medium voltage for adjustable speed drives (ASDs) [4]. Multilevel inverters can reach high voltage
and reduce harmonics by their own structure, without transformers [5]. There are three main types of
multilevel inverters: diode-clamped, flying capacitor and cascaded H-bridge [9]. If the DC supply
voltage is increased (adding more batteries in series to maintain the voltage or to decrease the current)
for a larger power requirement, the inverter components must be able to withstand the maximum
DC supply voltage. What sets the cascaded inverter apart from other multilevel inverters is its capability
of utilizing different DC voltages on the individual H-bridge cells. The cascaded topology has many
inherent benefits, with one particular advantage being its modular structure. In particular, the cascaded
inverter has been reported for use in applications such as medium-voltage industrial drives, electric
vehicles and grid connection of photovoltaic cell generation systems.
The proposed inverter can reduce the harmonic components using the sinusoidal PWM technique
2. For an output voltage level V0 = V1+V2, turn on all the switches as mentioned in step 1 and M21.
3. For an output voltage level V0 = V1+V2+V3, turn on all the switches as mentioned in step 2 and M31.
4. For an output voltage level V0 = V1+V2+V3+V4, turn on all the switches as mentioned in step 3 and M41.
where θ1, θ2, θ3 and θ4 are the optimized switching angles, which must satisfy the condition 0 < θ1 < θ2 < θ3 < θ4 < π/2.
The switching angles of the waveform are adjusted to obtain the lowest output voltage THD.
If we need to control the peak value of the output voltage to be V1 and eliminate the third and fifth order
harmonics, the modulation index is given by
The resulting harmonic equations are
The Newton-Raphson method is used to solve for the harmonic elimination switching angles of the 5th
and 7th order for the nine-level cascaded multilevel inverter.
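The Newton-Raphson iteration on the staircase equations can be sketched as follows. For a hand-verifiable demo the solver is run on the two-angle (five-level) case, setting the fundamental for an assumed M = 0.8 and nulling the 5th harmonic; the nine-level case solves four such equations for four angles in the same way:

```python
import numpy as np

# Newton-Raphson for staircase SHE: solve sum_i cos(n*theta_i) = target_n.
def newton_she(orders, targets, theta0, iters=20):
    n = np.asarray(orders, float)
    b = np.asarray(targets, float)
    th = np.asarray(theta0, float).copy()
    for _ in range(iters):
        F = np.cos(np.outer(n, th)).sum(axis=1) - b   # residual of each equation
        J = -n[:, None] * np.sin(np.outer(n, th))     # Jacobian dF_k/dtheta_i
        th -= np.linalg.solve(J, F)                   # Newton step
    return th

M = 0.8   # assumed modulation index for the demo
th = newton_she(orders=[1, 5], targets=[2 * M, 0.0], theta0=[0.3, 0.9])
print(np.degrees(th))   # two ordered angles inside (0, 90) degrees
```

Convergence is quadratic near a root, but as with any Newton scheme a reasonable initial guess for the angles is needed.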
IV. ARTIFICIAL NEURAL NETWORKS
An artificial neural network (ANN), usually called "neural network" (NN), is a mathematical
model or computational model that tries to simulate the structure and/or functional aspects of biological
neural networks. It consists of an interconnected group of artificial neurons and processes information
using a connectionist approach to computation. In most cases an ANN is an adaptive system that
changes its structure based on external or internal information that flows through the network during
the learning phase. Neural networks are non-linear statistical data modeling tools. They can be used to
model complex relationships between inputs and outputs or to find patterns in data. A neural network
is an interconnected group of nodes, akin to the vast network of neurons in the human brain.
A. Structure of Feed Forward Network
The feed forward neural network, shown in Fig. 3, was the first and arguably simplest type of
artificial neural network devised. In this network, the information moves in only one direction,
forward, from the input nodes, through the hidden nodes (if any), to the output nodes.
B. Training of Neural Network Using MATLAB Coding
The neural network is used to determine the switching angles for the selective harmonic eliminated
PWM (SHEPWM) cascaded multilevel inverter. Such switching angles are defined by a set of nonlinear
equations to be solved. In the case of two possible solutions for an angle θi, the criterion for selecting
one of them can be the Total Harmonic Distortion (THD): the best angle values are the ones leading to the
lowest THD. The flow chart representation of the neural network implementation is shown in Fig. 4. ANNs
have gained increasing popularity and have demonstrated superior results compared to alternative
methods in many studies.
Indeed, ANNs are able to map the underlying relationship between
input and output data without prior understanding of the process under investigation. This mapping
is achieved by adjusting their internal parameters called weights from data. This process is called
the learning or the training process. Their interest comes also from their generalization capabilities,
i.e., their ability to deliver estimated responses to inputs that were not seen during training. Hence,
the application of ANNs to complex relationships and processes makes them highly attractive for
different types of modern problems [9, 14]. We use a neural network to learn the switching angles
previously provided by the resultant theory method. The number of inputs and outputs depends on
the considered process. In our application, the feed forward neural network has to map the underlying
relationship between the modulation rate (input) and the switching angles (outputs), as shown in Figure 5.
The switching angles for different modulation indices and their corresponding THD values are
shown in Table I. From the table it can be inferred that as the modulation index increases, the THD
values reduce.
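The train-offline, evaluate-online idea can be sketched with a one-hidden-layer feed-forward network; the smooth target curves below are assumed stand-ins for the solver's angle table, not the paper's actual data:

```python
import numpy as np

# Tiny feed-forward net learning modulation index -> switching angles.
# The target curves are assumed illustrative functions (radians).
rng = np.random.default_rng(0)
m = np.linspace(0.5, 1.0, 64)[:, None]                  # modulation indices
theta = np.hstack([0.9 - 0.6 * m, 1.4 - 0.5 * m ** 2])  # stand-in angle table

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)     # hidden layer (tanh)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)      # linear output layer
lr = 0.05
for _ in range(5000):
    h = np.tanh(m @ W1 + b1)               # forward pass
    out = h @ W2 + b2
    err = out - theta                      # backpropagate the MSE gradient
    gW2 = h.T @ err / len(m); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = m.T @ dh / len(m); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(m @ W1 + b1) @ W2 + b2 - theta) ** 2)
print(mse)   # small after training; the net now interpolates new m cheaply
```

Once trained, evaluating the network is a couple of matrix products, which is the low online computational cost that motivates the neural implementation.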
SIMULATION RESULTS
The Simulink models of the proposed nine-level cascaded multilevel inverter system for SPWM
techniques with open loop, closed-loop PI and neural implementations are described by the following
simulation diagrams.
A. Sinusoidal Pulse Width Modulation Open Loop
Fig. 8 Simulated output voltage and current waveform for sine PWM
B. Closed Loop - PI
In closed loop, control of the inverter can be performed using a conventional PI controller. The
output voltage of the cascaded inverter is compared with a reference sine wave and the error
is given to the PI controller to generate the gating signals. The simulated circuit with PI controller
is shown in Fig. 10. The frequency spectrum of the output voltage is shown in Fig. 11. The THD is
measured for various values of m. The THD is found to be 4.21 % for m = 0.85.
Fig. 12 Simulated output voltage and current waveform for sine PWM with Neural Network
Fig. 13 Frequency Spectrum of the output voltage in Sine PWM with Neural Network
Fig. 14 Comparison of modulation index Vs THD for open loop, closed loop PI, neural network
IV. CONCLUSION
A complete analysis of the nine-level cascaded multilevel inverter has been presented for open
and closed loop with a PI controller, and a comparison with the neural network approach has been brought
out. The neural network approach is based on learning and approximating the relationship between the
modulation rate and the switching angles with a feed forward network. The resulting neural implementation
of the harmonic elimination strategy has very low computational cost and high performance.
REFERENCES
[1] J. Meenakshi and V. T. Sreedevi, "Simulation of a transistor clamped H-bridge multilevel inverter and its comparison with a conventional H-bridge multilevel inverter", IEEE International Conference on Circuit, Power and Computing Technologies (ICCPCT), 20-21 March 2014.
[2] D. V. A. Kumar and C. S. Babu, "New multilevel inverter topology with reduced number of switches using advanced modulation strategies", IEEE International Conference on Power, Energy and Control (ICPEC), 2013, pp. 693-699.
[3] Haiwen Liu, Leon M. Tolbert, Surin Khomfoi, Burak Ozpineci and Zhong Du, "Hybrid cascaded multilevel inverter with PWM control method", conference proceedings, pp. 162-166, June 2008.
[4] P. C. Loh, D. G. Holmes and T. A. Lipo, "Implementation and control of distributed PWM cascaded multilevel inverters with minimum harmonic distortion and common-mode voltages", IEEE Transactions on Power Electronics, vol. 20, no. 1, pp. 90-99, Jan. 2005.
[5] J. N. Chiasson, L. M. Tolbert, K. J. McKenzie and Zhong Du, "A unified approach to solving the harmonic elimination equations in multilevel converters", IEEE Transactions on Power Electronics, pp. 478-490, March 2004.
[6] B. P. McGrath and D. G. Holmes, "Multicarrier PWM strategies for multilevel inverters", IEEE Trans. Ind. Electron., vol. 49, no. 4, pp. 858-867, Aug. 2002.
[7] L. M. Tolbert and T. G. Habetler, "Novel multilevel inverter carrier-based PWM methods", IEEE Trans. Ind. Applications, vol. 35, pp. 1098-1107, Sept. 1999.
[8] C. Wang, Q. Wang, K. Ren and W. Lou, "Privacy-preserving public auditing for data storage security in cloud computing", in INFOCOM, 2010 Proceedings IEEE, 2010, pp. 1-9.
[9]
[10] N. S. Choi, J. G. Cho and G. H. Cho, "A general circuit topology of multilevel inverter", in Proc. IEEE PESC '91, pp. 96-103.
[11] C. K. Duffey and R. P. Stratford, "Update of harmonic standard IEEE-519: IEEE recommended practices and requirements for harmonic control in electric power systems", IEEE Transactions on Industry Applications, vol. 25, no. 6, pp. 1025-1034, Nov./Dec. 1989.
[12] S. Wolfram, Mathematica, a System for Doing Mathematics by Computer, 2nd ed., Reading, MA: Addison-Wesley, 1992.
[13] H. S. Patel and R. G. Hoft, "Generalized harmonic elimination and voltage control in thyristor converters: Part I - harmonic elimination", IEEE Trans. on Ind. Appl., vol. 9, pp. 310-317, May/June 1973.
[14] H. S. Patel and R. G. Hoft, "Generalized harmonic elimination and voltage control in thyristor converters: Part II - voltage control technique", IEEE Trans. on Ind. Appl., vol. 10, pp. 666-673, Sept./Oct. 1974.
[15] N. Mohan, T. M. Undeland and W. P. Robbins, Power Electronics: Converters, Applications, and Design, 3rd Edition, J. Wiley and Sons, 2003.
[16] S. Khomfoi and L. M. Tolbert, "Fault diagnostic system for a multilevel inverter using a neural network", IEEE Transactions on Power Electronics, vol. 22, no. 3, pp. 1062-1069, May 2007.
I. INTRODUCTION
VLSI stands for "Very Large Scale Integration", the field concerned with packing ever more logic
devices into smaller areas. VLSI devices are found in computers, cars, state-of-the-art digital cameras,
cell phones and countless products around us, and designing them demands expertise on many fronts
within the same field. VLSI has been around for a long time; there is nothing fundamentally new about
it, but as a side effect of advances in the world of computers there has been a dramatic growth in the
tools that can be used to design VLSI circuits. Alongside, obeying Moore's law, the capability of an IC
has increased exponentially over the years in terms of computation power, utilization of available area
and yield. The combined effect of these two advances is that people can put diverse functionality into
ICs, opening new frontiers. The two fields are closely related, and a description of either easily leads
into the other.
Polynomial basis multipliers
Polynomial basis multipliers operate directly in the polynomial basis, so no basis converters are
required. These multipliers are easily implemented because they are hardware efficient, and the time
to produce the result is the same as for Berlekamp or Massey-Omura multipliers. Bit-serial polynomial
basis multipliers operate as serial-in parallel-out multipliers. In several applications an additional
register is required for the result, which adds an extra m clock cycles to the computation time. This is
the main reason why polynomial basis multipliers are often disregarded for use in code design.
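The shift-and-add structure described above can be illustrated with a short sketch. The field degree and irreducible polynomial below are illustrative choices (GF(2^4) with x^4 + x + 1), not parameters taken from any of the surveyed designs:

```python
# Bit-serial polynomial-basis multiplication in GF(2^m).
# Illustrative field: GF(2^4) with irreducible polynomial x^4 + x + 1.
# Operands are integers whose bits are polynomial coefficients.

M = 4                 # field degree m (assumed for this sketch)
IRRED = 0b10011       # x^4 + x + 1

def gf_mul(a, b):
    """Shift-and-add multiplication, reducing modulo IRRED each cycle."""
    acc = 0
    for i in reversed(range(M)):     # consume one bit of b per "clock"
        acc <<= 1                    # shift the partial product
        if acc & (1 << M):           # a degree-m term appeared:
            acc ^= IRRED             # reduce by the field polynomial
        if (b >> i) & 1:
            acc ^= a                 # add (XOR) a when this bit of b is 1
    return acc
```

Each loop iteration corresponds to one clock cycle of a bit-serial multiplier: shift the partial product, reduce modulo the field polynomial, and conditionally add the multiplicand.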
Fig 2. DPBM
II. SURVEY
A. High Throughput LFSR Design for BCH Encoder using Sample Period Reduction Technique for
MLC NAND based Flash Memories.
To handle errors that occur in MLC NAND-based flash memories, a variety of coding techniques
can be applied for error detection and correction. The available codes fall into two types: 1)
convolutional codes and 2) block codes. Cyclic codes are one classification of block codes, and the
BCH code is a subclass of cyclic codes. A BCH code first forms a generator polynomial using Galois
field (GF) concepts and generates parity (check) bits that are appended to the message bits to form a
codeword. Unfolding is a transformation technique that describes J consecutive iterations of the
original DSP algorithm and increases the iteration bound to J times its value. In order to reduce the
sampling period, it is important to calculate the iteration bound before unfolding the system, so that
the unfolding factor can be selected.
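The LFSR encoding step that unfolding accelerates can be sketched as follows; a toy (7, 4) cyclic code with generator g(x) = x^3 + x + 1 stands in for a real BCH generator polynomial, which would be longer:

```python
# LFSR-style systematic encoder for a cyclic code: the parity bits are
# the remainder of m(x) * x^(n-k) divided by the generator g(x).
# Illustrative (7, 4) code; a real BCH encoder uses a longer generator.

GEN = [1, 0, 1, 1]   # g(x) = x^3 + x + 1, MSB first
R = len(GEN) - 1     # number of parity bits n - k

def bch_encode(msg_bits):
    """Shift the message through the division LFSR; return msg + parity."""
    reg = [0] * R
    for bit in msg_bits:
        fb = bit ^ reg[0]            # feedback = input XOR register MSB
        reg = reg[1:] + [0]          # shift the register
        if fb:
            for i in range(R):       # XOR in the lower coefficients of g(x)
                reg[i] ^= GEN[i + 1]
    return list(msg_bits) + reg      # systematic codeword
```

Each input bit costs one clock cycle; unfolding the loop by a factor J processes J bits per cycle at the cost of extra combinational logic.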
B. MPCN-Based Parallel Structural design in BCH Decoders for NAND Flash Memory Devices
This brief provides a novel MPCN-based parallel architecture for long BCH decoders in NAND
flash memory devices. Unlike previous approaches that perform CFFM calculations, the proposed
design exploits MPCNs to improve hardware efficiency, since one MPCN requires at most m - 1 XOR
gates, whereas the requirement of one CFFM is usually proportional to m.
The proposed MPCN-based architecture can combine the syndrome calculator and the Chien
search, leading to a significant hardware reduction. Compared to a conventional design, the parallel-32
BCH (4603, 4096; 39) decoder with the combined Chien search and syndrome calculator achieves a
46.7% gate-count saving according to synthesis results in 90-nm CMOS technology.
C. A Fully Parallel BCH Codec with Double Error Correcting Capability for NOR Flash Applications
Double error correcting (DEC) BCH codes are necessary; however, their iterative processing
is not suitable for latency-constrained memories. Thus, a fully parallel architecture is proposed in
this paper at the cost of increased area. A new method combining the encoder and syndrome calculator
using matrix operations is developed. In addition, to reduce the degree of the error location polynomial,
a new error location polynomial is defined, so that the hardware cost of the Chien search is substantially
reduced.
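The Chien search that both designs above optimize simply evaluates the error-locator polynomial at every nonzero field element; its roots mark the error positions. A minimal sketch over GF(2^4), an illustrative field far smaller than those used in flash-memory decoders:

```python
# Chien search sketch: evaluate the error-locator polynomial L(x) at
# every power of the primitive element alpha of GF(2^4).
# Field choice (x^4 + x + 1) is illustrative.

M, IRRED = 4, 0b10011

def gf_mul(a, b):
    """Shift-and-add multiplication in GF(2^M)."""
    acc = 0
    for i in reversed(range(M)):
        acc <<= 1
        if acc & (1 << M):
            acc ^= IRRED
        if (b >> i) & 1:
            acc ^= a
    return acc

def chien_search(lam):
    """lam: coefficients [l0, l1, ...] as GF(2^4) elements.
    Returns the exponents i for which L(alpha^i) == 0."""
    alpha_pows = [1]
    for _ in range(14):                       # alpha = x = 0b0010
        alpha_pows.append(gf_mul(alpha_pows[-1], 2))
    roots = []
    for i, x in enumerate(alpha_pows):
        v = 0
        for c in reversed(lam):               # Horner evaluation at x
            v = gf_mul(v, x) ^ c
        if v == 0:
            roots.append(i)
    return roots
```

A hardware Chien search evaluates all candidate positions in parallel; this loop is the sequential equivalent.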
IV. CONCLUSION
Since NAND flash memories require low-delay encoders, a high throughput encoder
is designed by unfolding the LFSR of the BCH encoder, observing the design criteria for selecting
the unfolding factor. Moreover, area, clock cycle and power are analyzed by simulating the design.
The obtained results reveal that unfolding increases the throughput; this in turn decreases the clock
cycle, which automatically increases the speed, but it increases the area and power. Various pipelining
techniques can be introduced to reduce the critical path of the BCH encoder. Retiming can also be
applied to further increase the speed and to reduce the power consumption and area.
V. REFERENCES
[1] Manikandan. S. K., Nisha Angeline. M., Sharmitha. E. K., Palanisamy. C., High Throughput LFSR
Design for BCH Encoder Using Sample Period Reduction Technique for MLC NAND Based Flash
Memories, International Journal of Computer Applications (0975-8887), Vol. 66, No. 10,
March 2013.
[2] Yi-Min Lin, Chi-Heng Yang, Chih-Hsiang Hsu, Hsie-Chia Chang, and Chen-Yi Lee, MPCN-Based
Parallel Architecture in BCH Decoders for NAND Flash Memory Devices, IEEE Transactions On
[3] M. Prashanthi, P. Samundiswary, An Area Efficient (31, 16) BCH Decoder for Three Errors,
International Journal of Engineering Trends and Technology (IJETT), Vol. 10, No. 13, Apr. 2014.
[4] Chia-Ching Chu, Yi-Min Lin, Chi-Heng Yang, and Hsie-Chia Chang, A Fully Parallel BCH Codec
with Double Error Correcting Capability for NOR Flash Applications, IEEE ICASSP 2012,
978-1-4673-0046-9/12/$26.00, 2012.
[5] R. Elumalai, A. Ramachandran, J. V. Alamelu, Vibha B. Raj, Encoder and Decoder for (15, 11, 3)
and (63, 39, 4) Binary BCH Code with Multiple Error Correction, International Journal of Advanced
Research (An ISO 3297:2007 Certified Organization), Vol. 3, Issue 3, March 2014.
[6]
[7]
[8] Vinod Mukati, High-Speed Parallel Architecture and Pipelining for LFSR, International Journal of
Scientific Research Engineering & Technology (IJSRET), ISSN: 2278-0882, IEERET-2014
Conference Proceeding, 3-4 November 2014.
[9] Mahasiddayya R. Hiremath, Manju Devi, A Novel Method Implementation of a FPGA Using (N, K)
Binary BCH Code, International Journal of Research in Engineering Technology and Management,
ISSN 2347-7539, IJRETM-2014-SP-008, June 2014.
[10] J. Shafiq Mansoor, A. M. Kiran, Improved Error Correction Capability in Flash Memory Using
Input/Output Pins, International Journal of Advanced Information Science and Technology (IJAIST),
ISSN: 2319-2682, Vol. 12, No. 12, April 2013.
I. INTRODUCTION
Soil is used as a source material in many places. Laboratory analysis is the traditional method,
but it consumes a lot of time and is prone to human error. Image processing is a method of analysis
using digital images; a digital image is full of information in the form of digital values. This analysis
is used to reduce human error and time consumption, and it enables immediate action on the soil
sources. Samples are captured with a digital camera under suitable weather conditions. The analysis
splits into two parts: i) physical characterization of the soil and ii) chemical characterization of the soil.
Physical characteristics have previously been depicted with the box counting method [6],
implemented here in LabVIEW 2014. The pH of soil has previously been depicted from an RGB
model of the soil sample [3]; the RGB method analyzes the acidic or basic character of the soil, again
using LabVIEW 2014.
Images are captured by a Sony digital camera at three different positions of the site under a
specific light condition. The images have a resolution of 150x150 pixels.
2. METHODOLOGY
A. Physical characterization
Physical characterization is analyzed using the fractal dimension. The fractal dimension can
be calculated by various methodologies such as the area-perimeter method, the line divider method,
the skyscraper method and the box counting method. The box counting method is the most widely
used; it estimates the fractal dimension from binary images by covering the 1s present in the binary
image with boxes. Depending upon the box size and the number of 1s present, the fractal dimension is

FD = log(N(s)) / log(1/s)    (1)

where FD is the fractal dimension of the sample and N(s) is the number of boxes of size s containing 1s.
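Formula (1) can be applied directly to a binary image. The following pure-Python sketch is a hypothetical stand-in for the LabVIEW implementation described in the text:

```python
# Box-counting sketch for formula (1): FD = log(N(s)) / log(1/s).
# grid is a square binary image (lists of 0/1); box is the box side
# in pixels, so the relative box size is s = box / len(grid).
import math

def box_count(grid, box):
    """Number of box-by-box cells containing at least one 1-pixel."""
    n = len(grid)
    count = 0
    for r in range(0, n, box):
        for c in range(0, n, box):
            if any(grid[r + i][c + j]
                   for i in range(box) for j in range(box)
                   if r + i < n and c + j < n):
                count += 1
    return count

def fractal_dimension(grid, box):
    s = box / len(grid)                 # relative box size
    return math.log(box_count(grid, box)) / math.log(1 / s)
```

For a completely filled image the estimate is 2.0, the dimension of a plane, which is a quick sanity check on the implementation.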
The pixel values are extracted at the centre of the image because of the purity of the intensity
values there. Averaging the pH values gives the pH value of the sample.
C. Input samples
The samples are captured at a resolution of 150x150; all are 24-bit depth square images. The
image in fig 3 was taken at Watrap, Virudhunagar district, a place used for paddy cultivation.
These images were captured by a Sony digital camera under the specified weather condition.
D. Thresholding
Thresholding is the conversion of a 24-bit color image into a binary image. It converts pixel
values into 0s and 1s, the representation used by the box counting method.
The background is represented in red and the dark parts of the object in black, as shown in fig
4. The pixel values of red and black are 0 and 1 respectively. The threshold value varies over the RGB
components in the range 80-120.
E. Fractal dimension
Fractal dimension (FD) is defined as a mathematical descriptor of an image feature which
characterizes the physical properties of soil images. The fractal, introduced in 1975 by Mandelbrot
(Buczko 2005), provides a framework for the analysis of natural phenomena in various scientific
domains. A fractal is an irregular geometric object with an infinite nesting of structures of different
sizes. Fractals can be used to make models of natural objects such as soil, islands, rivers, mountains,
trees and clouds.
F. Plane extraction
The input color image is extracted into the three RGB planes with their pixel values. These
pixel values are used to compute the pH index of the soil, from which the pH values of the samples
are determined.
The single RGB image is extracted into its Red, Green and Blue components, each with its
own pixel values, as shown in fig 5.
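The plane-extraction step can be sketched as below; the mapping from the per-plane means to the pH index (formula (2)) is paper-specific and is not reproduced here:

```python
# Plane-extraction sketch: split an RGB image, given as rows of
# (r, g, b) tuples, into three planes and average each plane.
# A hypothetical stand-in for the LabVIEW pipeline in the text.

def extract_planes(image):
    """Return the per-channel planes and their mean pixel values."""
    planes = {"red": [], "green": [], "blue": []}
    for row in image:
        planes["red"].append([p[0] for p in row])
        planes["green"].append([p[1] for p in row])
        planes["blue"].append([p[2] for p in row])
    means = {k: sum(map(sum, v)) / (len(v) * len(v[0]))
             for k, v in planes.items()}
    return planes, means
```

The three means are the raw ingredients from which a pH index could then be computed.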
3. RESULTS AND DISCUSSION
Initially, the input samples are converted into binary images using thresholding. The pixel
values of the binary image are processed by the box counting method and formula (1) to find the
average fractal dimension of the sample, from which the liquid limit, plastic limit, shrinkage limit,
coefficient of uniformity and field density are determined. The threshold value varies from 80 to 120;
depending upon the threshold value, N(s) is calculated by the box counting method. The average
fractal dimension was calculated and is given in
Table 2 shows all the physical parameters of the soil sample with fractal dimension 1.511.
Using the fractal dimension, all the parameters were calculated; these parameters are used in the field
of civil engineering and wherever the soil is used in engineering.
The correlation between the fractal dimension and the physical parameters is represented as a
graph [6]. Using plane extraction, the pixel value of each plane is calculated, and using formula (2),
the pH index
[3] Binod Kumar, Mukesh Kumar, Rakesh Kumar and Vinay Kumar, (2014) Determination of soil pH
by using Digital Image Processing Technique, Journal of Applied and Natural Science 6 (1): 14-18.
[4] Buczko, Mikołajczak, Olena, Paweł, (2005) Shape analysis of MR brain images based on the fractal
dimension, Annales UMCS Informatica AI 3 (2005) 153-158.
[5] Schulte, E. E. and Kelling, K. A., (1993) Soil calcium to magnesium ratios, University of
Wisconsin-Extension SR-11-93.
Qihao Weng, Xuefei Hu, (2008) Medium Spatial Resolution Satellite Imagery for Estimating
and Mapping Urban Impervious Surfaces Using LSMA and ANN, IEEE Transactions on
Geoscience and Remote Sensing, Vol. 46, No. 8.
[9] Chin-Yuan Lien, Chien-Chuan Huang, Pei-Yin Chen, Yi-Fan Lin, An Efficient Denoising
Architecture for Removal of Impulse Noise in Images, IEEE Transactions on Computers (April 2013).
connected to N subcarriers and transmitted in parallel. An N-point IFFT carries out this operation,
producing an effective OFCDM symbol of N samples. The use of the IFFT guarantees orthogonality
among the subcarriers and gives computational efficiency.
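The IFFT mapping of chips onto orthogonal subcarriers, and the guarantee that they can be separated again, can be illustrated with a direct (slow) transform pair; a real modem would use a fast algorithm:

```python
# Direct IDFT/DFT pair illustrating the N-point IFFT step that maps
# N spread chips onto N orthogonal subcarriers. Written out for
# clarity; not an efficient implementation.
import cmath

def idft(chips):
    n = len(chips)
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in enumerate(chips)) / n
            for t in range(n)]

def dft(samples):
    n = len(samples)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, s in enumerate(samples))
            for k in range(n)]
```

Applying dft after idft recovers the chips exactly, which is the subcarrier orthogonality property the text relies on.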
In a realistic broadband channel, signal transmission takes place in the atmosphere and near
the ground, and a signal can travel from transmitter to receiver over multiple fading paths. Therefore,
the proposed system considers an Additive White Gaussian Noise (AWGN) channel and a Rayleigh
fading channel for simulation. Furthermore, noise present in the medium affects the signal and distorts
the information content. The channel simulation allows examination of the effects of noise and
multipath.
As described above, there are M_B = N/ST modulated symbols spread in the frequency domain
at the same time. Thus, the transmitted OFCDM data symbol on the m-th subcarrier of the i-th symbol
is expressed as,
(1)
where P is the signal power of the data code and the
is represented by,
(2)
(6)
where Δf denotes the frequency separation between the m-th and m'-th subcarriers, f_c is the
coherence bandwidth of the channel and (.)* represents the conjugate operation.
B. OFCDM Receiver
The receiver of the proposed OFCDM system is shown in Fig. 2. The received OFCDM signal
is processed by the receiver until the original data output is recovered. The signal received by the
receiver is usually corrupted by noise and channel distortion. The received OFCDM signal is
transformed from the time domain to the frequency domain.
Fig.2 Receiver Structure for OFCDM
An N-point FFT realizes this operation; the receiver carries out the reverse process of the
transmitter to decode the received OFCDM symbol. After the FFT block, the N chips associated with
the N subcarriers are obtained. The output of the FFT is accumulated to carry out channel estimation.
A time-domain spreading code is assigned to the pilot, so the channel estimator is realized by
summation in the time domain. The estimated channel fading is used for channel equalization, and
the signal is then passed to
Equation (10) shows that there is no interference from the data channels, owing to the
orthogonality of the time-domain spreading codes; Np is the noise variance term.
The channel estimation for the ST OFCDM symbols (i = 0, 1, ..., ST - 1) is given by,
where the channel estimation error has variance
To take advantage of frequency-domain orthogonality, frequency-domain despreading is
carried out first. The proposed system uses the same time-domain spreading code and different
frequency-domain spreading codes for the data channels. The KC interfering codes in F can be given
by K = KF ST, KF = 1, 2, 3, ..., KC. Then IT,0(m) is expressed as,
where IT,0(.) is the interference term from the KC code channels in T. The OFCDM signal is then
despread in the time domain,
(15)
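The time-domain despreading of equation (15) amounts to correlating the received chips with the desired channel's code. The codes and symbol values below are illustrative, not taken from the paper:

```python
# Despreading sketch: correlating the received chips with the desired
# channel's time-domain code removes the orthogonal interfering channel.

CODE_A = [1, 1, -1, -1]   # desired channel's spreading code (ST = 4)
CODE_B = [1, -1, 1, -1]   # orthogonal interfering code

def despread(chips, code):
    """Correlate and normalize: (1/ST) * sum(chips * code)."""
    return sum(r * c for r, c in zip(chips, code)) / len(code)
```

Correlating with CODE_A returns the desired symbol untouched, while the orthogonal channel contributes zero, which is exactly the no-interference property the text describes.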
Blind Minimum Mean Square Error (MMSE) detection is employed to recover the data symbol
on each code channel. The blind MMSE is given by,
where x is the received data symbol to be detected, Pk is the received data of the k-th user
and SFk is the spreading code of the k-th user.
C. Two Dimensional Spreading
As mentioned before, the total spreading factor of OFCDM with two-dimensional spreading
is Stot = ST x SF. The modulated symbol transmitted on the p-th (p = 0, ..., P - 1) code channel is
spread by a one-dimensional code of length Stot, denoted C(p) = {C0(p), C1(p), ..., CStot-1(p)} [13],
[15]. The Stot chips of the code consist of ST chips per subcarrier over SF subcarriers. A square matrix
with elements +1, -1 and size h whose distinct row vectors are mutually orthogonal is referred to as a
Hadamard matrix of order h. The Hadamard matrix H2n is a 2n x 2n matrix such that the first row of
the matrix contains all +1s and each of the other rows contains n entries of +1 and n entries of -1. The
rows of the Hadamard matrix are then mutually orthogonal.
The matrix H2n is formed by applying the construction to Hn recursively, down to H2. This
code family is completely orthogonal. For the simulation, Hadamard codes are chosen as the spreading
codes: a frequency-domain Hadamard spreading code of length SF and a corresponding time-domain
Hadamard spreading code of length ST, whose combination gives the two-dimensional code employed
by the p-th code channel.
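The recursive Sylvester construction referred to above can be sketched as:

```python
# Sylvester construction of H_{2n} from H_n: stack [H H; H -H],
# starting from H_1 = [[1]]. Rows serve as mutually orthogonal
# spreading codes.

def hadamard(order):
    """order must be a power of two."""
    h = [[1]]
    while len(h) < order:
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h
```

Distinct rows of hadamard(order) have zero dot product, which is the mutual orthogonality the spreading codes require.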
III. EXPERIMENTAL RESULTS AND DISCUSSIONS
Simulations are conducted to evaluate the performance of the proposed blind multiuser
detection for the OFCDM-based communication system using MATLAB 7.10, in an AWGN channel
and a Rayleigh fading channel, for 4, 8, 16 and 32 users with BPSK, QPSK and 16-QAM modulation
schemes [7, 8]. The impact of several parameters is also investigated. Experimental results show the
BER performance under different modulations, numbers of users and spreading factors. The
parameters employed in the simulation are shown in Table 1. The empirical results confirm that the
proposed method outperforms the alternatives and that its BER performance is superior with BPSK
compared to the other modulation schemes, suggesting that BPSK is the best choice for the OFCDM
system.
Table I Simulation Parameters
[Plot: BER vs Eb/No (dB), 10 to 20 dB, for 4, 8, 16 and 32 users; panels (a) BPSK, (b) QPSK, (c) 16 QAM]
Fig. 3 BER vs Eb/N0 by varying No. of Users at SF = 4
[Plot: BER vs Eb/No (dB) for 4, 8, 16 and 32 users; panel (a) BPSK]

[Plot: BER vs Eb/No (dB) comparing BPSK, QPSK and 16QAM; panels (b) 8 Users, (c) 16 Users, (d) 32 Users]
Fig. 5 BER vs Eb/N0 by varying Modulation at SF = 4
C. Impact of Spreading Factor
The spreading factor is one of the most important parameters in an OFCDM system. The BER
against Eb/N0 has been simulated for two different spreading factors. Fig. 7 shows how the performance
of OFCDM is affected by the spreading factor over the Rayleigh fading channel; SF is set to 4 and 8 in
this simulation. From Fig. 7 it is observed that the higher value of spreading factor gives better
performance because of its better interference rejection. For comparison, at Eb/N0 = 28 dB with 4
users, the BER for SF of 4 and 8 is 1.56 x 10^-6 and 3.13 x 10^-7 respectively. The BER therefore
improves as the SF increases, since the spreading codes can cancel correlated noise.
[Plot: BER vs Eb/No (dB), 10 to 30 dB, for SF = 4 and SF = 8; panels (a) 4 Users, (b) 32 Users]

Fig. 7 BER vs Eb/N0 by varying Spreading Factors
[Plot: BER vs Eb/No (dB) comparing Rayleigh and AWGN channels; panels (a) 4 Users, (b) 8 Users, (c) 16 Users, (d) 32 Users]
IV. Conclusion
This paper proposed a novel blind algorithm for multiuser detection of OFCDM in a frequency-
selective fading channel. The proposed system is simulated in MATLAB 7.10, and the performance
of the proposed blind multiuser detector is evaluated over various parameters: number of users,
modulation scheme, channel model and spreading factor. The comparison results make clear that the
proposed blind multiuser system gives better OFCDM performance over the Rayleigh fading channel
with more users and a larger spreading factor. The system also performs better with the BPSK
constellation than with QPSK or 16-QAM for more users and a large spreading factor. In future, the
algorithm can be tested with evolutionary algorithms for MIMO systems.
REFERENCES
[1] S. Verdu, Multiuser Detection, Cambridge, UK: Cambridge University Press, 1998.
[2] Yiqing Zhou, Tung-Sang Ng, Jiangzhou Wang, Kenichi Higuchi and Mamoru Sawahashi,
OFCDM: A Promising Broadband Wireless Access Technique, IEEE Communications Magazine,
Vol. 46, no. 3, pp. 38-49, March 2008.
[3] S. M. Zafi S. Shah, A. W. Umrani and Aftab A. Memon, Performance Comparison of OFDM,
MC-CDMA and OFCDM for 4G Wireless Broadband Access and Beyond, PIERS Proceedings,
Marrakesh, Morocco, pp. 1396-1399, 2011.
[4] Yiqing Zhou, Jiangzhou Wang and Mamoru Sawahashi, Downlink Transmission of Broadband
OFCDM Systems - Part I: Hybrid Detection, IEEE Transactions on Communications, Vol. 53, no.
4, pp. 718-729, April 2005.
[5] E. Gopalakrishna Sarma and Sakuntala S. Pillai, A Robust Technique for Blind Multiuser CDMA
Detection in Fading Channels, International Journal of Hybrid Information Technology, Vol. 4,
no. 2, pp. 13-22, 2011.
[6] Zheng Mao, Zheng Yu-Li and Yuan Ji-Bing, Improved Independent Component Analysis for Blind
[7] Sajjad Ahmed Ghauri, Sheraz Alam, M. Farhan Sohail, Asad Ali and Faizan Saleem,
Implementation of OFDM and Channel Estimation using LS and MMSE Estimators, International
Journal of Computer and Electronics Research, Vol. 2, no. 1, pp. 41-46, 2013.
[8] Nasaruddin, Melinda and Ellsa Fitria Sari, A Model to Investigate Performance of Orthogonal
Frequency Code Division Multiplexing, TELKOMNIKA, Vol. 10, no. 3, pp. 579-585, 2012.
[9] M. L. Honig, U. Madhow, and S. Verdu, Blind Adaptive Multiuser Detection, IEEE Trans. Inform.
Theory, Vol. 41, pp. 944-960, 1995.
[10] S. Roy, Subspace Blind Adaptive Multiuser Detection for CDMA, IEEE Transactions on
Communications, Vol. 48, no. 1, pp. 169-175, 2000.
[11] P. Shi, H. Li and M. Ren, Multiuser Detector Based on Blind Adaptive Kalman Filtering, Computer
Engineering and Applications, Vol. 48, pp. 131-134, 2012. (In Chinese)
[12] Y. Zhou, Tung-Sang Ng, J. Wang, K. Higuchi and M. Sawahashi, Downlink Transmission of
Broadband OFCDM Systems - Part V: Code Assignment, IEEE Transactions on Wireless
Communications, Vol. 7, Issue 11, pp. 4546-4557, 2008.
[13] Bin Hu, Lie-Liang Yang and Lajos Hanzo, Time- and Frequency-Domain-Spread Generalized
Multicarrier DS-CDMA Using Subspace-Based Blind and Group-Blind Space-Time Multiuser
Detection, IEEE Transactions on Vehicular Technology, Vol. 57, no. 5, pp. 3235-3241, 2008.
[14] Khalifa Hunki, Ehab M. Shahee, Mohamed Soliman and K. A. El-Barbary, Performance
Comparison of Blind Adaptive Multiuser Detection Algorithms, International Journal of Research
in Engineering and Technology, Vol. 2, Issue 11, pp. 454-461, 2013.
[15] Zia Muhammad and Zhi Ding, Blind Multiuser Detection for Synchronous High Rate Space-Time
Block Coded Transmission, IEEE Transactions on Wireless Communications, Vol. 10, no. 7, pp.
2171-2185, 2011.
I. INTRODUCTION
Smart phones provide sophisticated real-time sensor information for processing. Researchers
have studied a large number of sensors, such as the accelerometer, gyroscope, rotation vector and
orientation sensors, in human step-count projects. Of these, the accelerometer is the most valuable
non-transceiver sensor for activity monitoring, as it gives the most information about movement.
Therefore, the core focus of this system is on using solely the smart phone accelerometer for human
step counting. The motivation for MBS, in contrast to LBS, includes:
1. Adapting dynamically the types of mobility information services based upon the travel mode,
e.g., a pedestrian map triggered after detecting walking shows safer places to cross roads,
whereas a motorist map focuses more on main road routes.
2. Mobility-profile-driven social and societal behaviour analysis and change via gamification
and incentives, e.g., to promote low-carbon transportation modes and low-energy
transport usage.
3. Real-time human mobility profiling, such as determining the degree of physical exercise, the
usage patterns for types of public and private transport, low-carbon transport usage and the
time spent at a location. (This latter aspect can indirectly indicate human activities and even
personal preferences at that location; e.g., spending more time near one shop location rather
than another can indicate shopping and a greater user preference or interest for one shop as
compared to another.)
4. Human-activity-driven system control and optimization, e.g., switching off power-hungry
location sensors such as the GPS receiver and Wi-Fi when out of range, i.e., when travelling
in an underground train.
The accelerometer has three advantages over transceiver-based positioning sensors such
as GPS. First, a low energy consumption of about 60 mW. Second, there is no delay when starting the
accelerometer, whereas receiving position updates from GPS depends on the start mode: in a hot start
the Time-to-Subsequent-Fix is about 10 seconds, and in a cold start the Time-to-First-Fix can take
up to 15 minutes. Third, sensor readings are continuously available with the accelerometer, whereas
GPS and Wi-Fi can be thwarted by blocked satellite signals and by being out of range of Wi-Fi signals
respectively. Human movement classification using smart phones
requires a movement state recognition technique that can function regardless of the position of
the smart phone, because placing accelerometers on exact parts of the body is not practical for
real-world use. Acceleration information differs even for similar behaviour, making it harder to
finely discriminate between certain types of activity. Limits have been found in the range of movement
activities identifiable with a single sensor and, due to the complexity of human movement and the
noise of the sensor signal, activity classification algorithms tend to be probabilistic. Some researchers
have instead designed multi-modal sensor boards that concurrently capture information from many
sensors. A major challenge in the design of ubiquitous, context-aware smart phone applications is the
development of algorithms that can detect the person's activity from noisy and ambiguous sensor
information. ERSP, the Energy-efficient Real-time Smart phone Pedometer, is an Android-based smart
phone application for accurately counting a person's steps. The novelty of this research compared to
existing systems is that ERSP extracts five features and uses an energy-efficient lightweight statistical
model to process the accelerometer information in real time, with no need for noise filtering,
and works regardless of the smart phone's on-body placement and orientation.
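A minimal, orientation-independent step counter in the spirit of ERSP can be sketched by thresholding the acceleration magnitude; the threshold and the sample stream below are illustrative, not values from the paper:

```python
# Step-detection sketch: count upward crossings of the acceleration
# magnitude over a threshold. Using the vector norm makes the result
# independent of device orientation.
import math

THRESHOLD = 11.0   # m/s^2, just above gravity; tuning is device-specific

def count_steps(samples):
    """samples: (ax, ay, az) tuples; one step per upward crossing."""
    steps, above = 0, False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > THRESHOLD and not above:
            steps += 1
            above = True
        elif mag <= THRESHOLD:
            above = False
    return steps
```

The hysteresis flag prevents one prolonged spike from being counted as several steps.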
2. RELATED WORK
Takamasa Higuchi, Hirozumi Yamaguchi and Teruo Higashino proposed a novel social
navigation framework, called PCN, that leads users to their friends in a crowd of neighbors. PCN
provides the relative positions of surrounding people based on sensor readings and Bluetooth RSS,
both of which can be easily obtained from off-the-shelf mobile phones. Through a field experiment at
a real trade fair, they demonstrated that PCN improves positioning accuracy by 31% compared to a
conventional approach, owing to its context-supported error correction mechanism. Furthermore, they
showed that the geometrical clusters in the estimated positions are highly consistent with actual activity
groups, which helps users to easily identify actual nearby people.
Emiliano Miluzzo, Nicholas D. Lane, Kristof Fodor, Ronald Peterson, Mirco Musolesi,
Shane B. Eisenman, Xiao Heng, Hong Lu and Andrew T. Campbell presented the implementation,
evaluation and user experiences of the CenceMe application, one of the first applications to
automatically retrieve and publish sensed presence to social networks using Nokia N95 mobile
phones. They described a complete system implementation of CenceMe with its performance
assessment, and discussed a number of significant design decisions needed to resolve limitations
encountered when trying to deploy an always-on sensing application on a commercial mobile phone.
They also presented the results of a long-lived experiment in which CenceMe was used by 22 users
for a three-week period, discussed the user study and lessons learned from the deployment, and
highlighted how the application might be improved going forward.
Jialiu Lin, Yi Wang, Murali Annavaram, Quinn A. Jacobson, Jason Hong, Bhaskar
Krishnamachari and Norman Sadeh presented the design, implementation and evaluation of an
Energy Efficient Mobile Sensing System (EEMSS). The core of EEMSS is a sensor management
scheme for mobile devices that operates sensors hierarchically, selectively turning on the minimum
set of sensors needed to monitor the user's state and triggering a new set of sensors when necessary
to detect state transitions. Energy consumption is reduced by shutting down unnecessary sensors at
any particular time. EEMSS was implemented on Nokia N95 devices, using the sensor management
scheme to manage the built-in sensors of the N95, including GPS, Wi-Fi detector, accelerometer and
microphone, in order to recognize a person's daily activities. They also proposed and implemented
novel real-time classification algorithms for accelerometer and microphone readings that lead to good
performance. Finally, they evaluated EEMSS with 10 users from two universities and were able to
provide a high level of accuracy for state recognition and acceptable state-transition detection latency,
as well as more than a 75% gain in device lifetime compared to an existing system.
Donnie H. Kim, Jeffrey Hightower, Ramesh Govindan and Deborah Estrin proposed Place
Sense, which provides a significant improvement in the ability to discover and recognize places.
Precision and recall with Place Sense are 89% and 92%, versus 82% precision and 65% recall for the
previous state-of-the-art BeaconPrint approach. Because it uses response rate to select representative
beacons and suppresses the influence of infrequent beacons, Place Sense's accuracy gains are
particularly noticeable in challenging radio environments where beacons are inconsistent and coarse.
Place Sense also finds
The Kalman filter stands out for the algorithm's ability to efficiently compute accurate
estimates of the true value from noisy measurements. The accelerometer readings give reasonably
precise information for movement detection, and for this reason the Kalman filter algorithm is well
suited to filtering the Gaussian process and aiding real-time estimation of a person's movement state.
There is also no need to retain historical measurements and estimates, as only the present estimate
and its confidence level
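A scalar Kalman filter matching this description keeps only the current estimate and its variance; the process and measurement variances below are illustrative assumptions, not values from the text:

```python
# Minimal scalar Kalman filter for smoothing noisy accelerometer
# readings under the Gaussian-noise assumption. Only the current
# estimate and its variance are kept, so no history is stored.

def kalman_step(est, var, meas, process_var=1e-3, meas_var=0.25):
    """One predict/update cycle; returns (new_estimate, new_variance)."""
    var += process_var                  # predict: uncertainty grows
    gain = var / (var + meas_var)       # Kalman gain
    est += gain * (meas - est)          # update toward the measurement
    var *= (1 - gain)                   # confidence improves
    return est, var
```

Repeatedly feeding measurements into kalman_step converges the estimate toward the true value while the variance settles at a small steady-state level.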
[3] F. A. Levinzon, Fundamental noise limit of piezoelectric accelerometer, IEEE Sensors J., vol. 4, no. 1, pp. 108-111, Feb. 2004.
[4]
[5] H. Zeng and M. D. Natale, An efficient formulation of the real-time feasibility region for design optimization, IEEE Trans. Comput., vol. 62, no. 4, pp. 644-661, Apr. 2013.
[6] G. Hache, E. D. Lemaire, and N. Baddour, Movement change-of-state detection using a smartphone-based approach, in Proc. IEEE Int. Workshop Med. Meas. Appl., 2010, pp. 43-46.
[7] A. M. Khan, Y.-K. Lee, S. Y. Lee, and T.-S. Kim, Human activity recognition via an accelerometer-enabled smartphone using kernel discriminant analysis, in Proc. 5th Int. Conf. Future Inf. Technol., 2010, pp. 1-6.
[8]
[9] T. O. Oshin, S. Poslad, and A. Ma, A method to evaluate the energy-efficiency of wide-area location determination techniques used by smartphones, in Proc. 15th IEEE Int. Conf. Comput. Sci. Eng., 2012, pp. 326-333.
I. INTRODUCTION
A microgrid has an AC bus and a DC bus, interconnected through a tie-line DC-AC converter. The AC bus is connected to wind power plants, a pico-hydro plant, local AC loads, and to the electricity grid with an islanding scheme. Power quality on the AC bus has to be maintained in both modes of operation of the microgrid (islanded and non-islanded). Sudden islanding from the utility grid creates significant voltage disturbances on the AC bus. The AC bus has grid-tie inverters, AC-DC-AC converters and conventional synchronous generators as the sources supplying dynamic real-power loads as well as reactive-power loads. Supplying reactive power reduces the maximum amount of real power that the sources can deliver, resulting in poor utilization of their capacity. This creates the need for a dynamic reactive-power source on the AC bus. STATCOM and SVC are both Flexible AC Transmission System (FACTS) devices that can be used to address the described problem. A STATCOM has better response time and better transient stability than an SVC, which makes the STATCOM an ideal choice for the microgrid. This paper describes the modelling and optimization of a STATCOM on the AC bus of a microgrid. The paper begins by explaining the STATCOM as a potential solution to voltage fluctuations and reactive power demand on the AC bus, and then deals with the control strategies required for the operation of the STATCOM.
II. VOLTAGE FLUCTUATION PROBLEM ON AC BUS
In the non-islanded mode of operation, in the absence of a STATCOM, excess local reactive power demand is supplied by the utility grid. Sudden transients in the reactive power demand are taken care of by the utility grid and the AC bus voltage is maintained. However, in the islanded mode of operation, in the absence of a STATCOM, the reactive power demand is supplied entirely by the converters of the power sources, such as wind power plants and solar plants, and by the conventional synchronous generators of the pico-hydro plants. With limited capability to supply the reactive power demand, the islanded AC bus of the microgrid shows drastic voltage fluctuations. This creates the need for an AC-bus voltage-regulating control system to be embedded in the STATCOM.
III. DESIGN OF STATCOM
A. Power Circuit
The power circuit contains the main DC-AC conversion topology. It consists of three parallel legs, each leg consisting of two IGBTs (FGA25N120NTD), which are switched using the driver circuit.
A driver circuit is interfaced with the power circuit to ensure the required driving characteristics of the IGBTs. The IGBTs are switched at a frequency of 2 kHz. This leads to high voltage spikes across the switch due to circuit inductance, and also to ringing. To eliminate this, an RC snubber circuit is used in the STATCOM circuit. When the switch opens, the snubber eliminates the voltage transients and ringing, as it provides an alternate path for the current flowing through the circuit's intrinsic leakage inductance. It also dissipates the energy in the resistor, thereby reducing the junction temperature.
B. Control System
The STATCOM includes a 2-level voltage source inverter with a capacitor bank in the DC link. The voltage source inverter is driven by three-phase SPWM waves. The SPWM waves are equipped with dead-band programming between the high-side and low-side IGBTs. Frequency, power angle and voltage magnitude of the STATCOM can all be controlled by controlling the SPWM waves. The STATCOM is synchronized to the utility grid using synchronizing control systems. These include:
1) Frequency control:
A feedback of the grid line-to-line voltage is fed to the frequency measurement unit. The measured frequency is then given to the SPWM generator. The response time of the frequency control system is crucial to avoid power instability.
2) Phase-lock control system:
Feedback of the grid voltage is fed to the SPWM generator, and the SPWM is held in a constant phase relation (power angle) with respect to the grid voltage. The reference given to the phase control decides the real-power transaction with the grid.
3) Charging and maintaining capacitor voltage:
With no active source on the DC side, the DC-link capacitor is charged by consuming real power from the grid (fig. 3).
For positive VAR (supply of reactive power), the STATCOM voltage has to be higher than the grid voltage; increasing the modulation index of the SPWM waves serves this purpose. Reactive power flow out of the STATCOM can be directly controlled by controlling the modulation index of the SPWM waves. The actual control systems are configured to maintain the AC bus voltage at the specified reference, which itself is done indirectly by controlling the modulation index.
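The modulation-index-to-VAR relation can be sketched with the textbook expression for reactive power exchanged across a coupling reactance; all numbers (DC-link voltage, grid voltage, reactance) are illustrative assumptions, not taken from the paper.

```python
import math

def statcom_q(m, vdc=800.0, vgrid=230.0, x=2.0):
    """Approximate per-phase reactive power exchanged by a STATCOM.
    m: SPWM modulation index, vgrid: grid phase voltage (RMS),
    x: coupling reactance (ohm). For sinusoidal PWM the fundamental phase
    voltage is m*vdc/2 peak, i.e. m*vdc/(2*sqrt(2)) RMS. Positive Q means
    VARs are supplied to the grid."""
    vs = m * vdc / (2.0 * math.sqrt(2.0))  # inverter fundamental, RMS
    return vgrid * (vs - vgrid) / x        # Q > 0 when inverter voltage is higher

# Raising the modulation index lifts the inverter voltage above the grid,
# turning the STATCOM from a VAR consumer into a VAR source.
for m in (0.7, 0.813, 0.9):
    print(m, round(statcom_q(m)))
```

Around m = 0.813 the inverter fundamental matches the grid voltage and the VAR exchange crosses zero, which is why the voltage-regulation loop acts on the modulation index.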
IV. SIMULATION
The STATCOM is simulated using a 2-level voltage source inverter in MATLAB Simulink (fig. 6). The SPWM generator block generates the gating pulses required for the 6 IGBTs. The SPWM generator block has inputs of modulation-index control and phase angle (delta). Modulation-index control is used for control of the AC bus voltage, and delta control for control of the DC-bus capacitor voltage. PID controllers are used for control of the AC bus voltage and the capacitor voltage in the DC link by setting the references accordingly. Controlling the AC bus voltage automatically controls the demand of reactive power. An LCL filter filters the output waveform. Terminals DC2 and DC4 are connected to the DC-link capacitor. Terminals AC1, AC2 and AC3 are the output terminals after the filtering action. The model is simulated in discrete mode; the ode45 equation solver is used and the sampling time is 5e-6 s.
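A minimal sketch of the voltage loop described above, using a discrete PI controller (the paper uses PID) driving the modulation index of a crude first-order bus model; gains, limits and the plant constants are illustrative assumptions, not the paper's values.

```python
class PI:
    """Discrete PI controller with simple anti-windup, regulating the AC bus
    voltage by adjusting the SPWM modulation index (illustrative sketch)."""
    def __init__(self, kp, ki, ts, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, ref, meas):
        err = ref - meas
        self.integral += err * self.ts
        u = self.kp * err + self.ki * self.integral
        if not (self.out_min <= u <= self.out_max):
            # Clamp the output and undo the integration (anti-windup).
            self.integral -= err * self.ts
            u = min(max(u, self.out_min), self.out_max)
        return u

# Hypothetical first-order AC-bus model: RMS voltage follows the inverter
# fundamental (283 V at m = 1) with a 10 ms time constant.
ts, tau = 5e-6, 0.01
ctrl = PI(kp=0.001, ki=0.2, ts=ts)
v = 200.0
for _ in range(40000):              # simulate 0.2 s at the 5e-6 s sample time
    m = ctrl.step(230.0, v)         # hold the bus at a 230 V RMS reference
    v += (283.0 * m - v) * ts / tau
print(round(v, 1))
```

The integral term settles the modulation index at the value that holds the bus at the reference, mirroring how the simulated controller regulates the AC bus indirectly through the modulation index.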
VII. CONCLUSION
A STATCOM is designed for the reactive power compensation of the microgrid and AC bus voltage regulation. The STATCOM is simulated along with the microgrid in MATLAB to observe and improve the transient response, using multiple microcontrollers interfaced with a personal computer via a USART communication interface.
REFERENCES
[1] I. J. Balaguer et al., Control for grid-connected and intentional islanding operations of distributed power generation, IEEE Trans. Ind. Electron., vol. 58, no. 1, Dec. 2010.
[2] R. Majumder, Reactive power compensation in single-phase operation of microgrid, IEEE Trans. Ind. Electron., vol. 60, no. 4, Nov. 2012.
[3]
[4]
[5] J. Alnasseir, Theoretical and experimental investigations on snubber circuits for high voltage valves of FACTS equipment for over-voltage protection, Master Thesis, Erlangen, 2007.
[6] Pranesh Rao and M. L. Crow, STATCOM control for power system voltage control applications, IEEE Trans. Power Delivery, vol. 15, no. 4, Oct. 2000.
[7]
[10] W. S. Meyer and H. W. Dommel, Numerical modelling of frequency-dependent transmission-line parameters in an electromagnetic transients program, IEEE Trans. Power Apparatus and Systems.
M. Prodanovic and T. Green, High-quality power generation through distributed control of a power park microgrid, IEEE Trans. Ind. Electron., vol. 53, no. 5, pp. 1471-1482, 2006.
[13] S. Goldwasser, S. Micali, and R. Rivest, A digital signature scheme secure against adaptive chosen-message attacks, SIAM Journal on Computing, vol. 17, no. 2, pp. 281-308, 1988.
[14] E. Figueres, G. Garcera, J. Sandia, F. Gonzalez-Espin, and J. Rubio, Sensitivity study of the dynamics of three-phase photovoltaic inverters with an LCL grid filter, IEEE Trans. Ind. Electron., vol. 56, no. 3, pp. 706-717, 2009.
[15]
[16]
[17] R. H. Lasseter and P. Piagi, Microgrid: A conceptual solution, in Proc. Power Electronics Specialists Conf., vol. 6, pp. 4285-4290, 2004.
Abstract: In wireless sensor networks (WSNs), energy efficiency is considered a crucial issue due to the limited battery capacity of the sensor nodes. Considering the usually random characteristics of the deployment and the number of nodes deployed in the environment, an intrinsic property of WSNs is that the network should be able to operate without human intervention for an adequately long time. In existing systems various hierarchical approaches have been experimented with, each of which suffers from overhead, hotspot and flooding problems. In this paper we propose Ring Routing, an energy-efficient mobile-sink routing protocol that aims to minimize this overhead while preserving the advantages of mobile sinks. The technique forms a ring of nodes from the available regular nodes. The ring is formed at a certain radius from the centre, from the nodes closest to the circle defined by that radius. The location of the sink node is found through a ring node and is shared among all the ring nodes. The source node then forwards its data to the sink through the anchor nodes. The proposed system achieves higher performance and lifetime and lower delay compared with the existing system.
Key words: Anchor node, wireless sensor networks, hotspot, flooding, mobile sinks, energy efficiency.
I. INTRODUCTION
1.1 INTRODUCTION ABOUT WIRELESS SENSOR NETWORK
A Wireless sensor network is a group of specialized transducers with a communications
infrastructure intended to monitor and record conditions at diverse locations. Commonly monitored
parameters are temperature, humidity, pressure, wind direction and speed, illumination intensity,
vibration intensity, sound intensity, power-line voltage, chemical concentrations, pollutant levels and
vital body functions.
1.2 OVERVIEW
A sensor network consists of multiple detection stations called sensor nodes, each of which
is small, lightweight and portable. Every sensor node is equipped with a transducer, microcomputer,
transceiver and power source. The transducer generates electrical signals based on sensed physical
effects and phenomena. The microcomputer processes and stores the sensor output. The power for
each sensor node is derived from the electric utility or from a battery. These problems are addressed by a mobile sink [1], [6], [7], [10].
3. PROPOSED METHOD
The proposed system introduces Ring Routing, a hierarchical routing protocol for wireless sensor networks with a mobile sink. The protocol imposes three roles on sensor nodes: (i) ring nodes, (ii) regular nodes, and (iii) anchor nodes. The three roles are not fixed, meaning that sensor nodes can change their functions while operating in the wireless sensor network. The location of the sink node is found through a ring node and is shared among all the ring nodes. The source node then forwards its data to the sink.
3.1 RING ROUTING WITH SINGLE MOBILE SINK
Ring Routing establishes a ring structure. The ring is formed at a certain radius from the centre node. Because the ring is formed from the nodes with higher energy, it can easily be changed. The ring consists of a closed strip of nodes with a node-width of one; these are called the ring nodes. The shape of the ring need not be perfect as long as it forms a closed loop. After the deployment of the WSN, the ring is initially constructed by the following mechanism: an initial radius for the ring is determined. The nodes within a certain threshold of the circle defined by this radius and the centre of the network are determined to be ring-node candidates. Starting from a certain node (e.g. the node closest to the leftmost point on the ring) and proceeding by geographic forwarding in a certain direction (clockwise or counter-clockwise), the ring nodes are selected in a greedy manner until the starting node is reached and the closed loop is complete. If the starting node is not at a reachable distance, the procedure is repeated with a different choice of neighbours at each hop. If after a certain number of trials the ring cannot be formed, the radius is set to a different value and the procedure above is repeated. The advantages of this approach are given.
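The construction procedure above can be sketched in a simplified geometric model. This is not the paper's exact algorithm: the band width, radio range and the quarter-turn angular-progress heuristic are all illustrative assumptions.

```python
import math

def build_ring(nodes, center, radius, band=8.0, comm_range=12.0):
    """Greedy ring-construction sketch. nodes: list of (x, y) positions.
    Candidates lie within `band` of the circle of `radius` around `center`;
    starting from the leftmost candidate we hop counter-clockwise to the
    in-range neighbour making the least angular progress, until the loop
    closes back at the start. Returns the ring or None on failure."""
    cx, cy = center

    def off_ring(p):                      # distance from the ideal circle
        return abs(math.hypot(p[0] - cx, p[1] - cy) - radius)

    def angle(p):                         # polar angle around the centre
        return math.atan2(p[1] - cy, p[0] - cx)

    cand = [p for p in nodes if off_ring(p) <= band]
    if not cand:
        return None
    start = min(cand, key=lambda p: p[0])  # node closest to the leftmost point
    ring, cur = [start], start
    while True:
        # Neighbours in radio range that advance counter-clockwise (< 90 deg).
        step = [p for p in cand if p not in ring
                and math.dist(p, cur) <= comm_range
                and 0 < (angle(p) - angle(cur)) % (2 * math.pi) < math.pi / 2]
        if not step:
            # The loop is closed if the start node is back within range.
            ok = math.dist(cur, start) <= comm_range and len(ring) > 2
            return ring if ok else None
        cur = min(step, key=lambda p: (angle(p) - angle(cur)) % (2 * math.pi))
        ring.append(cur)

# Hypothetical deployment: 24 nodes evenly spaced on a 50 m circle.
nodes = [(50 * math.cos(math.radians(a)), 50 * math.sin(math.radians(a)))
         for a in range(0, 360, 15)]
ring = build_ring(nodes, (0.0, 0.0), 50.0, band=8.0, comm_range=14.0)
print(len(ring))
```

On this idealized layout every deployed node becomes a ring node and the loop closes after one full counter-clockwise pass; in a real deployment the retry-with-different-neighbours and retry-with-different-radius steps from the text would wrap this core loop.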
4. ARCHITECTURE DIAGRAM
6. CONCLUSION
A novel mobile-sink routing protocol, Ring Routing, is proposed by considering both the benefits and the drawbacks of the existing protocols in the literature. Ring Routing is a hierarchical routing protocol based on a virtual ring structure which is designed to be easily accessible and easily reconfigurable. The design requirement of the protocol is to mitigate the anticipated hotspot problem observed in hierarchical routing approaches and to minimize the data-reporting delays considering the various mobility parameters of the mobile sink. The performance of Ring Routing is evaluated extensively by simulations conducted in the network simulation environment. A wide range of different scenarios with varying network sizes and sink-speed values are defined and used. Comparative performance-evaluation results of Ring Routing against two efficient mobile-sink protocols, LBDD and Railroad, which are also implemented in NS-2, are provided. The results show that Ring Routing is indeed an energy-efficient protocol which extends the network lifetime. The reporting delays are confined within reasonable limits, which proves that Ring Routing is suitable for time-sensitive applications.
In the future, Ring Routing can be modified to support multiple mobile sinks, and a clustering approach can be used in large wireless sensor networks to mitigate traffic and congestion problems.
REFERENCES
[1] M. Buettner, G. V. Yee, E. Anderson, and R. Han, X-MAC: A short preamble MAC protocol for duty-cycled wireless sensor networks, in Proc. 4th Int. Conf. Embedded Networked Sensor Syst. (SenSys '06), New York, NY, USA: ACM, 2006, pp. 307-320.
[2]
[3]
[4]
M. Di Francesco, S. K. Das, and G. Anastasi, Data collection in wireless sensor networks with mobile elements: A survey, ACM Trans. Sens. Netw., vol. 8, no. 1, pp. 1-31, 2011.
[5]
A. Gopakumar and L. Jacob, Localization in wireless sensor networks using particle swarm optimization, in Proc. IET Int. Conf. Wireless, Mobile Multimedia Netw., 2008, pp. 227-230.
[7]
R. Jaichandran, A. Irudhayaraj, and J. Raja, Effective strategies and optimal solutions for hot spot problem in wireless sensor networks (WSN), in Proc. 10th Int. Conf. Inf. Sci. Signal Process. Appl., 2010, pp. 389-392.
[8]
I. Kang and R. Poovendran, Maximizing static network lifetime of wireless broadcast ad hoc networks, in Proc. IEEE Int. Conf. Commun., vol. 3, 2003, pp. 2256-2261.
[9]
W. Liang, J. Luo, and X. Xu, Prolonging network lifetime via a controlled mobile sink in wireless sensor networks, in Proc. IEEE Global Telecommun. Conf., 2010, pp. 1-6.
[10] K. Lin, M. Chen, S. Zeadally, and J. J. Rodrigues, Balancing energy consumption with mobile agents in wireless sensor networks, Future Generation Comput. Syst., vol. 28, no. 2, pp. 446-456, 2012.
[11] C.-J. Lin, P.-L. Chou, and C.-F. Chou, HCDD: Hierarchical cluster-based data dissemination in wireless sensor networks with mobile sink, in Proc. Int. Conf. Wireless Commun. Mobile Comput., 2006, pp. 1189-1194.
[12]
X. Li, J. Yang, A. Nayak, and I. Stojmenovic, Localized geographic routing to a mobile sink with guaranteed delivery in sensor networks, IEEE J. Sel. Areas Commun., vol. 30, no. 9, pp. 1719-1729, Sep. 2012.
[13] J. Luo and J.-P. Hubaux, Joint mobility and routing for lifetime elongation in wireless sensor networks, in Proc. 24th Annu. Joint Conf. IEEE Comput. Commun. Soc. (INFOCOM), vol. 3, 2005, pp. 1735-1746.
[14] D. Moss and P. Levis, BoX-MACs: Exploiting physical and link layer boundaries in low-power networking, Stanford Univ., Stanford, CA, USA, Tech. Rep. SING-08-00, 2008.
[15] D. Niculescu, Positioning in ad hoc sensor networks, IEEE Netw., vol. 18, no. 4, pp. 24-29, Jul. 2004.
[16] S. Oh, Y. Yim, J. Lee, H. Park, and S.-H. Kim, Non-geographical shortest path data dissemination for mobile sinks in wireless sensor networks, in Proc. IEEE Veh. Technol. Conf., Sep. 2011, pp. 1-5.
[17] S. Olariu and I. Stojmenovic, Design guidelines for maximizing lifetime and avoiding energy holes in sensor networks with uniform distribution and uniform reporting, in Proc. IEEE INFOCOM, 2006, pp. 1-12.
[18]
J. Rao and S. Biswas, Network-assisted sink navigation for distributed data gathering: Stability and delay-energy trade-off, Comput. Commun., vol. 33, no. 2, pp. 160-175, 2010.
[19]
Z. Wang, S. Basagni, E. Melachrinoudis, and C. Petrioli, Exploiting sink mobility for maximizing
sensor networks lifetime, in Proc. 38th Annu. Hawaii Int. Conf. Syst. Sci., 2005, p. 287.
[20] Y. Yun and Y. Xia, Maximizing the lifetime of wireless sensor networks with mobile sink in delay-tolerant applications, IEEE Trans. Mobile Comput., vol. 9, no. 9, pp. 1308-1318, Jul. 2010.
I. INTRODUCTION
In recent days, PV power generation has gained importance due to its numerous advantages: it is fuel-free, requires very little maintenance, and has environmental benefits. To improve energy efficiency, it is important to always operate a PV system at its maximum power point. Partial shading on a photovoltaic (PV) string comprising multiple modules/substrings triggers issues such as a significant reduction in power generation and the occurrence of multiple maximum power points (MPPs), including a global MPP and local MPPs, which encumber MPP-tracking algorithms. A single-switch voltage equalizer using multi-stacked SEPICs is proposed to resolve the partial-shading issues. The single-switch topology can considerably simplify the circuitry compared with conventional equalizers requiring multiple switches in proportion to the number of PV modules/substrings.
One of the biggest reliability issues of PV systems is the difference between the expected and actual power outputs. This problem can be called PV mismatch. It can have many sources, and the one addressed in this paper is the partial shading of PV modules. Many authors have proposed ideas to mitigate the effects related to partial shading. Solutions range from alternative interconnections among the PV modules within a plant to PV-module-embedded power electronics (PE) applications.
In photovoltaic (PV) energy systems, PV modules are often connected in series for increased
string voltage; however, I-V characteristics mismatches often exist between series connected PV
modules, typically as a result of partial shading, manufacturing variability and thermal gradients.
Since all modules in a series string share the same current, the overall output power can be limited
by underperforming modules. A bypass diode is often connected in parallel with each PV module to
mitigate this mismatch and prevent PV hot spotting, but the efficiency loss is still significant when
only a central converter is used to perform MPPT on the PV string.
II. PROBLEMS OF THE PARTIAL SHADING IN PV MODULES
A PV module is partially shaded when the light cast upon some of its cells is obstructed by
some object, creating a shadow. In this paper, a shadow is considered to have a shape and opacity. The
opacity of the shadow is called shading factor (SF), varying from zero to one. An SF of zero means
that all the available irradiance shines on the PV module. On the contrary, an SF of one means that all
available irradiance is filtered by the shadow before reaching the PV module. The shape of the shadow
is determined by its length and width. The number of shaded cells or cell groups connected in parallel
determines the width of the shadow. Its length represents the number of shaded cells or cell groups
connected in series. The cells composing the PV modules studied in this work are all considered to be connected in series; thus, their shadows have no width. The shaded PV cells will produce less current than the others, which leads to a mismatch:
1) The other cells impose their current over the shaded cells, making them work under negative
voltage, dissipating power and risking destruction.
2) The MPPT will track the current of the shaded cells, imposing it over the others and making
them produce less energy.
To protect the shaded cells from being destroyed and to minimize losses in power production,
PV modules are equipped with bypass diodes. They prevent the shaded cells from working under
reverse voltage by short-circuiting them, thus allowing the other cells to work at their normal current.
However, bypass diodes deform the I-V curves while activated, interfering with the MPPT. This makes tracking the true MPP impossible for simple algorithms.
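The effect can be reproduced with a toy model of a PV string; the diode parameters, cell counts and irradiance levels below are illustrative assumptions, not measured values.

```python
import math

def substring_v(i, iph, i0=1e-9, n_cells=20, vt=0.026):
    """Voltage of one PV substring at string current i (ideal-diode model);
    its bypass diode clamps it near -0.5 V once i exceeds the substring
    photocurrent iph."""
    arg = (iph - i) / i0 + 1.0
    if arg <= 0.0:
        return -0.5
    return max(n_cells * vt * math.log(arg), -0.5)

def string_power(iph_list, steps=500):
    """Sweep the shared string current and return the power samples."""
    imax = max(iph_list)
    return [sum(substring_v(i, iph) for iph in iph_list) * i
            for i in (imax * k / steps for k in range(1, steps))]

# One of three substrings shaded to 40% irradiance: the power curve
# develops two local maxima, one with the bypass diode conducting.
powers = string_power([8.0, 8.0, 3.2])
peaks = sum(1 for j in range(1, len(powers) - 1)
            if powers[j - 1] < powers[j] > powers[j + 1])
print(peaks)
```

The two maxima are exactly the global/local MPP ambiguity described above: a simple hill-climbing tracker can lock onto whichever peak it finds first.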
III. SINGLE-SWITCH VOLTAGE EQUALIZER USING SEPIC
Fig. 2
Fig. 4. (a) Transformed and (b) simplified circuits of the SEPIC-based voltage equalizer
Based on Kirchhoff's current law in Fig. 3, the average current of Li, ILi, equates to that of Di, IDi, because the average current of Ci must be zero under a steady-state condition. The equalization current supplied to PVi, Ieq-i, is
Ieq-i = ILi = IDi. (1)
Therefore, both IL1-IL3 and ID1-ID3 are dependent on partial-shading conditions.
As mentioned in Section III, owing to the ac coupling of C1-C3, all inductors L1-L3 are driven by the same asymmetric square-wave voltage, although the stacked CLD filters are at different dc voltage levels. Since the stacked CLD filters are ac-coupled, the series-connected substrings can be equivalently separated and grounded as shown in Fig. 4(a), in which a dc voltage source VString, equivalent to the sum of VPV1-VPV3, powers the equalizer. In this transformed voltage equalizer, both the CLD filters and PV1-PV3 are connected in parallel. Accordingly, the transformed circuit shown in Fig. 4(a) can be simplified to the equivalent circuit shown in Fig. 4(b), which is identical to the traditional SEPIC shown in Fig. 2. This allows the overall operation of the proposed voltage equalizer to be analyzed and expressed.
In the simplified equivalent circuit, the current of Ltot, iLtot, equates to the sum of iL1-iL3:
iLtot = iL1 + iL2 + iL3. (2)
The total of Ieq1-Ieq3, Ieq-tot, is
Fig. 5. Current flow paths in the periods of (a) Ton, (b) Toff-a, and (c) Toff-b (for CCM only).
The key operation waveforms and current flow paths in DCM under the partially shaded condition are shown in Figs. 5 and 6, respectively.
A. DCM Operation
During the on-period, Ton, all inductor currents, iLin and iLi, linearly increase and flow through the switch Q. iL1-iL3 flow through C1-C3 and Cout1-Cout2. The lower the position of Cout1-Cout3, the higher the current that tends to flow; the current of Cout1, iCout1, shows the largest amplitude. For example, iL2 only flows through Cout1, whereas iL3 flows through both Cout1 and Cout2. Thus, currents flowing through the upper smoothing capacitors are superimposed on the lower ones.
As Q is turned off, the operation moves to the Toff-a period. Diodes D1-D3 start conducting, and the inductor currents linearly decline. Energies stored in L1-L3 in the previous Ton period are discharged to the respective smoothing capacitors in this mode. iLin is distributed to the C1-D1 to C3-D3 branches and flows toward Cout1-Cout3. Depending on the shading conditions, some diodes that correspond to slightly shaded or unshaded substrings cease to conduct sooner than the others. For example, iD3 reaches zero sooner than iD1 and iD2, as can be seen in Fig. 8. After iD3 declines to zero, iL3 flows toward C1 and C2 through C3. This Toff-a period lasts until all diode currents decrease to zero, with all inductor currents linearly decreasing. Similar to the Ton period, higher current tends to flow through the smoothing capacitors in the lower place due to the current superposition.
The Toff-b period begins once all diodes cease to conduct. Since the applied voltages of all inductors in this period are zero, all currents, including iCi, remain constant.
The duty cycle of the Toff-a period, Da, is given by
Da = D * Vstring / (VPVi + VD), (7)
where D is the duty cycle of Q, and VD is the forward voltage drop of the diodes.
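Equation (7) is the inductor volt-second balance of the equivalent SEPIC, and a quick numeric check makes the DCM constraint visible; the voltages and duty cycle below are illustrative, not values from the paper.

```python
def toff_a_duty(d, v_string, v_pv, v_d=0.7):
    """Da from volt-second balance: D*Vstring during the on-period must
    equal Da*(VPVi + VD) during Toff-a. All numbers are illustrative."""
    da = d * v_string / (v_pv + v_d)
    # DCM requires the diode current to reach zero before the next cycle.
    assert d + da < 1.0, "D + Da must stay below 1 for DCM"
    return da

# e.g. three hypothetical 10 V substrings (Vstring = 30 V) and D = 0.2:
print(round(toff_a_duty(0.2, 30.0, 10.0), 3))
```

Because Vstring is roughly three times VPVi here, Da comes out close to 3D, so the switch duty cycle must be kept small for the converter to stay in DCM.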
The input is around 19.4 V, common to all the panels, and the output boost voltage of 44 V represents the boost-mode operation of the proposed converter in ideal mode.
B. INPUT AND OUTPUT POWER RESPONSE:
Fig. 11 shows the input and output power characteristics of the converter.
Fig. 12 shows the compensation-current patterns of the proposed equalizer; the equalization current is at 0. In ideal conditions all the configurations are equal, which exhibits equal input currents and provides a zero compensation current.
Fig. 13 shows the diode currents of the proposed equalization technique under uniform radiation, with an equalization current of around 8 A.
Case 2:
In this mode the equalization pattern of the proposed equalizer is studied by changing the irradiation of the solar panel, which places the panel under partial shading; panel 1 is partially shaded, and the voltage compensation is done by adjusting the duty ratio according to the voltage variations of the panel.
VII. CONCLUSION
Single-switch voltage equalizers for partially shaded PV modules have been proposed in this project. The single-switch topology can simplify the circuitry compared with conventional DPP converters and voltage equalizers requiring numerous switches in proportion to the number of series-connected PV substrings/modules.
H. J. Bergveld, D. Buthker, C. Castello, T. Doorn, A. D. Jong, R. V. Otten, and K. D. Waal, Module-level dc/dc conversion for photovoltaic systems: the delta-conversion concept, IEEE Trans. Power Electron., vol. 28, no. 4, pp. 2005-2013, Apr. 2013.
[3] S. Qin and R. C. N. Pilawa-Podgurski, Sub-module differential power processing for photovoltaic applications, in Proc. IEEE Applied Power Electron. Conf. Expo., 2013, pp. 101-108.
[4] S. Qin, S. T. Cady, A. D. Dominguez-Garcia, and R. C. N. Pilawa-Podgurski, A distributed approach to MPPT for PV sub-module differential power processing, in Proc. IEEE Energy Conversion Congr. Expo., 2013, pp. 2778-2785.
[5]
L. F. L. Villa, T. P. Ho, J. C. Crebier, and B. Raison, A power electronics equalizer application for partially shaded photovoltaic modules, IEEE Trans. Ind. Electron., vol. 60, no. 3, pp. 1179-1190, Mar. 2013.
[6] J. T. Stauth, M. D. Seeman, and K. Kesarwani, Resonant switched-capacitor converters for sub-module distributed photovoltaic power management, IEEE Trans. Power Electron., vol. 28, no. 3, pp. 1189-1198, Mar. 2013.
[7]
J. Du, R. Xu, X. Chen, Y. Li, and J. Wu, A novel solar panel optimizer with self-compensation for partial shadow condition, in Proc. IEEE Applied Power Electron. Conf. Expo., 2013, pp. 92-96.
[8]
S. Poshtkouhi, V. Palaniappan, M. Fard, and O. Trescases, A general approach for quantifying the benefit of distributed power electronics for fine-grained MPPT in photovoltaic applications using 3-D modeling, IEEE Trans. Power Electron., vol. 27, no. 11, pp. 4656-4666, Nov. 2012.
[9] M. Uno and K. Tanaka, Single-switch cell voltage equalizer using multistacked buck-boost converters operating in discontinuous conduction mode for series-connected energy storage cells, IEEE Trans. Veh. Technol., vol. 60, no. 8, pp. 3635-3645, Oct. 2011.
[10] V. Eng and C. Bunlaksananusorn, Modeling of a SEPIC converter operating in discontinuous conduction mode, in Proc. 6th ECTI-CON, May 2009, pp. 140-143.
R.Senthilkumar
Assistant Professor of Electrical and Electronics Engineering
M.Kumarasamy College of Engineering, Karur
Senthilkumarr.eee@mkce.ac.in
Abstract: This paper proposes an implementation of a single-phase seven-level grid-connected inverter for photovoltaic systems using an FPGA-based pulse-width-modulated control process. Three reference signals, identical to each other but offset by the amplitude of the triangular carrier, were used to generate the PWM signals. The inverter is able to produce seven output voltage levels (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc/3, -2Vdc/3, -Vdc) from the dc supply voltage. A digital PI current-control algorithm was implemented in a Xilinx XC3S250E FPGA to keep the current injected into the grid sinusoidal. The proposed system was designed and verified through simulation and implemented in a prototype.
Key words: Grid Connected, Modulation index, Multi level inverter, Photo Voltaic (PV) system,
Pulse Width modulation (PWM), Total harmonic distortion (THD).
I. INTRODUCTION
The ever-increasing energy consumption, the soaring cost and exhaustible nature of fossil fuels, and the worsening environment have created a booming interest in renewable energy generation systems, one of which is photovoltaics. Such a system generates electricity by converting the sun's energy directly into electricity. Energy generated by a photovoltaic system can be delivered to power system networks through grid-connected inverters.
A single-phase grid-connected inverter is usually used for residential or low-power applications in power ranges below 10 kW [1]. Types of single-phase grid-connected inverters have been investigated [2]. A common topology of this inverter is the full-bridge three-level type. The three-level inverter can satisfy the specifications through its very high switching frequency, but this unfortunately increases the switching losses, acoustic noise, and level of interference to other equipment. Improving its output waveform reduces its harmonic content and hence also the size of the filter used and the level of electromagnetic interference (EMI) generated by the inverter's switching operation [3].
Multilevel inverters are guaranteed to have nearly sinusoidal output-voltage waveforms, output current with a better harmonic profile, less stress on electronic components owing to decreased voltages, switching losses lower than those of conventional two-level inverters, a smaller filter size, and lower EMI, all of which make them attractive.
A multilevel power converter structure has been introduced as an alternative in high power and
medium power situations. A multilevel converter not only assures high power ratings, but also enables
the ease of usage for renewable energy sources such as photovoltaic, fuel cells and wind, can be easily
interfaced to a multilevel converter system for a high power application.
The term multilevel begun with the three level, subsequently, several multilevel converter
topologies has been developed over the years. However, the elementary concept of a multilevel converter to
achieve higher power is to use a series power semiconductor switches with several lower voltage DC sources
to perform the power conversion by synthesizing a staircase voltage waveform. Batteries, Capacitors, and
renewable energy voltage sources can be used as the multiple DC sources in order to achieve high voltage at
the output; however, the calculated rated voltage of the power semiconductor switches depends only upon
the rating of the DC voltage sources to which they are connected.
A multilevel converter offers several advantages over a conventional three-level inverter that uses
high-switching-frequency pulse-width modulation (PWM).
The proposed inverter's operation can be divided into seven switching states, shown in Fig. 2(a)
to 2(g). Fig. 2(a), (d), and (g) show a conventional inverter's operational states in sequence, while Fig.
2(b), (c), (e), and (f) show the additional states of the proposed inverter, synthesizing the one-third and
two-thirds levels of the dc-bus voltage. The required seven levels of output voltage are generated as follows:
1) Maximum positive output (Vdc): S1 is ON, connecting the load positive terminal to Vdc, and S4 is ON,
connecting the load negative terminal to ground. The remaining controlled switches are OFF; the voltage
applied to the load terminals is Vdc. Fig. 2(a) shows the current paths that are active at this stage.
2) Two-thirds positive output (2Vdc/3): The bidirectional switch S5 is ON, connecting the load positive
terminal to the 2Vdc/3 tap, and S4 is ON, connecting the load negative terminal to ground. The remaining
controlled switches are OFF; the voltage applied to the load terminals is 2Vdc/3. Fig. 2(b) shows
the current paths that are active at this stage.
3) One-third positive output (Vdc/3): The bidirectional switch S6 is ON, connecting the load positive
terminal to the Vdc/3 tap, and S4 is ON, connecting the load negative terminal to ground. The remaining
controlled switches are OFF; the voltage applied to the load terminals is Vdc/3. Fig. 2(c) shows the
current paths that are active at this stage.
4) Zero output: This level can be produced by two switching combinations: switches S3 and S4 are
ON, or S1 and S2 are ON, with the remaining controlled switches OFF. Terminal ab is short-circuited
and the voltage applied to the load terminals is zero. Fig. 2(d) shows the current paths
that are active at this stage.
5) One-third negative output (-Vdc/3): The bidirectional switch S5 is ON, connecting the load positive
terminal to the 2Vdc/3 tap, and S2 is ON, connecting the load negative terminal to Vdc. The remaining
switches are OFF; the voltage applied to the load terminals is -Vdc/3. Fig. 2(e) shows the current paths
that are active at this stage.
6) Two-thirds negative output (-2Vdc/3): The bidirectional switch S6 is ON, connecting the load positive
terminal to the Vdc/3 tap, and S2 is ON, connecting the load negative terminal to Vdc. The remaining
controlled switches are OFF; the voltage applied to the load terminals is -2Vdc/3. Fig. 2(f) shows
the current paths that are active at this stage.
7) Maximum negative output (-Vdc): S2 is ON, connecting the load negative terminal to Vdc, and S3 is ON,
connecting the load positive terminal to ground. The remaining controlled switches are OFF; the voltage
applied to the load terminals is -Vdc. Fig. 2(g) shows the current paths that are active at this stage.
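The seven states above can be summarised as a lookup from ON-switch combinations to output levels. The following sketch assumes the switch names S1-S6 from the text and a unit dc-bus voltage; it is an illustration of the state table, not the authors' control code.

```python
# Seven switching states of the proposed inverter: each entry lists the
# switches that are ON and the resulting output level as a fraction of Vdc.
STATES = {
    "max_pos":       ({"S1", "S4"},  1.0),
    "two_third_pos": ({"S5", "S4"},  2/3),
    "one_third_pos": ({"S6", "S4"},  1/3),
    "zero":          ({"S3", "S4"},  0.0),   # or {"S1", "S2"}
    "one_third_neg": ({"S5", "S2"}, -1/3),
    "two_third_neg": ({"S6", "S2"}, -2/3),
    "max_neg":       ({"S2", "S3"}, -1.0),
}

def output_level(on_switches, vdc=1.0):
    """Return the load voltage for a given set of ON switches."""
    for switches, level in STATES.values():
        if on_switches == switches:
            return level * vdc
    raise ValueError("unrecognised switch combination")

print(output_level({"S5", "S4"}))  # 0.6666... (two-thirds of Vdc)
```

Note that S5 yields 2Vdc/3 with S4 (negative terminal grounded) but -Vdc/3 with S2 (negative terminal at Vdc), since 2Vdc/3 - Vdc = -Vdc/3; the same arithmetic gives -2Vdc/3 for S6 with S2.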
TABLE I
OUTPUT VOLTAGE ACCORDING TO THE SWITCHES' ON-OFF CONDITION
The phase angle of the device depends on the modulation index Ma. Theoretically the modulation index is
Ma = Am/Ac, where Ac is the peak-to-peak value of the carrier signal and Am is the peak-to-peak value of the
voltage reference signal Vref.
When the modulation index is more than 0.33 and less than 0.66, the phase-angle displacement is given by
VI. CONCLUSION
Using a Xilinx FPGA to generate the PWM provides the flexibility to modify the designed circuit without
altering the hardware. Concurrent operation requires less hardware; easy and fast circuit modification,
especially low cost for complex circuitry, and rapid prototyping make it the most favourable choice for
PWM generation. Analysis of the simulation and experimental results confirms that the harmonic
distortion of the output current waveform fed by the inverter to the grid is within the stipulated limits
laid down by the utility companies; the THD is lower than that of conventional five- and three-level
inverters. All the above advantages make the inverter configuration highly suitable for grid-connected
photovoltaic applications (5 kW).
REFERENCES
1. M. Calais and V. G. Agelidis, "Multilevel converters for single-phase grid connected photovoltaic
systems: An overview," in Proc. IEEE Int. Symp. Ind. Electron., 1998, vol. 1, pp. 224-229.
2. S. B. Kjaer, J. K. Pedersen, and F. Blaabjerg, "A review of single-phase grid-connected inverters for
photovoltaic modules," IEEE Trans. Ind. Appl., vol. 41, no. 5, pp. 1292-1306, Sep./Oct. 2005.
3. P. K. Hinga, T. Ohnishi, and T. Suzuki, "A new PWM inverter for photovoltaic power generation
system," in Conf. Rec. IEEE Power Electron. Spec. Conf., 1994, pp. 391-395.
4.
5. M. Saeedifard, R. Iravani, and J. Pou, "A space vector modulation strategy for a back-to-back
five-level HVDC converter system," IEEE Trans. Ind. Electron., vol. 56, no. 2, pp. 452-466, Feb. 2009.
6.
7. J. Rodríguez, J. S. Lai, and F. Z. Peng, "Multilevel inverters: A survey of topologies, controls, and
applications," IEEE Trans. Ind. Electron., vol. 49, no. 4, pp. 724-738, Aug. 2002.
I. INTRODUCTION
Today, over two billion people around the world use the Internet for browsing the Web, sending
and receiving e-mails, accessing multimedia content and services, playing games, social networking
and many other tasks. From the saying "a world where things can automatically communicate to
computers and each other, providing services to the benefit of human kind", it is predictable that,
within the next decade, the Internet will exist as a seamless tool joining classic networks and
networked objects. The Internet of Things (IoT) is a network of networks in which a massive number of
objects/things/sensors/devices are connected through a communication and information infrastructure
to provide value-added services [10]. The Internet of Things allows people and things to be connected
Anytime, Anyplace, with Anything and Anyone, ideally using Any path/network and Any service [10]
[11]. The innovation of IoT will be enabled by embedding electronics into everyday physical
objects, making them smart and letting them integrate and operate within the physical infrastructure.
Over the last thirty years, energy demand has increased hugely in the residential as well as the
industrial sector; electricity demand in the EU-27 increased by 70% between 1980 and 2008 [1].
Therefore, creating intelligent home energy management systems that are able to save energy
while meeting user preferences has become an interesting research topic. Due to their relatively
low cost, wireless nature, flexibility and easy deployment, wireless sensor networks represent a
promising technology for providing such systems. This ability to control usage is called Demand
Side Management (DSM). Thus the system furnishes the need for a heterogeneous information-fusion
technology of IoT in the smart home [8]. DSM plays a major role in reducing the electricity usage cost
by altering the system load shape [12].
In the study of dynamic DSM, different techniques and algorithms have been proposed, where
the basic idea has been to reduce the energy bill under the time-of-use (TOU) tariff
incentives offered by the utility [13]. In the study of appliance scheduling, the smart home aims to
offer the appropriate services to the user based on the residents' lifestyle [9].
IoT builds on three pillars, related to the abilities of smart objects: (i) to be identifiable, (ii) to
communicate and (iii) to interact, either among themselves, building networks of interconnected objects,
or with end-users. The three characteristics of IoT are (i) Anything communicates: smart things have
the ability to wirelessly communicate among themselves and form networks of interconnected objects;
(ii) Anything identifiable: smart things are identified with a digital name, and relationships among things
can be specified in the digital domain whenever a physical interconnection cannot be established; and
(iii) Anything interacts: smart things can interact with the local environment through sensing and
actuation capabilities whenever present [11].
This paper is organised into six sections. Section II describes the objective of the proposed smart
home system, explained in terms of three sectors, namely automation, monitoring and control.
Section III describes the working of the smart home system and its efficiency in saving energy
compared to existing appliances. Section IV discusses the cloud storage used in this work.
Conclusions and Acknowledgements are in Sections V and VI respectively.
II. THE PROPOSED SMART HOME SYSTEM
In our day-to-day life, situations arise where it is difficult to control the home appliances:
when no one is available at home, when the user is far away from home, or when the user
leaves home forgetting to switch off some appliances, which leads to unnecessary wastage of power
and may even lead to accidents. Sometimes, one may also want to monitor the status of the household
appliances while staying away from home. In all the above cases the presence of the user is required to
monitor and control the appliances, which is not possible all the time.
This shortcoming can be eliminated by connecting the home appliances to the user via some
medium. Connectivity can be established with the help of GSM, the Internet, Bluetooth or ZigBee.
Connecting the devices via the Internet is reliable, so that the remote user can monitor and control the
home appliances from anywhere, at any time, around the world. This increases the comfort of the user
by connecting the user with the appliances at home: to monitor the status of the home appliances
through a mobile app, to control the appliances from any corner of the world, to understand the power
consumption of each appliance and to estimate the tariff well in advance.
A. AUTOMATION SYSTEM
The appliances are classified according to the nature of their operation and control. Appliances like
a geyser or toaster have to be switched ON/OFF at particular time intervals, and for efficient utilization
the device has to be switched ON and OFF appropriately. An RTC-based system can
perform this control precisely, which enhances the appliance's life and saves power. When the
loaded time matches the real time, the controller turns ON the appliance, and
similarly, when the duration is over, the controller turns OFF the appliance. Thus the appliances are
controlled as per the time schedule defined by the user.
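The RTC match-and-duration logic above reduces to a simple window test. The sketch below is illustrative: times are in minutes since midnight, and the geyser schedule is an assumed example, not one from the paper.

```python
# RTC-based scheduling: an appliance is ON exactly when the current time
# lies inside the window [start, start + duration), mirroring the
# "match turns ON, elapsed duration turns OFF" behaviour described above.
def appliance_state(now_minutes, start_minutes, duration_minutes):
    """Return True (ON) if `now` falls inside the scheduled window."""
    return start_minutes <= now_minutes < start_minutes + duration_minutes

# Example: geyser scheduled at 06:30 for 45 minutes.
start = 6 * 60 + 30
print(appliance_state(6 * 60 + 40, start, 45))  # True  (inside window)
print(appliance_state(7 * 60 + 20, start, 45))  # False (duration over)
```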
Some appliances need to work only during human presence. In this proposed work, human
movements are detected using a PIR sensor and the necessary automation is performed. The desired light
intensity in the room can be established using a smart lamp. The lamp must switch ON and OFF
only during human presence, which is implemented using a motion-sensor-based lighting system. This
lighting system's performance is improved by switching the lamps only when there is insufficient
natural light, so that they will not switch ON during daytime; this is done by including an LDR in the
system. The desired ambient light intensity is set by varying the brightness of the lamp using PWM
techniques, which helps in energy saving. By proper positioning of the LDR, the light intensity of the
room can be maintained as shown in Fig. 1.
B. MONITORING SYSTEM
After leaving home a few metres away, the user may have doubts regarding the status of the
appliances at home. In such cases, returning home and checking the status is not difficult; when
the distance extends to a few miles, however, returning becomes tedious.
Thus the PIR sensors installed at specific points in the home sense the location of the user,
following the location-awareness system of [9], which makes use of a floor-mapping algorithm
for a single user. The intended usage of the PIR sensor is to detect human presence. The Light
Dependent Resistor (LDR), placed at suitable locations, determines the luminance at the location
and sends the value to the control system for further interpretation. Thus the LDR enhances
the feature cited in [3] by using sensors to detect environmental factors and adjust to the levels
desired by the user. The smart plug corresponds to the work presented in [14]; these smart plugs
enhance the scheduling of devices as well as the control of the power consumption of lamps
such as LED lamps.
The device status has to be monitored periodically, and when the user sends a request, the status
of the appliance is presented. The status monitoring of the appliances can be realized with the help of
the flow chart shown in Fig. 2. When the device is in standby mode, as soon as the PIR sensor detects
human presence the controller reads the room luminance value and compares it with the prefixed
value. Based on the result, the lamp is either switched ON with the computed brightness or switched
OFF, and the cycle continues.
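One pass of this monitoring cycle can be sketched as follows. The set point of 300 lux and the proportional dimming rule are illustrative assumptions; the paper's flow chart (Fig. 2) defines only the presence check and luminance comparison.

```python
PREFIXED_LUX = 300  # assumed luminance set point

def lamp_duty(pir_detected, measured_lux, prefixed_lux=PREFIXED_LUX):
    """Return a PWM duty cycle in [0, 1]; 0.0 means the lamp stays OFF.

    Lamp is OFF when nobody is present or ambient light already meets
    the set point; otherwise brightness covers the lighting shortfall.
    """
    if not pir_detected or measured_lux >= prefixed_lux:
        return 0.0
    return (prefixed_lux - measured_lux) / prefixed_lux

print(lamp_duty(True, 150))   # 0.5 -> half brightness
print(lamp_duty(True, 400))   # 0.0 -> enough daylight, lamp OFF
print(lamp_duty(False, 50))   # 0.0 -> nobody present
```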
C. CONTROL SYSTEM
Remote control of the appliances can also be performed: when the user sends a command,
the appliances are switched ON or OFF accordingly. Sometimes a discrepancy arises where no
device is connected, or the existing device is faulty; in such cases an open circuit prevails. With
the help of a current-sensing mechanism, fault detection can be performed by sensing the current
flowing to the appliance with a current sensor, as shown in Fig. 3. This helps in further saving
energy.
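The open-circuit check above amounts to comparing the commanded state with the sensed current. The 0.05 A no-load threshold below is an illustrative assumption, not a value from the paper.

```python
NO_LOAD_THRESHOLD_A = 0.05  # assumed minimum current of a healthy load

def check_fault(commanded_on, measured_current_a):
    """Flag an open circuit: appliance commanded ON but drawing no current."""
    if commanded_on and measured_current_a < NO_LOAD_THRESHOLD_A:
        return "open-circuit fault: disconnect supply"
    return "ok"

print(check_fault(True, 0.0))   # fault flagged -> supply can be cut
print(check_fault(True, 1.8))   # normal load current
```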
Fig.4: Simulation showing the relation between power consumed and LUX.
As the ambient light intensity increases, the brightness of the lights is adjusted and hence the power
consumption is reduced. The simulation results relating the light intensity to the power consumption
of the LED lights are depicted in Fig. 4. A user-friendly environment is created at home with the
help of an LCD and keypad to let the user enter the starting time and the duration for which the
device has to remain switched ON; the same can be provided through the mobile application at the
user end. Hence an LDR-based adaptive lighting system is used to save power by varying the PWM
for LED lamps rather than using fluorescent lamps with fixed power consumption.
Table 1: Power and LUX comparison between LED using LDR and Fluorescent Lamp
Fig.5 Results showing the comparison between LED and Fluorescent lamps
The results tabulated in Table 1 were obtained by placing the LEDs at a distance of about 2 feet
from the fluorescent lamp, facing each other. About 60 LEDs were used in the setup. The power
consumed by the LEDs at an illuminance of 2600 lux is approximately 7 W, whereas the power
consumed by the fluorescent lamp is 40 W, a saving of 33 W (33 Wh for every hour of operation).
The difference in power consumption between the LED lamp and the fluorescent lamp observed
with the test setup is shown in Fig. 5.
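The saving reported above can be checked with a small worked calculation; the daily usage of 5 hours is an assumed figure for illustration only.

```python
# Power saved by replacing a 40 W fluorescent lamp with ~7 W of LEDs
# at comparable illuminance, as measured in the test setup above.
fluorescent_w, led_w = 40, 7
saving_w = fluorescent_w - led_w        # instantaneous power saved
hours_per_day = 5                       # assumed daily usage
print(saving_w)                         # 33 (W)
print(saving_w * hours_per_day)         # 165 (Wh saved per day)
```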
Fig.7 Status of the Air conditioner logged into the cloud channel
[2] Lin Liu, Yang Liu, Lizhe Wang, Albert Zomaya, and Shiyan Hu, "Economical and Balanced Energy
Usage in the Smart Home Infrastructure: A Tutorial and New Results," IEEE, 2015.
[3] Dae-Man Han and Jae-Hyun Lim, "Smart Home Energy Management System using IEEE 802.15.4
and ZigBee," 2010.
[4] Mingfu Li and Hung-Ju Lin, "Design and Implementation of Smart Home Control Systems Based
on Wireless Sensor Networks and Power Line Communications," IEEE Transactions on Industrial
Electronics, Vol. 62, No. 7, July 2015.
[5] Charith Perera, Chi Harold Liu and Srimal Jayawardena, "The Emerging Internet of Things
Marketplace From an Industrial Perspective: A Survey," IEEE, 2015.
[6] Rahul Godha, Sneh Prateek, Nikhita Kataria, "Home Automation: Access Control for IoT Devices,"
International Journal of Scientific and Research Publications, Volume 4, Issue 10, October 2014.
[7]
[8] Baoan Li, Jianjun Yu, "Research and Application on the Smart Home Based on Component
Technologies and Internet of Things."
[9] Suk Lee, Kyoung Nam Ha, Kyung Chang Lee, "A Pyroelectric Infrared Sensor-based Indoor
Location-Aware System for the Smart Home."
[10] Luigi Atzori, Antonio Iera, "The Internet of Things: A Survey," 2010.
[11] Daniel Miorandi, Sabrina Sicari, "Internet of Things: Vision, Applications and Research Challenges," 2012.
I. INTRODUCTION
With the development of urbanization and the popularization of the automobile, problems
associated with road traffic congestion, frequent traffic accidents, and the low efficiency of road
transport have become increasingly serious [1]. To alleviate these problems, Driver
Assistance Systems (DAS) were designed to help or even substitute for human drivers and enhance
driving safety [2,3]. Such a system films the road in its natural scene using a camera
mounted inside the vehicle, and this information is processed in real time by the
associated circuit system. The system then provides information, such as warnings and tips, to the
driver. This can greatly reduce driving risks and enhance road traffic and the driver's personal safety.
Strictly complying with the traffic rules improves vehicle safety and effectively reduces traffic
accidents. A variety of important traffic signs placed along the road by the traffic department
communicate and support road traffic rules for the driver [4]. Traffic signs are designed
to help drivers with piloting tasks while providing information such as the maximum or minimum
speed allowed, the shape of the road, and any forbidden manoeuvres. Therefore, recognition of traffic
signs is one of the important tasks of the DAS in Intelligent Transportation.
The fast detection and accurate identification of traffic signs are of great significance for automatic
vehicles, and a sharp image is one of the preconditions for correctly recognizing a traffic sign.
However, the relative motion between the camera and the natural scene during the exposure
time usually produces motion-blurred images, which severely affect the image's visual quality, and it is
a challenge to quickly and accurately identify traffic signs in motion-blurred images. There are two
main approaches to this problem. First, by improving the performance of the camera, motion blur
can be avoided from the hardware perspective; however, technology bottlenecks limit the camera's
performance. The second approach is to enhance and restore the motion-blurred images by means of a
motion-blurred image restoration algorithm, and there is still more that can be done in this field to
enhance image quality.
Nowadays, traffic sign recognition has also made great progress. The Hough transformation
and a multi-frame validation method were used by Gonzalez and Garrido [6]. A system based on
In this model, the output is calculated by means of the following formula [23]:

g(x,y) = f(x,y) * h(x,y) + n(x,y),   (1)

where g(x,y) is the blurred image, f(x,y) is the undegraded image, n(x,y) is system noise, h(x,y)
is the point spread function (PSF), and * denotes convolution in the spatial domain. Since convolution
in the spatial domain is equivalent to multiplication in the frequency domain, the frequency-domain
representation of Eq (1) is G(u,v) = F(u,v)H(u,v) + N(u,v).
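As a numerical check of this model, the sketch below (an illustration, not the paper's code) verifies that multiplying the spectra reproduces circular convolution with the PSF; the 3x3 averaging PSF and image size are arbitrary choices.

```python
import numpy as np

f = np.random.rand(64, 64)            # undegraded image
h = np.zeros_like(f)
h[:3, :3] = 1.0 / 9.0                 # small averaging PSF (illustrative)

# Frequency-domain product G = F * H (noise term omitted).
G = np.fft.fft2(f) * np.fft.fft2(h)
g = np.real(np.fft.ifft2(G))          # blurred image back in space

# Circular convolution check: pixel (2, 2) must equal the mean of the
# 3x3 neighbourhood f[0:3, 0:3] covered by the PSF.
print(np.allclose(g[2, 2], f[0:3, 0:3].mean()))  # True
```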
Motion-blur restoration involves reversing the image degradation process and adopting the
inverse process to obtain clear images. Motion blur is one case featured in the model of
Lin et al. [11]. The model assumes that the target or camera moves at a certain speed and direction,
covering a distance s during the exposure time T. Neglecting the effect of noise, this can be
expressed by the formula

g(x,y) = (1/T) ∫0^T f(x - x(t), y - y(t)) dt,

where x(t) and y(t) are the time-varying components of motion in the x-direction and
y-direction.
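Discretising this integral for uniform linear motion gives averaging over n shifted copies of the scene, i.e. convolution with a line-shaped PSF along the motion direction. The 1-D horizontal case below is an illustrative sketch (circular shifts, direction 0 degrees), not the authors' implementation.

```python
import numpy as np

def motion_blur_1d(row, n):
    """Average each pixel over n samples shifted along x (circular),
    the discrete analogue of g = (1/T) * integral of f(x - x(t)) dt."""
    return sum(np.roll(row, k) for k in range(n)) / n

row = np.zeros(16)
row[8] = 1.0                      # a single bright pixel
blurred = motion_blur_1d(row, 4)  # smeared over 4 pixels
print(np.count_nonzero(blurred))  # 4
```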
Fig 2
It can be clearly observed that motion causes the border's regularity to change. Fig 3(a) shows the
original border before it became motion blurred; Fig 3(b) shows the border after motion in the 0°
direction.
From Eq 11 and Fig 4, we can see that when a ≤ x ≤ a+n, g(x) grows from 0 to 1; when
a+n ≤ x ≤ b, the value of g(x) is 1; and when b < x < b+n, g(x) decreases from 1 to 0. In the image the
saturation of the pixels in the middle of the border is the highest, and it decreases gradually from the
middle to the edge of the border. The threshold is set at 1, i.e. the region a+n ≤ x ≤ b, and we take this
as the width of the sequence after blurring. That is to say, if the width of the pixels whose saturation
equals 1 in the original sequence is d, and the scale of motion is n, the width following motion is d' = d+n.
When considering the entire border, the width of the border along the motion direction would
apparently change, while the width of the border perpendicular to the motion direction changes little.
Finally, the width between the two directions changes gradually, which is shown in Fig 5.
Fig 5
Binary image of motion-blurred border segmented by a certain threshold.
Given that the circle is isotropic, no matter which direction the image is blurred in, the width of
the border changes in the same way, which makes it possible to determine both the blur direction
and scale. Specifically, after measuring the width in all directions, two maximum values (dmax)
and two minimum values (dmin) can be extracted. Connecting the two groups of points gives two
perpendicular lines, and the blur direction is the direction of the minimum-value line.
We can also obtain the scale from the width of the border. Assuming that the width of the border
before the motion is d, it is easy to see that the maximum value is dmax = d and that the minimum
value is dmin = d-n, so the scale is n = dmax-dmin. However, the result is easily affected by the
threshold determination, so the results should be corrected. An appropriate coefficient (K) is introduced to the
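The direction-and-scale estimate described above can be sketched on synthetic width measurements. The 10-degree sampling, the original width d = 5, the scale n = 3 and the cosine width profile are all illustrative assumptions (widths shrink most along the motion direction, here 0 degrees); the K correction is omitted.

```python
import math

def estimate_blur(widths_by_angle):
    """widths_by_angle: dict mapping angle (deg) -> measured border width.
    Blur direction = angle of minimum width; scale n = dmax - dmin."""
    dmax = max(widths_by_angle.values())
    dmin = min(widths_by_angle.values())
    direction = min(widths_by_angle, key=widths_by_angle.get)
    return direction, dmax - dmin

# Synthetic measurements: d = 5, blur of n = 3 along the 0-degree direction,
# so the thresholded width is narrowest (d - n) along the motion.
widths = {a: 5 - 3 * abs(math.cos(math.radians(a))) for a in range(0, 180, 10)}
direction, scale = estimate_blur(widths)
print(direction, round(scale))  # 0 3
```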
Fig 7
Fig 8
Measurement result of motion-blurred traffic sign.
Following this, we can determine the two extreme points (the maximum point and the
minimum point); the direction of the minimum point is the motion direction. To make the
results more precise, we use the direction of the maximum point to correct any errors.
Conclusions
This paper proposed a new method to measure two important parameters, the direction and scale
of motion-blurred traffic signs, in the spatial domain. The method is robust, and it can reduce the
impact of changing illumination on parameter extraction. Using the measured parameters to restore the
motion-blurred traffic sign images, we obtained good results that meet the system's requirements
for image recognition. The results illustrate that the method can deal with recognition problems
associated with motion-blurred traffic sign images. Compared with methods based on the frequency
domain, the impact of noise on parameter extraction is much smaller. In conclusion, application of the
algorithm offers an advantage in traffic sign recognition. The method can improve the performance
of the DAS and help to improve automatic driving and road safety.
As for future work, we will continue to investigate this subject by providing a more detailed
background of the problem, and we will work to improve the robustness of border extraction with
features more suitable for reducing the effects of the environment.
References
1. Ji RR, Duan LY, Chen J, Yao HX, Yuan JS, Rui Y, et al. Location discriminative vocabulary coding
for mobile landmark search. International Journal of Computer Vision, 2012; 96: 290-314.
2. De la Escalera A, Armingol JM, Pastor JM, Rodríguez FJ. Visual Sign Information Extraction
and Identification by Deformable Models for Intelligent Vehicles. IEEE Transactions on Intelligent
Transportation Systems, 2004; 5: 57-68.
3. Gao Y, Tang JH, Hong RC, Dai QH, Chua TS, Jain R. W2Go: a travel guidance system by automatic
4.
5. Gonzalez A, Garrido MA, Llorca DF, Gavilan M, Fernandez JP, Alcantarilla PF, et al. Automatic
Traffic Signs and Panels Inspection System Using Computer Vision. IEEE Transactions on Intelligent
Transportation Systems, 2011; 12: 485-499.
6.
7. Khan JF, Bhuiyan SMA, Adhami RR. Image Segmentation and Shape Analysis for Road-Sign
Detection. IEEE Transactions on Intelligent Transportation Systems, 2011; 12: 83-96.
8. Barnes N, Zelinsky A, Fletcher LS. Real-Time Speed Sign Detection Using the Radial Symmetry
Detector. IEEE Transactions on Intelligent Transportation Systems, 2008; 9: 322-332.
9. Stallkamp J, Schlipsing M, Salmen J, Igel C. The German Traffic Sign Recognition Benchmark: A
multi-class classification competition. In Proceedings of the IEEE International Joint Conference on
Neural Networks, 2011; 1453-1460.
10. Lin HT, Tai YW, Brown MS. Motion Regularization for Matting Motion Blurred Objects. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 2011; 33: 2329-2336. doi: 10.1109/
TPAMI.2011.93 [PubMed]
11. Jiang XY, Cheng DC, Wachenfeld S, Rothaus K. Motion Deblurring. Available: http://cvpr.unimuenster.de/teaching/ws04/seminarWS04/downloads/MotionDeblurring-Ausarbeitung.pdf.
12. Fang C, Fuh CS, Chen SW, Yen PS. A road sign recognition system based on dynamic visual
model. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003; 1:
750-755.
13. Fleyeh H, Davami E. Eigen-based traffic sign recognition. IET Intelligent Transport Systems, 2011;
5: 190-196.
I. INTRODUCTION
In mobile ad hoc networks (MANETs), the movement of nodes can partition the network, so that
nodes in one partition cannot access data held by nodes in other partitions. File replication is a good
solution for improving file availability in distributed systems. By replicating a file at mobile nodes
that are not the owner of the source file, file availability is improved, because there are multiple
replicas in the network and the probability of finding one copy of the file is higher. File replication
can also reduce query delay, since mobile nodes can obtain the file from a nearby replica. However,
most mobile nodes have only a limited amount of memory, radio range, and power, so it is difficult
for one node to collect and hold all the files; these constraints, together with the independence of
nodes in MANETs, cause file unavailability for requesters. When a mobile node replicates only part
of the files, there is a trade-off between query delay and file availability.
MANETs vary significantly from wired networks in network topology, network configuration
and network resources. Features of MANETs are a dynamic topology due to host movements,
partitioning of the network due to unreliable communication, and minimal resources such as limited
power and limited memory capacity [1, 2]. File sharing is one of the important functionalities to be
supported in MANETs; without this facility, the performance and usefulness of a MANET are greatly
reduced [3]. A typical example where file sharing is important is a conference where several users
share their presentations while discussing a particular issue; it is also applicable in defence
applications, rescue operations, disaster management, etc. The method used for file sharing depends
heavily on the features of the MANET [3]. Repeated network partitions due to host movements or
limited battery power reduce file availability in the network. To overcome this unavailability, the
replication technique addresses these problems so that files are available at all times in the network.
File replication
File replication is a technique that improves file availability by creating copies of a file, and it
allows better file sharing; it is a key approach for achieving high availability. File replication has
been widely used to maximize file availability in distributed systems, and we apply this technique
to MANETs. It is suitable for minimizing the response time of access requests, for distributing the
processing load of these requests over several servers, and for avoiding overload of the transmission
paths to a single server, even as replica access patterns vary over time.
PERFORMANCE
Each technique was evaluated in simulation tests on NS-2. We examine the hit rates and average
delays of the four protocols.
We used the following metrics in the experiments:
Hit Rate
The number of requests successfully served by either original files or replica files.
Average delay
The average completion time of all requests that finish execution; the delay is calculated from the
throughput and the completion of the requests.
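The two metrics can be computed from a request log as below. The log entries are synthetic, illustrative values: each records whether the request was served (by an original file or a replica) and, if so, its completion delay in seconds.

```python
requests = [
    {"hit": True,  "delay": 0.8},
    {"hit": True,  "delay": 1.2},
    {"hit": False, "delay": None},   # unanswered request
    {"hit": True,  "delay": 1.0},
]

# Hit rate: fraction of requests served by an original file or a replica.
hit_rate = sum(r["hit"] for r in requests) / len(requests)

# Average delay: mean completion time over requests that finished.
finished = [r["delay"] for r in requests if r["hit"]]
avg_delay = sum(finished) / len(finished)

print(hit_rate, avg_delay)  # 0.75 1.0
```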
Average Delay
Fig. 4(b) shows the average delays of the four methods in the simulation results. The average
delays follow PBDR < SAF < DAFN < DCG, the reverse of the order of the four methods on hit rate
shown in Fig. 4(a). This is because the average delay is related to the overall file availability. PBDR
has high file availability; SAF distributes every file to different nodes, while DCG only shares data
among simultaneously identified neighbour nodes, and DAFN has low file availability since all files
receive an equal amount of memory resources for replicas. PBDR has the minimum average delay in
the simulation results.
Replication Cost
Fig. 4(c) shows the replication costs of the four methods. PBDR has the lowest replication
cost, the methods following the order PBDR < DAFN < DCG < SAF. In PBDR, nodes only need
to contact the file server for the replica list, leading to the lowest cost. DCG generates a high
replication cost since, on network partitions, its members need to transfer a huge number of files
to remove duplicate replicas. In PBDR, a node tries at most K times to create a replica for each of
its files, producing a much lower replication cost than SAF and DCG. This result demonstrates the
high energy-efficiency of PBDR. Combining all the above results, we conclude that PBDR has the
highest overall file availability and efficiency compared with existing methods, and that PBDR is
effective in file sharing.
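The bounded-cost behaviour attributed to PBDR above (at most K replica-creation attempts per file) can be sketched as follows. The candidate-node list, acceptance test and K value are illustrative assumptions, not details from the paper.

```python
import random

def replicate(file_id, candidate_nodes, k=3,
              accept=lambda node: random.random() < 0.5):
    """Try at most k candidate nodes to host a replica of file_id.

    Returns (node, attempts) on success, or (None, k) after giving up,
    which bounds the per-file replication cost.
    """
    for attempt, node in enumerate(candidate_nodes[:k], start=1):
        if accept(node):
            return node, attempt
    return None, k

random.seed(1)
node, attempts = replicate("f1", ["n1", "n2", "n3", "n4"])
print(node, attempts)
```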
Replica Distribution
Fig. 4(d) shows the proportion of resources allocated to replicas in each protocol in the simulation.
PBDR presents a distribution very close to that of DAFN, followed by SAF and DCG. SAF also
presents similarity to PBDR in the replica distribution; however, the difference between PBDR and
SAF is that PBDR assigns priority to popular files and applies a priority test to files in the network,
while DAFN gives even priority to all files. Since popular files are queried more frequently, SAF still
leads to low performance in file replication.
Therefore, resources are allocated more strictly under PBDR, leading to higher efficiency, while
the other replication protocols incur higher replication costs; among the other three methods, those
that favour popular files show closer similarity to PBDR. Overall, PBDR has the best performance in
MANETs: the storage-capacity limits of file replication can be overcome despite file dynamics, the
file distribution among all the nodes in the distributed network gives better performance, and files are
distributed across the different partitions. This supports the correctness of our theoretical analysis and
the results for MANETs.
CONCLUSION
In this paper, we analyse the problem of how to allocate limited resources to replicas and
manage those resources in MANETs. While previous protocols consider only storage resources, we
also consider dynamic file additions and deletions in peer-to-peer communication in distributed
systems. The Priority Based Dynamic Replication (PBDR) technique efficiently adds and deletes file
replicas and manages the replicas over particular time intervals. The NS-2 simulator was used to
analyse the effectiveness of the PBDR technique: the hit rate is higher than that of previous protocols,
the average query delay is reduced, and the replication cost is lower than that of previous protocols.
Finally, the PBDR protocol minimizes the average response delay in MANETs.
REFERENCES:
[1] C. Siva Ram Murthy and B. S. Manoj, Ad Hoc Wireless Networks, Pearson Education, Second Edition, India, 2001.
[3] Lixin Wang, File Sharing on a Mobile Ad Hoc Network, Master Thesis, Department of Computer Science, University of Saskatchewan, Canada, 2003.
[4] Kang Chen, Maximizing P2P File Access Availability in Mobile Ad Hoc Networks through Replication for Efficient File Sharing, IEEE Transactions on Computers, Vol. 64, No. 4, April 2015.
[5] Yang Zhang et al., Balancing the Trade-Offs between Query Delay and Data Availability in MANETs, IEEE Transactions on Parallel and Distributed Systems, Vol. 23, No. 4, pp. 643-650, 2012.
[6] T. Hara, Effective replica allocation in ad hoc networks for improving data accessibility, IEEE
INFOCOM, 2001.
[7] V. Ramany and P. Bertok, Replication of location-dependent data in mobile ad hoc networks, ACM Mobile, pp. 39-46, 2008.
[8] Q. Ren, M. Dunham, and V. Kumar, Semantic caching and query processing, IEEE Transactions on Knowledge and Data Engineering, Vol. 15, No. 1, pp. 192-210, 2003.
[9] F. Sailhan and V. Issarny, Scalable service discovery for MANET, IEEE International Conference on Pervasive Computing and Communications, pp. 235-244, 2005.
[10] L. Yin and G. Cao, Supporting cooperative caching in ad hoc networks, IEEE Transactions on Mobile Computing, Vol. 5, No. 1, pp. 77-89, 2006.
[11] J. Cao, Y. Zhang, G. Cao, and L. Xie, Data consistency for cooperative caching in mobile environments, IEEE Computer, Vol. 40, No. 4, pp. 60-66, 2007.
[12] B. Tang, H. Gupta, and S. Das, Benefit-based data caching in ad hoc networks, IEEE Transactions on Mobile Computing, Vol. 7, No. 3, pp. 289-304, 2008.
[13] X. Zhuo, Q. Li, W. Gao, G. Cao, and Y. Dai, Contact Duration Aware Data Replication in Delay
Tolerant Networks, Proc. IEEE 19th Intl Conf. Network Protocols (ICNP), 2011.
[14] X. Zhuo, Q. Li, G. Cao, Y. Dai, B.K. Szymanski, and T.L. Porta, Social-Based Cooperative Caching
in DTNs: A Contact Duration Aware Approach, Proc. IEEE Eighth Intl Conf. Mobile Adhoc and
Sensor Systems (MASS), 2011.
[15] Z. Li and H. Shen, SEDUM: Exploiting Social Networks in Utility-Based Distributed Routing for DTNs, IEEE Trans. Computers, Vol. 62, No. 1, pp. 83-97, Jan. 2012.
[21] V. Gianuzzi, Data Replication Effectiveness in Mobile Ad-Hoc Networks, Proc. ACM First Intl Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN), pp. 17-22, 2004.
[16] S. Chessa and P. Maestrini, Dependable and Secure Data Storage and Retrieval in Mobile Wireless
Networks, Proc. Intl Conf. Dependable Systems and Networks (DSN), 2003.
[17] X. Chen, Data Replication Approaches for Ad Hoc Wireless Networks Satisfying Time Constraints, Intl J. Parallel, Emergent and Distributed Systems, Vol. 22, No. 3, pp. 149-161, 2007.
[18] J. Broch, D. A. Maltz, D. B. Johnson, Y. Hu, and J. G. Jetcheva, A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols, Proc. ACM MOBICOM, pp. 85-97, 1998.
[19] M. Musolesi and C. Mascolo, Designing Mobility Models Based on Social Network Theory, ACM
SIGMOBILE Mobile Computing and Comm. Rev., vol. 11, pp. 59-70, 2007.
[20] P. Costa, C. Mascolo, M. Musolesi, and G. P. Picco, Socially-Aware Routing for Publish-Subscribe in Delay-Tolerant Mobile Ad Hoc Networks, IEEE J. Selected Areas in Comm., Vol. 26, No. 5, pp. 748-760, June 2008.
I. INTRODUCTION
Hybrid cloud computing is a composition of two or more distinct agents, such as clouds (private, community or public), users and auditors, that remain separate but are bound together, offering the benefits of multiple deployment models. A hybrid cloud can also be defined as the ability to connect collocation, managed and dedicated services with attached cloud resources. Because hybrid cloud services cross isolation and provider boundaries, they cannot simply be placed in the private, public or community category. A hybrid cloud extends either the capability or the capacity of a cloud service by aggregation, assimilation or customization with a different cloud service.
A variety of use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in a warehouse on a private cloud application, but connect that application to business intelligence applications provided on a public cloud as a software service. This example of hybrid cloud storage extends the enterprise's capability to deliver specific business services through the addition of externally available public cloud services.
Yet another example of hybrid cloud computing is an IT organization that uses public cloud computing resources to meet capacity requirements that cannot be met by its private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across a number of clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. The primary advantage of cloud bursting and the hybrid cloud model is that an organization pays for extra compute resources only when they are needed.
To make data management in cloud computing efficient, data deduplication has become a well-known technique and has attracted much attention recently. Data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data in storage. Its aim is to improve storage utilization, and it can also be applied to network data transfer to reduce the number of bytes that must be sent. Instead of maintaining multiple copies of data with the same content, deduplication removes redundant data by keeping only one physical copy and referring the other redundant copies to it. Deduplication can occur at either the file level or the block level. At the file level, it eliminates duplicate copies of the same file; at the block level, it eliminates duplicate blocks of data that occur in non-identical files.
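As an illustrative sketch (not the paper's scheme), block-level deduplication can be modeled with content fingerprints; the `BlockStore` class and the 4 KB block size are our own assumptions:

```python
import hashlib

class BlockStore:
    """Illustrative block-level deduplication: each unique block is stored
    once, keyed by its SHA-256 fingerprint; files keep only block references."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # fingerprint -> block bytes (one physical copy)
        self.files = {}    # filename -> ordered list of fingerprints

    def put(self, name, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # duplicate blocks stored once
            refs.append(fp)
        self.files[name] = refs

    def get(self, name):
        return b"".join(self.blocks[fp] for fp in self.files[name])

store = BlockStore()
store.put("a.txt", b"A" * 8192)                    # two identical blocks
store.put("b.txt", b"A" * 4096 + b"B" * 4096)      # shares one block with a.txt
assert store.get("a.txt") == b"A" * 8192
print(len(store.blocks))                           # only 2 unique blocks kept
```

File-level deduplication is the degenerate case where the "block" is the entire file, so identical files collapse to a single fingerprint.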
Although data deduplication brings a number of benefits, security and privacy concerns arise because users' sensitive data are vulnerable to both inside and outside attacks. Traditional encryption, while providing data confidentiality, is incompatible with data deduplication: it requires different users to encrypt their data with their own keys, so identical data copies of different users lead to different ciphertexts, making deduplication impossible. Convergent encryption has been used to enforce data privacy while keeping the deduplication technique feasible.
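A toy sketch of the convergent-encryption idea: the key is derived from the content itself, so equal plaintexts always encrypt to equal ciphertexts, which a deduplicating store can then collapse. The hash-based keystream below is for illustration only and is not a secure cipher:

```python
import hashlib

def _keystream(key: bytes, n: bytes.__class__ = int) -> None:
    pass  # placeholder removed below; see convergent_encrypt

def convergent_encrypt(data: bytes):
    """Toy convergent encryption: key = SHA-256(content), keystream built by
    hashing key||counter. Deterministic, hence deduplication-friendly."""
    key = hashlib.sha256(data).digest()
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    cipher = bytes(a ^ b for a, b in zip(data, out))
    return key, cipher

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(cipher):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(cipher, out))

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
assert c1 == c2            # identical data -> identical ciphertext: dedupable
assert convergent_decrypt(k1, c1) == b"same file contents"
```

The price of this determinism is that an attacker who can guess a plaintext can confirm the guess from the ciphertext, which is why real deployments add server-aided key derivation (cf. DupLESS in the references).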
4. CONCLUSION
In this work, the notion of authorized data deduplication was proposed to protect data security by including differential privileges of users in the duplicate check. We also presented several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture, in which the duplicate-check tokens of files are generated by the private cloud server with private keys. We showed that our authorized duplicate check scheme incurs minimal overhead compared to convergent encryption and network transfer.
5. ACKNOWLEDGMENTS
Our thanks to the almighty god, our experts and friends who have contributed towards
development of the paper and for their innovative ideas.
6. REFERENCES
[1] Amazon, Case Studies, https://aws.amazon.com/solutions/casestudies/#backup.
[2] J. Gantz and D. Reinsel, The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east, http://www.emc.com/collateral/analyst-reports/idcthe-digital-universein-2020.pdf, Dec. 2012.
[5] M. Bellare, S. Keelveedhi, and T. Ristenpart, DupLESS: Server-aided encryption for deduplicated storage, in USENIX Security Symposium, 2013.
[6] A. D. Santis and B. Masucci, Multiple ramp schemes, IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1720-1728, Jul. 1999.
[9] M. O. Rabin, Efficient dispersal of information for security, load balancing, and fault tolerance, Journal of the ACM, vol. 36, no. 2, pp. 335-348, Apr. 1989.
[10] A. Shamir, How to share a secret, Commun. ACM, vol. 22, no. 11, pp. 612-613, 1979.
[11] J. Li, X. Chen, M. Li, J. Li, P. Lee, and W. Lou, Secure deduplication with efficient and reliable convergent key management, IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 6, pp. 1615-1625, 2014.
[12] S. Halevi, D. Harnik, B. Pinkas, and A. Shulman-Peleg, Proofs of ownership in remote storage systems, in ACM Conference on Computer and Communications Security, Y. Chen, G. Danezis, and V. Shmatikov, Eds. ACM, 2011, pp. 491-500.
[13] J. S. Plank, S. Simmerman, and C. D. Schuman, Jerasure: A library in C/C++ facilitating erasure coding for storage applications - Version 1.2, University of Tennessee, Tech. Rep. CS-08-627, August 2008.
[14] J. S. Plank and L. Xu, Optimizing Cauchy Reed-solomon Codes for fault-tolerant network storage
applications, in NCA-06: 5th IEEE International Symposium on Network Computing Applications,
Cambridge, MA, July 2006.
[15] C. Liu, Y. Gu, L. Sun, B. Yan, and D. Wang, R-ADMAD: High reliability provision for large-scale de-duplication archival storage systems, in Proceedings of the 23rd International Conference on Supercomputing, pp. 370-379.
[16] M. Li, C. Qin, P. P. C. Lee, and J. Li, Convergent dispersal: Toward storage-efficient security in a cloud-of-clouds, in The 6th USENIX Workshop on Hot Topics in Storage and File Systems, 2014.
[17] P. Anderson and L. Zhang, Fast and secure laptop backups with encrypted de-duplication, in Proc.
of USENIX LISA, 2010.
[18] Z. Wilcox-O'Hearn and B. Warner, Tahoe: the least-authority filesystem, in Proc. of ACM StorageSS, 2008.
[19] A. Rahumed, H. C. H. Chen, Y. Tang, P. P. C. Lee, and J. C. S. Lui, A secure cloud backup system with assured deletion and version control, in 3rd International Workshop on Security in Cloud Computing, 2011.
[20] M. W. Storer, K. Greenan, D. D. E. Long, and E. L. Miller, Secure data deduplication, in Proc. of
StorageSS, 2008.
[21] J. Stanek, A. Sorniotti, E. Androulaki, and L. Kencl, A secure data deduplication scheme for cloud storage, Technical Report, 2013.
[22] D. Harnik, B. Pinkas, and A. Shulman-Peleg, Side channels in cloud services: Deduplication in cloud storage, IEEE Security & Privacy, vol. 8, no. 6, pp. 40-47, 2010.
[23] R. D. Pietro and A. Sorniotti, Boosting efficiency and security in proof of ownership for deduplication, in ACM Symposium on Information, Computer and Communications Security, H. Y. Youm and Y. Won, Eds. ACM, 2012, pp. 81-82.
[24] J. Xu, E.-C. Chang, and J. Zhou, Weak leakage-resilient client-side deduplication of encrypted data in cloud storage, in ASIACCS, 2013, pp. 195-206.
[25] W. K. Ng, Y. Wen, and H. Zhu, Private data deduplication protocols in cloud storage, in Proceedings of the 27th Annual ACM Symposium on Applied Computing, S. Ossowski and P. Lecca, Eds. ACM, 2012, pp. 441-446.
ABSTRACT - Most of us would have felt the pain of dropping a mobile phone or tablet, only
to find the screen is shattered beyond recognition or use. The pain is further heightened when we
receive the huge repair bill to fix or replace the screen of our smart phone. But there are those
happy moments when we retrieve our dropped mobiles from the floor to find that the screen has
remained intact. The main objective of our project is to prevent the front screen of our phone from
breakage when it slips from our hand or falls down accidentally. This can effectively be achieved by designing a case that contains an airbag which prevents the gadget's front screen from touching the ground. The freefall is sensed by an inbuilt accelerometer, which measures the acceleration due to gravity along all three axes and detects the fall in advance. Whenever freefall is detected, the case protects the front panel of the mobile phone by popping small air balloons at all four corners of the gadget.
KEYWORDS: Free fall, accelerometer, airbag.
1.INTRODUCTION
In automobiles, a central Airbag control unit monitors a number of related sensors within the vehicle,
including accelerometers, wheel speed sensors etc. The crash is detected with the help of an accelerometer
in modern cars by measuring the change of speed [1]. If the deceleration is great enough, the accelerometer
triggers the airbag circuit. Normal braking does not generate enough force to do this. A similar concept is used to detect the crash in modern electronic gadgets like mobile phones, tablets and iPods, the only difference being that the fall must be detected prior to the crash. Airbag deployment in an automobile involves ignition that releases a harmless gas [2] such as nitrogen or argon packed behind the steering wheel, whereas in the mobile airbag compressed air is pushed through small tubes, blowing up small pop-up bags at all four corners of the mobile case. Table 1 summarizes the main differences between the car and mobile airbag.
AIRBAG IN AUTOMOBILES | AIRBAG IN MOBILE PHONES
1. Deployment takes place due to chemical reactions in the automobile system. | 1. Deployment is done with the help of a mechanical system.
2. Airbag is deployed after the accident. | 2. Airbag is deployed just before the accident.
3. Cost is high. | 3. Cost is low.
4. Crash is to be detected. | 4. Freefall should be detected.
Table 1. Differences between the car and mobile airbag
New glass such as Corning Gorilla Glass has been introduced to avoid the breakage of front panel. Tests
show that it could withstand around 100,000 pounds of pressure per square inch. It can withstand, without
shattering or cracking, a 535g ball being dropped on it from 1.8m above. The screen technology has already
made its way into the Samsung Galaxy S3 smart phone onwards [3]. How the phone or tablet falls to the
ground is the key to the shattering question. If it falls face down it might escape without too much damage
because the stress of impact is spread across the entire surface. It would almost certainly undergo damage,
which cannot be visualized with the naked eye. But if it is dropped onto one of the corners, the uneven
456
2.BLOCK DIAGRAM
The block diagram shows how the signal from the accelerometer drives the protecting device.
The components of the device include a motor, a compressor and four airbags at the corners. The signal
from the accelerometer drives the motor which in turn forces the movement of piston that pumps air into
the four small airbag structures through small tubes. Once the accelerometer senses the freefall it sends its
acceleration due to gravity values to the processor which gives the interrupt to the motor based on looping
mechanism. The value is checked in a loop for more than three times before it gives a phase shift to the
motors. Once the value exceeds the threshold value the motor is driven at a step angle of 3.6 degree until
the piston is pushed upwards. The piston contains compressed air and on pushing it with force it will cause
the compressed air to blow in all the four tubes and thus deploying the air balloon at all the four corners of
the gadget. The reason for placing the airbags at the corners is that the centre of mass is concentrated there; hence, according to Newton's second law, force is directly proportional to mass, and so the force of hitting the ground is greater at the corners.
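The looping check described above can be sketched as follows. The threshold of 3.6 m/s² comes from the text's derivation; the function name and the four-sample confirmation count are our illustrative choices (the text says the value is checked "more than three times" before the motor is driven):

```python
import math

FREEFALL_THRESHOLD = 3.6   # m/s^2, below this the phone is assumed falling
CONFIRM_SAMPLES = 4        # value checked in a loop more than three times

def is_freefall(samples):
    """Return True once the acceleration magnitude stays under the threshold
    for CONFIRM_SAMPLES consecutive readings (filters out brief shakes)."""
    streak = 0
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        streak = streak + 1 if mag < FREEFALL_THRESHOLD else 0
        if streak >= CONFIRM_SAMPLES:
            return True   # here the real device would step the motor / inflate
    return False

assert not is_freefall([(0, 0, 9.8)] * 10)           # resting: ~1 g, no trigger
assert is_freefall([(0.1, 0.1, 0.2)] * 4)            # sustained near-zero g: drop
```

In free fall all three components drop toward zero together, which is why the magnitude, not a single axis, is compared with the threshold.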
Here we briefly present the free Android application "Accelerometer Monitor ver. 1.5.0" used in our experiments. The application takes 348 kB of SD card memory and can also be downloaded from the Google Play website [10]. It shows the acceleration components ax, ay and az on the x, y and z axes at each time step. The resolution of the sensor in the measurement of acceleration is a = 0.01197 m/s² and the average sampling time is t = 0.02 s. The application also allows saving an output file, from which the data can be retrieved for further analysis. The output of the mobile application with the acceleration data is collected in an ASCII file as shown in figure 4. Probably the simplest experiment we can perform with the mobile acceleration sensor is the study of a body falling in the gravitational field of the Earth. This experiment was treated in reference [11], whose authors suspended a smartphone from a string; after cutting the string, the smartphone fell freely until reaching a soft surface that stopped its motion. From the measured fall time and the constant acceleration due to gravity during freefall, the height at which to deploy the airbag can be evaluated as follows, assuming a drop distance of 2 meters.
T = 0.628 s (fall time)
V = 6.260 m/s (impact velocity)
threshold = 3.6 (acceleration value used for freefall detection)
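Using the equations of motion for a free fall from rest, h = ½gt² and v = gt, the fall time and impact speed for the assumed 2 m drop can be checked; the results are close to the T and V values quoted above:

```python
import math

g = 9.8          # m/s^2, acceleration due to gravity
h = 2.0          # assumed drop height in metres, as in the text

t = math.sqrt(2 * h / g)   # time to fall: from h = (1/2) g t^2
v = g * t                  # impact speed: v = g t

print(f"fall time    t = {t:.3f} s")     # ~0.639 s
print(f"impact speed v = {v:.3f} m/s")   # ~6.26 m/s
```

The small difference from the quoted 0.628 s suggests slightly different constants or a slightly smaller effective height in the original calculation.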
Initially, when the code is loaded onto the Android device, the app presents three buttons: start/reset, stop and print.
With the device held in the hand, after pressing the reset button the value equals 9.8, the acceleration due to gravity.
When the phone is in free fall the value decreases according to the distance, and the variation occurs in the Z direction.
When the print button is pressed, all the accelerometer values are printed in LOGCAT.
Using the equations of motion, we derived that below a threshold value of 3.6 the mobile is in free fall; once this value is crossed, a beep sound is produced as an indication.
The above threshold value is found using the laws of motion, from which the following equations are calculated.
STEP 1
Initially, once the code is loaded onto the Android device, the screen appears as follows. It has three states: START, STOP and PRINT. The start section starts recording the acceleration due to gravity values, the stop section stops recording them, and the print section displays all the values recorded between start and stop in the Eclipse working window.
STEP 2
Once there is a movement of the device, the acceleration due to gravity is sensed and displayed as follows. Until the stop button is pressed, the value keeps changing in the Z direction. The values will be close to 9.8 m/s², the standard value of the acceleration due to gravity.
STEP 3
During the motion of the device towards the ground, the different values are sensed and the threshold is calculated using the laws of motion. The value displayed on the screen is the minimum acceleration experienced by the device during its freefall. This value justifies our chosen threshold: any value below 3.5 in the S-factor denotes freefall.
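The log contents and one-value-per-line format below are hypothetical, mimicking the printed LOGCAT dump; the check mirrors the rule in Step 3 that a minimum magnitude below the threshold denotes freefall:

```python
# Hypothetical LOGCAT-style dump: one acceleration magnitude (m/s^2) per line.
log = """9.81
9.79
6.10
3.40
2.95
3.10
9.80"""

THRESHOLD = 3.5  # freefall threshold from Step 3

values = [float(line) for line in log.splitlines()]
minimum = min(values)                       # lowest acceleration in the record
print(f"minimum acceleration seen: {minimum} m/s^2")
if minimum < THRESHOLD:
    print("freefall detected -> beep")      # Step 4 would print all values
```

Scanning the saved record for its minimum is how the threshold was validated offline before being hard-coded into the detection loop.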
STEP 4
This step of the project deals with the print button in our app; it is used to display the different acceleration due to gravity values experienced by the gadget between the start and the stop sections.
http://www.explainthatstuff.com/airbags.html
[3] http://www.corninggorillaglass.com/en/products-with-gorilla/samsung/samsung-galaxy-siii
[4] http://thetechjournal.com/tag/gorilla-glass
[5] https://www.apple.com/support/iphone/repair/screen-damage/
[6] http://m.gadgets.ndtv.com/motorola-google-nexus-6-2060
[7] http://m.gsmarena.com/samsung_galaxy_s6_edge-7079.php
[8] http://www.physicsclassroom.com/class/1Dkin/u1l5a
[9] J. A. Monsoriu, M. H. Gimenez, E. Ballester, L. M. Sanchez Ruiz, J. C. Castro-Palacio and L. Velazquez-Ahad, Smartphone acceleration sensors in undergraduate physics experiments
ABSTRACT- Voltage source converter based static synchronous compensators (STATCOMs) are used in transmission and distribution lines for voltage regulation and reactive power compensation. Nowadays angle-controlled STATCOMs are deployed in utilities to improve output voltage waveform quality with lower losses compared to PWM STATCOMs. Although the angle-controlled STATCOM has many advantages, its operation suffers when unbalanced and fault conditions occur in transmission and distribution lines. This paper presents a dual angle control strategy for the STATCOM to overcome the drawbacks of the conventional angle-controlled and PWM-controlled STATCOMs. The approach does not completely change the design of the conventional angle-controlled STATCOM; instead it adds only an AC oscillation (αac) to the conventional angle controller output (αdc) to make it dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM.
Index terms- Dual angle control (DAC),hysteresis controller, STATCOM.
I. INTRODUCTION
Many devices are used in power systems for voltage regulation, reactive power compensation and power factor correction [1]. The voltage source converter (VSC) based STATCOM is one of the most widely used devices in large transmission and distribution systems for voltage regulation and reactive power compensation. Nowadays angle-controlled STATCOMs are deployed in utilities to improve output voltage waveform quality with lower losses compared to PWM STATCOMs. The first commercially implemented installation was the 100-MVAr STATCOM at the TVA Sullivan substation, followed by the New York Power Authority installation at the Marcy substation in New York State [13], [16]. The 150-MVA STATCOM at the Laredo and Brownsville substations in Texas, the 160-MVA STATCOM at the Inez substation in Eastern Kentucky, the 43-MVA PG&E Santa Cruz STATCOM and the 40-MVA KEPCO (Korea Electric Power Corporation) STATCOM at the Kangjin substation in South Korea are a few examples of commercially implemented and operating angle-controlled STATCOMs worldwide.
Although the angle-controlled STATCOM has many advantages compared to other STATCOMs, its operation suffers from overcurrent and possible saturation of the interfacing transformers caused by the negative sequence that arises during unbalanced and fault conditions in transmission and distribution lines [4]. This paper presents a dual angle control strategy for the STATCOM to overcome the drawbacks of the conventional angle-controlled and PWM-controlled STATCOMs [2]. The approach does not completely change the design of the conventional angle-controlled STATCOM; instead it adds only an AC oscillation (αac) to the conventional angle controller output (αdc), making it dual angle controlled. Hence the STATCOM is called a dual angle controlled (DAC) STATCOM. The angle-controlled STATCOM has fewer degrees of freedom than the PWM STATCOM, but it is widely used because its output voltage waveform quality is higher.
This paper presents a new control structure for the high-power angle-controlled STATCOM. The only control input of the angle-controlled STATCOM is α, the phase difference between the VSC and ac bus instantaneous voltage vectors. In the proposed control structure, α is split into two parts, αdc and αac. The DC part αdc, which is the final output of the conventional angle controller, is in charge of controlling the positive sequence VSC output voltage; the oscillating part αac controls the dc link voltage oscillations. The proposed STATCOM can operate under fault conditions and is able to clear the faults and unbalances that occur in transmission and distribution lines.
In this paper, we have implemented a new control structure in the STATCOM that has the ability to clear disturbances such as sag and swell and other types of faults that appear in power systems. The analysis of the
The second type is the angle-controlled STATCOM. Here, by changing the output voltage angle of the STATCOM relative to the line voltage angle for a particular time, the inverter can provide both inductive and capacitive reactive power.
By controlling α in the positive and negative directions and thereby varying the dc link voltage, the final output voltage of the voltage source converter (VSC) can be increased or decreased [2]. Here the ratio between the dc and ac voltage in the STATCOM should be kept constant. If the final output voltage of the STATCOM is greater than the line voltage, the STATCOM injects reactive power into the line; if its output voltage is lower than the line voltage, it absorbs reactive power from the line. Throughout this paper the performance of the proposed control structure is demonstrated by MATLAB simulations.
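This voltage-difference rule can be illustrated with the standard steady-state approximation for a STATCOM behind a purely reactive tie (negligible phase angle); the function name and per-unit figures are our own:

```python
def statcom_q(e_vsc, v_line, x_tie):
    """Reactive power delivered by the STATCOM to the line (per unit,
    phase angle ~ 0): Q = V * (E - V) / X.
    Positive -> capacitive operation (vars injected into the line),
    negative -> inductive operation (vars absorbed from the line)."""
    return v_line * (e_vsc - v_line) / x_tie

print(statcom_q(1.05, 1.00, 0.1))   # E > V: positive, ~0.5 pu injected
print(statcom_q(0.95, 1.00, 0.1))   # E < V: negative, ~0.5 pu absorbed
```

The sign of E - V alone decides the direction of reactive power exchange, which is why varying the dc link voltage (and hence E) steers the STATCOM between capacitive and inductive operation.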
III. ANGLE CONTROLLED STATCOM UNDER UNBALANCED CONDITIONS
The VSC is the basic building block of all conventional and angle-controlled STATCOMs. Therefore, studying how to improve the performance of the VSC under unbalanced and fault conditions is important and practical. Many methods have been proposed in the literature for improving the performance of voltage source converters, but not all of them are applicable to the angle-controlled STATCOM, which has only one control input, the angle α.
This scheme protects the switches and limits the STATCOM current under fault conditions. However, dc-link voltage oscillations occur in this method and can cause the STATCOM to trip, and the injection of poor-quality voltage and current waveforms into a faulted power system produces undesirable stress on the power system components [7].
IV. ANALYSIS OF STATCOM UNDER UNBALANCED OPERATING CONDITIONS
In this method a set of unbalanced three-phase phasors is split into symmetrical positive and negative sequence components and a zero sequence component. The line currents in the three phases of the system are represented by equations 1-4 mentioned below.
In the angle-controlled STATCOM the single control input angle must be applied identically to all three phases of the inverter. The zero sequence components can be neglected because there is no path for neutral current flow in the three-phase line. The switching function of an angle-controlled STATCOM must always be symmetric; the switching functions for the three phases a, b and c are represented by equations 5-7 mentioned below.
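The sequence decomposition just described (equations 1-7 are not reproduced in this excerpt) follows the standard Fortescue symmetrical-components transform, which can be sketched as:

```python
import cmath

a = cmath.exp(2j * cmath.pi / 3)   # Fortescue operator: 120 degree rotation

def sequence_components(ia, ib, ic):
    """Split three unbalanced phasors into zero, positive and negative
    sequence components (standard Fortescue transform)."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a * a * ic) / 3
    i2 = (ia + a * a * ib + a * ic) / 3
    return i0, i1, i2

# Balanced positive-sequence set: only the positive sequence survives.
ia = 1 + 0j
ib = a * a * ia     # phase b lags phase a by 120 degrees
ic = a * ia         # phase c leads phase a by 120 degrees
i0, i1, i2 = sequence_components(ia, ib, ic)
print(abs(i0), abs(i1), abs(i2))   # ~0, 1, ~0
```

Under an unbalanced fault the i2 term becomes nonzero; it is exactly this negative sequence component that the dual angle control strategy below seeks to attenuate.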
Basically, the unbalanced system can be analysed by postulating a set of negative sequence voltage sources connected in series with the STATCOM tie line. The main idea of the dual angle control strategy is to generate a fundamental negative sequence voltage vector at the VSC output terminals to attenuate the effect of the negative sequence bus voltage. The generated negative sequence voltage minimizes the negative sequence current produced in the STATCOM under fault conditions. A third harmonic voltage is produced at the VSC output terminals because of the interaction between the dc link voltage second harmonic oscillations and the switching function. This third harmonic voltage is positive sequence and contains phases a, b and c that are 120° apart. The negative sequence current produced under unbalanced ac system conditions generates second harmonic oscillations on the dc link voltage, which are reflected as a third harmonic voltage at the VSC output terminals and a fundamental negative sequence voltage. As with the fundamental negative sequence voltage, the dc link voltage oscillations decide the amplitude of the second harmonic voltage [3]. By controlling the second harmonic oscillations on the dc link voltage, the negative sequence current can be reduced; a decreased negative sequence current reduces the dc link voltage oscillations, and reducing the dc link voltage second harmonic reduces the third harmonic voltage and current at the STATCOM tie line [12]. The control analysis of the STATCOM under fault conditions is done in MATLAB.
V. PROPOSED CONTROL STRUCTURE DEVELOPMENT
As discussed in the previous section, the STATCOM voltage and current during unbalanced conditions are calculated by connecting a set of negative sequence voltages in series with the STATCOM tie line, as shown in Fig.
The derivatives of the STATCOM tie line negative sequence currents with respect to time are given by equations 14-16 mentioned below.
In the proposed structure, the angle α is divided into two parts, αdc and αac. The angle αdc is the output of the positive sequence controller and αac is the output of the negative sequence controller. The angle αac contains the second harmonic oscillations that generate the negative sequence voltage vector at the VSC output terminals to attenuate the effect of the negative sequence bus voltage under fault conditions. The αac component should be properly filtered; otherwise it leads to higher-order harmonics on the ac side.
Here the voltage suddenly decreases in a particular time interval due to a sudden change in the load. When the load connected to the system does not remain constant, the line current and voltage do not remain constant either. During a fault, the grid current and voltage vary, so the STATCOM can be used to maintain the voltage. Voltage is an important protection parameter because excessive voltage can damage the insulation of transmission and protection devices.
Here, the reduction in voltage amplitude is observed because of the sudden change in load, owing to the inverse relationship between voltage and current in normal power systems. The sudden increase of load is achieved by connecting a load to the grid through a switch; by giving a time sequence to the switch, we can connect and disconnect the load automatically for a particular time interval.
Here, the voltage is maintained at a constant value due to the reactive power compensation by the STATCOM, which supplies current at a leading angle to the line voltage. The STATCOM is connected to the grid by means of a switch.
VII. CONCLUSION
This paper proposed a new control structure to improve the performance of the conventional angle-controlled STATCOM under unbalanced and fault conditions on the transmission line. The method does not completely redesign the STATCOM; instead it adds only an AC oscillation to the output of the conventional angle controller. This AC oscillation generates a negative sequence voltage at the VSC output terminals that attenuates the effect of the negative sequence bus voltage generated at the line terminals during fault conditions.
1. C. Schauder and H. Mehta, Vector analysis and control of advanced static VAR compensators, Proc. Inst. Elect. Eng. C, vol. 140, pp. 299-306, Jul. 1993.
2. H. Song and K. Nam, Dual current control scheme for PWM converter under unbalanced input voltage conditions, IEEE Trans. Ind. Electron., vol. 46, no. 5, pp. 953-959, Oct. 1999.
3. A. Yazdani and R. Iravani, A unified dynamic model and control for the voltage-sourced converter under unbalanced grid conditions, IEEE Trans. Power Del., vol. 21, no. 3, pp. 1620-1629, Jul. 2006.
4. Z. Xi and S. Bhattacharya, STATCOM operation strategy with saturable transformer under three-phase power system fault, in Proc. IEEE Ind. Electron. Soc. Conf., 2007, pp. 1720-1725.
5. M. Guan and Z. Xu, Modeling and control of a modular multilevel converter-based HVDC system under unbalanced grid conditions, IEEE Trans. Power Electron., vol. 27, no. 12, pp. 4858-4867, Dec. 2012.
6. Z. Yao, P. Kesimpar, V. Donescu, N. Uchevin and V. Rajagopalan, Nonlinear control for STATCOM based on differential algebra, 29th Annual IEEE Power Electronics Conference, Fukuoka, vol. 1, pp. 329-334, 1998.
7. F. Liu, S. Mei, Q. Lu, Y. Ni, F. F. Wu and A. Yokoyama, The nonlinear internal control of STATCOM: Theory and application, International Journal of Electrical Power & Energy Systems, vol. 25, pp. 421-430, 2003.
8.
9. N. G. Hingorani and L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems. Piscataway, NJ: IEEE Press, 1999.
10. P. Rao et al., STATCOM control for power system voltage control applications, IEEE Trans. Power Del., vol. 15, no. 4, pp. 1311-1317, Oct. 2000.
11. D. Soto and R. Pena, Nonlinear control strategies for cascaded multilevel STATCOMs, IEEE Trans. Power Del., vol. 19, no. 4, pp. 1919–1927, Oct. 2004.
12. T. Aziz, M. J. Hossain, T. K. Saha, and N. Mithulananthan, VAR planning with tuning of STATCOM in a DG integrated industrial system, IEEE Trans. Power Del., vol. 28, no. 2, Apr. 2013.
13. S. Bhattacharya, B. Fardanesh, and B. Shperling, Convertible static compensator: Voltage source converter based FACTS application in the New York 345 kV transmission system, presented at the 5th Int. Power Electron. Conf., Niigata, Japan, Apr. 2005.
14. P. N. Enjeti and S. A. Choudhury, A new control strategy to improve the performance of a PWM AC to DC converter under unbalanced operating conditions, IEEE Trans. Power Electron., vol. 8, no. 4, pp. 493–500, Oct. 1993.
15. P. W. Lehn and R. Iravani, Experimental evaluation of STATCOM closed loop dynamics, IEEE Trans. Power Del., vol. 13, no. 4, pp. 1378–1384, Oct. 1998.
16.
17. K. Sundararaju, A. Nirmal Kumar, S. Jeeva, and A. Nandhakumar, Performance analysis and location identification of STATCOM on IEEE-14 bus using power flow analysis, Journal of Theoretical & Applied Information Technology, vol. 60, no. 2, pp. 365–371, Feb. 2014.
18. K. Sundararaju and A. Nirmal Kumar, Cascaded control of multilevel converter based STATCOM for power system compensation of load variation, International Journal of Computer Applications (0975-8887), vol. 40, no. 5, Feb. 2012.
19. K. Sundararaju and A. Nirmal Kumar, Performance analysis of STATCOM in real time power system, in Proc. Int. Conf. Advances in Electrical Engineering (ICAEE), 2014.
Murugan M
Associate Professor, Dept. of EEE, K.S.Rangasamy College of Engineering
Abstract: A novel four-switch three-phase (FSTP) inverter is proposed to reduce the cost, complexity, mass, and switching losses of the DC-AC conversion system. In the conventional FSTP inverter, the output line voltage cannot exceed half the input voltage, so the inverter effectively operates at half the DC input voltage. This paper proposes a novel design for the FSTP inverter based on the single-ended primary-inductance converter (SEPIC). In the proposed topology, no output filter is required to obtain a pure sinusoidal output voltage. Compared with the conventional FSTP inverter, the proposed FSTP SEPIC inverter improves the voltage utilization factor of the input DC supply: the output line voltage can be extended up to the full value of the DC input voltage. An integral sliding-mode (ISM) controller is used to enhance the dynamics of the proposed topology and to ensure robustness of the system under different operating conditions. A simulation model and results validate the proposed concept, and the simulation results show the effectiveness of the proposed inverter.
I. INTRODUCTION
The conventional six-switch three-phase (SSTP) voltage source inverter shown in Fig. 1 has found widespread industrial application in different forms such as lifts, cranes, conveyors, motor drives, renewable energy conversion systems, and active power filters. However, in some low-power applications, reduced-switch-count inverter topologies are considered to alleviate the volume, losses, and cost.
Some research efforts have been directed to develop inverter topologies that can achieve the aforesaid goal. The results show that it is possible to implement a three-phase inverter using only four switches [1]. In the four-switch three-phase (FSTP) inverter, two of the output load phases are supplied from the two inverter legs, while the third load phase is connected to the midpoint of the DC-link split-capacitor bank. Recently, the FSTP inverter has attracted attention with respect to its performance, control, and applications [2]-[17].
Compared to the conventional SSTP inverter, the FSTP inverter has several benefits: cost is reduced and reliability increased due to the reduced number of switches; conduction and switching losses are reduced by one third, since one complete leg is omitted; and fewer interface circuits are needed to supply PWM signals to the switches. The FSTP inverter can also be operated under fault-tolerant control to address open/short-circuit faults of the SSTP inverter [2], [8], [10].
In order to improve the power quality at the AC mains, a bridgeless buck-boost converter is used as a front-end rectifier. The single-phase AC supply is fed to the bridgeless buck-boost converter through an LC filter. The switching pulses for the rectifier are generated from the error in the DC-link voltage. The reference DC-link voltage is derived from the reference speed value, and the speed of the motor is controlled by varying the DC-link voltage.
Fig. 3 A basic approach to achieve DC-AC conversion with four switches using two SEPIC DC-DC
converters (a) reference output voltage of the first converter, (b) reference output voltage of the second
converter.
As shown in Fig. 4, the bi-directional SEPIC converter includes the DC input voltage Vdc, input inductor L1, two complementary power switches S1 and S1', transfer capacitor C1, output inductor L2, and output capacitor C2 feeding a load resistance R0. The core of SEPIC operation is charging the inductors L1 and L2 during the ON state of the switching period, taking energy from the input source and from the transfer capacitor C1 respectively, and discharging them simultaneously into the load through the switch S1' during the OFF state of the switching period. Depending on the duty cycle, the output voltage of the SEPIC DC-DC converter may be less than or greater than the input voltage. The output-input voltage relation is given by the following equation.
The output-to-input voltage relation of the SEPIC converter is V0/Vin = D/(1 − D), where D is the duty cycle, and V0 and Vin are the output and input voltages of the converter, respectively.
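As a quick numerical illustration of the ideal CCM conversion ratio V0 = Vin · D/(1 − D), the following sketch (function names are ours, not the paper's) evaluates the relation and its inverse:

```python
# Hedged sketch: the ideal SEPIC conversion ratio V0/Vin = D/(1 - D)
# described in the text. Function names are illustrative, not from the paper.

def sepic_output_voltage(v_in: float, duty: float) -> float:
    """Ideal CCM SEPIC output voltage for a given duty cycle D (0 < D < 1)."""
    if not 0.0 < duty < 1.0:
        raise ValueError("duty cycle must lie strictly between 0 and 1")
    return v_in * duty / (1.0 - duty)

def duty_for_output(v_in: float, v_out: float) -> float:
    """Duty cycle needed to produce v_out from v_in (inverse relation)."""
    return v_out / (v_in + v_out)

# D = 0.5 gives unity gain; D > 0.5 steps up, D < 0.5 steps down.
print(sepic_output_voltage(100.0, 0.5))   # 100.0
print(sepic_output_voltage(100.0, 0.75))  # 300.0
print(duty_for_output(100.0, 100.0))      # 0.5
```

This buck-boost capability (gain below or above unity around D = 0.5) is what allows each SEPIC stage to synthesize an output reaching the full DC input voltage.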
The reference voltage of each converter with respect to ground provides the sinusoidal modulation of that SEPIC converter and is given by
where ω is the desired radian frequency and Vm(L-L) is the peak of the desired line-to-line output voltage. Thus, based on Kirchhoff's voltage law in Fig. 5, the output line voltages across the load are given by:
The FSTP SEPIC inverter can give an output line voltage up to a value equal to the DC input voltage VDC, as indicated by equation (2). To avoid operating at zero duty cycle, it is recommended to set Vm(L-L) slightly lower than the value of the input DC voltage (i.e., the minimum duty cycle is selected to be slightly higher than zero).
where Im is the peak value of the load current, and φ is the phase angle of the load impedance ZL.
The input inductor current of each SEPIC converter can be obtained by applying the energy balance principle to that converter. Assuming ideal converters, the input inductor currents for both converters are given by,
Equation (7) shows that the average values of both input inductor currents are equal only for a purely resistive load (unity power factor); in this case, both SEPIC converters transfer the same amount of power to the load side. Otherwise, the average currents are unequal (according to equation (7)), i.e., the SEPIC converters transfer different amounts of power to the load side.
In the proposed inverter topology, the DC input current iDC(t) is equal to the sum of the load current drawn by phase A, iA(t), and the input inductor currents of both SEPIC converters, iL1B(t) and iL1C(t), as follows:
where iA(t) is the load current of phase A as described in equation (6), which is drawn directly from the DC input source. Substituting equation (6) into (9), the DC supply current can be written in the following form:
Equation (9) shows that the DC supply current drawn by the proposed inverter topology is constant.
For a line-to-line voltage peak of 86.66% of the DC input voltage, the normalized load current drawn by phase A, iA(t)/Im, and the normalized input inductor currents of the two SEPIC converters, iL1B(t)/Im and iL1C(t)/Im, are shown.
Fig. 6. SEPIC equivalent circuit for (a) switch ON and (b) switch OFF.
The reason for choosing iL1 instead of iL2 is to allow the sliding surface to directly control the input of each converter in addition to its output, which is more stable than the other cases.
At an extremely high switching frequency, the sliding-mode controller will ensure that both the input inductor current and the output capacitor voltage exactly follow their instantaneous references iL1ref and VC2ref, respectively. However, in the case of fixed-frequency or finite-frequency sliding-mode controllers, the control is unsatisfactory: steady-state errors occur in both the inductor current and the output capacitor voltage. A good method for overcoming these errors is to introduce an additional integral term of the state variables into the sliding surface. Therefore, an integral term of these errors is introduced into the sliding-mode controller as an additional controlled state variable to reduce the steady-state errors. This is commonly known as integral sliding-mode control (ISMC), and the sliding surface is selected as specified by equation (12):
where a1, a2, and a3 represent the desired control parameters, denoted sliding coefficients, while e1, e2, and e3 are expressed as:
To obtain the dynamic model, the SEPIC state-space models under CCM are substituted into the time derivative,
where the time derivatives of the three state errors are given by:
where D is the equivalent control signal, denoting the duty cycle of the converter, which can be formulated using the invariance conditions by setting the time derivative of the sliding surface to zero as follows:
Substituting the SEPIC state-space models under CCM into the time derivative of (19) gives the dynamical model of the system as:
The equivalent control signal, deduced by setting the time derivative of (20) to zero, is:
Fig. 9 Performance of the FSTP SEPIC inverter under normal operating conditions. (a) Output capacitor
voltage of both SEPIC converters. (b) Three phase output line voltages. (c) Input inductor current of both
SEPIC converters. (d) DC supply current.
Fig. 10 Step response of the FSTP SEPIC inverter. (a) Load voltage and load current for a step change of
the reference load voltage from 50 to 100% with doubled frequency. (b) Load voltage and load current for
a load step change from 50 to 100%.
V. CONCLUSIONS
A DC-AC four-switch three-phase SEPIC-based inverter is proposed in this paper. The proposed inverter improves the utilization of the DC bus by a factor of two compared to the conventional four-switch three-phase voltage source inverter. Moreover, without the need for an output filter, it can produce a pure sinusoidal three-phase output voltage. Unlike the conventional four-switch three-phase inverter, the proposed inverter does not suffer from voltage fluctuation across the DC-link split capacitors, and the third phase load current is drawn directly from the DC source without circulating through any passive component. A sliding-mode controller was designed and applied to the reduced second-order model of the SEPIC DC-DC converter. Simulation results verified the performance of the proposed inverter.
REFERENCES
[1] H. W. V. D. Broeck and J. D. V. Wyk, A comparative investigation of a three-phase induction machine drive with a component minimized voltage-fed inverter under different control options, IEEE Trans. Ind. Appl., vol. IA-20, no. 2, pp. 309–320, Mar. 1984.
[3] M. N. Uddin, T. S. Radwan, and M. A. Rahman, Fuzzy-logic-controller-based cost-effective four-switch three-phase inverter-fed IPM synchronous motor drive system, IEEE Trans. Ind. Appl., vol. 42, no. 1, pp. 21–30, Jan./Feb. 2006.
[4] C.-T. Lin, C.-W. Hung, and C.-W. Liu, Position sensorless control for four-switch three-phase brushless DC motor drives, IEEE Trans. Power Electron., vol. 23, no. 1, pp. 438–444, Jan. 2008.
[5] J. Kim, J. Hong, and K. Nam, A current distortion compensation scheme for four-switch inverters, IEEE Trans. Power Electron., vol. 24, no. 4, pp. 1032–1040, Apr. 2009.
[6] C. Xia, Z. Li, and T. Shi, A control strategy for four-switch three-phase brushless DC motor using single current sensor, IEEE Trans. Ind. Electron., vol. 56, no. 6, pp. 2058–2066, Jun. 2009.
[7]
[8] K. D. Hoang, Z. Q. Zhu, and M. P. Foster, Influence and compensation of inverter voltage drop in direct torque-controlled four-switch three-phase PM brushless AC drives, IEEE Trans. Power Electron., vol. 26, no. 8, pp. 2343–2357, Aug. 2011.
[9] T.-S. Lee and J.-H. Liu, Modeling and control of a three-phase four-switch PWM voltage-source rectifier in d-q synchronous frame, IEEE Trans. Power Electron., vol. 26, no. 9, pp. 2476–2489, Sept. 2011.
[10] R. Wang, J. Zhao, and Y. Liu, A comprehensive investigation of four-switch three-phase voltage source inverter based on double Fourier integral analysis, IEEE Trans. Power Electron., vol. 26, no. 10, pp. 2774–2787, Oct. 2011.
[11] W. Wang, A. Luo, X. Xu, L. Fang, T. M. Chau, and Z. Li, Space vector pulse-width modulation algorithm and DC-side voltage control strategy of three-phase four-switch active power filters, IET Power Electronics, vol. 6, no. 1, pp. 125–135, Jan. 2013.
[12] M. Narimani and G. Moschopoulos, A method to reduce zero-sequence circulating current in three-phase multi-module VSIs with reduced switch count, in Proc. IEEE Applied Power Electronics Conference and Exposition (APEC), pp. 496–501, Mar. 2013.
[13] X. Tan, Q. Li, H. Wang, L. Cao, and S. Han, Variable parameter pulse width modulation-based current tracking technology applied to four-switch three-phase shunt active power filter, IET Power Electronics, vol. 6, no. 3, pp. 543–553, Mar. 2013.
[14] C. Xia, Y. Xiao, W. Chen, and T. Shi, Three effective vectors-based current control scheme for four-switch three-phase trapezoidal brushless DC motor, IET Electric Power Applications, vol. 7, no. 7, pp. 566–574, Aug. 2013.
[15] M. Masmoudi, B. El Badsi, and A. Masmoudi, DTC of B4-inverter-fed BLDC motor drives with reduced torque ripple during sector-to-sector commutations, IEEE Trans. Power Electron., vol. 29, no. 9, pp. 4855–4865, Sept. 2014.
[16] S. Dasgupta, S. N. Mohan, S. K. Sahoo, and S. K. Panda, Application of four-switch-based three-phase grid-connected inverter to connect renewable energy source to a generalized unbalanced microgrid system, IEEE Trans. Ind. Electron., vol. 60, no. 3, pp. 1204–1215, Mar. 2013.
[17] B. El Badsi, B. Bouzidi, A. Masmoudi, DTC Scheme for a Four-Switch Inverter-Fed Induction
Motor Emulating the Six-Switch Inverter Operation, IEEE Trans. Power Electron., vol.28, no.7,
pp.3528-3538, July 2013.
[18] R. Wang, J. Zhao, and Y. Liu, DC-link capacitor voltage fluctuation analysis of four-switch three-phase inverter, in Conf. Rec. IECON, 2011, pp. 1276–1281.
[19] M. Veerachary, Power tracking for nonlinear PV sources with coupled inductor SEPIC converter,
Ms. K. Kavitha
Final Year, Electronics and Communication Engineering,
M. Kumarasamy College of Engineering, Karur. Keerthimahi95@gmail.com
Abstract: Dynamic clustering algorithms change the composition of clusters periodically. In this paper we consider two well-known dynamic clustering algorithms, the full-search clustering algorithm (FSCA) and the greedy-search clustering algorithm (GSCA), with new parameters chosen to maximize the performance of the coordinated communication system. Simulation results show that the MAX-CG clustering algorithm improves both the average user rate and the edge user rate, while the IW clustering algorithm improves the edge user rate and reduces the complexity to only half that of the existing algorithm.
I. INTRODUCTION
In the uplink of a communication system, the base station (BS) simultaneously receives low-intensity signals from cell-edge users and signals from users at the edge of adjacent cells. In the downlink, a user receives signals from the BS in its own cell and signals of similar power from the BSs of adjacent cells. The received signals from other cells act as interference and cause performance degradation; both the capacity and the data rate are reduced by this inter-cell interference (ICI) [1]. In the past, the fractional frequency reuse (FFR) scheme, a simple ICI reduction technique, was used to achieve the required performance in interference-limited environments. Since the FFR scheme increases performance at the cell edge but degrades the overall cell throughput, a coordinated system was proposed to overcome this weakness. Techniques for ICI mitigation and performance enhancement by sharing the full channel state information (CSI) and transmit data were also studied in [2]. However, these techniques are difficult to implement in a practical communication system because of the large amount of information to be shared between BSs. Instead of the impractical scenario that requires full CSI and transmit-data sharing across the whole network, a clustering algorithm has been applied to practical communication systems by configuring clusters that share full CSI among a limited number of cells. Clustering algorithms are classified into two types: static and dynamic. A dynamic clustering algorithm to avoid ICI was developed whose objective is to minimize the performance degradation of the overall network while also improving the performance of cell-edge users. A clustering algorithm for sum-rate maximization using greedy search was proposed to improve the sum rate without guaranteeing the cell-edge users' data rate. However, when the size of the whole network is large, the complexity of the algorithm increases rapidly; if the complexity is too large, the processing speed cannot adapt to the changes of the channels [3]. The purpose of coordinated communication is to minimize the inter-cell interference to cell-edge users and to improve their performance. When the clusters are not properly configured, the performance of the cell-edge users is further degraded. Even though the existing algorithm improves the overall data rate, it does not consider the goal of coordinated communication: improving the performance of cell-edge users.
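As a rough illustration of the difference between full search and greedy search for forming BS clusters, the following sketch uses an assumed pairwise coordination-gain metric; it is not the paper's system model, and all names and numbers are illustrative:

```python
from itertools import combinations

# Hedged sketch contrasting full search (FSCA-style) and greedy search
# (GSCA-style) for forming a BS cluster of fixed size. The metric is an
# assumed sum of pairwise "coordination gains", not the paper's rate model.

def cluster_metric(cluster, gain):
    """Assumed metric: total pairwise coordination gain inside a cluster."""
    return sum(gain[a][b] for a, b in combinations(sorted(cluster), 2))

def full_search(cells, size, gain):
    """Examine every candidate cluster: optimal but exponential in |cells|."""
    return max(combinations(cells, size), key=lambda c: cluster_metric(c, gain))

def greedy_search(cells, size, gain, seed):
    """Grow a cluster from a seed cell, adding the best remaining cell each step."""
    cluster = {seed}
    while len(cluster) < size:
        best = max((c for c in cells if c not in cluster),
                   key=lambda c: cluster_metric(cluster | {c}, gain))
        cluster.add(best)
    return tuple(sorted(cluster))

# Symmetric pairwise gains between 4 cells (illustrative numbers).
gain = [[0, 5, 1, 2], [5, 0, 1, 4], [1, 1, 0, 3], [2, 4, 3, 0]]
cells = range(4)
print(full_search(cells, 3, gain))       # exhaustive best 3-cell cluster
print(greedy_search(cells, 3, gain, 0))  # far fewer metric evaluations
```

Full search evaluates every size-k subset, which explodes as the network grows; greedy search evaluates only a handful of candidates per step, which is the complexity trade-off the text describes.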
FLOW CHART
[2] H. Zhang and H. Dai, Cochannel interference mitigation and cooperative processing in downlink multicell multiuser MIMO networks, EURASIP Journal on Wireless Communications and Networking, Jul. 2004.
[3]
[4] S. Kaviani and W. A. Krzymien, Sum rate maximization of MIMO broadcast channels with coordination of base stations, in Proc. IEEE WCNC, 2008.
[5] J. Zhang, R. Chen, J. G. Andrews, A. Ghosh, and R. W. Heath, Networked MIMO with clustered linear precoding, IEEE Trans. Wireless Commun., vol. 8, no. 4, Apr. 2009.
[6]
[7] B. O. Lee, H. W. Je, I. Sohn, O. Shin, and K. B. Lee, Interference-aware decentralized precoding for multicell MIMO TDD systems, in Proc. IEEE GLOBECOM, 2008.
[8] SEECH: Secure and Energy Efficient Centralized Routing Protocol for Hierarchical WSN, International Journal of Engineering Research and Development (e-ISSN: 2278-067X, p-ISSN: 2278-800X), vol. 2, Aug. 2012.
Abstract: Domino logic is widely used in many applications for its high performance and low area overhead. As technology is scaled down, the supply voltage is reduced for low power, and the threshold voltage is also reduced to achieve high performance. However, lowering the threshold voltage leads to an exponential increase of the sub-threshold leakage current, so reducing power dissipation has become an important objective in the design of digital circuits. The proposed technique uses an analog current-mirror circuit and is based on comparing the mirrored current of the pull-up network with its worst-case leakage current. This technique reduces the parasitic capacitance on the dynamic node, yielding a smaller keeper for wide fan-in gates and enabling high-speed, robust circuits. Thus both the delay and the power consumption of the wide fan-in domino circuit are reduced. The leakage current is further reduced by exploiting a footer transistor in the diode configuration. The domino gate uses an analog current mirror to replicate the leakage current of the dynamic gate's pull-down stack, tracking process, voltage, and temperature variations via the stacking effect; this improves the performance of the current-mirror circuit and reduces the leakage current.
Keywords: OR gate, domino logic, current mirror, leakage current
I. INTRODUCTION
Dynamic logic such as domino logic is widely used in many applications to achieve high
performance, which cannot be achieved with static logic styles. However, the main drawback of dynamic
logic families is that they are more sensitive to noise than static logic families. On the other hand, as the
technology scales down, the supply voltage is reduced for low power, and the threshold voltage (Vth) is
also scaled down to achieve high performance. Since reducing the threshold voltage exponentially increases
the sub-threshold leakage current, reduction of leakage current and improvement of noise immunity are of major
concern in robust and high-performance designs in recent technology generations. However, in wide fan-in
dynamic gates, especially for wide fan-in OR gates, robustness and performance significantly degrade with
increasing leakage current.
Wide fan-in domino logic is used for a variety of applications such as memories and comparators. The domino logic circuit is a kind of dynamic logic circuit used for high-speed, high-performance applications, and it plays a vital role wherever the fan-in of a circuit is high. Domino circuits are widely used in high-performance microprocessors, register files, ALUs, DSP circuits, and priority encoders in content-addressable memories, as well as in high fan-in multiplexer and comparator circuits. Thus domino logic techniques are extensively applied in high-performance microprocessors due to their superior speed and area characteristics.
A. Static Logic Circuits
Static logic circuits can maintain their output logic levels for an indefinite period as long as the inputs are unchanged, as shown in Figure 1. Although static CMOS logic is widely used for its high noise margins, good performance, and low power consumption with no static power dissipation, these circuits are limited at extremely high clock speeds and suffer from glitches [4]. The number of transistors required to implement an N fan-in gate is almost 2N; therefore it consumes a large silicon area. An alternative logic style is dynamic CMOS logic.
C. Domino Logic
The name domino comes from the behaviour of a chain of logic gates. Domino logic is a noninverting structure, as shown in Figure 3, and runs 1.5-2 times faster than static logic circuits. It permits high-speed operation and enables the implementation of complex functions that are otherwise not achievable with static or plain dynamic circuits [4]-[6]. Domino logic offers a simple way to eliminate the need for a complex clocking scheme by utilizing a single-phase clock, and it has no static power consumption, as this is removed by the clock input in the first stage. These circuits are glitch-free, have a fast switching threshold, and can be cascaded. Domino circuits employ a dual-phase dynamic logic style with each clock cycle divided into a precharge and an evaluation phase.
where W and L denote the transistor dimensions, and μn and μp are the electron and hole mobilities, respectively. However, the traditional keeper approach is less effective in new generations of CMOS technology. Although keeper upsizing improves noise immunity, it increases the current contention between the keeper transistor and the evaluation network, and thus increases the power consumption and evaluation delay of standard domino circuits. These problems are more critical in wide fan-in dynamic gates due to the large number of leaky NMOS transistors connected to the dynamic node. Hence, there is a trade-off between robustness and performance, and the number of pull-down legs is limited.
Several circuit techniques have been proposed [3] to address these issues. These circuit techniques can be divided into two categories.
In the first category, circuit techniques change the circuit controlling the gate voltage of the keeper, such as Conditional-Keeper Domino (CKD) [7], High-Speed Domino (HSD) [8], Leakage Current Replica (LCR) keeper domino [9], and Controlled Keeper by Current-Comparison Domino (CKCCD) [10]. In the second category, designs, including the proposed one, change the circuit topology of the footer transistor or re-engineer the evaluation network, such as Diode-Footed Domino (DFD) [13] and
The circuit works as follows. At the start of the evaluation phase, when the clock is high, MP3 turns on and the keeper transistor MP2 turns off. In this way, the contention between the evaluation network and the keeper transistor is reduced by turning off the keeper at the beginning of the evaluation mode. After a delay equal to that of two inverters, transistor MP3 turns off. At this moment, if the dynamic node has been discharged to ground, i.e., if any input has gone high, the nMOS transistor MN1 remains off; thus the voltage at the gate of the keeper goes to VDD-Vth rather than VDD, causing a higher leakage current through the keeper transistor. On the other hand, if the dynamic node remains high during the evaluation phase (all inputs at 0, standby mode), MN1 turns on and pulls down the gate of the keeper transistor, so the keeper turns on to hold the dynamic node high, fighting the effects of leakage.
C. Controlled Keeper by Current Comparison Domino logic (CKCCD)
A new circuit design, Controlled Keeper by Current Comparison Domino (CKCCD), is proposed to make domino circuits more robust and lower in leakage without significant performance degradation or increased power consumption. The reference current is compared with the pull-down network (PDN) current. If there is no conducting path from the dynamic node to ground, the only current in the PDN is the leakage
This idea is conceptually illustrated in Figure 7; in effect, there is a race between the pull-down network and the reference current.
D. Current Comparison Domino (CCD)
In the CCD circuit, the current of the PUN is mirrored by transistor M2 and compared with the reference current, which replicates the leakage current of the PUN. The topology of the keeper transistors and the reference circuit, which is shared among all gates, successfully tracks process, voltage, and temperature variations. The CCD circuit employs pMOS transistors to implement the logical function, as shown in Figure 8. This circuit is similar to a replica leakage circuit, in which a series diode-connected transistor M6, similar to M1, is added. Using the N-well process, the source and body terminals of the pMOS transistors can be connected together so that the body effect is eliminated. By this means, the threshold voltage of the transistors varies only with process variation and not with the body effect. Moreover, by utilizing pMOS transistors instead of nMOS ones in the N-well process, it is possible to prevent the threshold voltage from increasing due to the body effect in the presence of the voltage drop caused by the diode configuration of transistor M1, thereby decreasing the delay.
Normally in domino logic, the pull-up network uses only one PMOS transistor instead of n transistors (n = 4, 8, 16, etc.), and the logic is implemented with the pull-down network. By reducing the number of transistors, the node capacitance decreases. Fewer transistors means less switching (charging and discharging) of the capacitance, so dynamic power consumption decreases, since the main source of dynamic power dissipation is charging and discharging.
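The point about switched capacitance can be sketched with the familiar dynamic-power relation P = α·C·V²·f; all numbers below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of the standard dynamic-power relation P = alpha * C * V^2 * f,
# illustrating why reducing the switched capacitance lowers dynamic power.
# The capacitance and frequency values are illustrative, not from the paper.

def dynamic_power(alpha: float, c_farads: float, vdd: float, f_hz: float) -> float:
    """Average dynamic power of a node with activity factor alpha."""
    return alpha * c_farads * vdd ** 2 * f_hz

p_wide = dynamic_power(0.1, 40e-15, 1.0, 1e9)   # large dynamic-node capacitance
p_small = dynamic_power(0.1, 10e-15, 1.0, 1e9)  # reduced transistor count
print(p_wide, p_small)  # 4x less capacitance -> 4x less dynamic power
```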
As is already known, leakage current arises from unwanted current flow between the source and drain of a transistor. This leakage can be reduced by the stacking effect: when two or more series-connected transistors are in the off state, the leakage current is reduced.
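The stacking effect can be sketched with a rough subthreshold-leakage model, I = I0·exp((Vgs − Vth)/(n·VT)); all device parameters below are illustrative assumptions, not measured values from the paper:

```python
import math

# Hedged sketch of the stacking effect using a rough subthreshold model
# I = I0 * exp((Vgs - Vth) / (n * VT)). Parameter values are illustrative
# assumptions, not taken from the paper.

I0, VTH, N, VT = 1e-7, 0.3, 1.5, 0.026  # amps and volts (assumed)

def sub_vt_leakage(vgs: float) -> float:
    """Subthreshold drain leakage for a given gate-source voltage."""
    return I0 * math.exp((vgs - VTH) / (N * VT))

# Single OFF transistor: Vgs = 0.
single = sub_vt_leakage(0.0)

# Two stacked OFF transistors: the intermediate node settles at a small
# positive voltage Vx (assumed), so the top device sees Vgs = -Vx, which
# cuts its leakage exponentially.
vx = 0.05  # assumed intermediate-node voltage
stacked = sub_vt_leakage(-vx)

print(stacked / single)  # well below 1: stacking reduces leakage
```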
V. CONCLUSION
With the continuous scaling of CMOS technology, effective management of leakage power is a great challenge.
[1] C. Arun Prasath and B. Manjula, Design and simulation of low power wide fan-in gates, International Journal of Science and Innovative Engineering & Technology, vol. 1, ISBN 978-81-904760-6-5, May 2015.
[2] C. Arun Prasath and S. Sindhu, Domino based low leakage circuit for wide fan-in OR gates, CiiT International Journal of Programmable Device Circuits and Systems, vol. 6, no. 3, ISSN: 0974-9624, 2014.
[3]
[4]
[5] Sung-Mo Kang and Yusuf Leblebici, CMOS Digital Integrated Circuits: Analysis and Design, 3rd ed., Tata McGraw-Hill Publishing Company Ltd., New Delhi, 2007.
[6] Neil H. E. Weste and K. Eshraghian, Principles of CMOS VLSI Design, 2nd ed., Pearson Education (Asia) Pvt. Ltd., 2000.
[7]
[8]
[9] Y. Lih, N. Tzartzanis, and W. W. Walker, A leakage current replica keeper for dynamic circuits, IEEE J. Solid-State Circuits, vol. 42, no. 1, pp. 48–55, Jan. 2007.
[10] A. Peiravi and M. Asyaei, Robust low leakage controlled keeper by current-comparison domino for wide fan-in gates, Integration, the VLSI Journal, vol. 45, no. 1, pp. 22–32, 2012.
[11] H. Suzuki, C. H. Kim, and K. Roy, Fast tag comparator using diode partitioned domino for 64-bit microprocessors, IEEE Trans. Circuits Syst., vol. 54, no. 2, pp. 322–328, Feb. 2007.
[12] Sherif M. Sharroush, Yasser S. Abdalla, Ahmed A. Dessouki, and El-Sayed A. El-Badawy, Compensating for the keeper current of CMOS domino logic using a well designed NMOS transistor, in Proc. 26th National Radio Science Conference (NRSC 2009).
[13] H. Mahmoodi and K. Roy, Diode-footed domino: A leakage-tolerant high fan-in dynamic circuit design style, IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 3, pp. 495–503, Mar. 2004.
[14] Paulo F. Butzen, Andre I. Reis, Chris H. Kim, and Renato P. Ribas, Modeling and estimating leakage current in series-parallel CMOS networks, in Proc. GLSVLSI '07, 2007.
[15] Nikhil Saxena and Sonal Soni, Leakage current reduction in CMOS circuits using stacking effect, International Journal of Application or Innovation in Engineering & Management (IJAIEM), vol. 2, no. 11, pp. 213–216, Nov. 2013.
[16] Ankita Nagar, Sampath Kumar V., and Payal Kaushik, Power minimization of logical circuit through transistor stacking, International Journal of Application or Innovation in Engineering & Management (IJAIEM), vol. 1, no. 3, pp. 256–260, April-May 2013.
Abstract: A novel identity-based batch verification (NIBV) scheme in vehicular ad hoc networks (VANETs) can significantly improve traffic safety and efficiency. The basic idea is to allow vehicles to send traffic messages to roadside units (RSUs) or other vehicles. Vehicles must be protected from attacks on their privacy and from misuse of their private data; for this reason, security and privacy protection are important prerequisites for VANETs. The identity-based batch verification (IBV) scheme was recently proposed to make VANETs more secure and efficient for practical use, but the current IBV scheme carries some security risks. We therefore set up an improved scheme that can satisfy the security and privacy desired by vehicles. The proposed NIBV scheme provides verifiable security in the random oracle model. In addition, the batch verification of the proposed scheme offers an efficient approach for vehicles to achieve authentication, integrity, and authority. However, when the number of signatures received by a roadside unit (RSU) becomes large, a scalability problem appears immediately: the RSU may be unable to sequentially verify each received signature within the 300 ms period required by the current dedicated short-range communications (DSRC) broadcast protocol. We therefore introduce a new identity-based batch verification scheme for communication between vehicles and RSUs, in which an RSU can verify numerous received signatures at the same time, so that the total verification time is drastically reduced.
Index Terms: Authenticity, novel batch verification, privacy, vehicular ad hoc network.
I. INTRODUCTION
VANETs are a subclass of mobile ad hoc networks. The main difference is that the mobile routers
forming the network are vehicles such as cars or trucks, and their movement is constrained by factors
such as road layout, surrounding traffic and traffic regulations. It is a feasible assumption that the
members of a VANET can connect to fixed networks such as the Internet occasionally, at least at regular
service intervals. A main goal of VANETs is to enhance road safety. A VANET has three important
entities: the trusted authority (TA), the roadside unit (RSU) and the on-board unit (OBU). The TA
schedules the routes of the vehicles and communicates with them via the RSUs; an RSU relays
communication between the TA and the OBUs; and an OBU communicates with RSUs located at the
roadside or at street intersections. Vehicles can also use their OBUs to communicate with each other.
VANET communication can be classified into two types: vehicle-to-infrastructure (V2I) communication
and inter-vehicle (V2V) communication. The basic use of a VANET is that OBUs periodically broadcast
information about their current state. Information such as the current time, position, direction, speed
and traffic events is passed to other nearby vehicles and RSUs. For example, the traffic events could
be an accident location, a brake-light warning, a lane-change/merge warning, an emergency-vehicle
warning, etc. Other vehicles may modify their travel routes, and RSUs may inform the traffic control
centre to adjust traffic lights so as to avoid possible traffic jams. A VANET offers a variety of services
and benefits to users and thus deserves deployment efforts. Given the benefits expected from vehicular
communications and the enormous number of vehicles, vehicular communications are likely to become
the most relevant realization of mobile ad hoc networks. The integration of on-board units and
positioning devices, such as GPS receivers, with communication capabilities opens tremendous
business opportunities, but also raises challenging research problems.
1) The TA is fully trusted by all parties and is equipped with sufficient computation and storage
capacity. Redundant TAs are installed to avoid a bottleneck or a single point of failure.
2) Only the TA can determine a vehicle's real identity; other vehicles and RSUs cannot.
3) The TA and the RSUs communicate via a secure fixed network.
4) RSUs are not trusted. Since they are located at the roadside, they can easily be compromised, and
they may be curious about vehicles' privacy.
5) Tamper-proof devices on vehicles are assumed to be trustworthy, and their information is never
revealed. Under the WAVE standard, every OBU is equipped with a hardware security module (HSM),
a tamper-resistant module used to store security material. The HSM in each OBU is responsible for
performing all cryptographic operations, such as signing messages and updating keys. It is hard even
for legitimate OBUs to extract their private keys from their tamper-proof devices. The device has its
own clock for generating accurate timestamps and is able to run on its own battery. The TA, RSUs and
OBUs have approximately synchronized clocks.
B. ADVERSARY MODEL
All participating RSUs and OBUs are untrusted and the communication channel is not secure. Without
the novel IBV scheme, an adversary is able to perform the following attacks.
1) An adversary may modify or replay existing messages; an adversary may even impersonate any
legitimate vehicle to inject false information into the system, in order to influence the behaviour of
other users or disrupt the operation of the VANET.
2) An adversary may trace the real identity of any vehicle and can disclose the vehicle's real identity
by analyzing many messages sent by it.
IV. PROPOSED SYSTEM
(1) The OBU of a vehicle broadcasts traffic information to an RSU or to nearby vehicles. (2) The RSU
verifies the traffic information and sends it to the TA. (3) The TA schedules the routes of the vehicles,
selecting the route that is traffic-free and shortest. (4) A dynamic routing algorithm is applied to find
the shortest congestion-free routes. (5) The energy level in the vehicular network should be increased
during the time of providing
The proposed novel IBV scheme achieves a verification delay of 5.0 and a signing delay of 0.5 per
message. Fig. 3 shows the relationship between the transmission overhead and the number of messages
received by an RSU in 10 seconds. As the number of messages increases, the transmission overhead
increases linearly, and the transmission overhead of the novel IBV scheme is the lowest among the four
schemes. Here, 45,000 corresponds to the number of messages transmitted by 150 vehicles in
10 seconds; in the previous IBV schemes, the same number of messages is transmitted by 150 vehicles
in 30 seconds.
VI. CONCLUSION
We proposed an efficient novel identity-based batch verification (NIBV) scheme for vehicle-to-infrastructure and inter-vehicle communications in vehicular ad hoc networks (VANETs). Batch-based
verification of multiple message signatures is more efficient than one-by-one single verification when
the receiver has to confirm a large number of messages. In particular, the batch verification process of
the proposed NIBV scheme needs only a constant number of pairing and point-multiplication
computations, independent of the number of message signatures. The proposed NIBV scheme is secure
against existential forgery in the random oracle model under the computational Diffie-Hellman
problem. In the performance analysis, we compared the proposed NIBV scheme with other batch
verification schemes in terms of computation delay and transmission overhead. Moreover, we verified
the efficiency and practicality of the proposed scheme through simulation. Simulation results show that
both the average message delay and the message loss rate of the proposed scheme are lower than those
of the existing schemes.
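The batch-verification idea summarized here, replacing n independent signature checks with a single combined check, can be illustrated with ordinary Schnorr-style signatures over a small prime-order subgroup. This is only an illustrative stand-in: the actual NIBV scheme uses bilinear pairings over elliptic curves, and the parameters below are toy-sized, chosen for demonstration rather than security.

```python
import hashlib
import random

P = 2039   # toy safe prime, P = 2*Q + 1 (far too small for real use)
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def h(R, msg):
    # hash-to-challenge, reduced into the exponent group
    d = hashlib.sha256(f"{R}|{msg}".encode()).hexdigest()
    return int(d, 16) % Q

def keygen(rng):
    x = rng.randrange(1, Q)            # private key
    return x, pow(G, x, P)             # public key y = G^x mod P

def sign(x, msg, rng):
    k = rng.randrange(1, Q)
    R = pow(G, k, P)
    s = (k + h(R, msg) * x) % Q
    return R, s

def verify_one(y, msg, R, s):
    # single check: G^s == R * y^h(R,m) (mod P)
    return pow(G, s, P) == R * pow(y, h(R, msg), P) % P

def verify_batch(items, rng):
    # one combined check with random blinding exponents t_i:
    # G^(sum t_i*s_i) == prod (R_i * y_i^c_i)^t_i (mod P)
    lhs_exp, rhs = 0, 1
    for y, msg, R, s in items:
        t = rng.randrange(1, 998)      # t < Q, so t is never 0 mod Q
        lhs_exp = (lhs_exp + t * s) % Q
        rhs = rhs * pow(R * pow(y, h(R, msg), P) % P, t, P) % P
    return pow(G, lhs_exp, P) == rhs

rng = random.Random(7)
batch = []
for i in range(5):
    x, y = keygen(rng)
    R, s = sign(x, f"traffic-msg-{i}", rng)
    batch.append((y, f"traffic-msg-{i}", R, s))

assert all(verify_one(y, m, R, s) for y, m, R, s in batch)
assert verify_batch(batch, rng)        # 5 checks collapsed into one
```

A single tampered signature makes the combined check fail with overwhelming probability, which is exactly why the future-work section raises the problem of then identifying which signature in the batch was invalid.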
VII. FUTURE WORK
In future work, we will continue our efforts to enhance the features of the IBV scheme for VANETs,
such as recognizing invalid signatures. When attackers send invalid messages, batch verification may
lose its efficiency; this problem commonly accompanies other batch-based verification schemes as
well. Therefore, handling the invalid-signature problem is a challenging topic for our future research.
VIII. REFERENCES
[1] Shiang-Feng Tzeng and Shi-Jinn Horng, Enhancing security and privacy for identity-based batch
verification scheme in VANET, IEEE Transactions on Vehicular Technology, 2015.
[2] C. C. Lee and Y. M. Lai, Toward a secure batch verification with group testing for VANET,
Wireless Networks, vol. 19, no. 6, pp. 1441-1449, 2013.
[3] Shi-Jinn Horng and Shiang-Feng Tzeng, b-SPECS+: Batch verification for secure pseudonymous
authentication in VANET, IEEE Transactions on Information Forensics and Security, vol. 8, no. 11,
November 2013.
[4] C. Zhang, R. Lu, X. Lin, P. H. Ho, and X. Shen, An efficient identity-based batch verification
scheme for vehicular sensor networks, in Proceedings of the 27th IEEE International Conference
on Computer Communications (INFOCOM'08), pp. 816-824, 2008.
[5] M. Raya and J. P. Hubaux, Securing vehicular ad hoc networks, Journal of Computer Security,
Special Issue on Security of Ad Hoc and Sensor Networks, vol. 15, no. 1, pp. 39-68, 2007.
INTRODUCTION
The diseases of the heart or blood vessels are generally known as cardiovascular disease (CVD). It is
a major health problem. Studies show that in 2011 there were almost 160,000 deaths as a result of CVD,
around 74,000 of which were caused by coronary heart disease. Most deaths from heart disease are
caused by heart attacks. In the UK, there are about 103,000 heart attacks and 152,000 strokes each year,
resulting in more than 41,000 deaths [7]. This disease can be detected using ECG analysis. An ECG
(electrocardiogram) is a test that measures the electrical activity of the heart. The electrical impulses
generated by the heartbeat are recorded and usually displayed on a strip of paper. This record is referred
to as an electrocardiogram; it reveals problems with the heart's rhythm and with the conduction of the
heartbeat through the heart, which may be affected by underlying heart disease. It consists of five
waves, namely P, Q, R, S and T. The normal ECG wave is illustrated in fig.1. Each wave occurs due
to particular electrical variations in the heart. The QRS complex represents the depolarization of the
ventricles of the heart, which have greater muscle mass and therefore produce more electrical activity.
The applications of the ECG include determining the electrical axis of the heart, heart rate monitoring,
and the monitoring of arrhythmias, carditis and pacemakers [11]. The automatic detection of the QRS
complex is critical for reliable Heart Rate Variability (HRV) analysis, which is recognized as an
effective tool for diagnosing cardiac arrhythmias, understanding the autonomic regulation of the
cardiovascular system during sleep and hypertension, detecting breathing disorders such as Obstructive
Sleep Apnea Syndrome, and monitoring other structural or functional cardiac disorders. The detection
of QRS complexes has been extensively investigated in the last two decades, and many attempts have
been made to find a satisfactory universal solution for QRS complex detection. The difficulties arise
mainly from the huge diversity of QRS complex waveforms, abnormalities, low signal-to-noise ratio
(SNR) and the artefacts accompanying ECG signals, as described in [8]. The main aim of this work is
to increase the accuracy of QRS detection in arrhythmia ECG signals that suffer from non-stationary
random effects, low SNR, negative QRS and low-amplitude QRS.
Many methods have been proposed to detect the QRS complexes, but they have high hardware
complexity and high cost, and when the cost is low, the accuracy is low [9]. Thus, the proposed
technique is implemented with the MIT-BIH database, as described in [15], on hardware with low
complexity and low cost. This is done using an ARM microcontroller, so that the response to the ECG
signal is fast. A novel filter is designed to eliminate the noise in the QRS complex, and the accuracy of
the signal is increased. This concept will be most useful in wearable devices [1], portable devices and
battery-operated devices.
COMPARISON OF OTHER METHODS
The acquisition of the ECG is a two-step process: (1) feature extraction and (2) detection. In the first
step, QRS complexes are enhanced and the noise in them is removed. This is done using different types
of band-pass filtering. Low-frequency noise removal is done with a high-pass FIR filtering technique
[2], which uses low cutoff frequencies with numerous taps. Therefore, to compute each sample, these
filters require n fixed-point adders and multipliers, which increases the operation complexity to 2n. If
IIR filters are used instead, floating-point coefficients are needed. In the second step, the position of
the QRS complexes is determined by differentiation and squaring of the signal; some methods also use
integration to detect the signal.
Pan and Tompkins proposed the earliest real-time QRS detection algorithm [3], based on analysis of
the slope, amplitude and width of QRS complexes. This algorithm includes a series of filters and
methods that perform low-pass, high-pass, derivative, squaring, integration and adaptive-thresholding
operations. In recent times, many research works have addressed automatic detection of the QRS
complex from the ECG signal. Some of them are frequency-based methods, time-component analysis,
dynamic thresholds [5] and heartbeat-interval techniques. These methods are executed using the
Fourier transform or the Hilbert transform [6], which are limited to stationary signals. To overcome
this, the short-time Fourier transform (STFT) was introduced; it gives both frequency- and time-domain
analysis, but some signals cannot be detected using the STFT [13]. Wavelet transform methods [4] &
[14] have been developed and, among other methods, produce reasonable results, but their hardware
complexity and cost are high. For accurate measurements they are used along with an Artificial
Neural Network (ANN) or a multi-layer perceptron based neural network (MLPNN).
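The Pan-Tompkins stage sequence described above (derivative, squaring, moving-window integration, thresholding) can be sketched in a few lines. The synthetic signal, sampling rate, window length and fixed threshold below are assumptions for illustration only; the original algorithm additionally uses band-pass filtering and adaptive thresholds.

```python
import numpy as np

fs = 200                                    # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
ecg = np.zeros_like(t)
beat_times = [0.5, 1.3, 2.1, 2.9, 3.7]      # synthetic R-peak instants
for bt in beat_times:
    ecg += np.exp(-((t - bt) ** 2) / (2 * 0.008 ** 2))  # narrow QRS-like spikes
ecg += 0.05 * np.sin(2 * np.pi * 0.4 * t)   # slow baseline wander

deriv = np.gradient(ecg)                    # slope emphasises the QRS
squared = deriv ** 2                        # rectify and amplify large slopes
win = int(0.15 * fs)                        # ~150 ms integration window
mwi = np.convolve(squared, np.ones(win) / win, mode="same")

thr = 0.5 * mwi.max()                       # simple fixed threshold (not adaptive)
above = mwi > thr
# rising edges of the thresholded signal mark detected beats
edges = np.flatnonzero(above[1:] & ~above[:-1])
```

On this synthetic trace, each rising edge of the thresholded integrator output corresponds to one of the five injected beats.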
PROPOSED SYSTEM
A novel algorithm is proposed for application to real-time ECG signals. It is flexible, upgradeable,
operates with low processing power and is also inexpensive. Like similar proposals, it has an
enhancement phase followed by a detection phase. The computational cost of the first phase is high in
other proposals, but the proposed technique reduces this cost and makes the method suitable for
wearable applications. The block diagram of the proposed system is shown in fig.2.
A ECG SIGNAL
The real-time ECG signal from the patient is taken for the diagnosis of cardiac diseases. This signal
consists of P, QRS and T waves, of which the proposed algorithm concentrates on QRS detection, since
the QRS complex is difficult to determine and is the basis for the diagnosis of arrhythmia.
B PRE-PROCESSING
The ECG signal contains various kinds of unwanted noise due to power-line interference,
electromagnetic noise and baseline wander in the heartbeat signal, as noted in [12]. The accuracy of
the signal would be degraded if it were used as acquired, so pre-processing is needed. The
pre-processing consists of passing the raw ECG signal through a low-pass filter, which performs both
linear and nonlinear filtering of the ECG signal and produces a set of periodic vectors that describe
the events. After filtering, the signal is amplified, which helps in the feature extraction of the peaks.
C LOW FREQUENCY NOISE REDUCTION
A novel high-pass filter with low computational cost is designed to remove the baseline wander. It is
a non-linear high-pass filter that subtracts a low-pass-filtered signal from the original signal to obtain
the higher-frequency content; the subtraction is carried out between the maximum and minimum values
of the signal. Let s(t) be the discrete-time function of the digitized ECG signal; then the filtered output
y(t) is given by Equations (1), (2) & (3).
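A minimal sketch of the operation just described: a long moving average serves as the low-pass estimate m(t) of the baseline, and subtracting it from s(t) leaves the high-frequency QRS content. The window length and the synthetic signal below are assumptions standing in for the paper's Equations (1)-(3).

```python
import numpy as np

fs = 200
t = np.arange(0, 2, 1 / fs)
wander = 0.5 * np.sin(2 * np.pi * 0.3 * t)          # slow baseline drift
qrs = np.exp(-((t - 1.0) ** 2) / (2 * 0.01 ** 2))   # one sharp QRS-like spike
s = wander + qrs                                    # digitized ECG s(t)

win = int(0.6 * fs)                                 # long window -> low-pass
kernel = np.ones(win) / win
baseline = np.convolve(s, kernel, mode="same")      # m(t): low-pass estimate
y = s - baseline                                    # high-pass filtered output
```

After the subtraction, the spike at t = 1.0 s dominates the output while the slow drift is largely removed.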
This m(t) represents the noise component and should be subtracted from the signal. The filtered signal
with the noise removed is given by Equation (5). The duration of the QRS pulse is normally in the range
of 0.06-0.12 s, so some effort is made to improve its detection.
E PEAK DETECTION
Not all samples of r(t) are useful for detecting the QRS complexes. The R peaks in this complex may
appear as positive or negative values, from which the potential R peaks can be filtered; a certain time
window is set to filter these peaks. The advantage of this method is that it supports situations where the
maximum value remains constant over several samples. The maximum and minimum values of the
time window are defined by Equations (6) & (7), and the resulting value is considered a peak.
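The windowed search can be sketched as follows. The window length and sample values are illustrative, standing in for the bounds defined by Equations (6) & (7).

```python
def candidate_peaks(r, win):
    """Return (index, value) of the extreme sample in each window of r."""
    peaks = []
    for start in range(0, len(r), win):
        seg = r[start:start + win]
        hi = max(range(len(seg)), key=lambda i: seg[i])   # window maximum
        lo = min(range(len(seg)), key=lambda i: seg[i])   # window minimum
        # keep whichever extreme has the larger magnitude, so that
        # negative R waves are also caught as candidates
        best = hi if abs(seg[hi]) >= abs(seg[lo]) else lo
        peaks.append((start + best, seg[best]))
    return peaks

signal = [0, 1, 5, 1, 0, -1, -6, -1, 0, 2]
print(candidate_peaks(signal, 5))   # -> [(2, 5), (6, -6)]
```

One candidate per window also handles the case where the maximum stays constant over several samples, since only one index per window is ever emitted.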
F BEAT UNIFICATION
As already discussed, QRS complexes appear as either positive or negative peaks. The negative peaks
are therefore either discarded or unified by changing their sign so that they become positive peaks,
which is useful for fixing the threshold value in the detection step. In other methods this is normally
done by squaring the signal, but the aim of the proposed technique is to reduce multiplication
operations, since they cost more than additions or subtractions. The beat-unification time function
defined to reduce this cost is given by Equation (8).
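The unification step amounts to a sign flip on negative candidate peaks, avoiding the per-sample multiplication that squaring would cost. The peak list below is illustrative; the exact form of the paper's Equation (8) is not reproduced here.

```python
def unify_beats(peaks):
    # peaks: list of (index, value); flip the sign of negative peaks so
    # all candidates are positive, without any multiplication
    return [(i, -v if v < 0 else v) for i, v in peaks]

print(unify_beats([(2, 5), (6, -6)]))   # -> [(2, 5), (6, 6)]
```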
The heartbeat detection is evaluated using the parameters sensitivity, positive detection and detection
error. Sensitivity is the fraction of correctly detected events among the total number of events, and
positive detection is the rate of correctly classified events among all detected events. To evaluate these
parameters, true positives, false positives and false negatives are used. A true positive occurs when a
beat is correctly detected at a certain instant. A false positive occurs when a beat is detected at a certain
instant but no beat is annotated in the database for that instant. A false negative occurs when the
database reports a beat at a certain instant but the algorithm fails to detect it.
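The measures described above can be written out directly: sensitivity Se = TP / (TP + FN) and positive detection +P = TP / (TP + FP); detection error is commonly taken as (FP + FN) divided by the total number of annotated beats. The counts below are illustrative, not from the paper.

```python
def sensitivity(tp, fn):
    # fraction of annotated beats that were detected
    return tp / (tp + fn)

def positive_detection(tp, fp):
    # fraction of detections that correspond to real beats
    return tp / (tp + fp)

def detection_error(fp, fn, total_beats):
    # commonly used definition: all mistakes relative to annotated beats
    return (fp + fn) / total_beats

tp, fp, fn = 980, 10, 14          # illustrative counts
print(round(sensitivity(tp, fn), 4))          # 980 / 994
print(round(positive_detection(tp, fp), 4))   # 980 / 990
```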
The simulation of the proposed filter, shown in fig.4, is an extension of [10]. It is drawn in MATLAB
SIMULINK and run in the same software to display the results. The entire circuit represents the ECG
wave acquisition, amplification, noise reduction using the high-pass filter, and the mathematical
calculation of the parameters used to measure the heart rate.
Fig.5 displays the normal ECG signal, generated using a Simulink block, which is used to determine
the peak of the signal. Normally the ECG signal contains some low-frequency noise, so this signal is
fed to the filter. Another ECG signal with more noise is generated to analyse the performance of the
filter; it is used to compare the accuracy in the normal state of the heart rate with the abnormal state.
This signal is shown in fig.6.
The peak is calculated based on the algorithm; fig.7 represents the peak calculation. The result shows
the peaks compared with the adaptive threshold: only the peaks above the threshold value are shown,
and the other peaks are eliminated as noise. The final simulation result of the proposed algorithm with
the detected peaks is shown in Figure 6.6. The performance of the results obtained is better than that
of the existing methods.
ACCURACY
Reported accuracies of the compared methods: 95.5%, 90%, 97.93% and 98.61% (proposed filter).
The low-frequency noise is reduced using the high-pass filter. This facilitates real-time processing of
ECG signals with low computational complexity, which is useful for portable devices, wearable devices
and ultra-low-power chips. The proposed filter achieves an efficiency of 98.61%. To determine the
heart rate, the parameters were calculated; the result gives a heart rate of 71 beats per minute, which is
a normal heart rate.
6) REFERENCES
[1] Liang-Hung Wang, Implementation of a Wireless ECG Acquisition SoC for IEEE 802.15.4 (ZigBee)
Applications, IEEE Journal of Biomedical and Health Informatics, Vol. 19, No. 1, January 2015.
[2] Rani. S, Kaur. A, Ubhi. J. S, Comparative study of FIR and IIR filters for the removal of baseline
noises from ECG signal, Int. J. Comput. Sci. Inf. Technol. 2 (3) (2011), pp. 1105-1108.
[3] Xiaoyang Zhang, A 300-mV 220-nW Event-Driven ADC With Real-Time QRS Detection for
Wearable ECG Sensors, IEEE Trans. Biomed. Circuits and Systems, Vol. 8, No. 6, December 2014.
[4]
[5]
[6] Simranjit Singh Kohli, et al., Hilbert Transform Based Adaptive ECG R-Peak Detection Technique,
International Journal of Electrical and Computer Engineering (IJECE), Vol. 2, No. 5, October 2012,
pp. 639-643.
[7] http://www.nhs.uk/conditions/cardiovascular-disease/Pages/Introductio.aspx.
[8] Dorthe B. Saadi, et al., Automatic Real-Time Embedded QRS Complex Detection for a Novel
Patch-Type Electrocardiogram Recorder, IEEE Journal of Translational Engineering in Health and
Medicine, Vol. 3, 2015.
[9] Chatterjee. H. K., et al., Real time P and T wave detection from ECG using FPGA, Procedia
Technology, vol. 4, 2012, pp. 840-844.
[10] Karunamoorthy. B, et al., Performance Improvement of QRS Wave in ECG Using ARM Processor,
International Journal of Applied Engineering Research, Vol. 10, No. 88, 2015.
[11] http://www.bem.fi/book/19/19.html
[12] Kaur. M, et al., Comparison of different approaches for removal of baseline wander from ECG
signal, in: Proceedings of the International Conference & Workshop on Emerging Trends in
Technology, 2011, pp. 1290-1294.
Abstract - This paper presents a new meta-heuristic technique, the krill herd algorithm, for solving
the capacitor placement problem in radial distribution systems (RDS). The algorithm predicts the
optimal size of the capacitors and the proper locations at which they should be placed for loss
minimization and hence voltage improvement. The krill herd algorithm is based on the biological
herding behaviour of krill. The method is implemented on 10-bus and 85-bus RDS test systems and
the results are compared with other algorithms from the literature. The outcomes reveal the potency
of the algorithm. The simulation is carried out in the MATLAB environment.
Keywords: Capacitor placement, Krill herd algorithm, Power loss minimization, Radial distribution
system (RDS)
I. INTRODUCTION
From the studies, at the distribution side about 13% of the total power is dissipated as ohmic losses
caused by reactive current flowing in the network. Shunt capacitors are used to reduce these reactive
currents, which results in loss minimization, power factor improvement, improved system security and
better voltage regulation. The main steps of the capacitor problem are (i) optimal location of the
capacitor units and (ii) sizing of the capacitor units. Hence, finding the optimal position and size of the
capacitors plays a significant part in the planning and operation of an electrical system.
In [1], the authors gave a brief survey of the shunt capacitor problem in radial distribution systems
covering the years 1956 to 2013.
The authors of [2] presented an overview of optimum shunt capacitor placement in distribution
systems based on the techniques of (i) analytical methods, (ii) numerical programming methods,
(iii) heuristic methods, (iv) artificial intelligence methods and (v) multidimensional problems. The
authors also compared the results with Particle Swarm Optimization (PSO) on the basis of power loss
reduction, voltage profile improvement, loadability maximization and line limit constraints.
The authors of [3] gave a brief introduction and discussed various works on the Shunt Capacitor
Problem (SCP) up to 2014. They also used two methods, namely sensitivity analysis for finding suitable
locations for the capacitors and a gravitational search algorithm for selecting the sizes of the capacitors.
Since 2015, the Artificial Bee Colony (ABC) algorithm [4], the HCODEQ method [5], the Bacterial
Foraging Optimization Algorithm (BFOA) [6], the monkey search optimization algorithm [7], the Bat
and Cuckoo search algorithms [8], Particle Swarm Optimization (PSO) [9], [10] and the flower
pollination optimization algorithm [11] have been applied to solve optimal capacitor placement and
sizing in the RDS.
The main drawback of all the above methods is poor convergence speed and convergence to near-optimal solutions. This is overcome here by one of the new bio-inspired algorithms, namely the krill
herd (KH) algorithm, which is used to solve the capacitor optimization problem. Minimization of the
RDS active power loss is taken as the objective function, subject to various constraints, namely voltage
limits, reactive power limits and capacitor locations, and an optimum solution is obtained using the
KH algorithm.
(local), (ii) foraging motion (global) and (iii) random physical diffusion. The fitness (imaginary
distance) is the value of the objective function.
The n-dimensional decision space is given by
1. Read the system data, the dimension of the problem, the KHA parameters and the number of
iterations.
2. Randomly generate the initial population of capacitor sizes and locations, regularized between the
maximum and minimum limits.
3.
4. Update the motion of each krill individual by induced motion, foraging motion and random
diffusion using the equations (1), (2) and (3) respectively.
5.
6. Apply crossover and mutation to modify the position of each krill individual using equations (6)
and (7) respectively.
7. Check the limits of the individual capacitor-size variables; if a limit is violated, set the variable to
its minimum or maximum value.
8. Check the fitness function; an infeasible solution is replaced by the solution with the best previously
visited position.
9. Go to step 3 and repeat steps 3 to 8 until the maximum number of iterations is reached.
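The steps above can be collapsed into a compact skeleton. The motion constants, mutation rate and the toy objective below are placeholders: a real implementation would evaluate the RDS active power loss via a load-flow calculation and would use the full KH update rules of equations (1)-(3), (6) and (7).

```python
import random

random.seed(0)
LOW, HIGH = 0.0, 1.0    # per-variable limits (stand-ins for capacitor-size limits)
DIM, POP, ITERS = 3, 20, 100

def loss(x):
    # stand-in objective; a real run would compute RDS power loss by load flow
    return sum((xi - 0.37) ** 2 for xi in x)

# steps 1-2: read parameters and generate the initial population within limits
krill = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(POP)]
best = list(min(krill, key=loss))

for _ in range(ITERS):
    fits = [loss(k) for k in krill]                 # step 3: evaluate fitness
    food = list(krill[fits.index(min(fits))])       # best individual as attractor
    for k in krill:
        for d in range(DIM):
            induced = 0.1 * (best[d] - k[d])        # step 4: induced motion
            forage = 0.1 * (food[d] - k[d])         #          foraging motion
            diffuse = 0.02 * random.uniform(-1, 1)  #          random diffusion
            k[d] += induced + forage + diffuse      # step 5: move the krill
        if random.random() < 0.2:                   # step 6: mutation stand-in
            k[random.randrange(DIM)] = random.uniform(LOW, HIGH)
        for d in range(DIM):                        # step 7: enforce limits
            k[d] = min(max(k[d], LOW), HIGH)
    cand = min(krill, key=loss)                     # step 8: keep best-so-far
    if loss(cand) < loss(best):
        best = list(cand)
# step 9: the loop runs until the iteration budget is exhausted
```

With the attraction toward the best-so-far position plus a small diffusion term, the population contracts onto the optimum of the stand-in objective while step 7 keeps every variable inside its limits.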