
Proceedings of the Ninth International Conference on Machine Learning and Cybernetics, Qingdao, 11-14 July 2010

HYBRID BACTERIA FORAGING-DE BASED ALGORITHM FOR ECONOMIC LOAD DISPATCH WITH NON-CONVEX LOADS
NIDUL SINHA1, SENIOR MEMBER IEEE, LOI LEI LAI2, FELLOW IEEE, L. C. SAIKIA1, T. MALAKAR1
1 Department of Electrical Engineering, NIT, Silchar, Assam, India-788010
2 Energy Systems Group, City University, London, UK
E-mail: nidulsinha@hotmail.com, l.l.lai@city.ac.uk

Abstract:
An algorithm based on the hybridization of Bacteria Foraging (BF) and Differential Evolution (DE) was developed to find the optimum load allocation amongst the committed units in a power system with non-convex loads. The performance of the proposed algorithm is evaluated on a test case of 15 units and is compared with the BF method. The effect of swarming on the performance of the hybrid algorithm is also investigated. Results demonstrate that the hybrid algorithm performs much better than BF in terms of convergence rate and solution quality, and that the swarming effect equips the hybrid algorithm with better search capability, as demonstrated on the test case.

Keywords:
Bacterial Foraging; Differential Evolution; Floating point Genetic Algorithm; Economic Load dispatch

1. Introduction

Economic Load Dispatch (ELD) is one of the major optimization problems in the operation of electric power systems. It finds the optimum allocation of load amongst the committed generating units subject to satisfaction of the system constraints. The problem becomes highly non-convex when realistic features like valve point loading, restricted operating zones and combined cycle units are incorporated. Most of the conventional classical dispatch algorithms, like the lambda-iteration method, the base point and participation factors method, and the gradient method [1], [2], are gradient based and hence cannot tackle this non-convexity well. These algorithms usually approximate the cost characteristics as quadratic functions to meet their requirements; however, such approximations may result in a huge loss of revenue over time. In addition, they have a tendency to get trapped easily in local minima, while most modern practical thermal units have highly non-linear input-output characteristics, because of valve point loadings, prohibited operating zones, etc., resulting in multiple local minima in the cost function. The solution of multi-modal optimization problems like ELD demands solution methods that place no restrictions on the shape of the fuel cost curves. An enumerative method like dynamic programming (DP) [1] is capable of solving ELD problems with inherently nonlinear and discontinuous cost curves, but it suffers from intensive mathematical computation and memory requirements. With nonlinear and non-differentiable objective functions, modern heuristic search approaches are the methods of choice. The best known of these are the genetic algorithm (GA) [3]-[11], evolution strategies (ES) [12], [13], evolutionary programming (EP) [13]-[21], simulated annealing (SA) [3], [22], particle swarm optimization (PSO) [23]-[25], and differential evolution (DE) [26], [27]. Every heuristic search method uses a strategy that creates new solutions and some criterion to accept or reject them. In doing so, all basic heuristic search methods use some greedy criteria to converge to better search points: one such criterion is to accept a new solution if and only if it reduces the value of the objective function (in the case of minimization), and another is to force the creation of new solutions nearer to already found better solutions. However, although these greedy features enhance the convergence capability of an algorithm, they may also cause it to get trapped at a local optimum. Inherently, parallel search techniques like evolutionary algorithms have built-in safeguard features, such as exploration, to forestall misconvergence. Though some researchers have reported on the performance of simulated annealing [3], [22] in solving non-linear ELD problems, the main drawback of SA is the difficulty in determining an appropriate annealing schedule, without which the solution achieved may still be only locally optimal. Recent trends in research, therefore, have been directed towards the use of evolutionary algorithms (EAs), i.e. GA, ES and EP, which are based on the simulated evolutionary process of natural selection and genetics. EAs are more flexible and robust than conventional calculus-based methods.

Due to its high potential for global optimization, GA has received great attention in solving ELD problems. Walters and Sheble [4] reported a GA model that employed the unit outputs as the encoded parameters of the chromosome to solve an ELD problem with valve-point discontinuities. To enhance the performance of GA, Yalcinoz et al. [10] proposed a real-coded representation scheme with arithmetic crossover, mutation and elitism, which solves the ELD problem more efficiently and obtains a high-quality solution with less computation time. Recently, researchers have been attracted by the impressive performance of the bacteria foraging algorithm (BFA) proposed by Passino [28], which has been applied to the harmonic estimation problem in power systems [29], to the optimization of both real power loss and voltage stability limit [30], and to the optimization of an active power filter for load compensation [31]. The algorithm is based on the foraging behavior of the E. coli bacteria present in the human intestine. The objective here is the minimization of the total production cost over the scheduling horizon while the constraints are satisfied during the optimization process. BFA includes most of the features of improved modern heuristic search methods, such as chemotaxis, swarming, reproduction, and elimination and dispersal, which make the algorithm very promising. It has been reported that BFA performs much better than an optimized floating point GA [38]. The differential evolution algorithm has promising features like a better convergence rate and greediness, while BFA sometimes suffers from poor convergence even though it has better explorability. Hence, there is a motivation to develop an algorithm that blends BFA with DE features so as to exploit the positive features of both algorithms while overcoming their demerits.

In view of the above, the main objectives of the present work are: (i) to develop a program based on the BF algorithm to solve the non-convex ELD problem; (ii) to develop a program based on the hybridization of the BF and DE algorithms to solve the same non-convex ELD problem as in (i); (iii) to compare the performance of the algorithms on the same problem; and (iv) to investigate the effect of swarming in the BF algorithm on the performance.

The rest of the paper is organized as follows. Section 2 briefly reviews the mathematical problem formulation. The BFA and DE algorithms are described in Sections 3 and 4, and the hybrid BFA-DE algorithm for ELD problems is outlined in Section 5. Section 6 presents the experimental results together with comparisons with other methods and discussions. Conclusions are drawn in Section 7.

2. ELD problem formulation

The economic load dispatch problem can be described as an optimization (minimization) process with the following objective function:

\min F = \sum_{j=1}^{n} FC_j(P_j) \qquad (1)

where FC_j(P_j) is the fuel cost function of the jth unit and P_j is the power generated by the jth unit, subject to the power balance constraint

\sum_{j=1}^{n} P_j = D + P_L \qquad (2)

where D is the system load demand and P_L is the transmission loss, and to the generating capacity constraints

P_j^{\min} \le P_j \le P_j^{\max}, \quad j = 1, 2, \ldots, n \qquad (3)

where P_j^{\min} and P_j^{\max} are the minimum and maximum power outputs of the jth unit. The fuel cost function of a generating unit without valve point loadings is given by

FC_j(P_j) = a_j + b_j P_j + c_j P_j^2 \qquad (4)

and the fuel cost function considering valve point loadings is given by

FC_j(P_j) = a_j + b_j P_j + c_j P_j^2 + \left| e_j \sin\left( f_j (P_j^{\min} - P_j) \right) \right| \qquad (5)

where a_j, b_j, c_j are the fuel cost coefficients of the jth unit and e_j and f_j are the fuel cost coefficients of the jth unit with valve point effects. Generating units with multi-valve steam turbines exhibit a greater variation in their fuel cost functions; the valve-point effects introduce ripples in the heat rate curves.
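To make Eqs. (2)-(5) concrete, a minimal Python sketch of the valve-point cost and of a penalty-augmented evaluation in the spirit of Eq. (6) below is given here; the authors' programs are written in MATLAB, and the zero transmission loss, the quadratic penalty form and the names fuel_cost/fitness are illustrative assumptions only, not the paper's exact implementation.

import numpy as np

def fuel_cost(P, a, b, c, e, f, P_min):
    """Per-unit fuel cost with valve-point loading, Eq. (5)."""
    return a + b * P + c * P**2 + np.abs(e * np.sin(f * (P_min - P)))

def fitness(P, a, b, c, e, f, P_min, P_max, demand, penalty=100.0):
    """Penalty-augmented fitness in the spirit of Eq. (6) below.
    Transmission loss P_L is taken as zero here (a simplifying assumption)."""
    total_cost = np.sum(fuel_cost(P, a, b, c, e, f, P_min))
    balance_viol = abs(np.sum(P) - demand)                  # Eq. (2) with P_L = 0
    limit_viol = np.sum(np.maximum(P_min - P, 0.0)
                        + np.maximum(P - P_max, 0.0))       # Eq. (3)
    return total_cost + penalty * (balance_viol**2 + limit_viol**2)

For the 15-unit test case of Table I, each coefficient argument would be a length-15 array and demand = 2650 MW.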

The fitness function, which is the sum of the production cost and a penalty for constraint violation, is calculated for each individual of the parent population as

FIT = F + \sum_{z=1}^{N_c} PF_z \qquad (6)

with PF_z = X \, [VIOL_z]^2, where N_c is the number of constraints, VIOL_z is the amount of violation of the zth constraint and X is the penalty multiplier.

3. Bacterial Foraging Algorithm (BFA)

The idea of foraging in BFA is based on the fact that natural selection tends to eliminate animals with poor foraging strategies and favour those with successful foraging strategies. After many generations, poor foraging strategies are either eliminated or reshaped into good ones. The E. coli bacteria present in our intestines have a foraging strategy governed by four processes, namely chemotaxis, swarming, reproduction, and elimination and dispersal. For a detailed description readers may refer to references [28] and [38]; a brief introduction is given here for ready reference.

A. Chemotaxis: This process is achieved through swimming and tumbling. Depending upon the rotation of its flagella, the bacterium decides in which direction it should move (tumbling), and if the new location of the bacterium after the movement is better, the bacterium continues to swim in the same direction (swimming) for a specified number of steps.

B. Swarming: It is desirable that a bacterium that has found an optimum path to food should attract other bacteria so that they reach the desired place more rapidly. Swarming makes the bacteria congregate into groups and hence move as concentric patterns of groups with high bacterial density. Let P(j, k, l) = \{\theta^i(j, k, l) \mid i = 1, 2, \ldots, S\} denote the positions of the S bacteria. Mathematically, swarming can be represented by

J_{cc}(\theta, P(j,k,l)) = \sum_{i=1}^{S} \left[ -d_{attract} \exp\left( -w_{attract} \sum_{m=1}^{p} (\theta_m - \theta_m^i)^2 \right) \right]
                         + \sum_{i=1}^{S} \left[ h_{repellent} \exp\left( -w_{repellent} \sum_{m=1}^{p} (\theta_m - \theta_m^i)^2 \right) \right] \qquad (7)

where J_{cc}(\theta, P(j,k,l)), due to the movements of all the cells, is a time-varying function that is added to J(i, j, k, l) so that the cells try to find nutrients and avoid noxious substances, and at the same time try to move toward other cells, but not too close to them. S is the total number of bacteria, p is the number of parameters to be optimized that are present in each bacterium position, and d_{attract}, w_{attract}, h_{repellent} and w_{repellent} are coefficients that must be chosen judiciously.

C. Reproduction: The half of the total bacteria, Sr = S/2, with the least health die, and the remaining, comparatively healthier bacteria each split into two bacteria placed at the same location. This keeps the population of bacteria constant and follows the natural principle of preferring better-fit bacteria to survive and reproduce.

D. Elimination and Dispersal: It is possible that the life of a population of bacteria in the local environment changes either gradually, by consumption of nutrients, or suddenly, due to some other influence. Events can kill or disperse all the bacteria in a region. They may destroy chemotactic progress, but they may also assist chemotaxis, since dispersal can place bacteria at better locations, i.e. better solutions. Elimination and dispersal thus prevents bacteria from getting trapped in local optima. For each elimination-dispersal event, each bacterium in the population is subjected to elimination-dispersal with probability ped. To keep the number of bacteria constant, if a bacterium is eliminated, another is simply dispersed to a random location in the optimization domain.
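A minimal Python sketch of the cell-to-cell term of Eq. (7) is shown below; the function name swarming_term and the default coefficient values are illustrative assumptions, not the tuned values used in the paper.

import numpy as np

def swarming_term(theta, population,
                  d_attract=0.1, w_attract=0.2,
                  h_repellent=0.1, w_repellent=10.0):
    """Cell-to-cell attraction/repulsion J_cc of Eq. (7).
    theta      : position of one bacterium, shape (p,)
    population : positions of all S bacteria, shape (S, p)"""
    sq_dist = np.sum((theta - population) ** 2, axis=1)   # sum over the p dimensions
    attract = -d_attract * np.exp(-w_attract * sq_dist)   # attraction toward each cell
    repel = h_repellent * np.exp(-w_repellent * sq_dist)  # short-range repulsion
    return float(np.sum(attract + repel))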

4. Differential Evolution (DE)

The DE algorithm is a population-based algorithm like the GA, using similar operators: crossover, mutation and selection. The main difference between GA and DE is that GAs rely mostly on crossover while DE relies on the mutation operation. The algorithm uses the mutation operation as a search mechanism and the selection operation to direct the search toward the prospective regions in the search space. Mutation in DE uses differences of randomly sampled pairs of solutions in the population, and greediness may be embedded in it. The DE algorithm also uses a non-uniform crossover that can take child vector parameters from one parent more often than from others. By using the components of existing population members to construct trial vectors, the recombination (crossover) operator efficiently shuffles information about successful combinations, enabling the search for a better solution space. An optimization task consisting of n parameters can be represented by an n-dimensional vector. In DE, a population of Np solution vectors is randomly created at the start and is then successively improved by applying the mutation, crossover and selection operators.
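For reference, one generation of the classical DE/rand/1/bin scheme can be sketched in Python as follows; the paper does not spell out the exact DE configuration used outside the hybrid loop, so this standard variant, and the name de_generation, are assumptions for illustration only.

import numpy as np

def de_generation(pop, costs, objective, F=0.4, CR=0.7, rng=None):
    """One generation of classical DE/rand/1/bin (assumes Np >= 4)."""
    rng = np.random.default_rng() if rng is None else rng
    Np, D = pop.shape
    new_pop, new_costs = pop.copy(), costs.copy()
    for i in range(Np):
        # Mutation: scaled difference of two random members added to a third
        r1, r2, r3 = rng.choice([k for k in range(Np) if k != i], 3, replace=False)
        donor = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: the trial keeps at least one donor component
        j_rand = rng.integers(D)
        mask = rng.random(D) < CR
        mask[j_rand] = True
        trial = np.where(mask, donor, pop[i])
        # Greedy selection
        c = objective(trial)
        if c < costs[i]:
            new_pop[i], new_costs[i] = trial, c
    return new_pop, new_costs

The hybrid algorithm of the next section replaces this plain mutation by a donor vector built around the global best, as described below.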

5. The hybrid algorithm

DE has reportedly outperformed powerful meta-heuristics like the genetic algorithm (GA) and particle swarm optimization. Practical experience suggests, however, that DE may occasionally stop proceeding towards the global optimum even though the population has not converged to a local optimum or any other point; occasionally new individuals may still enter the population, yet the algorithm does not progress by finding any better solutions. This situation is usually referred to as stagnation. In the present work, we have incorporated an adaptive chemotactic step, borrowed from the realm of BFA, into DE. The computational chemotaxis of BFA serves as a stochastic gradient-descent-based local search, and it was seen to greatly improve the convergence characteristics of the classical DE. The resulting hybrid algorithm is outlined below.

E. BFA-DE Algorithm in brief:

Step 1 Initialization. First, the following variables must be chosen:
1) S: number of bacteria to be used in the search.
2) p: number of parameters to be optimized.
3) Ns: swimming length.
4) Nc: number of chemotactic steps.
5) Nre: number of reproduction steps.
6) Ned: number of elimination and dispersal events.
7) ped: probability of elimination and dispersal.
8) C(i), i = 1, 2, ..., S: the run-length unit (step size) for every bacterium.
9) CR: crossover factor of DE.
10) F: mutation factor of DE.
11) The values of d_attract, w_attract, h_repellent and w_repellent.
12) Initial values of the positions \theta^i, i = 1, 2, ..., S.

Step 2 Iterative algorithm for optimization
1) Elimination-dispersal loop: l = l + 1.
2) Reproduction loop: k = k + 1.
3) Chemotaxis loop: j = j + 1.
a) For i = 1, 2, ..., S, take a chemotactic step for bacterium i as follows.
b) Compute J(i, j, k, l). Let J_sw(i, j, k, l) = J(i, j, k, l) + J_cc(\theta^i(j, k, l), P(j, k, l)) (i.e., add on the cell-to-cell attractant effect for swarming behaviour).
c) Let J_last = J_sw(i, j, k, l) to save this value, since we may find a better cost via a run.
d) Tumble: generate a random vector \Delta(i) \in R^p with each element \Delta_m(i), m = 1, 2, ..., p, a random number on [-1, 1].
e) Move: let \phi(i) = \Delta(i) / \sqrt{\Delta^T(i)\Delta(i)} and \theta^i(j+1, k, l) = \theta^i(j, k, l) + C(i)\phi(i). This results in a step of size C(i) in the direction of the tumble for bacterium i.
f) Compute J(i, j+1, k, l), and then let J_sw(i, j+1, k, l) = J(i, j+1, k, l) + J_cc(\theta^i(j+1, k, l), P(j+1, k, l)).
g) Swim:
(i) Let m = 0 (counter for swim length).
(ii) While m < Ns: let m = m + 1. If J_sw(i, j+1, k, l) < J_last (i.e., doing better), let J_last = J_sw(i, j+1, k, l), let \theta^i(j+1, k, l) = \theta^i(j+1, k, l) + C(i)\phi(i), and use this \theta^i(j+1, k, l) to compute the new J(i, j+1, k, l) as in f). Else, let m = Ns. This is the end of the while statement.
h) Differential Evolution mutation loop:
(i) For each trial solution vector \theta(i, j+1, t), choose randomly two other distinct vectors \theta(m) and \theta(n) from the current population such that i \neq m \neq n.
(ii) Form the donor vector V(i, j+1, t) = \theta(i) + \lambda (\theta_g(j) - \theta(i)) + F (\theta(m) - \theta(n)), where V(i, j+1, t) is the donor corresponding to \theta(i, j+1, t), \theta_g is the global best vector at the jth chemotactic step and \lambda is the greediness factor.
(iii) The donor and the target vector then interchange components probabilistically to yield a trial vector U(i, j+1, t):
U_p(i, j+1, t) = V_p(i, j+1, t) if rand_p(0, 1) \le CR or p = rn(i),
U_p(i, j+1, t) = \theta_p(i, j+1, t) if rand_p(0, 1) > CR and p \neq rn(i),
for the pth dimension, where rand_p(0, 1) \in [0, 1] is the pth evaluation of a uniform random number generator and rn(i) \in \{1, 2, ..., D\} is a randomly chosen index which ensures that U(i, j+1, t) gets at least one component from V(i, j+1, t).
(iv) Compute J(i, j+1, t) for the trial vector.
(v) If J(U(i, j+1, t)) < J(\theta(i, j+1, t)), set \theta(i, j+1, t+1) = U(i, j+1, t); that is, the original vector is replaced by its offspring if the offspring has a smaller objective function value.
i) If i < S, go to the next bacterium (i + 1), i.e. go to b) to process the next bacterium.
4) If j < Nc, go to step 3. In this case, continue chemotaxis, since the life of the bacteria is not over.
5) Reproduction:
a) For the given k and l, and for each i = 1, 2, ..., S, let J^i_health = min_j \{J_sw(i, j, k, l)\} be the health of bacterium i. Sort the bacteria in order of ascending cost J_health (higher cost means lower health).
b) The Sr bacteria with the highest J_health values die, and the other Sr bacteria are moved to the location with cost equal to J^i_health and then split (the copies that are made are placed at the same location as their parents).
6) If k < Nre, go to step 2. In this case, we have not reached the specified number of reproduction steps, so we start the next generation in the chemotactic loop.
7) Elimination-dispersal: for i = 1, 2, ..., S, eliminate and disperse each bacterium with probability ped. To eliminate a bacterium, simply disperse one to a random location in the optimization domain.
8) If l < Ned, go to step 1; otherwise end.
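The inner chemotaxis-plus-DE portion of Step 2 (items 3a-3h) can be sketched in Python as follows; this is a simplified, single-level sketch in which the swarming term J_cc, reproduction and elimination-dispersal are omitted, and the function name hybrid_chemotactic_step is illustrative rather than taken from the authors' MATLAB code.

import numpy as np

def hybrid_chemotactic_step(pop, costs, objective, C,
                            F=0.4, CR=0.7, lam=0.6, Ns=6, rng=None):
    """One chemotactic step of the hybrid loop (steps 3a-3h), simplified.
    pop : (S, p) bacterial positions, costs : (S,), C : (S,) run lengths."""
    rng = np.random.default_rng() if rng is None else rng
    S, p = pop.shape
    for i in range(S):
        # Tumble: random unit direction phi, then steps of size C[i]
        delta = rng.uniform(-1.0, 1.0, p)
        phi = delta / np.sqrt(delta @ delta)
        j_last = costs[i]
        for _ in range(Ns):                      # swim while the cost keeps improving
            candidate = pop[i] + C[i] * phi
            c = objective(candidate)
            if c < j_last:
                pop[i], costs[i], j_last = candidate, c, c
            else:
                break
        # DE-style donor pulled toward the global best (greediness lam), step 3h
        g = pop[np.argmin(costs)]
        m, n = rng.choice([k for k in range(S) if k != i], 2, replace=False)
        donor = pop[i] + lam * (g - pop[i]) + F * (pop[m] - pop[n])
        j_rand = rng.integers(p)
        mask = rng.random(p) < CR
        mask[j_rand] = True
        trial = np.where(mask, donor, pop[i])
        c = objective(trial)
        if c < costs[i]:                          # greedy replacement
            pop[i], costs[i] = trial, c
    return pop, costs

In the full algorithm this function would sit inside the Nc, Nre and Ned loops of Step 2.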

6. Numerical tests

The performance of the proposed algorithms is evaluated on the test case of 15 units. The programs were developed in the MATLAB command line and results were obtained on a Pentium IV PC with a 2.4 GHz clock and 500 MB RAM. Table I shows the unit data for the test case under study.

Tuned parameters for BFA:
Population size S = 60
Maximum chemotactic steps Nc = 50
Penalty multiplier = 100
Nre = 10, Ns = 6, ped = 0.25, Ned = 4

3209

Proceedings of the Ninth International Conference on Machine Learning and Cybernetics, Qingdao, 11-14 July 2010

Tuned additional parameters for BFA-DE:
CR = 0.7, F = 0.4, greediness factor \lambda = 0.6
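Collected in one place, the tuned settings above might be organised as follows; the run_bfa_de() driver named here is hypothetical and only indicates how such a parameter set would be handed to an implementation of the algorithm of Section 5.

# Illustrative collection of the tuned settings; run_bfa_de() is a
# hypothetical driver name, not an existing function or library call.
bfa_params = {
    "S": 60,         # number of bacteria (population size)
    "Nc": 50,        # chemotactic steps
    "Ns": 6,         # swim length
    "Nre": 10,       # reproduction steps
    "Ned": 4,        # elimination-dispersal events
    "ped": 0.25,     # elimination-dispersal probability
    "penalty": 100,  # penalty multiplier
}
de_params = {"CR": 0.7, "F": 0.4, "lam": 0.6}   # lam = greediness factor

# best_P, best_cost = run_bfa_de(objective=fitness, demand=2650,
#                                **bfa_params, **de_params)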


TABLE I. UNIT DATA FOR THE TEST CASE (15-UNIT CASE) WITH LOAD 2650 MW (WITH VALVE POINT LOADINGS)

Unit | Pmin (MW) | Pmax (MW) | a | b | c | e | f
1  | 150 | 455 | 671.03 | 10.07 | 0.000299 | 300 | 0.035
2  | 150 | 455 | 574.54 | 10.22 | 0.000183 | 300 | 0.035
3  | 20  | 130 | 374.59 | 8.80  | 0.001126 | 120 | 0.077
4  | 20  | 130 | 374.59 | 8.80  | 0.001126 | 120 | 0.077
5  | 150 | 470 | 461.37 | 10.40 | 0.000205 | 300 | 0.035
6  | 135 | 460 | 630.14 | 10.10 | 0.000301 | 300 | 0.035
7  | 135 | 465 | 548.20 | 9.87  | 0.000364 | 300 | 0.035
8  | 60  | 300 | 227.09 | 11.50 | 0.000338 | 200 | 0.042
9  | 25  | 162 | 173.72 | 11.21 | 0.000807 | 120 | 0.077
10 | 20  | 160 | 175.95 | 10.72 | 0.001203 | 120 | 0.077
11 | 20  | 80  | 186.86 | 11.21 | 0.003586 | 120 | 0.077
12 | 20  | 80  | 230.27 | 9.90  | 0.005513 | 120 | 0.077
13 | 25  | 85  | 225.28 | 13.12 | 0.000371 | 120 | 0.077
14 | 15  | 55  | 309.03 | 12.12 | 0.001929 | 120 | 0.077
15 | 15  | 55  | 323.79 | 12.41 | 0.004447 | 120 | 0.077

The convergence characteristics of the BFA and hybrid BFA-DE algorithms, both with and without swarming, are shown in Fig. 1 for the test case. Investigation of the figure reveals that the hybrid BFA-DE algorithm, with or without swarming, converges faster than the BFA algorithm with as well as without swarming. The BFA-DE algorithm with swarming converges faster than that without swarming; similarly, BFA with swarming converges faster than BFA without swarming. This demonstrates the effectiveness of the swarming effect on the convergence capability of the algorithms.

To investigate the effect of the initial trial solutions, all the algorithms were run with 50 different initial trial solutions and the performance is reported in Table II. The average cost achieved over all the runs with each algorithm shows the capability of the algorithm to escape local minima and find better, near-global solutions, while a lower difference between the maximum and minimum values further demonstrates better performance. It can be observed from the table that BFA-DE with swarming has the lowest average cost amongst the four algorithms and a small spread between the maximum and minimum values. The performance of the BFA-based algorithms in both forms is better than that of FPGA.
Figure 1. The convergence nature of the BFA and BFA-DE algorithms, with and without swarming, on the test case (cost ($) versus number of chemotactic steps).

7. Conclusion

Algorithms based on BFA with and without swarming and the hybrid BFA-DE algorithm with and without swarming were developed in MATLAB and their performances tested on a 15-unit test case of the non-convex economic load dispatch problem with valve point loading effects. Experimental results reveal that all the algorithms are competent to provide good quality solutions. The hybrid BFA-DE algorithm with swarming exhibits the best capability of converging to better quality solutions at a higher convergence rate. Both algorithms with swarming show better convergence features than their counterparts without swarming. Between BFA-DE with swarming and BFA-DE without swarming, the former proves more efficient in finding better quality solutions and converging towards the global optimum at a faster rate. Statistical results with 50 different initial populations demonstrate that BFA-DE with swarming is the most competent in finding better quality solutions for highly non-linear ELD problems. Hence, BFA-DE with swarming is recommended for the solution of highly non-linear ELD problems in power systems. There remains, however, considerable scope for further work on improving the BFA algorithm, such as adaptive tuning of the chemotactic steps, step size, etc.
TABLE II. STATISTICAL TEST RESULTS OF 50 RUNS WITH DIFFERENT INITIAL SOLUTIONS (WITH NON-SMOOTH COST CURVES) FOR THE TEST CASE.

Method | Average cost (Rs.) | Maximum cost (Rs.) | Minimum cost (Rs.)
BFA (without swarming) | 33380 | 33756 | 33170
BFA (with swarming) | 33372 | 33608 | 33075
BFA-DE (without swarming) | 33244 | 33538 | 33009
BFA-DE (with swarming) | 33237 | 33496 | 32948

REFERENCES

[1] Liang, Zi-Xiong and Glover, J. Duncan, "A zoom feature for a dynamic programming solution to economic dispatch including transmission losses", IEEE Trans. on Power Systems, Vol. 7, No. 2, May 1992, pp. 544-549.
[2] Wood, A.J. and Wollenberg, B.F., Power Generation, Operation and Control, second edition, Wiley, New York, USA, 1996.
[3] Wong, K.P. and Wong, Y.W., "Thermal generator scheduling using hybrid genetic/simulated annealing approach", IEE Proc. Part-C, Vol. 142, No. 4, July 1995, pp. 372-380.
[4] Walter, D.C. and Sheble, G.B., "Genetic algorithm solution of economic dispatch with valve point loading", IEEE Trans. on Power Systems, Vol. 8, No. 3, August 1993, pp. 1325-1332.
[5] Bakirtzis, A., Petridis, V. and Kazarlis, S., "Genetic algorithm solution to the economic dispatch problem", IEE Proc. Part-D, Vol. 141, No. 4, July 1994, pp. 377-382.
[6] Sheble, G.B. and Brittig, K., "Refined genetic algorithm - economic dispatch example", IEEE Trans. on Power Systems, Vol. 10, Feb. 1995, pp. 117-124.
[7] Chen, P.H. and Chang, H.C., "Large-scale economic dispatch by genetic algorithm", IEEE Trans. on Power Systems, Vol. 10, No. 4, November 1995, pp. 1919-1926.
[8] Fogel, D.B., "A comparison of evolutionary programming and genetic algorithms on selected constrained optimization problems", Simulation, June 1995, pp. 397-404.
[9] Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, MA, 1989.
[10] Yalcinoz, T., Altun, H. and Uzam, M., "Economic dispatch solution using a genetic algorithm based on arithmetic crossover", Proc. Power Tech. Conf., IEEE, Portugal, Sept. 2001.
[11] Houck, C.R., Joines, J.A. and Kay, M.G., "A genetic algorithm for function optimization: A Matlab implementation", Technical Report NCSU-IE TR 95-09, North Carolina State University, 1995.
[12] Bäck, Th. and Schwefel, H.P., "An overview of evolutionary algorithms for parameter optimization", Evolutionary Computation, Vol. 1, No. 1, 1993, pp. 1-23.
[13] Fogel, D.B., Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway, NJ, 1995.
[14] Fogel, D.B., "An introduction to simulated evolutionary optimization", IEEE Trans. on Neural Networks, Vol. 5, No. 1, 1994, pp. 3-14.
[15] Chellapilla, K., "Combining mutation operators in evolutionary programming", IEEE Trans. on Evolutionary Computation, Vol. 2, No. 3, 1998, pp. 91-96.
[16] Wolpert, D.H. and Macready, W.G., "No free lunch theorems for optimization", IEEE Trans. on Evolutionary Computation, Vol. 1, No. 1, 1997, pp. 67-82.
[17] Yang, H.T., Yang, P.C. and Huang, C.L., "Evolutionary programming based economic dispatch for units with non-smooth fuel cost functions", IEEE Trans. on Power Systems, Vol. 11, No. 1, February 1996, pp. 112-118.
[18] Yao, X., Liu, Y. and Lin, G., "Evolutionary programming made faster", IEEE Trans. on Evolutionary Computation, Vol. 3, July 1999, pp. 82-102.
[19] Sinha, Nidul, Chakrabarti, R. and Chattopadhyay, P.K., "Evolutionary programming techniques for economic load dispatch", IEEE Trans. on Evolutionary Computation, Vol. 7, No. 1, February 2003, pp. 83-94.



[20] Sinha, Nidul, Chakrabarti, R. and Chattopadhyay, P.K., "Fast evolutionary programming techniques for short-term hydrothermal scheduling", IEEE Trans. on Power Systems, Vol. 18, No. 1, February 2003, pp. 214-220.
[21] Yao, X. and Liu, Y., "Fast evolutionary programming", Proc. 5th Annu. Conf. Evolutionary Programming, L. J. Fogel, T. Bäck, and P. J. Angeline, Eds., Cambridge, MA, 1996, pp. 451-460.
[22] Wong, K.P. and Fung, C.C., "Simulated annealing based economic dispatch algorithm", IEE Proc. Part-C, Vol. 140, No. 6, 1992, pp. 544-550.
[23] Kennedy, J. and Eberhart, R., "Particle swarm optimization", Proceedings of the IEEE International Conference on Neural Networks, Vol. IV, Perth, Australia, 1995, pp. 1942-1948.
[24] Park, J.-B., Lee, K.-S., Shin, J.-R. and Lee, K.Y., "A particle swarm optimization for economic dispatch with non-smooth cost functions", IEEE Trans. on Power Systems, Vol. 20, No. 1, February 2005, pp. 34-42.
[25] El-Gallad, A., El-Hawary, M., Sallam, A. and Kalas, A., "Particle swarm optimizer for constrained economic dispatch with prohibited operating zones", Proc. IEEE Canadian Conf. on Electrical and Computer Engineering, 2002, pp. 78-81.
[26] Storn, R., "System design by constraint adaptation and differential evolution", IEEE Trans. on Evolutionary Computation, Vol. 3, No. 1, 1999, pp. 22-34.
[27] Price, K., "Differential evolution: a fast and simple numerical optimizer", NAFIPS 1996, Berkeley, pp. 524-527.
[28] Passino, K.M., "Biomimicry of bacterial foraging for distributed optimization and control", IEEE Control Systems Magazine, Vol. 22, No. 3, Jun. 2002, pp. 52-67.

[29] Mishra, S., "A hybrid least square-fuzzy bacteria foraging strategy for harmonic estimation", IEEE Trans. on Evolutionary Computation, Vol. 9, No. 1, Feb. 2005, pp. 61-73.
[30] Tripathy, M. and Mishra, S., "Bacteria foraging-based solution to optimize both real power loss and voltage stability limit", IEEE Trans. on Power Systems, Vol. 22, No. 1, Feb. 2007, pp. 240-248.
[31] Mishra, S. and Bhende, C.N., "Bacterial foraging technique-based optimized active power filter for load compensation", IEEE Trans. on Power Systems, Vol. 22, No. 1, Jan. 2007, pp. 457-465.
[32] Storn, R., "System design by constraint adaptation and differential evolution", IEEE Trans. on Evolutionary Computation, Vol. 3, No. 1, 1999, pp. 22-34.
[33] Storn, R. and Price, K., "Minimizing the real functions of the ICEC'96 contest by differential evolution", Int. Conf. on Evolutionary Computation, IEEE, 1996, pp. 842-844.
[34] Price, K., "Differential evolution: a fast and simple numerical optimizer", NAFIPS 1996, Berkeley, pp. 524-527.
[35] Ursem, R.K. and Vadstrup, P., "Parameter identification of induction motors using differential evolution", Proceedings of the Fifth Congress on Evolutionary Computation, CEC-03, IEEE, 2003, pp. 790-796.
[36] Paterlini, S. and Krink, T., "High performance clustering using differential evolution", Proceedings of the Sixth Congress on Evolutionary Computation, CEC-04, IEEE, 2004.
[37] Huse, E.S., "Power generation scheduling - a free market based procedure with reserve constraints included", Ph.D. thesis, Norwegian University of Science and Technology, Norway, 1998.
[38] Sinha, N., Paul, D., Singh, B.B., Barua, M. and Chauhan, Y., "Bacteria foraging based algorithm for optimum economic load dispatch with non-convex loads", Proceedings of the Int. Conference on Operation Research and Energy Management (ICOREM'09), Anna Univ., Trichy, India, 27-29 May 2009.

