
Applied Mathematics and Computation 168 (2005) 1370–1379

www.elsevier.com/locate/amc

A new nonlinear neural network for solving convex nonlinear programming problems
S. Effati *, M. Baymani
Department of Mathematics, Teacher Training University of Sabzevar, Sabzevar, Iran

Abstract
This paper presents a new recurrent neural network for solving convex nonlinear programming problems. The new model is simpler and more intuitive than existing models and converges very quickly to the exact solution of the original problem. We show that this new model is asymptotically stable.
© 2004 Elsevier Inc. All rights reserved.
Keywords: Neural network; Convex nonlinear programming; Differential equation

1. Introduction

In 1985 and 1986 Hopfield and Tank [1,2] proposed a neural network for solving linear programming problems. Their seminal work has inspired many researchers to investigate alternative neural networks for solving linear and nonlinear programming problems. We consider the nonlinear programming problem of the following form¹:
* Corresponding author. E-mail address: effati911@yahoo.com (S. Effati).
¹ Suppose that the convex nonlinear programming problem has a solution.

0096-3003/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2004.10.028


  minimize    f(x),
  subject to  g(x) = (g1(x), ..., gm(x)) ≤ 0,        (1)

where f(x), gi(x) : Rⁿ → R¹, x ∈ Rⁿ, and the functions f(x) and gj(x), j = 1, ..., m, are differentiable and convex. Let ∇f(x) and ∇g(x) be the gradients of f(x) and g(x), respectively. The Tank and Hopfield model can be described in compact form as

  C dx/dt = -{ ∇f(x) + (1/s) ∇g(x) g⁺(x) + (1/s) R^{-1} x },        (2)

where C is an n × n diagonal matrix due to the self-capacitance of each neuron, s is the penalty parameter, and R is an n × n diagonal matrix with

  rii = ( 1/pi + Σ_{j=1}^m dji )^{-1}

being the self-conductance of each neuron. The corresponding energy function is chosen as

  E1(x) = f(x) + (1/(2s)) Σ_{j=1}^m [gj⁺(x)]² + Σ_{i=1}^n xi²/(2 s rii),        (3)

where gj⁺(x) = max{0, gj(x)}. In 1987, Kennedy and Chua [3] proposed an improved model that always guaranteed convergence. The Kennedy and Chua model can be described by

  dx/dt = -C^{-1} { ∇f(x) + s ∇g(x) g⁺(x) },        (4)

where

  g⁺(x) = (g1⁺(x), g2⁺(x), ..., gm⁺(x)),

and C and s are defined the same as in (2); the corresponding energy function becomes

  E2(x) = f(x) + (s/2) Σ_{j=1}^m [gj⁺(x)]².        (5)

However, their model converges only to an approximation of the optimal solution. In 1990, Rodriguez-Vazquez et al. [4] proposed a class of neural networks for solving optimization problems. The Rodriguez-Vazquez et al. model is of the following form:

  dx/dt = -u(x) ∇f(x) - s ∇g(x) g⁺(x),        (6)


where u(x) is the feasibility index of x, that is, u(x) = 1 if g(x) ≤ 0 and u(x) = 0 otherwise. The corresponding energy function is

  E3(x) = u(x) f(x) + (s/2) Σ_{j=1}^m [gj⁺(x)]².

In 1996, Wu et al. [7] and Xia [5] introduced a new model that solves both the primal and dual problems of linear and quadratic programming problems. Their model always converges globally to the solutions of the primal and dual problems. In 2002, Xia et al. [8] introduced a recurrent neural network for solving the nonlinear projection formulation. In this paper we present a new nonlinear neural network that has much faster convergence. The new model is based on a nonlinear dynamical system.

2. New neural network model

The new neural network model can be described by the following nonlinear dynamical system:

  dxj/dt = -∂f(x)/∂xj + Σ_{i=1}^m yi ∂gi(x)/∂xj,   j = 1, ..., n,        (8)

  dyi/dt = -gi(x),   yi ≤ 0,   i = 1, ..., m.        (9)

Note that Eqs. (8) and (9) can be written in vector form as follows:

  dx/dt = -∇f(x) + ∇g(x) y,        (10)

  dy/dt = -g(x),   y ≤ 0.        (11)
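For intuition, the system (10) and (11) can be viewed as a saddle-point dynamics for the Lagrangian-type function L(x, y) = f(x) - yᵀg(x): gradient descent in the primal variable x and gradient ascent in the multiplier block, with y kept in the cone y ≤ 0,

  dx/dt = -∇_x L(x, y) = -∇f(x) + ∇g(x) y,   dy/dt = ∇_y L(x, y) = -g(x),

so that, at an equilibrium, x is a stationary point of the Lagrangian while the multipliers encode the constraints; Theorem 1 below makes this precise.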

The main property of the above system is stated in the following theorem.

Theorem 1. If the neural network whose dynamics is described by the nonlinear differential equations (10) and (11) converges to a stable state x(∞) and y(∞), then the state x(∞) converges to the optimal solution of the problem (1).

Proof. Let yi be the ith component of y. Eq. (11) can be rewritten as

  dyi/dt = -gi(x)        (12)


if yi < 0, and

  dyi/dt = min{-gi(x), 0}        (13)

if yi = 0. Let x* and y* be the limits of x and y, respectively, that is,

  lim_{t→∞} x(t) = x*,   lim_{t→∞} y(t) = y*.

By stability of the convergence we have dx/dt = 0 and dy/dt = 0. Eqs. (12) and (13) then become

  -gi(x*) = 0   if yi* < 0,        (14)

  min{-gi(x*), 0} = 0   if yi* = 0.        (15)

In other words, gi(x*) ≤ 0, yi* gi(x*) = 0 and yi* ≤ 0 for i = 1, ..., m, or, in vector form,

  g(x*) ≤ 0,        (16)

  y*ᵀ g(x*) = 0,   y* ≤ 0.        (17)

Similarly, taking the limit of (10) we have

  -∇f(x*) + ∇g(x*) y* = 0.        (18)

From Eq. (16) it is clear that x* is a feasible solution of problem (1). Also, Eqs. (17) and (18) imply that x* satisfies the Kuhn–Tucker conditions. Since f and gj, j = 1, ..., m, are convex functions, x* is an optimal solution of problem (1) (see [6,9]).  □
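In terms of the nonnegative multiplier λ* = -y* ≥ 0, conditions (16)–(18) are precisely the Karush–Kuhn–Tucker conditions for problem (1),

  ∇f(x*) + ∇g(x*) λ* = 0,   λ*ᵀ g(x*) = 0,   λ* ≥ 0,   g(x*) ≤ 0,

which, together with the convexity of f and the gj, certify that x* is a global minimizer [6,9].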

3. Stability analysis

In this section we show that the neural network whose dynamics is described by the nonlinear differential equations (10) and (11) has good stability properties. First we define a suitable Lyapunov function for (10) and (11) as follows:

  E(x, y) = Fᵀ(x, y) F(x, y),


where F = (F1, F2, ..., Fn, Fn+1, ..., Fn+m)ᵀ, and

  Fj = -∂f(x)/∂xj + Σ_{i=1}^m yi ∂gi(x)/∂xj,   j = 1, ..., n,        (19)

  Fi = -g_{i-n}(x),   i = n+1, n+2, ..., n+m.        (20)

We assume (x*, y*) is an isolated equilibrium point of the neural network (10) and (11). It is clear that E(x, y) is positive definite over some neighborhood Ω of (x*, y*). Let f(x) and gj(x), j = 1, ..., m, be twice differentiable. The Jacobian matrix of the neural network (10) and (11) is

  J(x, y) = [ -∇²f(x) + y1∇²g1(x) + ... + ym∇²gm(x)    ∇g(x) ]
            [                -∇g(x)ᵀ                      0   ],

where ∇gᵀ is

  ∇g(x)ᵀ = [∇g1(x)  ∇g2(x)  ...  ∇gm(x)]ᵀ.

Also,

  J(x, y) + J(x, y)ᵀ = [ 2(-∇²f(x) + y1∇²g1(x) + ... + ym∇²gm(x))    0 ]
                       [                      0                       0 ].

Lemma. The matrix J(x, y) + J(x, y)ᵀ over some neighborhood Ω of (x*, y*) is negative definite when f(x) is strictly convex in Rⁿ.

Proof. Let Y = (p, q)ᵀ be an arbitrary nonzero vector, with pᵀ = (p1, p2, ..., pn) and qᵀ = (q1, q2, ..., qm). Since f and gi (i = 1, 2, ..., m) are convex functions and yi ≤ 0 (i = 1, ..., m), we have

  Yᵀ(J + Jᵀ)Y = 2 pᵀ[-∇²f(x) + y1∇²g1(x) + ... + ym∇²gm(x)] p < 0.        (21)

Theorem 2. If (x*, y*) is an isolated equilibrium point of the neural network (10) and (11), then (x*, y*) is asymptotically stable for (10) and (11).

Proof. It can be seen that

  dF/dt = (∂F/∂x)(dx/dt) + (∂F/∂y)(dy/dt) = J(x, y) F(x, y),        (22)


therefore

  dE/dt = (dF/dt)ᵀ F + Fᵀ (dF/dt) = Fᵀ Jᵀ F + Fᵀ J F = Fᵀ (J + Jᵀ) F.        (23)

Since the matrix J(x, y) + J(x, y)ᵀ is negative definite over some neighborhood Ω of (x*, y*), we have dE/dt < 0. Thus (x*, y*) is asymptotically stable for the neural network (10) and (11). The proof is complete.  □

Note: The Euler method is used to solve the nonlinear differential equations (10) and (11). The following Matlab-style pseudocode describes the discrete implementation of our neural network:

  for k = 1:N
      dy = -dt*g(x);                        % Euler step for (11)
      dy = min(y + dy, 0) - y;              % keep the multipliers in y <= 0
      y  = y + dy;
      dx = dt*(-gradf(x) + gradg(x)*y);     % Euler step for (10)
      x  = x + dx;
  end
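For concreteness, the same scheme can be packaged as a small MATLAB function; this is a sketch of an implementation rather than the authors' original code, and the function name nnlp_solve, its argument list, and the fixed iteration count are our own choices. It expects function handles gradf (the n × 1 gradient of f), g (the m × 1 vector of constraint values written as g(x) ≤ 0), and gradg (the n × m matrix whose columns are the constraint gradients).

  function [x, y] = nnlp_solve(gradf, g, gradg, x, y, dt, N)
  % Euler discretization of the network (10) and (11) with the
  % projection y <= 0 (a sketch; save as nnlp_solve.m to use it).
  for k = 1:N
      y = min(y - dt*g(x), 0);              % step (11), then project onto y <= 0
      x = x + dt*(-gradf(x) + gradg(x)*y);  % step (10)
  end
  end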

4. Simulation results

We consider several examples to demonstrate the behavior of our neural network model.

Example 1. Consider the following convex nonlinear programming problem:

  minimize    f(x) = -x1 x2,
  subject to  g1(x) = x1² + x2 ≤ 3.

The new nonlinear neural network for the above problem is

  dy/dt  = 3 - x1² - x2,
  dx1/dt = x2 + 2x1 y,
  dx2/dt = x1 + y.


Fig. 1. Five trajectories with different initial points for Example 1.

This new model converges from any initial point and can take a larger discrete time step (dt) without becoming unstable. The new model converges after about 650 iterations to the optimal solution x* = (1, 2)ᵀ and y* = -1. We use the new nonlinear neural network to solve the above problem with initial points (0,0), (1,1), (1,4), (4,4), (5,2) for x, initial point 0 for y, and dt = 0.05. Fig. 1 shows that the trajectories of the system with five different initial points converge to the optimal solution x* = (1, 2)ᵀ; a discrete-time sketch of this run is given below.
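The following MATLAB snippet shows how Example 1 could be run with the nnlp_solve sketch from Section 3; the function handles and the iteration count are our own illustrative choices, and (1,1) is one of the initial points listed above.

  % Example 1:  minimize -x1*x2  subject to  x1^2 + x2 <= 3
  gradf = @(x) [-x(2); -x(1)];        % gradient of f(x) = -x1*x2
  g     = @(x) x(1)^2 + x(2) - 3;     % constraint rewritten as g(x) <= 0
  gradg = @(x) [2*x(1); 1];           % its gradient (2-by-1, since m = 1)

  x0 = [1; 1];  y0 = 0;  dt = 0.05;  N = 650;
  [x, y] = nnlp_solve(gradf, g, gradg, x0, y0, dt, N);
  % x is expected to approach (1, 2)' and y to approach -1, as reported above.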
Example 2. Consider the following convex nonlinear programming problem:

  minimize    f(x) = -2x1 - x2,
  subject to  g1(x) = x1² + x2² ≤ 1,
              g2(x) = x1 + 2x2 ≤ 2.

The new nonlinear neural network for the above problem is

  dy1/dt = 1 - x1² - x2²,
  dy2/dt = 2 - x1 - 2x2,
  dx1/dt = 2 + 2y1 x1 + y2,
  dx2/dt = 1 + 2y1 x2 + 2y2.


Fig. 2. Six trajectories with different initial points for Example 2.

The new model converges after about 3000 iterations to the optimal solution x* = (0.8944, 0.4472)ᵀ and y* = (-1.1180, 0). We use the new nonlinear neural network to solve the above problem with initial points (1,1), (5,5), (0,5), (5,5), (5,0), (5,0) for x, initial point (0,0) for y, and dt = 0.02. Fig. 2 shows that the trajectories of the system with six different initial points converge to the optimal solution x* = (0.8944, 0.4472)ᵀ.

Example 3. Consider the following convex nonlinear programming problem:

  minimize    f(x) = -x1 - x2,
  subject to  g1(x) = 3x1² + 2x1 x2 + x2² ≤ 1.

The new nonlinear neural network for the above problem is

  dy/dt  = 1 - 3x1² - 2x1 x2 - x2²,
  dx1/dt = 1 + y(6x1 + 2x2),
  dx2/dt = 1 + y(2x1 + 2x2).

The new model for this example converges after about 650 iterations to the optimal solution x* = (0, 1)ᵀ and y* = -0.5. We use the new nonlinear neural network to solve the above problem with initial points (0,0), (1,1), (5,5), (0,5), (5,5), (5,0), (5,0) for x, initial point 0 for y, and dt = 0.02. Fig. 3 shows that the trajectories of the system with seven different initial points converge to the optimal solution x* = (0, 1)ᵀ.


Fig. 3. Seven trajectories with different initial points for Example 3.

Example 4. Consider the following convex nonlinear programming problem:

  minimize    f(x) = x1² + 2x2² - 2x1 x2 - 10x1 - 12x2,
  subject to  g1(x) = x1 + 3x2 ≤ 8,
              g2(x) = x1² + x2² - 2x1 - 2x2 ≤ 3.

The new nonlinear neural network for the above problem is

  dy1/dt = 8 - x1 - 3x2,
  dy2/dt = 3 - x1² - x2² + 2x1 + 2x2,

Fig. 4. Seven trajectories with different initial points for Example 4.


  dx1/dt = 10 - 2x1 + 2x2 + y1 + y2(2x1 - 2),
  dx2/dt = 12 + 2x1 - 4x2 + 3y1 + y2(2x2 - 2).

The new model for this example converges after about 1500 iterations to the optimal solution x* = (1.2168, 0.7072)ᵀ and y* = (0, -3.3497). We use the new nonlinear neural network to solve the above problem with initial points (0,0), (1,1), (5,5), (0,5), (5,5), (5,0), (5,0) for x, initial point (0,0) for y, and dt = 0.005. Fig. 4 shows that the trajectories of the system with seven different initial points converge to the optimal solution x* = (1.2168, 0.7072)ᵀ.

References
[1] J.J. Hopfield, D.W. Tank, Neural computation of decisions in optimization problems, Biol. Cybern. 52 (1985) 141–152.
[2] D.W. Tank, J.J. Hopfield, Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. 33 (1986) 533–541.
[3] M.P. Kennedy, L.O. Chua, Neural networks for nonlinear programming, IEEE Trans. Circuits Syst. 35 (1988) 554–562.
[4] A. Rodriguez-Vazquez, R. Dominguez-Castro, A. Rueda, J.L. Huertas, E. Sanchez-Sinencio, Nonlinear switched-capacitor neural networks for optimization problems, IEEE Trans. Circuits Syst. 37 (1990) 384–397.
[5] Y. Xia, A new neural network for solving linear programming problems and its application, IEEE Trans. Neural Networks 7 (1996) 525–529.
[6] D.G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Reading, MA, 1973.
[7] Y. Wu, Y. Xia, J. Li, W. Chen, A high-performance neural network for solving linear and quadratic programming problems, IEEE Trans. Neural Networks 7 (1996) 643–651.
[8] Y. Xia, J. Wang, A projection neural network and its application to constrained optimization problems, IEEE Trans. Circuits Syst. I 49 (2002) 447–457.
[9] M.S. Bazaraa, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley and Sons, 1979.
