
A General Projection Neural Network for Solving Optimization and Related Problems
Youshen Xia and Jun Wang
Department of Automation and Computer-Aided Engineering
The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
Abstract—In this paper, we propose a general projection neural network for solving a wider class of optimization and related problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria for two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.

I. INTRODUCTION

Many engineering problems can be formulated as constrained nonlinear optimization problems and complementarity problems [1]. Real-time solutions of these problems are often needed in engineering systems, such as signal processing, system identification, and robot motion control [2]-[4]. The numbers of decision variables and constraints are usually very large, and large-scale optimization problems are even more challenging when they have to be solved in real time to optimize the performance of dynamical systems. For such applications, conventional numerical methods may not be adequate due to the problem dimensionality and the stringent requirement on computational time.

A promising approach to solving such problems in real time is to employ artificial neural networks based on circuit implementation [5]. As parallel computational models, neural networks possess many desirable properties such as real-time information processing. Therefore, neural networks for optimization, control, and signal processing have received tremendous interest. In the past two decades, the theory, methodology, and applications of neural networks have been widely investigated (see [5]-[18] and the references therein). Tank and Hopfield [5] first proposed a neural network for solving linear programming problems that was mapped onto a closed-loop circuit [6]. Kennedy and Chua [7] extended their work and proposed a neural network for solving nonlinear convex programming problems. Because it relies on a penalty parameter, the true minimizer can be obtained only when the penalty parameter is infinite; moreover, their network has both implementation and convergence problems when the penalty parameter is very large. To avoid using penalty pa-

rameters, some significant work has been done in recent years. Rodríguez-Vázquez et al. [8] proposed a switched-capacitor neural network for solving nonlinear convex programming problems, where the optimal solution is assumed to be inside the feasible set. Zhang et al. [9] proposed a second-order neural network for solving nonlinear convex programming problems with equality constraints. The second-order neural network is complex to implement due to the need for computing time-varying inverse matrices. Bouzerdoum and Pattison [10] presented a neural network for solving quadratic convex optimization problems with bound constraints. Recently, we developed several neural networks: primal-dual neural networks for solving linear and quadratic convex programming problems and monotone linear complementarity problems, a dual neural network for solving strictly convex quadratic programming problems, and a projection neural network for solving a class of nonlinear convex programming problems and monotone nonlinear complementarity problems [12]-[16].
In this paper, based on a generalized equation in [20], we propose a general projection neural network for solving a wider class of optimization and related problems. The proposed neural network is a significant generalization of existing neural networks for optimization, such as the primal-dual neural network, the dual neural network, and the projection neural network. In addition to its low complexity for implementation, the proposed neural network is shown to be stable in the sense of Lyapunov and globally convergent, globally asymptotically stable, or globally exponentially stable, under different mild conditions. Moreover, several improved stability conditions for two special cases of the general projection neural network are obtained under weaker conditions. Illustrative examples demonstrate the performance and effectiveness of the proposed neural network.
This paper is organized as follows. In the next section, the general projection neural network and its advantages are described. In Section III, the global convergence properties of the proposed neural network, including global asymptotic stability and global exponential stability, are studied under some mild conditions. In Section IV, several illustrative examples are presented. Section V gives the conclusions of this paper.


II. MODEL DESCRIPTION

We propose a general projection neural network with its dynamical equation defined as

$$\frac{du}{dt} = \Lambda\{P_X(G(u) - F(u)) - G(u)\}, \qquad (1)$$

where u ∈ R^n is a state vector, Λ = diag(λ_i) is a positive diagonal matrix, F(u) and G(u) are continuously differentiable vector-valued functions from R^n into R^n, X = {u ∈ R^n | l_i ≤ u_i ≤ h_i, i = 1, ..., n}, and P_X : R^n → X is a projection operator defined by P_X(u) = [P_X(u_1), ..., P_X(u_n)]^T with

$$P_X(u_i) = \begin{cases} l_i, & u_i < l_i, \\ u_i, & l_i \le u_i \le h_i, \\ h_i, & u_i > h_i. \end{cases}$$

The dynamic equation described in (1) can be easily realized by a recurrent neural network with a single-layer structure, as shown in Fig. 1. The projection operator P_X(·) can be implemented by a piecewise linear activation function. It can be seen that the circuit realizing the proposed neural network consists of 2n summers, n integrators, n piecewise linear activation functions, and n processors for G(u) and F(u). Therefore, the network complexity depends only on the mappings G(u) and F(u).
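Although (1) is intended for analog circuit realization, its trajectory can also be previewed numerically. The following minimal Python sketch (ours, not from the paper; the forward-Euler discretization, the step size, and names such as `simulate_gpnn` are illustrative assumptions, with Λ taken as λI) simulates (1) over a box set X:

```python
import numpy as np

def project_box(u, l, h):
    """Piecewise-linear projection P_X onto X = {u : l <= u <= h}."""
    return np.clip(u, l, h)

def simulate_gpnn(F, G, l, h, u0, lam=1.0, dt=1e-3, steps=20000):
    """Forward-Euler simulation of du/dt = lam*(P_X(G(u) - F(u)) - G(u)).

    F and G are callables mapping R^n to R^n; lam plays the role of
    the positive diagonal scaling Lambda = lam*I in (1).
    """
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        Gu = G(u)
        u += dt * lam * (project_box(Gu - F(u), l, h) - Gu)
    return u
```

A smaller step size trades simulation speed for a closer approximation of the continuous-time trajectory whose stability is analyzed in Section III.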
In addition to its low complexity for realization, the general projection neural network in (1) has several advantages. First, it is a significant generalization of some existing neural networks for optimization. For example, let G(u) = u; then the proposed neural network model becomes the projection neural network model [16] given by

$$\frac{du}{dt} = \Lambda\{P_X(u - F(u)) - u\}. \qquad (2)$$

In the affine case that F(u) = Mu + p, where M ∈ R^{n×n} is a positive semi-definite matrix and p ∈ R^n, the proposed neural network model becomes the primal-dual neural network model [13]:

$$\frac{du}{dt} = \Lambda\{P_X(u - (Mu + p)) - u\}. \qquad (3)$$

Let F(u) = u; then the proposed neural network model becomes

$$\frac{du}{dt} = \Lambda\{P_X(G(u) - u) - G(u)\}. \qquad (4)$$

In the affine case that G(u) = Wu + q, where W ∈ R^{n×n} is a positive semi-definite matrix and q ∈ R^n, the proposed neural network model becomes the dual neural network model [15]:

$$\frac{du}{dt} = \Lambda\{P_X(Wu + q - u) - (Wu + q)\}. \qquad (5)$$
Next, the general projection neural network in (1) is useful for solving optimization and related problems. This is because it is intimately related to the following general variational inequality (GVI) [20]: find u* ∈ R^n such that G(u*) ∈ X and

$$(u - G(u^*))^T F(u^*) \ge 0, \quad \forall u \in X. \qquad (6)$$

From [20] it can be seen that solving the GVI is equivalent to finding a zero of the generalized equation

$$P_X(G(u) - F(u)) - G(u) = 0. \qquad (7)$$

Therefore, an equilibrium point of the general projection neural network in (1) solves the GVI. This property shows that the existence of an equilibrium point of (1) is equivalent to the existence of a solution of the GVI; in particular, (1) has at least one equilibrium point whenever the GVI has a solution. As for the existence of solutions of the GVI, the reader is referred to the related papers [19], [20]. It is well known that the GVI has been viewed as a general framework unifying the treatment of many optimization, economic, and engineering problems [22]-[24]. For example, the GVI includes two useful models: variational inequalities and general complementarity problems. The variational inequality problem is to find u* ∈ X such that

$$(u - u^*)^T F(u^*) \ge 0, \quad \forall u \in X. \qquad (8)$$

The general complementarity problem is to find u* ∈ R^n such that

$$G(u^*) \ge 0, \quad F(u^*) \ge 0, \quad G(u^*)^T F(u^*) = 0. \qquad (9)$$

Because solving the GVI is equivalent to finding an equilibrium point of (1), the desired solutions of the GVI can be obtained by tracking the continuous trajectory of (1). Therefore, the proposed neural network in (1) is an attractive alternative as a real-time solver for many optimization and related problems.

III. CONVERGENCE RESULTS

We establish the following results on the general projection neural network.

A. General Case

Theorem 1. Assume that F(u) is G-monotone at an equilibrium point u* of (1):

$$(F(u) - F(u^*), G(u) - G(u^*)) \ge 0, \quad \forall u \in R^n,$$

where (·, ·) denotes an inner product. If ∇F(u) + ∇G(u) is symmetric and positive semi-definite in R^n, then the general projection neural network in (1) is stable in the Lyapunov sense and is globally convergent to an equilibrium point of (1). Specifically, the general projection neural network in (1) is globally asymptotically stable if the equilibrium point is unique.
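Equation (7) also provides a practical optimality test: the right-hand side of (1) vanishes exactly at the solutions of the GVI. A small sketch (ours; it reuses the box projection and NumPy conventions of the earlier sketch) uses the norm of this residual as a stopping or verification criterion:

```python
import numpy as np

def gvi_residual(u, F, G, l, h):
    """Residual of the generalized equation (7): P_X(G(u) - F(u)) - G(u).

    By the equivalence cited from [20], the residual is zero if and
    only if u is an equilibrium of (1), i.e., a solution of the GVI (6).
    """
    Gu = G(u)
    return np.clip(Gu - F(u), l, h) - Gu

# Example use: accept u once the residual norm falls below a tolerance.
# converged = np.linalg.norm(gvi_residual(u, F, G, l, h)) < 1e-6
```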


Fig. 1. A block diagram of the general projection neural network in (1).

Theorem 2. Assume that F(u) is G-strongly monotone at u*:

$$(F(u) - F(u^*), G(u) - G(u^*)) \ge \beta \|u - u^*\|^2, \quad \forall u \in R^n,$$

where ‖·‖ denotes the l2-norm on R^n and β > 0 is a constant. If ∇F(u) + ∇G(u) is symmetric and positive semi-definite, and has an upper bound in R^n, then the general projection neural network in (1) is globally exponentially stable at u*.
As an immediate corollary of Theorems 1 and 2, we have the following results.

Corollary 1. Assume that F(u) = u. If ∇G(u) is symmetric and positive semi-definite, then the neural network in (4) is stable in the Lyapunov sense and is globally convergent to an equilibrium point of (4). If ∇G(u) is symmetric and uniformly positive definite and has an upper bound in R^n, then the neural network in (4) is globally exponentially stable.

In the special case that G(u) = Wu + q, where W ∈ R^{n×n} is symmetric and positive semi-definite and q ∈ R^n, the result of Corollary 1 is presented in [15].

B. Two Special Cases

As for the projection neural network in (2), we have the two further results below.
Theorem 3. Assume that ∇F(u) is positive semi-definite in R^n. If

$$F(u)^T(u - u^*) + (P_X(u - F(u)) - u)^T \nabla F(u)(P_X(u - F(u)) - u) = 0$$

implies that u is a solution to (8), then the projection neural network in (2) is stable in the sense of Lyapunov and globally convergent to an equilibrium point of (2).

Theorem 4. If ∇F(u) is uniformly positive definite in R^n, then the projection neural network in (2) is globally exponentially stable.

In the affine case that F(u) = Mu + p and G(u) = Nu + c, where M, N ∈ R^{n×n} and p, c ∈ R^n, the neural network model becomes

$$\frac{du}{dt} = \Lambda\{P_X((N - M)u + c - p) - Nu - c\}. \qquad (10)$$

As for the convergence of (10), we have the following results.

Theorem 5. Assume that Λ = N^T + M^T. The neural network in (10) is globally convergent to an equilibrium point of (10) when M^T N is positive semi-definite, and is globally exponentially stable when M^T N is positive definite.

As an immediate corollary of Theorem 5, we have the following result, which is presented in [13].

Corollary 2. Assume that N = I and Λ = I + M^T. The neural network in (10) is globally convergent to an equilibrium point of (10) when M is positive semi-definite, and is globally exponentially stable when M is positive definite.
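Theorem 5 is easy to probe numerically. The sketch below (ours; the randomly generated test matrices, the step size, and the choice X = [0, 10]^n are illustrative assumptions) simulates (10) with the scaling Λ = N^T + M^T for a pair (M, N) with M^T N positive definite; the printed residual norms should decay toward zero at an exponential rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
M = B @ B.T / n + np.eye(n)        # symmetric positive definite
N = np.eye(n)                      # so M^T N = M is positive definite
p, c = rng.standard_normal(n), np.zeros(n)
l, h = np.zeros(n), np.full(n, 10.0)

Lam = N.T + M.T                    # scaling assumed in Theorem 5
u = rng.standard_normal(n)
dt = 1e-3
for k in range(30001):
    r = np.clip((N - M) @ u + c - p, l, h) - (N @ u + c)
    if k % 10000 == 0:
        print(k, np.linalg.norm(r))  # residual of the affine equation
    u += dt * (Lam @ r)              # Euler step of network (10)
```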

IV. ILLUSTRATIVE EXAMPLES

In order to demonstrate the effectiveness and performance of the general projection neural network, in this section we give several illustrative examples.

Example 1. Consider the implicit complementarity problem (ICP) [28]: find u ∈ R^n such that

$$u - \Phi(u) \ge 0, \quad F(u) \ge 0, \quad (u - \Phi(u))^T F(u) = 0,$$

where F(u) = Mu + q with q = 0,

$$M = \begin{bmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & -1 & 2 & -1 \\ 0 & \cdots & 0 & -1 & 2 \end{bmatrix}, \qquad \Phi(u) = \begin{bmatrix} -1.5u_1 + 0.25u_1^2 \\ -1.5u_2 + 0.25u_2^2 \\ \vdots \\ -1.5u_n + 0.25u_n^2 \end{bmatrix}.$$

It can be seen that the ICP can be viewed as the GNCP (9) with G(u) = u - Φ(u) and F(u) = Mu + q. The general projection neural network in (1) is thus applied to solve the above ICP, and its state equation is given by

$$\frac{du}{dt} = \Lambda\{(u - \Phi(u) - F(u))^+ - (u - \Phi(u))\}, \qquad (11)$$

where (u)^+ = [(u_1)^+, ..., (u_n)^+]^T and (u_i)^+ = max{0, u_i} for i = 1, ..., n.


All simulation results show that the trajectory of (11) is always globally convergent to a solution of the ICP. For example, let Λ = 2I and let the initial point be random. Fig. 2 shows the transient behavior of the complementarity error (u - Φ(u))^T F(u) based on (11) with n = 10, 30, 60, 90. Fig. 3 shows the transient behavior of the proposed neural network with n = 10.

Fig. 2. The complementarity error based on the proposed neural network in (11) for solving the ICP in Example 1.

Fig. 3. Global convergence of the proposed neural network in (11) for solving the ICP in Example 1.

Fig. 4. Global convergence of the proposed neural network in (13) for solving the GNCP in Example 2.
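The flavor of this experiment can be reproduced compactly. In the sketch below (ours; the tridiagonal M, the quadratic Φ, q = 0, and the Euler discretization follow the reconstruction above and should be read as assumptions), network (11) is run from a random initial point and the final complementarity error is printed:

```python
import numpy as np

n = 10
M = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal (-1, 2, -1)
phi = lambda u: -1.5*u + 0.25*u**2
F = lambda u: M @ u                                   # q = 0 assumed
G = lambda u: u - phi(u)

lam, dt = 2.0, 1e-3                                   # Lambda = 2I, as above
u = np.random.default_rng(1).uniform(0.0, 1.0, n)     # random initial point
for _ in range(50000):
    Gu = G(u)
    u += dt*lam*(np.maximum(Gu - F(u), 0.0) - Gu)     # network (11), X = R^n_+
print("complementarity error:", G(u) @ F(u))
```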

Example 2. Consider the variational inequality problem (VIP) with nonlinear constraints: find x* ∈ X such that

$$(x - x^*)^T f(x^*) \ge 0, \quad \forall x \in X, \qquad (12)$$

where X = {x ∈ R^{10} | h(x) ≤ 0, x ≥ 0} and

$$h(x) = \begin{bmatrix} 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \\ 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \\ (x_1 - 8)^2/2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \\ x_1^2 + 2(x_2 - 2)^2 - 2x_1 x_2 + 14x_5 - 6x_6 \\ 4x_1 + 5x_2 - 3x_7 + 9x_8 - 105 \\ 10x_1 - 8x_2 - 17x_7 + 2x_8 \\ 12(x_9 - 8)^2 - 3x_1 + 6x_2 - 7x_{10} \\ -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \end{bmatrix}.$$

This problem has an optimal solution given in [27]:

$$x^* = [2.172, 2.364, 8.774, 5.096, 0.991, 1.431, 1.321, 9.829, 8.280, 8.376]^T.$$

By the Kuhn-Tucker conditions [1], we see that x* solves the VIP if and only if there exists y* ∈ R^8 such that u* = (x*, y*) ∈ R^{18} solves the GNCP with F(u) = u and

$$G(u) = \begin{bmatrix} f(x) + \nabla h(x)^T y \\ -h(x) \end{bmatrix}.$$

The general projection neural network in (1) is thus applied to solve the GNCP, and its state equation is

$$\frac{du}{dt} = \Lambda\{(G(u) - u)^+ - G(u)\}, \qquad (13)$$

where (u)^+ = [(u_1)^+, ..., (u_{18})^+]^T and (u_i)^+ = max{0, u_i} for i = 1, ..., 18. All simulation results show that the trajectory of (13) is always globally convergent to u* = (x*, y*). For example, let Λ = 2I and let the initial point be zero. A solution to the GNCP is obtained as

$$x^* = [2.178, 2.368, 8.743, 5.09, 0.991, 1.430, 1.322, 9.823, 8.286, 8.371]^T$$

at time t = 6. Fig. 4 shows the transient behavior of x(t) with a random initial point.
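The Kuhn-Tucker-to-GNCP construction is mechanical and can be coded once for any smooth f and h. A sketch (ours; `f`, `h`, and the Jacobian `Jh` are placeholders for the problem data rather than the paper's exact expressions):

```python
import numpy as np

def make_G(f, h, Jh, nx):
    """G(u) for the GNCP reformulation, with u = (x, y) and F(u) = u:
    G(u) = ( f(x) + Jh(x)^T y , -h(x) ), per the Kuhn-Tucker conditions."""
    def G(u):
        x, y = u[:nx], u[nx:]
        return np.concatenate([f(x) + Jh(x).T @ y, -h(x)])
    return G

def euler_step_13(u, G, lam=2.0, dt=1e-3):
    """One forward-Euler step of network (13), where X = R^18_+."""
    Gu = G(u)
    return u + dt * lam * (np.maximum(Gu - u, 0.0) - Gu)
```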
Example 3. Consider the nonlinear complementarity problem (NCP): find u* ∈ R^3 such that

$$u^* \ge 0, \quad F(u^*) \ge 0, \quad (u^*)^T F(u^*) = 0, \qquad (14)$$

where

$$F(u) = \begin{bmatrix} 2u_1 e^{u_1^2 + (u_2 - 1)^2} + u_1 - u_2 + u_3 + 1 \\ 2(u_2 - 1)e^{u_1^2 + (u_2 - 1)^2} - u_1 + 2u_2 + 2u_3 \\ -u_1 + 2u_2 + 3u_3 \end{bmatrix}.$$

This problem has only one solution u* = [0, 0.167, 0]^T, at which F(u*) = [0.833, 0, 0.334]^T. According to [19], u* is a solution to the above NCP if and only if u* satisfies the equation

$$P_X(u - F(u)) = u, \qquad (15)$$

where X = R^3_+, P_X(u) = [(u_1)^+, (u_2)^+, (u_3)^+]^T, and (u_i)^+ = max{0, u_i} for i = 1, 2, 3. It can be seen that F(u) is strongly monotone on R^3_+. We use the neural network in (2) to solve the above NCP. All simulation results show that the corresponding neural network in (2) is always globally exponentially stable at u*. For example, Fig. 5 displays the exponential stability of the trajectory of (2) with 20 random initial points, where Λ = 4I.

Fig. 5. Global exponential stability of the projection neural network in (2) for solving the NCP in Example 3.
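Example 3 is small enough to check directly. In the sketch below (ours; the transcription of F above is a best-effort reading of a badly printed formula, so this illustrates the mechanics of running network (2) rather than a certified reproduction of Fig. 5), a few random initial points are used and the projection residual of (15) is reported:

```python
import numpy as np

def F(u):
    """Mapping of Example 3 as transcribed above (an assumption)."""
    e = np.exp(u[0]**2 + (u[1] - 1.0)**2)
    return np.array([
        2*u[0]*e + u[0] - u[1] + u[2] + 1.0,
        2*(u[1] - 1.0)*e - u[0] + 2*u[1] + 2*u[2],
        -u[0] + 2*u[1] + 3*u[2],
    ])

lam, dt = 4.0, 1e-4                      # Lambda = 4I, as in the reported run
rng = np.random.default_rng(3)
for trial in range(3):                   # a few random initial points
    u = rng.uniform(0.0, 1.0, 3)
    for _ in range(100000):
        u += dt*lam*(np.maximum(u - F(u), 0.0) - u)   # network (2), X = R^3_+
    res = np.linalg.norm(np.maximum(u - F(u), 0.0) - u)
    print(trial, np.round(u, 3), "residual of (15):", res)
```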

Example 4. Consider the general linear-quadratic optimization problem (GLQP) [29]:

$$\min_{x \ge 0} \max_{y \ge 0} \; f(x, y) = q^T x - p^T y + \frac{1}{2}x^T A x - x^T H y - \frac{1}{2}y^T Q y \qquad (16)$$

subject to 0 ≤ Bx ≤ b and 0 ≤ Cy ≤ d, where x ∈ R^3, y ∈ R^4, and A, H, Q, B, C, q, p, b, d are given problem data. This problem has an optimal solution (x*, y*) = (0.5, 1.5, 1, 0, 0, 0). According to the well-known saddle point theorem [1], the above GLQP can be converted into a general linear variational inequality (GLVI): find z* ∈ X such that

$$(z - Nz^*)^T(Mz^* + \ell) \ge 0, \quad \forall z \in X,$$

where z = (x, y, s, w), ℓ is a constant vector determined by q and p, X = {(x, y, s, w) | x ≥ 0, y ≥ 0, 0 ≤ s ≤ b, 0 ≤ w ≤ d}, and

$$M = \begin{bmatrix} A & -H & B^T & 0 \\ H^T & Q & 0 & C^T \\ -B & 0 & 0 & 0 \\ 0 & -C & 0 & 0 \end{bmatrix}, \qquad N = \begin{bmatrix} I_1 & 0 & 0 & 0 \\ 0 & I_2 & 0 & 0 \\ B & 0 & 0 & 0 \\ 0 & C & 0 & 0 \end{bmatrix},$$

where I_1 ∈ R^{3×3} and I_2 ∈ R^{4×4} are identity matrices. The neural network in (1) is applied to solve the above GLVI, and it becomes

$$\frac{dz}{dt} = \lambda(M^T + N^T)\{P_X((N - M)z - \ell) - Nz\}. \qquad (17)$$

All simulation results show that the trajectory of (17) is always globally convergent to z*. For example, let λ = 2 and let the initial point be zero. A solution to the GLVI is obtained as

$$(x^*, y^*) = (0.499, 1.504, 1.004, 0.012, 0.012, 0, 0)$$

at time t = 5. Fig. 6 shows the transient behavior of (x(t), y(t)) with a random initial point.
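The block structure of M and N is easy to get wrong by hand. The helper below (ours; the first two block rows follow the displayed formula, while the last two rows are our reconstruction of a poorly printed original and should be double-checked) assembles both matrices for arbitrary conforming data:

```python
import numpy as np

def build_glvi_blocks(A, H, Q, B, C):
    """Assemble M and N of the GLVI reformulation of the GLQP (16),
    for z = (x, y, s, w), where s and w are the auxiliary variables
    attached to the constraints 0 <= Bx <= b and 0 <= Cy <= d."""
    nx, ny = A.shape[0], Q.shape[0]
    ms, mw = B.shape[0], C.shape[0]
    Z = lambda r, c: np.zeros((r, c))
    M = np.block([
        [A,         -H,          B.T,        Z(nx, mw)],
        [H.T,        Q,          Z(ny, ms),  C.T      ],
        [-B,         Z(ms, ny),  Z(ms, ms),  Z(ms, mw)],
        [Z(mw, nx), -C,          Z(mw, ms),  Z(mw, mw)],
    ])
    N = np.block([
        [np.eye(nx), Z(nx, ny),  Z(nx, ms),  Z(nx, mw)],
        [Z(ny, nx),  np.eye(ny), Z(ny, ms),  Z(ny, mw)],
        [B,          Z(ms, ny),  Z(ms, ms),  Z(ms, mw)],
        [Z(mw, nx),  C,          Z(mw, ms),  Z(mw, mw)],
    ])
    return M, N
```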

V. CONCLUSION

Optimization has found wide applications in various areas such as control and signal processing. In this paper, we have proposed a general projection neural network, which has a simple structure and low complexity for implementation.



Fig. 6. Global convergence of (x(t), y(t)) based on the neural network in (17) in Example 4.

The general projection neural network includes existing neural networks for optimization, such as the primal-dual neural network, the dual neural network, and the projection neural network, as special cases. Moreover, its equilibrium points are able to solve a wide variety of optimization and related problems. Under mild conditions, we have shown that the general projection neural network has global convergence, global asymptotic stability, and global exponential stability, respectively. Since the general projection neural network contains the existing neural networks in (2)-(5) as special cases, the obtained stability results naturally generalize the existing ones for such neural networks. Furthermore, we have obtained several improved stability results on two special cases of the general projection neural network under weaker conditions. The obtained results are helpful for wide applications of the general projection neural network. Illustrative examples with applications to optimization and related problems show that the proposed neural network is effective in solving these problems. Further investigations will be aimed at the improvement of the stability conditions and at engineering applications of the general projection neural network to robot motion control, signal processing, etc.

ACKNOWLEDGMENT
This work was supported by the Hong Kong Research Grants Council under Grant CUHK4174/00E.

REFERENCES
[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms (2nd Ed.), John Wiley, New York, 1993.
[2] T. Yoshikawa, Foundations of Robotics: Analysis and Control, MIT Press, Cambridge, MA, 1990.
[3] N. Kalouptsidis, Signal Processing Systems: Theory and Design, Wiley, New York, 1997.
[4] B. Kosko, Neural Networks for Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1992.
[5] J. J. Hopfield and D. W. Tank, Neural computation of decisions in optimization problems, Biological Cybernetics, vol. 52, no. 3, pp. 141-152, 1985.
[6] D. W. Tank and J. J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit, IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533-541, 1986.
[7] M. P. Kennedy and L. O. Chua, Neural networks for nonlinear programming, IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554-562, 1988.
[8] A. Rodríguez-Vázquez, R. Domínguez-Castro, A. Rueda, J. L. Huertas, and E. Sánchez-Sinencio, Nonlinear switched-capacitor neural networks for optimization problems, IEEE Transactions on Circuits and Systems, vol. 37, no. 3, pp. 384-397, 1990.
[9] S. Zhang, X. Zhu, and L.-H. Zou, Second-order neural networks for constrained optimization, IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 1021-1024, 1992.
[10] A. Bouzerdoum and T. R. Pattison, Neural network for quadratic optimization with bound constraints, IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293-304, 1993.
[11] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, England, 1993.
[12] Y. Xia, A new neural network for solving linear programming problems and its applications, IEEE Transactions on Neural Networks, vol. 7, pp. 525-529, 1996.
[13] Y. Xia, A new neural network for solving linear and quadratic programming problems, IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 1544-1547, 1996.
[14] Y. Xia and J. Wang, A general methodology for designing globally convergent optimization neural networks, IEEE Transactions on Neural Networks, vol. 9, pp. 1331-1343, 1998.
[15] Y. Xia and J. Wang, A dual neural network for kinematic control of redundant robot manipulators, IEEE Transactions on Systems, Man, and Cybernetics - Part B, vol. 31, no. 1, pp. 147-154, 2001.
[16] Y. Xia, H. Leung, and J. Wang, A projection neural network and its application to constrained optimization problems, IEEE Transactions on Circuits and Systems - Part I, vol. 49, no. 4, pp. 447-458, 2002.
[17] C. Y. Maa and M. A. Shanblatt, Linear and quadratic programming neural network analysis, IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 580-594, 1992.
[18] W. E. Lillo, M. H. Loh, S. Hui, and S. H. Zak, On solving constrained optimization problems with neural networks: a penalty method approach, IEEE Transactions on Neural Networks, vol. 4, no. 6, pp. 931-939, 1993.
[19] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.
[20] J.-S. Pang and J.-C. Yao, On a generalization of a normal map and equation, SIAM Journal on Control and Optimization, vol. 33, pp. 168-184, 1995.
[21] M. Fukushima, Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems, Mathematical Programming, vol. 53, pp. 99-110, 1992.
[22] T. L. Friesz, D. H. Bernstein, N. J. Mehta, R. L. Tobin, and S. Ganjlizadeh, Day-to-day dynamic network disequilibria and idealized traveler information systems, Operations Research, vol. 42, pp. 1120-1136, 1994.
[23] M. C. Ferris and J. S. Pang, Engineering and economic applications of complementarity problems, SIAM Review, vol. 39, pp. 669-713, 1997.
[24] L. Vandenberghe, B. L. De Moor, and J. Vandewalle, The generalized linear complementarity problem applied to the complete analysis of resistive piecewise-linear circuits, IEEE Transactions on Circuits and Systems, vol. 36, no. 11, pp. 1382-1391, 1989.
[25] R. K. Miller and A. N. Michel, Ordinary Differential Equations, Academic Press, New York, 1980.
[26] J. M. Ortega and W. G. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[27] C. Charalambous, Nonlinear least pth optimization and nonlinear programming, Mathematical Programming, vol. 12, pp. 195-225, 1977.
[28] R. Andreani, A. Friedlander, and S. A. Santos, On the resolution of the generalized nonlinear complementarity problem, SIAM Journal on Optimization, vol. 12, no. 2, pp. 303-321, 2001.
[29] R. T. Rockafellar and R. J.-B. Wets, Linear-quadratic programming and optimal control, SIAM Journal on Control and Optimization, vol. 25, pp. 781-814, 1987.
