A General Projection Neural Network for Solving Optimization and Related Problems
Youshen Xia and Jun Wang
Department of Automation and Computer-Aided Engineering
The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
Abstract. In this paper, we propose a general projection neural network for solving a wider class of optimization and related problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria on two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.
I. INTRODUCTION
Many engineering problems can be formulated as constrained nonlinear optimization problems and complementarity problems [1]. Real-time solutions of these problems are often needed in engineering systems, such as signal processing, system identification, and robot motion control [2]-[4]. The numbers of decision variables and constraints are usually very large, and large-scale optimization problems are even more challenging when they have to be solved in real time to optimize the performance of dynamical systems. For such applications, conventional numerical methods may not be adequate due to the problem dimensionality and the stringent requirement on computational time.
A promising approach to solving such problems in real time is to employ artificial neural networks based on circuit implementation [5]. As parallel computational models, neural networks possess many desirable properties such as real-time information processing. Therefore, neural networks for optimization, control, and signal processing have received tremendous interest. In the past two decades, the theory, methodology, and applications of neural networks have been widely investigated (see [5]-[18] and references therein). Tank and Hopfield [5] first proposed a neural network for solving linear programming problems that was mapped onto a closed-loop circuit. Kennedy and Chua [7] extended their work and proposed a neural network for solving nonlinear convex programming problems. Having a penalty parameter, the true minimizer can be obtained only when the penalty parameter is infinite. Moreover, their network has both implementation and convergence problems when the penalty parameter is very large. To avoid using penalty pa-

0-7803-7898-9/03/$17.00 ©2003 IEEE
II. MODEL DESCRIPTION

Consider the general projection neural network described by the dynamic equation

du/dt = Λ{P_X(G(u) − F(u)) − G(u)},    (1)

where F(u) and G(u) are continuous mappings from R^n into R^n, Λ is a positive diagonal scaling matrix, and P_X(·) is the projection operator onto the box set X = {u ∈ R^n | l ≤ u ≤ h}, defined componentwise by P_X(u_i) = l_i if u_i < l_i, u_i if l_i ≤ u_i ≤ h_i, and h_i if u_i > h_i.

The dynamic equation described in (1) can be easily realized by a recurrent neural network with a single-layer structure as shown in Fig. 1. The projection operator P_X(·) can be implemented by using a piecewise activation function. It can be seen that the circuit realizing the proposed neural network consists of 2n summers, n integrators, n piecewise linear activation functions, and n processors for G(u) and F(u). Therefore, the network complexity depends only on the mappings G(u) and F(u).

In addition to its low complexity for realization, the general projection neural network in (1) has several advantages. First, it is a significant generalization of some existing neural networks for optimization. For example, let G(u) = u; then the proposed neural network model becomes the projection neural network model [16] given by

du/dt = Λ{P_X(u − F(u)) − u}.    (2)

In the affine case that F(u) = Mu + p, where M ∈ R^{n×n} is a positive semi-definite matrix and p ∈ R^n, the proposed neural network model becomes the primal-dual neural network model [13]

du/dt = Λ{P_X(u − (Mu + p)) − u}.    (3)

Let F(u) = u; then the proposed neural network model becomes

du/dt = Λ{P_X(G(u) − u) − G(u)}.    (4)

In the affine case that G(u) = Wu + q, where W ∈ R^{n×n} is a positive semi-definite matrix and q ∈ R^n, the proposed neural network model becomes the dual neural network model [15]

du/dt = Λ{P_X(Wu + q − u) − (Wu + q)}.    (5)

The general projection neural network in (1) is designed to solve the generalized variational inequality (GVI): find u* ∈ R^n such that G(u*) ∈ X and

(u − G(u*))^T F(u*) ≥ 0, ∀u ∈ X.    (6)

It is easy to see that u* solves the GVI if and only if

P_X(G(u*) − F(u*)) − G(u*) = 0.    (7)

Therefore, the equilibrium point of the general projection neural network in (1) solves the GVI. This property shows that the existence of an equilibrium point of (1) is equivalent to the existence of a solution of the GVI. This implies that there is at least one equilibrium point of (1) when the GVI has a solution. As for the existence of solutions of the GVI, the reader is referred to related papers [19], [20]. It is well known that the GVI has been viewed as a general framework unifying the treatment of many optimization, economic, and engineering problems [22]-[24]. For example, the GVI includes two useful models: the variational inequality and general complementarity problems. The variational inequality problem is to find a u* ∈ X such that

(u − u*)^T F(u*) ≥ 0, ∀u ∈ X.    (8)

The general complementarity problem is to find a u* ∈ R^n such that

G(u*) ≥ 0, F(u*) ≥ 0, G(u*)^T F(u*) = 0.    (9)

Because solving the GVI is equivalent to finding an equilibrium point of (1), the desirable solutions to the GVI can be obtained by tracking the continuous trajectory of (1). Therefore, the proposed neural network in (1) is an attractive alternative as a real-time solver for many optimization and related problems.

III. CONVERGENCE RESULTS

We establish the following results on the general projection neural network.

A. General Case

Theorem 1. Assume that F(u) is G-monotone at an equilibrium point u* of (1):

⟨F(u) − F(u*), G(u) − G(u*)⟩ ≥ 0, ∀u ∈ R^n,

where ⟨·,·⟩ denotes an inner product. If ∇F(u) + ∇G(u) is symmetric and positive semi-definite in R^n, then the general projection neural network in (1) is stable in the Lyapunov sense and globally convergent to an equilibrium point of (1). In particular, the general projection neural network in (1) is globally asymptotically stable if the equilibrium point is unique.
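To make the dynamics in (1) concrete, the following is a minimal simulation sketch (not from the paper): forward-Euler integration of du/dt = Λ{P_X(G(u) − F(u)) − G(u)} over a box set X. The mappings F(u) = u − a and G(u) = u, the vector a, the step size, and the iteration count are all illustrative assumptions; with G(u) = u the network reduces to the projection model (2), and for this choice of F the equilibrium is P_X(a).

```python
import numpy as np

def project_box(v, lo, hi):
    """Piecewise-linear activation implementing P_X for the box X = {u : lo <= u <= hi}."""
    return np.clip(v, lo, hi)

def gpnn_trajectory(F, G, u0, lo, hi, lam=1.0, dt=1e-2, steps=5000):
    """Euler integration of the general projection neural network (1):
        du/dt = lam * (P_X(G(u) - F(u)) - G(u)).
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = u + dt * lam * (project_box(G(u) - F(u), lo, hi) - G(u))
    return u

# Illustrative data: with G(u) = u, network (1) reduces to the projection model (2),
# and for F(u) = u - a the unique equilibrium is P_X(a).
a = np.array([0.3, 1.7])
F = lambda u: u - a
G = lambda u: u
u_star = gpnn_trajectory(F, G, u0=np.zeros(2), lo=0.0, hi=1.0)
```

Here the trajectory converges to P_X(a) = (0.3, 1.0) on the box [0, 1]^2, matching the equilibrium condition (7).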
In the affine case that F(u) = Mu + p and G(u) = Nu + c, the model becomes

du/dt = Λ{P_X((N − M)u + c − p) − Nu − c}.    (10)

Assume further that F(u) is strongly G-monotone at an equilibrium point u* of (1):

⟨F(u) − F(u*), G(u) − G(u*)⟩ ≥ β‖u − u*‖², ∀u ∈ R^n,
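The (strong) G-monotonicity condition above can be spot-checked numerically. The sketch below uses hypothetical affine maps F(u) = Mu + p and G(u) = Wu + q (illustrative data, not from the paper); in the affine case the inner product reduces to d^T M^T W d with d = u − u*, so the condition holds with any β below the smallest eigenvalue of the symmetric part of M^T W.

```python
import numpy as np

# Hypothetical affine maps F(u) = M u + p, G(u) = W u + q (illustrative only).
M = np.array([[2.0, -1.0], [1.0, 1.0]])  # symmetric part diag(2, 1), positive definite
W = np.eye(2)                            # G(u) = u + q, so M^T W = M

rng = np.random.default_rng(0)
beta = 0.9  # any beta below lambda_min(sym(M^T W)) = 1 should work here

# <F(u) - F(u*), G(u) - G(u*)> = (M d) . (W d) for affine maps, d = u - u*.
ok = all(
    (M @ d) @ (W @ d) >= beta * (d @ d)
    for d in (rng.standard_normal(2) for _ in range(1000))
)
```

A random sample is of course no proof, but it is a quick way to catch data that violates the hypothesis of the theorem before running the network.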
IV. ILLUSTRATIVE EXAMPLES

In order to demonstrate the effectiveness and performance of the general projection neural network, in this section we give several illustrative examples.
Example 1. Consider the implicit complementarity problem (ICP) [28]: find u ∈ R^n such that

u − m(u) ≥ 0, F(u) ≥ 0, (u − m(u))^T F(u) = 0,

where m(u) is a given mapping and the problem data involve the n×n tridiagonal matrix with diagonal entries 2 and off-diagonal entries −1, together with nonlinear components of the form −1.5u_i + 0.25u_i². The ICP is the general complementarity problem (9) with G(u) = u − m(u) and X = R^n_+, so the general projection neural network (1) takes the form

du/dt = Λ{(u − m(u) − F(u))⁺ − (u − m(u))}.    (11)
Fig. 3. Transient behavior of the proposed neural network with n = 10.
All simulation results show that the trajectory of (11) is always globally convergent to a solution of the ICP. For example, let Λ = 2I and let the initial point be random. Fig. 2 shows the transient behavior of the complementarity error (u − m(u))^T F(u) based on (11) with n = 10, 30, 60, 90. Fig. 3 shows the transient behavior of the proposed neural network with n = 10.

Example 2. Consider the variational inequality problem (VIP) with nonlinear constraints: find x ∈ R^10 such that

(v − x*)^T f(x*) ≥ 0, ∀v ∈ X,    (12)

where X = {x ∈ R^10 | h(x) ≤ 0, x ≥ 0}. If u = (x, y) and

G(u) = (f(x) + ∇h(x)^T y, −h(x)),

the VIP can be converted into a general nonlinear complementarity problem (GNCP). The general projection neural network in (1) is thus applied to solve the GNCP, and its state equation is

du/dt = Λ{(G(u) − u)⁺ − G(u)},    (13)

where (u)⁺ = [(u₁)⁺, ..., (u₁₈)⁺]^T and (u_i)⁺ = max{0, u_i} for i = 1, ..., 18. All simulation results show that the trajectory of (13) is always globally convergent to u* = (x*, y*). For example, let Λ = 2I and let the initial point be zero. A solution to the GNCP is obtained as

x* = [2.178, 2.368, 8.743, 5.09, 0.991, 1.430, 1.322, 9.823, 8.286, 8.371]^T.
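The convergence behavior reported in Examples 1 and 2 can be reproduced in spirit with a few lines of code. The sketch below is illustrative only, with hypothetical affine data rather than the paper's: it integrates the projection network over X = R^n_+ for a small nonlinear complementarity problem and tracks the complementarity error u^T F(u), which should decay to zero along the trajectory.

```python
import numpy as np

def ncp_network(F, u0, lam=2.0, dt=1e-3, steps=20000):
    """Projection network for the NCP  u >= 0, F(u) >= 0, u^T F(u) = 0
    over X = R^n_+, where P_X(v) = max(v, 0) componentwise:
        du/dt = lam * ((u - F(u))_+ - u).
    Returns the final state and the complementarity error along the trajectory.
    """
    u = np.asarray(u0, float)
    errors = []
    for _ in range(steps):
        u = u + dt * lam * (np.maximum(u - F(u), 0.0) - u)
        errors.append(abs(u @ F(u)))
    return u, errors

# Hypothetical strongly monotone affine data (not the paper's example data).
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, 1.0])
F = lambda u: M @ u + q
u_star, errs = ncp_network(F, np.zeros(2))
```

For this data the NCP solution is u* = (4/3, 0): the first component is interior with F₁(u*) = 0, while the second is at the bound with F₂(u*) > 0, and the error |u^T F(u)| decays to zero along the trajectory.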
Example 3. Consider the nonlinear complementarity problem (NCP): find u ∈ R³ such that u ≥ 0, F(u) ≥ 0, u^T F(u) = 0, where

F(u) = (2u₁ exp(u₁² + (u₂ − 1)²) + u₁ − u₂ + u₃ + 1,
        2(u₂ − 1) exp(u₁² + (u₂ − 1)²) − u₁ + 2u₂ + 2u₃,
        −u₁ + 2u₂ + 3u₃),

X = R³₊, P_X(u) = [(u₁)⁺, (u₂)⁺, (u₃)⁺]^T, and (u_i)⁺ = max{0, u_i} for i = 1, 2, 3. It can be seen that F(u) is strongly monotone on R³₊. We use the neural network in (2) to solve the above NCP. All simulation results show that the corresponding neural network in (2) is always globally exponentially stable at u*. For example, Fig. 5 displays the exponential stability of the trajectory of (2) with 20 random initial points, where Λ = 4I.

Example 4. Consider the general linear-quadratic optimization problem (GLQP) [29]

min_{x≥0} max_{y≥0} f(x, y) = q^T x − p^T y + (1/2)x^T Ax − x^T Hy − (1/2)y^T Qy    (16)

subject to 0 ≤ Bx ≤ b, 0 ≤ Cy ≤ d,

where A, H, Q, B, C, b, d, p, and q are given problem data. The GLQP can be converted into a generalized linear complementarity problem (GLCP): find z* ∈ X such that

(z − Nz*)^T(Mz* + q) ≥ 0, ∀z ∈ X,    (15)

where z = (x, y, s, w), z* = (x*, y*, s*, w*)^T, x ∈ R³, y ∈ R⁴, s and w are slack vectors for the bound constraints, and

M = ( A    −H   B^T  0
      H^T   Q    0   C^T
      ...               ),

where I₁ ∈ R^{3×3} and I₂ ∈ R^{4×4} are unit matrices. The neural network in (1) is applied to solve the above GLCP, and it becomes

dz/dt = Λ(M^T + N^T){P_X(Nz − Mz − q) − Nz}.    (17)

All simulation results show that the trajectory of (17) is always globally convergent to z*. For example, let Λ = 2I and let the initial point be zero. A solution to the GLCP is obtained as

(x*, y*) = (0.499, 1.504, 1.004, 0.012, 0.012, 0, 0).
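For a feel of how such a minimax problem is handled by a projection network, the sketch below applies the projection model (2) to the saddle-point mapping of a small convex-concave quadratic. The data A, H, Q, q, p are hypothetical (the paper's matrices are not reproduced), and the additional bound constraints 0 ≤ Bx ≤ b and 0 ≤ Cy ≤ d are dropped for simplicity, leaving only x ≥ 0, y ≥ 0.

```python
import numpy as np

# Hypothetical convex-concave data for f(x, y) = q^T x - p^T y
#   + (1/2) x^T A x - x^T H y - (1/2) y^T Q y  (illustrative only).
A = np.array([[2.0, 0.0], [0.0, 1.0]])  # positive definite
Q = np.array([[1.0]])                   # positive definite
H = np.array([[1.0], [0.5]])
q = np.array([-1.0, -2.0])
p = np.array([0.5])

def F(u):
    """Saddle-point mapping (grad_x f, -grad_y f); monotone since A, Q are PD."""
    x, y = u[:2], u[2:]
    return np.concatenate([q + A @ x - H @ y, p + H.T @ x + Q @ y])

# Projection network (2) over X = R^3_+:  du/dt = lam * ((u - F(u))_+ - u).
u = np.zeros(3)
dt, lam = 1e-2, 1.0
for _ in range(20000):
    u = u + dt * lam * (np.maximum(u - F(u), 0.0) - u)
x_star, y_star = u[:2], u[2:]
```

For this data the saddle point can be checked by hand: since p + H^T x + Qy > 0 for all feasible points, y* = 0, and then x* = A⁻¹(−q) = (0.5, 2.0), which is what the trajectory converges to.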
V. CONCLUSION
ACKNOWLEDGMENT
This work was supported by the Hong Kong Research Grants Council under Grant CUHK4174/00E.
REFERENCES
[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, New York.
[7] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554-562, 1988.
[8] A. Rodríguez-Vázquez, R. Domínguez-Castro, A. Rueda, J. L. Huertas, and E. Sánchez-Sinencio, "Nonlinear switched-capacitor neural networks for optimization problems," IEEE Transactions on Circuits and Systems, vol. 37, no. 3, pp. 384-397, 1990.
[9] S. Zhang, X. Zhu, and L.-H. Zou, "Second-order neural networks for constrained optimization," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 1021-1024, 1992.
[10] A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293-304, 1993.
[11] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, England, 1993.
[12] Y. Xia, "A new neural network for solving linear programming problems and its applications," IEEE Transactions on Neural Networks, vol. 7, pp. 525-529, 1996.
[13] Y. Xia, "A new neural network for solving linear and quadratic programming problems," IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 1544-1547, 1996.
[14] Y. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks," IEEE Transactions on Neural Networks, vol. 9, pp. 1331-1343, 1998.
[15] Y. Xia and J. Wang, "A dual neural network for kinematic control of redundant robot manipulators," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 31, no. 1, pp. 147-154, 2001.
[16] Y. Xia, H. Leung, and J. Wang, "A projection neural network and its application to constrained optimization problems," IEEE Transactions on Circuits and Systems, Part I, vol. 49, no. 4, pp. 447-458, 2002.
[17] C. Y. Maa and M. A. Shanblatt, "Linear and quadratic programming neural network analysis," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 580-594, 1992.
[18] W. E. Lillo, M. H. Loh, S. Hui, and S. H. Zak, "On solving constrained optimization problems with neural networks: A penalty method approach," IEEE Transactions on Neural Networks, vol. 4, no. 6, pp. 931-939, 1993.
[19] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.
[20] J.-S. Pang and J.-C. Yao, "On a generalization of a normal map and equation," SIAM Journal on Control and Optimization, vol. 33, pp. 168-184, 1995.
[21] M. Fukushima, "Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems," Mathematical Programming, vol. 53, pp. 99-110, 1992.
[22] T. L. Friesz, D. H. Bernstein, N. J. Mehta, R. L. Tobin, and S. Ganjlizadeh, "Day-to-day dynamic network disequilibria and idealized traveler information systems," Operations Research, vol. 42, pp. 1120-1136, 1994.
[23] M. C. Ferris and J. S. Pang, "Engineering and economic applications of complementarity problems," SIAM Review, vol. 39, pp. 669-713, 1997.
[24] L. Vandenberghe, B. L. De Moor, and J. Vandewalle, "The generalized linear complementarity problem applied to the complete analysis of resistive piecewise-linear circuits," IEEE Transactions on Circuits and Systems, vol. 36, no. 11, pp. 1382-1391, 1989.
[25] R. K. Miller and A. N. Michel, Ordinary Differential Equations, Academic Press, New York, 1980.
[26] J. M. Ortega and W. G. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[27] C. Charalambous, "Nonlinear least pth optimization and nonlinear programming," Mathematical Programming, vol. 12, pp. 195-225, 1977.
[28] R. Andreani, A. Friedlander, and S. A. Santos, "On the resolution of the generalized nonlinear complementarity problem," SIAM Journal on Optimization, vol. 12, no. 2, pp. 303-321, 2001.
[29] R. T. Rockafellar and R. J. B. Wets, "Linear-quadratic programming and optimal control," SIAM Journal on Control and Optimization, vol. 25, pp. 781-814, 1987.