
Proceedings of the International Congress of Mathematicians

August 16-24, 1983, Warszawa


H. W. KNOBLOCH
Nonlinear Systems: Local Controllability and Higher
Order Necessary Conditions for Optimal Solutions
1. Introduction
We consider control systems which are defined in terms of an ordinary differential equation
$$\dot{x} = f(t; x, u). \qquad (1.1)$$
Here $u$ is the control variable and may be subject to a constraint of the form $u \in U$. We allow specialization of $u$ to an admissible control function $u(\cdot)$, that is, a function which is piecewise of class $C^\infty$ on $\mathbb{R}$ and has a range whose closure is contained in $U$. The function $f$ on the right-hand side of (1.1) is assumed to be sufficiently smooth. Hence, if an admissible control function is substituted for $u$ in (1.1), we obtain a differential equation which allows application of all standard results concerning the existence, uniqueness and continuous dependence of solutions (see e.g. [1], Sections 2-4). Any one of these solutions will be denoted by $x(\cdot)$ and called an admissible trajectory. We also refer to the pair $(u(\cdot), x(\cdot))$ as a solution of (1.1). If we speak of an optimal solution, we mean a solution which minimizes the functional within the class of all admissible trajectories satisfying boundary conditions of the usual type. It is always tacitly assumed that the value of the functional can be identified with the terminal value of a component of the state vector.
We are concerned in this lecture with two types of problems which can be studied independently of each other. However, it is clear from the beginning that one can expect some kind of duality between statements pertaining to each of these problems. Among other things we will undertake in this lecture an attempt to put this duality into more concrete form.
Problems of the first type deal with necessary conditions which have
Problems of the first type deal with necessary conditions which have
to hold along singular arcs. A singular arc is a portion of an optimal solution which is such that the control variable is specialized to interior values of the control set $U$. We restrict our attention to conditions which have to hold pointwise along a singular arc and which assume the form of a multiplier rule, i.e., a rule which can be expressed as an inequality of the form $y(t)^T a(t) \leq 0$, where $y(\cdot)$ is the usual adjoint state vector.
Problems of the second type carry the label "local controllability". The precise definition goes as follows. Let there be given, for all $t$ in some interval $[t_0, \bar t]$, an arbitrary solution $u(\cdot), x(\cdot)$ of (1.1) (called "reference solution" from now on). Local controllability along this solution and for $t = \bar t$ means: there exists, for every sufficiently small $\varepsilon > 0$, a full neighborhood of $x(\bar t)$ which can be reached at time $t = \bar t$ by travelling along admissible trajectories starting at time $t = \bar t - \varepsilon$ from $x(\bar t - \varepsilon)$. In other words: $x(\bar t)$ is an interior point of the set of all states to which the system can be steered from $x(\bar t - \varepsilon)$ within time $\varepsilon$. We remark that our notion of local controllability coincides with Sussmann's "small time local controllability" (cf. [2], Sec. 2.3), if the system equation is autonomous and the reference solution stationary.
It is somehow clear from the above definitions that problems of both types are concerned with local properties of solutions and that these properties, in a certain sense, exclude each other. If a solution is optimal (in the sense explained above), then the set of all states into which the system can be steered from $x(\bar t - \varepsilon)$ within time $\varepsilon$ is situated on one side of a certain hyperplane through the terminal point $x(\bar t)$, and we have no local controllability for $t = \bar t$. Thus one can expect some kind of correspondence between results concerning singular extremals and those concerning local controllability which, roughly speaking, amounts to "reversing" conclusions in a suitable way. We will demonstrate in Section 2 how this kind of reasoning can be put on more solid ground by presenting two theorems, one giving necessary conditions for singular arcs and the other giving sufficient conditions for local controllability, in which all statements are expressed in terms of one and the same object, namely the local cone of attainability. This is a set of elements of the state space which is associated with each point of the reference solution. Since we may think of a solution as a curve parametrized by the time $t$, we denote this set by $\mathscr{K}_t$. The precise definition is given in Section 2; it will turn out to be a modification of the definition of the set $\Pi_t$ which was introduced in [1], Section 9. In fact, $\mathscr{K}_t$ is a subset of $\Pi_t$. The reason that we dispense here with some elements of $\Pi_t$ is the gain in mathematical structure. $\mathscr{K}_t$ enjoys certain properties which cannot be inferred from the definition of $\Pi_t$: it is a convex cone and its maximal linear subspace is invariant under a certain operator $\Gamma$ (Theorem 2.1). The operator $\Gamma$ will be described in detail in Section 2. In contrast to the philosophy adopted in [1] we prefer here a definition which depends on the choice of a special reference solution, since it helps to bring out the system theoretic aspect of this operator. One can look upon $\Gamma$ from the viewpoint of linear systems theory; it then appears as a generalization of the process which leads to the construction of the controllability matrix. Indeed, if the system equation is linear and is given as
$$\dot x = A(t)x + B(t)u, \qquad (1.2)$$
then the columns of this matrix can be generated from the columns of $B(t)$ by repeated application of $\Gamma$. One can also look on it from the differential geometric viewpoint. If the system equation is autonomous and the reference solution stationary, then the simplest way to explain the application of $\Gamma$ is in terms of a Lie bracket involving $f$ ( = the function which appears on the right-hand side of (1.1)). This, by the way, explains why forming the Lie bracket with $f$ is a nonlinear analogue of the linear mapping which is defined in terms of the matrix $A(t)$ of the linear system (1.2).
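To illustrate this remark computationally, here is a minimal sympy sketch (not part of the original text; the matrices $A(t)$ and $B(t)$ below are arbitrary illustrative choices) which generates the columns of the controllability matrix of (1.2) by iterating the map $p \mapsto -\dot p + A(t)p$, the form which $\Gamma$ takes in the linear case (cf. Section 2).

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1], [0, t]])   # hypothetical A(t), chosen only for illustration
B = sp.Matrix([[0], [1]])         # hypothetical B(t)

def gamma_linear(p):
    """One application of the map p -> -dp/dt + A(t) p (the linear form of Gamma)."""
    return -p.diff(t) + A * p

cols = [B]
for _ in range(A.shape[0] - 1):   # n - 1 further iterates suffice for an n-dimensional state
    cols.append(gamma_linear(cols[-1]))

K = sp.Matrix.hstack(*cols)       # columns B, Gamma(B), ... span the controllable subspace
print(K, K.rank())
```

For the data chosen here the two columns already span $\mathbb{R}^2$, i.e., the linear system is controllable.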
Regardless of which view one prefers, what counts for our purposes are the following two facts. (i) We can define $\Gamma$ without any restrictive assumptions, such as linearity of the equation or time independence of the reference solution. (ii) One can use $\Gamma$ in order to generate new elements of $\mathscr{K}_t$ out of given ones: from the previously mentioned invariance property of $\mathscr{K}_t$ one infers that the following statement holds true:
$$\pm p \in \mathscr{K}_t \ \text{ implies } \ \pm\Gamma^{\mu}(p) \in \mathscr{K}_t, \quad \mu = 1, 2, \ldots \qquad (1.3)$$
To get an impression of the scope of this result it might be helpful to consider a special case. Let us assume that the system is linear in $u$, and hence defined by a differential equation of the form
$$\dot x = f_0(t; x) + \sum_{\nu=1}^{m} u^{\nu} g_{\nu}(t; x), \qquad u = (u^1, \ldots, u^m). \qquad (1.4)$$
Furthermore, let us assume that the reference control satisfies the condition $u(t) \in \operatorname{int} U$ for all $t \in [t_0, \bar t]$. Using standard variational techniques it is then not difficult to see that $\pm g_\nu(t; x(t)) \in \mathscr{K}_t$ for all $t \in [t_0, \bar t]$. Hence, it follows from (1.3) that the linear space spanned by the $\Gamma^{\mu} g_{\nu}(t; x(t))$, $\mu = 0, 1, \ldots$, $\nu = 1, \ldots, m$, is contained in $\mathscr{K}_t$. This space was introduced in [1] and denoted there by $\mathfrak{B}(t)$ (cf. Section 1, in particular p. 5); it can be identified with the column space of the controllability matrix for the linearized system (i.e., for the linear system (1.2) with $A(t) := (\partial f/\partial x)(t; x(t), u(t))$ and $B(t) := (\partial f/\partial u)(t; x(t), u(t))$). It is therefore not surprising to rediscover $\mathfrak{B}(t)$ as a part of $\mathscr{K}_t$ and to establish its invariance with respect to the operator $\Gamma$; this could also be verified by standard arguments.
The importance of the statement (1.3) rests upon the fact that it allows us to extend the space $\mathfrak{B}(t)$ by adjoining further elements $p$ without losing its two basic properties, namely, that of being a subspace of $\mathscr{K}_t$ and that of being invariant with respect to $\Gamma$. In other words: one can add to the generators $g_\nu$ of the space $\mathfrak{B}(t)$ all elements $p \in \mathscr{K}_t$ which satisfy the condition $\pm p \in \mathscr{K}_t$ and then treat the enlarged set as if it were the set of generators for $\mathfrak{B}(t)$. Whether this is a useful insight or not depends on the concrete possibilities of constructing vectors $p$ with the property $\pm p \in \mathscr{K}_t$ which are not already elements of $\mathfrak{B}(t)$. What is known in this respect is very little; nevertheless it seems worthwhile to review carefully the material existing up to now.
The first examples of non-trivial elements $p$ which are contained in $\mathscr{K}_t$ together with their negatives are among what we will call "second order elements" and discuss in detail in Section 3. The name stems from the fact that the necessary conditions which can be expressed in terms of these elements are commonly called "second order". We will present in Section 3 a general definition of the "order" of an element of $\mathscr{K}_t$ and give a complete description of the set of all second order elements. Special emphasis is put on those $p$ which appear together with $-p$ in this set and which therefore must be orthogonal to the adjoint state variable along an optimal solution. It has long been known that for a system of the form (1.4) the "mixed" Lie brackets
$$p_{\nu,\mu} := [g_\nu, g_\mu] \qquad (1.5)$$
enjoy this property along a singular arc. But the background of this was not recognized until recently, when Vârsan [4] announced the following result: local controllability along a reference solution of the system (1.4) can be inferred from the following two conditions:
(i) the reference control assumes values in the interior of $U$ for all $t$;
(ii) the controllable subspace $\mathfrak{B}(t)$ and the elements
$$\Gamma^{\gamma}(p_{\nu,\mu}), \qquad \nu, \mu = 1, \ldots, m, \ \gamma = 0, 1, \ldots$$
together generate the whole state space.
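As a toy illustration of this criterion (the example system is ours, not the paper's: the nonholonomic integrator, with drift zero, reference solution $x \equiv 0$, $u \equiv 0$, and $0$ assumed to lie in $\operatorname{int} U$ so that (i) holds), the linearization spans only two directions of the state space, while the single mixed bracket $p_{1,2} = [g_1, g_2]$ supplies the missing one:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

# control vector fields of x' = u1*g1 + u2*g2 (no drift); illustrative choice
g1 = sp.Matrix([1, 0, -x2])
g2 = sp.Matrix([0, 1,  x1])

def lie_bracket(a, b):
    # [a, b] = (Db) a - (Da) b; this sign convention may differ from the paper's
    return b.jacobian(x) * a - a.jacobian(x) * b

p12 = lie_bracket(g1, g2)                            # the "mixed" bracket (1.5)
span = sp.Matrix.hstack(g1, g2, p12).subs({x1: 0, x2: 0, x3: 0})
print(p12.T, span.rank())                            # rank 3: the whole state space
```

Here $\mathfrak{B}(t)$ is spanned by $g_1, g_2$ alone (with zero drift the iterated brackets vanish), so condition (ii) is met only after the mixed bracket is adjoined.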
As we will see in Section 3, both the second order equality type conditions and Vârsan's result are true since the hypothesis (i) implies $\pm p_{\nu,\mu} \in \mathscr{K}_t$ for any system of the form (1.4). It is also possible, under suitable extra hypotheses, to add further second order elements to the $p_{\nu,\mu}$ in such a way that one arrives at a similar type of controllability criterion. However, all second order elements $p$ which are such that $\pm p \in \mathscr{K}_t$ reduce to zero if the dimension $m$ of the control variable is equal to one (note that $p_{\nu,\nu} = 0$, in view of (1.5)). The absence of those elements can be understood from the nature of the corresponding necessary conditions: they can be compared to the standard second order tests in calculus. Basically, these tests are inequalities (semi-definiteness of a quadratic form), which eventually may lead to equality type statements, namely, if the form fails to be definite. All these statements, however, are trivial if there is not more than one variable.
It should be pointed out that Vârsan's result reflects a typical non-linear system property: there exists a kind of "crosswise" interaction between the components of $u$ which is exercised through the state vector (note that $p_{\nu,\mu} = 0$ if the $g_\nu$ do not depend upon $x$) and which cannot be recognized by means of linearization, since for a linear system the action of $u = (u^1, \ldots, u^m)$ is just the superposition of the actions of the components $u^\nu$. In precise mathematical terms this interaction is expressed by the fact that one can simply adjoin the $p_{\nu,\mu}$ to the generators of $\mathfrak{B}(t)$ without destroying the controllability properties of this space.
Next, we wish to say a few words about possible extensions of $\mathfrak{B}(t)$ in the case of a scalar control variable $u$. It is clear from what was said above that one has to search for possible candidates among higher-than-second-order elements, but it is presently not obvious how this search can be carried out in a systematic way. What one expects to find is some kind of hierarchy among the subspaces of $\mathscr{K}_t$ which corresponds to the hierarchy among higher order tests in optimization. Of course the controllable subspace $\mathfrak{B}(t)$ of the linearized system equation should be the member of lowest rank.

The first attempt to put this idea into a more concrete form was undertaken by H. Hermes and completed by H. Sussmann [3]. It led to a controllability criterion for a system of the form
$$\dot x = f_0(x) + u g(x), \qquad u \text{ scalar}, \qquad (1.6)$$
with a stationary point $(u_0, x_0)$ playing the role of the reference solution. The crucial condition which enters this criterion concerns the Lie brackets associated with system (1.6) and evaluated at $(u, x) = (u_0, x_0)$.
To be more specific, it is assumed that all brackets which involve the quantity $g$ an even number of times can be expressed as linear combinations of Lie brackets which are of odd order with respect to $g$. This even-odd relationship resembles the one which is well known from elementary calculus: if all derivatives up to order $2k$ vanish at an extremal point of a function, then the $(2k+1)$th derivative must also vanish there. Now, vanishing of some derivative at an extremal point is an equality-type necessary condition. In optimal control theory conditions of this kind appear in the form of an orthogonality relation $y(t)^T p = 0$. As we have seen before, they arise from elements $p$ of the state space which satisfy the condition $\pm p \in \mathscr{K}_t$. We wish therefore to pose the following question, which is a natural modification of the Hermes conjecture: assume that the above stated condition holds for all Lie brackets which are of order at most $2k$ with respect to $g$, $k$ being a fixed positive integer. Is it then true that the linear space spanned by all Lie brackets which are of order at most $2k+1$ with respect to $g$ belongs to $\mathscr{K}_t$?
In this generality the question probably cannot be answered along the lines of existing methods; in particular, it is unlikely that Sussmann's proof of the original conjecture could be carried over. Note that it is required to establish the existence of specific elements in $\mathscr{K}_t$, regardless of whether we have local controllability or not. It seems, however, conceivable that special cases can be treated, e.g. with methods taken from [1], and that one would then be able to examine from case to case how much of the assumptions underlying the Hermes-Sussmann result is actually required. From the viewpoint of applications one would anyhow welcome results which are more restricted in their scope in return for more flexibility with respect to the hypotheses. Some steps in this direction have been undertaken and will be discussed in the lecture. In particular, it seems very likely, though not all details have been cleared, that for systems of the form (1.6) one can extend the space $\mathfrak{B}(t)$ by adjoining third order elements (i.e. vectors which can be written as third order polynomials in the components of $g$, $g_x$, $g_{xx}$, etc.) under the assumption that the Lie bracket
$$[g, [g, f_0]] \qquad (1.7)$$
evaluated at the reference trajectory $x = x(t)$ is contained in $\mathfrak{B}(t)$ for $t \in [t_0, \bar t]$. The reference solution need not be stationary; however, $u(\cdot)$ has to assume values in the interior of the control set $U$. To compare this result with the Hermes conjecture, one has to take into account that
in the case of a stationary reference solution the space $\mathfrak{B}(t)$ is independent of $t$ and coincides with the linear span of those Lie brackets which are of first order with respect to $g$. Hence it is required, in the case of a stationary solution, that (1.7) be a linear combination of first-order Lie brackets in order to ensure the existence of certain third-order elements in $\mathscr{K}_t$. The conclusion is certainly much weaker than what would follow from the Hermes conjecture (in the case $k = 1$). On the other hand, one is relieved from the necessity of checking all Lie brackets which are of order 2 with respect to $g$. In fact, there are some examples in the engineering literature (e.g. Lawden's spiral) where (1.7) is the only one among these brackets which is easy to compute.
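The following sketch (our own toy single-input system, not an example from the paper) shows how one would compute the bracket (1.7) symbolically and test whether it lies in $\mathfrak{B}(t)$ at a stationary reference point; for this particular system the test fails, so the criterion would not apply:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# illustrative single-input system x' = f0(x) + u g(x); not from the paper
f0 = sp.Matrix([x2**2, 0])
g  = sp.Matrix([0, 1])

def bracket(a, b):
    # [a, b] = (Db) a - (Da) b; the paper's sign convention may differ
    return b.jacobian(x) * a - a.jacobian(x) * b

ad_f0_g   = bracket(f0, g)                 # first order in g: [f0, g]
candidate = bracket(g, bracket(g, f0))     # the bracket (1.7): [g, [g, f0]]

# at the stationary point x = 0, B(t) is spanned by g, [f0, g], ...
B_span = sp.Matrix.hstack(g, ad_f0_g).subs({x1: 0, x2: 0})
v      = candidate.subs({x1: 0, x2: 0})
in_B   = sp.Matrix.hstack(B_span, v).rank() == B_span.rank()
print(candidate.T, v.T, in_B)              # here (1.7) equals (2, 0)^T and lies outside B
```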
The results which have been outlined so far (one more will be added in Section 3) can all be proved by a combination of methods which could be summarized as the "analytic" approach to control theory. A considerable portion of it has been developed in [1] and used there to establish higher order necessary conditions for singular arcs. The starting point is the notion of control variations. These are parameter-dependent local modifications of the control function and the trajectory around a given reference solution. Later, in order to handle formal problems, one finds it convenient not to relate all results to the reference solution but to work directly with the right-hand side of the system equation. The analytic approach leads thereby straight into an ad-hoc algebraic theory of non-linear systems, which appears at first glance to be a rather natural generalization of linear system theory. The connection with the differential geometric approach is less obvious; the comparison of these two basic methods in control theory will play a major role in the lecture. At present it is safe to say that the analytic techniques seem to be rather efficient if one wants to refine existing results and, in particular, get rid of restrictive assumptions concerning the system equation or the reference solution. Furthermore, they seem to be well suited for a better exploitation of the specific nature of a given problem. This can also be an advantage if one has to compute from the right-hand side of equation (1.1) those quantities which one has to know in order to apply the general results. An illustrative example is the "economic" version of the generalized Clebsch-Legendre condition which was given in [1] (Theorem 20.2).

The following two sections constitute a short account of the essential definitions and facts on which the analytic approach to non-linear systems theory is based. Except for occasional remarks we will not enter into a discussion of the proofs. All details, as far as they cannot be found in the existing literature, will be given in a dissertation which is presently being prepared at the Department of Mathematics in Würzburg.
2. The local cone of attainability
This section is devoted to a closer study of the sets $\mathscr{K}_t$. We consider a reference solution $u(\cdot), x(\cdot)$ which is defined on some interval $[t_0, \bar t]$ and which will be kept fixed throughout this and the next section. For simplicity we will assume that $u(\cdot)$ is of class $C^\infty$ on some interval $[\bar t - \varepsilon, \bar t]$, in particular, that every derivative of $u(\cdot)$ has a left-hand limit at $t = \bar t$. Our first aim is to define $\mathscr{K}_{\bar t}$.
To this purpose we have to introduce some notation. In the sequel we will use the symbol $\lambda$ in order to denote a positive parameter. Whenever $\lambda$ appears in a formula involving $O$-terms we wish to formulate an asymptotic relation which has to hold for $\lambda \to 0$, uniformly with respect to all remaining variables occurring in the formula.

The definition of $\mathscr{K}_{\bar t}$ will be based on a modification of what was called in [1] a "control variation concentrated at $t = \bar t$". We consider families of control functions $u(t; \tau, \lambda)$ which depend on two real parameters $\tau, \lambda$ and which are defined for $t \in \mathbb{R}$, $0 < \lambda < \lambda_0$, $\bar t - \varepsilon_0 \leq \tau \leq \bar t$. The following conditions should hold ($\lambda_0$, $\varepsilon_0$, $K$ are positive constants).
(i) $u(t; \tau, \lambda)$ is bounded for all $t, \tau, \lambda$.
(ii) $u(\cdot; \tau, \lambda)$ is, for fixed $\tau, \lambda$, an admissible control function.
(iii) $u(t; \tau, \lambda)$ coincides with the reference control whenever $t, \tau$ satisfy the relation $t \leq \tau - K\lambda$.
(iv) Let $x(t; \tau, \lambda, x_0)$ be the solution of the initial value problem
$$\dot x = f(t; x, u(t; \tau, \lambda)), \qquad x(t_0) = x_0; \qquad (2.1)$$
then $x(t; t, \lambda, x_0)$ can be extended to a $C^\infty$-function of $t, \lambda, x_0$ on a full neighborhood of the set $\{(t, \lambda, x_0)\colon t \in [\bar t - \varepsilon_0, \bar t],\ \lambda = 0,\ x_0 = x(t_0)\}$.
Note that (i) and (iii) imply by standard arguments that the solution of the initial value problem (2.1) exists on the interval $[t_0, \tau]$ whenever $\lambda$ and $\|x_0 - x(t_0)\|$ are sufficiently small. Furthermore, this solution approaches the solution of the initial value problem
$$\dot x = f(t; x, u(t)), \qquad x(t_0) = x_0 \qquad (2.2)$$
as $\lambda \to 0$. So $x(t; t, \lambda, x_0)$ is well defined on a set of the form
$$\{(t, \lambda, x_0)\colon t \in [\bar t - \varepsilon_0, \bar t],\ 0 < \lambda < \lambda_0',\ \|x_0 - x(t_0)\| < \varepsilon_0\}. \qquad (2.3)$$
It is then required (cf. (iv)) that this function can be extended to a $C^\infty$-function of $t, \lambda, x_0$ defined on a full neighborhood (in the $(t, \lambda, x_0)$-space) of the set (2.3). For simplicity we will denote the extended function also by $x(t; t, \lambda, x_0)$.
If we take $x_0 = x(t_0)$ then the solution of the initial value problem (2.2) is just the reference trajectory $x(t)$. In other words: we have $x(t; \tau, 0, x(t_0)) = x(t)$ identically in $t, \tau$ and therefore also
$$x(t; t, 0, x(t_0)) = x(t)$$
for all $t \in [\bar t - \varepsilon_0, \bar t]$. Now, according to our hypothesis, the function $x(t; t, \lambda, x(t_0))$ admits a Taylor expansion with respect to $\lambda$. Hence we have a relation of the form
$$x(t; t, \lambda, x(t_0)) = x(t) + \lambda^{s} p(t) + O(\lambda^{s+1}), \qquad (2.4)$$
where $s$ is a positive integer and $p(t)$ is infinitely differentiable on $[\bar t - \varepsilon_0, \bar t]$. Furthermore, $p(\cdot)$ does not vanish for all $t$. In this manner one can associate with any family of control functions an $n$-dimensional vector function $p(t)$ which is of class $C^\infty$ and is not identically zero on $[\bar t - \varepsilon_0, \bar t]$. These $p(\cdot)$ play the decisive role in the definition of $\mathscr{K}_{\bar t}$, which will now be given.
DEFINITION 2.1. $\mathscr{K}_{\bar t}$ is the collection of all $n$-dimensional vectors $p$ having the following property: there exists a family of control functions satisfying conditions (i)-(iv) and such that $p$ equals the terminal value $p(\bar t)$ of the first nonvanishing coefficient in the Taylor expansion (2.4).
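To make the definition concrete, consider a small numerical sketch (our own toy example, not from the paper): for the scalar system $\dot x = x + u$ with reference solution $u \equiv 0$, $x \equiv 0$ on $[0, 1]$, a needle variation equal to 1 on $(\bar t - \lambda, \bar t]$ yields $x(\bar t; \bar t, \lambda, 0) = \lambda p(\bar t) + O(\lambda^2)$ with $p(\bar t) = 1$, i.e., exactly the difference $f(\bar t; x(\bar t), 1) - f(\bar t; x(\bar t), u(\bar t))$ mentioned after (2.5) below.

```python
import numpy as np
from scipy.integrate import solve_ivp

t0, tbar = 0.0, 1.0

def u_var(t, lam):            # needle variation concentrated at t = tbar
    return 1.0 if t > tbar - lam else 0.0   # reference control is u(t) = 0

def rhs(t, x, lam):           # toy system x' = x + u (illustrative only)
    return x + u_var(t, lam)

for lam in (0.1, 0.01, 0.001):
    sol = solve_ivp(rhs, (t0, tbar), [0.0], args=(lam,),
                    max_step=lam / 10, rtol=1e-10, atol=1e-12)
    print(lam, sol.y[0, -1] / lam)   # ratio tends to p(tbar) = 1 as lam -> 0
```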
One can see immediately that the inclusion
$$\mathscr{K}_{\bar t} \subseteq \Pi_{\bar t} \qquad (2.5)$$
holds, where $\Pi_{\bar t}$ is the set associated with the reference solution (and the time instant $t = \bar t$) according to Definition 9.2 in [1]. We will not go into the discussion whether $\mathscr{K}_{\bar t}$ may be a proper subset of $\Pi_{\bar t}$ or not. In fact, most of the elements of $\Pi_{\bar t}$ which have been constructed in [1] actually belong to $\mathscr{K}_{\bar t}$. That this statement holds true for
$$f(\bar t; x(\bar t), u) - f(\bar t; x(\bar t), u(\bar t)), \qquad u \in U,$$
can be seen by an argument of the same type as that used in [1] in order to verify (9.8). Further and more interesting examples will be discussed in Section 3. Though the definition of $\Pi_{\bar t}$ appears to be simpler than that of $\mathscr{K}_{\bar t}$, one will probably, in a concrete situation, find it equally difficult to verify the conditions of either of them. On the other hand, working
with $\mathscr{K}_{\bar t}$ is more convenient because of certain a priori statements which can be made and which are independent both of the specific nature of the system equation (1.1) and of the choice of the reference solution. The most important ones are summarized in the following theorem.

THEOREM 2.1. (i) $\mathscr{K}_{\bar t}$ is a convex cone. (ii) Let $\pm p \in \mathscr{K}_{\bar t}$ and let $p(\cdot)$ be a $C^\infty$-function which is associated with a family of control functions according to (2.4) and for which $p = p(\bar t)$. Then we have
$$\pm\{-\dot p(\bar t) + f_x(\bar t; x(\bar t), u(\bar t))\, p(\bar t)\} \in \mathscr{K}_{\bar t}. \qquad (2.6)$$
One can observe that the second assertion of the theorem can be stated in the following way: the maximal linear subspace of $\mathscr{K}_{\bar t}$ is invariant with respect to the mapping
$$p \mapsto -\dot p + f_x(\bar t; x(\bar t), u(\bar t))\, p. \qquad (2.7)$$
If the system equation is linear and given by (1.2), then the mapping assumes the form
$$p \mapsto -\dot p + A(t)p.$$
This is nothing else than the operation which can be applied in order to generate the controllable subspace out of the columns of the matrix $B(t)$. If $p(t)$ is of the form $p(t, x(t))$, where $p$ is a sufficiently smooth function of $t, x$, then
$$\dot p = \partial p/\partial t + p_x f$$
and (2.7) can be written in the form
$$p \mapsto -\partial p/\partial t - [f, p],$$
where the expressions appearing in this formula have to be evaluated at $x = x(\bar t)$, $t = \bar t$. Hence the mapping (2.7) is in fact nothing else than the application of the operator $\Gamma$, as introduced in [1].
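A minimal symbolic sketch of this operator (our own illustrative choice of $f$ and $p$, with $p$ given as a function of $(t, x)$; the bracket sign convention used below may differ from that of [1] by a sign):

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
x = sp.Matrix([x1, x2])

# illustrative data (not from the paper): f(t; x) = f(t; x, u(t)) along the reference
f = sp.Matrix([x2, sp.sin(t) - x1])
p = sp.Matrix([0, 1])                      # e.g. p = g for a control-affine system

def Gamma(p):
    # Gamma(p) = -dp/dt - [f, p], with [f, p] = (Dp) f - (Df) p
    lie = p.jacobian(x) * f - f.jacobian(x) * p
    return -sp.diff(p, t) - lie

# iterates Gamma^mu(p); evaluated along (t, x(t)) their span sits inside B(t)
iterates = [p]
for _ in range(2):
    iterates.append(sp.simplify(Gamma(iterates[-1])))
print(sp.Matrix.hstack(*iterates))
```

In the linear case $f = A(t)x + B(t)u(t)$ with $p$ independent of $x$, the same formula reduces to $p \mapsto -\dot p + A(t)p$, as noted above.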
We conclude this section by stating the two fundamental theorems about $\mathscr{K}_{\bar t}$ which were announced in the introduction. The proof of the second one follows immediately, in view of (2.5), from Theorem 9.1 in [1].

THEOREM 2.2. If $\mathscr{K}_{\bar t} = \mathbb{R}^n$, then we have local controllability along the reference solution and for $t = \bar t$.

THEOREM 2.3. Let the reference solution be optimal. Then there exists an adjoint state vector $y(\cdot)$ which satisfies the transversality conditions at the endpoints and the inequalities $y(t)^T p \leq 0$ for all $p \in \mathscr{K}_t$ and all $t \in [t_0, \bar t]$.
3. Second order elements
Special attention is deserved by those elements in $\mathscr{K}_t$ which lead to second order necessary conditions. This notion was explained in [1] (Section 1, p. 5); the definition can easily be modified so as to make sense if one works with $\mathscr{K}_t$ instead of $\Pi_t$. We consider a family of control functions as specified in Section 2 and we assume in addition that $u(t; \tau, \lambda)$ can be written as
$$u(t) + \lambda^{r} v(t; \tau, \lambda), \qquad (3.1)$$
where $r$ is some positive integer; $v(t; \tau, \lambda)$ is supposed to vanish for $t \leq \tau - K\lambda$ in order to meet the requirement (iii). It follows then from property (iv) by standard arguments that one can expand $x(t; t, \lambda, x(t_0))$ into an asymptotic series of the form
$$x(t; t, \lambda, x(t_0)) = x(t) + \sum_{\nu=1}^{\infty} \lambda^{\nu} p_{\nu}(t),$$
where the coefficients $p_\nu(\cdot)$ are functions of $t$ infinitely differentiable on some interval $[\bar t - \varepsilon, \bar t]$. We pick the first $p_\nu(\cdot)$ which does not vanish identically and call it $p(\cdot)$. Clearly, $p(t) \in \mathscr{K}_t$ for all $t \in (\bar t - \varepsilon, \bar t]$. If $p(\cdot) = p_\nu(\cdot)$ and $r < \nu \leq 2r$, then $p(\cdot)$ is called a second order element. Accordingly, one would call $p(\cdot)$ a $k$-th order element if $(k-1)r < \nu \leq kr$.
A survey of second order necessary conditions can be found in [1], Sections 20, 21. Each condition is stated there as a multiplier rule $y(t)^T p(t) \leq 0$ (or $y(t)^T p(t) = 0$) and holds under the hypothesis that a relation of the form
$$q(t) \in \mathfrak{B}(t) \qquad (3.2)$$
is satisfied for all $t$ in some interval $[\bar t - \varepsilon, \bar t]$. Here $q(\cdot)$ is of course a vector which does not trivially belong to the controllable subspace. A natural question arises whether the element which appears in the rule as a factor along with $y$ is actually second order (in the sense explained above) and whether this statement holds true irrespective of the optimal character of the reference solution. As can be seen by a thorough analysis of the proofs given in [1], the answer will always be affirmative if, at least, (3.2) is replaced by a slightly stronger hypothesis. This modification, however, is unnecessary if the space $\mathfrak{B}(t)$ has constant dimension. For simplicity we will therefore work, for the remaining portion of this section, with the following additional hypothesis:

The dimension of the space $\mathfrak{B}(t)$ is independent of $t$, for all $t \in [\bar t - \varepsilon, \bar t]$. (3.3)
We then restate here two of the second order conditions which have been proved in [1] (Theorems 20.2 and 21.2) for a general system of the form (1.1). The first is commonly known as the generalized Clebsch-Legendre condition. It is a non-trivial result regardless of whether $u$ is scalar or not, and we will restrict ourselves to the case of a scalar control. The second one is the prototype of an equality type necessary condition, and hence is of interest only if $u$ is not scalar, as we have remarked in the introduction. Therefore we will here assume that $u = (u^1, u^2)^T$ is 2-dimensional.

As before we denote by $(u(\cdot), x(\cdot))$ the given reference solution and assume that $u(\cdot)$ satisfies the condition $u(t) \in \operatorname{int} U$ for all $t$. We associate with this solution a sequence of vectors $B^i_\nu$, $\nu = 0, 1, \ldots$, $i = 1, \ldots, m$ ( = dimension of $u$), which are recursively defined as follows:
$$f(t; x) := f(t; x, u(t)), \qquad B^i_0(t, x) := (\partial f/\partial u^i)(t; x, u(t)),$$
$$B^i_{\nu+1}(t, x) := (\partial B^i_\nu/\partial t)(t, x) + [f, B^i_\nu](t, x), \qquad \nu = 0, 1, \ldots$$
In the case $m = 1$ we write $B_\nu$ instead of $B^i_\nu$.
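For a scalar-input example the recursion can be carried out symbolically as follows (the system and reference control are our own illustrative choices, not data from the paper; the bracket sign convention may differ from [1] by a sign):

```python
import sympy as sp

t, x1, x2, u = sp.symbols('t x1 x2 u')
x = sp.Matrix([x1, x2])

# illustrative scalar-input system, reference control u(t) = 0
f_full = sp.Matrix([x2, u - sp.sin(x1)])     # f(t; x, u)
u_ref  = sp.Integer(0)
f      = f_full.subs(u, u_ref)               # f(t; x) := f(t; x, u(t))

def bracket(a, b):
    return b.jacobian(x) * a - a.jacobian(x) * b

B = [sp.diff(f_full, u).subs(u, u_ref)]      # B_0 = (df/du)(t; x, u(t))
for nu in range(2):
    B.append(sp.simplify(sp.diff(B[-1], t) + bracket(f, B[-1])))  # B_{nu+1}

print(sp.Matrix.hstack(*B))                  # columns B_0, B_1, B_2
```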
THEOREM 3.1. Hypotheses: (i) $u$ is scalar and (3.3) holds true. (ii) We have
$$(\partial^2 f/\partial u^2)(t; x(t), u(t)) \in \mathfrak{B}(t), \qquad [B_{\nu-1}, B_\nu](t, x(t)) \in \mathfrak{B}(t)$$
for all $t \in [\bar t - \varepsilon, \bar t]$ and $\nu = 1, \ldots, \varrho$ ($\varrho \geq 0$).
Conclusion: $[B_\varrho, B_{\varrho+1}](\bar t, x(\bar t)) \in \mathscr{K}_{\bar t}$.
THEOREM 3.2. Hypotheses: (i) $u = (u^1, u^2)^T$ is 2-dimensional and (3.3) holds. (ii) We have
$$(\partial^2 f/\partial (u^i)^2)(t; x(t), u(t)) \in \mathfrak{B}(t), \qquad [B^i_{\nu-1}, B^i_\nu](t, x(t)) \in \mathfrak{B}(t)$$
for $t \in [\bar t - \varepsilon, \bar t]$, $\nu = 1, \ldots, \varrho_i$ ($\varrho_i \geq 0$), $i = 1, 2$.
Conclusion: $[B^1_\mu, B^2_\nu](\bar t, x(\bar t)) \in \mathscr{K}_{\bar t}$ if $\mu + \nu \leq \varrho_1 + \varrho_2$.
References
[1] Knobloch H. W., Higher Order Necessary Conditions in Optimal Control Theory, Lecture Notes in Control and Information Sciences, Vol. 34, Springer-Verlag, Berlin, Heidelberg, New York, 1981.
[2] Sussmann H. J., Lie Brackets, Real Analyticity and Geometric Control, in: Differential Geometric Control Theory, R. W. Brockett, R. S. Millman, H. J. Sussmann, Eds., Progress in Mathematics, Vol. 27, Birkhäuser, Boston, Basel, Stuttgart, 1983.
[3] Sussmann H. J., Lie Brackets and Local Controllability: A Sufficient Condition for Scalar-Input Systems, to appear in SIAM J. on Control and Optimization.
[4] Vârsan C., On Local Controllability for Non-linear Control Systems, Preprint Series in Mathematics, No. 115/1981, Bucureşti, 1981.
